Module 01

Becoming AI-Native

What agents and MCP actually are — and why every PM needs to understand them now

> Overview

This is the foundation module. It answers the three questions every PM is quietly Googling: What exactly is an AI agent? What is MCP and why does everyone keep talking about it? And what does it mean to become AI-native as a product organization? The module establishes the vocabulary, mental models, and strategic lens that every subsequent module builds on.

> Why This Matters for Your Product

If you cannot explain what an agent is in a stakeholder meeting, you lose credibility the moment your engineering team starts building one. If you do not understand MCP, you will miss the biggest infrastructure shift since REST APIs. This module gives PMs the fluency to lead conversations, not just follow them.

> What Is an AI Agent?

An AI agent is software that uses a large language model as its reasoning engine to autonomously pursue goals. Unlike a chatbot that responds to one message at a time, an agent can plan multi-step tasks, use external tools, make decisions, and iterate on its own work. The core loop: perceive, reason, act, observe, repeat. Autonomy exists on a spectrum from suggest-and-wait to execute-independently. Real examples: coding agents that write and test code, research agents that compile reports, support agents that resolve tickets end-to-end.
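The perceive–reason–act–observe loop described above can be sketched in a few lines. Everything here (`call_llm`, the `TOOLS` registry) is an illustrative stand-in, not a real SDK; a production agent would call a model API and real tools.

```python
# Minimal sketch of the agent core loop: perceive, reason, act, observe, repeat.
# call_llm and TOOLS are hypothetical stand-ins for a model API and real tools.

def call_llm(goal, history):
    """Stand-in for an LLM call that returns the next action or a final answer."""
    if history:  # after one observation, pretend the model is satisfied
        return {"type": "finish", "answer": f"done: {goal}"}
    return {"type": "tool", "name": "search", "args": {"query": goal}}

TOOLS = {
    "search": lambda query: f"results for {query!r}",  # toy tool
}

def run_agent(goal, max_steps=5):
    history = []
    for _ in range(max_steps):
        action = call_llm(goal, history)                       # reason
        if action["type"] == "finish":
            return action["answer"]
        observation = TOOLS[action["name"]](**action["args"])  # act
        history.append((action, observation))                  # observe, repeat
    return "gave up after max_steps"

print(run_agent("summarize Q3 churn"))
```

The `max_steps` cap matters in practice: agents iterate, so an unbounded loop is an unbounded bill.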

> The Five Levels of Agent Autonomy

L0 is a basic chatbot with no tools or memory. L1 is a tool-calling assistant that follows fixed workflows. L2 is a reasoning agent that plans its own steps and chooses tools dynamically. L3 is a persistent agent with cross-session memory and learned preferences. L4 is a multi-agent system where specialized agents collaborate. Most production products today are L1–L2. The PM's job is to decide which level each feature needs and when to graduate upward.
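One way to make the spectrum concrete in a product spec is a simple ordered enum. This encoding and the `needs_graduation` helper are hypothetical, just a sketch of the L0–L4 ladder above.

```python
from enum import IntEnum

# Hypothetical encoding of the L0-L4 autonomy spectrum described above.
class AutonomyLevel(IntEnum):
    L0_CHATBOT = 0           # no tools, no memory
    L1_TOOL_ASSISTANT = 1    # fixed tool-calling workflows
    L2_REASONING_AGENT = 2   # plans its own steps, chooses tools dynamically
    L3_PERSISTENT_AGENT = 3  # cross-session memory, learned preferences
    L4_MULTI_AGENT = 4       # specialized agents collaborating

def needs_graduation(current: AutonomyLevel, required: AutonomyLevel) -> bool:
    """PM check: does this feature need a higher level than we ship today?"""
    return required > current

print(needs_graduation(AutonomyLevel.L1_TOOL_ASSISTANT,
                       AutonomyLevel.L2_REASONING_AGENT))
```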

> What Is MCP (Model Context Protocol)?

MCP is an open standard created by Anthropic that defines how agents connect to external tools and data. Before MCP, every agent-to-tool connection was custom-built. MCP is the universal plug. It standardizes how agents discover tools, call them, and handle responses. An MCP server exposes tools (functions) and resources (data). An MCP client (the agent) discovers and calls them. This makes integrations portable across LLM providers.
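Under the hood, MCP messages are JSON-RPC 2.0. A sketch of the two core exchanges, `tools/list` (discovery) and `tools/call` (invocation), looks like this; the tool name and arguments shown are hypothetical, not from any real server.

```python
import json

# Sketch of the JSON-RPC 2.0 messages MCP defines for tool discovery and
# invocation. The "get_ticket" tool and its arguments are made up for
# illustration.

list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",   # client asks: what tools do you expose?
}

call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",   # client invokes one of the advertised tools
    "params": {
        "name": "get_ticket",                  # hypothetical tool name
        "arguments": {"ticket_id": "T-1234"},  # hypothetical arguments
    },
}

print(json.dumps(call_request, indent=2))
```

Because every server speaks this same shape, the agent-side client code never changes per integration; that is what makes tool connections portable across LLM providers.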

> Why MCP Changes Everything for Product Teams

Integration speed drops from weeks to hours. Tool connections become composable and portable. The security boundary is clearly defined. Product teams can promise real integrations, not vaporware. When you swap LLM providers, your tool layer stays intact.

> The Three Waves: LLMs to Agents to MCP

Wave 1 (2022–2023): LLMs showed AI could generate human-quality text and code. Products added summarization, chatbots, smart search. Wave 2 (2024–2025): wrapping LLMs in agentic loops created multi-step task completion. Wave 3 (2025–2026): MCP gave agents a universal standard for connecting to real systems. LLMs gave agents a brain, tool use gave them hands, MCP gave them a universal way to plug into the world.

> What AI-Native Actually Means

AI-native does not mean adding a chatbot to your app. It means the product is designed from the ground up around AI capabilities. AI-as-feature: bolt-on chatbot, AI summaries, smart search. AI-native: the entire UX is intent-driven, not menu-driven. Example: AI-as-feature says the AI suggests email replies. AI-native says the AI reads, triages, drafts, and sends emails on your behalf with approval gates. The shift requires rethinking UX from buttons and menus to goals and approvals.
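The approval-gate pattern in the email example can be sketched as below. `draft_reply` and `send_email` are hypothetical stand-ins; the point is the shape: the agent works autonomously up to the side-effecting step, where a human gate decides.

```python
# Sketch of an AI-native approval gate: the agent drafts autonomously, but a
# human must approve before the side-effecting step (sending) runs.
# draft_reply and send_email are hypothetical stand-ins.

def draft_reply(email):
    return f"Thanks for your note about {email['subject']} - on it."

def send_email(to, body):
    return f"sent to {to}"

def handle_email(email, approve):
    draft = draft_reply(email)   # agent acts autonomously up to here
    if not approve(draft):       # human-in-the-loop gate
        return "draft discarded"
    return send_email(email["from"], draft)

result = handle_email(
    {"from": "a@example.com", "subject": "renewal"},
    approve=lambda draft: True,  # in production: a real approval UI, not a lambda
)
print(result)
```

Notice that `approve` is a parameter, not a hard-coded prompt: as trust grows, the same code can graduate from always-ask to ask-only-for-risky-drafts without restructuring the flow.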

> The AI-Native PM Skill Set

Understand the cost model (every LLM call costs money, think in tokens). Think in loops, not requests (agents iterate and self-correct). Design for uncertainty (AI output is probabilistic). Know the safety stack (guardrails, human-in-the-loop, content filtering). Speak the vocabulary (agent, tool use, RAG, MCP, prompt chaining, reflection). Evaluate trade-offs (speed vs. quality, autonomy vs. control, cost vs. capability).
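"Think in tokens" and "think in loops" combine into simple back-of-envelope math. The per-token prices below are placeholders, not real provider rates; substitute your provider's current pricing.

```python
# Back-of-envelope LLM cost model. Prices are placeholders, not real rates.
PRICE_PER_1K_INPUT = 0.003   # USD per 1K input tokens (hypothetical)
PRICE_PER_1K_OUTPUT = 0.015  # USD per 1K output tokens (hypothetical)

def call_cost(input_tokens, output_tokens):
    """Cost of a single LLM call at the placeholder rates above."""
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT \
         + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT

# Agents iterate, so a five-step loop multiplies the per-call cost:
per_call = call_cost(2000, 500)
print(f"one call: ${per_call:.4f}, five-step loop: ${per_call * 5:.4f}")
```

This is why the cost model is the first skill on the list: a feature that is cheap as a single request can be expensive as an agent loop.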


> Bookmark This Section

These six skills are referenced throughout the entire curriculum: cost model, loops not requests, design for uncertainty, safety stack, vocabulary, and trade-off evaluation.

> Related Engineering Patterns

These are the technical patterns your engineering team will implement. Understanding them helps you have better conversations.

Tool Use, State Management (MCP), Prompt Chaining, Planning, and Human-in-the-Loop.

> Want the Full PM Curriculum?

This module is free. Sign up to unlock all 15 modules, interactive decision games, and the Developer track with 21 engineering patterns.