
O3: MCP & Tools

An LLM without tools is a brain in a jar — it can reason but can't act. Tools give AI models the ability to read databases, call APIs, execute code, and interact with the real world. This module covers the evolution from function calling to the Model Context Protocol (MCP) and the Agent-to-Agent (A2A) protocol. For how agents orchestrate tools, see O2: AI Agents Deep Dive. For the orchestration layer managing tool calls, see O1: Semantic Kernel.

Why Tools Matter

| Without Tools | With Tools |
| --- | --- |
| "The weather in Paris is usually mild" (hallucinated guess) | `get_weather("Paris")` → "Paris is 18°C and sunny right now" (real data) |
| "I think the stock price is around $150" | `get_stock("MSFT")` → "$421.53 as of market close" |
| "Here's how to send an email…" (instructions only) | `send_email(to, subject, body)` → "Email sent ✅" |

Tools transform LLMs from know-it-alls into do-it-alls.

Function Calling: The Foundation

Function calling is the mechanism where the model generates structured JSON describing which tool to call and with what arguments. Your application then executes the function and returns results.

```python
import openai

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
                "units": {"type": "string", "enum": ["celsius", "fahrenheit"]}
            },
            "required": ["city"]
        }
    }
}]

response = openai.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Weather in Paris?"}],
    tools=tools
)

# Model returns tool_call JSON (NOT the result) — your app executes the function
# {"name": "get_weather", "arguments": {"city": "Paris", "units": "celsius"}}
```
ℹ️

Key insight

The model never executes the function — it only generates the JSON call. Your application is the executor. This is a critical security boundary.
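That execution boundary can be sketched as a simple dispatch loop on the application side. The `get_weather` stub and `TOOL_REGISTRY` names below are illustrative, not part of any SDK:

```python
import json

def get_weather(city: str, units: str = "celsius") -> dict:
    # Hypothetical stub -- a real implementation would call a weather API.
    return {"city": city, "temperature": 18, "units": units}

# Your app owns this mapping: only functions registered here can ever run.
TOOL_REGISTRY = {"get_weather": get_weather}

def execute_tool_call(name: str, arguments_json: str) -> str:
    """Run one tool call requested by the model; return a JSON string for the tool message."""
    if name not in TOOL_REGISTRY:
        raise ValueError(f"Model requested unknown tool: {name}")
    args = json.loads(arguments_json)  # model-generated input: parse, then validate
    result = TOOL_REGISTRY[name](**args)
    return json.dumps(result)          # sent back to the model as a role="tool" message

result = execute_tool_call("get_weather", '{"city": "Paris"}')
```

The registry lookup is what enforces the security boundary: the model can only name tools, never inject arbitrary code to run.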

Tool Definition Schema

Every tool needs three things:

```json
{
  "name": "search_knowledge_base",
  "description": "Search internal docs for relevant information. Use when the user asks about company policies or procedures.",
  "parameters": {
    "type": "object",
    "properties": {
      "query": { "type": "string", "description": "Search query" },
      "top_k": { "type": "integer", "description": "Number of results", "default": 5 }
    },
    "required": ["query"]
  }
}
```

The description is the most important field — it's what the model reads to decide when to use the tool. Write it like you're explaining to a new team member.

Tool Choice Control

| `tool_choice` | Behavior | Use Case |
| --- | --- | --- |
| `"auto"` | Model decides whether to call a tool | Default — let the model reason |
| `"none"` | Model cannot call any tools | Force text-only response |
| `"required"` | Model must call at least one tool | Ensure action is taken |
| `{"function": {"name": "X"}}` | Model must call specific function X | Force a particular tool |
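As a sketch of how these modes appear in a Chat Completions request body (the forced-function form includes a `"type": "function"` wrapper per OpenAI's API; the tool name here is illustrative):

```python
# The four tool_choice modes as they appear in a request body.
auto_choice = "auto"          # model decides (default when tools are present)
none_choice = "none"          # text-only response, tools ignored
required_choice = "required"  # model must emit at least one tool call
forced_choice = {             # model must call this specific function
    "type": "function",
    "function": {"name": "get_weather"},
}

request_body = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Weather in Paris?"}],
    "tools": [],  # your tool definitions go here
    "tool_choice": forced_choice,
}
```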

Parallel Tool Calling

Models can request multiple tool calls in a single turn when tasks are independent:

```
User:  "What's the weather in Paris and Tokyo?"
Model: [get_weather("Paris"), get_weather("Tokyo")]   ← Two parallel calls
```

Execute them concurrently and return both results. This reduces round trips and latency.
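One way to execute independent calls concurrently is `asyncio.gather`. The async `get_weather` stub below is hypothetical and stands in for a real API call:

```python
import asyncio

async def get_weather(city: str) -> str:
    # Hypothetical async tool -- stands in for a real network request.
    await asyncio.sleep(0.1)  # simulate network latency
    return f"{city}: 18°C"

async def run_parallel_calls(cities: list[str]) -> list[str]:
    # Both calls overlap in time instead of running back-to-back,
    # so total latency is roughly one call, not the sum of all calls.
    return await asyncio.gather(*(get_weather(c) for c in cities))

results = asyncio.run(run_parallel_calls(["Paris", "Tokyo"]))
```

Each result is then returned to the model as its own tool message, matched by `tool_call_id`.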

Model Context Protocol (MCP)

💡

The "USB for AI" Analogy

Before USB, every device needed a custom cable. MCP does for AI tools what USB did for peripherals — one standard protocol for connecting any tool to any AI application.

MCP vs Function Calling

| Dimension | Function Calling | MCP |
| --- | --- | --- |
| Discovery | Manual — hardcode tools per app | Automatic — client discovers tools from server |
| Scope | Per-application | Shared across applications |
| Updates | Redeploy app to add tools | Server adds tools, clients auto-discover |
| Ecosystem | Vendor-specific (OpenAI, Anthropic) | Open standard, any vendor |
| Data access | Tools only | Tools + Resources + Prompts |

MCP Architecture

```
┌──────────────┐             ┌──────────────┐
│   AI App     │             │  MCP Server  │
│  (Client)    │◄───────────►│              │
│              │  JSON-RPC   │  • Tools     │
│  Claude      │             │  • Resources │
│  Copilot     │             │  • Prompts   │
│  Custom App  │             │              │
└──────────────┘             └──────────────┘
```

MCP servers expose three capability types:

| Capability | What It Is | Example |
| --- | --- | --- |
| Tools | Functions the model can call | `search_docs`, `create_ticket`, `run_query` |
| Resources | Read-only data the model can access | File contents, database schemas, config values |
| Prompts | Reusable prompt templates | "Summarize this document", "Review this PR" |

MCP Transport

| Transport | How It Works | Best For |
| --- | --- | --- |
| stdio | Server runs as child process, communicates via stdin/stdout | Local tools (VS Code, CLI) |
| HTTP/SSE | Server runs remotely, uses HTTP + Server-Sent Events | Remote/shared servers, cloud deployment |
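Whichever transport is used, the messages themselves are JSON-RPC 2.0. A sketch of what a `tools/call` exchange looks like on the wire (method and field names follow the MCP spec; the weather payload is illustrative):

```python
import json

# What an MCP client sends, over stdio or HTTP, to invoke a tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Paris"}},
}

# An illustrative success response from the server.
response = {
    "jsonrpc": "2.0",
    "id": 1,  # matches the request id
    "result": {"content": [{"type": "text", "text": "Paris: 18°C"}]},
}

# In the stdio transport, each message is one line of stdin/stdout traffic.
wire_request = json.dumps(request)
```

Tool discovery works the same way: the client sends a `tools/list` request and the server replies with the schemas of every tool it exposes.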

MCP Server Example

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "weather-server", version: "1.0.0" });

// Register a tool (parameters are declared with a zod schema)
server.tool(
  "get_weather",
  { city: z.string() },
  async ({ city }) => {
    const response = await fetch(`https://api.weather.com/${city}`);
    const data = await response.json(); // fetch returns a Response, not the body
    return { content: [{ type: "text", text: JSON.stringify(data) }] };
  }
);

// Start the server over stdio
const transport = new StdioServerTransport();
await server.connect(transport);
```

A2A: Agent-to-Agent Protocol

While MCP connects models to tools, A2A connects agents to other agents:

| Protocol | Connects | Purpose |
| --- | --- | --- |
| MCP | Model ↔ Tool | Tool discovery and execution |
| A2A | Agent ↔ Agent | Task delegation between autonomous agents |

A2A enables an agent to discover other agents' capabilities, delegate subtasks, and receive results — without knowing the other agent's implementation.

Security Best Practices

⚠️

Tools are the most dangerous part of an AI system. An LLM with unrestricted tool access can delete databases, send emails, or exfiltrate data.

| Practice | Implementation |
| --- | --- |
| Least privilege | Each tool gets minimum required permissions |
| Sandboxing | Code execution tools run in containers |
| Rate limiting | Max N tool calls per minute per user |
| Audit logging | Log every tool call with user, args, result |
| Input validation | Validate all tool arguments before execution |
| Confirmation gates | Destructive actions require human approval |
| Timeouts | Kill tool calls exceeding 30 s |
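Several of these practices compose naturally as a wrapper around tool execution. A minimal sketch, assuming a hypothetical `DESTRUCTIVE_TOOLS` set and an in-memory rate limiter (production systems would use a real limiter and structured audit logs):

```python
import json
import time

DESTRUCTIVE_TOOLS = {"delete_record", "send_email"}  # require human approval
MAX_CALLS_PER_MINUTE = 10
_call_log: list[float] = []

def guarded_execute(name: str, arguments_json: str, registry: dict, approved: bool = False):
    """Apply rate limiting, validation, and confirmation gates before running a tool."""
    now = time.monotonic()
    recent = [t for t in _call_log if now - t < 60]
    if len(recent) >= MAX_CALLS_PER_MINUTE:          # rate limiting
        raise RuntimeError("Rate limit exceeded")
    if name not in registry:                         # input validation
        raise ValueError(f"Unknown tool: {name}")
    if name in DESTRUCTIVE_TOOLS and not approved:   # confirmation gate
        raise PermissionError(f"{name} requires human approval")
    _call_log.append(now)
    args = json.loads(arguments_json)
    result = registry[name](**args)
    print(f"AUDIT tool={name} args={args}")          # audit logging
    return result
```

Sandboxing and timeouts live one level down, in how each registered function is actually run (e.g., inside a container with a kill deadline).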

Key Takeaways

  1. Function calling is the foundation — model generates JSON, your app executes
  2. Tool descriptions are critical — they guide the model's tool selection decisions
  3. MCP standardizes tool discovery and sharing across AI applications
  4. MCP exposes Tools + Resources + Prompts via stdio or HTTP/SSE transport
  5. A2A extends the pattern from model↔tool to agent↔agent delegation
  6. Security is non-negotiable: sandbox, rate-limit, audit, and gate every tool

For how agents use these tools in autonomous loops, see O2: AI Agents Deep Dive. For Azure-native deployment of tool-using AI, see O4: Azure AI Foundry.
