# Directory Structure

```
├── .gitignore
├── AI-learn-resource
│   ├── MCP-About.txt
│   └── README.md
├── Dockerfile
├── README.md
├── requirements.txt
├── test_mcp.py
└── xiaohongshu_mcp.py
```

# Files

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
1 | .DS_Store
2 | 
```

--------------------------------------------------------------------------------
/AI-learn-resource/README.md:
--------------------------------------------------------------------------------

```markdown
  1 | # MCP Python SDK
  2 | 
  3 | <div align="center">
  4 | 
  5 | <strong>Python implementation of the Model Context Protocol (MCP)</strong>
  6 | 
  7 | [![PyPI][pypi-badge]][pypi-url]
  8 | [![MIT licensed][mit-badge]][mit-url]
  9 | [![Python Version][python-badge]][python-url]
 10 | [![Documentation][docs-badge]][docs-url]
 11 | [![Specification][spec-badge]][spec-url]
 12 | [![GitHub Discussions][discussions-badge]][discussions-url]
 13 | 
 14 | </div>
 15 | 
 16 | <!-- omit in toc -->
 17 | ## Table of Contents
 18 | 
 19 | - [MCP Python SDK](#mcp-python-sdk)
 20 |   - [Overview](#overview)
 21 |   - [Installation](#installation)
 22 |   - [Adding MCP to your Python project](#adding-mcp-to-your-python-project)
 23 |     - [Running the standalone MCP development tools](#running-the-standalone-mcp-development-tools)
 24 |   - [Quickstart](#quickstart)
 25 |   - [What is MCP?](#what-is-mcp)
 26 |   - [Core Concepts](#core-concepts)
 27 |     - [Server](#server)
 28 |     - [Resources](#resources)
 29 |     - [Tools](#tools)
 30 |     - [Prompts](#prompts)
 31 |     - [Images](#images)
 32 |     - [Context](#context)
 33 |   - [Running Your Server](#running-your-server)
 34 |     - [Development Mode](#development-mode)
 35 |     - [Claude Desktop Integration](#claude-desktop-integration)
 36 |     - [Direct Execution](#direct-execution)
 37 |     - [Mounting to an Existing ASGI Server](#mounting-to-an-existing-asgi-server)
 38 |   - [Examples](#examples)
 39 |     - [Echo Server](#echo-server)
 40 |     - [SQLite Explorer](#sqlite-explorer)
 41 |   - [Advanced Usage](#advanced-usage)
 42 |     - [Low-Level Server](#low-level-server)
 43 |     - [Writing MCP Clients](#writing-mcp-clients)
 44 |     - [MCP Primitives](#mcp-primitives)
 45 |     - [Server Capabilities](#server-capabilities)
 46 |   - [Documentation](#documentation)
 47 |   - [Contributing](#contributing)
 48 |   - [License](#license)
 49 | 
 50 | [pypi-badge]: https://img.shields.io/pypi/v/mcp.svg
 51 | [pypi-url]: https://pypi.org/project/mcp/
 52 | [mit-badge]: https://img.shields.io/pypi/l/mcp.svg
 53 | [mit-url]: https://github.com/modelcontextprotocol/python-sdk/blob/main/LICENSE
 54 | [python-badge]: https://img.shields.io/pypi/pyversions/mcp.svg
 55 | [python-url]: https://www.python.org/downloads/
 56 | [docs-badge]: https://img.shields.io/badge/docs-modelcontextprotocol.io-blue.svg
 57 | [docs-url]: https://modelcontextprotocol.io
 58 | [spec-badge]: https://img.shields.io/badge/spec-spec.modelcontextprotocol.io-blue.svg
 59 | [spec-url]: https://spec.modelcontextprotocol.io
 60 | [discussions-badge]: https://img.shields.io/github/discussions/modelcontextprotocol/python-sdk
 61 | [discussions-url]: https://github.com/modelcontextprotocol/python-sdk/discussions
 62 | 
 63 | ## Overview
 64 | 
 65 | The Model Context Protocol allows applications to provide context for LLMs in a standardized way, separating the concerns of providing context from the actual LLM interaction. This Python SDK implements the full MCP specification, making it easy to:
 66 | 
 67 | - Build MCP clients that can connect to any MCP server
 68 | - Create MCP servers that expose resources, prompts and tools
 69 | - Use standard transports like stdio and SSE
 70 | - Handle all MCP protocol messages and lifecycle events
 71 | 
 72 | ## Installation
 73 | 
 74 | ### Adding MCP to your Python project
 75 | 
 76 | We recommend using [uv](https://docs.astral.sh/uv/) to manage your Python projects. 
 77 | 
 78 | If you haven't created a uv-managed project yet, create one:
 79 | 
 80 |    ```bash
 81 |    uv init mcp-server-demo
 82 |    cd mcp-server-demo
 83 |    ```
 84 | 
 85 |    Then add MCP to your project dependencies:
 86 | 
 87 |    ```bash
 88 |    uv add "mcp[cli]"
 89 |    ```
 90 | 
 91 | Alternatively, for projects using pip for dependencies:
 92 | ```bash
 93 | pip install "mcp[cli]"
 94 | ```
 95 | 
 96 | ### Running the standalone MCP development tools
 97 | 
 98 | To run the mcp command with uv:
 99 | 
100 | ```bash
101 | uv run mcp
102 | ```
103 | 
104 | ## Quickstart
105 | 
106 | Let's create a simple MCP server that exposes a calculator tool and some data:
107 | 
108 | ```python
109 | # server.py
110 | from mcp.server.fastmcp import FastMCP
111 | 
112 | # Create an MCP server
113 | mcp = FastMCP("Demo")
114 | 
115 | 
116 | # Add an addition tool
117 | @mcp.tool()
118 | def add(a: int, b: int) -> int:
119 |     """Add two numbers"""
120 |     return a + b
121 | 
122 | 
123 | # Add a dynamic greeting resource
124 | @mcp.resource("greeting://{name}")
125 | def get_greeting(name: str) -> str:
126 |     """Get a personalized greeting"""
127 |     return f"Hello, {name}!"
128 | ```
129 | 
130 | You can install this server in [Claude Desktop](https://claude.ai/download) and interact with it right away by running:
131 | ```bash
132 | mcp install server.py
133 | ```
134 | 
135 | Alternatively, you can test it with the MCP Inspector:
136 | ```bash
137 | mcp dev server.py
138 | ```
139 | 
140 | ## What is MCP?
141 | 
142 | The [Model Context Protocol (MCP)](https://modelcontextprotocol.io) lets you build servers that expose data and functionality to LLM applications in a secure, standardized way. Think of it like a web API, but specifically designed for LLM interactions. MCP servers can:
143 | 
144 | - Expose data through **Resources** (think of these sort of like GET endpoints; they are used to load information into the LLM's context)
145 | - Provide functionality through **Tools** (sort of like POST endpoints; they are used to execute code or otherwise produce a side effect)
146 | - Define interaction patterns through **Prompts** (reusable templates for LLM interactions)
147 | - And more!
148 | 
149 | ## Core Concepts
150 | 
151 | ### Server
152 | 
153 | The FastMCP server is your core interface to the MCP protocol. It handles connection management, protocol compliance, and message routing:
154 | 
155 | ```python
156 | # Add lifespan support for startup/shutdown with strong typing
157 | from contextlib import asynccontextmanager
158 | from collections.abc import AsyncIterator
159 | from dataclasses import dataclass
160 | 
161 | from fake_database import Database  # Replace with your actual DB type
162 | 
163 | from mcp.server.fastmcp import Context, FastMCP
164 | 
165 | # Create a named server
166 | mcp = FastMCP("My App")
167 | 
168 | # Specify dependencies for deployment and development
169 | mcp = FastMCP("My App", dependencies=["pandas", "numpy"])
170 | 
171 | 
172 | @dataclass
173 | class AppContext:
174 |     db: Database
175 | 
176 | 
177 | @asynccontextmanager
178 | async def app_lifespan(server: FastMCP) -> AsyncIterator[AppContext]:
179 |     """Manage application lifecycle with type-safe context"""
180 |     # Initialize on startup
181 |     db = await Database.connect()
182 |     try:
183 |         yield AppContext(db=db)
184 |     finally:
185 |         # Cleanup on shutdown
186 |         await db.disconnect()
187 | 
188 | 
189 | # Pass lifespan to server
190 | mcp = FastMCP("My App", lifespan=app_lifespan)
191 | 
192 | 
193 | # Access type-safe lifespan context in tools
194 | @mcp.tool()
195 | def query_db(ctx: Context) -> str:
196 |     """Tool that uses initialized resources"""
197 |     db = ctx.request_context.lifespan_context.db
198 |     return db.query()
199 | ```
200 | 
201 | ### Resources
202 | 
203 | Resources are how you expose data to LLMs. They're similar to GET endpoints in a REST API - they provide data but shouldn't perform significant computation or have side effects:
204 | 
205 | ```python
206 | from mcp.server.fastmcp import FastMCP
207 | 
208 | mcp = FastMCP("My App")
209 | 
210 | 
211 | @mcp.resource("config://app")
212 | def get_config() -> str:
213 |     """Static configuration data"""
214 |     return "App configuration here"
215 | 
216 | 
217 | @mcp.resource("users://{user_id}/profile")
218 | def get_user_profile(user_id: str) -> str:
219 |     """Dynamic user data"""
220 |     return f"Profile data for user {user_id}"
221 | ```
222 | 
223 | ### Tools
224 | 
225 | Tools let LLMs take actions through your server. Unlike resources, tools are expected to perform computation and have side effects:
226 | 
227 | ```python
228 | import httpx
229 | from mcp.server.fastmcp import FastMCP
230 | 
231 | mcp = FastMCP("My App")
232 | 
233 | 
234 | @mcp.tool()
235 | def calculate_bmi(weight_kg: float, height_m: float) -> float:
236 |     """Calculate BMI given weight in kg and height in meters"""
237 |     return weight_kg / (height_m**2)
238 | 
239 | 
240 | @mcp.tool()
241 | async def fetch_weather(city: str) -> str:
242 |     """Fetch current weather for a city"""
243 |     async with httpx.AsyncClient() as client:
244 |         response = await client.get(f"https://api.weather.com/{city}")
245 |         return response.text
246 | ```
247 | 
248 | ### Prompts
249 | 
250 | Prompts are reusable templates that help LLMs interact with your server effectively:
251 | 
252 | ```python
253 | from mcp.server.fastmcp import FastMCP
254 | from mcp.server.fastmcp.prompts import base
255 | 
256 | mcp = FastMCP("My App")
257 | 
258 | 
259 | @mcp.prompt()
260 | def review_code(code: str) -> str:
261 |     return f"Please review this code:\n\n{code}"
262 | 
263 | 
264 | @mcp.prompt()
265 | def debug_error(error: str) -> list[base.Message]:
266 |     return [
267 |         base.UserMessage("I'm seeing this error:"),
268 |         base.UserMessage(error),
269 |         base.AssistantMessage("I'll help debug that. What have you tried so far?"),
270 |     ]
271 | ```
272 | 
273 | ### Images
274 | 
275 | FastMCP provides an `Image` class that automatically handles image data:
276 | 
277 | ```python
278 | from mcp.server.fastmcp import FastMCP, Image
279 | from PIL import Image as PILImage
280 | 
281 | mcp = FastMCP("My App")
282 | 
283 | 
284 | @mcp.tool()
285 | def create_thumbnail(image_path: str) -> Image:
286 |     """Create a thumbnail from an image"""
287 |     img = PILImage.open(image_path)
288 |     img.thumbnail((100, 100))
289 |     return Image(data=img.tobytes(), format="png")
290 | ```
291 | 
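Note: `PILImage.tobytes()` returns raw, unencoded pixel data rather than a PNG file, so the snippet above labels bytes that are not actually PNG. A minimal corrected sketch (assuming Pillow is installed) encodes the thumbnail before returning it:

```python
import io

from mcp.server.fastmcp import FastMCP, Image
from PIL import Image as PILImage

mcp = FastMCP("My App")


@mcp.tool()
def create_png_thumbnail(image_path: str) -> Image:
    """Create a thumbnail and return it as PNG-encoded bytes"""
    img = PILImage.open(image_path)
    img.thumbnail((100, 100))
    buffer = io.BytesIO()
    img.save(buffer, format="PNG")  # encode the pixels as an actual PNG file
    return Image(data=buffer.getvalue(), format="png")
```
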
292 | ### Context
293 | 
294 | The Context object gives your tools and resources access to MCP capabilities:
295 | 
296 | ```python
297 | from mcp.server.fastmcp import FastMCP, Context
298 | 
299 | mcp = FastMCP("My App")
300 | 
301 | 
302 | @mcp.tool()
303 | async def long_task(files: list[str], ctx: Context) -> str:
304 |     """Process multiple files with progress tracking"""
305 |     for i, file in enumerate(files):
306 |         ctx.info(f"Processing {file}")
307 |         await ctx.report_progress(i, len(files))
308 |         data, mime_type = await ctx.read_resource(f"file://{file}")
309 |     return "Processing complete"
310 | ```
311 | 
312 | ## Running Your Server
313 | 
314 | ### Development Mode
315 | 
316 | The fastest way to test and debug your server is with the MCP Inspector:
317 | 
318 | ```bash
319 | mcp dev server.py
320 | 
321 | # Add dependencies
322 | mcp dev server.py --with pandas --with numpy
323 | 
324 | # Mount local code
325 | mcp dev server.py --with-editable .
326 | ```
327 | 
328 | ### Claude Desktop Integration
329 | 
330 | Once your server is ready, install it in Claude Desktop:
331 | 
332 | ```bash
333 | mcp install server.py
334 | 
335 | # Custom name
336 | mcp install server.py --name "My Analytics Server"
337 | 
338 | # Environment variables
339 | mcp install server.py -v API_KEY=abc123 -v DB_URL=postgres://...
340 | mcp install server.py -f .env
341 | ```
342 | 
343 | ### Direct Execution
344 | 
345 | For advanced scenarios like custom deployments:
346 | 
347 | ```python
348 | from mcp.server.fastmcp import FastMCP
349 | 
350 | mcp = FastMCP("My App")
351 | 
352 | if __name__ == "__main__":
353 |     mcp.run()
354 | ```
355 | 
356 | Run it with:
357 | ```bash
358 | python server.py
359 | # or
360 | mcp run server.py
361 | ```
362 | 
363 | ### Mounting to an Existing ASGI Server
364 | 
365 | You can mount the SSE server to an existing ASGI server using the `sse_app` method. This allows you to integrate the SSE server with other ASGI applications.
366 | 
367 | ```python
368 | from starlette.applications import Starlette
369 | from starlette.routing import Mount, Host
370 | from mcp.server.fastmcp import FastMCP
371 | 
372 | 
373 | mcp = FastMCP("My App")
374 | 
375 | # Mount the SSE server to the existing ASGI server
376 | app = Starlette(
377 |     routes=[
378 |         Mount('/', app=mcp.sse_app()),
379 |     ]
380 | )
381 | 
382 | # or dynamically mount as host
383 | app.router.routes.append(Host('mcp.acme.corp', app=mcp.sse_app()))
384 | ```
385 | 
386 | For more information on mounting applications in Starlette, see the [Starlette documentation](https://www.starlette.io/routing/#submounting-routes).
387 | 
388 | ## Examples
389 | 
390 | ### Echo Server
391 | 
392 | A simple server demonstrating resources, tools, and prompts:
393 | 
394 | ```python
395 | from mcp.server.fastmcp import FastMCP
396 | 
397 | mcp = FastMCP("Echo")
398 | 
399 | 
400 | @mcp.resource("echo://{message}")
401 | def echo_resource(message: str) -> str:
402 |     """Echo a message as a resource"""
403 |     return f"Resource echo: {message}"
404 | 
405 | 
406 | @mcp.tool()
407 | def echo_tool(message: str) -> str:
408 |     """Echo a message as a tool"""
409 |     return f"Tool echo: {message}"
410 | 
411 | 
412 | @mcp.prompt()
413 | def echo_prompt(message: str) -> str:
414 |     """Create an echo prompt"""
415 |     return f"Please process this message: {message}"
416 | ```
417 | 
418 | ### SQLite Explorer
419 | 
420 | A more complex example showing database integration:
421 | 
422 | ```python
423 | import sqlite3
424 | 
425 | from mcp.server.fastmcp import FastMCP
426 | 
427 | mcp = FastMCP("SQLite Explorer")
428 | 
429 | 
430 | @mcp.resource("schema://main")
431 | def get_schema() -> str:
432 |     """Provide the database schema as a resource"""
433 |     conn = sqlite3.connect("database.db")
434 |     schema = conn.execute("SELECT sql FROM sqlite_master WHERE type='table'").fetchall()
435 |     return "\n".join(sql[0] for sql in schema if sql[0])
436 | 
437 | 
438 | @mcp.tool()
439 | def query_data(sql: str) -> str:
440 |     """Execute SQL queries safely"""
441 |     conn = sqlite3.connect("database.db")
442 |     try:
443 |         result = conn.execute(sql).fetchall()
444 |         return "\n".join(str(row) for row in result)
445 |     except Exception as e:
446 |         return f"Error: {str(e)}"
447 | ```
448 | 
449 | ## Advanced Usage
450 | 
451 | ### Low-Level Server
452 | 
453 | For more control, you can use the low-level server implementation directly. This gives you full access to the protocol and allows you to customize every aspect of your server, including lifecycle management through the lifespan API:
454 | 
455 | ```python
456 | from contextlib import asynccontextmanager
457 | from collections.abc import AsyncIterator
458 | 
459 | from fake_database import Database  # Replace with your actual DB type
460 | 
461 | from mcp.server import Server
462 | 
463 | 
464 | @asynccontextmanager
465 | async def server_lifespan(server: Server) -> AsyncIterator[dict]:
466 |     """Manage server startup and shutdown lifecycle."""
467 |     # Initialize resources on startup
468 |     db = await Database.connect()
469 |     try:
470 |         yield {"db": db}
471 |     finally:
472 |         # Clean up on shutdown
473 |         await db.disconnect()
474 | 
475 | 
476 | # Pass lifespan to server
477 | server = Server("example-server", lifespan=server_lifespan)
478 | 
479 | 
480 | # Access lifespan context in handlers
481 | @server.call_tool()
482 | async def query_db(name: str, arguments: dict) -> list:
483 |     ctx = server.request_context
484 |     db = ctx.lifespan_context["db"]
485 |     return await db.query(arguments["query"])
486 | ```
487 | 
488 | The lifespan API provides:
489 | - A way to initialize resources when the server starts and clean them up when it stops
490 | - Access to initialized resources through the request context in handlers
491 | - Type-safe context passing between lifespan and request handlers
492 | 
493 | ```python
494 | import mcp.server.stdio
495 | import mcp.types as types
496 | from mcp.server.lowlevel import NotificationOptions, Server
497 | from mcp.server.models import InitializationOptions
498 | 
499 | # Create a server instance
500 | server = Server("example-server")
501 | 
502 | 
503 | @server.list_prompts()
504 | async def handle_list_prompts() -> list[types.Prompt]:
505 |     return [
506 |         types.Prompt(
507 |             name="example-prompt",
508 |             description="An example prompt template",
509 |             arguments=[
510 |                 types.PromptArgument(
511 |                     name="arg1", description="Example argument", required=True
512 |                 )
513 |             ],
514 |         )
515 |     ]
516 | 
517 | 
518 | @server.get_prompt()
519 | async def handle_get_prompt(
520 |     name: str, arguments: dict[str, str] | None
521 | ) -> types.GetPromptResult:
522 |     if name != "example-prompt":
523 |         raise ValueError(f"Unknown prompt: {name}")
524 | 
525 |     return types.GetPromptResult(
526 |         description="Example prompt",
527 |         messages=[
528 |             types.PromptMessage(
529 |                 role="user",
530 |                 content=types.TextContent(type="text", text="Example prompt text"),
531 |             )
532 |         ],
533 |     )
534 | 
535 | 
536 | async def run():
537 |     async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
538 |         await server.run(
539 |             read_stream,
540 |             write_stream,
541 |             InitializationOptions(
542 |                 server_name="example",
543 |                 server_version="0.1.0",
544 |                 capabilities=server.get_capabilities(
545 |                     notification_options=NotificationOptions(),
546 |                     experimental_capabilities={},
547 |                 ),
548 |             ),
549 |         )
550 | 
551 | 
552 | if __name__ == "__main__":
553 |     import asyncio
554 | 
555 |     asyncio.run(run())
556 | ```
557 | 
558 | ### Writing MCP Clients
559 | 
560 | The SDK provides a high-level client interface for connecting to MCP servers:
561 | 
562 | ```python
563 | from mcp import ClientSession, StdioServerParameters, types
564 | from mcp.client.stdio import stdio_client
565 | 
566 | # Create server parameters for stdio connection
567 | server_params = StdioServerParameters(
568 |     command="python",  # Executable
569 |     args=["example_server.py"],  # Optional command line arguments
570 |     env=None,  # Optional environment variables
571 | )
572 | 
573 | 
574 | # Optional: create a sampling callback
575 | async def handle_sampling_message(
576 |     message: types.CreateMessageRequestParams,
577 | ) -> types.CreateMessageResult:
578 |     return types.CreateMessageResult(
579 |         role="assistant",
580 |         content=types.TextContent(
581 |             type="text",
582 |             text="Hello, world! from model",
583 |         ),
584 |         model="gpt-3.5-turbo",
585 |         stopReason="endTurn",
586 |     )
587 | 
588 | 
589 | async def run():
590 |     async with stdio_client(server_params) as (read, write):
591 |         async with ClientSession(
592 |             read, write, sampling_callback=handle_sampling_message
593 |         ) as session:
594 |             # Initialize the connection
595 |             await session.initialize()
596 | 
597 |             # List available prompts
598 |             prompts = await session.list_prompts()
599 | 
600 |             # Get a prompt
601 |             prompt = await session.get_prompt(
602 |                 "example-prompt", arguments={"arg1": "value"}
603 |             )
604 | 
605 |             # List available resources
606 |             resources = await session.list_resources()
607 | 
608 |             # List available tools
609 |             tools = await session.list_tools()
610 | 
611 |             # Read a resource
612 |             content, mime_type = await session.read_resource("file://some/path")
613 | 
614 |             # Call a tool
615 |             result = await session.call_tool("tool-name", arguments={"arg1": "value"})
616 | 
617 | 
618 | if __name__ == "__main__":
619 |     import asyncio
620 | 
621 |     asyncio.run(run())
622 | ```
623 | 
624 | ### MCP Primitives
625 | 
626 | The MCP protocol defines three core primitives that servers can implement:
627 | 
628 | | Primitive | Control               | Description                                         | Example Use                  |
629 | |-----------|-----------------------|-----------------------------------------------------|------------------------------|
630 | | Prompts   | User-controlled       | Interactive templates invoked by user choice        | Slash commands, menu options |
631 | | Resources | Application-controlled| Contextual data managed by the client application   | File contents, API responses |
632 | | Tools     | Model-controlled      | Functions exposed to the LLM to take actions        | API calls, data updates      |
633 | 
634 | ### Server Capabilities
635 | 
636 | MCP servers declare capabilities during initialization:
637 | 
638 | | Capability  | Feature Flag                 | Description                        |
639 | |-------------|------------------------------|------------------------------------|
640 | | `prompts`   | `listChanged`                | Prompt template management         |
641 | | `resources` | `subscribe`<br/>`listChanged`| Resource exposure and updates      |
642 | | `tools`     | `listChanged`                | Tool discovery and execution       |
643 | | `logging`   | -                            | Server logging configuration       |
644 | | `completion`| -                            | Argument completion suggestions    |
645 | 
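As a rough sketch, a client can check which of these capabilities a server declared during initialization. This assumes `session.initialize()` returns an `InitializeResult` whose `capabilities` field carries the flags above; the server command is a placeholder:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Placeholder launch command for a local stdio server
server_params = StdioServerParameters(command="python", args=["example_server.py"])


async def inspect_capabilities() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            init_result = await session.initialize()
            caps = init_result.capabilities
            # Each field is only populated if the server declared that capability
            print("prompts:", caps.prompts)
            print("resources:", caps.resources)
            print("tools:", caps.tools)
            print("logging:", caps.logging)


if __name__ == "__main__":
    asyncio.run(inspect_capabilities())
```
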
646 | ## Documentation
647 | 
648 | - [Model Context Protocol documentation](https://modelcontextprotocol.io)
649 | - [Model Context Protocol specification](https://spec.modelcontextprotocol.io)
650 | - [Officially supported servers](https://github.com/modelcontextprotocol/servers)
651 | 
652 | ## Contributing
653 | 
654 | We are passionate about supporting contributors of all levels of experience and would love to see you get involved in the project. See the [contributing guide](CONTRIBUTING.md) to get started.
655 | 
656 | ## License
657 | 
658 | This project is licensed under the MIT License - see the LICENSE file for details.
659 | 
```

--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------

```
 1 | playwright>=1.40.0
 2 | pytest-playwright>=0.4.0
 3 | pandas>=2.1.1
 4 | numpy>=1.26.4
 5 | # Note: asyncio is part of the Python standard library and should not be pinned as a dependency
 6 | mcp[cli]
 7 | python-dotenv==1.0.0
 8 | requests==2.31.0
 9 | schedule==1.2.0
10 | tqdm==4.66.1
11 | fastapi>=0.95.1
12 | uvicorn>=0.22.0
13 | 
```

--------------------------------------------------------------------------------
/AI-learn-resource/MCP-About.txt:
--------------------------------------------------------------------------------

```
   1 | # Introduction
   2 | 
   3 | > Get started with the Model Context Protocol (MCP)
   4 | 
   5 | MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.
   6 | 
   7 | ## Why MCP?
   8 | 
   9 | MCP helps you build agents and complex workflows on top of LLMs. LLMs frequently need to integrate with data and tools, and MCP provides:
  10 | 
  11 | * A growing list of pre-built integrations that your LLM can directly plug into
  12 | * The flexibility to switch between LLM providers and vendors
  13 | * Best practices for securing your data within your infrastructure
  14 | 
  15 | ### General architecture
  16 | 
  17 | At its core, MCP follows a client-server architecture where a host application can connect to multiple servers:
  18 | 
  19 | ```mermaid
  20 | flowchart LR
  21 |     subgraph "Your Computer"
  22 |         Host["Host with MCP Client\n(Claude, IDEs, Tools)"]
  23 |         S1["MCP Server A"]
  24 |         S2["MCP Server B"]
  25 |         S3["MCP Server C"]
  26 |         Host <-->|"MCP Protocol"| S1
  27 |         Host <-->|"MCP Protocol"| S2
  28 |         Host <-->|"MCP Protocol"| S3
  29 |         S1 <--> D1[("Local\nData Source A")]
  30 |         S2 <--> D2[("Local\nData Source B")]
  31 |     end
  32 |     subgraph "Internet"
  33 |         S3 <-->|"Web APIs"| D3[("Remote\nService C")]
  34 |     end
  35 | ```
  36 | 
  37 | * **MCP Hosts**: Programs like Claude Desktop, IDEs, or AI tools that want to access data through MCP
  38 | * **MCP Clients**: Protocol clients that maintain 1:1 connections with servers
  39 | * **MCP Servers**: Lightweight programs that each expose specific capabilities through the standardized Model Context Protocol
  40 | * **Local Data Sources**: Your computer's files, databases, and services that MCP servers can securely access
  41 | * **Remote Services**: External systems available over the internet (e.g., through APIs) that MCP servers can connect to
  42 | 
  43 | # Core architecture
  44 | 
  45 | > Understand how MCP connects clients, servers, and LLMs
  46 | 
  47 | The Model Context Protocol (MCP) is built on a flexible, extensible architecture that enables seamless communication between LLM applications and integrations. This document covers the core architectural components and concepts.
  48 | 
  49 | ## Overview
  50 | 
  51 | MCP follows a client-server architecture where:
  52 | 
  53 | * **Hosts** are LLM applications (like Claude Desktop or IDEs) that initiate connections
  54 | * **Clients** maintain 1:1 connections with servers, inside the host application
  55 | * **Servers** provide context, tools, and prompts to clients
  56 | 
  57 | ```mermaid
  58 | flowchart LR
  59 |     subgraph "Host"
  60 |         client1[MCP Client]
  61 |         client2[MCP Client]
  62 |     end
  63 |     subgraph "Server Process"
  64 |         server1[MCP Server]
  65 |     end
  66 |     subgraph "Server Process"
  67 |         server2[MCP Server]
  68 |     end
  69 | 
  70 |     client1 <-->|Transport Layer| server1
  71 |     client2 <-->|Transport Layer| server2
  72 | ```
  73 | 
  74 | ## Core components
  75 | 
  76 | ### Protocol layer
  77 | 
  78 | The protocol layer handles message framing, request/response linking, and high-level communication patterns.
  79 | 
  80 | #### For TypeScript
  81 | ```typescript
  82 | class Protocol<Request, Notification, Result> {
  83 | 	// Handle incoming requests
  84 | 	setRequestHandler<T>(schema: T, handler: (request: T, extra: RequestHandlerExtra) => Promise<Result>): void
  85 | 
  86 | 	// Handle incoming notifications
  87 | 	setNotificationHandler<T>(schema: T, handler: (notification: T) => Promise<void>): void
  88 | 
  89 | 	// Send requests and await responses
  90 | 	request<T>(request: Request, schema: T, options?: RequestOptions): Promise<T>
  91 | 
  92 | 	// Send one-way notifications
  93 | 	notification(notification: Notification): Promise<void>
  94 | }
  95 | ```
  96 | 
  97 | 
  98 | #### For Python
  99 | ```python
 100 | class Session(BaseSession[RequestT, NotificationT, ResultT]):
 101 | 	async def send_request(
 102 | 		self,
 103 | 		request: RequestT,
 104 | 		result_type: type[Result]
 105 | 	) -> Result:
 106 | 		"""Send request and wait for response. Raises McpError if response contains error."""
 107 | 		# Request handling implementation
 108 | 
 109 | 	async def send_notification(
 110 | 		self,
 111 | 		notification: NotificationT
 112 | 	) -> None:
 113 | 		"""Send one-way notification that doesn't expect response."""
 114 | 		# Notification handling implementation
 115 | 
 116 | 	async def _received_request(
 117 | 		self,
 118 | 		responder: RequestResponder[ReceiveRequestT, ResultT]
 119 | 	) -> None:
 120 | 		"""Handle incoming request from other side."""
 121 | 		# Request handling implementation
 122 | 
 123 | 	async def _received_notification(
 124 | 		self,
 125 | 		notification: ReceiveNotificationT
 126 | 	) -> None:
 127 | 		"""Handle incoming notification from other side."""
 128 | 		# Notification handling implementation
 129 | ```
 130 | 
 131 | Key classes include:
 132 | 
 133 | * `Protocol`
 134 | * `Client`
 135 | * `Server`
 136 | 
 137 | ### Transport layer
 138 | 
 139 | The transport layer handles the actual communication between clients and servers. MCP supports multiple transport mechanisms:
 140 | 
 141 | 1. **Stdio transport**
 142 |    * Uses standard input/output for communication
 143 |    * Ideal for local processes
 144 | 
 145 | 2. **HTTP with SSE transport**
 146 |    * Uses Server-Sent Events for server-to-client messages
 147 |    * HTTP POST for client-to-server messages
 148 | 
 149 | All transports use [JSON-RPC](https://www.jsonrpc.org/) 2.0 to exchange messages. See the [specification](/specification/) for detailed information about the Model Context Protocol message format.
 150 | 
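For intuition, the snippet below sketches a single exchange as Python dicts. The method names follow the specification, but the payloads are illustrative:

```python
# Illustrative JSON-RPC 2.0 messages (shapes only; see the specification for the schema)
request = {
    "jsonrpc": "2.0",
    "id": 1,  # requests carry an id and expect a response
    "method": "tools/call",
    "params": {"name": "get-alerts", "arguments": {"state": "CA"}},
}

result = {
    "jsonrpc": "2.0",
    "id": 1,  # matches the request id
    "result": {"content": [{"type": "text", "text": "No active alerts for CA"}]},
}

notification = {
    "jsonrpc": "2.0",
    "method": "notifications/initialized",  # notifications have no id and get no response
}
```
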
 151 | ### Message types
 152 | 
 153 | MCP has these main types of messages:
 154 | 
 155 | 1. **Requests** expect a response from the other side:
 156 |    ```typescript
 157 |    interface Request {
 158 |      method: string;
 159 |      params?: { ... };
 160 |    }
 161 |    ```
 162 | 
 163 | 2. **Results** are successful responses to requests:
 164 |    ```typescript
 165 |    interface Result {
 166 |      [key: string]: unknown;
 167 |    }
 168 |    ```
 169 | 
 170 | 3. **Errors** indicate that a request failed:
 171 |    ```typescript
 172 |    interface Error {
 173 |      code: number;
 174 |      message: string;
 175 |      data?: unknown;
 176 |    }
 177 |    ```
 178 | 
 179 | 4. **Notifications** are one-way messages that don't expect a response:
 180 |    ```typescript
 181 |    interface Notification {
 182 |      method: string;
 183 |      params?: { ... };
 184 |    }
 185 |    ```
 186 | 
 187 | ## Connection lifecycle
 188 | 
 189 | ### 1. Initialization
 190 | 
 191 | ```mermaid
 192 | sequenceDiagram
 193 |     participant Client
 194 |     participant Server
 195 | 
 196 |     Client->>Server: initialize request
 197 |     Server->>Client: initialize response
 198 |     Client->>Server: initialized notification
 199 | 
 200 |     Note over Client,Server: Connection ready for use
 201 | ```
 202 | 
 203 | 1. Client sends `initialize` request with protocol version and capabilities
 204 | 2. Server responds with its protocol version and capabilities
 205 | 3. Client sends `initialized` notification as acknowledgment
 206 | 4. Normal message exchange begins
 207 | 
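In the Python SDK this handshake is wrapped by `ClientSession.initialize()`. A minimal sketch, where the server command and script name are placeholders:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server_params = StdioServerParameters(command="python", args=["example_server.py"])


async def connect() -> None:
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            # Steps 1-3: send `initialize`, await the server's response,
            # then send the `initialized` notification
            await session.initialize()
            # Step 4: normal message exchange can begin
            await session.list_tools()


if __name__ == "__main__":
    asyncio.run(connect())
```
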
 208 | ### 2. Message exchange
 209 | 
 210 | After initialization, the following patterns are supported:
 211 | 
 212 | * **Request-Response**: Client or server sends requests, the other responds
 213 | * **Notifications**: Either party sends one-way messages
 214 | 
 215 | ### 3. Termination
 216 | 
 217 | Either party can terminate the connection:
 218 | 
 219 | * Clean shutdown via `close()`
 220 | * Transport disconnection
 221 | * Error conditions
 222 | 
 223 | ## Error handling
 224 | 
 225 | MCP defines these standard error codes:
 226 | 
 227 | ```typescript
 228 | enum ErrorCode {
 229 |   // Standard JSON-RPC error codes
 230 |   ParseError = -32700,
 231 |   InvalidRequest = -32600,
 232 |   MethodNotFound = -32601,
 233 |   InvalidParams = -32602,
 234 |   InternalError = -32603
 235 | }
 236 | ```
 237 | 
 238 | SDKs and applications can define their own error codes above -32000.
 239 | 
 240 | Errors are propagated through:
 241 | 
 242 | * Error responses to requests
 243 | * Error events on transports
 244 | * Protocol-level error handlers
 245 | 
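The same codes expressed in Python, with an application-defined code added for illustration (the custom code and exception class below are not part of any SDK):

```python
from enum import IntEnum


class ErrorCode(IntEnum):
    # Standard JSON-RPC error codes
    PARSE_ERROR = -32700
    INVALID_REQUEST = -32600
    METHOD_NOT_FOUND = -32601
    INVALID_PARAMS = -32602
    INTERNAL_ERROR = -32603
    # Application-defined code (above -32000), illustrative only
    RESOURCE_UNAVAILABLE = -31000


class ProtocolError(Exception):
    """Illustrative error carrying a JSON-RPC style code, message, and optional data."""

    def __init__(self, code: ErrorCode, message: str, data: object | None = None):
        super().__init__(message)
        self.code = code
        self.data = data
```
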
 246 | ## Implementation example
 247 | 
 248 | Here's a basic example of implementing an MCP server:
 249 | 
 250 | ### For TypeScript
 251 | ```typescript
 252 | import { Server } from "@modelcontextprotocol/sdk/server/index.js";
 253 | import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
 254 | 
 255 | const server = new Server({
 256 |   name: "example-server",
 257 |   version: "1.0.0"
 258 | }, {
 259 |   capabilities: {
 260 | 	resources: {}
 261 |   }
 262 | });
 263 | 
 264 | // Handle requests
 265 | server.setRequestHandler(ListResourcesRequestSchema, async () => {
 266 |   return {
 267 | 	resources: [
 268 | 	  {
 269 | 		uri: "example://resource",
 270 | 		name: "Example Resource"
 271 | 	  }
 272 | 	]
 273 |   };
 274 | });
 275 | 
 276 | // Connect transport
 277 | const transport = new StdioServerTransport();
 278 | await server.connect(transport);
 279 | ```
 280 | 
 281 | ### For Python
 282 | ```python
 283 | import asyncio
 284 | import mcp.types as types
 285 | from mcp.server import Server
 286 | from mcp.server.stdio import stdio_server
 287 | 
 288 | app = Server("example-server")
 289 | 
 290 | @app.list_resources()
 291 | async def list_resources() -> list[types.Resource]:
 292 | 	return [
 293 | 		types.Resource(
 294 | 			uri="example://resource",
 295 | 			name="Example Resource"
 296 | 		)
 297 | 	]
 298 | 
 299 | async def main():
 300 | 	async with stdio_server() as streams:
 301 | 		await app.run(
 302 | 			streams[0],
 303 | 			streams[1],
 304 | 			app.create_initialization_options()
 305 | 		)
 306 | 
 307 | if __name__ == "__main__":
 308 | 	asyncio.run(main())
 309 | ```
 310 | 
 311 | 
 312 | ## Best practices
 313 | 
 314 | ### Transport selection
 315 | 
 316 | 1. **Local communication**
 317 |    * Use stdio transport for local processes
 318 |    * Efficient for same-machine communication
 319 |    * Simple process management
 320 | 
 321 | 2. **Remote communication**
 322 |    * Use SSE for scenarios requiring HTTP compatibility
 323 |    * Consider security implications including authentication and authorization
 324 | 
 325 | ### Message handling
 326 | 
 327 | 1. **Request processing**
 328 |    * Validate inputs thoroughly
 329 |    * Use type-safe schemas
 330 |    * Handle errors gracefully
 331 |    * Implement timeouts
 332 | 
 333 | 2. **Progress reporting**
 334 |    * Use progress tokens for long operations
 335 |    * Report progress incrementally
 336 |    * Include total progress when known
 337 | 
 338 | 3. **Error management**
 339 |    * Use appropriate error codes
 340 |    * Include helpful error messages
 341 |    * Clean up resources on errors
 342 | 
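A hedged sketch of the request-processing and progress-reporting points above, using the FastMCP `Context` helpers; the tool name, timeout, and validation rule are illustrative:

```python
import asyncio

from mcp.server.fastmcp import Context, FastMCP

mcp = FastMCP("Best Practices Demo")


async def fetch_record(record_id: str) -> str:
    # Stand-in for real work (database call, HTTP request, ...)
    await asyncio.sleep(0.1)
    return f"record {record_id}"


@mcp.tool()
async def load_records(record_ids: list[str], ctx: Context) -> str:
    """Load records with input validation, a timeout, and progress reporting."""
    # Validate inputs thoroughly before doing any work
    if not record_ids or not all(r.isalnum() for r in record_ids):
        return "Error: record_ids must be non-empty alphanumeric strings"

    results: list[str] = []
    for i, record_id in enumerate(record_ids):
        try:
            # Implement timeouts so a slow backend cannot hang the request
            results.append(await asyncio.wait_for(fetch_record(record_id), timeout=10.0))
        except asyncio.TimeoutError:
            return f"Error: timed out while loading {record_id}"
        # Report progress incrementally, including the known total
        await ctx.report_progress(i + 1, len(record_ids))

    return "\n".join(results)
```
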
 343 | ## Security considerations
 344 | 
 345 | 1. **Transport security**
 346 |    * Use TLS for remote connections
 347 |    * Validate connection origins
 348 |    * Implement authentication when needed
 349 | 
 350 | 2. **Message validation**
 351 |    * Validate all incoming messages
 352 |    * Sanitize inputs
 353 |    * Check message size limits
 354 |    * Verify JSON-RPC format
 355 | 
 356 | 3. **Resource protection**
 357 |    * Implement access controls
 358 |    * Validate resource paths
 359 |    * Monitor resource usage
 360 |    * Rate limit requests
 361 | 
 362 | 4. **Error handling**
 363 |    * Don't leak sensitive information
 364 |    * Log security-relevant errors
 365 |    * Implement proper cleanup
 366 |    * Handle DoS scenarios
 367 | 
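A minimal sketch of the message-validation points above; the size limit and checks are illustrative, and the official SDKs already perform their own schema validation:

```python
import json

MAX_MESSAGE_BYTES = 1_000_000  # illustrative size limit


def validate_message(raw: bytes) -> dict:
    """Illustrative pre-checks before handing a message to the protocol layer."""
    # Check message size limits
    if len(raw) > MAX_MESSAGE_BYTES:
        raise ValueError("message exceeds size limit")
    # Parse the input
    try:
        message = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError("message is not valid JSON") from exc
    # Verify basic JSON-RPC 2.0 shape
    if not isinstance(message, dict) or message.get("jsonrpc") != "2.0":
        raise ValueError("message is not a JSON-RPC 2.0 object")
    if not any(key in message for key in ("method", "result", "error")):
        raise ValueError("message is neither a request, a response, nor a notification")
    return message
```
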
 368 | ## Debugging and monitoring
 369 | 
 370 | 1. **Logging**
 371 |    * Log protocol events
 372 |    * Track message flow
 373 |    * Monitor performance
 374 |    * Record errors
 375 | 
 376 | 2. **Diagnostics**
 377 |    * Implement health checks
 378 |    * Monitor connection state
 379 |    * Track resource usage
 380 |    * Profile performance
 381 | 
 382 | 3. **Testing**
 383 |    * Test different transports
 384 |    * Verify error handling
 385 |    * Check edge cases
 386 |    * Load test servers
 387 | 
 388 | 
 389 | # For Server Developers
 390 | 
 391 | > Get started building your own server to use in Claude for Desktop and other clients.
 392 | 
 393 | In this tutorial, we'll build a simple MCP weather server and connect it to a host, Claude for Desktop. We'll start with a basic setup, and then progress to more complex use cases.
 394 | 
 395 | ### What we'll be building
 396 | 
 397 | Many LLMs do not currently have the ability to fetch the forecast and severe weather alerts. Let's use MCP to solve that!
 398 | 
 399 | We'll build a server that exposes two tools: `get-alerts` and `get-forecast`. Then we'll connect the server to an MCP host (in this case, Claude for Desktop):
 400 | 
 401 | <Frame>
 402 |   <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/weather-alerts.png" />
 403 | </Frame>
 404 | 
 405 | <Frame>
 406 |   <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/current-weather.png" />
 407 | </Frame>
 408 | 
 409 | <Note>
 410 |   Servers can connect to any client. We've chosen Claude for Desktop here for simplicity, but we also have guides on [building your own client](/quickstart/client) as well as a [list of other clients here](/clients).
 411 | </Note>
 412 | 
 413 | <Accordion title="Why Claude for Desktop and not Claude.ai?">
 414 |   Because servers are locally run, MCP currently only supports desktop hosts. Remote hosts are in active development.
 415 | </Accordion>
 416 | 
 417 | ### Core MCP Concepts
 418 | 
 419 | MCP servers can provide three main types of capabilities:
 420 | 
 421 | 1. **Resources**: File-like data that can be read by clients (like API responses or file contents)
 422 | 2. **Tools**: Functions that can be called by the LLM (with user approval)
 423 | 3. **Prompts**: Pre-written templates that help users accomplish specific tasks
 424 | 
 425 | This tutorial will primarily focus on tools.
 426 | 
 427 | #### For Python
 428 | Let's get started with building our weather server! [You can find the complete code for what we'll be building here.](https://github.com/modelcontextprotocol/quickstart-resources/tree/main/weather-server-python)
 429 | 
 430 | ##### Prerequisite knowledge
 431 | 
 432 | This quickstart assumes you have familiarity with:
 433 | 
 434 | * Python
 435 | * LLMs like Claude
 436 | 
 437 | ##### System requirements
 438 | 
 439 | * Python 3.10 or higher installed.
 440 | * You must use the Python MCP SDK 1.2.0 or higher.
 441 | 
 442 | ##### Set up your environment
 443 | 
 444 | First, let's install `uv` and set up our Python project and environment:
 445 | 
 446 | 
 447 |   ```bash MacOS/Linux
 448 |   curl -LsSf https://astral.sh/uv/install.sh | sh
 449 |   ```
 450 | 
 451 |   ```powershell Windows
 452 |   powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
 453 |   ```
 454 | 
 455 | 
 456 | Make sure to restart your terminal afterwards to ensure that the `uv` command gets picked up.
 457 | 
 458 | Now, let's create and set up our project:
 459 | 
 460 | 
 461 |   ```bash MacOS/Linux
 462 |   # Create a new directory for our project
 463 |   uv init weather
 464 |   cd weather
 465 | 
 466 |   # Create virtual environment and activate it
 467 |   uv venv
 468 |   source .venv/bin/activate
 469 | 
 470 |   # Install dependencies
 471 |   uv add "mcp[cli]" httpx
 472 | 
 473 |   # Create our server file
 474 |   touch weather.py
 475 |   ```
 476 | 
 477 |   ```powershell Windows
 478 |   # Create a new directory for our project
 479 |   uv init weather
 480 |   cd weather
 481 | 
 482 |   # Create virtual environment and activate it
 483 |   uv venv
 484 |   .venv\Scripts\activate
 485 | 
 486 |   # Install dependencies
 487 |   uv add mcp[cli] httpx
 488 | 
 489 |   # Create our server file
 490 |   new-item weather.py
 491 |   ```
 492 | 
 493 | 
 494 | Now let's dive into building your server.
 495 | 
 496 | ##### Building your server
 497 | 
 498 | ###### Importing packages and setting up the instance
 499 | 
 500 | Add these to the top of your `weather.py`:
 501 | 
 502 | ```python
 503 | from typing import Any
 504 | import httpx
 505 | from mcp.server.fastmcp import FastMCP
 506 | 
 507 | # Initialize FastMCP server
 508 | mcp = FastMCP("weather")
 509 | 
 510 | # Constants
 511 | NWS_API_BASE = "https://api.weather.gov"
 512 | USER_AGENT = "weather-app/1.0"
 513 | ```
 514 | 
 515 | The FastMCP class uses Python type hints and docstrings to automatically generate tool definitions, making it easy to create and maintain MCP tools.
 516 | 
 517 | ###### Helper functions
 518 | 
 519 | Next, let's add our helper functions for querying and formatting the data from the National Weather Service API:
 520 | 
 521 | ```python
 522 | async def make_nws_request(url: str) -> dict[str, Any] | None:
 523 | 	"""Make a request to the NWS API with proper error handling."""
 524 | 	headers = {
 525 | 		"User-Agent": USER_AGENT,
 526 | 		"Accept": "application/geo+json"
 527 | 	}
 528 | 	async with httpx.AsyncClient() as client:
 529 | 		try:
 530 | 			response = await client.get(url, headers=headers, timeout=30.0)
 531 | 			response.raise_for_status()
 532 | 			return response.json()
 533 | 		except Exception:
 534 | 			return None
 535 | 
 536 | def format_alert(feature: dict) -> str:
 537 | 	"""Format an alert feature into a readable string."""
 538 | 	props = feature["properties"]
 539 | 	return f"""
 540 | Event: {props.get('event', 'Unknown')}
 541 | Area: {props.get('areaDesc', 'Unknown')}
 542 | Severity: {props.get('severity', 'Unknown')}
 543 | Description: {props.get('description', 'No description available')}
 544 | Instructions: {props.get('instruction', 'No specific instructions provided')}
 545 | """
 546 | ```
 547 | 
 548 | ###### Implementing tool execution
 549 | 
 550 | The tool execution handler is responsible for actually executing the logic of each tool. Let's add it:
 551 | 
 552 | ```python
 553 | @mcp.tool()
 554 | async def get_alerts(state: str) -> str:
 555 | 	"""Get weather alerts for a US state.
 556 | 
 557 | 	Args:
 558 | 		state: Two-letter US state code (e.g. CA, NY)
 559 | 	"""
 560 | 	url = f"{NWS_API_BASE}/alerts/active/area/{state}"
 561 | 	data = await make_nws_request(url)
 562 | 
 563 | 	if not data or "features" not in data:
 564 | 		return "Unable to fetch alerts or no alerts found."
 565 | 
 566 | 	if not data["features"]:
 567 | 		return "No active alerts for this state."
 568 | 
 569 | 	alerts = [format_alert(feature) for feature in data["features"]]
 570 | 	return "\n---\n".join(alerts)
 571 | 
 572 | @mcp.tool()
 573 | async def get_forecast(latitude: float, longitude: float) -> str:
 574 | 	"""Get weather forecast for a location.
 575 | 
 576 | 	Args:
 577 | 		latitude: Latitude of the location
 578 | 		longitude: Longitude of the location
 579 | 	"""
 580 | 	# First get the forecast grid endpoint
 581 | 	points_url = f"{NWS_API_BASE}/points/{latitude},{longitude}"
 582 | 	points_data = await make_nws_request(points_url)
 583 | 
 584 | 	if not points_data:
 585 | 		return "Unable to fetch forecast data for this location."
 586 | 
 587 | 	# Get the forecast URL from the points response
 588 | 	forecast_url = points_data["properties"]["forecast"]
 589 | 	forecast_data = await make_nws_request(forecast_url)
 590 | 
 591 | 	if not forecast_data:
 592 | 		return "Unable to fetch detailed forecast."
 593 | 
 594 | 	# Format the periods into a readable forecast
 595 | 	periods = forecast_data["properties"]["periods"]
 596 | 	forecasts = []
 597 | 	for period in periods[:5]:  # Only show next 5 periods
 598 | 		forecast = f"""
 599 | {period['name']}:
 600 | Temperature: {period['temperature']}°{period['temperatureUnit']}
 601 | Wind: {period['windSpeed']} {period['windDirection']}
 602 | Forecast: {period['detailedForecast']}
 603 | """
 604 | 		forecasts.append(forecast)
 605 | 
 606 | 	return "\n---\n".join(forecasts)
 607 | ```
 608 | 
 609 | ###### Running the server
 610 | 
 611 | Finally, let's initialize and run the server:
 612 | 
 613 | ```python
 614 | if __name__ == "__main__":
 615 | 	# Initialize and run the server
 616 | 	mcp.run(transport='stdio')
 617 | ```
 618 | 
 619 | Your server is complete! Run `uv run weather.py` to confirm that everything's working.
 620 | 
 621 | Let's now test your server from an existing MCP host, Claude for Desktop.
 622 | 
 623 | ##### Testing your server with Claude for Desktop
 624 | 
 625 | <Note>
 626 |   Claude for Desktop is not yet available on Linux. Linux users can proceed to the [Building a client](/quickstart/client) tutorial to build an MCP client that connects to the server we just built.
 627 | </Note>
 628 | 
 629 | First, make sure you have Claude for Desktop installed. [You can install the latest version
 630 | here.](https://claude.ai/download) If you already have Claude for Desktop, **make sure it's updated to the latest version.**
 631 | 
 632 | We'll need to configure Claude for Desktop for whichever MCP servers you want to use. To do this, open your Claude for Desktop App configuration at `~/Library/Application Support/Claude/claude_desktop_config.json` in a text editor. Make sure to create the file if it doesn't exist.
 633 | 
 634 | For example, if you have [VS Code](https://code.visualstudio.com/) installed:
 635 | 
 636 | ```bash MacOS/Linux
 637 | code ~/Library/Application\ Support/Claude/claude_desktop_config.json
 638 | ```
 639 |  
 640 | 
 641 | ```powershell Windows
 642 | code $env:AppData\Claude\claude_desktop_config.json
 643 | ```
 644 | 
 645 | 
 646 | You'll then add your servers in the `mcpServers` key. The MCP UI elements will only show up in Claude for Desktop if at least one server is properly configured.
 647 | 
 648 | In this case, we'll add our single weather server like so:
 649 | MacOS/Linux
 650 | ```json Python
 651 | {
 652 | 	"mcpServers": {
 653 | 		"weather": {
 654 | 			"command": "uv",
 655 | 			"args": [
 656 | 				"--directory",
 657 | 				"/ABSOLUTE/PATH/TO/PARENT/FOLDER/weather",
 658 | 				"run",
 659 | 				"weather.py"
 660 | 			]
 661 | 		}
 662 | 	}
 663 | }
 664 | ```
 665 | 
 666 |   Windows
 667 | ```json Python
 668 | {
 669 | 	"mcpServers": {
 670 | 		"weather": {
 671 | 			"command": "uv",
 672 | 			"args": [
 673 | 				"--directory",
 674 | 				"C:\\ABSOLUTE\\PATH\\TO\\PARENT\\FOLDER\\weather",
 675 | 				"run",
 676 | 				"weather.py"
 677 | 			]
 678 | 		}
 679 | 	}
 680 | }
 681 | ```
 682 | 
 683 | <Warning>
 684 |   You may need to put the full path to the `uv` executable in the `command` field. You can get this by running `which uv` on MacOS/Linux or `where uv` on Windows.
 685 | </Warning>
 686 | 
 687 | <Note>
 688 |   Make sure you pass in the absolute path to your server.
 689 | </Note>
 690 | 
 691 | This tells Claude for Desktop:
 692 | 
 693 | 1. There's an MCP server named "weather"
 694 | 2. To launch it by running `uv --directory /ABSOLUTE/PATH/TO/PARENT/FOLDER/weather run weather.py`
 695 | 
 696 | Save the file, and restart **Claude for Desktop**.
 697 | 
 698 | 
 699 | #### For Node
 700 | Let's get started with building our weather server! [You can find the complete code for what we'll be building here.](https://github.com/modelcontextprotocol/quickstart-resources/tree/main/weather-server-typescript)
 701 | 
 702 | ##### Prerequisite knowledge
 703 | 
 704 | This quickstart assumes you have familiarity with:
 705 | 
 706 | * TypeScript
 707 | * LLMs like Claude
 708 | 
 709 | ##### System requirements
 710 | 
 711 | For TypeScript, make sure you have the latest version of Node installed.
 712 | 
 713 | ##### Set up your environment
 714 | 
 715 | First, let's install Node.js and npm if you haven't already. You can download them from [nodejs.org](https://nodejs.org/).
 716 | Verify your Node.js installation:
 717 | 
 718 | ```bash
 719 | node --version
 720 | npm --version
 721 | ```
 722 | 
 723 | For this tutorial, you'll need Node.js version 16 or higher.
 724 | 
 725 | Now, let's create and set up our project:
 726 | 
 727 |   ```bash MacOS/Linux
 728 |   # Create a new directory for our project
 729 |   mkdir weather
 730 |   cd weather
 731 | 
 732 |   # Initialize a new npm project
 733 |   npm init -y
 734 | 
 735 |   # Install dependencies
 736 |   npm install @modelcontextprotocol/sdk zod
 737 |   npm install -D @types/node typescript
 738 | 
 739 |   # Create our files
 740 |   mkdir src
 741 |   touch src/index.ts
 742 |   ```
 743 | 
 744 |   ```powershell Windows
 745 |   # Create a new directory for our project
 746 |   md weather
 747 |   cd weather
 748 | 
 749 |   # Initialize a new npm project
 750 |   npm init -y
 751 | 
 752 |   # Install dependencies
 753 |   npm install @modelcontextprotocol/sdk zod
 754 |   npm install -D @types/node typescript
 755 | 
 756 |   # Create our files
 757 |   md src
 758 |   new-item src\index.ts
 759 |   ```
 760 | 
 761 | Update your package.json to add type: "module" and a build script:
 762 | 
 763 | ```json package.json
 764 | {
 765 |   "type": "module",
 766 |   "bin": {
 767 | 	"weather": "./build/index.js"
 768 |   },
 769 |   "scripts": {
 770 | 	"build": "tsc && chmod 755 build/index.js"
 771 |   },
 772 |   "files": [
 773 | 	"build"
 774 |   ]
 775 | }
 776 | ```
 777 | 
 778 | Create a `tsconfig.json` in the root of your project:
 779 | 
 780 | ```json tsconfig.json
 781 | {
 782 |   "compilerOptions": {
 783 | 	"target": "ES2022",
 784 | 	"module": "Node16",
 785 | 	"moduleResolution": "Node16",
 786 | 	"outDir": "./build",
 787 | 	"rootDir": "./src",
 788 | 	"strict": true,
 789 | 	"esModuleInterop": true,
 790 | 	"skipLibCheck": true,
 791 | 	"forceConsistentCasingInFileNames": true
 792 |   },
 793 |   "include": ["src/**/*"],
 794 |   "exclude": ["node_modules"]
 795 | }
 796 | ```
 797 | 
 798 | Now let's dive into building your server.
 799 | 
 800 | ##### Building your server
 801 | 
 802 | ###### Importing packages and setting up the instance
 803 | 
 804 | Add these to the top of your `src/index.ts`:
 805 | 
 806 | ```typescript
 807 | import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
 808 | import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
 809 | import { z } from "zod";
 810 | 
 811 | const NWS_API_BASE = "https://api.weather.gov";
 812 | const USER_AGENT = "weather-app/1.0";
 813 | 
 814 | // Create server instance
 815 | const server = new McpServer({
 816 |   name: "weather",
 817 |   version: "1.0.0",
 818 |   capabilities: {
 819 | 	resources: {},
 820 | 	tools: {},
 821 |   },
 822 | });
 823 | ```
 824 | 
 825 | ###### Helper functions
 826 | 
 827 | Next, let's add our helper functions for querying and formatting the data from the National Weather Service API:
 828 | 
 829 | ```typescript
 830 | // Helper function for making NWS API requests
 831 | async function makeNWSRequest<T>(url: string): Promise<T | null> {
 832 |   const headers = {
 833 | 	"User-Agent": USER_AGENT,
 834 | 	Accept: "application/geo+json",
 835 |   };
 836 | 
 837 |   try {
 838 | 	const response = await fetch(url, { headers });
 839 | 	if (!response.ok) {
 840 | 	  throw new Error(`HTTP error! status: ${response.status}`);
 841 | 	}
 842 | 	return (await response.json()) as T;
 843 |   } catch (error) {
 844 | 	console.error("Error making NWS request:", error);
 845 | 	return null;
 846 |   }
 847 | }
 848 | 
 849 | interface AlertFeature {
 850 |   properties: {
 851 | 	event?: string;
 852 | 	areaDesc?: string;
 853 | 	severity?: string;
 854 | 	status?: string;
 855 | 	headline?: string;
 856 |   };
 857 | }
 858 | 
 859 | // Format alert data
 860 | function formatAlert(feature: AlertFeature): string {
 861 |   const props = feature.properties;
 862 |   return [
 863 | 	`Event: ${props.event || "Unknown"}`,
 864 | 	`Area: ${props.areaDesc || "Unknown"}`,
 865 | 	`Severity: ${props.severity || "Unknown"}`,
 866 | 	`Status: ${props.status || "Unknown"}`,
 867 | 	`Headline: ${props.headline || "No headline"}`,
 868 | 	"---",
 869 |   ].join("\n");
 870 | }
 871 | 
 872 | interface ForecastPeriod {
 873 |   name?: string;
 874 |   temperature?: number;
 875 |   temperatureUnit?: string;
 876 |   windSpeed?: string;
 877 |   windDirection?: string;
 878 |   shortForecast?: string;
 879 | }
 880 | 
 881 | interface AlertsResponse {
 882 |   features: AlertFeature[];
 883 | }
 884 | 
 885 | interface PointsResponse {
 886 |   properties: {
 887 | 	forecast?: string;
 888 |   };
 889 | }
 890 | 
 891 | interface ForecastResponse {
 892 |   properties: {
 893 | 	periods: ForecastPeriod[];
 894 |   };
 895 | }
 896 | ```
 897 | 
 898 | ###### Implementing tool execution
 899 | 
 900 | The tool execution handler is responsible for actually executing the logic of each tool. Let's add it:
 901 | 
 902 | ```typescript
 903 | // Register weather tools
 904 | server.tool(
 905 |   "get-alerts",
 906 |   "Get weather alerts for a state",
 907 |   {
 908 | 	state: z.string().length(2).describe("Two-letter state code (e.g. CA, NY)"),
 909 |   },
 910 |   async ({ state }) => {
 911 | 	const stateCode = state.toUpperCase();
 912 | 	const alertsUrl = `${NWS_API_BASE}/alerts?area=${stateCode}`;
 913 | 	const alertsData = await makeNWSRequest<AlertsResponse>(alertsUrl);
 914 | 
 915 | 	if (!alertsData) {
 916 | 	  return {
 917 | 		content: [
 918 | 		  {
 919 | 			type: "text",
 920 | 			text: "Failed to retrieve alerts data",
 921 | 		  },
 922 | 		],
 923 | 	  };
 924 | 	}
 925 | 
 926 | 	const features = alertsData.features || [];
 927 | 	if (features.length === 0) {
 928 | 	  return {
 929 | 		content: [
 930 | 		  {
 931 | 			type: "text",
 932 | 			text: `No active alerts for ${stateCode}`,
 933 | 		  },
 934 | 		],
 935 | 	  };
 936 | 	}
 937 | 
 938 | 	const formattedAlerts = features.map(formatAlert);
 939 | 	const alertsText = `Active alerts for ${stateCode}:\n\n${formattedAlerts.join("\n")}`;
 940 | 
 941 | 	return {
 942 | 	  content: [
 943 | 		{
 944 | 		  type: "text",
 945 | 		  text: alertsText,
 946 | 		},
 947 | 	  ],
 948 | 	};
 949 |   },
 950 | );
 951 | 
 952 | server.tool(
 953 |   "get-forecast",
 954 |   "Get weather forecast for a location",
 955 |   {
 956 | 	latitude: z.number().min(-90).max(90).describe("Latitude of the location"),
 957 | 	longitude: z.number().min(-180).max(180).describe("Longitude of the location"),
 958 |   },
 959 |   async ({ latitude, longitude }) => {
 960 | 	// Get grid point data
 961 | 	const pointsUrl = `${NWS_API_BASE}/points/${latitude.toFixed(4)},${longitude.toFixed(4)}`;
 962 | 	const pointsData = await makeNWSRequest<PointsResponse>(pointsUrl);
 963 | 
 964 | 	if (!pointsData) {
 965 | 	  return {
 966 | 		content: [
 967 | 		  {
 968 | 			type: "text",
 969 | 			text: `Failed to retrieve grid point data for coordinates: ${latitude}, ${longitude}. This location may not be supported by the NWS API (only US locations are supported).`,
 970 | 		  },
 971 | 		],
 972 | 	  };
 973 | 	}
 974 | 
 975 | 	const forecastUrl = pointsData.properties?.forecast;
 976 | 	if (!forecastUrl) {
 977 | 	  return {
 978 | 		content: [
 979 | 		  {
 980 | 			type: "text",
 981 | 			text: "Failed to get forecast URL from grid point data",
 982 | 		  },
 983 | 		],
 984 | 	  };
 985 | 	}
 986 | 
 987 | 	// Get forecast data
 988 | 	const forecastData = await makeNWSRequest<ForecastResponse>(forecastUrl);
 989 | 	if (!forecastData) {
 990 | 	  return {
 991 | 		content: [
 992 | 		  {
 993 | 			type: "text",
 994 | 			text: "Failed to retrieve forecast data",
 995 | 		  },
 996 | 		],
 997 | 	  };
 998 | 	}
 999 | 
1000 | 	const periods = forecastData.properties?.periods || [];
1001 | 	if (periods.length === 0) {
1002 | 	  return {
1003 | 		content: [
1004 | 		  {
1005 | 			type: "text",
1006 | 			text: "No forecast periods available",
1007 | 		  },
1008 | 		],
1009 | 	  };
1010 | 	}
1011 | 
1012 | 	// Format forecast periods
1013 | 	const formattedForecast = periods.map((period: ForecastPeriod) =>
1014 | 	  [
1015 | 		`${period.name || "Unknown"}:`,
1016 | 		`Temperature: ${period.temperature || "Unknown"}°${period.temperatureUnit || "F"}`,
1017 | 		`Wind: ${period.windSpeed || "Unknown"} ${period.windDirection || ""}`,
1018 | 		`${period.shortForecast || "No forecast available"}`,
1019 | 		"---",
1020 | 	  ].join("\n"),
1021 | 	);
1022 | 
1023 | 	const forecastText = `Forecast for ${latitude}, ${longitude}:\n\n${formattedForecast.join("\n")}`;
1024 | 
1025 | 	return {
1026 | 	  content: [
1027 | 		{
1028 | 		  type: "text",
1029 | 		  text: forecastText,
1030 | 		},
1031 | 	  ],
1032 | 	};
1033 |   },
1034 | );
1035 | ```
1036 | 
1037 | ###### Running the server
1038 | 
1039 | Finally, implement the main function to run the server:
1040 | 
1041 | ```typescript
1042 | async function main() {
1043 |   const transport = new StdioServerTransport();
1044 |   await server.connect(transport);
1045 |   console.error("Weather MCP Server running on stdio");
1046 | }
1047 | 
1048 | main().catch((error) => {
1049 |   console.error("Fatal error in main():", error);
1050 |   process.exit(1);
1051 | });
1052 | ```
1053 | 
1054 | Make sure to run `npm run build` to compile your server. The compiled `build/index.js` is what an MCP host will launch, so this step is required for the server to connect.
1055 | 
1056 | Let's now test your server from an existing MCP host, Claude for Desktop.
1057 | 
1058 | ##### Testing your server with Claude for Desktop
1059 | 
1060 | <Note>
1061 |   Claude for Desktop is not yet available on Linux. Linux users can proceed to the [Building a client](/quickstart/client) tutorial to build an MCP client that connects to the server we just built.
1062 | </Note>
1063 | 
1064 | First, make sure you have Claude for Desktop installed. [You can install the latest version
1065 | here.](https://claude.ai/download) If you already have Claude for Desktop, **make sure it's updated to the latest version.**
1066 | 
1067 | We'll need to configure Claude for Desktop for whichever MCP servers you want to use. To do this, open your Claude for Desktop App configuration at `~/Library/Application Support/Claude/claude_desktop_config.json` in a text editor. Make sure to create the file if it doesn't exist.
1068 | 
1069 | For example, if you have [VS Code](https://code.visualstudio.com/) installed:
1070 | 
1071 | MacOS/Linux
1072 | ```bash
1073 | code ~/Library/Application\ Support/Claude/claude_desktop_config.json
1074 | ```
1075 | 
1076 | Windows
1077 | ```powershell
1078 | code $env:AppData\Claude\claude_desktop_config.json
1079 | ```
1080 | 
1081 | 
1082 | You'll then add your servers in the `mcpServers` key. The MCP UI elements will only show up in Claude for Desktop if at least one server is properly configured.
1083 | 
1084 | In this case, we'll add our single weather server like so:
1085 | 
1086 | MacOS/Linux
1087 |   ```json Node
1088 |   {
1089 | 	  "mcpServers": {
1090 | 		  "weather": {
1091 | 			  "command": "node",
1092 | 			  "args": [
1093 | 				  "/ABSOLUTE/PATH/TO/PARENT/FOLDER/weather/build/index.js"
1094 | 			  ]
1095 | 		  }
1096 | 	  }
1097 |   }
1098 |   ```
1099 | 
1100 | Windows
1101 |   ```json Node
1102 |   {
1103 | 	  "mcpServers": {
1104 | 		  "weather": {
1105 | 			  "command": "node",
1106 | 			  "args": [
1107 | 				  "C:\\PATH\\TO\\PARENT\\FOLDER\\weather\\build\\index.js"
1108 | 			  ]
1109 | 		  }
1110 | 	  }
1111 |   }
1112 |   ```
1113 | 
1114 | This tells Claude for Desktop:
1115 | 
1116 | 1. There's an MCP server named "weather"
1117 | 2. Launch it by running `node /ABSOLUTE/PATH/TO/PARENT/FOLDER/weather/build/index.js`
1118 | 
1119 | Save the file, and restart **Claude for Desktop**.
1120 | 
1121 | ### Test with commands
1122 | 
1123 | Let's make sure Claude for Desktop is picking up the two tools we've exposed in our `weather` server. You can do this by looking for the hammer <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/claude-desktop-mcp-hammer-icon.svg" style={{display: 'inline', margin: 0, height: '1.3em'}} /> icon:
1124 | 
1125 | <Frame>
1126 |   <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/visual-indicator-mcp-tools.png" />
1127 | </Frame>
1128 | 
1129 | After clicking on the hammer icon, you should see two tools listed:
1130 | 
1131 | <Frame>
1132 |   <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/available-mcp-tools.png" />
1133 | </Frame>
1134 | 
1135 | If your server isn't being picked up by Claude for Desktop, proceed to the [Troubleshooting](#troubleshooting) section for debugging tips.
1136 | 
1137 | If the hammer icon has shown up, you can now test your server by running the following commands in Claude for Desktop:
1138 | 
1139 | * What's the weather in Sacramento?
1140 | * What are the active weather alerts in Texas?
1141 | 
1142 | <Frame>
1143 |   <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/current-weather.png" />
1144 | </Frame>
1145 | 
1146 | <Frame>
1147 |   <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/weather-alerts.png" />
1148 | </Frame>
1149 | 
1150 | <Note>
1151 |   Since this uses the US National Weather Service, the queries will only work for US locations.
1152 | </Note>
1153 | 
1154 | ## What's happening under the hood
1155 | 
1156 | When you ask a question:
1157 | 
1158 | 1. The client sends your question to Claude
1159 | 2. Claude analyzes the available tools and decides which one(s) to use
1160 | 3. The client executes the chosen tool(s) through the MCP server (see the client-side sketch after this list)
1161 | 4. The results are sent back to Claude
1162 | 5. Claude formulates a natural language response
1163 | 6. The response is displayed to you!
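
Steps 2 through 4 are ordinary MCP calls. The hedged sketch below makes them visible using the Python SDK's stdio client against the server built above (the relative path `build/index.js`, and the use of Python rather than TypeScript, are illustrative choices, not how Claude for Desktop is implemented):

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch the compiled weather server as a subprocess and talk to it over stdio.
params = StdioServerParameters(command="node", args=["build/index.js"])

async def ask_for_alerts() -> None:
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Step 2: list the available tools so the model can choose one
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Steps 3-4: execute the chosen tool and collect the result for the model
            result = await session.call_tool("get-alerts", {"state": "CA"})
            print(result.content)

asyncio.run(ask_for_alerts())
```

Claude for Desktop performs the equivalent of these calls internally; the sketch only makes the protocol steps visible.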
1164 | 
1165 | ## Troubleshooting
1166 | 
1167 | <AccordionGroup>
1168 |   <Accordion title="Claude for Desktop Integration Issues">
1169 |     **Getting logs from Claude for Desktop**
1170 | 
1171 |     Claude.app logging related to MCP is written to log files in `~/Library/Logs/Claude`:
1172 | 
1173 |     * `mcp.log` will contain general logging about MCP connections and connection failures.
1174 |     * Files named `mcp-server-SERVERNAME.log` will contain error (stderr) logging from the named server.
1175 | 
1176 |     You can run the following command to list recent logs and follow along with any new ones:
1177 | 
1178 |     ```bash
1179 |     # Check Claude's logs for errors
1180 |     tail -n 20 -f ~/Library/Logs/Claude/mcp*.log
1181 |     ```
1182 | 
1183 |     **Server not showing up in Claude**
1184 | 
1185 |     1. Check your `claude_desktop_config.json` file syntax
1186 |     2. Make sure the path to your project is absolute and not relative
1187 |     3. Restart Claude for Desktop completely
1188 | 
1189 |     **Tool calls failing silently**
1190 | 
1191 |     If Claude attempts to use the tools but they fail:
1192 | 
1193 |     1. Check Claude's logs for errors
1194 |     2. Verify your server builds and runs without errors
1195 |     3. Try restarting Claude for Desktop
1196 | 
1197 |     **None of this is working. What do I do?**
1198 | 
1199 |     Please refer to our [debugging guide](/docs/tools/debugging) for better debugging tools and more detailed guidance.
1200 |   </Accordion>
1201 | 
1202 |   <Accordion title="Weather API Issues">
1203 |     **Error: Failed to retrieve grid point data**
1204 | 
1205 |     This usually means either:
1206 | 
1207 |     1. The coordinates are outside the US
1208 |     2. The NWS API is having issues
1209 |     3. You're being rate limited
1210 | 
1211 |     Fix:
1212 | 
1213 |     * Verify you're using US coordinates
1214 |     * Add a small delay between requests
1215 |     * Check the NWS API status page
1216 | 
1217 |     **Error: No active alerts for \[STATE]**
1218 | 
1219 |     This isn't an error - it just means there are no current weather alerts for that state. Try a different state or check during severe weather.
1220 |   </Accordion>
1221 | </AccordionGroup>
1222 | 
1223 | <Note>
1224 |   For more advanced troubleshooting, check out our guide on [Debugging MCP](/docs/tools/debugging)
1225 | </Note>
1226 | 
1227 | 
1228 | 
1229 | # Resources
1230 | 
1231 | > Expose data and content from your servers to LLMs
1232 | 
1233 | Resources are a core primitive in the Model Context Protocol (MCP) that allow servers to expose data and content that can be read by clients and used as context for LLM interactions.
1234 | 
1235 | <Note>
1236 |   Resources are designed to be **application-controlled**, meaning that the client application can decide how and when they should be used.
1237 |   Different MCP clients may handle resources differently. For example:
1238 | 
1239 |   * Claude Desktop currently requires users to explicitly select resources before they can be used
1240 |   * Other clients might automatically select resources based on heuristics
1241 |   * Some implementations may even allow the AI model itself to determine which resources to use
1242 | 
1243 |   Server authors should be prepared to handle any of these interaction patterns when implementing resource support. In order to expose data to models automatically, server authors should use a **model-controlled** primitive such as [Tools](./tools).
1244 | </Note>
1245 | 
1246 | ## Overview
1247 | 
1248 | Resources represent any kind of data that an MCP server wants to make available to clients. This can include:
1249 | 
1250 | * File contents
1251 | * Database records
1252 | * API responses
1253 | * Live system data
1254 | * Screenshots and images
1255 | * Log files
1256 | * And more
1257 | 
1258 | Each resource is identified by a unique URI and can contain either text or binary data.
1259 | 
1260 | ## Resource URIs
1261 | 
1262 | Resources are identified using URIs that follow this format:
1263 | 
1264 | ```
1265 | [protocol]://[host]/[path]
1266 | ```
1267 | 
1268 | For example:
1269 | 
1270 | * `file:///home/user/documents/report.pdf`
1271 | * `postgres://database/customers/schema`
1272 | * `screen://localhost/display1`
1273 | 
1274 | The protocol and path structure is defined by the MCP server implementation. Servers can define their own custom URI schemes.
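
As a hedged illustration of a custom scheme, the Python SDK's FastMCP server lets you register both a concrete resource and a URI template in a few lines (the `config://` and `users://` schemes and the function names below are made up for this example):

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("example-server")

# A concrete resource under a custom scheme
@mcp.resource("config://app")
def get_app_config() -> str:
    """Static application configuration."""
    return "log_level=INFO\nfeature_flags=beta"

# A templated resource: {user_id} is supplied by the client when it reads the URI
@mcp.resource("users://{user_id}/profile")
def get_user_profile(user_id: str) -> str:
    """Profile data for a single user."""
    return f"Profile for user {user_id}"
```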
1275 | 
1276 | ## Resource types
1277 | 
1278 | Resources can contain two types of content:
1279 | 
1280 | ### Text resources
1281 | 
1282 | Text resources contain UTF-8 encoded text data. These are suitable for:
1283 | 
1284 | * Source code
1285 | * Configuration files
1286 | * Log files
1287 | * JSON/XML data
1288 | * Plain text
1289 | 
1290 | ### Binary resources
1291 | 
1292 | Binary resources contain raw binary data encoded in base64. These are suitable for:
1293 | 
1294 | * Images
1295 | * PDFs
1296 | * Audio files
1297 | * Video files
1298 | * Other non-text formats
1299 | 
1300 | ## Resource discovery
1301 | 
1302 | Clients can discover available resources through two main methods:
1303 | 
1304 | ### Direct resources
1305 | 
1306 | Servers expose a list of concrete resources via the `resources/list` endpoint. Each resource includes:
1307 | 
1308 | ```typescript
1309 | {
1310 |   uri: string;           // Unique identifier for the resource
1311 |   name: string;          // Human-readable name
1312 |   description?: string;  // Optional description
1313 |   mimeType?: string;     // Optional MIME type
1314 | }
1315 | ```
1316 | 
1317 | ### Resource templates
1318 | 
1319 | For dynamic resources, servers can expose [URI templates](https://datatracker.ietf.org/doc/html/rfc6570) that clients can use to construct valid resource URIs:
1320 | 
1321 | ```typescript
1322 | {
1323 |   uriTemplate: string;   // URI template following RFC 6570
1324 |   name: string;          // Human-readable name for this type
1325 |   description?: string;  // Optional description
1326 |   mimeType?: string;     // Optional MIME type for all matching resources
1327 | }
1328 | ```
1329 | 
1330 | ## Reading resources
1331 | 
1332 | To read a resource, clients make a `resources/read` request with the resource URI.
1333 | 
1334 | The server responds with a list of resource contents:
1335 | 
1336 | ```typescript
1337 | {
1338 |   contents: [
1339 |     {
1340 |       uri: string;        // The URI of the resource
1341 |       mimeType?: string;  // Optional MIME type
1342 | 
1343 |       // One of:
1344 |       text?: string;      // For text resources
1345 |       blob?: string;      // For binary resources (base64 encoded)
1346 |     }
1347 |   ]
1348 | }
1349 | ```
1350 | 
1351 | <Tip>
1352 |   Servers may return multiple resources in response to one `resources/read` request. This could be used, for example, to return a list of files inside a directory when the directory is read.
1353 | </Tip>
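
On the client side, discovery and reading map onto two session calls. A minimal sketch with the Python SDK, assuming an already-connected `ClientSession` (see the stdio client examples later in this document) and the `file:///logs/app.log` resource from the example implementation below:

```python
from mcp import ClientSession
from pydantic import AnyUrl

async def dump_resources(session: ClientSession) -> None:
    # resources/list: discover what the server exposes
    listing = await session.list_resources()
    for resource in listing.resources:
        print(resource.uri, resource.name, resource.mimeType)

    # resources/read: fetch the contents of a single resource
    result = await session.read_resource(AnyUrl("file:///logs/app.log"))
    for content in result.contents:
        if hasattr(content, "text"):
            print(content.text)        # text resource
        else:
            print(len(content.blob))   # binary resource (base64-encoded string)
```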
1354 | 
1355 | ## Resource updates
1356 | 
1357 | MCP supports real-time updates for resources through two mechanisms:
1358 | 
1359 | ### List changes
1360 | 
1361 | Servers can notify clients when their list of available resources changes via the `notifications/resources/list_changed` notification.
1362 | 
1363 | ### Content changes
1364 | 
1365 | Clients can subscribe to updates for specific resources (see the client-side sketch after this list):
1366 | 
1367 | 1. Client sends `resources/subscribe` with resource URI
1368 | 2. Server sends `notifications/resources/updated` when the resource changes
1369 | 3. Client can fetch latest content with `resources/read`
1370 | 4. Client can unsubscribe with `resources/unsubscribe`
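
A hedged client-side sketch of this flow with the Python SDK, again assuming a connected `ClientSession` (how the `notifications/resources/updated` message reaches your code depends on the client's message handling, so only the subscribe, read, and unsubscribe calls are shown):

```python
from mcp import ClientSession
from pydantic import AnyUrl

LOG_URI = AnyUrl("file:///logs/app.log")

async def watch_log(session: ClientSession) -> None:
    # 1. Ask the server to send update notifications for this URI
    await session.subscribe_resource(LOG_URI)

    # 3. When an update notification arrives, fetch the latest contents
    latest = await session.read_resource(LOG_URI)
    print(latest.contents)

    # 4. Stop watching when done
    await session.unsubscribe_resource(LOG_URI)
```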
1371 | 
1372 | ## Example implementation
1373 | 
1374 | Here's a simple example of implementing resource support in an MCP server:
1375 | TypeScript
1376 | ```typescript
1377 | const server = new Server({
1378 |   name: "example-server",
1379 |   version: "1.0.0"
1380 | }, {
1381 |   capabilities: {
1382 | 	resources: {}
1383 |   }
1384 | });
1385 | 
1386 | // List available resources
1387 | server.setRequestHandler(ListResourcesRequestSchema, async () => {
1388 |   return {
1389 | 	resources: [
1390 | 	  {
1391 | 		uri: "file:///logs/app.log",
1392 | 		name: "Application Logs",
1393 | 		mimeType: "text/plain"
1394 | 	  }
1395 | 	]
1396 |   };
1397 | });
1398 | 
1399 | // Read resource contents
1400 | server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
1401 |   const uri = request.params.uri;
1402 | 
1403 |   if (uri === "file:///logs/app.log") {
1404 | 	const logContents = await readLogFile();
1405 | 	return {
1406 | 	  contents: [
1407 | 		{
1408 | 		  uri,
1409 | 		  mimeType: "text/plain",
1410 | 		  text: logContents
1411 | 		}
1412 | 	  ]
1413 | 	};
1414 |   }
1415 | 
1416 |   throw new Error("Resource not found");
1417 | });
1418 | ```
1419 | 
1420 | Python
1421 | ```python
1422 | app = Server("example-server")
1423 | 
1424 | @app.list_resources()
1425 | async def list_resources() -> list[types.Resource]:
1426 | 	return [
1427 | 		types.Resource(
1428 | 			uri="file:///logs/app.log",
1429 | 			name="Application Logs",
1430 | 			mimeType="text/plain"
1431 | 		)
1432 | 	]
1433 | 
1434 | @app.read_resource()
1435 | async def read_resource(uri: AnyUrl) -> str:
1436 | 	if str(uri) == "file:///logs/app.log":
1437 | 		log_contents = await read_log_file()
1438 | 		return log_contents
1439 | 
1440 | 	raise ValueError("Resource not found")
1441 | 
1442 | # Start server
1443 | async with stdio_server() as streams:
1444 | 	await app.run(
1445 | 		streams[0],
1446 | 		streams[1],
1447 | 		app.create_initialization_options()
1448 | 	)
1449 | ```
1450 | 
1451 | 
1452 | ## Best practices
1453 | 
1454 | When implementing resource support:
1455 | 
1456 | 1. Use clear, descriptive resource names and URIs
1457 | 2. Include helpful descriptions to guide LLM understanding
1458 | 3. Set appropriate MIME types when known
1459 | 4. Implement resource templates for dynamic content
1460 | 5. Use subscriptions for frequently changing resources
1461 | 6. Handle errors gracefully with clear error messages
1462 | 7. Consider pagination for large resource lists
1463 | 8. Cache resource contents when appropriate
1464 | 9. Validate URIs before processing
1465 | 10. Document your custom URI schemes
1466 | 
1467 | ## Security considerations
1468 | 
1469 | When exposing resources:
1470 | 
1471 | * Validate all resource URIs
1472 | * Implement appropriate access controls
1473 | * Sanitize file paths to prevent directory traversal (see the sketch after this list)
1474 | * Be cautious with binary data handling
1475 | * Consider rate limiting for resource reads
1476 | * Audit resource access
1477 | * Encrypt sensitive data in transit
1478 | * Validate MIME types
1479 | * Implement timeouts for long-running reads
1480 | * Handle resource cleanup appropriately
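
For the file-path point in particular, here is one small, hedged way to reject directory traversal before serving a `file://` resource; it uses only the standard library, and `ROOT` and `resolve_file_uri` are illustrative names rather than SDK APIs:

```python
from pathlib import Path
from urllib.parse import unquote, urlparse

ROOT = Path("/srv/mcp-data").resolve()  # only files under this directory may be served

def resolve_file_uri(uri: str) -> Path:
    """Map a file:// URI to a path, refusing anything outside ROOT."""
    parsed = urlparse(uri)
    if parsed.scheme != "file":
        raise ValueError(f"Unsupported scheme: {parsed.scheme}")

    # Resolve symlinks and ".." segments, then confirm we are still inside ROOT
    candidate = Path(unquote(parsed.path)).resolve()
    if not candidate.is_relative_to(ROOT):
        raise ValueError("Path escapes the allowed resource root")
    return candidate
```

A `resources/read` handler can call this before touching the filesystem and return a clear error for anything outside the allowed root.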
1481 | 
1482 | # Prompts
1483 | 
1484 | > Create reusable prompt templates and workflows
1485 | 
1486 | Prompts enable servers to define reusable prompt templates and workflows that clients can easily surface to users and LLMs. They provide a powerful way to standardize and share common LLM interactions.
1487 | 
1488 | <Note>
1489 |   Prompts are designed to be **user-controlled**, meaning they are exposed from servers to clients with the intention of the user being able to explicitly select them for use.
1490 | </Note>
1491 | 
1492 | ## Overview
1493 | 
1494 | Prompts in MCP are predefined templates that can:
1495 | 
1496 | * Accept dynamic arguments
1497 | * Include context from resources
1498 | * Chain multiple interactions
1499 | * Guide specific workflows
1500 | * Surface as UI elements (like slash commands)
1501 | 
1502 | ## Prompt structure
1503 | 
1504 | Each prompt is defined with:
1505 | 
1506 | ```typescript
1507 | {
1508 |   name: string;              // Unique identifier for the prompt
1509 |   description?: string;      // Human-readable description
1510 |   arguments?: [              // Optional list of arguments
1511 |     {
1512 |       name: string;          // Argument identifier
1513 |       description?: string;  // Argument description
1514 |       required?: boolean;    // Whether argument is required
1515 |     }
1516 |   ]
1517 | }
1518 | ```
1519 | 
1520 | ## Discovering prompts
1521 | 
1522 | Clients can discover available prompts through the `prompts/list` endpoint:
1523 | 
1524 | ```typescript
1525 | // Request
1526 | {
1527 |   method: "prompts/list"
1528 | }
1529 | 
1530 | // Response
1531 | {
1532 |   prompts: [
1533 |     {
1534 |       name: "analyze-code",
1535 |       description: "Analyze code for potential improvements",
1536 |       arguments: [
1537 |         {
1538 |           name: "language",
1539 |           description: "Programming language",
1540 |           required: true
1541 |         }
1542 |       ]
1543 |     }
1544 |   ]
1545 | }
1546 | ```
1547 | 
1548 | ## Using prompts
1549 | 
1550 | To use a prompt, clients make a `prompts/get` request:
1551 | 
1552 | ````typescript
1553 | // Request
1554 | {
1555 |   method: "prompts/get",
1556 |   params: {
1557 |     name: "analyze-code",
1558 |     arguments: {
1559 |       language: "python"
1560 |     }
1561 |   }
1562 | }
1563 | 
1564 | // Response
1565 | {
1566 |   description: "Analyze Python code for potential improvements",
1567 |   messages: [
1568 |     {
1569 |       role: "user",
1570 |       content: {
1571 |         type: "text",
1572 |         text: "Please analyze the following Python code for potential improvements:\n\n```python\ndef calculate_sum(numbers):\n    total = 0\n    for num in numbers:\n        total = total + num\n    return total\n\nresult = calculate_sum([1, 2, 3, 4, 5])\nprint(result)\n```"
1573 |       }
1574 |     }
1575 |   ]
1576 | }
1577 | ````
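
Both calls are available on the Python SDK's client session. A minimal sketch, assuming a connected `ClientSession` and a server that exposes the `analyze-code` prompt shown above:

```python
from mcp import ClientSession

async def run_analyze_code(session: ClientSession) -> None:
    # prompts/list: discover which prompts the server offers
    listing = await session.list_prompts()
    print([prompt.name for prompt in listing.prompts])

    # prompts/get: resolve a prompt template with concrete arguments
    result = await session.get_prompt("analyze-code", arguments={"language": "python"})
    for message in result.messages:
        print(message.role, message.content)
```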
1578 | 
1579 | ## Dynamic prompts
1580 | 
1581 | Prompts can be dynamic and include:
1582 | 
1583 | ### Embedded resource context
1584 | 
1585 | ```json
1586 | {
1587 |   "name": "analyze-project",
1588 |   "description": "Analyze project logs and code",
1589 |   "arguments": [
1590 |     {
1591 |       "name": "timeframe",
1592 |       "description": "Time period to analyze logs",
1593 |       "required": true
1594 |     },
1595 |     {
1596 |       "name": "fileUri",
1597 |       "description": "URI of code file to review",
1598 |       "required": true
1599 |     }
1600 |   ]
1601 | }
1602 | ```
1603 | 
1604 | When handling the `prompts/get` request:
1605 | 
1606 | ```json
1607 | {
1608 |   "messages": [
1609 |     {
1610 |       "role": "user",
1611 |       "content": {
1612 |         "type": "text",
1613 |         "text": "Analyze these system logs and the code file for any issues:"
1614 |       }
1615 |     },
1616 |     {
1617 |       "role": "user",
1618 |       "content": {
1619 |         "type": "resource",
1620 |         "resource": {
1621 |           "uri": "logs://recent?timeframe=1h",
1622 |           "text": "[2024-03-14 15:32:11] ERROR: Connection timeout in network.py:127\n[2024-03-14 15:32:15] WARN: Retrying connection (attempt 2/3)\n[2024-03-14 15:32:20] ERROR: Max retries exceeded",
1623 |           "mimeType": "text/plain"
1624 |         }
1625 |       }
1626 |     },
1627 |     {
1628 |       "role": "user",
1629 |       "content": {
1630 |         "type": "resource",
1631 |         "resource": {
1632 |           "uri": "file:///path/to/code.py",
1633 |           "text": "def connect_to_service(timeout=30):\n    retries = 3\n    for attempt in range(retries):\n        try:\n            return establish_connection(timeout)\n        except TimeoutError:\n            if attempt == retries - 1:\n                raise\n            time.sleep(5)\n\ndef establish_connection(timeout):\n    # Connection implementation\n    pass",
1634 |           "mimeType": "text/x-python"
1635 |         }
1636 |       }
1637 |     }
1638 |   ]
1639 | }
1640 | ```
1641 | 
1642 | ### Multi-step workflows
1643 | 
1644 | ```typescript
1645 | const debugWorkflow = {
1646 |   name: "debug-error",
1647 |   async getMessages(error: string) {
1648 |     return [
1649 |       {
1650 |         role: "user",
1651 |         content: {
1652 |           type: "text",
1653 |           text: `Here's an error I'm seeing: ${error}`
1654 |         }
1655 |       },
1656 |       {
1657 |         role: "assistant",
1658 |         content: {
1659 |           type: "text",
1660 |           text: "I'll help analyze this error. What have you tried so far?"
1661 |         }
1662 |       },
1663 |       {
1664 |         role: "user",
1665 |         content: {
1666 |           type: "text",
1667 |           text: "I've tried restarting the service, but the error persists."
1668 |         }
1669 |       }
1670 |     ];
1671 |   }
1672 | };
1673 | ```
1674 | 
1675 | ## Example implementation
1676 | 
1677 | Here's a complete example of implementing prompts in an MCP server:
1678 | 
1679 | TypeScript
1680 | ```typescript
1681 | import { Server } from "@modelcontextprotocol/sdk/server";
1682 | import {
1683 |   ListPromptsRequestSchema,
1684 |   GetPromptRequestSchema
1685 | } from "@modelcontextprotocol/sdk/types";
1686 | 
1687 | const PROMPTS = {
1688 |   "git-commit": {
1689 | 	name: "git-commit",
1690 | 	description: "Generate a Git commit message",
1691 | 	arguments: [
1692 | 	  {
1693 | 		name: "changes",
1694 | 		description: "Git diff or description of changes",
1695 | 		required: true
1696 | 	  }
1697 | 	]
1698 |   },
1699 |   "explain-code": {
1700 | 	name: "explain-code",
1701 | 	description: "Explain how code works",
1702 | 	arguments: [
1703 | 	  {
1704 | 		name: "code",
1705 | 		description: "Code to explain",
1706 | 		required: true
1707 | 	  },
1708 | 	  {
1709 | 		name: "language",
1710 | 		description: "Programming language",
1711 | 		required: false
1712 | 	  }
1713 | 	]
1714 |   }
1715 | };
1716 | 
1717 | const server = new Server({
1718 |   name: "example-prompts-server",
1719 |   version: "1.0.0"
1720 | }, {
1721 |   capabilities: {
1722 | 	prompts: {}
1723 |   }
1724 | });
1725 | 
1726 | // List available prompts
1727 | server.setRequestHandler(ListPromptsRequestSchema, async () => {
1728 |   return {
1729 | 	prompts: Object.values(PROMPTS)
1730 |   };
1731 | });
1732 | 
1733 | // Get specific prompt
1734 | server.setRequestHandler(GetPromptRequestSchema, async (request) => {
1735 |   const prompt = PROMPTS[request.params.name];
1736 |   if (!prompt) {
1737 | 	throw new Error(`Prompt not found: ${request.params.name}`);
1738 |   }
1739 | 
1740 |   if (request.params.name === "git-commit") {
1741 | 	return {
1742 | 	  messages: [
1743 | 		{
1744 | 		  role: "user",
1745 | 		  content: {
1746 | 			type: "text",
1747 | 			text: `Generate a concise but descriptive commit message for these changes:\n\n${request.params.arguments?.changes}`
1748 | 		  }
1749 | 		}
1750 | 	  ]
1751 | 	};
1752 |   }
1753 | 
1754 |   if (request.params.name === "explain-code") {
1755 | 	const language = request.params.arguments?.language || "Unknown";
1756 | 	return {
1757 | 	  messages: [
1758 | 		{
1759 | 		  role: "user",
1760 | 		  content: {
1761 | 			type: "text",
1762 | 			text: `Explain how this ${language} code works:\n\n${request.params.arguments?.code}`
1763 | 		  }
1764 | 		}
1765 | 	  ]
1766 | 	};
1767 |   }
1768 | 
1769 |   throw new Error("Prompt implementation not found");
1770 | });
1771 | ```
1772 | 
1773 | Python
1774 | ```python
1775 | from mcp.server import Server
1776 | import mcp.types as types
1777 | 
1778 | # Define available prompts
1779 | PROMPTS = {
1780 | 	"git-commit": types.Prompt(
1781 | 		name="git-commit",
1782 | 		description="Generate a Git commit message",
1783 | 		arguments=[
1784 | 			types.PromptArgument(
1785 | 				name="changes",
1786 | 				description="Git diff or description of changes",
1787 | 				required=True
1788 | 			)
1789 | 		],
1790 | 	),
1791 | 	"explain-code": types.Prompt(
1792 | 		name="explain-code",
1793 | 		description="Explain how code works",
1794 | 		arguments=[
1795 | 			types.PromptArgument(
1796 | 				name="code",
1797 | 				description="Code to explain",
1798 | 				required=True
1799 | 			),
1800 | 			types.PromptArgument(
1801 | 				name="language",
1802 | 				description="Programming language",
1803 | 				required=False
1804 | 			)
1805 | 		],
1806 | 	)
1807 | }
1808 | 
1809 | # Initialize server
1810 | app = Server("example-prompts-server")
1811 | 
1812 | @app.list_prompts()
1813 | async def list_prompts() -> list[types.Prompt]:
1814 | 	return list(PROMPTS.values())
1815 | 
1816 | @app.get_prompt()
1817 | async def get_prompt(
1818 | 	name: str, arguments: dict[str, str] | None = None
1819 | ) -> types.GetPromptResult:
1820 | 	if name not in PROMPTS:
1821 | 		raise ValueError(f"Prompt not found: {name}")
1822 | 
1823 | 	if name == "git-commit":
1824 | 		changes = arguments.get("changes") if arguments else ""
1825 | 		return types.GetPromptResult(
1826 | 			messages=[
1827 | 				types.PromptMessage(
1828 | 					role="user",
1829 | 					content=types.TextContent(
1830 | 						type="text",
1831 | 						text=f"Generate a concise but descriptive commit message "
1832 | 						f"for these changes:\n\n{changes}"
1833 | 					)
1834 | 				)
1835 | 			]
1836 | 		)
1837 | 
1838 | 	if name == "explain-code":
1839 | 		code = arguments.get("code") if arguments else ""
1840 | 		language = arguments.get("language", "Unknown") if arguments else "Unknown"
1841 | 		return types.GetPromptResult(
1842 | 			messages=[
1843 | 				types.PromptMessage(
1844 | 					role="user",
1845 | 					content=types.TextContent(
1846 | 						type="text",
1847 | 						text=f"Explain how this {language} code works:\n\n{code}"
1848 | 					)
1849 | 				)
1850 | 			]
1851 | 		)
1852 | 
1853 | 	raise ValueError("Prompt implementation not found")
1854 | ```
1855 | 
1856 | 
1857 | 
1858 | ## Best practices
1859 | 
1860 | When implementing prompts:
1861 | 
1862 | 1. Use clear, descriptive prompt names
1863 | 2. Provide detailed descriptions for prompts and arguments
1864 | 3. Validate all required arguments (see the sketch after this list)
1865 | 4. Handle missing arguments gracefully
1866 | 5. Consider versioning for prompt templates
1867 | 6. Cache dynamic content when appropriate
1868 | 7. Implement error handling
1869 | 8. Document expected argument formats
1870 | 9. Consider prompt composability
1871 | 10. Test prompts with various inputs
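
For points 3 and 4, one hedged approach is to check the incoming arguments against the prompt definition before building any messages. The helper below reuses the shape of the `PROMPTS` dictionary from the Python example above; `validate_arguments` itself is an illustrative function, not part of the SDK:

```python
import mcp.types as types

def validate_arguments(
    prompt: types.Prompt, arguments: dict[str, str] | None
) -> dict[str, str]:
    """Require all declared-required arguments and drop undeclared ones."""
    arguments = arguments or {}
    declared = {arg.name for arg in (prompt.arguments or [])}
    missing = [
        arg.name
        for arg in (prompt.arguments or [])
        if arg.required and arg.name not in arguments
    ]
    if missing:
        raise ValueError(f"Missing required arguments: {', '.join(missing)}")
    return {name: value for name, value in arguments.items() if name in declared}
```

Calling this at the top of a `prompts/get` handler turns missing or unexpected arguments into a clear error instead of a silently malformed template.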
1872 | 
1873 | ## UI integration
1874 | 
1875 | Prompts can be surfaced in client UIs as:
1876 | 
1877 | * Slash commands
1878 | * Quick actions
1879 | * Context menu items
1880 | * Command palette entries
1881 | * Guided workflows
1882 | * Interactive forms
1883 | 
1884 | ## Updates and changes
1885 | 
1886 | Servers can notify clients about prompt changes:
1887 | 
1888 | 1. Server capability: `prompts.listChanged`
1889 | 2. Notification: `notifications/prompts/list_changed`
1890 | 3. Client re-fetches prompt list
1891 | 
1892 | ## Security considerations
1893 | 
1894 | When implementing prompts:
1895 | 
1896 | * Validate all arguments
1897 | * Sanitize user input
1898 | * Consider rate limiting
1899 | * Implement access controls
1900 | * Audit prompt usage
1901 | * Handle sensitive data appropriately
1902 | * Validate generated content
1903 | * Implement timeouts
1904 | * Consider prompt injection risks
1905 | * Document security requirements
1906 | 
1907 | 
1908 | # Tools
1909 | 
1910 | > Enable LLMs to perform actions through your server
1911 | 
1912 | Tools are a powerful primitive in the Model Context Protocol (MCP) that enable servers to expose executable functionality to clients. Through tools, LLMs can interact with external systems, perform computations, and take actions in the real world.
1913 | 
1914 | <Note>
1915 |   Tools are designed to be **model-controlled**, meaning that tools are exposed from servers to clients with the intention of the AI model being able to automatically invoke them (with a human in the loop to grant approval).
1916 | </Note>
1917 | 
1918 | ## Overview
1919 | 
1920 | Tools in MCP allow servers to expose executable functions that can be invoked by clients and used by LLMs to perform actions. Key aspects of tools include:
1921 | 
1922 | * **Discovery**: Clients can list available tools through the `tools/list` endpoint
1923 | * **Invocation**: Tools are called using the `tools/call` endpoint, where servers perform the requested operation and return results
1924 | * **Flexibility**: Tools can range from simple calculations to complex API interactions
1925 | 
1926 | Like [resources](/docs/concepts/resources), tools are identified by unique names and can include descriptions to guide their usage. However, unlike resources, tools represent dynamic operations that can modify state or interact with external systems.
1927 | 
1928 | ## Tool definition structure
1929 | 
1930 | Each tool is defined with the following structure:
1931 | 
1932 | ```typescript
1933 | {
1934 |   name: string;          // Unique identifier for the tool
1935 |   description?: string;  // Human-readable description
1936 |   inputSchema: {         // JSON Schema for the tool's parameters
1937 |     type: "object",
1938 |     properties: { ... }  // Tool-specific parameters
1939 |   },
1940 |   annotations?: {        // Optional hints about tool behavior
1941 |     title?: string;      // Human-readable title for the tool
1942 |     readOnlyHint?: boolean;    // If true, the tool does not modify its environment
1943 |     destructiveHint?: boolean; // If true, the tool may perform destructive updates
1944 |     idempotentHint?: boolean;  // If true, repeated calls with same args have no additional effect
1945 |     openWorldHint?: boolean;   // If true, tool interacts with external entities
1946 |   }
1947 | }
1948 | ```
1949 | 
1950 | ## Implementing tools
1951 | 
1952 | Here's an example of implementing a basic tool in an MCP server:
1953 | 
1954 | TypeScript
1955 | ```typescript
1956 | const server = new Server({
1957 |   name: "example-server",
1958 |   version: "1.0.0"
1959 | }, {
1960 |   capabilities: {
1961 | 	tools: {}
1962 |   }
1963 | });
1964 | 
1965 | // Define available tools
1966 | server.setRequestHandler(ListToolsRequestSchema, async () => {
1967 |   return {
1968 | 	tools: [{
1969 | 	  name: "calculate_sum",
1970 | 	  description: "Add two numbers together",
1971 | 	  inputSchema: {
1972 | 		type: "object",
1973 | 		properties: {
1974 | 		  a: { type: "number" },
1975 | 		  b: { type: "number" }
1976 | 		},
1977 | 		required: ["a", "b"]
1978 | 	  }
1979 | 	}]
1980 |   };
1981 | });
1982 | 
1983 | // Handle tool execution
1984 | server.setRequestHandler(CallToolRequestSchema, async (request) => {
1985 |   if (request.params.name === "calculate_sum") {
1986 | 	const { a, b } = request.params.arguments;
1987 | 	return {
1988 | 	  content: [
1989 | 		{
1990 | 		  type: "text",
1991 | 		  text: String(a + b)
1992 | 		}
1993 | 	  ]
1994 | 	};
1995 |   }
1996 |   throw new Error("Tool not found");
1997 | });
1998 | ```
1999 | 
2000 | Python
2001 | ```python
2002 | app = Server("example-server")
2003 | 
2004 | @app.list_tools()
2005 | async def list_tools() -> list[types.Tool]:
2006 | 	return [
2007 | 		types.Tool(
2008 | 			name="calculate_sum",
2009 | 			description="Add two numbers together",
2010 | 			inputSchema={
2011 | 				"type": "object",
2012 | 				"properties": {
2013 | 					"a": {"type": "number"},
2014 | 					"b": {"type": "number"}
2015 | 				},
2016 | 				"required": ["a", "b"]
2017 | 			}
2018 | 		)
2019 | 	]
2020 | 
2021 | @app.call_tool()
2022 | async def call_tool(
2023 | 	name: str,
2024 | 	arguments: dict
2025 | ) -> list[types.TextContent | types.ImageContent | types.EmbeddedResource]:
2026 | 	if name == "calculate_sum":
2027 | 		a = arguments["a"]
2028 | 		b = arguments["b"]
2029 | 		result = a + b
2030 | 		return [types.TextContent(type="text", text=str(result))]
2031 | 	raise ValueError(f"Tool not found: {name}")
2032 | ```
2033 | 
2034 | 
2035 | ## Example tool patterns
2036 | 
2037 | Here are some examples of types of tools that a server could provide:
2038 | 
2039 | ### System operations
2040 | 
2041 | Tools that interact with the local system:
2042 | 
2043 | ```typescript
2044 | {
2045 |   name: "execute_command",
2046 |   description: "Run a shell command",
2047 |   inputSchema: {
2048 |     type: "object",
2049 |     properties: {
2050 |       command: { type: "string" },
2051 |       args: { type: "array", items: { type: "string" } }
2052 |     }
2053 |   }
2054 | }
2055 | ```
2056 | 
2057 | ### API integrations
2058 | 
2059 | Tools that wrap external APIs:
2060 | 
2061 | ```typescript
2062 | {
2063 |   name: "github_create_issue",
2064 |   description: "Create a GitHub issue",
2065 |   inputSchema: {
2066 |     type: "object",
2067 |     properties: {
2068 |       title: { type: "string" },
2069 |       body: { type: "string" },
2070 |       labels: { type: "array", items: { type: "string" } }
2071 |     }
2072 |   }
2073 | }
2074 | ```
2075 | 
2076 | ### Data processing
2077 | 
2078 | Tools that transform or analyze data:
2079 | 
2080 | ```typescript
2081 | {
2082 |   name: "analyze_csv",
2083 |   description: "Analyze a CSV file",
2084 |   inputSchema: {
2085 |     type: "object",
2086 |     properties: {
2087 |       filepath: { type: "string" },
2088 |       operations: {
2089 |         type: "array",
2090 |         items: {
2091 |           enum: ["sum", "average", "count"]
2092 |         }
2093 |       }
2094 |     }
2095 |   }
2096 | }
2097 | ```
2098 | 
2099 | ## Best practices
2100 | 
2101 | When implementing tools:
2102 | 
2103 | 1. Provide clear, descriptive names and descriptions
2104 | 2. Use detailed JSON Schema definitions for parameters
2105 | 3. Include examples in tool descriptions to demonstrate how the model should use them
2106 | 4. Implement proper error handling and validation
2107 | 5. Use progress reporting for long operations (see the sketch after this list)
2108 | 6. Keep tool operations focused and atomic
2109 | 7. Document expected return value structures
2110 | 8. Implement proper timeouts
2111 | 9. Consider rate limiting for resource-intensive operations
2112 | 10. Log tool usage for debugging and monitoring
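
For point 5, the FastMCP `Context` object gives a tool access to progress reporting. A short, hedged sketch (the `process_files` tool is made up for illustration):

```python
from mcp.server.fastmcp import Context, FastMCP

mcp = FastMCP("example-server")

@mcp.tool()
async def process_files(files: list[str], ctx: Context) -> str:
    """Process a batch of files, reporting progress after each one."""
    for index, path in enumerate(files):
        # ... do the actual work for `path` here ...
        await ctx.report_progress(index + 1, len(files))
    return f"Processed {len(files)} files"
```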
2113 | 
2114 | ## Security considerations
2115 | 
2116 | When exposing tools:
2117 | 
2118 | ### Input validation
2119 | 
2120 | * Validate all parameters against the schema
2121 | * Sanitize file paths and system commands
2122 | * Validate URLs and external identifiers
2123 | * Check parameter sizes and ranges
2124 | * Prevent command injection
2125 | 
2126 | ### Access control
2127 | 
2128 | * Implement authentication where needed
2129 | * Use appropriate authorization checks
2130 | * Audit tool usage
2131 | * Rate limit requests
2132 | * Monitor for abuse
2133 | 
2134 | ### Error handling
2135 | 
2136 | * Don't expose internal errors to clients
2137 | * Log security-relevant errors
2138 | * Handle timeouts appropriately
2139 | * Clean up resources after errors
2140 | * Validate return values
2141 | 
2142 | ## Tool discovery and updates
2143 | 
2144 | MCP supports dynamic tool discovery:
2145 | 
2146 | 1. Clients can list available tools at any time
2147 | 2. Servers can notify clients when tools change using `notifications/tools/list_changed`
2148 | 3. Tools can be added or removed during runtime
2149 | 4. Tool definitions can be updated (though this should be done carefully)
2150 | 
2151 | ## Error handling
2152 | 
2153 | Tool errors should be reported within the result object, not as MCP protocol-level errors. This allows the LLM to see and potentially handle the error. When a tool encounters an error:
2154 | 
2155 | 1. Set `isError` to `true` in the result
2156 | 2. Include error details in the `content` array
2157 | 
2158 | Here's an example of proper error handling for tools:
2159 | TypeScript
2160 | ```typescript
2161 | try {
2162 |   // Tool operation
2163 |   const result = performOperation();
2164 |   return {
2165 | 	content: [
2166 | 	  {
2167 | 		type: "text",
2168 | 		text: `Operation successful: ${result}`
2169 | 	  }
2170 | 	]
2171 |   };
2172 | } catch (error) {
2173 |   return {
2174 | 	isError: true,
2175 | 	content: [
2176 | 	  {
2177 | 		type: "text",
2178 | 		text: `Error: ${error.message}`
2179 | 	  }
2180 | 	]
2181 |   };
2182 | }
2183 | ```
2184 | 
2185 |   Python
2186 | ```python
2187 | try:
2188 | 	# Tool operation
2189 | 	result = perform_operation()
2190 | 	return types.CallToolResult(
2191 | 		content=[
2192 | 			types.TextContent(
2193 | 				type="text",
2194 | 				text=f"Operation successful: {result}"
2195 | 			)
2196 | 		]
2197 | 	)
2198 | except Exception as error:
2199 | 	return types.CallToolResult(
2200 | 		isError=True,
2201 | 		content=[
2202 | 			types.TextContent(
2203 | 				type="text",
2204 | 				text=f"Error: {str(error)}"
2205 | 			)
2206 | 		]
2207 | 	)
2208 | ```
2209 | 
2210 | 
2211 | This approach allows the LLM to see that an error occurred and potentially take corrective action or request human intervention.
2212 | 
2213 | ## Tool annotations
2214 | 
2215 | Tool annotations provide additional metadata about a tool's behavior, helping clients understand how to present and manage tools. These annotations are hints that describe the nature and impact of a tool, but should not be relied upon for security decisions.
2216 | 
2217 | ### Purpose of tool annotations
2218 | 
2219 | Tool annotations serve several key purposes:
2220 | 
2221 | 1. Provide UX-specific information without affecting model context
2222 | 2. Help clients categorize and present tools appropriately
2223 | 3. Convey information about a tool's potential side effects
2224 | 4. Assist in developing intuitive interfaces for tool approval
2225 | 
2226 | ### Available tool annotations
2227 | 
2228 | The MCP specification defines the following annotations for tools:
2229 | 
2230 | | Annotation        | Type    | Default | Description                                                                                                                          |
2231 | | ----------------- | ------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------ |
2232 | | `title`           | string  | -       | A human-readable title for the tool, useful for UI display                                                                           |
2233 | | `readOnlyHint`    | boolean | false   | If true, indicates the tool does not modify its environment                                                                          |
2234 | | `destructiveHint` | boolean | true    | If true, the tool may perform destructive updates (only meaningful when `readOnlyHint` is false)                                     |
2235 | | `idempotentHint`  | boolean | false   | If true, calling the tool repeatedly with the same arguments has no additional effect (only meaningful when `readOnlyHint` is false) |
2236 | | `openWorldHint`   | boolean | true    | If true, the tool may interact with an "open world" of external entities                                                             |
2237 | 
2238 | ### Example usage
2239 | 
2240 | Here's how to define tools with annotations for different scenarios:
2241 | 
2242 | ```typescript
2243 | // A read-only search tool
2244 | {
2245 |   name: "web_search",
2246 |   description: "Search the web for information",
2247 |   inputSchema: {
2248 |     type: "object",
2249 |     properties: {
2250 |       query: { type: "string" }
2251 |     },
2252 |     required: ["query"]
2253 |   },
2254 |   annotations: {
2255 |     title: "Web Search",
2256 |     readOnlyHint: true,
2257 |     openWorldHint: true
2258 |   }
2259 | }
2260 | 
2261 | // A destructive file deletion tool
2262 | {
2263 |   name: "delete_file",
2264 |   description: "Delete a file from the filesystem",
2265 |   inputSchema: {
2266 |     type: "object",
2267 |     properties: {
2268 |       path: { type: "string" }
2269 |     },
2270 |     required: ["path"]
2271 |   },
2272 |   annotations: {
2273 |     title: "Delete File",
2274 |     readOnlyHint: false,
2275 |     destructiveHint: true,
2276 |     idempotentHint: true,
2277 |     openWorldHint: false
2278 |   }
2279 | }
2280 | 
2281 | // A non-destructive database record creation tool
2282 | {
2283 |   name: "create_record",
2284 |   description: "Create a new record in the database",
2285 |   inputSchema: {
2286 |     type: "object",
2287 |     properties: {
2288 |       table: { type: "string" },
2289 |       data: { type: "object" }
2290 |     },
2291 |     required: ["table", "data"]
2292 |   },
2293 |   annotations: {
2294 |     title: "Create Database Record",
2295 |     readOnlyHint: false,
2296 |     destructiveHint: false,
2297 |     idempotentHint: false,
2298 |     openWorldHint: false
2299 |   }
2300 | }
2301 | ```
2302 | 
2303 | ### Integrating annotations in server implementation
2304 | 
2305 | TypeScript
2306 | ```typescript
2307 | server.setRequestHandler(ListToolsRequestSchema, async () => {
2308 |   return {
2309 | 	tools: [{
2310 | 	  name: "calculate_sum",
2311 | 	  description: "Add two numbers together",
2312 | 	  inputSchema: {
2313 | 		type: "object",
2314 | 		properties: {
2315 | 		  a: { type: "number" },
2316 | 		  b: { type: "number" }
2317 | 		},
2318 | 		required: ["a", "b"]
2319 | 	  },
2320 | 	  annotations: {
2321 | 		title: "Calculate Sum",
2322 | 		readOnlyHint: true,
2323 | 		openWorldHint: false
2324 | 	  }
2325 | 	}]
2326 |   };
2327 | });
2328 | ```
2329 |  
2330 |  Python
2331 | ```python
2332 | from mcp.server.fastmcp import FastMCP
2333 | 
2334 | mcp = FastMCP("example-server")
2335 | 
2336 | @mcp.tool(
2337 | 	annotations={
2338 | 		"title": "Calculate Sum",
2339 | 		"readOnlyHint": True,
2340 | 		"openWorldHint": False
2341 | 	}
2342 | )
2343 | async def calculate_sum(a: float, b: float) -> str:
2344 | 	"""Add two numbers together.
2345 | 	
2346 | 	Args:
2347 | 		a: First number to add
2348 | 		b: Second number to add
2349 | 	"""
2350 | 	result = a + b
2351 | 	return str(result)
2352 | ```
2353 | 
2354 | 
2355 | ### Best practices for tool annotations
2356 | 
2357 | 1. **Be accurate about side effects**: Clearly indicate whether a tool modifies its environment and whether those modifications are destructive.
2358 | 
2359 | 2. **Use descriptive titles**: Provide human-friendly titles that clearly describe the tool's purpose.
2360 | 
2361 | 3. **Indicate idempotency properly**: Mark tools as idempotent only if repeated calls with the same arguments truly have no additional effect.
2362 | 
2363 | 4. **Set appropriate open/closed world hints**: Indicate whether a tool interacts with a closed system (like a database) or an open system (like the web).
2364 | 
2365 | 5. **Remember annotations are hints**: All properties in ToolAnnotations are hints and not guaranteed to provide a faithful description of tool behavior. Clients should never make security-critical decisions based solely on annotations.
2366 | 
2367 | ## Testing tools
2368 | 
2369 | A comprehensive testing strategy for MCP tools should cover the areas below; a minimal functional-test sketch follows the list:
2370 | 
2371 | * **Functional testing**: Verify tools execute correctly with valid inputs and handle invalid inputs appropriately
2372 | * **Integration testing**: Test tool interaction with external systems using both real and mocked dependencies
2373 | * **Security testing**: Validate authentication, authorization, input sanitization, and rate limiting
2374 | * **Performance testing**: Check behavior under load, timeout handling, and resource cleanup
2375 | * **Error handling**: Ensure tools properly report errors through the MCP protocol and clean up resources
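
As a starting point for the functional-testing bullet, here is a minimal pytest sketch against the `calculate_sum` handler from the earlier Python example. It assumes the anyio pytest plugin is installed, that the handler lives in a module named `server`, and that the `@app.call_tool()` decorator leaves the function directly callable:

```python
import pytest

from server import call_tool  # hypothetical module holding the @app.call_tool() handler

@pytest.mark.anyio
async def test_calculate_sum_returns_text_content():
    result = await call_tool("calculate_sum", {"a": 2, "b": 3})
    assert result[0].type == "text"
    assert result[0].text == "5"

@pytest.mark.anyio
async def test_unknown_tool_raises():
    with pytest.raises(ValueError):
        await call_tool("does_not_exist", {})
```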
2376 | 
2377 | 
2378 | # Transports
2379 | 
2380 | > Learn about MCP's communication mechanisms
2381 | 
2382 | Transports in the Model Context Protocol (MCP) provide the foundation for communication between clients and servers. A transport handles the underlying mechanics of how messages are sent and received.
2383 | 
2384 | ## Message Format
2385 | 
2386 | MCP uses [JSON-RPC](https://www.jsonrpc.org/) 2.0 as its wire format. The transport layer is responsible for converting MCP protocol messages into JSON-RPC format for transmission and converting received JSON-RPC messages back into MCP protocol messages.
2387 | 
2388 | There are three types of JSON-RPC messages used:
2389 | 
2390 | ### Requests
2391 | 
2392 | ```typescript
2393 | {
2394 |   jsonrpc: "2.0",
2395 |   id: number | string,
2396 |   method: string,
2397 |   params?: object
2398 | }
2399 | ```
2400 | 
2401 | ### Responses
2402 | 
2403 | ```typescript
2404 | {
2405 |   jsonrpc: "2.0",
2406 |   id: number | string,
2407 |   result?: object,
2408 |   error?: {
2409 |     code: number,
2410 |     message: string,
2411 |     data?: unknown
2412 |   }
2413 | }
2414 | ```
2415 | 
2416 | ### Notifications
2417 | 
2418 | ```typescript
2419 | {
2420 |   jsonrpc: "2.0",
2421 |   method: string,
2422 |   params?: object
2423 | }
2424 | ```
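
On the wire these are plain JSON objects; the only structural difference is that a request carries an `id` (and therefore expects a response) while a notification does not. A small illustrative sketch:

```python
import json

# Request: has an id, so the receiver must answer with a matching response
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}

# Notification: no id, so no response is expected
notification = {
    "jsonrpc": "2.0",
    "method": "notifications/resources/list_changed",
}

print(json.dumps(request))
print(json.dumps(notification))
```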
2425 | 
2426 | ## Built-in Transport Types
2427 | 
2428 | MCP includes two standard transport implementations:
2429 | 
2430 | ### Standard Input/Output (stdio)
2431 | 
2432 | The stdio transport enables communication through standard input and output streams. This is particularly useful for local integrations and command-line tools.
2433 | 
2434 | Use stdio when:
2435 | 
2436 | * Building command-line tools
2437 | * Implementing local integrations
2438 | * Needing simple process communication
2439 | * Working with shell scripts
2440 | 
2441 | TypeScript (Server)
2442 | ```typescript
2443 | const server = new Server({
2444 |   name: "example-server",
2445 |   version: "1.0.0"
2446 | }, {
2447 |   capabilities: {}
2448 | });
2449 | 
2450 | const transport = new StdioServerTransport();
2451 | await server.connect(transport);
2452 | ```
2453 | 
2454 | TypeScript (Client)
2455 | ```typescript
2456 | const client = new Client({
2457 |   name: "example-client",
2458 |   version: "1.0.0"
2459 | }, {
2460 |   capabilities: {}
2461 | });
2462 | 
2463 | const transport = new StdioClientTransport({
2464 |   command: "./server",
2465 |   args: ["--option", "value"]
2466 | });
2467 | await client.connect(transport);
2468 | ```
2469 |  
2470 |  Python (Server)
2471 | ```python
2472 | app = Server("example-server")
2473 | 
2474 | async with stdio_server() as streams:
2475 | 	await app.run(
2476 | 		streams[0],
2477 | 		streams[1],
2478 | 		app.create_initialization_options()
2479 | 	)
2480 | ```
2481 | 
2482 | Python (Client)
2483 | ```python
2484 | params = StdioServerParameters(
2485 | 	command="./server",
2486 | 	args=["--option", "value"]
2487 | )
2488 | 
2489 | async with stdio_client(params) as streams:
2490 | 	async with ClientSession(streams[0], streams[1]) as session:
2491 | 		await session.initialize()
2492 | ```
2493 | 
2494 | 
2495 | 
2496 | ### Server-Sent Events (SSE)
2497 | 
2498 | SSE transport enables server-to-client streaming with HTTP POST requests for client-to-server communication.
2499 | 
2500 | Use SSE when:
2501 | 
2502 | * Only server-to-client streaming is needed
2503 | * Working with restricted networks
2504 | * Implementing simple updates
2505 | 
2506 | #### Security Warning: DNS Rebinding Attacks
2507 | 
2508 | SSE transports can be vulnerable to DNS rebinding attacks if not properly secured. To prevent this:
2509 | 
2510 | 1. **Always validate Origin headers** on incoming SSE connections to ensure they come from expected sources
2511 | 2. **Avoid binding servers to all network interfaces** (0.0.0.0) when running locally - bind only to localhost (127.0.0.1) instead
2512 | 3. **Implement proper authentication** for all SSE connections
2513 | 
2514 | Without these protections, attackers could use DNS rebinding to interact with local MCP servers from remote websites.
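
For the Python/Starlette setup shown further below, one hedged way to apply points 1 and 2 is an Origin-checking middleware plus a localhost-only bind; the `ALLOWED_ORIGINS` values are illustrative:

```python
import uvicorn
from starlette.applications import Starlette
from starlette.middleware import Middleware
from starlette.middleware.base import BaseHTTPMiddleware
from starlette.responses import PlainTextResponse

ALLOWED_ORIGINS = {"http://localhost:8000", "http://127.0.0.1:8000"}

class OriginCheckMiddleware(BaseHTTPMiddleware):
    """Reject cross-origin requests before they reach the SSE endpoints."""

    async def dispatch(self, request, call_next):
        origin = request.headers.get("origin")
        if origin is not None and origin not in ALLOWED_ORIGINS:
            return PlainTextResponse("Forbidden origin", status_code=403)
        return await call_next(request)

# Wrap your SSE routes (for example the Starlette app from the Python server
# example below) with the middleware, and bind to localhost only, never 0.0.0.0.
app = Starlette(routes=[], middleware=[Middleware(OriginCheckMiddleware)])

if __name__ == "__main__":
    uvicorn.run(app, host="127.0.0.1", port=8000)
```
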
2515 | TypeScript (Server)
2516 | ```typescript
2517 | import express from "express";
2518 | 
2519 | const app = express();
2520 | 
2521 | const server = new Server({
2522 |   name: "example-server",
2523 |   version: "1.0.0"
2524 | }, {
2525 |   capabilities: {}
2526 | });
2527 | 
2528 | let transport: SSEServerTransport | null = null;
2529 | 
2530 | app.get("/sse", (req, res) => {
2531 |   transport = new SSEServerTransport("/messages", res);
2532 |   server.connect(transport);
2533 | });
2534 | 
2535 | app.post("/messages", (req, res) => {
2536 |   if (transport) {
2537 | 	transport.handlePostMessage(req, res);
2538 |   }
2539 | });
2540 | 
2541 | app.listen(3000);
2542 | ```
2543 | 
2544 | TypeScript (Client)
2545 | ```typescript
2546 | const client = new Client({
2547 |   name: "example-client",
2548 |   version: "1.0.0"
2549 | }, {
2550 |   capabilities: {}
2551 | });
2552 | 
2553 | const transport = new SSEClientTransport(
2554 |   new URL("http://localhost:3000/sse")
2555 | );
2556 | await client.connect(transport);
2557 | ```
2558 |   
2559 |   
2560 |   Python (Server)
2561 | ```python
2562 | from mcp.server.sse import SseServerTransport
2563 | from starlette.applications import Starlette
2564 | from starlette.routing import Route
2565 | 
2566 | app = Server("example-server")
2567 | sse = SseServerTransport("/messages")
2568 | 
2569 | async def handle_sse(scope, receive, send):
2570 | 	async with sse.connect_sse(scope, receive, send) as streams:
2571 | 		await app.run(streams[0], streams[1], app.create_initialization_options())
2572 | 
2573 | async def handle_messages(scope, receive, send):
2574 | 	await sse.handle_post_message(scope, receive, send)
2575 | 
2576 | starlette_app = Starlette(
2577 | 	routes=[
2578 | 		Route("/sse", endpoint=handle_sse),
2579 | 		Route("/messages", endpoint=handle_messages, methods=["POST"]),
2580 | 	]
2581 | )
2582 | ```
2583 |  
2584 |   
2585 |   Python (Client)
2586 | ```python
2587 | async with sse_client("http://localhost:8000/sse") as streams:
2588 | 	async with ClientSession(streams[0], streams[1]) as session:
2589 | 		await session.initialize()
2590 | ```
2591 | 
2592 | 
2593 | 
2594 | ## Custom Transports
2595 | 
2596 | MCP makes it easy to implement custom transports for specific needs. Any transport implementation just needs to conform to the Transport interface shown below.
2597 | 
2598 | You can implement custom transports for:
2599 | 
2600 | * Custom network protocols
2601 | * Specialized communication channels
2602 | * Integration with existing systems
2603 | * Performance optimization
2604 | 
2605 | TypeScript
2606 | ```typescript
2607 | interface Transport {
2608 |   // Start processing messages
2609 |   start(): Promise<void>;
2610 | 
2611 |   // Send a JSON-RPC message
2612 |   send(message: JSONRPCMessage): Promise<void>;
2613 | 
2614 |   // Close the connection
2615 |   close(): Promise<void>;
2616 | 
2617 |   // Callbacks
2618 |   onclose?: () => void;
2619 |   onerror?: (error: Error) => void;
2620 |   onmessage?: (message: JSONRPCMessage) => void;
2621 | }
2622 | ```
2623 | 
2624 | Python
2625 | 
2626 | Note that while MCP Servers are often implemented with asyncio, we recommend
2627 | implementing low-level interfaces like transports with `anyio` for wider compatibility.
2628 | 
2629 | ```python
2630 | @asynccontextmanager  # from contextlib (an async generator needs the async variant)
2631 | async def create_transport(
2632 | 	read_stream: MemoryObjectReceiveStream[JSONRPCMessage | Exception],
2633 | 	write_stream: MemoryObjectSendStream[JSONRPCMessage]
2634 | ):
2635 | 	"""
2636 | 	Transport interface for MCP.
2637 | 
2638 | 	Args:
2639 | 		read_stream: Stream to read incoming messages from
2640 | 		write_stream: Stream to write outgoing messages to
2641 | 	"""
2642 | 	async with anyio.create_task_group() as tg:
2643 | 		try:
2644 | 			# Start processing messages
2645 | 			tg.start_soon(lambda: process_messages(read_stream))
2646 | 
2647 | 			# Send messages
2648 | 			async with write_stream:
2649 | 				yield write_stream
2650 | 
2651 | 		except Exception as exc:
2652 | 			# Handle errors
2653 | 			raise exc
2654 | 		finally:
2655 | 			# Clean up
2656 | 			tg.cancel_scope.cancel()
2657 | 			await write_stream.aclose()
2658 | 			await read_stream.aclose()
2659 | ```
2660 | 
2661 | 
2662 | ## Error Handling
2663 | 
2664 | Transport implementations should handle various error scenarios:
2665 | 
2666 | 1. Connection errors
2667 | 2. Message parsing errors
2668 | 3. Protocol errors
2669 | 4. Network timeouts
2670 | 5. Resource cleanup
2671 | 
2672 | Example error handling:
2673 | 
2674 | TypeScript
2675 | ```typescript
2676 | class ExampleTransport implements Transport {
2677 |   async start() {
2678 | 	try {
2679 | 	  // Connection logic
2680 | 	} catch (error) {
2681 | 	  this.onerror?.(new Error(`Failed to connect: ${error}`));
2682 | 	  throw error;
2683 | 	}
2684 |   }
2685 | 
2686 |   async send(message: JSONRPCMessage) {
2687 | 	try {
2688 | 	  // Sending logic
2689 | 	} catch (error) {
2690 | 	  this.onerror?.(new Error(`Failed to send message: ${error}`));
2691 | 	  throw error;
2692 | 	}
2693 |   }
2694 | }
2695 | ```
2696 | 
2697 | Python
2698 | Note that while MCP Servers are often implemented with asyncio, we recommend
2699 | implementing low-level interfaces like transports with `anyio` for wider compatibility.
2700 | 
2701 | ```python
2702 | @asynccontextmanager  # from contextlib (async context manager for an async generator)
2703 | async def example_transport(scope: Scope, receive: Receive, send: Send):
2704 | 	try:
2705 | 		# Create streams for bidirectional communication
2706 | 		read_stream_writer, read_stream = anyio.create_memory_object_stream(0)
2707 | 		write_stream, write_stream_reader = anyio.create_memory_object_stream(0)
2708 | 
2709 | 		async def message_handler():
2710 | 			try:
2711 | 				async with read_stream_writer:
2712 | 					# Message handling logic
2713 | 					pass
2714 | 			except Exception as exc:
2715 | 				logger.error(f"Failed to handle message: {exc}")
2716 | 				raise exc
2717 | 
2718 | 		async with anyio.create_task_group() as tg:
2719 | 			tg.start_soon(message_handler)
2720 | 			try:
2721 | 				# Yield streams for communication
2722 | 				yield read_stream, write_stream
2723 | 			except Exception as exc:
2724 | 				logger.error(f"Transport error: {exc}")
2725 | 				raise exc
2726 | 			finally:
2727 | 				tg.cancel_scope.cancel()
2728 | 				await write_stream.aclose()
2729 | 				await read_stream.aclose()
2730 | 	except Exception as exc:
2731 | 		logger.error(f"Failed to initialize transport: {exc}")
2732 | 		raise exc
2733 | ```
2734 | 
2735 | 
2736 | 
2737 | ## Best Practices
2738 | 
2739 | When implementing or using MCP transport:
2740 | 
2741 | 1. Handle connection lifecycle properly
2742 | 2. Implement proper error handling
2743 | 3. Clean up resources on connection close
2744 | 4. Use appropriate timeouts
2745 | 5. Validate messages before sending
2746 | 6. Log transport events for debugging
2747 | 7. Implement reconnection logic when appropriate
2748 | 8. Handle backpressure in message queues
2749 | 9. Monitor connection health
2750 | 10. Implement proper security measures
2751 | 
2752 | ## Security Considerations
2753 | 
2754 | When implementing transport:
2755 | 
2756 | ### Authentication and Authorization
2757 | 
2758 | * Implement proper authentication mechanisms
2759 | * Validate client credentials
2760 | * Use secure token handling
2761 | * Implement authorization checks
2762 | 
2763 | ### Data Security
2764 | 
2765 | * Use TLS for network transport
2766 | * Encrypt sensitive data
2767 | * Validate message integrity
2768 | * Implement message size limits
2769 | * Sanitize input data
2770 | 
2771 | ### Network Security
2772 | 
2773 | * Implement rate limiting
2774 | * Use appropriate timeouts
2775 | * Handle denial of service scenarios
2776 | * Monitor for unusual patterns
2777 | * Implement proper firewall rules
2778 | * For SSE transports, validate Origin headers to prevent DNS rebinding attacks
2779 | * For local SSE servers, bind only to localhost (127.0.0.1) instead of all interfaces (0.0.0.0)
2780 | 
2781 | ## Debugging Transport
2782 | 
2783 | Tips for debugging transport issues:
2784 | 
2785 | 1. Enable debug logging
2786 | 2. Monitor message flow
2787 | 3. Check connection states
2788 | 4. Validate message formats
2789 | 5. Test error scenarios
2790 | 6. Use network analysis tools
2791 | 7. Implement health checks
2792 | 8. Monitor resource usage
2793 | 9. Test edge cases
2794 | 10. Use proper error tracking
2795 | 
2796 | 
2797 | 
```