This is page 2 of 2. Use http://codebase.md/klauern/mcp-ynab?lines=true&page={x} to view the full context.
# Directory Structure
```
├── .cursor
│ └── rules
│ ├── mcp.mdc
│ └── project-setup.mdc
├── .env.example
├── .gitignore
├── CLAUDE.md
├── docs
│ ├── llms-full.txt
│ └── mcp-py-sdk.md
├── mise.toml
├── package-lock.json
├── pyproject.toml
├── README.md
├── src
│ └── mcp_ynab
│ ├── __init__.py
│ ├── __main__.py
│ └── server.py
├── Taskfile.yml
├── tests
│ ├── __init__.py
│ ├── conftest.py
│ ├── test_environment.py
│ └── test_server.py
├── todo.txt
└── uv.lock
```
# Files
--------------------------------------------------------------------------------
/docs/llms-full.txt:
--------------------------------------------------------------------------------
```
1 | # Example Clients
2 |
3 | A list of applications that support MCP integrations
4 |
5 | This page provides an overview of applications that support the Model Context Protocol (MCP). Each client may support different MCP features, allowing for varying levels of integration with MCP servers.
6 |
7 | ## Feature support matrix
8 |
9 | | Client | [Resources] | [Prompts] | [Tools] | [Sampling] | Roots | Notes |
10 | | ------------------------------------------ | ----------- | --------- | ------- | ---------- | ----- | ------------------------------------------------------------------ |
11 | | [Claude Desktop App][Claude] | ✅ | ✅ | ✅ | ❌ | ❌ | Full support for all MCP features |
12 | | [Zed][Zed] | ❌ | ✅ | ❌ | ❌ | ❌ | Prompts appear as slash commands |
13 | | [Sourcegraph Cody][Cody] | ✅ | ❌ | ❌ | ❌ | ❌ | Supports resources through OpenCTX |
14 | | [Firebase Genkit][Genkit] | ⚠️ | ✅ | ✅ | ❌ | ❌ | Supports resource list and lookup through tools. |
15 | | [Continue][Continue] | ✅ | ✅ | ✅ | ❌ | ❌ | Full support for all MCP features |
16 | | [GenAIScript][GenAIScript] | ❌ | ❌ | ✅ | ❌ | ❌ | Supports tools. |
17 | | [Cline][Cline] | ✅ | ❌ | ✅ | ❌ | ❌ | Supports tools and resources. |
18 | | [LibreChat][LibreChat] | ❌ | ❌ | ✅ | ❌ | ❌ | Supports tools for Agents |
19 | | [TheiaAI/TheiaIDE][TheiaAI/TheiaIDE] | ❌ | ❌ | ✅ | ❌ | ❌ | Supports tools for Agents in Theia AI and the AI-powered Theia IDE |
20 | | [Superinterface][Superinterface] | ❌ | ❌ | ✅ | ❌ | ❌ | Supports tools |
21 | | [5ire][5ire] | ❌ | ❌ | ✅ | ❌ | ❌ | Supports tools. |
22 | | [Bee Agent Framework][Bee Agent Framework] | ❌ | ❌ | ✅ | ❌ | ❌ | Supports tools in agentic workflows. |
23 |
24 | [Claude]: https://claude.ai/download
25 |
26 | [Zed]: https://zed.dev
27 |
28 | [Cody]: https://sourcegraph.com/cody
29 |
30 | [Genkit]: https://github.com/firebase/genkit
31 |
32 | [Continue]: https://github.com/continuedev/continue
33 |
34 | [GenAIScript]: https://microsoft.github.io/genaiscript/reference/scripts/mcp-tools/
35 |
36 | [Cline]: https://github.com/cline/cline
37 |
38 | [LibreChat]: https://github.com/danny-avila/LibreChat
39 |
40 | [TheiaAI/TheiaIDE]: https://eclipsesource.com/blogs/2024/12/19/theia-ide-and-theia-ai-support-mcp/
41 |
42 | [Superinterface]: https://superinterface.ai
43 |
44 | [5ire]: https://github.com/nanbingxyz/5ire
45 |
46 | [Bee Agent Framework]: https://i-am-bee.github.io/bee-agent-framework
47 |
48 | [Resources]: https://modelcontextprotocol.io/docs/concepts/resources
49 |
50 | [Prompts]: https://modelcontextprotocol.io/docs/concepts/prompts
51 |
52 | [Tools]: https://modelcontextprotocol.io/docs/concepts/tools
53 |
54 | [Sampling]: https://modelcontextprotocol.io/docs/concepts/sampling
55 |
56 | ## Client details
57 |
58 | ### Claude Desktop App
59 |
60 | The Claude desktop application provides comprehensive support for MCP, enabling deep integration with local tools and data sources.
61 |
62 | **Key features:**
63 |
64 | * Full support for resources, allowing attachment of local files and data
65 | * Support for prompt templates
66 | * Tool integration for executing commands and scripts
67 | * Local server connections for enhanced privacy and security
68 |
69 | > ⓘ Note: The Claude.ai web application does not currently support MCP. MCP features are only available in the desktop application.
70 |
71 | ### Zed
72 |
73 | [Zed](https://zed.dev/docs/assistant/model-context-protocol) is a high-performance code editor with built-in MCP support, focusing on prompt templates and tool integration.
74 |
75 | **Key features:**
76 |
77 | * Prompt templates surface as slash commands in the editor
78 | * Tool integration for enhanced coding workflows
79 | * Tight integration with editor features and workspace context
80 | * Does not support MCP resources
81 |
82 | ### Sourcegraph Cody
83 |
84 | [Cody](https://openctx.org/docs/providers/modelcontextprotocol) is Sourcegraph's AI coding assistant, which implements MCP through OpenCTX.
85 |
86 | **Key features:**
87 |
88 | * Support for MCP resources
89 | * Integration with Sourcegraph's code intelligence
90 | * Uses OpenCTX as an abstraction layer
91 | * Future support planned for additional MCP features
92 |
93 | ### Firebase Genkit
94 |
95 | [Genkit](https://github.com/firebase/genkit) is Firebase's SDK for building and integrating GenAI features into applications. The [genkitx-mcp](https://github.com/firebase/genkit/tree/main/js/plugins/mcp) plugin enables consuming MCP servers as a client or creating MCP servers from Genkit tools and prompts.
96 |
97 | **Key features:**
98 |
99 | * Client support for tools and prompts (resources partially supported)
100 | * Rich discovery with support in Genkit's Dev UI playground
101 | * Seamless interoperability with Genkit's existing tools and prompts
102 | * Works across a wide variety of GenAI models from top providers
103 |
104 | ### Continue
105 |
106 | [Continue](https://github.com/continuedev/continue) is an open-source AI code assistant, with built-in support for all MCP features.
107 |
108 | **Key features:**
109 |
110 | * Type "@" to mention MCP resources
111 | * Prompt templates surface as slash commands
112 | * Use both built-in and MCP tools directly in chat
113 | * Supports VS Code and JetBrains IDEs, with any LLM
114 |
115 | ### GenAIScript
116 |
117 | [GenAIScript](https://microsoft.github.io/genaiscript/) lets you programmatically assemble prompts for LLMs and orchestrate LLMs, tools, and data in JavaScript.
118 |
119 | **Key features:**
120 |
121 | * JavaScript toolbox to work with prompts
122 | * Abstractions that make prompt authoring easy and productive
123 | * Seamless Visual Studio Code integration
124 |
125 | ### Cline
126 |
127 | [Cline](https://github.com/cline/cline) is an autonomous coding agent in VS Code that edits files, runs commands, uses a browser, and more, with your permission at each step.
128 |
129 | **Key features:**
130 |
131 | * Create and add tools through natural language (e.g. "add a tool that searches the web")
132 | * Share custom MCP servers Cline creates with others via the `~/Documents/Cline/MCP` directory
133 | * Displays configured MCP servers along with their tools, resources, and any error logs
134 |
135 | ### LibreChat
136 |
137 | [LibreChat](https://github.com/danny-avila/LibreChat) is an open-source, customizable AI chat UI that supports multiple AI providers, now including MCP integration.
138 |
139 | **Key features:**
140 |
141 | * Extend current tool ecosystem, including [Code Interpreter](https://www.librechat.ai/docs/features/code_interpreter) and Image generation tools, through MCP servers
142 | * Add tools to customizable [Agents](https://www.librechat.ai/docs/features/agents), using a variety of LLMs from top providers
143 | * Open-source and self-hostable, with secure multi-user support
144 | * Future roadmap includes expanded MCP feature support
145 |
146 | ### TheiaAI/TheiaIDE
147 |
148 | [Theia AI](https://eclipsesource.com/blogs/2024/10/07/introducing-theia-ai/) is a framework for building AI-enhanced tools and IDEs. The [AI-powered Theia IDE](https://eclipsesource.com/blogs/2024/10/08/introducting-ai-theia-ide/) is an open and flexible development environment built on Theia AI.
149 |
150 | **Key features:**
151 |
152 | * **Tool Integration**: Theia AI enables AI agents, including those in the Theia IDE, to utilize MCP servers for seamless tool interaction.
153 | * **Customizable Prompts**: The Theia IDE allows users to define and adapt prompts, dynamically integrating MCP servers for tailored workflows.
154 | * **Custom agents**: The Theia IDE supports creating custom agents that leverage MCP capabilities, enabling users to design dedicated workflows on the fly.
155 |
156 | Theia AI and Theia IDE's MCP integration provides users with flexibility, making them powerful platforms for exploring and adapting MCP.
157 |
158 | **Learn more:**
159 |
160 | * [Theia IDE and Theia AI MCP Announcement](https://eclipsesource.com/blogs/2024/12/19/theia-ide-and-theia-ai-support-mcp/)
161 | * [Download the AI-powered Theia IDE](https://theia-ide.org/)
162 |
163 | ### Superinterface
164 |
165 | [Superinterface](https://superinterface.ai) is AI infrastructure and a developer platform to build in-app AI assistants with support for MCP, interactive components, client-side function calling and more.
166 |
167 | **Key features:**
168 |
169 | * Use tools from MCP servers in assistants embedded via React components or script tags
170 | * SSE transport support
171 | * Use any AI model from any AI provider (OpenAI, Anthropic, Ollama, others)
172 |
173 | ### 5ire
174 |
175 | [5ire](https://github.com/nanbingxyz/5ire) is an open source cross-platform desktop AI assistant that supports tools through MCP servers.
176 |
177 | **Key features:**
178 |
179 | * Built-in MCP servers can be quickly enabled and disabled.
180 | * Users can add more servers by modifying the configuration file.
181 | * It is open-source and user-friendly, suitable for beginners.
182 | * MCP support will continue to improve in future releases.
183 |
184 | ### Bee Agent Framework
185 |
186 | [Bee Agent Framework](https://i-am-bee.github.io/bee-agent-framework) is an open-source framework for building, deploying, and serving powerful agentic workflows at scale. The framework includes the **MCP Tool**, a native feature that simplifies the integration of MCP servers into agentic workflows.
187 |
188 | **Key features:**
189 |
190 | * Seamlessly incorporate MCP tools into agentic workflows.
191 | * Quickly instantiate framework-native tools from connected MCP client(s).
192 | * Planned future support for agentic MCP capabilities.
193 |
194 | **Learn more:**
195 |
196 | * [Example of using MCP tools in agentic workflow](https://i-am-bee.github.io/bee-agent-framework/#/tools?id=using-the-mcptool-class)
197 |
198 | ## Adding MCP support to your application
199 |
200 | If you've added MCP support to your application, we encourage you to submit a pull request to add it to this list. MCP integration can provide your users with powerful contextual AI capabilities and make your application part of the growing MCP ecosystem.
201 |
202 | Benefits of adding MCP support:
203 |
204 | * Enable users to bring their own context and tools
205 | * Join a growing ecosystem of interoperable AI applications
206 | * Provide users with flexible integration options
207 | * Support local-first AI workflows
208 |
209 | To get started with implementing MCP in your application, check out our [Python SDK](https://github.com/modelcontextprotocol/python-sdk) or [TypeScript SDK](https://github.com/modelcontextprotocol/typescript-sdk) documentation.
210 |
211 | ## Updates and corrections
212 |
213 | This list is maintained by the community. If you notice any inaccuracies or would like to update information about MCP support in your application, please submit a pull request or [open an issue in our documentation repository](https://github.com/modelcontextprotocol/docs/issues).
214 |
215 |
216 | # Contributing
217 |
218 | How to participate in Model Context Protocol development
219 |
220 | We welcome contributions from the community! Please review our [contributing guidelines](https://github.com/modelcontextprotocol/.github/blob/main/CONTRIBUTING.md) for details on how to submit changes.
221 |
222 | All contributors must adhere to our [Code of Conduct](https://github.com/modelcontextprotocol/.github/blob/main/CODE_OF_CONDUCT.md).
223 |
224 | For questions and discussions, please use [GitHub Discussions](https://github.com/orgs/modelcontextprotocol/discussions).
225 |
226 |
227 | # Roadmap
228 |
229 | Our plans for evolving Model Context Protocol (H1 2025)
230 |
231 | The Model Context Protocol is rapidly evolving. This page outlines our current thinking on key priorities and future direction for **the first half of 2025**, though these may change significantly as the project develops.
232 |
233 | <Note>The ideas presented here are not commitments—we may solve these challenges differently than described, or some may not materialize at all. This is also not an *exhaustive* list; we may incorporate work that isn't mentioned here.</Note>
234 |
235 | We encourage community participation! Each section links to relevant discussions where you can learn more and contribute your thoughts.
236 |
237 | ## Remote MCP Support
238 |
239 | Our top priority is enabling [remote MCP connections](https://github.com/modelcontextprotocol/specification/discussions/102), allowing clients to securely connect to MCP servers over the internet. Key initiatives include:
240 |
241 | * [**Authentication & Authorization**](https://github.com/modelcontextprotocol/specification/discussions/64): Adding standardized auth capabilities, particularly focused on OAuth 2.0 support.
242 |
243 | * [**Service Discovery**](https://github.com/modelcontextprotocol/specification/discussions/69): Defining how clients can discover and connect to remote MCP servers.
244 |
245 | * [**Stateless Operations**](https://github.com/modelcontextprotocol/specification/discussions/102): Exploring whether MCP could also encompass serverless environments, which would need to operate in a mostly stateless way.
246 |
247 | ## Reference Implementations
248 |
249 | To help developers build with MCP, we want to offer documentation for:
250 |
251 | * **Client Examples**: Comprehensive reference client implementation(s), demonstrating all protocol features
252 | * **Protocol Drafting**: Streamlined process for proposing and incorporating new protocol features
253 |
254 | ## Distribution & Discovery
255 |
256 | Looking ahead, we're exploring ways to make MCP servers more accessible. Some areas we may investigate include:
257 |
258 | * **Package Management**: Standardized packaging format for MCP servers
259 | * **Installation Tools**: Simplified server installation across MCP clients
260 | * **Sandboxing**: Improved security through server isolation
261 | * **Server Registry**: A common directory for discovering available MCP servers
262 |
263 | ## Agent Support
264 |
265 | We're expanding MCP's capabilities for [complex agentic workflows](https://github.com/modelcontextprotocol/specification/discussions/111), particularly focusing on:
266 |
267 | * [**Hierarchical Agent Systems**](https://github.com/modelcontextprotocol/specification/discussions/94): Improved support for trees of agents through namespacing and topology awareness.
268 |
269 | * [**Interactive Workflows**](https://github.com/modelcontextprotocol/specification/issues/97): Better handling of user permissions and information requests across agent hierarchies, and ways to send output to users instead of models.
270 |
271 | * [**Streaming Results**](https://github.com/modelcontextprotocol/specification/issues/117): Real-time updates from long-running agent operations.
272 |
273 | ## Broader Ecosystem
274 |
275 | We're also invested in:
276 |
277 | * **Community-Led Standards Development**: Fostering a collaborative ecosystem where all AI providers can help shape MCP as an open standard through equal participation and shared governance, ensuring it meets the needs of diverse AI applications and use cases.
278 | * [**Additional Modalities**](https://github.com/modelcontextprotocol/specification/discussions/88): Expanding beyond text to support audio, video, and other formats.
279 | * **Standardization**: Considering standardization through a standards body.
280 |
281 | ## Get Involved
282 |
283 | We welcome community participation in shaping MCP's future. Visit our [GitHub Discussions](https://github.com/orgs/modelcontextprotocol/discussions) to join the conversation and contribute your ideas.
284 |
285 |
286 | # What's New
287 |
288 | The latest updates and improvements to MCP
289 |
290 | <Update label="2025-01-18" description="SDK and Server Improvements">
291 | * Simplified, express-like API in the [TypeScript SDK](https://github.com/modelcontextprotocol/typescript-sdk)
292 | * Added 8 new clients to the [clients page](https://modelcontextprotocol.io/clients)
293 | </Update>
294 |
295 | <Update label="2025-01-03" description="SDK and Server Improvements">
296 | * FastMCP API in the [Python SDK](https://github.com/modelcontextprotocol/python-sdk)
297 | * Dockerized MCP servers in the [servers repo](https://github.com/modelcontextprotocol/servers)
298 | </Update>
299 |
300 | <Update label="2024-12-21" description="Kotlin SDK released">
301 | * JetBrains released a Kotlin SDK for MCP!
302 | * For a sample MCP Kotlin server, check out [this repository](https://github.com/modelcontextprotocol/kotlin-sdk/tree/main/samples/kotlin-mcp-server)
303 | </Update>
304 |
305 |
306 | # Core architecture
307 |
308 | Understand how MCP connects clients, servers, and LLMs
309 |
310 | The Model Context Protocol (MCP) is built on a flexible, extensible architecture that enables seamless communication between LLM applications and integrations. This document covers the core architectural components and concepts.
311 |
312 | ## Overview
313 |
314 | MCP follows a client-server architecture where:
315 |
316 | * **Hosts** are LLM applications (like Claude Desktop or IDEs) that initiate connections
317 | * **Clients** maintain 1:1 connections with servers, inside the host application
318 | * **Servers** provide context, tools, and prompts to clients
319 |
320 | ```mermaid
321 | flowchart LR
322 | subgraph " Host (e.g., Claude Desktop) "
323 | client1[MCP Client]
324 | client2[MCP Client]
325 | end
326 | subgraph "Server Process"
327 | server1[MCP Server]
328 | end
329 | subgraph "Server Process"
330 | server2[MCP Server]
331 | end
332 |
333 | client1 <-->|Transport Layer| server1
334 | client2 <-->|Transport Layer| server2
335 | ```
336 |
337 | ## Core components
338 |
339 | ### Protocol layer
340 |
341 | The protocol layer handles message framing, request/response linking, and high-level communication patterns.
342 |
343 | <Tabs>
344 | <Tab title="TypeScript">
345 | ```typescript
346 | class Protocol<Request, Notification, Result> {
347 | // Handle incoming requests
348 | setRequestHandler<T>(schema: T, handler: (request: T, extra: RequestHandlerExtra) => Promise<Result>): void
349 |
350 | // Handle incoming notifications
351 | setNotificationHandler<T>(schema: T, handler: (notification: T) => Promise<void>): void
352 |
353 | // Send requests and await responses
354 | request<T>(request: Request, schema: T, options?: RequestOptions): Promise<T>
355 |
356 | // Send one-way notifications
357 | notification(notification: Notification): Promise<void>
358 | }
359 | ```
360 | </Tab>
361 |
362 | <Tab title="Python">
363 | ```python
364 | class Session(BaseSession[RequestT, NotificationT, ResultT]):
365 | async def send_request(
366 | self,
367 | request: RequestT,
368 | result_type: type[Result]
369 | ) -> Result:
370 | """
371 | Send request and wait for response. Raises McpError if response contains error.
372 | """
373 | # Request handling implementation
374 |
375 | async def send_notification(
376 | self,
377 | notification: NotificationT
378 | ) -> None:
379 | """Send one-way notification that doesn't expect response."""
380 | # Notification handling implementation
381 |
382 | async def _received_request(
383 | self,
384 | responder: RequestResponder[ReceiveRequestT, ResultT]
385 | ) -> None:
386 | """Handle incoming request from other side."""
387 | # Request handling implementation
388 |
389 | async def _received_notification(
390 | self,
391 | notification: ReceiveNotificationT
392 | ) -> None:
393 | """Handle incoming notification from other side."""
394 | # Notification handling implementation
395 | ```
396 | </Tab>
397 | </Tabs>
398 |
399 | Key classes include:
400 |
401 | * `Protocol`
402 | * `Client`
403 | * `Server`
404 |
405 | ### Transport layer
406 |
407 | The transport layer handles the actual communication between clients and servers. MCP supports multiple transport mechanisms:
408 |
409 | 1. **Stdio transport**
410 | * Uses standard input/output for communication
411 | * Ideal for local processes
412 |
413 | 2. **HTTP with SSE transport**
414 | * Uses Server-Sent Events for server-to-client messages
415 | * HTTP POST for client-to-server messages
416 |
417 | All transports use [JSON-RPC](https://www.jsonrpc.org/) 2.0 to exchange messages. See the [specification](https://spec.modelcontextprotocol.io) for detailed information about the Model Context Protocol message format.
418 |
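As a hedged sketch of the stdio transport from the client side (assuming the Python SDK's `stdio_client` and `ClientSession` helpers; the `python example_server.py` command is a hypothetical placeholder), a client can spawn a local server process and exchange messages with it over stdin/stdout:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical local server launched as a subprocess and reached over stdio
server_params = StdioServerParameters(command="python", args=["example_server.py"])

async def main():
    # stdio_client spawns the process and yields read/write message streams
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()          # MCP handshake
            tools = await session.list_tools()  # ordinary JSON-RPC request/response
            print(tools)

asyncio.run(main())
```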
419 | ### Message types
420 |
421 | MCP has these main types of messages:
422 |
423 | 1. **Requests** expect a response from the other side:
424 | ```typescript
425 | interface Request {
426 | method: string;
427 | params?: { ... };
428 | }
429 | ```
430 |
431 | 2. **Results** are successful responses to requests:
432 | ```typescript
433 | interface Result {
434 | [key: string]: unknown;
435 | }
436 | ```
437 |
438 | 3. **Errors** indicate that a request failed:
439 | ```typescript
440 | interface Error {
441 | code: number;
442 | message: string;
443 | data?: unknown;
444 | }
445 | ```
446 |
447 | 4. **Notifications** are one-way messages that don't expect a response:
448 | ```typescript
449 | interface Notification {
450 | method: string;
451 | params?: { ... };
452 | }
453 | ```
454 |
455 | ## Connection lifecycle
456 |
457 | ### 1. Initialization
458 |
459 | ```mermaid
460 | sequenceDiagram
461 | participant Client
462 | participant Server
463 |
464 | Client->>Server: initialize request
465 | Server->>Client: initialize response
466 | Client->>Server: initialized notification
467 |
468 | Note over Client,Server: Connection ready for use
469 | ```
470 |
471 | 1. Client sends `initialize` request with protocol version and capabilities
472 | 2. Server responds with its protocol version and capabilities
473 | 3. Client sends `initialized` notification as acknowledgment
474 | 4. Normal message exchange begins
475 |
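On the wire, the handshake above is two JSON-RPC messages followed by a notification. This is a sketch with illustrative values; the protocol version string and capability contents depend on the implementations involved:

```typescript
// 1. Client -> Server
{ jsonrpc: "2.0", id: 1, method: "initialize", params: {
    protocolVersion: "2024-11-05",
    capabilities: {},
    clientInfo: { name: "example-client", version: "1.0.0" }
} }

// 2. Server -> Client
{ jsonrpc: "2.0", id: 1, result: {
    protocolVersion: "2024-11-05",
    capabilities: { resources: {} },
    serverInfo: { name: "example-server", version: "1.0.0" }
} }

// 3. Client -> Server
{ jsonrpc: "2.0", method: "notifications/initialized" }
```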
476 | ### 2. Message exchange
477 |
478 | After initialization, the following patterns are supported:
479 |
480 | * **Request-Response**: Client or server sends requests, the other responds
481 | * **Notifications**: Either party sends one-way messages
482 |
483 | ### 3. Termination
484 |
485 | Either party can terminate the connection:
486 |
487 | * Clean shutdown via `close()`
488 | * Transport disconnection
489 | * Error conditions
490 |
491 | ## Error handling
492 |
493 | MCP defines these standard error codes:
494 |
495 | ```typescript
496 | enum ErrorCode {
497 | // Standard JSON-RPC error codes
498 | ParseError = -32700,
499 | InvalidRequest = -32600,
500 | MethodNotFound = -32601,
501 | InvalidParams = -32602,
502 | InternalError = -32603
503 | }
504 | ```
505 |
506 | SDKs and applications can define their own error codes above -32000.
507 |
508 | Errors are propagated through:
509 |
510 | * Error responses to requests
511 | * Error events on transports
512 | * Protocol-level error handlers
513 |
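For example, a request that names a method the server does not implement would fail with a standard JSON-RPC error response (values illustrative):

```typescript
// Error response to request id 2
{
  jsonrpc: "2.0",
  id: 2,
  error: {
    code: -32601,  // MethodNotFound
    message: "Method not found: example/unknown"
  }
}
```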
514 | ## Implementation example
515 |
516 | Here's a basic example of implementing an MCP server:
517 |
518 | <Tabs>
519 | <Tab title="TypeScript">
520 | ```typescript
521 | import { Server } from "@modelcontextprotocol/sdk/server/index.js";
522 | import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
523 |
524 | const server = new Server({
525 | name: "example-server",
526 | version: "1.0.0"
527 | }, {
528 | capabilities: {
529 | resources: {}
530 | }
531 | });
532 |
533 | // Handle requests
534 | server.setRequestHandler(ListResourcesRequestSchema, async () => {
535 | return {
536 | resources: [
537 | {
538 | uri: "example://resource",
539 | name: "Example Resource"
540 | }
541 | ]
542 | };
543 | });
544 |
545 | // Connect transport
546 | const transport = new StdioServerTransport();
547 | await server.connect(transport);
548 | ```
549 | </Tab>
550 |
551 | <Tab title="Python">
552 | ```python
553 | import asyncio
554 | import mcp.types as types
555 | from mcp.server import Server
556 | from mcp.server.stdio import stdio_server
557 |
558 | app = Server("example-server")
559 |
560 | @app.list_resources()
561 | async def list_resources() -> list[types.Resource]:
562 | return [
563 | types.Resource(
564 | uri="example://resource",
565 | name="Example Resource"
566 | )
567 | ]
568 |
569 | async def main():
570 | async with stdio_server() as streams:
571 | await app.run(
572 | streams[0],
573 | streams[1],
574 | app.create_initialization_options()
575 | )
576 |
577 | if __name__ == "__main__":
578 |     asyncio.run(main())
579 | ```
580 | </Tab>
581 | </Tabs>
582 |
583 | ## Best practices
584 |
585 | ### Transport selection
586 |
587 | 1. **Local communication**
588 | * Use stdio transport for local processes
589 | * Efficient for same-machine communication
590 | * Simple process management
591 |
592 | 2. **Remote communication**
593 | * Use SSE for scenarios requiring HTTP compatibility
594 | * Consider security implications including authentication and authorization
595 |
596 | ### Message handling
597 |
598 | 1. **Request processing**
599 | * Validate inputs thoroughly
600 | * Use type-safe schemas
601 | * Handle errors gracefully
602 | * Implement timeouts
603 |
604 | 2. **Progress reporting**
605 | * Use progress tokens for long operations
606 | * Report progress incrementally
607 | * Include total progress when known
608 |
609 | 3. **Error management**
610 | * Use appropriate error codes
611 | * Include helpful error messages
612 | * Clean up resources on errors
613 |
614 | ## Security considerations
615 |
616 | 1. **Transport security**
617 | * Use TLS for remote connections
618 | * Validate connection origins
619 | * Implement authentication when needed
620 |
621 | 2. **Message validation**
622 | * Validate all incoming messages
623 | * Sanitize inputs
624 | * Check message size limits
625 | * Verify JSON-RPC format
626 |
627 | 3. **Resource protection**
628 | * Implement access controls
629 | * Validate resource paths
630 | * Monitor resource usage
631 | * Rate limit requests
632 |
633 | 4. **Error handling**
634 | * Don't leak sensitive information
635 | * Log security-relevant errors
636 | * Implement proper cleanup
637 | * Handle DoS scenarios
638 |
639 | ## Debugging and monitoring
640 |
641 | 1. **Logging**
642 | * Log protocol events
643 | * Track message flow
644 | * Monitor performance
645 | * Record errors
646 |
647 | 2. **Diagnostics**
648 | * Implement health checks
649 | * Monitor connection state
650 | * Track resource usage
651 | * Profile performance
652 |
653 | 3. **Testing**
654 | * Test different transports
655 | * Verify error handling
656 | * Check edge cases
657 | * Load test servers
658 |
659 |
660 | # Prompts
661 |
662 | Create reusable prompt templates and workflows
663 |
664 | Prompts enable servers to define reusable prompt templates and workflows that clients can easily surface to users and LLMs. They provide a powerful way to standardize and share common LLM interactions.
665 |
666 | <Note>
667 | Prompts are designed to be **user-controlled**, meaning they are exposed from servers to clients with the intention of the user being able to explicitly select them for use.
668 | </Note>
669 |
670 | ## Overview
671 |
672 | Prompts in MCP are predefined templates that can:
673 |
674 | * Accept dynamic arguments
675 | * Include context from resources
676 | * Chain multiple interactions
677 | * Guide specific workflows
678 | * Surface as UI elements (like slash commands)
679 |
680 | ## Prompt structure
681 |
682 | Each prompt is defined with:
683 |
684 | ```typescript
685 | {
686 | name: string; // Unique identifier for the prompt
687 | description?: string; // Human-readable description
688 | arguments?: [ // Optional list of arguments
689 | {
690 | name: string; // Argument identifier
691 | description?: string; // Argument description
692 | required?: boolean; // Whether argument is required
693 | }
694 | ]
695 | }
696 | ```
697 |
698 | ## Discovering prompts
699 |
700 | Clients can discover available prompts through the `prompts/list` endpoint:
701 |
702 | ```typescript
703 | // Request
704 | {
705 | method: "prompts/list"
706 | }
707 |
708 | // Response
709 | {
710 | prompts: [
711 | {
712 | name: "analyze-code",
713 | description: "Analyze code for potential improvements",
714 | arguments: [
715 | {
716 | name: "language",
717 | description: "Programming language",
718 | required: true
719 | }
720 | ]
721 | }
722 | ]
723 | }
724 | ```
725 |
726 | ## Using prompts
727 |
728 | To use a prompt, clients make a `prompts/get` request:
729 |
730 | ````typescript
731 | // Request
732 | {
733 | method: "prompts/get",
734 | params: {
735 | name: "analyze-code",
736 | arguments: {
737 | language: "python"
738 | }
739 | }
740 | }
741 |
742 | // Response
743 | {
744 | description: "Analyze Python code for potential improvements",
745 | messages: [
746 | {
747 | role: "user",
748 | content: {
749 | type: "text",
750 | text: "Please analyze the following Python code for potential improvements:\n\n```python\ndef calculate_sum(numbers):\n total = 0\n for num in numbers:\n total = total + num\n return total\n\nresult = calculate_sum([1, 2, 3, 4, 5])\nprint(result)\n```"
751 | }
752 | }
753 | ]
754 | }
755 | ````
756 |
757 | ## Dynamic prompts
758 |
759 | Prompts can be dynamic and include:
760 |
761 | ### Embedded resource context
762 |
763 | ```json
764 | {
765 | "name": "analyze-project",
766 | "description": "Analyze project logs and code",
767 | "arguments": [
768 | {
769 | "name": "timeframe",
770 | "description": "Time period to analyze logs",
771 | "required": true
772 | },
773 | {
774 | "name": "fileUri",
775 | "description": "URI of code file to review",
776 | "required": true
777 | }
778 | ]
779 | }
780 | ```
781 |
782 | When handling the `prompts/get` request:
783 |
784 | ```json
785 | {
786 | "messages": [
787 | {
788 | "role": "user",
789 | "content": {
790 | "type": "text",
791 | "text": "Analyze these system logs and the code file for any issues:"
792 | }
793 | },
794 | {
795 | "role": "user",
796 | "content": {
797 | "type": "resource",
798 | "resource": {
799 | "uri": "logs://recent?timeframe=1h",
800 | "text": "[2024-03-14 15:32:11] ERROR: Connection timeout in network.py:127\n[2024-03-14 15:32:15] WARN: Retrying connection (attempt 2/3)\n[2024-03-14 15:32:20] ERROR: Max retries exceeded",
801 | "mimeType": "text/plain"
802 | }
803 | }
804 | },
805 | {
806 | "role": "user",
807 | "content": {
808 | "type": "resource",
809 | "resource": {
810 | "uri": "file:///path/to/code.py",
811 | "text": "def connect_to_service(timeout=30):\n retries = 3\n for attempt in range(retries):\n try:\n return establish_connection(timeout)\n except TimeoutError:\n if attempt == retries - 1:\n raise\n time.sleep(5)\n\ndef establish_connection(timeout):\n # Connection implementation\n pass",
812 | "mimeType": "text/x-python"
813 | }
814 | }
815 | }
816 | ]
817 | }
818 | ```
819 |
820 | ### Multi-step workflows
821 |
822 | ```typescript
823 | const debugWorkflow = {
824 | name: "debug-error",
825 | async getMessages(error: string) {
826 | return [
827 | {
828 | role: "user",
829 | content: {
830 | type: "text",
831 | text: `Here's an error I'm seeing: ${error}`
832 | }
833 | },
834 | {
835 | role: "assistant",
836 | content: {
837 | type: "text",
838 | text: "I'll help analyze this error. What have you tried so far?"
839 | }
840 | },
841 | {
842 | role: "user",
843 | content: {
844 | type: "text",
845 | text: "I've tried restarting the service, but the error persists."
846 | }
847 | }
848 | ];
849 | }
850 | };
851 | ```
852 |
853 | ## Example implementation
854 |
855 | Here's a complete example of implementing prompts in an MCP server:
856 |
857 | <Tabs>
858 | <Tab title="TypeScript">
859 | ```typescript
860 | import { Server } from "@modelcontextprotocol/sdk/server";
861 | import {
862 | ListPromptsRequestSchema,
863 | GetPromptRequestSchema
864 | } from "@modelcontextprotocol/sdk/types";
865 |
866 | const PROMPTS = {
867 | "git-commit": {
868 | name: "git-commit",
869 | description: "Generate a Git commit message",
870 | arguments: [
871 | {
872 | name: "changes",
873 | description: "Git diff or description of changes",
874 | required: true
875 | }
876 | ]
877 | },
878 | "explain-code": {
879 | name: "explain-code",
880 | description: "Explain how code works",
881 | arguments: [
882 | {
883 | name: "code",
884 | description: "Code to explain",
885 | required: true
886 | },
887 | {
888 | name: "language",
889 | description: "Programming language",
890 | required: false
891 | }
892 | ]
893 | }
894 | };
895 |
896 | const server = new Server({
897 | name: "example-prompts-server",
898 | version: "1.0.0"
899 | }, {
900 | capabilities: {
901 | prompts: {}
902 | }
903 | });
904 |
905 | // List available prompts
906 | server.setRequestHandler(ListPromptsRequestSchema, async () => {
907 | return {
908 | prompts: Object.values(PROMPTS)
909 | };
910 | });
911 |
912 | // Get specific prompt
913 | server.setRequestHandler(GetPromptRequestSchema, async (request) => {
914 | const prompt = PROMPTS[request.params.name];
915 | if (!prompt) {
916 | throw new Error(`Prompt not found: ${request.params.name}`);
917 | }
918 |
919 | if (request.params.name === "git-commit") {
920 | return {
921 | messages: [
922 | {
923 | role: "user",
924 | content: {
925 | type: "text",
926 | text: `Generate a concise but descriptive commit message for these changes:\n\n${request.params.arguments?.changes}`
927 | }
928 | }
929 | ]
930 | };
931 | }
932 |
933 | if (request.params.name === "explain-code") {
934 | const language = request.params.arguments?.language || "Unknown";
935 | return {
936 | messages: [
937 | {
938 | role: "user",
939 | content: {
940 | type: "text",
941 | text: `Explain how this ${language} code works:\n\n${request.params.arguments?.code}`
942 | }
943 | }
944 | ]
945 | };
946 | }
947 |
948 | throw new Error("Prompt implementation not found");
949 | });
950 | ```
951 | </Tab>
952 |
953 | <Tab title="Python">
954 | ```python
955 | from mcp.server import Server
956 | import mcp.types as types
957 |
958 | # Define available prompts
959 | PROMPTS = {
960 | "git-commit": types.Prompt(
961 | name="git-commit",
962 | description="Generate a Git commit message",
963 | arguments=[
964 | types.PromptArgument(
965 | name="changes",
966 | description="Git diff or description of changes",
967 | required=True
968 | )
969 | ],
970 | ),
971 | "explain-code": types.Prompt(
972 | name="explain-code",
973 | description="Explain how code works",
974 | arguments=[
975 | types.PromptArgument(
976 | name="code",
977 | description="Code to explain",
978 | required=True
979 | ),
980 | types.PromptArgument(
981 | name="language",
982 | description="Programming language",
983 | required=False
984 | )
985 | ],
986 | )
987 | }
988 |
989 | # Initialize server
990 | app = Server("example-prompts-server")
991 |
992 | @app.list_prompts()
993 | async def list_prompts() -> list[types.Prompt]:
994 | return list(PROMPTS.values())
995 |
996 | @app.get_prompt()
997 | async def get_prompt(
998 | name: str, arguments: dict[str, str] | None = None
999 | ) -> types.GetPromptResult:
1000 | if name not in PROMPTS:
1001 | raise ValueError(f"Prompt not found: {name}")
1002 |
1003 | if name == "git-commit":
1004 | changes = arguments.get("changes") if arguments else ""
1005 | return types.GetPromptResult(
1006 | messages=[
1007 | types.PromptMessage(
1008 | role="user",
1009 | content=types.TextContent(
1010 | type="text",
1011 | text=f"Generate a concise but descriptive commit message "
1012 | f"for these changes:\n\n{changes}"
1013 | )
1014 | )
1015 | ]
1016 | )
1017 |
1018 | if name == "explain-code":
1019 | code = arguments.get("code") if arguments else ""
1020 | language = arguments.get("language", "Unknown") if arguments else "Unknown"
1021 | return types.GetPromptResult(
1022 | messages=[
1023 | types.PromptMessage(
1024 | role="user",
1025 | content=types.TextContent(
1026 | type="text",
1027 | text=f"Explain how this {language} code works:\n\n{code}"
1028 | )
1029 | )
1030 | ]
1031 | )
1032 |
1033 | raise ValueError("Prompt implementation not found")
1034 | ```
1035 | </Tab>
1036 | </Tabs>
1037 |
1038 | ## Best practices
1039 |
1040 | When implementing prompts:
1041 |
1042 | 1. Use clear, descriptive prompt names
1043 | 2. Provide detailed descriptions for prompts and arguments
1044 | 3. Validate all required arguments
1045 | 4. Handle missing arguments gracefully
1046 | 5. Consider versioning for prompt templates
1047 | 6. Cache dynamic content when appropriate
1048 | 7. Implement error handling
1049 | 8. Document expected argument formats
1050 | 9. Consider prompt composability
1051 | 10. Test prompts with various inputs
1052 |
1053 | ## UI integration
1054 |
1055 | Prompts can be surfaced in client UIs as:
1056 |
1057 | * Slash commands
1058 | * Quick actions
1059 | * Context menu items
1060 | * Command palette entries
1061 | * Guided workflows
1062 | * Interactive forms
1063 |
1064 | ## Updates and changes
1065 |
1066 | Servers can notify clients about prompt changes:
1067 |
1068 | 1. Server capability: `prompts.listChanged`
1069 | 2. Notification: `notifications/prompts/list_changed`
1070 | 3. Client re-fetches prompt list
1071 |
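Sketched as a wire message, step 2 is a plain one-way JSON-RPC notification; the client then calls `prompts/list` again to refresh its cached list:

```typescript
// Server -> Client, sent whenever the set of available prompts changes
{ jsonrpc: "2.0", method: "notifications/prompts/list_changed" }
```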
1072 | ## Security considerations
1073 |
1074 | When implementing prompts:
1075 |
1076 | * Validate all arguments
1077 | * Sanitize user input
1078 | * Consider rate limiting
1079 | * Implement access controls
1080 | * Audit prompt usage
1081 | * Handle sensitive data appropriately
1082 | * Validate generated content
1083 | * Implement timeouts
1084 | * Consider prompt injection risks
1085 | * Document security requirements
1086 |
1087 |
1088 | # Resources
1089 |
1090 | Expose data and content from your servers to LLMs
1091 |
1092 | Resources are a core primitive in the Model Context Protocol (MCP) that allow servers to expose data and content that can be read by clients and used as context for LLM interactions.
1093 |
1094 | <Note>
1095 | Resources are designed to be **application-controlled**, meaning that the client application can decide how and when they should be used.
1096 | Different MCP clients may handle resources differently. For example:
1097 |
1098 | * Claude Desktop currently requires users to explicitly select resources before they can be used
1099 | * Other clients might automatically select resources based on heuristics
1100 | * Some implementations may even allow the AI model itself to determine which resources to use
1101 |
1102 | Server authors should be prepared to handle any of these interaction patterns when implementing resource support. In order to expose data to models automatically, server authors should use a **model-controlled** primitive such as [Tools](./tools).
1103 | </Note>
1104 |
1105 | ## Overview
1106 |
1107 | Resources represent any kind of data that an MCP server wants to make available to clients. This can include:
1108 |
1109 | * File contents
1110 | * Database records
1111 | * API responses
1112 | * Live system data
1113 | * Screenshots and images
1114 | * Log files
1115 | * And more
1116 |
1117 | Each resource is identified by a unique URI and can contain either text or binary data.
1118 |
1119 | ## Resource URIs
1120 |
1121 | Resources are identified using URIs that follow this format:
1122 |
1123 | ```
1124 | [protocol]://[host]/[path]
1125 | ```
1126 |
1127 | For example:
1128 |
1129 | * `file:///home/user/documents/report.pdf`
1130 | * `postgres://database/customers/schema`
1131 | * `screen://localhost/display1`
1132 |
1133 | The protocol and path structure is defined by the MCP server implementation. Servers can define their own custom URI schemes.
1134 |
1135 | ## Resource types
1136 |
1137 | Resources can contain two types of content:
1138 |
1139 | ### Text resources
1140 |
1141 | Text resources contain UTF-8 encoded text data. These are suitable for:
1142 |
1143 | * Source code
1144 | * Configuration files
1145 | * Log files
1146 | * JSON/XML data
1147 | * Plain text
1148 |
1149 | ### Binary resources
1150 |
1151 | Binary resources contain raw binary data encoded in base64. These are suitable for:
1152 |
1153 | * Images
1154 | * PDFs
1155 | * Audio files
1156 | * Video files
1157 | * Other non-text formats
1158 |
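As a minimal Python sketch (the file name and URI are illustrative), binary content is typically read as bytes and base64-encoded before being placed in the `blob` field described under "Reading resources" below:

```python
import base64

# Read raw bytes and base64-encode them for a binary resource's `blob` field
with open("chart.png", "rb") as f:  # hypothetical image file
    encoded = base64.b64encode(f.read()).decode("ascii")

resource_contents = {
    "uri": "example://charts/latest",  # illustrative custom URI scheme
    "mimeType": "image/png",
    "blob": encoded,
}
```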
1159 | ## Resource discovery
1160 |
1161 | Clients can discover available resources through two main methods:
1162 |
1163 | ### Direct resources
1164 |
1165 | Servers expose a list of concrete resources via the `resources/list` endpoint. Each resource includes:
1166 |
1167 | ```typescript
1168 | {
1169 | uri: string; // Unique identifier for the resource
1170 | name: string; // Human-readable name
1171 | description?: string; // Optional description
1172 | mimeType?: string; // Optional MIME type
1173 | }
1174 | ```
1175 |
1176 | ### Resource templates
1177 |
1178 | For dynamic resources, servers can expose [URI templates](https://datatracker.ietf.org/doc/html/rfc6570) that clients can use to construct valid resource URIs:
1179 |
1180 | ```typescript
1181 | {
1182 | uriTemplate: string; // URI template following RFC 6570
1183 | name: string; // Human-readable name for this type
1184 | description?: string; // Optional description
1185 | mimeType?: string; // Optional MIME type for all matching resources
1186 | }
1187 | ```
1188 |
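For example, a server exposing per-date log files might advertise a template like this (values illustrative); a client could expand it to `file:///logs/2024-03-14.log` and read that URI as a normal resource:

```json
{
  "uriTemplate": "file:///logs/{date}.log",
  "name": "Daily application logs",
  "description": "Log file for a specific date (YYYY-MM-DD)",
  "mimeType": "text/plain"
}
```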
1189 | ## Reading resources
1190 |
1191 | To read a resource, clients make a `resources/read` request with the resource URI.
1192 |
1193 | The server responds with a list of resource contents:
1194 |
1195 | ```typescript
1196 | {
1197 | contents: [
1198 | {
1199 | uri: string; // The URI of the resource
1200 | mimeType?: string; // Optional MIME type
1201 |
1202 | // One of:
1203 | text?: string; // For text resources
1204 | blob?: string; // For binary resources (base64 encoded)
1205 | }
1206 | ]
1207 | }
1208 | ```
1209 |
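A concrete read exchange might look like this (URI and contents are illustrative):

```typescript
// Request
{
  method: "resources/read",
  params: { uri: "file:///logs/app.log" }
}

// Response
{
  contents: [
    {
      uri: "file:///logs/app.log",
      mimeType: "text/plain",
      text: "[2024-03-14 15:32:11] ERROR: Connection timeout"
    }
  ]
}
```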
1210 | <Tip>
1211 | Servers may return multiple resources in response to one `resources/read` request. This could be used, for example, to return a list of files inside a directory when the directory is read.
1212 | </Tip>
1213 |
1214 | ## Resource updates
1215 |
1216 | MCP supports real-time updates for resources through two mechanisms:
1217 |
1218 | ### List changes
1219 |
1220 | Servers can notify clients when their list of available resources changes via the `notifications/resources/list_changed` notification.
1221 |
1222 | ### Content changes
1223 |
1224 | Clients can subscribe to updates for specific resources:
1225 |
1226 | 1. Client sends `resources/subscribe` with resource URI
1227 | 2. Server sends `notifications/resources/updated` when the resource changes
1228 | 3. Client can fetch latest content with `resources/read`
1229 | 4. Client can unsubscribe with `resources/unsubscribe`
1230 |
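Sketched as wire messages (the URI is illustrative), the subscription flow above looks like:

```typescript
// 1. Client -> Server
{ method: "resources/subscribe", params: { uri: "file:///logs/app.log" } }

// 2. Server -> Client, whenever the resource changes
{ method: "notifications/resources/updated", params: { uri: "file:///logs/app.log" } }

// 3. Client -> Server, to fetch the latest contents
{ method: "resources/read", params: { uri: "file:///logs/app.log" } }

// 4. Client -> Server, when updates are no longer needed
{ method: "resources/unsubscribe", params: { uri: "file:///logs/app.log" } }
```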
1231 | ## Example implementation
1232 |
1233 | Here's a simple example of implementing resource support in an MCP server:
1234 |
1235 | <Tabs>
1236 | <Tab title="TypeScript">
1237 | ```typescript
1238 | const server = new Server({
1239 | name: "example-server",
1240 | version: "1.0.0"
1241 | }, {
1242 | capabilities: {
1243 | resources: {}
1244 | }
1245 | });
1246 |
1247 | // List available resources
1248 | server.setRequestHandler(ListResourcesRequestSchema, async () => {
1249 | return {
1250 | resources: [
1251 | {
1252 | uri: "file:///logs/app.log",
1253 | name: "Application Logs",
1254 | mimeType: "text/plain"
1255 | }
1256 | ]
1257 | };
1258 | });
1259 |
1260 | // Read resource contents
1261 | server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
1262 | const uri = request.params.uri;
1263 |
1264 | if (uri === "file:///logs/app.log") {
1265 | const logContents = await readLogFile();
1266 | return {
1267 | contents: [
1268 | {
1269 | uri,
1270 | mimeType: "text/plain",
1271 | text: logContents
1272 | }
1273 | ]
1274 | };
1275 | }
1276 |
1277 | throw new Error("Resource not found");
1278 | });
1279 | ```
1280 | </Tab>
1281 |
1282 | <Tab title="Python">
1283 | ```python
1284 | app = Server("example-server")
1285 |
1286 | @app.list_resources()
1287 | async def list_resources() -> list[types.Resource]:
1288 | return [
1289 | types.Resource(
1290 | uri="file:///logs/app.log",
1291 | name="Application Logs",
1292 | mimeType="text/plain"
1293 | )
1294 | ]
1295 |
1296 | @app.read_resource()
1297 | async def read_resource(uri: AnyUrl) -> str:
1298 | if str(uri) == "file:///logs/app.log":
1299 | log_contents = await read_log_file()
1300 | return log_contents
1301 |
1302 | raise ValueError("Resource not found")
1303 |
1304 | # Start server
1305 | async with stdio_server() as streams:
1306 | await app.run(
1307 | streams[0],
1308 | streams[1],
1309 | app.create_initialization_options()
1310 | )
1311 | ```
1312 | </Tab>
1313 | </Tabs>
1314 |
1315 | ## Best practices
1316 |
1317 | When implementing resource support:
1318 |
1319 | 1. Use clear, descriptive resource names and URIs
1320 | 2. Include helpful descriptions to guide LLM understanding
1321 | 3. Set appropriate MIME types when known
1322 | 4. Implement resource templates for dynamic content
1323 | 5. Use subscriptions for frequently changing resources
1324 | 6. Handle errors gracefully with clear error messages
1325 | 7. Consider pagination for large resource lists
1326 | 8. Cache resource contents when appropriate
1327 | 9. Validate URIs before processing
1328 | 10. Document your custom URI schemes
1329 |
1330 | ## Security considerations
1331 |
1332 | When exposing resources:
1333 |
1334 | * Validate all resource URIs
1335 | * Implement appropriate access controls
1336 | * Sanitize file paths to prevent directory traversal
1337 | * Be cautious with binary data handling
1338 | * Consider rate limiting for resource reads
1339 | * Audit resource access
1340 | * Encrypt sensitive data in transit
1341 | * Validate MIME types
1342 | * Implement timeouts for long-running reads
1343 | * Handle resource cleanup appropriately
1344 |
1345 |
1346 | # Roots
1347 |
1348 | Understanding roots in MCP
1349 |
1350 | Roots are a concept in MCP that define the boundaries where servers can operate. They provide a way for clients to inform servers about relevant resources and their locations.
1351 |
1352 | ## What are Roots?
1353 |
1354 | A root is a URI that a client suggests a server should focus on. When a client connects to a server, it declares which roots the server should work with. While primarily used for filesystem paths, roots can be any valid URI including HTTP URLs.
1355 |
1356 | For example, roots could be:
1357 |
1358 | ```
1359 | file:///home/user/projects/myapp
1360 | https://api.example.com/v1
1361 | ```
1362 |
1363 | ## Why Use Roots?
1364 |
1365 | Roots serve several important purposes:
1366 |
1367 | 1. **Guidance**: They inform servers about relevant resources and locations
1368 | 2. **Clarity**: Roots make it clear which resources are part of your workspace
1369 | 3. **Organization**: Multiple roots let you work with different resources simultaneously
1370 |
1371 | ## How Roots Work
1372 |
1373 | When a client supports roots, it:
1374 |
1375 | 1. Declares the `roots` capability during connection
1376 | 2. Provides a list of suggested roots to the server
1377 | 3. Notifies the server when roots change (if supported)
1378 |
1379 | While roots are informational and not strictly enforced, servers should:
1380 |
1381 | 1. Respect the provided roots
1382 | 2. Use root URIs to locate and access resources
1383 | 3. Prioritize operations within root boundaries
1384 |
1385 | ## Common Use Cases
1386 |
1387 | Roots are commonly used to define:
1388 |
1389 | * Project directories
1390 | * Repository locations
1391 | * API endpoints
1392 | * Configuration locations
1393 | * Resource boundaries
1394 |
1395 | ## Best Practices
1396 |
1397 | When working with roots:
1398 |
1399 | 1. Only suggest necessary resources
1400 | 2. Use clear, descriptive names for roots
1401 | 3. Monitor root accessibility
1402 | 4. Handle root changes gracefully
1403 |
1404 | ## Example
1405 |
1406 | Here's how a typical MCP client might expose roots:
1407 |
1408 | ```json
1409 | {
1410 | "roots": [
1411 | {
1412 | "uri": "file:///home/user/projects/frontend",
1413 | "name": "Frontend Repository"
1414 | },
1415 | {
1416 | "uri": "https://api.example.com/v1",
1417 | "name": "API Endpoint"
1418 | }
1419 | ]
1420 | }
1421 | ```
1422 |
1423 | This configuration suggests the server focus on both a local repository and an API endpoint while keeping them logically separated.
1424 |
1425 |
1426 | # Sampling
1427 |
1428 | Let your servers request completions from LLMs
1429 |
1430 | Sampling is a powerful MCP feature that allows servers to request LLM completions through the client, enabling sophisticated agentic behaviors while maintaining security and privacy.
1431 |
1432 | <Info>
1433 | This feature of MCP is not yet supported in the Claude Desktop client.
1434 | </Info>
1435 |
1436 | ## How sampling works
1437 |
1438 | The sampling flow follows these steps:
1439 |
1440 | 1. Server sends a `sampling/createMessage` request to the client
1441 | 2. Client reviews the request and can modify it
1442 | 3. Client samples from an LLM
1443 | 4. Client reviews the completion
1444 | 5. Client returns the result to the server
1445 |
1446 | This human-in-the-loop design ensures users maintain control over what the LLM sees and generates.
1447 |
1448 | ## Message format
1449 |
1450 | Sampling requests use a standardized message format:
1451 |
1452 | ```typescript
1453 | {
1454 | messages: [
1455 | {
1456 | role: "user" | "assistant",
1457 | content: {
1458 | type: "text" | "image",
1459 |
1460 | // For text:
1461 | text?: string,
1462 |
1463 | // For images:
1464 | data?: string, // base64 encoded
1465 | mimeType?: string
1466 | }
1467 | }
1468 | ],
1469 | modelPreferences?: {
1470 | hints?: [{
1471 | name?: string // Suggested model name/family
1472 | }],
1473 | costPriority?: number, // 0-1, importance of minimizing cost
1474 | speedPriority?: number, // 0-1, importance of low latency
1475 | intelligencePriority?: number // 0-1, importance of capabilities
1476 | },
1477 | systemPrompt?: string,
1478 | includeContext?: "none" | "thisServer" | "allServers",
1479 | temperature?: number,
1480 | maxTokens: number,
1481 | stopSequences?: string[],
1482 | metadata?: Record<string, unknown>
1483 | }
1484 | ```
1485 |
1486 | ## Request parameters
1487 |
1488 | ### Messages
1489 |
1490 | The `messages` array contains the conversation history to send to the LLM. Each message has:
1491 |
1492 | * `role`: Either "user" or "assistant"
1493 | * `content`: The message content, which can be:
1494 | * Text content with a `text` field
1495 | * Image content with `data` (base64) and `mimeType` fields
1496 |
1497 | ### Model preferences
1498 |
1499 | The `modelPreferences` object allows servers to specify their model selection preferences:
1500 |
1501 | * `hints`: Array of model name suggestions that clients can use to select an appropriate model:
1502 | * `name`: String that can match full or partial model names (e.g. "claude-3", "sonnet")
1503 | * Clients may map hints to equivalent models from different providers
1504 | * Multiple hints are evaluated in preference order
1505 |
1506 | * Priority values (0-1 normalized):
1507 | * `costPriority`: Importance of minimizing costs
1508 | * `speedPriority`: Importance of low latency response
1509 | * `intelligencePriority`: Importance of advanced model capabilities
1510 |
1511 | Clients make the final model selection based on these preferences and their available models.
1512 |
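For example, a server that prefers a fast, inexpensive Claude-family model might send preferences like these (values are illustrative):

```json
{
  "modelPreferences": {
    "hints": [{ "name": "claude-3-haiku" }, { "name": "claude" }],
    "costPriority": 0.8,
    "speedPriority": 0.7,
    "intelligencePriority": 0.3
  }
}
```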
1513 | ### System prompt
1514 |
1515 | An optional `systemPrompt` field allows servers to request a specific system prompt. The client may modify or ignore this.
1516 |
1517 | ### Context inclusion
1518 |
1519 | The `includeContext` parameter specifies what MCP context to include:
1520 |
1521 | * `"none"`: No additional context
1522 | * `"thisServer"`: Include context from the requesting server
1523 | * `"allServers"`: Include context from all connected MCP servers
1524 |
1525 | The client controls what context is actually included.
1526 |
1527 | ### Sampling parameters
1528 |
1529 | Fine-tune the LLM sampling with:
1530 |
1531 | * `temperature`: Controls randomness (0.0 to 1.0)
1532 | * `maxTokens`: Maximum tokens to generate
1533 | * `stopSequences`: Array of sequences that stop generation
1534 | * `metadata`: Additional provider-specific parameters
1535 |
1536 | ## Response format
1537 |
1538 | The client returns a completion result:
1539 |
1540 | ```typescript
1541 | {
1542 | model: string, // Name of the model used
1543 | stopReason?: "endTurn" | "stopSequence" | "maxTokens" | string,
1544 | role: "user" | "assistant",
1545 | content: {
1546 | type: "text" | "image",
1547 | text?: string,
1548 | data?: string,
1549 | mimeType?: string
1550 | }
1551 | }
1552 | ```
1553 |
1554 | ## Example request
1555 |
1556 | Here's an example of requesting sampling from a client:
1557 |
1558 | ```json
1559 | {
1560 | "method": "sampling/createMessage",
1561 | "params": {
1562 | "messages": [
1563 | {
1564 | "role": "user",
1565 | "content": {
1566 | "type": "text",
1567 | "text": "What files are in the current directory?"
1568 | }
1569 | }
1570 | ],
1571 | "systemPrompt": "You are a helpful file system assistant.",
1572 | "includeContext": "thisServer",
1573 | "maxTokens": 100
1574 | }
1575 | }
1576 | ```
1577 |
1578 | ## Best practices
1579 |
1580 | When implementing sampling:
1581 |
1582 | 1. Always provide clear, well-structured prompts
1583 | 2. Handle both text and image content appropriately
1584 | 3. Set reasonable token limits
1585 | 4. Include relevant context through `includeContext`
1586 | 5. Validate responses before using them
1587 | 6. Handle errors gracefully
1588 | 7. Consider rate limiting sampling requests
1589 | 8. Document expected sampling behavior
1590 | 9. Test with various model parameters
1591 | 10. Monitor sampling costs
1592 |
1593 | ## Human in the loop controls
1594 |
1595 | Sampling is designed with human oversight in mind:
1596 |
1597 | ### For prompts
1598 |
1599 | * Clients should show users the proposed prompt
1600 | * Users should be able to modify or reject prompts
1601 | * System prompts can be filtered or modified
1602 | * Context inclusion is controlled by the client
1603 |
1604 | ### For completions
1605 |
1606 | * Clients should show users the completion
1607 | * Users should be able to modify or reject completions
1608 | * Clients can filter or modify completions
1609 | * Users control which model is used
1610 |
1611 | ## Security considerations
1612 |
1613 | When implementing sampling:
1614 |
1615 | * Validate all message content
1616 | * Sanitize sensitive information
1617 | * Implement appropriate rate limits
1618 | * Monitor sampling usage
1619 | * Encrypt data in transit
1620 | * Handle user data privacy
1621 | * Audit sampling requests
1622 | * Control cost exposure
1623 | * Implement timeouts
1624 | * Handle model errors gracefully
1625 |
1626 | ## Common patterns
1627 |
1628 | ### Agentic workflows
1629 |
1630 | Sampling enables agentic patterns like:
1631 |
1632 | * Reading and analyzing resources
1633 | * Making decisions based on context
1634 | * Generating structured data
1635 | * Handling multi-step tasks
1636 | * Providing interactive assistance
1637 |
1638 | ### Context management
1639 |
1640 | Best practices for context:
1641 |
1642 | * Request minimal necessary context
1643 | * Structure context clearly
1644 | * Handle context size limits
1645 | * Update context as needed
1646 | * Clean up stale context
1647 |
1648 | ### Error handling
1649 |
1650 | Robust error handling should:
1651 |
1652 | * Catch sampling failures
1653 | * Handle timeout errors
1654 | * Manage rate limits
1655 | * Validate responses
1656 | * Provide fallback behaviors
1657 | * Log errors appropriately
1658 |
1659 | ## Limitations
1660 |
1661 | Be aware of these limitations:
1662 |
1663 | * Sampling depends on client capabilities
1664 | * Users control sampling behavior
1665 | * Context size has limits
1666 | * Rate limits may apply
1667 | * Costs should be considered
1668 | * Model availability varies
1669 | * Response times vary
1670 | * Not all content types supported
1671 |
1672 |
1673 | # Tools
1674 |
1675 | Enable LLMs to perform actions through your server
1676 |
1677 | Tools are a powerful primitive in the Model Context Protocol (MCP) that enable servers to expose executable functionality to clients. Through tools, LLMs can interact with external systems, perform computations, and take actions in the real world.
1678 |
1679 | <Note>
1680 | Tools are designed to be **model-controlled**, meaning that tools are exposed from servers to clients with the intention of the AI model being able to automatically invoke them (with a human in the loop to grant approval).
1681 | </Note>
1682 |
1683 | ## Overview
1684 |
1685 | Tools in MCP allow servers to expose executable functions that can be invoked by clients and used by LLMs to perform actions. Key aspects of tools include:
1686 |
1687 | * **Discovery**: Clients can list available tools through the `tools/list` endpoint
1688 | * **Invocation**: Tools are called using the `tools/call` endpoint, where servers perform the requested operation and return results
1689 | * **Flexibility**: Tools can range from simple calculations to complex API interactions
1690 |
1691 | Like [resources](/docs/concepts/resources), tools are identified by unique names and can include descriptions to guide their usage. However, unlike resources, tools represent dynamic operations that can modify state or interact with external systems.
1692 |
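To make the flow concrete, here is a minimal client-side sketch using the Python SDK's `ClientSession` (the same `list_tools()` / `call_tool()` calls used in the client tutorial later in this document), assuming the server exposes the `calculate_sum` tool defined in the next section:

```python
from mcp import ClientSession


async def add_numbers(session: ClientSession) -> str:
    # Discovery: enumerate the server's tools (tools/list)
    listing = await session.list_tools()
    if "calculate_sum" not in [tool.name for tool in listing.tools]:
        raise RuntimeError("Server does not expose calculate_sum")

    # Invocation: call the tool with schema-conforming arguments (tools/call)
    result = await session.call_tool("calculate_sum", {"a": 2, "b": 3})

    # Tool results arrive as content blocks; this example expects one text block
    return result.content[0].text
```
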
1693 | ## Tool definition structure
1694 |
1695 | Each tool is defined with the following structure:
1696 |
1697 | ```typescript
1698 | {
1699 | name: string; // Unique identifier for the tool
1700 | description?: string; // Human-readable description
1701 | inputSchema: { // JSON Schema for the tool's parameters
1702 | type: "object",
1703 | properties: { ... } // Tool-specific parameters
1704 | }
1705 | }
1706 | ```
1707 |
1708 | ## Implementing tools
1709 |
1710 | Here's an example of implementing a basic tool in an MCP server:
1711 |
1712 | <Tabs>
1713 | <Tab title="TypeScript">
1714 | ```typescript
1715 | const server = new Server({
1716 | name: "example-server",
1717 | version: "1.0.0"
1718 | }, {
1719 | capabilities: {
1720 | tools: {}
1721 | }
1722 | });
1723 |
1724 | // Define available tools
1725 | server.setRequestHandler(ListToolsRequestSchema, async () => {
1726 | return {
1727 | tools: [{
1728 | name: "calculate_sum",
1729 | description: "Add two numbers together",
1730 | inputSchema: {
1731 | type: "object",
1732 | properties: {
1733 | a: { type: "number" },
1734 | b: { type: "number" }
1735 | },
1736 | required: ["a", "b"]
1737 | }
1738 | }]
1739 | };
1740 | });
1741 |
1742 | // Handle tool execution
1743 | server.setRequestHandler(CallToolRequestSchema, async (request) => {
1744 | if (request.params.name === "calculate_sum") {
1745 | const { a, b } = request.params.arguments;
1746 | return {
1747 | content: [
1748 | {
1749 | type: "text",
1750 | text: String(a + b)
1751 | }
1752 | ]
1753 | };
1754 | }
1755 | throw new Error("Tool not found");
1756 | });
1757 | ```
1758 | </Tab>
1759 |
1760 | <Tab title="Python">
1761 | ```python
1762 | app = Server("example-server")
1763 |
1764 | @app.list_tools()
1765 | async def list_tools() -> list[types.Tool]:
1766 | return [
1767 | types.Tool(
1768 | name="calculate_sum",
1769 | description="Add two numbers together",
1770 | inputSchema={
1771 | "type": "object",
1772 | "properties": {
1773 | "a": {"type": "number"},
1774 | "b": {"type": "number"}
1775 | },
1776 | "required": ["a", "b"]
1777 | }
1778 | )
1779 | ]
1780 |
1781 | @app.call_tool()
1782 | async def call_tool(
1783 | name: str,
1784 | arguments: dict
1785 | ) -> list[types.TextContent | types.ImageContent | types.EmbeddedResource]:
1786 | if name == "calculate_sum":
1787 | a = arguments["a"]
1788 | b = arguments["b"]
1789 | result = a + b
1790 | return [types.TextContent(type="text", text=str(result))]
1791 | raise ValueError(f"Tool not found: {name}")
1792 | ```
1793 | </Tab>
1794 | </Tabs>
1795 |
1796 | ## Example tool patterns
1797 |
1798 | Here are some examples of types of tools that a server could provide:
1799 |
1800 | ### System operations
1801 |
1802 | Tools that interact with the local system:
1803 |
1804 | ```typescript
1805 | {
1806 | name: "execute_command",
1807 | description: "Run a shell command",
1808 | inputSchema: {
1809 | type: "object",
1810 | properties: {
1811 | command: { type: "string" },
1812 | args: { type: "array", items: { type: "string" } }
1813 | }
1814 | }
1815 | }
1816 | ```
1817 |
1818 | ### API integrations
1819 |
1820 | Tools that wrap external APIs:
1821 |
1822 | ```typescript
1823 | {
1824 | name: "github_create_issue",
1825 | description: "Create a GitHub issue",
1826 | inputSchema: {
1827 | type: "object",
1828 | properties: {
1829 | title: { type: "string" },
1830 | body: { type: "string" },
1831 | labels: { type: "array", items: { type: "string" } }
1832 | }
1833 | }
1834 | }
1835 | ```
1836 |
1837 | ### Data processing
1838 |
1839 | Tools that transform or analyze data:
1840 |
1841 | ```typescript
1842 | {
1843 | name: "analyze_csv",
1844 | description: "Analyze a CSV file",
1845 | inputSchema: {
1846 | type: "object",
1847 | properties: {
1848 | filepath: { type: "string" },
1849 | operations: {
1850 | type: "array",
1851 | items: {
1852 | enum: ["sum", "average", "count"]
1853 | }
1854 | }
1855 | }
1856 | }
1857 | }
1858 | ```
1859 |
1860 | ## Best practices
1861 |
1862 | When implementing tools:
1863 |
1864 | 1. Provide clear, descriptive names and descriptions
1865 | 2. Use detailed JSON Schema definitions for parameters
1866 | 3. Include examples in tool descriptions to demonstrate how the model should use them
1867 | 4. Implement proper error handling and validation
1868 | 5. Use progress reporting for long operations
1869 | 6. Keep tool operations focused and atomic
1870 | 7. Document expected return value structures
1871 | 8. Implement proper timeouts
1872 | 9. Consider rate limiting for resource-intensive operations
1873 | 10. Log tool usage for debugging and monitoring
1874 |
1875 | ## Security considerations
1876 |
1877 | When exposing tools:
1878 |
1879 | ### Input validation
1880 |
1881 | * Validate all parameters against the schema
1882 | * Sanitize file paths and system commands (see the sketch after this list)
1883 | * Validate URLs and external identifiers
1884 | * Check parameter sizes and ranges
1885 | * Prevent command injection
1886 |
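To illustrate the path-sanitization point above, a hypothetical file-reading tool could validate its arguments against the declared schema and confine paths to an allowed root before doing any work. This is only a sketch; the `jsonschema` package, the tool schema, and the sandbox directory are assumptions for the example:

```python
from pathlib import Path

import jsonschema  # third-party JSON Schema validator

ALLOWED_ROOT = Path("/srv/mcp-data").resolve()  # hypothetical sandbox directory

READ_FILE_SCHEMA = {
    "type": "object",
    "properties": {"filepath": {"type": "string", "maxLength": 4096}},
    "required": ["filepath"],
}


def validate_read_file_args(arguments: dict) -> Path:
    # Reject anything that does not match the declared input schema
    jsonschema.validate(instance=arguments, schema=READ_FILE_SCHEMA)

    # Resolve the path and ensure it stays inside the allowed root (Python 3.9+)
    candidate = (ALLOWED_ROOT / arguments["filepath"]).resolve()
    if not candidate.is_relative_to(ALLOWED_ROOT):
        raise ValueError("Path escapes the allowed directory")
    return candidate
```
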
1887 | ### Access control
1888 |
1889 | * Implement authentication where needed
1890 | * Use appropriate authorization checks
1891 | * Audit tool usage
1892 | * Rate limit requests
1893 | * Monitor for abuse
1894 |
1895 | ### Error handling
1896 |
1897 | * Don't expose internal errors to clients
1898 | * Log security-relevant errors
1899 | * Handle timeouts appropriately
1900 | * Clean up resources after errors
1901 | * Validate return values
1902 |
1903 | ## Tool discovery and updates
1904 |
1905 | MCP supports dynamic tool discovery:
1906 |
1907 | 1. Clients can list available tools at any time
1908 | 2. Servers can notify clients when tools change using `notifications/tools/list_changed` (see the sketch below)
1909 | 3. Tools can be added or removed during runtime
1910 | 4. Tool definitions can be updated (though this should be done carefully)
1911 |
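For point 2 above, a server that registers tools at runtime can notify connected clients. The sketch below reuses the `request_context.session` pattern shown in the logging examples later in this guide; the `send_tool_list_changed()` helper name is an assumption rather than something documented on this page:

```python
# Hypothetical in-memory registry backing the list_tools() handler
REGISTERED_TOOLS: list = []


async def add_tool_at_runtime(app, tool) -> None:
    """Register a new tool and tell clients the list changed (sketch; API name assumed)."""
    REGISTERED_TOOLS.append(tool)
    # Typically only valid while handling a request, since that is when the
    # request context is populated; emits notifications/tools/list_changed
    await app.request_context.session.send_tool_list_changed()
```
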
1912 | ## Error handling
1913 |
1914 | Tool errors should be reported within the result object, not as MCP protocol-level errors. This allows the LLM to see and potentially handle the error. When a tool encounters an error:
1915 |
1916 | 1. Set `isError` to `true` in the result
1917 | 2. Include error details in the `content` array
1918 |
1919 | Here's an example of proper error handling for tools:
1920 |
1921 | <Tabs>
1922 | <Tab title="TypeScript">
1923 | ```typescript
1924 | try {
1925 | // Tool operation
1926 | const result = performOperation();
1927 | return {
1928 | content: [
1929 | {
1930 | type: "text",
1931 | text: `Operation successful: ${result}`
1932 | }
1933 | ]
1934 | };
1935 | } catch (error) {
1936 | return {
1937 | isError: true,
1938 | content: [
1939 | {
1940 | type: "text",
1941 | text: `Error: ${error.message}`
1942 | }
1943 | ]
1944 | };
1945 | }
1946 | ```
1947 | </Tab>
1948 |
1949 | <Tab title="Python">
1950 | ```python
1951 | try:
1952 | # Tool operation
1953 | result = perform_operation()
1954 | return types.CallToolResult(
1955 | content=[
1956 | types.TextContent(
1957 | type="text",
1958 | text=f"Operation successful: {result}"
1959 | )
1960 | ]
1961 | )
1962 | except Exception as error:
1963 | return types.CallToolResult(
1964 | isError=True,
1965 | content=[
1966 | types.TextContent(
1967 | type="text",
1968 | text=f"Error: {str(error)}"
1969 | )
1970 | ]
1971 | )
1972 | ```
1973 | </Tab>
1974 | </Tabs>
1975 |
1976 | This approach allows the LLM to see that an error occurred and potentially take corrective action or request human intervention.
1977 |
1978 | ## Testing tools
1979 |
1980 | A comprehensive testing strategy for MCP tools should cover:
1981 |
1982 | * **Functional testing**: Verify tools execute correctly with valid inputs and handle invalid inputs appropriately (see the sketch after this list)
1983 | * **Integration testing**: Test tool interaction with external systems using both real and mocked dependencies
1984 | * **Security testing**: Validate authentication, authorization, input sanitization, and rate limiting
1985 | * **Performance testing**: Check behavior under load, timeout handling, and resource cleanup
1986 | * **Error handling**: Ensure tools properly report errors through the MCP protocol and clean up resources
1987 |
1988 |
1989 | # Transports
1990 |
1991 | Learn about MCP's communication mechanisms
1992 |
1993 | Transports in the Model Context Protocol (MCP) provide the foundation for communication between clients and servers. A transport handles the underlying mechanics of how messages are sent and received.
1994 |
1995 | ## Message Format
1996 |
1997 | MCP uses [JSON-RPC](https://www.jsonrpc.org/) 2.0 as its wire format. The transport layer is responsible for converting MCP protocol messages into JSON-RPC format for transmission and converting received JSON-RPC messages back into MCP protocol messages.
1998 |
1999 | There are three types of JSON-RPC messages used:
2000 |
2001 | ### Requests
2002 |
2003 | ```typescript
2004 | {
2005 | jsonrpc: "2.0",
2006 | id: number | string,
2007 | method: string,
2008 | params?: object
2009 | }
2010 | ```
2011 |
2012 | ### Responses
2013 |
2014 | ```typescript
2015 | {
2016 | jsonrpc: "2.0",
2017 | id: number | string,
2018 | result?: object,
2019 | error?: {
2020 | code: number,
2021 | message: string,
2022 | data?: unknown
2023 | }
2024 | }
2025 | ```
2026 |
2027 | ### Notifications
2028 |
2029 | ```typescript
2030 | {
2031 | jsonrpc: "2.0",
2032 | method: string,
2033 | params?: object
2034 | }
2035 | ```
2036 |
2037 | ## Built-in Transport Types
2038 |
2039 | MCP includes two standard transport implementations:
2040 |
2041 | ### Standard Input/Output (stdio)
2042 |
2043 | The stdio transport enables communication through standard input and output streams. This is particularly useful for local integrations and command-line tools.
2044 |
2045 | Use stdio when:
2046 |
2047 | * Building command-line tools
2048 | * Implementing local integrations
2049 | * Needing simple process communication
2050 | * Working with shell scripts
2051 |
2052 | <Tabs>
2053 | <Tab title="TypeScript (Server)">
2054 | ```typescript
2055 | const server = new Server({
2056 | name: "example-server",
2057 | version: "1.0.0"
2058 | }, {
2059 | capabilities: {}
2060 | });
2061 |
2062 | const transport = new StdioServerTransport();
2063 | await server.connect(transport);
2064 | ```
2065 | </Tab>
2066 |
2067 | <Tab title="TypeScript (Client)">
2068 | ```typescript
2069 | const client = new Client({
2070 | name: "example-client",
2071 | version: "1.0.0"
2072 | }, {
2073 | capabilities: {}
2074 | });
2075 |
2076 | const transport = new StdioClientTransport({
2077 | command: "./server",
2078 | args: ["--option", "value"]
2079 | });
2080 | await client.connect(transport);
2081 | ```
2082 | </Tab>
2083 |
2084 | <Tab title="Python (Server)">
2085 | ```python
2086 | app = Server("example-server")
2087 |
2088 | async with stdio_server() as streams:
2089 | await app.run(
2090 | streams[0],
2091 | streams[1],
2092 | app.create_initialization_options()
2093 | )
2094 | ```
2095 | </Tab>
2096 |
2097 | <Tab title="Python (Client)">
2098 | ```python
2099 | params = StdioServerParameters(
2100 | command="./server",
2101 | args=["--option", "value"]
2102 | )
2103 |
2104 | async with stdio_client(params) as streams:
2105 | async with ClientSession(streams[0], streams[1]) as session:
2106 | await session.initialize()
2107 | ```
2108 | </Tab>
2109 | </Tabs>
2110 |
2111 | ### Server-Sent Events (SSE)
2112 |
2113 | SSE transport enables server-to-client streaming with HTTP POST requests for client-to-server communication.
2114 |
2115 | Use SSE when:
2116 |
2117 | * Only server-to-client streaming is needed
2118 | * Working with restricted networks
2119 | * Implementing simple updates
2120 |
2121 | <Tabs>
2122 | <Tab title="TypeScript (Server)">
2123 | ```typescript
2124 | import express from "express";
2125 |
2126 | const app = express();
2127 |
2128 | const server = new Server({
2129 | name: "example-server",
2130 | version: "1.0.0"
2131 | }, {
2132 | capabilities: {}
2133 | });
2134 |
2135 | let transport: SSEServerTransport | null = null;
2136 |
2137 | app.get("/sse", (req, res) => {
2138 | transport = new SSEServerTransport("/messages", res);
2139 | server.connect(transport);
2140 | });
2141 |
2142 | app.post("/messages", (req, res) => {
2143 | if (transport) {
2144 | transport.handlePostMessage(req, res);
2145 | }
2146 | });
2147 |
2148 | app.listen(3000);
2149 | ```
2150 | </Tab>
2151 |
2152 | <Tab title="TypeScript (Client)">
2153 | ```typescript
2154 | const client = new Client({
2155 | name: "example-client",
2156 | version: "1.0.0"
2157 | }, {
2158 | capabilities: {}
2159 | });
2160 |
2161 | const transport = new SSEClientTransport(
2162 | new URL("http://localhost:3000/sse")
2163 | );
2164 | await client.connect(transport);
2165 | ```
2166 | </Tab>
2167 |
2168 | <Tab title="Python (Server)">
2169 | ```python
2170 | from mcp.server.sse import SseServerTransport
2171 | from starlette.applications import Starlette
2172 | from starlette.routing import Route
2173 |
2174 | app = Server("example-server")
2175 | sse = SseServerTransport("/messages")
2176 |
2177 | async def handle_sse(scope, receive, send):
2178 | async with sse.connect_sse(scope, receive, send) as streams:
2179 | await app.run(streams[0], streams[1], app.create_initialization_options())
2180 |
2181 | async def handle_messages(scope, receive, send):
2182 | await sse.handle_post_message(scope, receive, send)
2183 |
2184 | starlette_app = Starlette(
2185 | routes=[
2186 | Route("/sse", endpoint=handle_sse),
2187 | Route("/messages", endpoint=handle_messages, methods=["POST"]),
2188 | ]
2189 | )
2190 | ```
2191 | </Tab>
2192 |
2193 | <Tab title="Python (Client)">
2194 | ```python
2195 | async with sse_client("http://localhost:8000/sse") as streams:
2196 | async with ClientSession(streams[0], streams[1]) as session:
2197 | await session.initialize()
2198 | ```
2199 | </Tab>
2200 | </Tabs>
2201 |
2202 | ## Custom Transports
2203 |
2204 | MCP makes it easy to implement custom transports for specific needs. Any transport implementation just needs to conform to the Transport interface shown below.
2205 |
2206 | You can implement custom transports for:
2207 |
2208 | * Custom network protocols
2209 | * Specialized communication channels
2210 | * Integration with existing systems
2211 | * Performance optimization
2212 |
2213 | <Tabs>
2214 | <Tab title="TypeScript">
2215 | ```typescript
2216 | interface Transport {
2217 | // Start processing messages
2218 | start(): Promise<void>;
2219 |
2220 | // Send a JSON-RPC message
2221 | send(message: JSONRPCMessage): Promise<void>;
2222 |
2223 | // Close the connection
2224 | close(): Promise<void>;
2225 |
2226 | // Callbacks
2227 | onclose?: () => void;
2228 | onerror?: (error: Error) => void;
2229 | onmessage?: (message: JSONRPCMessage) => void;
2230 | }
2231 | ```
2232 | </Tab>
2233 |
2234 | <Tab title="Python">
2235 | Note that while MCP Servers are often implemented with asyncio, we recommend
2236 | implementing low-level interfaces like transports with `anyio` for wider compatibility.
2237 |
2238 | ```python
2239 | @asynccontextmanager
2240 | async def create_transport(
2241 | read_stream: MemoryObjectReceiveStream[JSONRPCMessage | Exception],
2242 | write_stream: MemoryObjectSendStream[JSONRPCMessage]
2243 | ):
2244 | """
2245 | Transport interface for MCP.
2246 |
2247 | Args:
2248 | read_stream: Stream to read incoming messages from
2249 | write_stream: Stream to write outgoing messages to
2250 | """
2251 | async with anyio.create_task_group() as tg:
2252 | try:
2253 | # Start processing messages
2254 | tg.start_soon(lambda: process_messages(read_stream))
2255 |
2256 | # Send messages
2257 | async with write_stream:
2258 | yield write_stream
2259 |
2260 | except Exception as exc:
2261 | # Handle errors
2262 | raise exc
2263 | finally:
2264 | # Clean up
2265 | tg.cancel_scope.cancel()
2266 | await write_stream.aclose()
2267 | await read_stream.aclose()
2268 | ```
2269 | </Tab>
2270 | </Tabs>
2271 |
2272 | ## Error Handling
2273 |
2274 | Transport implementations should handle various error scenarios:
2275 |
2276 | 1. Connection errors
2277 | 2. Message parsing errors
2278 | 3. Protocol errors
2279 | 4. Network timeouts
2280 | 5. Resource cleanup
2281 |
2282 | Example error handling:
2283 |
2284 | <Tabs>
2285 | <Tab title="TypeScript">
2286 | ```typescript
2287 | class ExampleTransport implements Transport {
2288 | async start() {
2289 | try {
2290 | // Connection logic
2291 | } catch (error) {
2292 | this.onerror?.(new Error(`Failed to connect: ${error}`));
2293 | throw error;
2294 | }
2295 | }
2296 |
2297 | async send(message: JSONRPCMessage) {
2298 | try {
2299 | // Sending logic
2300 | } catch (error) {
2301 | this.onerror?.(new Error(`Failed to send message: ${error}`));
2302 | throw error;
2303 | }
2304 | }
2305 | }
2306 | ```
2307 | </Tab>
2308 |
2309 | <Tab title="Python">
2310 | Note that while MCP Servers are often implemented with asyncio, we recommend
2311 | implementing low-level interfaces like transports with `anyio` for wider compatibility.
2312 |
2313 | ```python
2314 | @asynccontextmanager
2315 | async def example_transport(scope: Scope, receive: Receive, send: Send):
2316 | try:
2317 | # Create streams for bidirectional communication
2318 | read_stream_writer, read_stream = anyio.create_memory_object_stream(0)
2319 | write_stream, write_stream_reader = anyio.create_memory_object_stream(0)
2320 |
2321 | async def message_handler():
2322 | try:
2323 | async with read_stream_writer:
2324 | # Message handling logic
2325 | pass
2326 | except Exception as exc:
2327 | logger.error(f"Failed to handle message: {exc}")
2328 | raise exc
2329 |
2330 | async with anyio.create_task_group() as tg:
2331 | tg.start_soon(message_handler)
2332 | try:
2333 | # Yield streams for communication
2334 | yield read_stream, write_stream
2335 | except Exception as exc:
2336 | logger.error(f"Transport error: {exc}")
2337 | raise exc
2338 | finally:
2339 | tg.cancel_scope.cancel()
2340 | await write_stream.aclose()
2341 | await read_stream.aclose()
2342 | except Exception as exc:
2343 | logger.error(f"Failed to initialize transport: {exc}")
2344 | raise exc
2345 | ```
2346 | </Tab>
2347 | </Tabs>
2348 |
2349 | ## Best Practices
2350 |
2351 | When implementing or using MCP transport:
2352 |
2353 | 1. Handle connection lifecycle properly
2354 | 2. Implement proper error handling
2355 | 3. Clean up resources on connection close
2356 | 4. Use appropriate timeouts
2357 | 5. Validate messages before sending
2358 | 6. Log transport events for debugging
2359 | 7. Implement reconnection logic when appropriate
2360 | 8. Handle backpressure in message queues
2361 | 9. Monitor connection health
2362 | 10. Implement proper security measures
2363 |
2364 | ## Security Considerations
2365 |
2366 | When implementing transport:
2367 |
2368 | ### Authentication and Authorization
2369 |
2370 | * Implement proper authentication mechanisms
2371 | * Validate client credentials
2372 | * Use secure token handling
2373 | * Implement authorization checks
2374 |
2375 | ### Data Security
2376 |
2377 | * Use TLS for network transport
2378 | * Encrypt sensitive data
2379 | * Validate message integrity
2380 | * Implement message size limits
2381 | * Sanitize input data
2382 |
2383 | ### Network Security
2384 |
2385 | * Implement rate limiting
2386 | * Use appropriate timeouts
2387 | * Handle denial of service scenarios
2388 | * Monitor for unusual patterns
2389 | * Implement proper firewall rules
2390 |
2391 | ## Debugging Transport
2392 |
2393 | Tips for debugging transport issues:
2394 |
2395 | 1. Enable debug logging
2396 | 2. Monitor message flow
2397 | 3. Check connection states
2398 | 4. Validate message formats
2399 | 5. Test error scenarios
2400 | 6. Use network analysis tools
2401 | 7. Implement health checks
2402 | 8. Monitor resource usage
2403 | 9. Test edge cases
2404 | 10. Use proper error tracking
2405 |
2406 |
2407 | # Debugging
2408 |
2409 | A comprehensive guide to debugging Model Context Protocol (MCP) integrations
2410 |
2411 | Effective debugging is essential when developing MCP servers or integrating them with applications. This guide covers the debugging tools and approaches available in the MCP ecosystem.
2412 |
2413 | <Info>
2414 | This guide is for macOS. Guides for other platforms are coming soon.
2415 | </Info>
2416 |
2417 | ## Debugging tools overview
2418 |
2419 | MCP provides several tools for debugging at different levels:
2420 |
2421 | 1. **MCP Inspector**
2422 | * Interactive debugging interface
2423 | * Direct server testing
2424 | * See the [Inspector guide](/docs/tools/inspector) for details
2425 |
2426 | 2. **Claude Desktop Developer Tools**
2427 | * Integration testing
2428 | * Log collection
2429 | * Chrome DevTools integration
2430 |
2431 | 3. **Server Logging**
2432 | * Custom logging implementations
2433 | * Error tracking
2434 | * Performance monitoring
2435 |
2436 | ## Debugging in Claude Desktop
2437 |
2438 | ### Checking server status
2439 |
2440 | The Claude.app interface provides basic server status information:
2441 |
2442 | 1. Click the <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/claude-desktop-mcp-plug-icon.svg" style={{display: 'inline', margin: 0, height: '1.3em'}} /> icon to view:
2443 | * Connected servers
2444 | * Available prompts and resources
2445 |
2446 | 2. Click the <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/claude-desktop-mcp-hammer-icon.svg" style={{display: 'inline', margin: 0, height: '1.3em'}} /> icon to view:
2447 | * Tools made available to the model
2448 |
2449 | ### Viewing logs
2450 |
2451 | Review detailed MCP logs from Claude Desktop:
2452 |
2453 | ```bash
2454 | # Follow logs in real-time
2455 | tail -n 20 -F ~/Library/Logs/Claude/mcp*.log
2456 | ```
2457 |
2458 | The logs capture:
2459 |
2460 | * Server connection events
2461 | * Configuration issues
2462 | * Runtime errors
2463 | * Message exchanges
2464 |
2465 | ### Using Chrome DevTools
2466 |
2467 | Access Chrome's developer tools inside Claude Desktop to investigate client-side errors:
2468 |
2469 | 1. Create a `developer_settings.json` file with `allowDevTools` set to true:
2470 |
2471 | ```bash
2472 | echo '{"allowDevTools": true}' > ~/Library/Application\ Support/Claude/developer_settings.json
2473 | ```
2474 |
2475 | 2. Open DevTools: `Command-Option-Shift-i`
2476 |
2477 | Note: You'll see two DevTools windows:
2478 |
2479 | * Main content window
2480 | * App title bar window
2481 |
2482 | Use the Console panel to inspect client-side errors.
2483 |
2484 | Use the Network panel to inspect:
2485 |
2486 | * Message payloads
2487 | * Connection timing
2488 |
2489 | ## Common issues
2490 |
2491 | ### Working directory
2492 |
2493 | When using MCP servers with Claude Desktop:
2494 |
2495 | * The working directory for servers launched via `claude_desktop_config.json` may be undefined (like `/` on macOS) since Claude Desktop could be started from anywhere
2496 | * Always use absolute paths in your configuration and `.env` files to ensure reliable operation
2497 | * For testing servers directly via command line, the working directory will be where you run the command
2498 |
2499 | For example, in `claude_desktop_config.json`, use:
2500 |
2501 | ```json
2502 | {
2503 | "command": "npx",
2504 | "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/username/data"]
2505 | }
2506 | ```
2507 |
2508 | instead of relative paths like `./data`.
2509 |
2510 | ### Environment variables
2511 |
2512 | MCP servers inherit only a subset of environment variables automatically, like `USER`, `HOME`, and `PATH`.
2513 |
2514 | To override the default variables or provide your own, you can specify an `env` key in `claude_desktop_config.json`:
2515 |
2516 | ```json
2517 | {
2518 | "myserver": {
2519 | "command": "mcp-server-myapp",
2520 | "env": {
2521 |       "MYAPP_API_KEY": "some_key"
2522 | }
2523 | }
2524 | }
2525 | ```
2526 |
2527 | ### Server initialization
2528 |
2529 | Common initialization problems:
2530 |
2531 | 1. **Path Issues**
2532 | * Incorrect server executable path
2533 | * Missing required files
2534 | * Permission problems
2535 | * Try using an absolute path for `command`
2536 |
2537 | 2. **Configuration Errors**
2538 | * Invalid JSON syntax
2539 | * Missing required fields
2540 | * Type mismatches
2541 |
2542 | 3. **Environment Problems**
2543 | * Missing environment variables
2544 | * Incorrect variable values
2545 | * Permission restrictions
2546 |
2547 | ### Connection problems
2548 |
2549 | When servers fail to connect:
2550 |
2551 | 1. Check Claude Desktop logs
2552 | 2. Verify server process is running
2553 | 3. Test standalone with [Inspector](/docs/tools/inspector)
2554 | 4. Verify protocol compatibility
2555 |
2556 | ## Implementing logging
2557 |
2558 | ### Server-side logging
2559 |
2560 | When building a server that uses the local stdio [transport](/docs/concepts/transports), all messages logged to stderr (standard error) will be captured by the host application (e.g., Claude Desktop) automatically.
2561 |
2562 | <Warning>
2563 | Local MCP servers should not log messages to stdout (standard out), as this will interfere with protocol operation.
2564 | </Warning>
2565 |
2566 | For all [transports](/docs/concepts/transports), you can also provide logging to the client by sending a log message notification:
2567 |
2568 | <Tabs>
2569 | <Tab title="Python">
2570 | ```python
2571 | server.request_context.session.send_log_message(
2572 | level="info",
2573 | data="Server started successfully",
2574 | )
2575 | ```
2576 | </Tab>
2577 |
2578 | <Tab title="TypeScript">
2579 | ```typescript
2580 | server.sendLoggingMessage({
2581 | level: "info",
2582 | data: "Server started successfully",
2583 | });
2584 | ```
2585 | </Tab>
2586 | </Tabs>
2587 |
2588 | Important events to log:
2589 |
2590 | * Initialization steps
2591 | * Resource access
2592 | * Tool execution
2593 | * Error conditions
2594 | * Performance metrics
2595 |
2596 | ### Client-side logging
2597 |
2598 | In client applications:
2599 |
2600 | 1. Enable debug logging
2601 | 2. Monitor network traffic
2602 | 3. Track message exchanges
2603 | 4. Record error states
2604 |
2605 | ## Debugging workflow
2606 |
2607 | ### Development cycle
2608 |
2609 | 1. Initial Development
2610 | * Use [Inspector](/docs/tools/inspector) for basic testing
2611 | * Implement core functionality
2612 | * Add logging points
2613 |
2614 | 2. Integration Testing
2615 | * Test in Claude Desktop
2616 | * Monitor logs
2617 | * Check error handling
2618 |
2619 | ### Testing changes
2620 |
2621 | To test changes efficiently:
2622 |
2623 | * **Configuration changes**: Restart Claude Desktop
2624 | * **Server code changes**: Use Command-R to reload
2625 | * **Quick iteration**: Use [Inspector](/docs/tools/inspector) during development
2626 |
2627 | ## Best practices
2628 |
2629 | ### Logging strategy
2630 |
2631 | 1. **Structured Logging**
2632 | * Use consistent formats
2633 | * Include context
2634 | * Add timestamps
2635 | * Track request IDs
2636 |
2637 | 2. **Error Handling**
2638 | * Log stack traces
2639 | * Include error context
2640 | * Track error patterns
2641 | * Monitor recovery
2642 |
2643 | 3. **Performance Tracking**
2644 | * Log operation timing
2645 | * Monitor resource usage
2646 | * Track message sizes
2647 | * Measure latency
2648 |
2649 | ### Security considerations
2650 |
2651 | When debugging:
2652 |
2653 | 1. **Sensitive Data**
2654 | * Sanitize logs
2655 | * Protect credentials
2656 | * Mask personal information
2657 |
2658 | 2. **Access Control**
2659 | * Verify permissions
2660 | * Check authentication
2661 | * Monitor access patterns
2662 |
2663 | ## Getting help
2664 |
2665 | When encountering issues:
2666 |
2667 | 1. **First Steps**
2668 | * Check server logs
2669 | * Test with [Inspector](/docs/tools/inspector)
2670 | * Review configuration
2671 | * Verify environment
2672 |
2673 | 2. **Support Channels**
2674 | * GitHub issues
2675 | * GitHub discussions
2676 |
2677 | 3. **Providing Information**
2678 | * Log excerpts
2679 | * Configuration files
2680 | * Steps to reproduce
2681 | * Environment details
2682 |
2683 | ## Next steps
2684 |
2685 | <CardGroup cols={2}>
2686 | <Card title="MCP Inspector" icon="magnifying-glass" href="/docs/tools/inspector">
2687 | Learn to use the MCP Inspector
2688 | </Card>
2689 | </CardGroup>
2690 |
2691 |
2692 | # Inspector
2693 |
2694 | In-depth guide to using the MCP Inspector for testing and debugging Model Context Protocol servers
2695 |
2696 | The [MCP Inspector](https://github.com/modelcontextprotocol/inspector) is an interactive developer tool for testing and debugging MCP servers. While the [Debugging Guide](/docs/tools/debugging) covers the Inspector as part of the overall debugging toolkit, this document provides a detailed exploration of the Inspector's features and capabilities.
2697 |
2698 | ## Getting started
2699 |
2700 | ### Installation and basic usage
2701 |
2702 | The Inspector runs directly through `npx` without requiring installation:
2703 |
2704 | ```bash
2705 | npx @modelcontextprotocol/inspector <command>
2706 | ```
2707 |
2708 | ```bash
2709 | npx @modelcontextprotocol/inspector <command> <arg1> <arg2>
2710 | ```
2711 |
2712 | #### Inspecting servers from NPM or PyPI
2713 |
2714 | A common way to start server packages is directly from [NPM](https://npmjs.com) or [PyPI](https://pypi.org):
2715 |
2716 | <Tabs>
2717 | <Tab title="NPM package">
2718 | ```bash
2719 | npx -y @modelcontextprotocol/inspector npx <package-name> <args>
2720 | # For example
2721 | npx -y @modelcontextprotocol/inspector npx server-postgres postgres://127.0.0.1/testdb
2722 | ```
2723 | </Tab>
2724 |
2725 |   <Tab title="PyPI package">
2726 | ```bash
2727 | npx @modelcontextprotocol/inspector uvx <package-name> <args>
2728 | # For example
2729 | npx @modelcontextprotocol/inspector uvx mcp-server-git --repository ~/code/mcp/servers.git
2730 | ```
2731 | </Tab>
2732 | </Tabs>
2733 |
2734 | #### Inspecting locally developed servers
2735 |
2736 | To inspect servers locally developed or downloaded as a repository, the most common
2737 | way is:
2738 |
2739 | <Tabs>
2740 | <Tab title="TypeScript">
2741 | ```bash
2742 | npx @modelcontextprotocol/inspector node path/to/server/index.js args...
2743 | ```
2744 | </Tab>
2745 |
2746 | <Tab title="Python">
2747 | ```bash
2748 | npx @modelcontextprotocol/inspector \
2749 | uv \
2750 | --directory path/to/server \
2751 | run \
2752 | package-name \
2753 | args...
2754 | ```
2755 | </Tab>
2756 | </Tabs>
2757 |
2758 | Please carefully read any attached README for the most accurate instructions.
2759 |
2760 | ## Feature overview
2761 |
2762 | <Frame caption="The MCP Inspector interface">
2763 | <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/mcp-inspector.png" />
2764 | </Frame>
2765 |
2766 | The Inspector provides several features for interacting with your MCP server:
2767 |
2768 | ### Server connection pane
2769 |
2770 | * Allows selecting the [transport](/docs/concepts/transports) for connecting to the server
2771 | * For local servers, supports customizing the command-line arguments and environment
2772 |
2773 | ### Resources tab
2774 |
2775 | * Lists all available resources
2776 | * Shows resource metadata (MIME types, descriptions)
2777 | * Allows resource content inspection
2778 | * Supports subscription testing
2779 |
2780 | ### Prompts tab
2781 |
2782 | * Displays available prompt templates
2783 | * Shows prompt arguments and descriptions
2784 | * Enables prompt testing with custom arguments
2785 | * Previews generated messages
2786 |
2787 | ### Tools tab
2788 |
2789 | * Lists available tools
2790 | * Shows tool schemas and descriptions
2791 | * Enables tool testing with custom inputs
2792 | * Displays tool execution results
2793 |
2794 | ### Notifications pane
2795 |
2796 | * Presents all logs recorded from the server
2797 | * Shows notifications received from the server
2798 |
2799 | ## Best practices
2800 |
2801 | ### Development workflow
2802 |
2803 | 1. Start Development
2804 | * Launch Inspector with your server
2805 | * Verify basic connectivity
2806 | * Check capability negotiation
2807 |
2808 | 2. Iterative testing
2809 | * Make server changes
2810 | * Rebuild the server
2811 | * Reconnect the Inspector
2812 | * Test affected features
2813 | * Monitor messages
2814 |
2815 | 3. Test edge cases
2816 | * Invalid inputs
2817 | * Missing prompt arguments
2818 | * Concurrent operations
2819 | * Verify error handling and error responses
2820 |
2821 | ## Next steps
2822 |
2823 | <CardGroup cols={2}>
2824 | <Card title="Inspector Repository" icon="github" href="https://github.com/modelcontextprotocol/inspector">
2825 | Check out the MCP Inspector source code
2826 | </Card>
2827 |
2828 | <Card title="Debugging Guide" icon="bug" href="/docs/tools/debugging">
2829 | Learn about broader debugging strategies
2830 | </Card>
2831 | </CardGroup>
2832 |
2833 |
2834 | # Example Servers
2835 |
2836 | A list of example servers and implementations
2837 |
2838 | This page showcases various Model Context Protocol (MCP) servers that demonstrate the protocol's capabilities and versatility. These servers enable Large Language Models (LLMs) to securely access tools and data sources.
2839 |
2840 | ## Reference implementations
2841 |
2842 | These official reference servers demonstrate core MCP features and SDK usage:
2843 |
2844 | ### Data and file systems
2845 |
2846 | * **[Filesystem](https://github.com/modelcontextprotocol/servers/tree/main/src/filesystem)** - Secure file operations with configurable access controls
2847 | * **[PostgreSQL](https://github.com/modelcontextprotocol/servers/tree/main/src/postgres)** - Read-only database access with schema inspection capabilities
2848 | * **[SQLite](https://github.com/modelcontextprotocol/servers/tree/main/src/sqlite)** - Database interaction and business intelligence features
2849 | * **[Google Drive](https://github.com/modelcontextprotocol/servers/tree/main/src/gdrive)** - File access and search capabilities for Google Drive
2850 |
2851 | ### Development tools
2852 |
2853 | * **[Git](https://github.com/modelcontextprotocol/servers/tree/main/src/git)** - Tools to read, search, and manipulate Git repositories
2854 | * **[GitHub](https://github.com/modelcontextprotocol/servers/tree/main/src/github)** - Repository management, file operations, and GitHub API integration
2855 | * **[GitLab](https://github.com/modelcontextprotocol/servers/tree/main/src/gitlab)** - GitLab API integration enabling project management
2856 | * **[Sentry](https://github.com/modelcontextprotocol/servers/tree/main/src/sentry)** - Retrieving and analyzing issues from Sentry.io
2857 |
2858 | ### Web and browser automation
2859 |
2860 | * **[Brave Search](https://github.com/modelcontextprotocol/servers/tree/main/src/brave-search)** - Web and local search using Brave's Search API
2861 | * **[Fetch](https://github.com/modelcontextprotocol/servers/tree/main/src/fetch)** - Web content fetching and conversion optimized for LLM usage
2862 | * **[Puppeteer](https://github.com/modelcontextprotocol/servers/tree/main/src/puppeteer)** - Browser automation and web scraping capabilities
2863 |
2864 | ### Productivity and communication
2865 |
2866 | * **[Slack](https://github.com/modelcontextprotocol/servers/tree/main/src/slack)** - Channel management and messaging capabilities
2867 | * **[Google Maps](https://github.com/modelcontextprotocol/servers/tree/main/src/google-maps)** - Location services, directions, and place details
2868 | * **[Memory](https://github.com/modelcontextprotocol/servers/tree/main/src/memory)** - Knowledge graph-based persistent memory system
2869 |
2870 | ### AI and specialized tools
2871 |
2872 | * **[EverArt](https://github.com/modelcontextprotocol/servers/tree/main/src/everart)** - AI image generation using various models
2873 | * **[Sequential Thinking](https://github.com/modelcontextprotocol/servers/tree/main/src/sequentialthinking)** - Dynamic problem-solving through thought sequences
2874 | * **[AWS KB Retrieval](https://github.com/modelcontextprotocol/servers/tree/main/src/aws-kb-retrieval-server)** - Retrieval from AWS Knowledge Base using Bedrock Agent Runtime
2875 |
2876 | ## Official integrations
2877 |
2878 | These MCP servers are maintained by companies for their platforms:
2879 |
2880 | * **[Axiom](https://github.com/axiomhq/mcp-server-axiom)** - Query and analyze logs, traces, and event data using natural language
2881 | * **[Browserbase](https://github.com/browserbase/mcp-server-browserbase)** - Automate browser interactions in the cloud
2882 | * **[Cloudflare](https://github.com/cloudflare/mcp-server-cloudflare)** - Deploy and manage resources on the Cloudflare developer platform
2883 | * **[E2B](https://github.com/e2b-dev/mcp-server)** - Execute code in secure cloud sandboxes
2884 | * **[Neon](https://github.com/neondatabase/mcp-server-neon)** - Interact with the Neon serverless Postgres platform
2885 | * **[Obsidian Markdown Notes](https://github.com/calclavia/mcp-obsidian)** - Read and search through Markdown notes in Obsidian vaults
2886 | * **[Qdrant](https://github.com/qdrant/mcp-server-qdrant/)** - Implement semantic memory using the Qdrant vector search engine
2887 | * **[Raygun](https://github.com/MindscapeHQ/mcp-server-raygun)** - Access crash reporting and monitoring data
2888 | * **[Search1API](https://github.com/fatwang2/search1api-mcp)** - Unified API for search, crawling, and sitemaps
2889 | * **[Tinybird](https://github.com/tinybirdco/mcp-tinybird)** - Interface with the Tinybird serverless ClickHouse platform
2890 |
2891 | ## Community highlights
2892 |
2893 | A growing ecosystem of community-developed servers extends MCP's capabilities:
2894 |
2895 | * **[Docker](https://github.com/ckreiling/mcp-server-docker)** - Manage containers, images, volumes, and networks
2896 | * **[Kubernetes](https://github.com/Flux159/mcp-server-kubernetes)** - Manage pods, deployments, and services
2897 | * **[Linear](https://github.com/jerhadf/linear-mcp-server)** - Project management and issue tracking
2898 | * **[Snowflake](https://github.com/datawiz168/mcp-snowflake-service)** - Interact with Snowflake databases
2899 | * **[Spotify](https://github.com/varunneal/spotify-mcp)** - Control Spotify playback and manage playlists
2900 | * **[Todoist](https://github.com/abhiz123/todoist-mcp-server)** - Task management integration
2901 |
2902 | > **Note:** Community servers are untested and should be used at your own risk. They are not affiliated with or endorsed by Anthropic.
2903 |
2904 | For a complete list of community servers, visit the [MCP Servers Repository](https://github.com/modelcontextprotocol/servers).
2905 |
2906 | ## Getting started
2907 |
2908 | ### Using reference servers
2909 |
2910 | TypeScript-based servers can be used directly with `npx`:
2911 |
2912 | ```bash
2913 | npx -y @modelcontextprotocol/server-memory
2914 | ```
2915 |
2916 | Python-based servers can be used with `uvx` (recommended) or `pip`:
2917 |
2918 | ```bash
2919 | # Using uvx
2920 | uvx mcp-server-git
2921 |
2922 | # Using pip
2923 | pip install mcp-server-git
2924 | python -m mcp_server_git
2925 | ```
2926 |
2927 | ### Configuring with Claude
2928 |
2929 | To use an MCP server with Claude, add it to your configuration:
2930 |
2931 | ```json
2932 | {
2933 | "mcpServers": {
2934 | "memory": {
2935 | "command": "npx",
2936 | "args": ["-y", "@modelcontextprotocol/server-memory"]
2937 | },
2938 | "filesystem": {
2939 | "command": "npx",
2940 | "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/files"]
2941 | },
2942 | "github": {
2943 | "command": "npx",
2944 | "args": ["-y", "@modelcontextprotocol/server-github"],
2945 | "env": {
2946 | "GITHUB_PERSONAL_ACCESS_TOKEN": "<YOUR_TOKEN>"
2947 | }
2948 | }
2949 | }
2950 | }
2951 | ```
2952 |
2953 | ## Additional resources
2954 |
2955 | * [MCP Servers Repository](https://github.com/modelcontextprotocol/servers) - Complete collection of reference implementations and community servers
2956 | * [Awesome MCP Servers](https://github.com/punkpeye/awesome-mcp-servers) - Curated list of MCP servers
2957 | * [MCP CLI](https://github.com/wong2/mcp-cli) - Command-line inspector for testing MCP servers
2958 | * [MCP Get](https://mcp-get.com) - Tool for installing and managing MCP servers
2959 | * [Supergateway](https://github.com/supercorp-ai/supergateway) - Run MCP stdio servers over SSE
2960 |
2961 | Visit our [GitHub Discussions](https://github.com/orgs/modelcontextprotocol/discussions) to engage with the MCP community.
2962 |
2963 |
2964 | # Introduction
2965 |
2966 | Get started with the Model Context Protocol (MCP)
2967 |
2968 | <Note>Kotlin SDK released! Check out [what else is new](/development/updates).</Note>
2969 |
2970 | MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.
2971 |
2972 | ## Why MCP?
2973 |
2974 | MCP helps you build agents and complex workflows on top of LLMs. LLMs frequently need to integrate with data and tools, and MCP provides:
2975 |
2976 | * A growing list of pre-built integrations that your LLM can directly plug into
2977 | * The flexibility to switch between LLM providers and vendors
2978 | * Best practices for securing your data within your infrastructure
2979 |
2980 | ### General architecture
2981 |
2982 | At its core, MCP follows a client-server architecture where a host application can connect to multiple servers:
2983 |
2984 | ```mermaid
2985 | flowchart LR
2986 | subgraph "Your Computer"
2987 | Host["Host with MCP Client\n(Claude, IDEs, Tools)"]
2988 | S1["MCP Server A"]
2989 | S2["MCP Server B"]
2990 | S3["MCP Server C"]
2991 | Host <-->|"MCP Protocol"| S1
2992 | Host <-->|"MCP Protocol"| S2
2993 | Host <-->|"MCP Protocol"| S3
2994 | S1 <--> D1[("Local\nData Source A")]
2995 | S2 <--> D2[("Local\nData Source B")]
2996 | end
2997 | subgraph "Internet"
2998 | S3 <-->|"Web APIs"| D3[("Remote\nService C")]
2999 | end
3000 | ```
3001 |
3002 | * **MCP Hosts**: Programs like Claude Desktop, IDEs, or AI tools that want to access data through MCP
3003 | * **MCP Clients**: Protocol clients that maintain 1:1 connections with servers
3004 | * **MCP Servers**: Lightweight programs that each expose specific capabilities through the standardized Model Context Protocol
3005 | * **Local Data Sources**: Your computer's files, databases, and services that MCP servers can securely access
3006 | * **Remote Services**: External systems available over the internet (e.g., through APIs) that MCP servers can connect to
3007 |
3008 | ## Get started
3009 |
3010 | Choose the path that best fits your needs:
3011 |
3012 | #### Quick Starts
3013 |
3014 | <CardGroup cols={2}>
3015 | <Card title="For Server Developers" icon="bolt" href="/quickstart/server">
3016 | Get started building your own server to use in Claude for Desktop and other clients
3017 | </Card>
3018 |
3019 | <Card title="For Client Developers" icon="bolt" href="/quickstart/client">
3020 | Get started building your own client that can integrate with all MCP servers
3021 | </Card>
3022 |
3023 | <Card title="For Claude Desktop Users" icon="bolt" href="/quickstart/user">
3024 | Get started using pre-built servers in Claude for Desktop
3025 | </Card>
3026 | </CardGroup>
3027 |
3028 | #### Examples
3029 |
3030 | <CardGroup cols={2}>
3031 | <Card title="Example Servers" icon="grid" href="/examples">
3032 | Check out our gallery of official MCP servers and implementations
3033 | </Card>
3034 |
3035 | <Card title="Example Clients" icon="cubes" href="/clients">
3036 | View the list of clients that support MCP integrations
3037 | </Card>
3038 | </CardGroup>
3039 |
3040 | ## Tutorials
3041 |
3042 | <CardGroup cols={2}>
3043 | <Card title="Building MCP with LLMs" icon="comments" href="/tutorials/building-mcp-with-llms">
3044 | Learn how to use LLMs like Claude to speed up your MCP development
3045 | </Card>
3046 |
3047 | <Card title="Debugging Guide" icon="bug" href="/docs/tools/debugging">
3048 | Learn how to effectively debug MCP servers and integrations
3049 | </Card>
3050 |
3051 | <Card title="MCP Inspector" icon="magnifying-glass" href="/docs/tools/inspector">
3052 | Test and inspect your MCP servers with our interactive debugging tool
3053 | </Card>
3054 | </CardGroup>
3055 |
3056 | ## Explore MCP
3057 |
3058 | Dive deeper into MCP's core concepts and capabilities:
3059 |
3060 | <CardGroup cols={2}>
3061 | <Card title="Core architecture" icon="sitemap" href="/docs/concepts/architecture">
3062 | Understand how MCP connects clients, servers, and LLMs
3063 | </Card>
3064 |
3065 | <Card title="Resources" icon="database" href="/docs/concepts/resources">
3066 | Expose data and content from your servers to LLMs
3067 | </Card>
3068 |
3069 | <Card title="Prompts" icon="message" href="/docs/concepts/prompts">
3070 | Create reusable prompt templates and workflows
3071 | </Card>
3072 |
3073 | <Card title="Tools" icon="wrench" href="/docs/concepts/tools">
3074 | Enable LLMs to perform actions through your server
3075 | </Card>
3076 |
3077 | <Card title="Sampling" icon="robot" href="/docs/concepts/sampling">
3078 | Let your servers request completions from LLMs
3079 | </Card>
3080 |
3081 | <Card title="Transports" icon="network-wired" href="/docs/concepts/transports">
3082 | Learn about MCP's communication mechanism
3083 | </Card>
3084 | </CardGroup>
3085 |
3086 | ## Contributing
3087 |
3088 | Want to contribute? Check out our [Contributing Guide](/development/contributing) to learn how you can help improve MCP.
3089 |
3090 | ## Support and Feedback
3091 |
3092 | Here's how to get help or provide feedback:
3093 |
3094 | * For bug reports and feature requests related to the MCP specification, SDKs, or documentation (open source), please [create a GitHub issue](https://github.com/modelcontextprotocol)
3095 | * For discussions or Q\&A about the MCP specification, use the [specification discussions](https://github.com/modelcontextprotocol/specification/discussions)
3096 | * For discussions or Q\&A about other MCP open source components, use the [organization discussions](https://github.com/orgs/modelcontextprotocol/discussions)
3097 | * For bug reports, feature requests, and questions related to Claude.app and claude.ai's MCP integration, please email [[email protected]](mailto:[email protected])
3098 |
3099 |
3100 | # For Client Developers
3101 |
3102 | Get started building your own client that can integrate with all MCP servers.
3103 |
3104 | In this tutorial, you'll learn how to build an LLM-powered chatbot client that connects to MCP servers. It helps to have gone through the [Server quickstart](/quickstart/server), which guides you through the basics of building your first server.
3105 |
3106 | <Tabs>
3107 | <Tab title="Python">
3108 | [You can find the complete code for this tutorial here.](https://github.com/modelcontextprotocol/quickstart-resources/tree/main/mcp-client)
3109 |
3110 | ## System Requirements
3111 |
3112 | Before starting, ensure your system meets these requirements:
3113 |
3114 | * Mac or Windows computer
3115 | * Latest Python version installed
3116 | * Latest version of `uv` installed
3117 |
3118 | ## Setting Up Your Environment
3119 |
3120 | First, create a new Python project with `uv`:
3121 |
3122 | ```bash
3123 | # Create project directory
3124 | uv init mcp-client
3125 | cd mcp-client
3126 |
3127 | # Create virtual environment
3128 | uv venv
3129 |
3130 | # Activate virtual environment
3131 | # On Windows:
3132 | .venv\Scripts\activate
3133 | # On Unix or MacOS:
3134 | source .venv/bin/activate
3135 |
3136 | # Install required packages
3137 | uv add mcp anthropic python-dotenv
3138 |
3139 | # Remove boilerplate files
3140 | rm hello.py
3141 |
3142 | # Create our main file
3143 | touch client.py
3144 | ```
3145 |
3146 | ## Setting Up Your API Key
3147 |
3148 | You'll need an Anthropic API key from the [Anthropic Console](https://console.anthropic.com/settings/keys).
3149 |
3150 | Create a `.env` file to store it:
3151 |
3152 | ```bash
3153 | # Create .env file
3154 | touch .env
3155 | ```
3156 |
3157 | Add your key to the `.env` file:
3158 |
3159 | ```bash
3160 | ANTHROPIC_API_KEY=<your key here>
3161 | ```
3162 |
3163 | Add `.env` to your `.gitignore`:
3164 |
3165 | ```bash
3166 | echo ".env" >> .gitignore
3167 | ```
3168 |
3169 | <Warning>
3170 | Make sure you keep your `ANTHROPIC_API_KEY` secure!
3171 | </Warning>
3172 |
3173 | ## Creating the Client
3174 |
3175 | ### Basic Client Structure
3176 |
3177 | First, let's set up our imports and create the basic client class:
3178 |
3179 | ```python
3180 | import asyncio
3181 | from typing import Optional
3182 | from contextlib import AsyncExitStack
3183 |
3184 | from mcp import ClientSession, StdioServerParameters
3185 | from mcp.client.stdio import stdio_client
3186 |
3187 | from anthropic import Anthropic
3188 | from dotenv import load_dotenv
3189 |
3190 | load_dotenv() # load environment variables from .env
3191 |
3192 | class MCPClient:
3193 | def __init__(self):
3194 | # Initialize session and client objects
3195 | self.session: Optional[ClientSession] = None
3196 | self.exit_stack = AsyncExitStack()
3197 | self.anthropic = Anthropic()
3198 | # methods will go here
3199 | ```
3200 |
3201 | ### Server Connection Management
3202 |
3203 | Next, we'll implement the method to connect to an MCP server:
3204 |
3205 | ```python
3206 | async def connect_to_server(self, server_script_path: str):
3207 | """Connect to an MCP server
3208 |
3209 | Args:
3210 | server_script_path: Path to the server script (.py or .js)
3211 | """
3212 | is_python = server_script_path.endswith('.py')
3213 | is_js = server_script_path.endswith('.js')
3214 | if not (is_python or is_js):
3215 | raise ValueError("Server script must be a .py or .js file")
3216 |
3217 | command = "python" if is_python else "node"
3218 | server_params = StdioServerParameters(
3219 | command=command,
3220 | args=[server_script_path],
3221 | env=None
3222 | )
3223 |
3224 | stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params))
3225 | self.stdio, self.write = stdio_transport
3226 | self.session = await self.exit_stack.enter_async_context(ClientSession(self.stdio, self.write))
3227 |
3228 | await self.session.initialize()
3229 |
3230 | # List available tools
3231 | response = await self.session.list_tools()
3232 | tools = response.tools
3233 | print("\nConnected to server with tools:", [tool.name for tool in tools])
3234 | ```
3235 |
3236 | ### Query Processing Logic
3237 |
3238 | Now let's add the core functionality for processing queries and handling tool calls:
3239 |
3240 | ```python
3241 | async def process_query(self, query: str) -> str:
3242 | """Process a query using Claude and available tools"""
3243 | messages = [
3244 | {
3245 | "role": "user",
3246 | "content": query
3247 | }
3248 | ]
3249 |
3250 | response = await self.session.list_tools()
3251 | available_tools = [{
3252 | "name": tool.name,
3253 | "description": tool.description,
3254 | "input_schema": tool.inputSchema
3255 | } for tool in response.tools]
3256 |
3257 | # Initial Claude API call
3258 | response = self.anthropic.messages.create(
3259 | model="claude-3-5-sonnet-20241022",
3260 | max_tokens=1000,
3261 | messages=messages,
3262 | tools=available_tools
3263 | )
3264 |
3265 | # Process response and handle tool calls
3266 | tool_results = []
3267 | final_text = []
3268 |
3269 | for content in response.content:
3270 | if content.type == 'text':
3271 | final_text.append(content.text)
3272 | elif content.type == 'tool_use':
3273 | tool_name = content.name
3274 | tool_args = content.input
3275 |
3276 | # Execute tool call
3277 | result = await self.session.call_tool(tool_name, tool_args)
3278 | tool_results.append({"call": tool_name, "result": result})
3279 | final_text.append(f"[Calling tool {tool_name} with args {tool_args}]")
3280 |
3281 | # Continue conversation with tool results
3282 | if hasattr(content, 'text') and content.text:
3283 | messages.append({
3284 | "role": "assistant",
3285 | "content": content.text
3286 | })
3287 | messages.append({
3288 | "role": "user",
3289 | "content": result.content
3290 | })
3291 |
3292 | # Get next response from Claude
3293 | response = self.anthropic.messages.create(
3294 | model="claude-3-5-sonnet-20241022",
3295 | max_tokens=1000,
3296 | messages=messages,
3297 | )
3298 |
3299 | final_text.append(response.content[0].text)
3300 |
3301 | return "\n".join(final_text)
3302 | ```
3303 |
3304 | ### Interactive Chat Interface
3305 |
3306 | Now we'll add the chat loop and cleanup functionality:
3307 |
3308 | ```python
3309 | async def chat_loop(self):
3310 | """Run an interactive chat loop"""
3311 | print("\nMCP Client Started!")
3312 | print("Type your queries or 'quit' to exit.")
3313 |
3314 | while True:
3315 | try:
3316 | query = input("\nQuery: ").strip()
3317 |
3318 | if query.lower() == 'quit':
3319 | break
3320 |
3321 | response = await self.process_query(query)
3322 | print("\n" + response)
3323 |
3324 | except Exception as e:
3325 | print(f"\nError: {str(e)}")
3326 |
3327 | async def cleanup(self):
3328 | """Clean up resources"""
3329 | await self.exit_stack.aclose()
3330 | ```
3331 |
3332 | ### Main Entry Point
3333 |
3334 | Finally, we'll add the main execution logic:
3335 |
3336 | ```python
3337 | async def main():
3338 | if len(sys.argv) < 2:
3339 | print("Usage: python client.py <path_to_server_script>")
3340 | sys.exit(1)
3341 |
3342 | client = MCPClient()
3343 | try:
3344 | await client.connect_to_server(sys.argv[1])
3345 | await client.chat_loop()
3346 | finally:
3347 | await client.cleanup()
3348 |
3349 | if __name__ == "__main__":
3350 | import sys
3351 | asyncio.run(main())
3352 | ```
3353 |
3354 | You can find the complete `client.py` file [here.](https://gist.github.com/zckly/f3f28ea731e096e53b39b47bf0a2d4b1)
3355 |
3356 | ## Key Components Explained
3357 |
3358 | ### 1. Client Initialization
3359 |
3360 | * The `MCPClient` class initializes with session management and API clients
3361 | * Uses `AsyncExitStack` for proper resource management
3362 | * Configures the Anthropic client for Claude interactions
3363 |
3364 | ### 2. Server Connection
3365 |
3366 | * Supports both Python and Node.js servers
3367 | * Validates server script type
3368 | * Sets up proper communication channels
3369 | * Initializes the session and lists available tools
3370 |
3371 | ### 3. Query Processing
3372 |
3373 | * Maintains conversation context
3374 | * Handles Claude's responses and tool calls
3375 | * Manages the message flow between Claude and tools
3376 | * Combines results into a coherent response
3377 |
3378 | ### 4. Interactive Interface
3379 |
3380 | * Provides a simple command-line interface
3381 | * Handles user input and displays responses
3382 | * Includes basic error handling
3383 | * Allows graceful exit
3384 |
3385 | ### 5. Resource Management
3386 |
3387 | * Proper cleanup of resources
3388 | * Error handling for connection issues
3389 | * Graceful shutdown procedures
3390 |
3391 | ## Common Customization Points
3392 |
3393 | 1. **Tool Handling**
3394 | * Modify `process_query()` to handle specific tool types
3395 |    * Add custom error handling for tool calls (see the sketch after this list)
3396 | * Implement tool-specific response formatting
3397 |
3398 | 2. **Response Processing**
3399 | * Customize how tool results are formatted
3400 | * Add response filtering or transformation
3401 | * Implement custom logging
3402 |
3403 | 3. **User Interface**
3404 | * Add a GUI or web interface
3405 | * Implement rich console output
3406 | * Add command history or auto-completion
3407 |
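For the tool-handling and response-formatting points above, here is a minimal sketch of a helper you could call from `process_query()` in place of the bare `session.call_tool(...)` step; the helper name and the message format are illustrative rather than part of the tutorial code:

```python
async def call_tool_safely(session, tool_name: str, tool_args: dict) -> str:
    """Run one MCP tool call and turn any failure into readable text."""
    try:
        result = await session.call_tool(tool_name, tool_args)
        return str(result.content)
    except Exception as e:
        # Return the error as text so it can still be sent back to Claude
        return f"[Tool {tool_name} failed: {e}]"
```
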
3408 | ## Running the Client
3409 |
3410 | To run your client with any MCP server:
3411 |
3412 | ```bash
3413 | uv run client.py path/to/server.py # python server
3414 | uv run client.py path/to/build/index.js # node server
3415 | ```
3416 |
3417 | <Note>
3418 | If you're continuing the weather tutorial from the server quickstart, your command might look something like this: `python client.py .../weather/src/weather/server.py`
3419 | </Note>
3420 |
3421 | The client will:
3422 |
3423 | 1. Connect to the specified server
3424 | 2. List available tools
3425 | 3. Start an interactive chat session where you can:
3426 | * Enter queries
3427 | * See tool executions
3428 | * Get responses from Claude
3429 |
3430 | Here's an example of what it should look like if connected to the weather server from the server quickstart:
3431 |
3432 | <Frame>
3433 | <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/client-claude-cli-python.png" />
3434 | </Frame>
3435 |
3436 | ## How It Works
3437 |
3438 | When you submit a query:
3439 |
3440 | 1. The client gets the list of available tools from the server
3441 | 2. Your query is sent to Claude along with tool descriptions
3442 | 3. Claude decides which tools (if any) to use
3443 | 4. The client executes any requested tool calls through the server
3444 | 5. Results are sent back to Claude
3445 | 6. Claude provides a natural language response
3446 | 7. The response is displayed to you
3447 |
3448 | ## Best practices
3449 |
3450 | 1. **Error Handling**
3451 |    * Always wrap tool calls in try/except blocks
3452 | * Provide meaningful error messages
3453 | * Gracefully handle connection issues
3454 |
3455 | 2. **Resource Management**
3456 | * Use `AsyncExitStack` for proper cleanup
3457 | * Close connections when done
3458 | * Handle server disconnections
3459 |
3460 | 3. **Security**
3461 |    * Store API keys securely in `.env` (see the sketch below)
3462 | * Validate server responses
3463 | * Be cautious with tool permissions
3464 |
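If you're following the `.env` setup from earlier in this tutorial, a minimal sketch of loading and checking the key at startup looks like this (the error message and variable handling are illustrative; the Anthropic client will also pick up `ANTHROPIC_API_KEY` from the environment on its own):

```python
import os
from dotenv import load_dotenv  # python-dotenv

load_dotenv()  # pull ANTHROPIC_API_KEY from .env into the environment

if not os.environ.get("ANTHROPIC_API_KEY"):
    raise RuntimeError("ANTHROPIC_API_KEY is not set; add it to your .env file")
```
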
3465 | ## Troubleshooting
3466 |
3467 | ### Server Path Issues
3468 |
3469 | * Double-check the path to your server script is correct
3470 | * Use the absolute path if the relative path isn't working
3471 | * For Windows users, make sure to use forward slashes (/) or escaped backslashes (\\) in the path
3472 | * Verify the server file has the correct extension (.py for Python or .js for Node.js)
3473 |
3474 | Example of correct path usage:
3475 |
3476 | ```bash
3477 | # Relative path
3478 | uv run client.py ./server/weather.py
3479 |
3480 | # Absolute path
3481 | uv run client.py /Users/username/projects/mcp-server/weather.py
3482 |
3483 | # Windows path (either format works)
3484 | uv run client.py C:/projects/mcp-server/weather.py
3485 | uv run client.py C:\\projects\\mcp-server\\weather.py
3486 | ```
3487 |
3488 | ### Response Timing
3489 |
3490 | * The first response might take up to 30 seconds to return
3491 | * This is normal and happens while:
3492 | * The server initializes
3493 | * Claude processes the query
3494 | * Tools are being executed
3495 | * Subsequent responses are typically faster
3496 | * Don't interrupt the process during this initial waiting period
3497 |
3498 | ### Common Error Messages
3499 |
3500 | If you see:
3501 |
3502 | * `FileNotFoundError`: Check your server path
3503 | * `Connection refused`: Ensure the server is running and the path is correct
3504 | * `Tool execution failed`: Verify the tool's required environment variables are set
3505 | * `Timeout error`: Consider increasing the timeout in your client configuration
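
For the timeout case, one option is to allow more time per request when constructing the Anthropic client. This is a sketch; the exact default and `timeout` semantics depend on the version of the Anthropic Python SDK you have installed:

```python
from anthropic import Anthropic

# Allow up to 60 seconds per request instead of the SDK default
anthropic = Anthropic(timeout=60.0)
```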
3506 | </Tab>
3507 | </Tabs>
3508 |
3509 | ## Next steps
3510 |
3511 | <CardGroup cols={2}>
3512 | <Card title="Example servers" icon="grid" href="/examples">
3513 | Check out our gallery of official MCP servers and implementations
3514 | </Card>
3515 |
3516 | <Card title="Clients" icon="cubes" href="/clients">
3517 | View the list of clients that support MCP integrations
3518 | </Card>
3519 |
3520 | <Card title="Building MCP with LLMs" icon="comments" href="/building-mcp-with-llms">
3521 | Learn how to use LLMs like Claude to speed up your MCP development
3522 | </Card>
3523 |
3524 | <Card title="Core architecture" icon="sitemap" href="/docs/concepts/architecture">
3525 | Understand how MCP connects clients, servers, and LLMs
3526 | </Card>
3527 | </CardGroup>
3528 |
3529 |
3530 | # For Server Developers
3531 |
3532 | Get started building your own server to use in Claude for Desktop and other clients.
3533 |
3534 | In this tutorial, we'll build a simple MCP weather server and connect it to a host, Claude for Desktop. We'll start with a basic setup, and then progress to more complex use cases.
3535 |
3536 | ### What we'll be building
3537 |
3538 | Many LLMs (including Claude) do not currently have the ability to fetch weather forecasts and severe weather alerts. Let's use MCP to solve that!
3539 |
3540 | We'll build a server that exposes two tools: `get-alerts` and `get-forecast`. Then we'll connect the server to an MCP host (in this case, Claude for Desktop):
3541 |
3542 | <Frame>
3543 | <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/weather-alerts.png" />
3544 | </Frame>
3545 |
3546 | <Frame>
3547 | <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/current-weather.png" />
3548 | </Frame>
3549 |
3550 | <Note>
3551 | Servers can connect to any client. We've chosen Claude for Desktop here for simplicity, but we also have guides on [building your own client](/quickstart/client) as well as a [list of other clients here](/clients).
3552 | </Note>
3553 |
3554 | <Accordion title="Why Claude for Desktop and not Claude.ai?">
3555 | Because servers are locally run, MCP currently only supports desktop hosts. Remote hosts are in active development.
3556 | </Accordion>
3557 |
3558 | ### Core MCP Concepts
3559 |
3560 | MCP servers can provide three main types of capabilities:
3561 |
3562 | 1. **Resources**: File-like data that can be read by clients (like API responses or file contents)
3563 | 2. **Tools**: Functions that can be called by the LLM (with user approval)
3564 | 3. **Prompts**: Pre-written templates that help users accomplish specific tasks
3565 |
3566 | This tutorial will primarily focus on tools.
3567 |
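To make these concrete before choosing a language below, here is a compact sketch of all three capability types using the Python SDK's `FastMCP` decorators; the names, URI, and return values are purely illustrative, and the rest of this tutorial builds a real tools-only server step by step:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo")

@mcp.resource("config://app")   # Resource: read-only data a client can load
def app_config() -> str:
    return "theme=dark"

@mcp.tool()                     # Tool: a function the LLM can call (with user approval)
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

@mcp.prompt()                   # Prompt: a reusable template surfaced to users
def review_code(code: str) -> str:
    return f"Please review this code:\n\n{code}"
```
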
3568 | <Tabs>
3569 | <Tab title="Python">
3570 | Let's get started with building our weather server! [You can find the complete code for what we'll be building here.](https://github.com/modelcontextprotocol/quickstart-resources/tree/main/weather-server-python)
3571 |
3572 | ### Prerequisite knowledge
3573 |
3574 | This quickstart assumes you have familiarity with:
3575 |
3576 | * Python
3577 | * LLMs like Claude
3578 |
3579 | ### System requirements
3580 |
3581 | * Python 3.10 or higher installed.
3582 | * You must use the Python MCP SDK 1.2.0 or higher.
3583 |
3584 | ### Set up your environment
3585 |
3586 | First, let's install `uv` and set up our Python project and environment:
3587 |
3588 | <CodeGroup>
3589 | ```bash MacOS/Linux
3590 | curl -LsSf https://astral.sh/uv/install.sh | sh
3591 | ```
3592 |
3593 | ```powershell Windows
3594 | powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
3595 | ```
3596 | </CodeGroup>
3597 |
3598 | Make sure to restart your terminal afterwards to ensure that the `uv` command gets picked up.
3599 |
3600 | Now, let's create and set up our project:
3601 |
3602 | <CodeGroup>
3603 | ```bash MacOS/Linux
3604 | # Create a new directory for our project
3605 | uv init weather
3606 | cd weather
3607 |
3608 | # Create virtual environment and activate it
3609 | uv venv
3610 | source .venv/bin/activate
3611 |
3612 | # Install dependencies
3613 | uv add "mcp[cli]" httpx
3614 |
3615 | # Create our server file
3616 | touch weather.py
3617 | ```
3618 |
3619 | ```powershell Windows
3620 | # Create a new directory for our project
3621 | uv init weather
3622 | cd weather
3623 |
3624 | # Create virtual environment and activate it
3625 | uv venv
3626 | .venv\Scripts\activate
3627 |
3628 | # Install dependencies
3629 | uv add "mcp[cli]" httpx
3630 |
3631 | # Create our server file
3632 | new-item weather.py
3633 | ```
3634 | </CodeGroup>
3635 |
3636 | Now let's dive into building your server.
3637 |
3638 | ## Building your server
3639 |
3640 | ### Importing packages and setting up the instance
3641 |
3642 | Add these to the top of your `weather.py`:
3643 |
3644 | ```python
3645 | from typing import Any
3646 | import httpx
3647 | from mcp.server.fastmcp import FastMCP
3648 |
3649 | # Initialize FastMCP server
3650 | mcp = FastMCP("weather")
3651 |
3652 | # Constants
3653 | NWS_API_BASE = "https://api.weather.gov"
3654 | USER_AGENT = "weather-app/1.0"
3655 | ```
3656 |
3657 | The `FastMCP` class uses Python type hints and docstrings to automatically generate tool definitions, making it easy to create and maintain MCP tools.
3658 |
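As a quick, hypothetical illustration of that mapping (this `echo` tool is not part of the weather server and assumes the `mcp` instance created above): the function name becomes the tool name, the typed parameters become its input schema, and the docstring becomes its description.

```python
@mcp.tool()
async def echo(message: str, repeat: int = 1) -> str:
    """Echo a message back, optionally repeated."""
    return " ".join([message] * repeat)
```
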
3659 | ### Helper functions
3660 |
3661 | Next, let's add our helper functions for querying and formatting the data from the National Weather Service API:
3662 |
3663 | ```python
3664 | async def make_nws_request(url: str) -> dict[str, Any] | None:
3665 | """Make a request to the NWS API with proper error handling."""
3666 | headers = {
3667 | "User-Agent": USER_AGENT,
3668 | "Accept": "application/geo+json"
3669 | }
3670 | async with httpx.AsyncClient() as client:
3671 | try:
3672 | response = await client.get(url, headers=headers, timeout=30.0)
3673 | response.raise_for_status()
3674 | return response.json()
3675 | except Exception:
3676 | return None
3677 |
3678 | def format_alert(feature: dict) -> str:
3679 | """Format an alert feature into a readable string."""
3680 | props = feature["properties"]
3681 | return f"""
3682 | Event: {props.get('event', 'Unknown')}
3683 | Area: {props.get('areaDesc', 'Unknown')}
3684 | Severity: {props.get('severity', 'Unknown')}
3685 | Description: {props.get('description', 'No description available')}
3686 | Instructions: {props.get('instruction', 'No specific instructions provided')}
3687 | """
3688 | ```
3689 |
3690 | ### Implementing tool execution
3691 |
3692 | The tool execution handler is responsible for actually executing the logic of each tool. Let's add it:
3693 |
3694 | ```python
3695 | @mcp.tool()
3696 | async def get_alerts(state: str) -> str:
3697 | """Get weather alerts for a US state.
3698 |
3699 | Args:
3700 | state: Two-letter US state code (e.g. CA, NY)
3701 | """
3702 | url = f"{NWS_API_BASE}/alerts/active/area/{state}"
3703 | data = await make_nws_request(url)
3704 |
3705 | if not data or "features" not in data:
3706 | return "Unable to fetch alerts or no alerts found."
3707 |
3708 | if not data["features"]:
3709 | return "No active alerts for this state."
3710 |
3711 | alerts = [format_alert(feature) for feature in data["features"]]
3712 | return "\n---\n".join(alerts)
3713 |
3714 | @mcp.tool()
3715 | async def get_forecast(latitude: float, longitude: float) -> str:
3716 | """Get weather forecast for a location.
3717 |
3718 | Args:
3719 | latitude: Latitude of the location
3720 | longitude: Longitude of the location
3721 | """
3722 | # First get the forecast grid endpoint
3723 | points_url = f"{NWS_API_BASE}/points/{latitude},{longitude}"
3724 | points_data = await make_nws_request(points_url)
3725 |
3726 | if not points_data:
3727 | return "Unable to fetch forecast data for this location."
3728 |
3729 | # Get the forecast URL from the points response
3730 | forecast_url = points_data["properties"]["forecast"]
3731 | forecast_data = await make_nws_request(forecast_url)
3732 |
3733 | if not forecast_data:
3734 | return "Unable to fetch detailed forecast."
3735 |
3736 | # Format the periods into a readable forecast
3737 | periods = forecast_data["properties"]["periods"]
3738 | forecasts = []
3739 | for period in periods[:5]: # Only show next 5 periods
3740 | forecast = f"""
3741 | {period['name']}:
3742 | Temperature: {period['temperature']}°{period['temperatureUnit']}
3743 | Wind: {period['windSpeed']} {period['windDirection']}
3744 | Forecast: {period['detailedForecast']}
3745 | """
3746 | forecasts.append(forecast)
3747 |
3748 | return "\n---\n".join(forecasts)
3749 | ```
3750 |
3751 | ### Running the server
3752 |
3753 | Finally, let's initialize and run the server:
3754 |
3755 | ```python
3756 | if __name__ == "__main__":
3757 | # Initialize and run the server
3758 | mcp.run(transport='stdio')
3759 | ```
3760 |
3761 | Your server is complete! Run `uv run weather.py` to confirm that everything's working.
3762 |
3763 | Let's now test your server from an existing MCP host, Claude for Desktop.
3764 |
3765 | ## Testing your server with Claude for Desktop
3766 |
3767 | <Note>
3768 | Claude for Desktop is not yet available on Linux. Linux users can proceed to the [Building a client](/quickstart/client) tutorial to build an MCP client that connects to the server we just built.
3769 | </Note>
3770 |
3771 | First, make sure you have Claude for Desktop installed. [You can install the latest version
3772 | here.](https://claude.ai/download) If you already have Claude for Desktop, **make sure it's updated to the latest version.**
3773 |
3774 | We'll need to configure Claude for Desktop for whichever MCP servers you want to use. To do this, open your Claude for Desktop App configuration at `~/Library/Application Support/Claude/claude_desktop_config.json` in a text editor. Make sure to create the file if it doesn't exist.
3775 |
3776 | For example, if you have [VS Code](https://code.visualstudio.com/) installed:
3777 |
3778 | <Tabs>
3779 | <Tab title="MacOS/Linux">
3780 | ```bash
3781 | code ~/Library/Application\ Support/Claude/claude_desktop_config.json
3782 | ```
3783 | </Tab>
3784 |
3785 | <Tab title="Windows">
3786 | ```powershell
3787 | code $env:AppData\Claude\claude_desktop_config.json
3788 | ```
3789 | </Tab>
3790 | </Tabs>
3791 |
3792 | You'll then add your servers in the `mcpServers` key. The MCP UI elements will only show up in Claude for Desktop if at least one server is properly configured.
3793 |
3794 | In this case, we'll add our single weather server like so:
3795 |
3796 | <Tabs>
3797 | <Tab title="MacOS/Linux">
3798 | ```json Python
3799 | {
3800 | "mcpServers": {
3801 | "weather": {
3802 | "command": "uv",
3803 | "args": [
3804 | "--directory",
3805 | "/ABSOLUTE/PATH/TO/PARENT/FOLDER/weather",
3806 | "run",
3807 | "weather.py"
3808 | ]
3809 | }
3810 | }
3811 | }
3812 | ```
3813 | </Tab>
3814 |
3815 | <Tab title="Windows">
3816 | ```json Python
3817 | {
3818 | "mcpServers": {
3819 | "weather": {
3820 | "command": "uv",
3821 | "args": [
3822 | "--directory",
3823 | "C:\\ABSOLUTE\\PATH\\TO\\PARENT\\FOLDER\\weather",
3824 | "run",
3825 | "weather.py"
3826 | ]
3827 | }
3828 | }
3829 | }
3830 | ```
3831 | </Tab>
3832 | </Tabs>
3833 |
3834 | <Warning>
3835 | You may need to put the full path to the `uv` executable in the `command` field. You can get this by running `which uv` on MacOS/Linux or `where uv` on Windows.
3836 | </Warning>
3837 |
3838 | <Note>
3839 | Make sure you pass in the absolute path to your server.
3840 | </Note>
3841 |
3842 | This tells Claude for Desktop:
3843 |
3844 | 1. There's an MCP server named "weather"
3845 | 2. To launch it by running `uv --directory /ABSOLUTE/PATH/TO/PARENT/FOLDER/weather run weather.py`
3846 |
3847 | Save the file, and restart **Claude for Desktop**.
3848 | </Tab>
3849 |
3850 | <Tab title="Node">
3851 | Let's get started with building our weather server! [You can find the complete code for what we'll be building here.](https://github.com/modelcontextprotocol/quickstart-resources/tree/main/weather-server-typescript)
3852 |
3853 | ### Prerequisite knowledge
3854 |
3855 | This quickstart assumes you have familiarity with:
3856 |
3857 | * TypeScript
3858 | * LLMs like Claude
3859 |
3860 | ### System requirements
3861 |
3862 | For TypeScript, make sure you have the latest version of Node installed.
3863 |
3864 | ### Set up your environment
3865 |
3866 | First, let's install Node.js and npm if you haven't already. You can download them from [nodejs.org](https://nodejs.org/).
3867 | Verify your Node.js installation:
3868 |
3869 | ```bash
3870 | node --version
3871 | npm --version
3872 | ```
3873 |
3874 | For this tutorial, you'll need Node.js version 16 or higher.
3875 |
3876 | Now, let's create and set up our project:
3877 |
3878 | <CodeGroup>
3879 | ```bash MacOS/Linux
3880 | # Create a new directory for our project
3881 | mkdir weather
3882 | cd weather
3883 |
3884 | # Initialize a new npm project
3885 | npm init -y
3886 |
3887 | # Install dependencies
3888 | npm install @modelcontextprotocol/sdk zod
3889 | npm install -D @types/node typescript
3890 |
3891 | # Create our files
3892 | mkdir src
3893 | touch src/index.ts
3894 | ```
3895 |
3896 | ```powershell Windows
3897 | # Create a new directory for our project
3898 | md weather
3899 | cd weather
3900 |
3901 | # Initialize a new npm project
3902 | npm init -y
3903 |
3904 | # Install dependencies
3905 | npm install @modelcontextprotocol/sdk zod
3906 | npm install -D @types/node typescript
3907 |
3908 | # Create our files
3909 | md src
3910 | new-item src\index.ts
3911 | ```
3912 | </CodeGroup>
3913 |
3914 | Update your `package.json` to add `"type": "module"` and a build script:
3915 |
3916 | ```json package.json
3917 | {
3918 | "type": "module",
3919 | "bin": {
3920 | "weather": "./build/index.js"
3921 | },
3922 | "scripts": {
3923 |     "build": "tsc && node -e \"require('fs').chmodSync('build/index.js', '755')\""
3924 |   },
3925 |   "files": [
3926 |     "build"
3927 |   ]
3928 | }
3929 | ```
3930 |
3931 | Create a `tsconfig.json` in the root of your project:
3932 |
3933 | ```json tsconfig.json
3934 | {
3935 | "compilerOptions": {
3936 | "target": "ES2022",
3937 | "module": "Node16",
3938 | "moduleResolution": "Node16",
3939 | "outDir": "./build",
3940 | "rootDir": "./src",
3941 | "strict": true,
3942 | "esModuleInterop": true,
3943 | "skipLibCheck": true,
3944 | "forceConsistentCasingInFileNames": true
3945 | },
3946 | "include": ["src/**/*"],
3947 | "exclude": ["node_modules"]
3948 | }
3949 | ```
3950 |
3951 | Now let's dive into building your server.
3952 |
3953 | ## Building your server
3954 |
3955 | ### Importing packages and setting up the instance
3956 |
3957 | Add these to the top of your `src/index.ts`:
3958 |
3959 | ```typescript
3960 | import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
3961 | import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
3962 | import { z } from "zod";
3963 |
3964 | const NWS_API_BASE = "https://api.weather.gov";
3965 | const USER_AGENT = "weather-app/1.0";
3966 |
3967 | // Create server instance
3968 | const server = new McpServer({
3969 | name: "weather",
3970 | version: "1.0.0",
3971 | });
3972 | ```
3973 |
3974 | ### Helper functions
3975 |
3976 | Next, let's add our helper functions for querying and formatting the data from the National Weather Service API:
3977 |
3978 | ```typescript
3979 | // Helper function for making NWS API requests
3980 | async function makeNWSRequest<T>(url: string): Promise<T | null> {
3981 | const headers = {
3982 | "User-Agent": USER_AGENT,
3983 | Accept: "application/geo+json",
3984 | };
3985 |
3986 | try {
3987 | const response = await fetch(url, { headers });
3988 | if (!response.ok) {
3989 | throw new Error(`HTTP error! status: ${response.status}`);
3990 | }
3991 | return (await response.json()) as T;
3992 | } catch (error) {
3993 | console.error("Error making NWS request:", error);
3994 | return null;
3995 | }
3996 | }
3997 |
3998 | interface AlertFeature {
3999 | properties: {
4000 | event?: string;
4001 | areaDesc?: string;
4002 | severity?: string;
4003 | status?: string;
4004 | headline?: string;
4005 | };
4006 | }
4007 |
4008 | // Format alert data
4009 | function formatAlert(feature: AlertFeature): string {
4010 | const props = feature.properties;
4011 | return [
4012 | `Event: ${props.event || "Unknown"}`,
4013 | `Area: ${props.areaDesc || "Unknown"}`,
4014 | `Severity: ${props.severity || "Unknown"}`,
4015 | `Status: ${props.status || "Unknown"}`,
4016 | `Headline: ${props.headline || "No headline"}`,
4017 | "---",
4018 | ].join("\n");
4019 | }
4020 |
4021 | interface ForecastPeriod {
4022 | name?: string;
4023 | temperature?: number;
4024 | temperatureUnit?: string;
4025 | windSpeed?: string;
4026 | windDirection?: string;
4027 | shortForecast?: string;
4028 | }
4029 |
4030 | interface AlertsResponse {
4031 | features: AlertFeature[];
4032 | }
4033 |
4034 | interface PointsResponse {
4035 | properties: {
4036 | forecast?: string;
4037 | };
4038 | }
4039 |
4040 | interface ForecastResponse {
4041 | properties: {
4042 | periods: ForecastPeriod[];
4043 | };
4044 | }
4045 | ```
4046 |
4047 | ### Implementing tool execution
4048 |
4049 | The tool execution handler is responsible for actually executing the logic of each tool. Let's add it:
4050 |
4051 | ```typescript
4052 | // Register weather tools
4053 | server.tool(
4054 | "get-alerts",
4055 | "Get weather alerts for a state",
4056 | {
4057 | state: z.string().length(2).describe("Two-letter state code (e.g. CA, NY)"),
4058 | },
4059 | async ({ state }) => {
4060 | const stateCode = state.toUpperCase();
4061 | const alertsUrl = `${NWS_API_BASE}/alerts?area=${stateCode}`;
4062 | const alertsData = await makeNWSRequest<AlertsResponse>(alertsUrl);
4063 |
4064 | if (!alertsData) {
4065 | return {
4066 | content: [
4067 | {
4068 | type: "text",
4069 | text: "Failed to retrieve alerts data",
4070 | },
4071 | ],
4072 | };
4073 | }
4074 |
4075 | const features = alertsData.features || [];
4076 | if (features.length === 0) {
4077 | return {
4078 | content: [
4079 | {
4080 | type: "text",
4081 | text: `No active alerts for ${stateCode}`,
4082 | },
4083 | ],
4084 | };
4085 | }
4086 |
4087 | const formattedAlerts = features.map(formatAlert);
4088 | const alertsText = `Active alerts for ${stateCode}:\n\n${formattedAlerts.join("\n")}`;
4089 |
4090 | return {
4091 | content: [
4092 | {
4093 | type: "text",
4094 | text: alertsText,
4095 | },
4096 | ],
4097 | };
4098 | },
4099 | );
4100 |
4101 | server.tool(
4102 | "get-forecast",
4103 | "Get weather forecast for a location",
4104 | {
4105 | latitude: z.number().min(-90).max(90).describe("Latitude of the location"),
4106 | longitude: z.number().min(-180).max(180).describe("Longitude of the location"),
4107 | },
4108 | async ({ latitude, longitude }) => {
4109 | // Get grid point data
4110 | const pointsUrl = `${NWS_API_BASE}/points/${latitude.toFixed(4)},${longitude.toFixed(4)}`;
4111 | const pointsData = await makeNWSRequest<PointsResponse>(pointsUrl);
4112 |
4113 | if (!pointsData) {
4114 | return {
4115 | content: [
4116 | {
4117 | type: "text",
4118 | text: `Failed to retrieve grid point data for coordinates: ${latitude}, ${longitude}. This location may not be supported by the NWS API (only US locations are supported).`,
4119 | },
4120 | ],
4121 | };
4122 | }
4123 |
4124 | const forecastUrl = pointsData.properties?.forecast;
4125 | if (!forecastUrl) {
4126 | return {
4127 | content: [
4128 | {
4129 | type: "text",
4130 | text: "Failed to get forecast URL from grid point data",
4131 | },
4132 | ],
4133 | };
4134 | }
4135 |
4136 | // Get forecast data
4137 | const forecastData = await makeNWSRequest<ForecastResponse>(forecastUrl);
4138 | if (!forecastData) {
4139 | return {
4140 | content: [
4141 | {
4142 | type: "text",
4143 | text: "Failed to retrieve forecast data",
4144 | },
4145 | ],
4146 | };
4147 | }
4148 |
4149 | const periods = forecastData.properties?.periods || [];
4150 | if (periods.length === 0) {
4151 | return {
4152 | content: [
4153 | {
4154 | type: "text",
4155 | text: "No forecast periods available",
4156 | },
4157 | ],
4158 | };
4159 | }
4160 |
4161 | // Format forecast periods
4162 | const formattedForecast = periods.map((period: ForecastPeriod) =>
4163 | [
4164 | `${period.name || "Unknown"}:`,
4165 | `Temperature: ${period.temperature || "Unknown"}°${period.temperatureUnit || "F"}`,
4166 | `Wind: ${period.windSpeed || "Unknown"} ${period.windDirection || ""}`,
4167 | `${period.shortForecast || "No forecast available"}`,
4168 | "---",
4169 | ].join("\n"),
4170 | );
4171 |
4172 | const forecastText = `Forecast for ${latitude}, ${longitude}:\n\n${formattedForecast.join("\n")}`;
4173 |
4174 | return {
4175 | content: [
4176 | {
4177 | type: "text",
4178 | text: forecastText,
4179 | },
4180 | ],
4181 | };
4182 | },
4183 | );
4184 | ```
4185 |
4186 | ### Running the server
4187 |
4188 | Finally, implement the main function to run the server:
4189 |
4190 | ```typescript
4191 | async function main() {
4192 | const transport = new StdioServerTransport();
4193 | await server.connect(transport);
4194 | console.error("Weather MCP Server running on stdio");
4195 | }
4196 |
4197 | main().catch((error) => {
4198 | console.error("Fatal error in main():", error);
4199 | process.exit(1);
4200 | });
4201 | ```
4202 |
4203 | Make sure to run `npm run build` to build your server! This is a very important step in getting your server to connect.
4204 |
4205 | Let's now test your server from an existing MCP host, Claude for Desktop.
4206 |
4207 | ## Testing your server with Claude for Desktop
4208 |
4209 | <Note>
4210 | Claude for Desktop is not yet available on Linux. Linux users can proceed to the [Building a client](/quickstart/client) tutorial to build an MCP client that connects to the server we just built.
4211 | </Note>
4212 |
4213 | First, make sure you have Claude for Desktop installed. [You can install the latest version
4214 | here.](https://claude.ai/download) If you already have Claude for Desktop, **make sure it's updated to the latest version.**
4215 |
4216 | We'll need to configure Claude for Desktop for whichever MCP servers you want to use. To do this, open your Claude for Desktop App configuration at `~/Library/Application Support/Claude/claude_desktop_config.json` in a text editor. Make sure to create the file if it doesn't exist.
4217 |
4218 | For example, if you have [VS Code](https://code.visualstudio.com/) installed:
4219 |
4220 | <Tabs>
4221 | <Tab title="MacOS/Linux">
4222 | ```bash
4223 | code ~/Library/Application\ Support/Claude/claude_desktop_config.json
4224 | ```
4225 | </Tab>
4226 |
4227 | <Tab title="Windows">
4228 | ```powershell
4229 | code $env:AppData\Claude\claude_desktop_config.json
4230 | ```
4231 | </Tab>
4232 | </Tabs>
4233 |
4234 | You'll then add your servers in the `mcpServers` key. The MCP UI elements will only show up in Claude for Desktop if at least one server is properly configured.
4235 |
4236 | In this case, we'll add our single weather server like so:
4237 |
4238 | <Tabs>
4239 | <Tab title="MacOS/Linux">
4240 | <CodeGroup>
4241 | ```json Node
4242 | {
4243 | "mcpServers": {
4244 | "weather": {
4245 | "command": "node",
4246 | "args": [
4247 | "/ABSOLUTE/PATH/TO/PARENT/FOLDER/weather/build/index.js"
4248 | ]
4249 | }
4250 | }
4251 | }
4252 | ```
4253 | </CodeGroup>
4254 | </Tab>
4255 |
4256 | <Tab title="Windows">
4257 | <CodeGroup>
4258 | ```json Node
4259 | {
4260 | "mcpServers": {
4261 | "weather": {
4262 | "command": "node",
4263 | "args": [
4264 | "C:\\PATH\\TO\\PARENT\\FOLDER\\weather\\build\\index.js"
4265 | ]
4266 | }
4267 | }
4268 | }
4269 | ```
4270 | </CodeGroup>
4271 | </Tab>
4272 | </Tabs>
4273 |
4274 | This tells Claude for Desktop:
4275 |
4276 | 1. There's an MCP server named "weather"
4277 | 2. Launch it by running `node /ABSOLUTE/PATH/TO/PARENT/FOLDER/weather/build/index.js`
4278 |
4279 | Save the file, and restart **Claude for Desktop**.
4280 | </Tab>
4281 | </Tabs>
4282 |
4283 | ### Test with commands
4284 |
4285 | Let's make sure Claude for Desktop is picking up the two tools we've exposed in our `weather` server. You can do this by looking for the hammer <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/claude-desktop-mcp-hammer-icon.svg" style={{display: 'inline', margin: 0, height: '1.3em'}} /> icon:
4286 |
4287 | <Frame>
4288 | <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/visual-indicator-mcp-tools.png" />
4289 | </Frame>
4290 |
4291 | After clicking on the hammer icon, you should see two tools listed:
4292 |
4293 | <Frame>
4294 | <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/available-mcp-tools.png" />
4295 | </Frame>
4296 |
4297 | If your server isn't being picked up by Claude for Desktop, proceed to the [Troubleshooting](#troubleshooting) section for debugging tips.
4298 |
4299 | If the hammer icon has shown up, you can now test your server by running the following commands in Claude for Desktop:
4300 |
4301 | * What's the weather in Sacramento?
4302 | * What are the active weather alerts in Texas?
4303 |
4304 | <Frame>
4305 | <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/current-weather.png" />
4306 | </Frame>
4307 |
4308 | <Frame>
4309 | <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/weather-alerts.png" />
4310 | </Frame>
4311 |
4312 | <Note>
4313 |   Since this uses the US National Weather Service, the queries will only work for US locations.
4314 | </Note>
4315 |
4316 | ## What's happening under the hood
4317 |
4318 | When you ask a question:
4319 |
4320 | 1. The client sends your question to Claude
4321 | 2. Claude analyzes the available tools and decides which one(s) to use
4322 | 3. The client executes the chosen tool(s) through the MCP server
4323 | 4. The results are sent back to Claude
4324 | 5. Claude formulates a natural language response
4325 | 6. The response is displayed to you!
4326 |
4327 | ## Troubleshooting
4328 |
4329 | <AccordionGroup>
4330 | <Accordion title="Claude for Desktop Integration Issues">
4331 | **Getting logs from Claude for Desktop**
4332 |
4333 | Claude.app logging related to MCP is written to log files in `~/Library/Logs/Claude`:
4334 |
4335 | * `mcp.log` will contain general logging about MCP connections and connection failures.
4336 | * Files named `mcp-server-SERVERNAME.log` will contain error (stderr) logging from the named server.
4337 |
4338 | You can run the following command to list recent logs and follow along with any new ones:
4339 |
4340 | ```bash
4341 | # Check Claude's logs for errors
4342 | tail -n 20 -f ~/Library/Logs/Claude/mcp*.log
4343 | ```
4344 |
4345 | **Server not showing up in Claude**
4346 |
4347 | 1. Check your `claude_desktop_config.json` file syntax
4348 | 2. Make sure the path to your project is absolute and not relative
4349 | 3. Restart Claude for Desktop completely
4350 |
4351 | **Tool calls failing silently**
4352 |
4353 | If Claude attempts to use the tools but they fail:
4354 |
4355 | 1. Check Claude's logs for errors
4356 | 2. Verify your server builds and runs without errors
4357 | 3. Try restarting Claude for Desktop
4358 |
4359 | **None of this is working. What do I do?**
4360 |
4361 | Please refer to our [debugging guide](/docs/tools/debugging) for better debugging tools and more detailed guidance.
4362 | </Accordion>
4363 |
4364 | <Accordion title="Weather API Issues">
4365 | **Error: Failed to retrieve grid point data**
4366 |
4367 | This usually means either:
4368 |
4369 | 1. The coordinates are outside the US
4370 | 2. The NWS API is having issues
4371 | 3. You're being rate limited
4372 |
4373 | Fix:
4374 |
4375 | * Verify you're using US coordinates
4376 |     * Add a small delay between requests (see the sketch below)
4377 | * Check the NWS API status page
4378 |
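    For the delay, a hedged sketch (the wrapper name is illustrative) that retries the `make_nws_request` helper from the Python version of the server, pausing a little longer after each failed attempt:

    ```python
    import asyncio
    from typing import Any

    async def make_nws_request_with_retry(url: str, retries: int = 3) -> dict[str, Any] | None:
        """Retry the NWS request, sleeping a bit longer after each failure."""
        for attempt in range(retries):
            data = await make_nws_request(url)  # helper defined earlier in weather.py
            if data is not None:
                return data
            if attempt < retries - 1:
                await asyncio.sleep(1.0 * (attempt + 1))  # back off: 1s, then 2s
        return None
    ```
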
4379 | **Error: No active alerts for \[STATE]**
4380 |
4381 | This isn't an error - it just means there are no current weather alerts for that state. Try a different state or check during severe weather.
4382 | </Accordion>
4383 | </AccordionGroup>
4384 |
4385 | <Note>
4386 | For more advanced troubleshooting, check out our guide on [Debugging MCP](/docs/tools/debugging)
4387 | </Note>
4388 |
4389 | ## Next steps
4390 |
4391 | <CardGroup cols={2}>
4392 | <Card title="Building a client" icon="outlet" href="/quickstart/client">
4393 | Learn how to build your own MCP client that can connect to your server
4394 | </Card>
4395 |
4396 | <Card title="Example servers" icon="grid" href="/examples">
4397 | Check out our gallery of official MCP servers and implementations
4398 | </Card>
4399 |
4400 | <Card title="Debugging Guide" icon="bug" href="/docs/tools/debugging">
4401 | Learn how to effectively debug MCP servers and integrations
4402 | </Card>
4403 |
4404 | <Card title="Building MCP with LLMs" icon="comments" href="/building-mcp-with-llms">
4405 | Learn how to use LLMs like Claude to speed up your MCP development
4406 | </Card>
4407 | </CardGroup>
4408 |
4409 |
4410 | # For Claude Desktop Users
4411 |
4412 | Get started using pre-built servers in Claude for Desktop.
4413 |
4414 | In this tutorial, you will extend [Claude for Desktop](https://claude.ai/download) so that it can read from your computer's file system, write new files, move files, and even search files.
4415 |
4416 | <Frame>
4417 | <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/quickstart-filesystem.png" />
4418 | </Frame>
4419 |
4420 | Don't worry — it will ask you for your permission before executing these actions!
4421 |
4422 | ## 1. Download Claude for Desktop
4423 |
4424 | Start by downloading [Claude for Desktop](https://claude.ai/download), choosing either macOS or Windows. (Linux is not yet supported for Claude for Desktop.)
4425 |
4426 | Follow the installation instructions.
4427 |
4428 | If you already have Claude for Desktop, make sure it's on the latest version by clicking on the Claude menu on your computer and selecting "Check for Updates..."
4429 |
4430 | <Accordion title="Why Claude for Desktop and not Claude.ai?">
4431 | Because servers are locally run, MCP currently only supports desktop hosts. Remote hosts are in active development.
4432 | </Accordion>
4433 |
4434 | ## 2. Add the Filesystem MCP Server
4435 |
4436 | To add this filesystem functionality, we will be installing a pre-built [Filesystem MCP Server](https://github.com/modelcontextprotocol/servers/tree/main/src/filesystem) to Claude for Desktop. This is one of dozens of [servers](https://github.com/modelcontextprotocol/servers/tree/main) created by Anthropic and the community.
4437 |
4438 | Get started by opening up the Claude menu on your computer and selecting "Settings..." Please note that these are not the Claude Account Settings found in the app window itself.
4439 |
4440 | This is what it should look like on a Mac:
4441 |
4442 | <Frame style={{ textAlign: 'center' }}>
4443 | <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/quickstart-menu.png" width="400" />
4444 | </Frame>
4445 |
4446 | Click on "Developer" in the lefthand bar of the Settings pane, and then click on "Edit Config":
4447 |
4448 | <Frame>
4449 | <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/quickstart-developer.png" />
4450 | </Frame>
4451 |
4452 | This will create a configuration file at:
4453 |
4454 | * macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
4455 | * Windows: `%APPDATA%\Claude\claude_desktop_config.json`
4456 |
4457 | if you don't already have one, and will display the file in your file system.
4458 |
4459 | Open up the configuration file in any text editor. Replace the file contents with this:
4460 |
4461 | <Tabs>
4462 | <Tab title="MacOS/Linux">
4463 | ```json
4464 | {
4465 | "mcpServers": {
4466 | "filesystem": {
4467 | "command": "npx",
4468 | "args": [
4469 | "-y",
4470 | "@modelcontextprotocol/server-filesystem",
4471 | "/Users/username/Desktop",
4472 | "/Users/username/Downloads"
4473 | ]
4474 | }
4475 | }
4476 | }
4477 | ```
4478 | </Tab>
4479 |
4480 | <Tab title="Windows">
4481 | ```json
4482 | {
4483 | "mcpServers": {
4484 | "filesystem": {
4485 | "command": "npx",
4486 | "args": [
4487 | "-y",
4488 | "@modelcontextprotocol/server-filesystem",
4489 | "C:\\Users\\username\\Desktop",
4490 | "C:\\Users\\username\\Downloads"
4491 | ]
4492 | }
4493 | }
4494 | }
4495 | ```
4496 | </Tab>
4497 | </Tabs>
4498 |
4499 | Make sure to replace `username` with your computer's username. The paths should point to valid directories that you want Claude to be able to access and modify. It's set up to work for Desktop and Downloads, but you can add more paths as well.
4500 |
4501 | You will also need [Node.js](https://nodejs.org) on your computer for this to run properly. To verify you have Node installed, open the command line on your computer.
4502 |
4503 | * On macOS, open the Terminal from your Applications folder
4504 | * On Windows, press Windows + R, type "cmd", and press Enter
4505 |
4506 | Once in the command line, verify you have Node installed by entering the following command:
4507 |
4508 | ```bash
4509 | node --version
4510 | ```
4511 |
4512 | If you get an error saying "command not found" or "node is not recognized", download Node from [nodejs.org](https://nodejs.org/).
4513 |
4514 | <Tip>
4515 | **How does the configuration file work?**
4516 |
4517 |   This configuration file tells Claude for Desktop which MCP servers to start up every time you start the application. In this case, we have added one server called "filesystem" that will use the Node `npx` command to install and run `@modelcontextprotocol/server-filesystem`. This server, described [here](https://github.com/modelcontextprotocol/servers/tree/main/src/filesystem), will let you access your file system in Claude for Desktop.
4518 | </Tip>
4519 |
4520 | <Warning>
4521 | **Command Privileges**
4522 |
4523 | Claude for Desktop will run the commands in the configuration file with the permissions of your user account, and access to your local files. Only add commands if you understand and trust the source.
4524 | </Warning>
4525 |
4526 | ## 3. Restart Claude
4527 |
4528 | After updating your configuration file, you need to restart Claude for Desktop.
4529 |
4530 | Upon restarting, you should see a hammer <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/claude-desktop-mcp-hammer-icon.svg" style={{display: 'inline', margin: 0, height: '1.3em'}} /> icon in the bottom right corner of the input box:
4531 |
4532 | <Frame>
4533 | <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/quickstart-hammer.png" />
4534 | </Frame>
4535 |
4536 | After clicking on the hammer icon, you should see the tools that come with the Filesystem MCP Server:
4537 |
4538 | <Frame style={{ textAlign: 'center' }}>
4539 | <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/quickstart-tools.png" width="400" />
4540 | </Frame>
4541 |
4542 | If your server isn't being picked up by Claude for Desktop, proceed to the [Troubleshooting](#troubleshooting) section for debugging tips.
4543 |
4544 | ## 4. Try it out!
4545 |
4546 | You can now talk to Claude and ask it about your filesystem. It should know when to call the relevant tools.
4547 |
4548 | Things you might try asking Claude:
4549 |
4550 | * Can you write a poem and save it to my desktop?
4551 | * What are some work-related files in my downloads folder?
4552 | * Can you take all the images on my desktop and move them to a new folder called "Images"?
4553 |
4554 | As needed, Claude will call the relevant tools and seek your approval before taking an action:
4555 |
4556 | <Frame style={{ textAlign: 'center' }}>
4557 | <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/quickstart-approve.png" width="500" />
4558 | </Frame>
4559 |
4560 | ## Troubleshooting
4561 |
4562 | <AccordionGroup>
4563 | <Accordion title="Server not showing up in Claude / hammer icon missing">
4564 | 1. Restart Claude for Desktop completely
4565 | 2. Check your `claude_desktop_config.json` file syntax
4566 | 3. Make sure the file paths included in `claude_desktop_config.json` are valid and that they are absolute and not relative
4567 | 4. Look at [logs](#getting-logs-from-claude-for-desktop) to see why the server is not connecting
4568 | 5. In your command line, try manually running the server (replacing `username` as you did in `claude_desktop_config.json`) to see if you get any errors:
4569 |
4570 | <Tabs>
4571 | <Tab title="MacOS/Linux">
4572 | ```bash
4573 | npx -y @modelcontextprotocol/server-filesystem /Users/username/Desktop /Users/username/Downloads
4574 | ```
4575 | </Tab>
4576 |
4577 | <Tab title="Windows">
4578 | ```bash
4579 | npx -y @modelcontextprotocol/server-filesystem C:\Users\username\Desktop C:\Users\username\Downloads
4580 | ```
4581 | </Tab>
4582 | </Tabs>
4583 | </Accordion>
4584 |
4585 | <Accordion title="Getting logs from Claude for Desktop">
4586 | Claude.app logging related to MCP is written to log files in:
4587 |
4588 | * macOS: `~/Library/Logs/Claude`
4589 |
4590 | * Windows: `%APPDATA%\Claude\logs`
4591 |
4592 | * `mcp.log` will contain general logging about MCP connections and connection failures.
4593 |
4594 | * Files named `mcp-server-SERVERNAME.log` will contain error (stderr) logging from the named server.
4595 |
4596 | You can run the following command to list recent logs and follow along with any new ones (on Windows, it will only show recent logs):
4597 |
4598 | <Tabs>
4599 | <Tab title="MacOS/Linux">
4600 | ```bash
4601 | # Check Claude's logs for errors
4602 | tail -n 20 -f ~/Library/Logs/Claude/mcp*.log
4603 | ```
4604 | </Tab>
4605 |
4606 | <Tab title="Windows">
4607 | ```bash
4608 | type "%APPDATA%\Claude\logs\mcp*.log"
4609 | ```
4610 | </Tab>
4611 | </Tabs>
4612 | </Accordion>
4613 |
4614 | <Accordion title="Tool calls failing silently">
4615 | If Claude attempts to use the tools but they fail:
4616 |
4617 | 1. Check Claude's logs for errors
4618 | 2. Verify your server builds and runs without errors
4619 | 3. Try restarting Claude for Desktop
4620 | </Accordion>
4621 |
4622 | <Accordion title="None of this is working. What do I do?">
4623 | Please refer to our [debugging guide](/docs/tools/debugging) for better debugging tools and more detailed guidance.
4624 | </Accordion>
4625 |
4626 | <Accordion title="ENOENT error and `${APPDATA}` in paths on Windows">
4627 | If your configured server fails to load, and you see within its logs an error referring to `${APPDATA}` within a path, you may need to add the expanded value of `%APPDATA%` to your `env` key in `claude_desktop_config.json`:
4628 |
4629 | ```json
4630 | {
4631 | "brave-search": {
4632 | "command": "npx",
4633 | "args": ["-y", "@modelcontextprotocol/server-brave-search"],
4634 | "env": {
4635 | "APPDATA": "C:\\Users\\user\\AppData\\Roaming\\",
4636 | "BRAVE_API_KEY": "..."
4637 | }
4638 | }
4639 | }
4640 | ```
4641 |
4642 | With this change in place, launch Claude Desktop once again.
4643 |
4644 | <Warning>
4645 | **NPM should be installed globally**
4646 |
4647 | The `npx` command may continue to fail if you have not installed NPM globally. If NPM is already installed globally, you will find `%APPDATA%\npm` exists on your system. If not, you can install NPM globally by running the following command:
4648 |
4649 | ```bash
4650 | npm install -g npm
4651 | ```
4652 | </Warning>
4653 | </Accordion>
4654 | </AccordionGroup>
4655 |
4656 | ## Next steps
4657 |
4658 | <CardGroup cols={2}>
4659 | <Card title="Explore other servers" icon="grid" href="/examples">
4660 | Check out our gallery of official MCP servers and implementations
4661 | </Card>
4662 |
4663 | <Card title="Build your own server" icon="code" href="/quickstart/server">
4664 | Now build your own custom server to use in Claude for Desktop and other clients
4665 | </Card>
4666 | </CardGroup>
4667 |
4668 |
4669 | # Building MCP with LLMs
4670 |
4671 | Speed up your MCP development using LLMs such as Claude!
4672 |
4673 | This guide will help you use LLMs to help you build custom Model Context Protocol (MCP) servers and clients. We'll be focusing on Claude for this tutorial, but you can do this with any frontier LLM.
4674 |
4675 | ## Preparing the documentation
4676 |
4677 | Before starting, gather the necessary documentation to help Claude understand MCP:
4678 |
4679 | 1. Visit [https://modelcontextprotocol.io/llms-full.txt](https://modelcontextprotocol.io/llms-full.txt) and copy the full documentation text
4680 | 2. Navigate to either the [MCP TypeScript SDK](https://github.com/modelcontextprotocol/typescript-sdk) or [Python SDK repository](https://github.com/modelcontextprotocol/python-sdk)
4681 | 3. Copy the README files and other relevant documentation
4682 | 4. Paste these documents into your conversation with Claude
4683 |
4684 | ## Describing your server
4685 |
4686 | Once you've provided the documentation, clearly describe to Claude what kind of server you want to build. Be specific about:
4687 |
4688 | * What resources your server will expose
4689 | * What tools it will provide
4690 | * Any prompts it should offer
4691 | * What external systems it needs to interact with
4692 |
4693 | For example:
4694 |
4695 | ```
4696 | Build an MCP server that:
4697 | - Connects to my company's PostgreSQL database
4698 | - Exposes table schemas as resources
4699 | - Provides tools for running read-only SQL queries
4700 | - Includes prompts for common data analysis tasks
4701 | ```
4702 |
4703 | ## Working with Claude
4704 |
4705 | When working with Claude on MCP servers:
4706 |
4707 | 1. Start with the core functionality first, then iterate to add more features
4708 | 2. Ask Claude to explain any parts of the code you don't understand
4709 | 3. Request modifications or improvements as needed
4710 | 4. Have Claude help you test the server and handle edge cases
4711 |
4712 | Claude can help implement all the key MCP features:
4713 |
4714 | * Resource management and exposure
4715 | * Tool definitions and implementations
4716 | * Prompt templates and handlers
4717 | * Error handling and logging
4718 | * Connection and transport setup
4719 |
4720 | ## Best practices
4721 |
4722 | When building MCP servers with Claude:
4723 |
4724 | * Break down complex servers into smaller pieces
4725 | * Test each component thoroughly before moving on
4726 | * Keep security in mind - validate inputs and limit access appropriately
4727 | * Document your code well for future maintenance
4728 | * Follow MCP protocol specifications carefully
4729 |
4730 | ## Next steps
4731 |
4732 | After Claude helps you build your server:
4733 |
4734 | 1. Review the generated code carefully
4735 | 2. Test the server with the MCP Inspector tool
4736 | 3. Connect it to Claude.app or other MCP clients
4737 | 4. Iterate based on real usage and feedback
4738 |
4739 | Remember that Claude can help you modify and improve your server as requirements change over time.
4740 |
4741 | Need more guidance? Just ask Claude specific questions about implementing MCP features or troubleshooting issues that arise.
4742 |
```