This is page 3 of 3. Use http://codebase.md/felores/airtable-mcp?lines=true&page={x} to view the full context.

# Directory Structure

```
├── .gitignore
├── .npmignore
├── docs
│   ├── Airtable API Documentation.md
│   ├── Airtable API field types and cell values.md
│   ├── Airtable_MCP_server_guide_for_LLMs.md
│   ├── mcp-llm-guide.md
│   └── MCP-llms-full.md
├── LICENSE
├── package-lock.json
├── package.json
├── prompts
│   ├── project-knowledge.md
│   └── system-prompt.md
├── README.md
├── scripts
│   └── post-build.js
├── src
│   ├── index.ts
│   └── types.ts
└── tsconfig.json
```

# Files

--------------------------------------------------------------------------------
/docs/MCP-llms-full.md:
--------------------------------------------------------------------------------

```markdown
   1 | # Example Clients
   2 | 
   3 | A list of applications that support MCP integrations
   4 | 
   5 | This page provides an overview of applications that support the Model Context Protocol (MCP). Each client may support different MCP features, allowing for varying levels of integration with MCP servers.
   6 | 
   7 | ## Feature support matrix
   8 | 
   9 | | Client                                     | [Resources] | [Prompts] | [Tools] | [Sampling] | Roots | Notes                                                              |
  10 | | ------------------------------------------ | ----------- | --------- | ------- | ---------- | ----- | ------------------------------------------------------------------ |
  11 | | [Claude Desktop App][Claude]               | ✅           | ✅         | ✅       | ❌          | ❌     | Full support for all MCP features                                  |
  12 | | [Zed][Zed]                                 | ❌           | ✅         | ❌       | ❌          | ❌     | Prompts appear as slash commands                                   |
  13 | | [Sourcegraph Cody][Cody]                   | ✅           | ❌         | ❌       | ❌          | ❌     | Supports resources through OpenCTX                                 |
  14 | | [Firebase Genkit][Genkit]                  | ⚠️          | ✅         | ✅       | ❌          | ❌     | Supports resource list and lookup through tools.                   |
  15 | | [Continue][Continue]                       | ✅           | ✅         | ✅       | ❌          | ❌     | Full support for all MCP features                                  |
  16 | | [GenAIScript][GenAIScript]                 | ❌           | ❌         | ✅       | ❌          | ❌     | Supports tools.                                                    |
  17 | | [Cline][Cline]                             | ✅           | ❌         | ✅       | ❌          | ❌     | Supports tools and resources.                                      |
  18 | | [LibreChat][LibreChat]                     | ❌           | ❌         | ✅       | ❌          | ❌     | Supports tools for Agents                                          |
  19 | | [TheiaAI/TheiaIDE][TheiaAI/TheiaIDE]       | ❌           | ❌         | ✅       | ❌          | ❌     | Supports tools for Agents in Theia AI and the AI-powered Theia IDE |
  20 | | [Superinterface][Superinterface]           | ❌           | ❌         | ✅       | ❌          | ❌     | Supports tools                                                     |
  21 | | [5ire][5ire]                               | ❌           | ❌         | ✅       | ❌          | ❌     | Supports tools.                                                    |
  22 | | [Bee Agent Framework][Bee Agent Framework] | ❌           | ❌         | ✅       | ❌          | ❌     | Supports tools in agentic workflows.                               |
  23 | 
  24 | [Claude]: https://claude.ai/download
  25 | 
  26 | [Zed]: https://zed.dev
  27 | 
  28 | [Cody]: https://sourcegraph.com/cody
  29 | 
  30 | [Genkit]: https://github.com/firebase/genkit
  31 | 
  32 | [Continue]: https://github.com/continuedev/continue
  33 | 
  34 | [GenAIScript]: https://microsoft.github.io/genaiscript/reference/scripts/mcp-tools/
  35 | 
  36 | [Cline]: https://github.com/cline/cline
  37 | 
  38 | [LibreChat]: https://github.com/danny-avila/LibreChat
  39 | 
  40 | [TheiaAI/TheiaIDE]: https://eclipsesource.com/blogs/2024/12/19/theia-ide-and-theia-ai-support-mcp/
  41 | 
  42 | [Superinterface]: https://superinterface.ai
  43 | 
  44 | [5ire]: https://github.com/nanbingxyz/5ire
  45 | 
  46 | [Bee Agent Framework]: https://i-am-bee.github.io/bee-agent-framework
  47 | 
  48 | [Resources]: https://modelcontextprotocol.io/docs/concepts/resources
  49 | 
  50 | [Prompts]: https://modelcontextprotocol.io/docs/concepts/prompts
  51 | 
  52 | [Tools]: https://modelcontextprotocol.io/docs/concepts/tools
  53 | 
  54 | [Sampling]: https://modelcontextprotocol.io/docs/concepts/sampling
  55 | 
  56 | ## Client details
  57 | 
  58 | ### Claude Desktop App
  59 | 
  60 | The Claude desktop application provides comprehensive support for MCP, enabling deep integration with local tools and data sources.
  61 | 
  62 | **Key features:**
  63 | 
  64 | *   Full support for resources, allowing attachment of local files and data
  65 | *   Support for prompt templates
  66 | *   Tool integration for executing commands and scripts
  67 | *   Local server connections for enhanced privacy and security
  68 | 
  69 | > ⓘ Note: The Claude.ai web application does not currently support MCP. MCP features are only available in the desktop application.
  70 | 
  71 | ### Zed
  72 | 
  73 | [Zed](https://zed.dev/docs/assistant/model-context-protocol) is a high-performance code editor with built-in MCP support, focusing on prompt templates and tool integration.
  74 | 
  75 | **Key features:**
  76 | 
  77 | *   Prompt templates surface as slash commands in the editor
  78 | *   Tool integration for enhanced coding workflows
  79 | *   Tight integration with editor features and workspace context
  80 | *   Does not support MCP resources
  81 | 
  82 | ### Sourcegraph Cody
  83 | 
  84 | [Cody](https://openctx.org/docs/providers/modelcontextprotocol) is Sourcegraph's AI coding assistant, which implements MCP through OpenCTX.
  85 | 
  86 | **Key features:**
  87 | 
  88 | *   Support for MCP resources
  89 | *   Integration with Sourcegraph's code intelligence
  90 | *   Uses OpenCTX as an abstraction layer
  91 | *   Future support planned for additional MCP features
  92 | 
  93 | ### Firebase Genkit
  94 | 
  95 | [Genkit](https://github.com/firebase/genkit) is Firebase's SDK for building and integrating GenAI features into applications. The [genkitx-mcp](https://github.com/firebase/genkit/tree/main/js/plugins/mcp) plugin enables consuming MCP servers as a client or creating MCP servers from Genkit tools and prompts.
  96 | 
  97 | **Key features:**
  98 | 
  99 | *   Client support for tools and prompts (resources partially supported)
 100 | *   Rich discovery with support in Genkit's Dev UI playground
 101 | *   Seamless interoperability with Genkit's existing tools and prompts
 102 | *   Works across a wide variety of GenAI models from top providers
 103 | 
 104 | ### Continue
 105 | 
 106 | [Continue](https://github.com/continuedev/continue) is an open-source AI code assistant, with built-in support for all MCP features.
 107 | 
 108 | **Key features**
 109 | 
 110 | *   Type "@" to mention MCP resources
 111 | *   Prompt templates surface as slash commands
 112 | *   Use both built-in and MCP tools directly in chat
 113 | *   Supports VS Code and JetBrains IDEs, with any LLM
 114 | 
 115 | ### GenAIScript
 116 | 
 117 | Programmatically assemble prompts for LLMs using [GenAIScript](https://microsoft.github.io/genaiscript/), and orchestrate LLMs, tools, and data in JavaScript.
 118 | 
 119 | **Key features:**
 120 | 
 121 | *   JavaScript toolbox to work with prompts
 122 | *   Abstractions that make prompt assembly easy and productive
 123 | *   Seamless Visual Studio Code integration
 124 | 
 125 | ### Cline
 126 | 
 127 | [Cline](https://github.com/cline/cline) is an autonomous coding agent in VS Code that edits files, runs commands, uses a browser, and more, with your permission at each step.
 128 | 
 129 | **Key features:**
 130 | 
 131 | *   Create and add tools through natural language (e.g. "add a tool that searches the web")
 132 | *   Share custom MCP servers Cline creates with others via the `~/Documents/Cline/MCP` directory
 133 | *   Displays configured MCP servers along with their tools, resources, and any error logs
 134 | 
 135 | ### LibreChat
 136 | 
 137 | [LibreChat](https://github.com/danny-avila/LibreChat) is an open-source, customizable AI chat UI that supports multiple AI providers, now including MCP integration.
 138 | 
 139 | **Key features:**
 140 | 
 141 | *   Extend current tool ecosystem, including [Code Interpreter](https://www.librechat.ai/docs/features/code_interpreter) and Image generation tools, through MCP servers
 142 | *   Add tools to customizable [Agents](https://www.librechat.ai/docs/features/agents), using a variety of LLMs from top providers
 143 | *   Open-source and self-hostable, with secure multi-user support
 144 | *   Future roadmap includes expanded MCP feature support
 145 | 
 146 | ### TheiaAI/TheiaIDE
 147 | 
 148 | [Theia AI](https://eclipsesource.com/blogs/2024/10/07/introducing-theia-ai/) is a framework for building AI-enhanced tools and IDEs. The [AI-powered Theia IDE](https://eclipsesource.com/blogs/2024/10/08/introducting-ai-theia-ide/) is an open and flexible development environment built on Theia AI.
 149 | 
 150 | **Key features:**
 151 | 
 152 | *   **Tool Integration**: Theia AI enables AI agents, including those in the Theia IDE, to utilize MCP servers for seamless tool interaction.
 153 | *   **Customizable Prompts**: The Theia IDE allows users to define and adapt prompts, dynamically integrating MCP servers for tailored workflows.
 154 | *   **Custom agents**: The Theia IDE supports creating custom agents that leverage MCP capabilities, enabling users to design dedicated workflows on the fly.
 155 | 
 156 | The MCP integration in Theia AI and the Theia IDE provides users with flexibility, making them powerful platforms for exploring and adapting MCP.
 157 | 
 158 | **Learn more:**
 159 | 
 160 | *   [Theia IDE and Theia AI MCP Announcement](https://eclipsesource.com/blogs/2024/12/19/theia-ide-and-theia-ai-support-mcp/)
 161 | *   [Download the AI-powered Theia IDE](https://theia-ide.org/)
 162 | 
 163 | ### Superinterface
 164 | 
 165 | [Superinterface](https://superinterface.ai) is AI infrastructure and a developer platform to build in-app AI assistants with support for MCP, interactive components, client-side function calling and more.
 166 | 
 167 | **Key features:**
 168 | 
 169 | *   Use tools from MCP servers in assistants embedded via React components or script tags
 170 | *   SSE transport support
 171 | *   Use any AI model from any AI provider (OpenAI, Anthropic, Ollama, others)
 172 | 
 173 | ### 5ire
 174 | 
 175 | [5ire](https://github.com/nanbingxyz/5ire) is an open source cross-platform desktop AI assistant that supports tools through MCP servers.
 176 | 
 177 | **Key features:**
 178 | 
 179 | *   Built-in MCP servers can be quickly enabled and disabled.
 180 | *   Users can add more servers by modifying the configuration file.
 181 | *   It is open-source and user-friendly, suitable for beginners.
 182 | *   MCP support will continue to be improved over time.
 183 | 
 184 | ### Bee Agent Framework
 185 | 
 186 | [Bee Agent Framework](https://i-am-bee.github.io/bee-agent-framework) is an open-source framework for building, deploying, and serving powerful agentic workflows at scale. The framework includes the **MCP Tool**, a native feature that simplifies the integration of MCP servers into agentic workflows.
 187 | 
 188 | **Key features:**
 189 | 
 190 | *   Seamlessly incorporate MCP tools into agentic workflows.
 191 | *   Quickly instantiate framework-native tools from connected MCP client(s).
 192 | *   Planned future support for agentic MCP capabilities.
 193 | 
 194 | **Learn more:**
 195 | 
 196 | *   [Example of using MCP tools in agentic workflow](https://i-am-bee.github.io/bee-agent-framework/#/tools?id=using-the-mcptool-class)
 197 | 
 198 | ## Adding MCP support to your application
 199 | 
 200 | If you've added MCP support to your application, we encourage you to submit a pull request to add it to this list. MCP integration can provide your users with powerful contextual AI capabilities and make your application part of the growing MCP ecosystem.
 201 | 
 202 | Benefits of adding MCP support:
 203 | 
 204 | *   Enable users to bring their own context and tools
 205 | *   Join a growing ecosystem of interoperable AI applications
 206 | *   Provide users with flexible integration options
 207 | *   Support local-first AI workflows
 208 | 
 209 | To get started with implementing MCP in your application, check out our [Python](https://github.com/modelcontextprotocol/python-sdk) or [TypeScript SDK documentation](https://github.com/modelcontextprotocol/typescript-sdk).
 210 | 
 211 | ## Updates and corrections
 212 | 
 213 | This list is maintained by the community. If you notice any inaccuracies or would like to update information about MCP support in your application, please submit a pull request or [open an issue in our documentation repository](https://github.com/modelcontextprotocol/docs/issues).
 214 | 
 215 | 
 216 | # Contributing
 217 | 
 218 | How to participate in Model Context Protocol development
 219 | 
 220 | We welcome contributions from the community! Please review our [contributing guidelines](https://github.com/modelcontextprotocol/.github/blob/main/CONTRIBUTING.md) for details on how to submit changes.
 221 | 
 222 | All contributors must adhere to our [Code of Conduct](https://github.com/modelcontextprotocol/.github/blob/main/CODE_OF_CONDUCT.md).
 223 | 
 224 | For questions and discussions, please use [GitHub Discussions](https://github.com/orgs/modelcontextprotocol/discussions).
 225 | 
 226 | 
 227 | # Roadmap
 228 | 
 229 | Our plans for evolving Model Context Protocol (H1 2025)
 230 | 
 231 | The Model Context Protocol is rapidly evolving. This page outlines our current thinking on key priorities and future direction for **the first half of 2025**, though these may change significantly as the project develops.
 232 | 
 233 | <Note>The ideas presented here are not commitments—we may solve these challenges differently than described, or some may not materialize at all. This is also not an *exhaustive* list; we may incorporate work that isn't mentioned here.</Note>
 234 | 
 235 | We encourage community participation! Each section links to relevant discussions where you can learn more and contribute your thoughts.
 236 | 
 237 | ## Remote MCP Support
 238 | 
 239 | Our top priority is enabling [remote MCP connections](https://github.com/modelcontextprotocol/specification/discussions/102), allowing clients to securely connect to MCP servers over the internet. Key initiatives include:
 240 | 
 241 | *   [**Authentication & Authorization**](https://github.com/modelcontextprotocol/specification/discussions/64): Adding standardized auth capabilities, particularly focused on OAuth 2.0 support.
 242 | 
 243 | *   [**Service Discovery**](https://github.com/modelcontextprotocol/specification/discussions/69): Defining how clients can discover and connect to remote MCP servers.
 244 | 
 245 | *   [**Stateless Operations**](https://github.com/modelcontextprotocol/specification/discussions/102): Exploring whether MCP can also support serverless environments, which will need to operate in a mostly stateless way.
 246 | 
 247 | ## Reference Implementations
 248 | 
 249 | To help developers build with MCP, we want to offer documentation for:
 250 | 
 251 | *   **Client Examples**: Comprehensive reference client implementation(s), demonstrating all protocol features
 252 | *   **Protocol Drafting**: Streamlined process for proposing and incorporating new protocol features
 253 | 
 254 | ## Distribution & Discovery
 255 | 
 256 | Looking ahead, we're exploring ways to make MCP servers more accessible. Some areas we may investigate include:
 257 | 
 258 | *   **Package Management**: Standardized packaging format for MCP servers
 259 | *   **Installation Tools**: Simplified server installation across MCP clients
 260 | *   **Sandboxing**: Improved security through server isolation
 261 | *   **Server Registry**: A common directory for discovering available MCP servers
 262 | 
 263 | ## Agent Support
 264 | 
 265 | We're expanding MCP's capabilities for [complex agentic workflows](https://github.com/modelcontextprotocol/specification/discussions/111), particularly focusing on:
 266 | 
 267 | *   [**Hierarchical Agent Systems**](https://github.com/modelcontextprotocol/specification/discussions/94): Improved support for trees of agents through namespacing and topology awareness.
 268 | 
 269 | *   [**Interactive Workflows**](https://github.com/modelcontextprotocol/specification/issues/97): Better handling of user permissions and information requests across agent hierarchies, and ways to send output to users instead of models.
 270 | 
 271 | *   [**Streaming Results**](https://github.com/modelcontextprotocol/specification/issues/117): Real-time updates from long-running agent operations.
 272 | 
 273 | ## Broader Ecosystem
 274 | 
 275 | We're also invested in:
 276 | 
 277 | *   **Community-Led Standards Development**: Fostering a collaborative ecosystem where all AI providers can help shape MCP as an open standard through equal participation and shared governance, ensuring it meets the needs of diverse AI applications and use cases.
 278 | *   [**Additional Modalities**](https://github.com/modelcontextprotocol/specification/discussions/88): Expanding beyond text to support audio, video, and other formats.
 279 | *   **Standardization**: Considering formal standardization through a standards body.
 280 | 
 281 | ## Get Involved
 282 | 
 283 | We welcome community participation in shaping MCP's future. Visit our [GitHub Discussions](https://github.com/orgs/modelcontextprotocol/discussions) to join the conversation and contribute your ideas.
 284 | 
 285 | 
 286 | # Core architecture
 287 | 
 288 | Understand how MCP connects clients, servers, and LLMs
 289 | 
 290 | The Model Context Protocol (MCP) is built on a flexible, extensible architecture that enables seamless communication between LLM applications and integrations. This document covers the core architectural components and concepts.
 291 | 
 292 | ## Overview
 293 | 
 294 | MCP follows a client-server architecture where:
 295 | 
 296 | *   **Hosts** are LLM applications (like Claude Desktop or IDEs) that initiate connections
 297 | *   **Clients** maintain 1:1 connections with servers, inside the host application
 298 | *   **Servers** provide context, tools, and prompts to clients
 299 | 
 300 | ```mermaid
 301 | flowchart LR
 302 |     subgraph "&nbsp;Host (e.g., Claude Desktop)&nbsp;"
 303 |         client1[MCP Client]
 304 |         client2[MCP Client]
 305 |     end
 306 |     subgraph "Server Process"
 307 |         server1[MCP Server]
 308 |     end
 309 |     subgraph "Server Process"
 310 |         server2[MCP Server]
 311 |     end
 312 | 
 313 |     client1 <-->|Transport Layer| server1
 314 |     client2 <-->|Transport Layer| server2
 315 | ```
 316 | 
 317 | ## Core components
 318 | 
 319 | ### Protocol layer
 320 | 
 321 | The protocol layer handles message framing, request/response linking, and high-level communication patterns.
 322 | 
 323 | <Tabs>
 324 |   <Tab title="TypeScript">
 325 |     ```typescript
 326 |     class Protocol<Request, Notification, Result> {
 327 |         // Handle incoming requests
 328 |         setRequestHandler<T>(schema: T, handler: (request: T, extra: RequestHandlerExtra) => Promise<Result>): void
 329 | 
 330 |         // Handle incoming notifications
 331 |         setNotificationHandler<T>(schema: T, handler: (notification: T) => Promise<void>): void
 332 | 
 333 |         // Send requests and await responses
 334 |         request<T>(request: Request, schema: T, options?: RequestOptions): Promise<T>
 335 | 
 336 |         // Send one-way notifications
 337 |         notification(notification: Notification): Promise<void>
 338 |     }
 339 |     ```
 340 |   </Tab>
 341 | 
 342 |   <Tab title="Python">
 343 |     ```python
 344 |     class Session(BaseSession[RequestT, NotificationT, ResultT]):
 345 |         async def send_request(
 346 |             self,
 347 |             request: RequestT,
 348 |             result_type: type[Result]
 349 |         ) -> Result:
 350 |             """
 351 |             Send request and wait for response. Raises McpError if response contains error.
 352 |             """
 353 |             # Request handling implementation
 354 | 
 355 |         async def send_notification(
 356 |             self,
 357 |             notification: NotificationT
 358 |         ) -> None:
 359 |             """Send one-way notification that doesn't expect response."""
 360 |             # Notification handling implementation
 361 | 
 362 |         async def _received_request(
 363 |             self,
 364 |             responder: RequestResponder[ReceiveRequestT, ResultT]
 365 |         ) -> None:
 366 |             """Handle incoming request from other side."""
 367 |             # Request handling implementation
 368 | 
 369 |         async def _received_notification(
 370 |             self,
 371 |             notification: ReceiveNotificationT
 372 |         ) -> None:
 373 |             """Handle incoming notification from other side."""
 374 |             # Notification handling implementation
 375 |     ```
 376 |   </Tab>
 377 | </Tabs>
 378 | 
 379 | Key classes include:
 380 | 
 381 | *   `Protocol`
 382 | *   `Client`
 383 | *   `Server`
 384 | 
 385 | ### Transport layer
 386 | 
 387 | The transport layer handles the actual communication between clients and servers. MCP supports multiple transport mechanisms:
 388 | 
 389 | 1.  **Stdio transport**
 390 |     *   Uses standard input/output for communication
 391 |     *   Ideal for local processes
 392 | 
 393 | 2.  **HTTP with SSE transport**
 394 |     *   Uses Server-Sent Events for server-to-client messages
 395 |     *   HTTP POST for client-to-server messages
 396 | 
 397 | All transports use [JSON-RPC](https://www.jsonrpc.org/) 2.0 to exchange messages. See the [specification](https://spec.modelcontextprotocol.io) for detailed information about the Model Context Protocol message format.
 398 | 
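As a rough sketch of how a client might pick a transport and connect (the client-side imports and options below mirror the server-side examples later in this document, but treat the exact module paths and option names as assumptions and check the current SDK documentation):

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Stdio transport: spawn a local server process and talk to it over stdin/stdout
const transport = new StdioClientTransport({
  command: "node",
  args: ["./server.js"]   // hypothetical local server entry point
});

const client = new Client(
  { name: "example-client", version: "1.0.0" },
  { capabilities: {} }
);

// connect() runs the initialization handshake described below
await client.connect(transport);
```
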
 399 | ### Message types
 400 | 
 401 | MCP has these main types of messages:
 402 | 
 403 | 1.  **Requests** expect a response from the other side:
 404 |     ```typescript
 405 |     interface Request {
 406 |       method: string;
 407 |       params?: { ... };
 408 |     }
 409 |     ```
 410 | 
 411 | 2.  **Results** are successful responses to requests:
 412 |     ```typescript
 413 |     interface Result {
 414 |       [key: string]: unknown;
 415 |     }
 416 |     ```
 417 | 
 418 | 3.  **Errors** indicate that a request failed:
 419 |     ```typescript
 420 |     interface Error {
 421 |       code: number;
 422 |       message: string;
 423 |       data?: unknown;
 424 |     }
 425 |     ```
 426 | 
 427 | 4.  **Notifications** are one-way messages that don't expect a response:
 428 |     ```typescript
 429 |     interface Notification {
 430 |       method: string;
 431 |       params?: { ... };
 432 |     }
 433 |     ```
 434 | 
 435 | ## Connection lifecycle
 436 | 
 437 | ### 1. Initialization
 438 | 
 439 | ```mermaid
 440 | sequenceDiagram
 441 |     participant Client
 442 |     participant Server
 443 | 
 444 |     Client->>Server: initialize request
 445 |     Server->>Client: initialize response
 446 |     Client->>Server: initialized notification
 447 | 
 448 |     Note over Client,Server: Connection ready for use
 449 | ```
 450 | 
 451 | 1.  Client sends `initialize` request with protocol version and capabilities
 452 | 2.  Server responds with its protocol version and capabilities
 453 | 3.  Client sends `initialized` notification as acknowledgment
 454 | 4.  Normal message exchange begins
 455 | 
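On the wire, these steps are ordinary JSON-RPC 2.0 messages. A minimal sketch of the exchange (the protocol version string and capability contents are illustrative):

```typescript
// 1. Client → Server: initialize request
{
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2024-11-05",          // illustrative version string
    capabilities: { roots: {}, sampling: {} },
    clientInfo: { name: "example-client", version: "1.0.0" }
  }
}

// 2. Server → Client: initialize response
{
  jsonrpc: "2.0",
  id: 1,
  result: {
    protocolVersion: "2024-11-05",
    capabilities: { resources: {}, tools: {} },
    serverInfo: { name: "example-server", version: "1.0.0" }
  }
}

// 3. Client → Server: initialized notification (no response expected)
{
  jsonrpc: "2.0",
  method: "notifications/initialized"
}
```
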
 456 | ### 2. Message exchange
 457 | 
 458 | After initialization, the following patterns are supported:
 459 | 
 460 | *   **Request-Response**: Client or server sends requests, the other responds
 461 | *   **Notifications**: Either party sends one-way messages
 462 | 
 463 | ### 3. Termination
 464 | 
 465 | Either party can terminate the connection:
 466 | 
 467 | *   Clean shutdown via `close()`
 468 | *   Transport disconnection
 469 | *   Error conditions
 470 | 
 471 | ## Error handling
 472 | 
 473 | MCP defines these standard error codes:
 474 | 
 475 | ```typescript
 476 | enum ErrorCode {
 477 |   // Standard JSON-RPC error codes
 478 |   ParseError = -32700,
 479 |   InvalidRequest = -32600,
 480 |   MethodNotFound = -32601,
 481 |   InvalidParams = -32602,
 482 |   InternalError = -32603
 483 | }
 484 | ```
 485 | 
 486 | SDKs and applications can define their own error codes above -32000.
 487 | 
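For example, an application could define its own codes outside the reserved range (names and values here are purely illustrative):

```typescript
enum ApplicationErrorCode {
  // Application-specific errors, kept above -32000 to stay out of the reserved range
  ResourceAccessDenied = -31001,
  RateLimitExceeded = -31002
}
```
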
 488 | Errors are propagated through:
 489 | 
 490 | *   Error responses to requests
 491 | *   Error events on transports
 492 | *   Protocol-level error handlers
 493 | 
 494 | ## Implementation example
 495 | 
 496 | Here's a basic example of implementing an MCP server:
 497 | 
 498 | <Tabs>
 499 |   <Tab title="TypeScript">
 500 |     ```typescript
 501 |     import { Server } from "@modelcontextprotocol/sdk/server/index.js";
 502 |     import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
 503 | 
 504 |     const server = new Server({
 505 |       name: "example-server",
 506 |       version: "1.0.0"
 507 |     }, {
 508 |       capabilities: {
 509 |         resources: {}
 510 |       }
 511 |     });
 512 | 
 513 |     // Handle requests
 514 |     server.setRequestHandler(ListResourcesRequestSchema, async () => {
 515 |       return {
 516 |         resources: [
 517 |           {
 518 |             uri: "example://resource",
 519 |             name: "Example Resource"
 520 |           }
 521 |         ]
 522 |       };
 523 |     });
 524 | 
 525 |     // Connect transport
 526 |     const transport = new StdioServerTransport();
 527 |     await server.connect(transport);
 528 |     ```
 529 |   </Tab>
 530 | 
 531 |   <Tab title="Python">
 532 |     ```python
 533 |     import asyncio
 534 |     import mcp.types as types
 535 |     from mcp.server import Server
 536 |     from mcp.server.stdio import stdio_server
 537 | 
 538 |     app = Server("example-server")
 539 | 
 540 |     @app.list_resources()
 541 |     async def list_resources() -> list[types.Resource]:
 542 |         return [
 543 |             types.Resource(
 544 |                 uri="example://resource",
 545 |                 name="Example Resource"
 546 |             )
 547 |         ]
 548 | 
 549 |     async def main():
 550 |         async with stdio_server() as streams:
 551 |             await app.run(
 552 |                 streams[0],
 553 |                 streams[1],
 554 |                 app.create_initialization_options()
 555 |             )
 556 | 
 557 |     if __name__ == "__main__":
 558 |         asyncio.run(main())
 559 |     ```
 560 |   </Tab>
 561 | </Tabs>
 562 | 
 563 | ## Best practices
 564 | 
 565 | ### Transport selection
 566 | 
 567 | 1.  **Local communication**
 568 |     *   Use stdio transport for local processes
 569 |     *   Efficient for same-machine communication
 570 |     *   Simple process management
 571 | 
 572 | 2.  **Remote communication**
 573 |     *   Use SSE for scenarios requiring HTTP compatibility
 574 |     *   Consider security implications including authentication and authorization
 575 | 
 576 | ### Message handling
 577 | 
 578 | 1.  **Request processing**
 579 |     *   Validate inputs thoroughly
 580 |     *   Use type-safe schemas
 581 |     *   Handle errors gracefully
 582 |     *   Implement timeouts
 583 | 
 584 | 2.  **Progress reporting**
 585 |     *   Use progress tokens for long operations
 586 |     *   Report progress incrementally
 587 |     *   Include total progress when known
 588 | 
 589 | 3.  **Error management**
 590 |     *   Use appropriate error codes
 591 |     *   Include helpful error messages
 592 |     *   Clean up resources on errors
 593 | 
 594 | ## Security considerations
 595 | 
 596 | 1.  **Transport security**
 597 |     *   Use TLS for remote connections
 598 |     *   Validate connection origins
 599 |     *   Implement authentication when needed
 600 | 
 601 | 2.  **Message validation**
 602 |     *   Validate all incoming messages
 603 |     *   Sanitize inputs
 604 |     *   Check message size limits
 605 |     *   Verify JSON-RPC format
 606 | 
 607 | 3.  **Resource protection**
 608 |     *   Implement access controls
 609 |     *   Validate resource paths
 610 |     *   Monitor resource usage
 611 |     *   Rate limit requests
 612 | 
 613 | 4.  **Error handling**
 614 |     *   Don't leak sensitive information
 615 |     *   Log security-relevant errors
 616 |     *   Implement proper cleanup
 617 |     *   Handle DoS scenarios
 618 | 
 619 | ## Debugging and monitoring
 620 | 
 621 | 1.  **Logging**
 622 |     *   Log protocol events
 623 |     *   Track message flow
 624 |     *   Monitor performance
 625 |     *   Record errors
 626 | 
 627 | 2.  **Diagnostics**
 628 |     *   Implement health checks
 629 |     *   Monitor connection state
 630 |     *   Track resource usage
 631 |     *   Profile performance
 632 | 
 633 | 3.  **Testing**
 634 |     *   Test different transports
 635 |     *   Verify error handling
 636 |     *   Check edge cases
 637 |     *   Load test servers
 638 | 
 639 | 
 640 | # Prompts
 641 | 
 642 | Create reusable prompt templates and workflows
 643 | 
 644 | Prompts enable servers to define reusable prompt templates and workflows that clients can easily surface to users and LLMs. They provide a powerful way to standardize and share common LLM interactions.
 645 | 
 646 | <Note>
 647 |   Prompts are designed to be **user-controlled**, meaning they are exposed from servers to clients with the intention of the user being able to explicitly select them for use.
 648 | </Note>
 649 | 
 650 | ## Overview
 651 | 
 652 | Prompts in MCP are predefined templates that can:
 653 | 
 654 | *   Accept dynamic arguments
 655 | *   Include context from resources
 656 | *   Chain multiple interactions
 657 | *   Guide specific workflows
 658 | *   Surface as UI elements (like slash commands)
 659 | 
 660 | ## Prompt structure
 661 | 
 662 | Each prompt is defined with:
 663 | 
 664 | ```typescript
 665 | {
 666 |   name: string;              // Unique identifier for the prompt
 667 |   description?: string;      // Human-readable description
 668 |   arguments?: [              // Optional list of arguments
 669 |     {
 670 |       name: string;          // Argument identifier
 671 |       description?: string;  // Argument description
 672 |       required?: boolean;    // Whether argument is required
 673 |     }
 674 |   ]
 675 | }
 676 | ```
 677 | 
 678 | ## Discovering prompts
 679 | 
 680 | Clients can discover available prompts through the `prompts/list` endpoint:
 681 | 
 682 | ```typescript
 683 | // Request
 684 | {
 685 |   method: "prompts/list"
 686 | }
 687 | 
 688 | // Response
 689 | {
 690 |   prompts: [
 691 |     {
 692 |       name: "analyze-code",
 693 |       description: "Analyze code for potential improvements",
 694 |       arguments: [
 695 |         {
 696 |           name: "language",
 697 |           description: "Programming language",
 698 |           required: true
 699 |         }
 700 |       ]
 701 |     }
 702 |   ]
 703 | }
 704 | ```
 705 | 
 706 | ## Using prompts
 707 | 
 708 | To use a prompt, clients make a `prompts/get` request:
 709 | 
 710 | ````typescript
 711 | // Request
 712 | {
 713 |   method: "prompts/get",
 714 |   params: {
 715 |     name: "analyze-code",
 716 |     arguments: {
 717 |       language: "python"
 718 |     }
 719 |   }
 720 | }
 721 | 
 722 | // Response
 723 | {
 724 |   description: "Analyze Python code for potential improvements",
 725 |   messages: [
 726 |     {
 727 |       role: "user",
 728 |       content: {
 729 |         type: "text",
 730 |         text: "Please analyze the following Python code for potential improvements:\n\n```python\ndef calculate_sum(numbers):\n    total = 0\n    for num in numbers:\n        total = total + num\n    return total\n\nresult = calculate_sum([1, 2, 3, 4, 5])\nprint(result)\n```"
 731 |       }
 732 |     }
 733 |   ]
 734 | }
 735 | ````
 736 | 
 737 | ## Dynamic prompts
 738 | 
 739 | Prompts can be dynamic and include:
 740 | 
 741 | ### Embedded resource context
 742 | 
 743 | ```json
 744 | {
 745 |   "name": "analyze-project",
 746 |   "description": "Analyze project logs and code",
 747 |   "arguments": [
 748 |     {
 749 |       "name": "timeframe",
 750 |       "description": "Time period to analyze logs",
 751 |       "required": true
 752 |     },
 753 |     {
 754 |       "name": "fileUri",
 755 |       "description": "URI of code file to review",
 756 |       "required": true
 757 |     }
 758 |   ]
 759 | }
 760 | ```
 761 | 
 762 | When handling the `prompts/get` request:
 763 | 
 764 | ```json
 765 | {
 766 |   "messages": [
 767 |     {
 768 |       "role": "user",
 769 |       "content": {
 770 |         "type": "text",
 771 |         "text": "Analyze these system logs and the code file for any issues:"
 772 |       }
 773 |     },
 774 |     {
 775 |       "role": "user",
 776 |       "content": {
 777 |         "type": "resource",
 778 |         "resource": {
 779 |           "uri": "logs://recent?timeframe=1h",
 780 |           "text": "[2024-03-14 15:32:11] ERROR: Connection timeout in network.py:127\n[2024-03-14 15:32:15] WARN: Retrying connection (attempt 2/3)\n[2024-03-14 15:32:20] ERROR: Max retries exceeded",
 781 |           "mimeType": "text/plain"
 782 |         }
 783 |       }
 784 |     },
 785 |     {
 786 |       "role": "user",
 787 |       "content": {
 788 |         "type": "resource",
 789 |         "resource": {
 790 |           "uri": "file:///path/to/code.py",
 791 |           "text": "def connect_to_service(timeout=30):\n    retries = 3\n    for attempt in range(retries):\n        try:\n            return establish_connection(timeout)\n        except TimeoutError:\n            if attempt == retries - 1:\n                raise\n            time.sleep(5)\n\ndef establish_connection(timeout):\n    # Connection implementation\n    pass",
 792 |           "mimeType": "text/x-python"
 793 |         }
 794 |       }
 795 |     }
 796 |   ]
 797 | }
 798 | ```
 799 | 
 800 | ### Multi-step workflows
 801 | 
 802 | ```typescript
 803 | const debugWorkflow = {
 804 |   name: "debug-error",
 805 |   async getMessages(error: string) {
 806 |     return [
 807 |       {
 808 |         role: "user",
 809 |         content: {
 810 |           type: "text",
 811 |           text: `Here's an error I'm seeing: ${error}`
 812 |         }
 813 |       },
 814 |       {
 815 |         role: "assistant",
 816 |         content: {
 817 |           type: "text",
 818 |           text: "I'll help analyze this error. What have you tried so far?"
 819 |         }
 820 |       },
 821 |       {
 822 |         role: "user",
 823 |         content: {
 824 |           type: "text",
 825 |           text: "I've tried restarting the service, but the error persists."
 826 |         }
 827 |       }
 828 |     ];
 829 |   }
 830 | };
 831 | ```
 832 | 
 833 | ## Example implementation
 834 | 
 835 | Here's a complete example of implementing prompts in an MCP server:
 836 | 
 837 | <Tabs>
 838 |   <Tab title="TypeScript">
 839 |     ```typescript
 840 |     import { Server } from "@modelcontextprotocol/sdk/server";
 841 |     import {
 842 |       ListPromptsRequestSchema,
 843 |       GetPromptRequestSchema
 844 |     } from "@modelcontextprotocol/sdk/types";
 845 | 
 846 |     const PROMPTS = {
 847 |       "git-commit": {
 848 |         name: "git-commit",
 849 |         description: "Generate a Git commit message",
 850 |         arguments: [
 851 |           {
 852 |             name: "changes",
 853 |             description: "Git diff or description of changes",
 854 |             required: true
 855 |           }
 856 |         ]
 857 |       },
 858 |       "explain-code": {
 859 |         name: "explain-code",
 860 |         description: "Explain how code works",
 861 |         arguments: [
 862 |           {
 863 |             name: "code",
 864 |             description: "Code to explain",
 865 |             required: true
 866 |           },
 867 |           {
 868 |             name: "language",
 869 |             description: "Programming language",
 870 |             required: false
 871 |           }
 872 |         ]
 873 |       }
 874 |     };
 875 | 
 876 |     const server = new Server({
 877 |       name: "example-prompts-server",
 878 |       version: "1.0.0"
 879 |     }, {
 880 |       capabilities: {
 881 |         prompts: {}
 882 |       }
 883 |     });
 884 | 
 885 |     // List available prompts
 886 |     server.setRequestHandler(ListPromptsRequestSchema, async () => {
 887 |       return {
 888 |         prompts: Object.values(PROMPTS)
 889 |       };
 890 |     });
 891 | 
 892 |     // Get specific prompt
 893 |     server.setRequestHandler(GetPromptRequestSchema, async (request) => {
 894 |       const prompt = PROMPTS[request.params.name];
 895 |       if (!prompt) {
 896 |         throw new Error(`Prompt not found: ${request.params.name}`);
 897 |       }
 898 | 
 899 |       if (request.params.name === "git-commit") {
 900 |         return {
 901 |           messages: [
 902 |             {
 903 |               role: "user",
 904 |               content: {
 905 |                 type: "text",
 906 |                 text: `Generate a concise but descriptive commit message for these changes:\n\n${request.params.arguments?.changes}`
 907 |               }
 908 |             }
 909 |           ]
 910 |         };
 911 |       }
 912 | 
 913 |       if (request.params.name === "explain-code") {
 914 |         const language = request.params.arguments?.language || "Unknown";
 915 |         return {
 916 |           messages: [
 917 |             {
 918 |               role: "user",
 919 |               content: {
 920 |                 type: "text",
 921 |                 text: `Explain how this ${language} code works:\n\n${request.params.arguments?.code}`
 922 |               }
 923 |             }
 924 |           ]
 925 |         };
 926 |       }
 927 | 
 928 |       throw new Error("Prompt implementation not found");
 929 |     });
 930 |     ```
 931 |   </Tab>
 932 | 
 933 |   <Tab title="Python">
 934 |     ```python
 935 |     from mcp.server import Server
 936 |     import mcp.types as types
 937 | 
 938 |     # Define available prompts
 939 |     PROMPTS = {
 940 |         "git-commit": types.Prompt(
 941 |             name="git-commit",
 942 |             description="Generate a Git commit message",
 943 |             arguments=[
 944 |                 types.PromptArgument(
 945 |                     name="changes",
 946 |                     description="Git diff or description of changes",
 947 |                     required=True
 948 |                 )
 949 |             ],
 950 |         ),
 951 |         "explain-code": types.Prompt(
 952 |             name="explain-code",
 953 |             description="Explain how code works",
 954 |             arguments=[
 955 |                 types.PromptArgument(
 956 |                     name="code",
 957 |                     description="Code to explain",
 958 |                     required=True
 959 |                 ),
 960 |                 types.PromptArgument(
 961 |                     name="language",
 962 |                     description="Programming language",
 963 |                     required=False
 964 |                 )
 965 |             ],
 966 |         )
 967 |     }
 968 | 
 969 |     # Initialize server
 970 |     app = Server("example-prompts-server")
 971 | 
 972 |     @app.list_prompts()
 973 |     async def list_prompts() -> list[types.Prompt]:
 974 |         return list(PROMPTS.values())
 975 | 
 976 |     @app.get_prompt()
 977 |     async def get_prompt(
 978 |         name: str, arguments: dict[str, str] | None = None
 979 |     ) -> types.GetPromptResult:
 980 |         if name not in PROMPTS:
 981 |             raise ValueError(f"Prompt not found: {name}")
 982 | 
 983 |         if name == "git-commit":
 984 |             changes = arguments.get("changes") if arguments else ""
 985 |             return types.GetPromptResult(
 986 |                 messages=[
 987 |                     types.PromptMessage(
 988 |                         role="user",
 989 |                         content=types.TextContent(
 990 |                             type="text",
 991 |                             text=f"Generate a concise but descriptive commit message "
 992 |                             f"for these changes:\n\n{changes}"
 993 |                         )
 994 |                     )
 995 |                 ]
 996 |             )
 997 | 
 998 |         if name == "explain-code":
 999 |             code = arguments.get("code") if arguments else ""
1000 |             language = arguments.get("language", "Unknown") if arguments else "Unknown"
1001 |             return types.GetPromptResult(
1002 |                 messages=[
1003 |                     types.PromptMessage(
1004 |                         role="user",
1005 |                         content=types.TextContent(
1006 |                             type="text",
1007 |                             text=f"Explain how this {language} code works:\n\n{code}"
1008 |                         )
1009 |                     )
1010 |                 ]
1011 |             )
1012 | 
1013 |         raise ValueError("Prompt implementation not found")
1014 |     ```
1015 |   </Tab>
1016 | </Tabs>
1017 | 
1018 | ## Best practices
1019 | 
1020 | When implementing prompts:
1021 | 
1022 | 1.  Use clear, descriptive prompt names
1023 | 2.  Provide detailed descriptions for prompts and arguments
1024 | 3.  Validate all required arguments
1025 | 4.  Handle missing arguments gracefully
1026 | 5.  Consider versioning for prompt templates
1027 | 6.  Cache dynamic content when appropriate
1028 | 7.  Implement error handling
1029 | 8.  Document expected argument formats
1030 | 9.  Consider prompt composability
1031 | 10. Test prompts with various inputs
1032 | 
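As a small sketch of points 3 and 4 above, a `prompts/get` handler might validate arguments up front with a helper like this (names are illustrative):

```typescript
function requireArguments(
  args: Record<string, string> | undefined,
  required: string[]
): void {
  const missing = required.filter((name) => !args || !(name in args));
  if (missing.length > 0) {
    // Fail early with a clear, actionable message instead of building a broken prompt
    throw new Error(`Missing required arguments: ${missing.join(", ")}`);
  }
}

// e.g. in the "git-commit" branch of the earlier example:
// requireArguments(request.params.arguments, ["changes"]);
```
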
1033 | ## UI integration
1034 | 
1035 | Prompts can be surfaced in client UIs as:
1036 | 
1037 | *   Slash commands
1038 | *   Quick actions
1039 | *   Context menu items
1040 | *   Command palette entries
1041 | *   Guided workflows
1042 | *   Interactive forms
1043 | 
1044 | ## Updates and changes
1045 | 
1046 | Servers can notify clients about prompt changes:
1047 | 
1048 | 1.  Server capability: `prompts.listChanged`
1049 | 2.  Notification: `notifications/prompts/list_changed`
1050 | 3.  Client re-fetches prompt list
1051 | 
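The notification itself is a simple one-way message, roughly:

```typescript
// Server → Client, no response expected
{
  method: "notifications/prompts/list_changed"
}
```
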
1052 | ## Security considerations
1053 | 
1054 | When implementing prompts:
1055 | 
1056 | *   Validate all arguments
1057 | *   Sanitize user input
1058 | *   Consider rate limiting
1059 | *   Implement access controls
1060 | *   Audit prompt usage
1061 | *   Handle sensitive data appropriately
1062 | *   Validate generated content
1063 | *   Implement timeouts
1064 | *   Consider prompt injection risks
1065 | *   Document security requirements
1066 | 
1067 | 
1068 | # Resources
1069 | 
1070 | Expose data and content from your servers to LLMs
1071 | 
1072 | Resources are a core primitive in the Model Context Protocol (MCP) that allow servers to expose data and content that can be read by clients and used as context for LLM interactions.
1073 | 
1074 | <Note>
1075 |   Resources are designed to be **application-controlled**, meaning that the client application can decide how and when they should be used.
1076 |   Different MCP clients may handle resources differently. For example:
1077 | 
1078 |   *   Claude Desktop currently requires users to explicitly select resources before they can be used
1079 |   *   Other clients might automatically select resources based on heuristics
1080 |   *   Some implementations may even allow the AI model itself to determine which resources to use
1081 | 
1082 |   Server authors should be prepared to handle any of these interaction patterns when implementing resource support. In order to expose data to models automatically, server authors should use a **model-controlled** primitive such as [Tools](./tools).
1083 | </Note>
1084 | 
1085 | ## Overview
1086 | 
1087 | Resources represent any kind of data that an MCP server wants to make available to clients. This can include:
1088 | 
1089 | *   File contents
1090 | *   Database records
1091 | *   API responses
1092 | *   Live system data
1093 | *   Screenshots and images
1094 | *   Log files
1095 | *   And more
1096 | 
1097 | Each resource is identified by a unique URI and can contain either text or binary data.
1098 | 
1099 | ## Resource URIs
1100 | 
1101 | Resources are identified using URIs that follow this format:
1102 | 
1103 | ```
1104 | [protocol]://[host]/[path]
1105 | ```
1106 | 
1107 | For example:
1108 | 
1109 | *   `file:///home/user/documents/report.pdf`
1110 | *   `postgres://database/customers/schema`
1111 | *   `screen://localhost/display1`
1112 | 
1113 | The protocol and path structure are defined by the MCP server implementation. Servers can define their own custom URI schemes.
1114 | 
1115 | ## Resource types
1116 | 
1117 | Resources can contain two types of content:
1118 | 
1119 | ### Text resources
1120 | 
1121 | Text resources contain UTF-8 encoded text data. These are suitable for:
1122 | 
1123 | *   Source code
1124 | *   Configuration files
1125 | *   Log files
1126 | *   JSON/XML data
1127 | *   Plain text
1128 | 
1129 | ### Binary resources
1130 | 
1131 | Binary resources contain raw binary data encoded in base64. These are suitable for:
1132 | 
1133 | *   Images
1134 | *   PDFs
1135 | *   Audio files
1136 | *   Video files
1137 | *   Other non-text formats
1138 | 
1139 | ## Resource discovery
1140 | 
1141 | Clients can discover available resources through two main methods:
1142 | 
1143 | ### Direct resources
1144 | 
1145 | Servers expose a list of concrete resources via the `resources/list` endpoint. Each resource includes:
1146 | 
1147 | ```typescript
1148 | {
1149 |   uri: string;           // Unique identifier for the resource
1150 |   name: string;          // Human-readable name
1151 |   description?: string;  // Optional description
1152 |   mimeType?: string;     // Optional MIME type
1153 | }
1154 | ```
1155 | 
1156 | ### Resource templates
1157 | 
1158 | For dynamic resources, servers can expose [URI templates](https://datatracker.ietf.org/doc/html/rfc6570) that clients can use to construct valid resource URIs:
1159 | 
1160 | ```typescript
1161 | {
1162 |   uriTemplate: string;   // URI template following RFC 6570
1163 |   name: string;          // Human-readable name for this type
1164 |   description?: string;  // Optional description
1165 |   mimeType?: string;     // Optional MIME type for all matching resources
1166 | }
1167 | ```
1168 | 
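For instance, a server exposing log files might advertise a template like this (values are illustrative); the client expands the variable to build a concrete URI it can pass to `resources/read`:

```typescript
{
  uriTemplate: "file:///logs/{name}.log",   // RFC 6570 template
  name: "Log file",
  description: "Application log files by name",
  mimeType: "text/plain"
}
// Expanding {name} = "app" yields file:///logs/app.log
```
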
1169 | ## Reading resources
1170 | 
1171 | To read a resource, clients make a `resources/read` request with the resource URI.
1172 | 
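The request itself only needs the URI, for example:

```typescript
// Request
{
  method: "resources/read",
  params: {
    uri: "file:///logs/app.log"
  }
}
```
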
1173 | The server responds with a list of resource contents:
1174 | 
1175 | ```typescript
1176 | {
1177 |   contents: [
1178 |     {
1179 |       uri: string;        // The URI of the resource
1180 |       mimeType?: string;  // Optional MIME type
1181 | 
1182 |       // One of:
1183 |       text?: string;      // For text resources
1184 |       blob?: string;      // For binary resources (base64 encoded)
1185 |     }
1186 |   ]
1187 | }
1188 | ```
1189 | 
1190 | <Tip>
1191 |   Servers may return multiple resources in response to one `resources/read` request. This could be used, for example, to return a list of files inside a directory when the directory is read.
1192 | </Tip>
1193 | 
1194 | ## Resource updates
1195 | 
1196 | MCP supports real-time updates for resources through two mechanisms:
1197 | 
1198 | ### List changes
1199 | 
1200 | Servers can notify clients when their list of available resources changes via the `notifications/resources/list_changed` notification.
1201 | 
1202 | ### Content changes
1203 | 
1204 | Clients can subscribe to updates for specific resources:
1205 | 
1206 | 1.  Client sends `resources/subscribe` with resource URI
1207 | 2.  Server sends `notifications/resources/updated` when the resource changes
1208 | 3.  Client can fetch latest content with `resources/read`
1209 | 4.  Client can unsubscribe with `resources/unsubscribe`
1210 | 
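A sketch of that flow as protocol messages (the URI is illustrative):

```typescript
// Client → Server: subscribe to a specific resource
{
  method: "resources/subscribe",
  params: {
    uri: "file:///logs/app.log"
  }
}

// Server → Client: sent whenever that resource changes
{
  method: "notifications/resources/updated",
  params: {
    uri: "file:///logs/app.log"
  }
}
```
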
1211 | ## Example implementation
1212 | 
1213 | Here's a simple example of implementing resource support in an MCP server:
1214 | 
1215 | <Tabs>
1216 |   <Tab title="TypeScript">
1217 |     ```typescript
1218 |     const server = new Server({
1219 |       name: "example-server",
1220 |       version: "1.0.0"
1221 |     }, {
1222 |       capabilities: {
1223 |         resources: {}
1224 |       }
1225 |     });
1226 | 
1227 |     // List available resources
1228 |     server.setRequestHandler(ListResourcesRequestSchema, async () => {
1229 |       return {
1230 |         resources: [
1231 |           {
1232 |             uri: "file:///logs/app.log",
1233 |             name: "Application Logs",
1234 |             mimeType: "text/plain"
1235 |           }
1236 |         ]
1237 |       };
1238 |     });
1239 | 
1240 |     // Read resource contents
1241 |     server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
1242 |       const uri = request.params.uri;
1243 | 
1244 |       if (uri === "file:///logs/app.log") {
1245 |         const logContents = await readLogFile();
1246 |         return {
1247 |           contents: [
1248 |             {
1249 |               uri,
1250 |               mimeType: "text/plain",
1251 |               text: logContents
1252 |             }
1253 |           ]
1254 |         };
1255 |       }
1256 | 
1257 |       throw new Error("Resource not found");
1258 |     });
1259 |     ```
1260 |   </Tab>
1261 | 
1262 |   <Tab title="Python">
1263 |     ```python
1264 |     app = Server("example-server")
1265 | 
1266 |     @app.list_resources()
1267 |     async def list_resources() -> list[types.Resource]:
1268 |         return [
1269 |             types.Resource(
1270 |                 uri="file:///logs/app.log",
1271 |                 name="Application Logs",
1272 |                 mimeType="text/plain"
1273 |             )
1274 |         ]
1275 | 
1276 |     @app.read_resource()
1277 |     async def read_resource(uri: AnyUrl) -> str:
1278 |         if str(uri) == "file:///logs/app.log":
1279 |             log_contents = await read_log_file()
1280 |             return log_contents
1281 | 
1282 |         raise ValueError("Resource not found")
1283 | 
1284 |     # Start server
1285 |     async with stdio_server() as streams:
1286 |         await app.run(
1287 |             streams[0],
1288 |             streams[1],
1289 |             app.create_initialization_options()
1290 |         )
1291 |     ```
1292 |   </Tab>
1293 | </Tabs>
1294 | 
1295 | ## Best practices
1296 | 
1297 | When implementing resource support:
1298 | 
1299 | 1.  Use clear, descriptive resource names and URIs
1300 | 2.  Include helpful descriptions to guide LLM understanding
1301 | 3.  Set appropriate MIME types when known
1302 | 4.  Implement resource templates for dynamic content
1303 | 5.  Use subscriptions for frequently changing resources
1304 | 6.  Handle errors gracefully with clear error messages
1305 | 7.  Consider pagination for large resource lists
1306 | 8.  Cache resource contents when appropriate
1307 | 9.  Validate URIs before processing
1308 | 10. Document your custom URI schemes
1309 | 
1310 | ## Security considerations
1311 | 
1312 | When exposing resources:
1313 | 
1314 | *   Validate all resource URIs
1315 | *   Implement appropriate access controls
1316 | *   Sanitize file paths to prevent directory traversal
1317 | *   Be cautious with binary data handling
1318 | *   Consider rate limiting for resource reads
1319 | *   Audit resource access
1320 | *   Encrypt sensitive data in transit
1321 | *   Validate MIME types
1322 | *   Implement timeouts for long-running reads
1323 | *   Handle resource cleanup appropriately
1324 | 
1325 | 
1326 | # Roots
1327 | 
1328 | Understanding roots in MCP
1329 | 
1330 | Roots are a concept in MCP that define the boundaries where servers can operate. They provide a way for clients to inform servers about relevant resources and their locations.
1331 | 
1332 | ## What are Roots?
1333 | 
1334 | A root is a URI that a client suggests a server should focus on. When a client connects to a server, it declares which roots the server should work with. While primarily used for filesystem paths, roots can be any valid URI including HTTP URLs.
1335 | 
1336 | For example, roots could be:
1337 | 
1338 | ```
1339 | file:///home/user/projects/myapp
1340 | https://api.example.com/v1
1341 | ```
1342 | 
1343 | ## Why Use Roots?
1344 | 
1345 | Roots serve several important purposes:
1346 | 
1347 | 1.  **Guidance**: They inform servers about relevant resources and locations
1348 | 2.  **Clarity**: Roots make it clear which resources are part of your workspace
1349 | 3.  **Organization**: Multiple roots let you work with different resources simultaneously
1350 | 
1351 | ## How Roots Work
1352 | 
1353 | When a client supports roots, it:
1354 | 
1355 | 1.  Declares the `roots` capability during connection
1356 | 2.  Provides a list of suggested roots to the server
1357 | 3.  Notifies the server when roots change (if supported)
1358 | 
1359 | While roots are informational and not strictly enforced, servers should:
1360 | 
1361 | 1.  Respect the provided roots
1362 | 2.  Use root URIs to locate and access resources
1363 | 3.  Prioritize operations within root boundaries
1364 | 
1365 | ## Common Use Cases
1366 | 
1367 | Roots are commonly used to define:
1368 | 
1369 | *   Project directories
1370 | *   Repository locations
1371 | *   API endpoints
1372 | *   Configuration locations
1373 | *   Resource boundaries
1374 | 
1375 | ## Best Practices
1376 | 
1377 | When working with roots:
1378 | 
1379 | 1.  Only suggest necessary resources
1380 | 2.  Use clear, descriptive names for roots
1381 | 3.  Monitor root accessibility
1382 | 4.  Handle root changes gracefully
1383 | 
1384 | ## Example
1385 | 
1386 | Here's how a typical MCP client might expose roots:
1387 | 
1388 | ```json
1389 | {
1390 |   "roots": [
1391 |     {
1392 |       "uri": "file:///home/user/projects/frontend",
1393 |       "name": "Frontend Repository"
1394 |     },
1395 |     {
1396 |       "uri": "https://api.example.com/v1",
1397 |       "name": "API Endpoint"
1398 |     }
1399 |   ]
1400 | }
1401 | ```
1402 | 
1403 | This configuration suggests the server focus on both a local repository and an API endpoint while keeping them logically separated.
1404 | 
1405 | 
1406 | # Sampling
1407 | 
1408 | Let your servers request completions from LLMs
1409 | 
1410 | Sampling is a powerful MCP feature that allows servers to request LLM completions through the client, enabling sophisticated agentic behaviors while maintaining security and privacy.
1411 | 
1412 | <Info>
1413 |   This feature of MCP is not yet supported in the Claude Desktop client.
1414 | </Info>
1415 | 
1416 | ## How sampling works
1417 | 
1418 | The sampling flow follows these steps:
1419 | 
1420 | 1.  Server sends a `sampling/createMessage` request to the client
1421 | 2.  Client reviews the request and can modify it
1422 | 3.  Client samples from an LLM
1423 | 4.  Client reviews the completion
1424 | 5.  Client returns the result to the server
1425 | 
1426 | This human-in-the-loop design ensures users maintain control over what the LLM sees and generates.
1427 | 
1428 | ## Message format
1429 | 
1430 | Sampling requests use a standardized message format:
1431 | 
1432 | ```typescript
1433 | {
1434 |   messages: [
1435 |     {
1436 |       role: "user" | "assistant",
1437 |       content: {
1438 |         type: "text" | "image",
1439 | 
1440 |         // For text:
1441 |         text?: string,
1442 | 
1443 |         // For images:
1444 |         data?: string,             // base64 encoded
1445 |         mimeType?: string
1446 |       }
1447 |     }
1448 |   ],
1449 |   modelPreferences?: {
1450 |     hints?: [{
1451 |       name?: string                // Suggested model name/family
1452 |     }],
1453 |     costPriority?: number,         // 0-1, importance of minimizing cost
1454 |     speedPriority?: number,        // 0-1, importance of low latency
1455 |     intelligencePriority?: number  // 0-1, importance of capabilities
1456 |   },
1457 |   systemPrompt?: string,
1458 |   includeContext?: "none" | "thisServer" | "allServers",
1459 |   temperature?: number,
1460 |   maxTokens: number,
1461 |   stopSequences?: string[],
1462 |   metadata?: Record<string, unknown>
1463 | }
1464 | ```
1465 | 
1466 | ## Request parameters
1467 | 
1468 | ### Messages
1469 | 
1470 | The `messages` array contains the conversation history to send to the LLM. Each message has:
1471 | 
1472 | *   `role`: Either "user" or "assistant"
1473 | *   `content`: The message content, which can be:
1474 |     *   Text content with a `text` field
1475 |     *   Image content with `data` (base64) and `mimeType` fields
1476 | 
1477 | ### Model preferences
1478 | 
1479 | The `modelPreferences` object allows servers to specify their model selection preferences:
1480 | 
1481 | *   `hints`: Array of model name suggestions that clients can use to select an appropriate model:
1482 |     *   `name`: String that can match full or partial model names (e.g. "claude-3", "sonnet")
1483 |     *   Clients may map hints to equivalent models from different providers
1484 |     *   Multiple hints are evaluated in preference order
1485 | 
1486 | *   Priority values (0-1 normalized):
1487 |     *   `costPriority`: Importance of minimizing costs
1488 |     *   `speedPriority`: Importance of low latency response
1489 |     *   `intelligencePriority`: Importance of advanced model capabilities
1490 | 
1491 | Clients make the final model selection based on these preferences and their available models.
1492 | 
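For example, a server that cares most about cost and latency might send preferences like these (the hint values are illustrative):

```typescript
const modelPreferences = {
  hints: [
    { name: "claude-3" },   // preferred model family, evaluated first
    { name: "sonnet" }      // broader fallback hint
  ],
  costPriority: 0.8,        // minimizing cost matters most
  speedPriority: 0.6,       // low latency is also important
  intelligencePriority: 0.3 // advanced capabilities are less critical
};
```
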
1493 | ### System prompt
1494 | 
1495 | An optional `systemPrompt` field allows servers to request a specific system prompt. The client may modify or ignore this.
1496 | 
1497 | ### Context inclusion
1498 | 
1499 | The `includeContext` parameter specifies what MCP context to include:
1500 | 
1501 | *   `"none"`: No additional context
1502 | *   `"thisServer"`: Include context from the requesting server
1503 | *   `"allServers"`: Include context from all connected MCP servers
1504 | 
1505 | The client controls what context is actually included.
1506 | 
1507 | ### Sampling parameters
1508 | 
1509 | Fine-tune the LLM sampling with:
1510 | 
1511 | *   `temperature`: Controls randomness (0.0 to 1.0)
1512 | *   `maxTokens`: Maximum tokens to generate
1513 | *   `stopSequences`: Array of sequences that stop generation
1514 | *   `metadata`: Additional provider-specific parameters
1515 | 
1516 | ## Response format
1517 | 
1518 | The client returns a completion result:
1519 | 
1520 | ```typescript
1521 | {
1522 |   model: string,  // Name of the model used
1523 |   stopReason?: "endTurn" | "stopSequence" | "maxTokens" | string,
1524 |   role: "user" | "assistant",
1525 |   content: {
1526 |     type: "text" | "image",
1527 |     text?: string,
1528 |     data?: string,
1529 |     mimeType?: string
1530 |   }
1531 | }
1532 | ```
1533 | 
1534 | ## Example request
1535 | 
1536 | Here's an example of requesting sampling from a client:
1537 | 
1538 | ```json
1539 | {
1540 |   "method": "sampling/createMessage",
1541 |   "params": {
1542 |     "messages": [
1543 |       {
1544 |         "role": "user",
1545 |         "content": {
1546 |           "type": "text",
1547 |           "text": "What files are in the current directory?"
1548 |         }
1549 |       }
1550 |     ],
1551 |     "systemPrompt": "You are a helpful file system assistant.",
1552 |     "includeContext": "thisServer",
1553 |     "maxTokens": 100
1554 |   }
1555 | }
1556 | ```
1557 | 
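A client that approves the request might return a result shaped like the response format above, for example (model name and text are illustrative):

```typescript
const result = {
  model: "claude-3-sonnet",   // illustrative model identifier
  stopReason: "endTurn",
  role: "assistant",
  content: {
    type: "text",
    text: "The current directory contains index.ts, package.json, and README.md."
  }
};
```
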
1558 | ## Best practices
1559 | 
1560 | When implementing sampling:
1561 | 
1562 | 1.  Always provide clear, well-structured prompts
1563 | 2.  Handle both text and image content appropriately
1564 | 3.  Set reasonable token limits
1565 | 4.  Include relevant context through `includeContext`
1566 | 5.  Validate responses before using them
1567 | 6.  Handle errors gracefully
1568 | 7.  Consider rate limiting sampling requests
1569 | 8.  Document expected sampling behavior
1570 | 9.  Test with various model parameters
1571 | 10. Monitor sampling costs
1572 | 
1573 | ## Human in the loop controls
1574 | 
1575 | Sampling is designed with human oversight in mind:
1576 | 
1577 | ### For prompts
1578 | 
1579 | *   Clients should show users the proposed prompt
1580 | *   Users should be able to modify or reject prompts
1581 | *   System prompts can be filtered or modified
1582 | *   Context inclusion is controlled by the client
1583 | 
1584 | ### For completions
1585 | 
1586 | *   Clients should show users the completion
1587 | *   Users should be able to modify or reject completions
1588 | *   Clients can filter or modify completions
1589 | *   Users control which model is used
1590 | 
1591 | ## Security considerations
1592 | 
1593 | When implementing sampling:
1594 | 
1595 | *   Validate all message content
1596 | *   Sanitize sensitive information
1597 | *   Implement appropriate rate limits
1598 | *   Monitor sampling usage
1599 | *   Encrypt data in transit
1600 | *   Handle user data privacy
1601 | *   Audit sampling requests
1602 | *   Control cost exposure
1603 | *   Implement timeouts
1604 | *   Handle model errors gracefully
1605 | 
1606 | ## Common patterns
1607 | 
1608 | ### Agentic workflows
1609 | 
1610 | Sampling enables agentic patterns like:
1611 | 
1612 | *   Reading and analyzing resources
1613 | *   Making decisions based on context
1614 | *   Generating structured data
1615 | *   Handling multi-step tasks
1616 | *   Providing interactive assistance
1617 | 
1618 | ### Context management
1619 | 
1620 | Best practices for context:
1621 | 
1622 | *   Request minimal necessary context
1623 | *   Structure context clearly
1624 | *   Handle context size limits
1625 | *   Update context as needed
1626 | *   Clean up stale context
1627 | 
1628 | ### Error handling
1629 | 
1630 | Robust error handling should:
1631 | 
1632 | *   Catch sampling failures
1633 | *   Handle timeout errors
1634 | *   Manage rate limits
1635 | *   Validate responses
1636 | *   Provide fallback behaviors
1637 | *   Log errors appropriately
1638 | 
1639 | ## Limitations
1640 | 
1641 | Be aware of these limitations:
1642 | 
1643 | *   Sampling depends on client capabilities
1644 | *   Users control sampling behavior
1645 | *   Context size has limits
1646 | *   Rate limits may apply
1647 | *   Costs should be considered
1648 | *   Model availability varies
1649 | *   Response times vary
1650 | *   Not all content types supported
1651 | 
1652 | 
1653 | # Tools
1654 | 
1655 | Enable LLMs to perform actions through your server
1656 | 
1657 | Tools are a powerful primitive in the Model Context Protocol (MCP) that enable servers to expose executable functionality to clients. Through tools, LLMs can interact with external systems, perform computations, and take actions in the real world.
1658 | 
1659 | <Note>
1660 |   Tools are designed to be **model-controlled**, meaning that tools are exposed from servers to clients with the intention of the AI model being able to automatically invoke them (with a human in the loop to grant approval).
1661 | </Note>
1662 | 
1663 | ## Overview
1664 | 
1665 | Tools in MCP allow servers to expose executable functions that can be invoked by clients and used by LLMs to perform actions. Key aspects of tools include:
1666 | 
1667 | *   **Discovery**: Clients can list available tools through the `tools/list` endpoint
1668 | *   **Invocation**: Tools are called using the `tools/call` endpoint, where servers perform the requested operation and return results
1669 | *   **Flexibility**: Tools can range from simple calculations to complex API interactions
1670 | 
1671 | Like [resources](/docs/concepts/resources), tools are identified by unique names and can include descriptions to guide their usage. However, unlike resources, tools represent dynamic operations that can modify state or interact with external systems.
1672 | 
1673 | ## Tool definition structure
1674 | 
1675 | Each tool is defined with the following structure:
1676 | 
1677 | ```typescript
1678 | {
1679 |   name: string;          // Unique identifier for the tool
1680 |   description?: string;  // Human-readable description
1681 |   inputSchema: {         // JSON Schema for the tool's parameters
1682 |     type: "object",
1683 |     properties: { ... }  // Tool-specific parameters
1684 |   }
1685 | }
1686 | ```
1687 | 
1688 | ## Implementing tools
1689 | 
1690 | Here's an example of implementing a basic tool in an MCP server:
1691 | 
1692 | <Tabs>
1693 |   <Tab title="TypeScript">
1694 |     ```typescript
1695 |     const server = new Server({
1696 |       name: "example-server",
1697 |       version: "1.0.0"
1698 |     }, {
1699 |       capabilities: {
1700 |         tools: {}
1701 |       }
1702 |     });
1703 | 
1704 |     // Define available tools
1705 |     server.setRequestHandler(ListToolsRequestSchema, async () => {
1706 |       return {
1707 |         tools: [{
1708 |           name: "calculate_sum",
1709 |           description: "Add two numbers together",
1710 |           inputSchema: {
1711 |             type: "object",
1712 |             properties: {
1713 |               a: { type: "number" },
1714 |               b: { type: "number" }
1715 |             },
1716 |             required: ["a", "b"]
1717 |           }
1718 |         }]
1719 |       };
1720 |     });
1721 | 
1722 |     // Handle tool execution
1723 |     server.setRequestHandler(CallToolRequestSchema, async (request) => {
1724 |       if (request.params.name === "calculate_sum") {
1725 |         const { a, b } = request.params.arguments;
1726 |         return {
1727 |           content: [
1728 |             {
1729 |               type: "text",
1730 |               text: String(a + b)
1731 |             }
1732 |           ]
1733 |         };
1734 |       }
1735 |       throw new Error("Tool not found");
1736 |     });
1737 |     ```
1738 |   </Tab>
1739 | 
1740 |   <Tab title="Python">
1741 |     ```python
1742 |     app = Server("example-server")
1743 | 
1744 |     @app.list_tools()
1745 |     async def list_tools() -> list[types.Tool]:
1746 |         return [
1747 |             types.Tool(
1748 |                 name="calculate_sum",
1749 |                 description="Add two numbers together",
1750 |                 inputSchema={
1751 |                     "type": "object",
1752 |                     "properties": {
1753 |                         "a": {"type": "number"},
1754 |                         "b": {"type": "number"}
1755 |                     },
1756 |                     "required": ["a", "b"]
1757 |                 }
1758 |             )
1759 |         ]
1760 | 
1761 |     @app.call_tool()
1762 |     async def call_tool(
1763 |         name: str,
1764 |         arguments: dict
1765 |     ) -> list[types.TextContent | types.ImageContent | types.EmbeddedResource]:
1766 |         if name == "calculate_sum":
1767 |             a = arguments["a"]
1768 |             b = arguments["b"]
1769 |             result = a + b
1770 |             return [types.TextContent(type="text", text=str(result))]
1771 |         raise ValueError(f"Tool not found: {name}")
1772 |     ```
1773 |   </Tab>
1774 | </Tabs>
1775 | 
1776 | ## Example tool patterns
1777 | 
1778 | Here are some examples of types of tools that a server could provide:
1779 | 
1780 | ### System operations
1781 | 
1782 | Tools that interact with the local system:
1783 | 
1784 | ```typescript
1785 | {
1786 |   name: "execute_command",
1787 |   description: "Run a shell command",
1788 |   inputSchema: {
1789 |     type: "object",
1790 |     properties: {
1791 |       command: { type: "string" },
1792 |       args: { type: "array", items: { type: "string" } }
1793 |     }
1794 |   }
1795 | }
1796 | ```
1797 | 
1798 | ### API integrations
1799 | 
1800 | Tools that wrap external APIs:
1801 | 
1802 | ```typescript
1803 | {
1804 |   name: "github_create_issue",
1805 |   description: "Create a GitHub issue",
1806 |   inputSchema: {
1807 |     type: "object",
1808 |     properties: {
1809 |       title: { type: "string" },
1810 |       body: { type: "string" },
1811 |       labels: { type: "array", items: { type: "string" } }
1812 |     }
1813 |   }
1814 | }
1815 | ```
1816 | 
1817 | ### Data processing
1818 | 
1819 | Tools that transform or analyze data:
1820 | 
1821 | ```typescript
1822 | {
1823 |   name: "analyze_csv",
1824 |   description: "Analyze a CSV file",
1825 |   inputSchema: {
1826 |     type: "object",
1827 |     properties: {
1828 |       filepath: { type: "string" },
1829 |       operations: {
1830 |         type: "array",
1831 |         items: {
1832 |           enum: ["sum", "average", "count"]
1833 |         }
1834 |       }
1835 |     }
1836 |   }
1837 | }
1838 | ```
1839 | 
1840 | ## Best practices
1841 | 
1842 | When implementing tools:
1843 | 
1844 | 1.  Provide clear, descriptive names and descriptions
1845 | 2.  Use detailed JSON Schema definitions for parameters
1846 | 3.  Include examples in tool descriptions to demonstrate how the model should use them
1847 | 4.  Implement proper error handling and validation
1848 | 5.  Use progress reporting for long operations
1849 | 6.  Keep tool operations focused and atomic
1850 | 7.  Document expected return value structures
1851 | 8.  Implement proper timeouts
1852 | 9.  Consider rate limiting for resource-intensive operations
1853 | 10. Log tool usage for debugging and monitoring
1854 | 
1855 | ## Security considerations
1856 | 
1857 | When exposing tools:
1858 | 
1859 | ### Input validation
1860 | 
1861 | *   Validate all parameters against the schema
1862 | *   Sanitize file paths and system commands
1863 | *   Validate URLs and external identifiers
1864 | *   Check parameter sizes and ranges
1865 | *   Prevent command injection
1866 | 
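As an illustration, validation for the `execute_command` tool shown earlier might look like the sketch below; the allow-list and helper name are assumptions, not part of the protocol:

```typescript
// Illustrative allow-list; a real server would tailor this to its own needs
const ALLOWED_COMMANDS = new Set(["ls", "cat", "git"]);

function validateExecuteCommandArgs(args: { command: string; args?: string[] }): void {
  if (typeof args.command !== "string" || !ALLOWED_COMMANDS.has(args.command)) {
    throw new Error(`Command not permitted: ${args.command}`);
  }
  for (const arg of args.args ?? []) {
    // Reject shell metacharacters to reduce the risk of command injection
    if (/[;&|`$<>]/.test(arg)) {
      throw new Error("Argument contains forbidden characters");
    }
    if (arg.length > 1024) {
      throw new Error("Argument exceeds the maximum allowed length");
    }
  }
}
```
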
1867 | ### Access control
1868 | 
1869 | *   Implement authentication where needed
1870 | *   Use appropriate authorization checks
1871 | *   Audit tool usage
1872 | *   Rate limit requests
1873 | *   Monitor for abuse
1874 | 
1875 | ### Error handling
1876 | 
1877 | *   Don't expose internal errors to clients
1878 | *   Log security-relevant errors
1879 | *   Handle timeouts appropriately
1880 | *   Clean up resources after errors
1881 | *   Validate return values
1882 | 
1883 | ## Tool discovery and updates
1884 | 
1885 | MCP supports dynamic tool discovery:
1886 | 
1887 | 1.  Clients can list available tools at any time
1888 | 2.  Servers can notify clients when tools change using `notifications/tools/list_changed`
1889 | 3.  Tools can be added or removed during runtime
1890 | 4.  Tool definitions can be updated (though this should be done carefully)
1891 | 
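For example, after registering or removing a tool at runtime, a server can emit the list-changed notification; on receiving it, clients typically call `tools/list` again to refresh their view. A minimal sketch of that notification:

```typescript
// JSON-RPC notification a server sends when its tool list changes (sketch)
const toolListChangedNotification = {
  jsonrpc: "2.0",
  method: "notifications/tools/list_changed"
};
```
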
1892 | ## Error handling
1893 | 
1894 | Tool errors should be reported within the result object, not as MCP protocol-level errors. This allows the LLM to see and potentially handle the error. When a tool encounters an error:
1895 | 
1896 | 1.  Set `isError` to `true` in the result
1897 | 2.  Include error details in the `content` array
1898 | 
1899 | Here's an example of proper error handling for tools:
1900 | 
1901 | <Tabs>
1902 |   <Tab title="TypeScript">
1903 |     ```typescript
1904 |     try {
1905 |       // Tool operation
1906 |       const result = performOperation();
1907 |       return {
1908 |         content: [
1909 |           {
1910 |             type: "text",
1911 |             text: `Operation successful: ${result}`
1912 |           }
1913 |         ]
1914 |       };
1915 |     } catch (error) {
1916 |       return {
1917 |         isError: true,
1918 |         content: [
1919 |           {
1920 |             type: "text",
1921 |             text: `Error: ${error.message}`
1922 |           }
1923 |         ]
1924 |       };
1925 |     }
1926 |     ```
1927 |   </Tab>
1928 | 
1929 |   <Tab title="Python">
1930 |     ```python
1931 |     try:
1932 |         # Tool operation
1933 |         result = perform_operation()
1934 |         return types.CallToolResult(
1935 |             content=[
1936 |                 types.TextContent(
1937 |                     type="text",
1938 |                     text=f"Operation successful: {result}"
1939 |                 )
1940 |             ]
1941 |         )
1942 |     except Exception as error:
1943 |         return types.CallToolResult(
1944 |             isError=True,
1945 |             content=[
1946 |                 types.TextContent(
1947 |                     type="text",
1948 |                     text=f"Error: {str(error)}"
1949 |                 )
1950 |             ]
1951 |         )
1952 |     ```
1953 |   </Tab>
1954 | </Tabs>
1955 | 
1956 | This approach allows the LLM to see that an error occurred and potentially take corrective action or request human intervention.
1957 | 
1958 | ## Testing tools
1959 | 
1960 | A comprehensive testing strategy for MCP tools should cover:
1961 | 
1962 | *   **Functional testing**: Verify tools execute correctly with valid inputs and handle invalid inputs appropriately
1963 | *   **Integration testing**: Test tool interaction with external systems using both real and mocked dependencies
1964 | *   **Security testing**: Validate authentication, authorization, input sanitization, and rate limiting
1965 | *   **Performance testing**: Check behavior under load, timeout handling, and resource cleanup
1966 | *   **Error handling**: Ensure tools properly report errors through the MCP protocol and clean up resources
1967 | 
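As a starting point for functional testing, the sketch below exercises the `calculate_sum` logic from the earlier example with Node's built-in test runner; the extracted handler function is a hypothetical refactoring, not part of the SDK:

```typescript
import assert from "node:assert/strict";
import { test } from "node:test";

// Hypothetical handler extracted from the calculate_sum tool so it can be unit tested
async function handleCalculateSum(args: { a: unknown; b: unknown }) {
  if (typeof args.a !== "number" || typeof args.b !== "number") {
    throw new Error("Invalid arguments: a and b must be numbers");
  }
  return { content: [{ type: "text", text: String(args.a + args.b) }] };
}

test("returns the sum as text for valid input", async () => {
  const result = await handleCalculateSum({ a: 2, b: 3 });
  assert.equal(result.content[0].text, "5");
});

test("rejects non-numeric input", async () => {
  await assert.rejects(handleCalculateSum({ a: "2", b: 3 }));
});
```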
1968 | 
1969 | # Transports
1970 | 
1971 | Learn about MCP's communication mechanisms
1972 | 
1973 | Transports in the Model Context Protocol (MCP) provide the foundation for communication between clients and servers. A transport handles the underlying mechanics of how messages are sent and received.
1974 | 
1975 | ## Message Format
1976 | 
1977 | MCP uses [JSON-RPC](https://www.jsonrpc.org/) 2.0 as its wire format. The transport layer is responsible for converting MCP protocol messages into JSON-RPC format for transmission and converting received JSON-RPC messages back into MCP protocol messages.
1978 | 
1979 | There are three types of JSON-RPC messages used:
1980 | 
1981 | ### Requests
1982 | 
1983 | ```typescript
1984 | {
1985 |   jsonrpc: "2.0",
1986 |   id: number | string,
1987 |   method: string,
1988 |   params?: object
1989 | }
1990 | ```
1991 | 
1992 | ### Responses
1993 | 
1994 | ```typescript
1995 | {
1996 |   jsonrpc: "2.0",
1997 |   id: number | string,
1998 |   result?: object,
1999 |   error?: {
2000 |     code: number,
2001 |     message: string,
2002 |     data?: unknown
2003 |   }
2004 | }
2005 | ```
2006 | 
2007 | ### Notifications
2008 | 
2009 | ```typescript
2010 | {
2011 |   jsonrpc: "2.0",
2012 |   method: string,
2013 |   params?: object
2014 | }
2015 | ```
2016 | 
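For instance, a `tools/list` exchange might look like this on the wire (the `id` and result values are illustrative):

```typescript
// Request sent by the client
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/list",
  params: {}
};

// Matching response returned by the server
const response = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    tools: [
      { name: "calculate_sum", description: "Add two numbers together" }
    ]
  }
};
```
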
2017 | ## Built-in Transport Types
2018 | 
2019 | MCP includes two standard transport implementations:
2020 | 
2021 | ### Standard Input/Output (stdio)
2022 | 
2023 | The stdio transport enables communication through standard input and output streams. This is particularly useful for local integrations and command-line tools.
2024 | 
2025 | Use stdio when:
2026 | 
2027 | *   Building command-line tools
2028 | *   Implementing local integrations
2029 | *   Needing simple process communication
2030 | *   Working with shell scripts
2031 | 
2032 | <Tabs>
2033 |   <Tab title="TypeScript (Server)">
2034 |     ```typescript
2035 |     const server = new Server({
2036 |       name: "example-server",
2037 |       version: "1.0.0"
2038 |     }, {
2039 |       capabilities: {}
2040 |     });
2041 | 
2042 |     const transport = new StdioServerTransport();
2043 |     await server.connect(transport);
2044 |     ```
2045 |   </Tab>
2046 | 
2047 |   <Tab title="TypeScript (Client)">
2048 |     ```typescript
2049 |     const client = new Client({
2050 |       name: "example-client",
2051 |       version: "1.0.0"
2052 |     }, {
2053 |       capabilities: {}
2054 |     });
2055 | 
2056 |     const transport = new StdioClientTransport({
2057 |       command: "./server",
2058 |       args: ["--option", "value"]
2059 |     });
2060 |     await client.connect(transport);
2061 |     ```
2062 |   </Tab>
2063 | 
2064 |   <Tab title="Python (Server)">
2065 |     ```python
2066 |     app = Server("example-server")
2067 | 
2068 |     async with stdio_server() as streams:
2069 |         await app.run(
2070 |             streams[0],
2071 |             streams[1],
2072 |             app.create_initialization_options()
2073 |         )
2074 |     ```
2075 |   </Tab>
2076 | 
2077 |   <Tab title="Python (Client)">
2078 |     ```python
2079 |     params = StdioServerParameters(
2080 |         command="./server",
2081 |         args=["--option", "value"]
2082 |     )
2083 | 
2084 |     async with stdio_client(params) as streams:
2085 |         async with ClientSession(streams[0], streams[1]) as session:
2086 |             await session.initialize()
2087 |     ```
2088 |   </Tab>
2089 | </Tabs>
2090 | 
2091 | ### Server-Sent Events (SSE)
2092 | 
2093 | SSE transport enables server-to-client streaming with HTTP POST requests for client-to-server communication.
2094 | 
2095 | Use SSE when:
2096 | 
2097 | *   Only server-to-client streaming is needed
2098 | *   Working with restricted networks
2099 | *   Implementing simple updates
2100 | 
2101 | <Tabs>
2102 |   <Tab title="TypeScript (Server)">
2103 |     ```typescript
2104 |     const server = new Server({
2105 |       name: "example-server",
2106 |       version: "1.0.0"
2107 |     }, {
2108 |       capabilities: {}
2109 |     });
2110 | 
2111 |     const transport = new SSEServerTransport("/message", response);
2112 |     await server.connect(transport);
2113 |     ```
2114 |   </Tab>
2115 | 
2116 |   <Tab title="TypeScript (Client)">
2117 |     ```typescript
2118 |     const client = new Client({
2119 |       name: "example-client",
2120 |       version: "1.0.0"
2121 |     }, {
2122 |       capabilities: {}
2123 |     });
2124 | 
2125 |     const transport = new SSEClientTransport(
2126 |       new URL("http://localhost:3000/sse")
2127 |     );
2128 |     await client.connect(transport);
2129 |     ```
2130 |   </Tab>
2131 | 
2132 |   <Tab title="Python (Server)">
2133 |     ```python
2134 |     from mcp.server.sse import SseServerTransport
2135 |     from starlette.applications import Starlette
2136 |     from starlette.routing import Route
2137 | 
2138 |     app = Server("example-server")
2139 |     sse = SseServerTransport("/messages")
2140 | 
2141 |     async def handle_sse(scope, receive, send):
2142 |         async with sse.connect_sse(scope, receive, send) as streams:
2143 |             await app.run(streams[0], streams[1], app.create_initialization_options())
2144 | 
2145 |     async def handle_messages(scope, receive, send):
2146 |         await sse.handle_post_message(scope, receive, send)
2147 | 
2148 |     starlette_app = Starlette(
2149 |         routes=[
2150 |             Route("/sse", endpoint=handle_sse),
2151 |             Route("/messages", endpoint=handle_messages, methods=["POST"]),
2152 |         ]
2153 |     )
2154 |     ```
2155 |   </Tab>
2156 | 
2157 |   <Tab title="Python (Client)">
2158 |     ```python
2159 |     async with sse_client("http://localhost:8000/sse") as streams:
2160 |         async with ClientSession(streams[0], streams[1]) as session:
2161 |             await session.initialize()
2162 |     ```
2163 |   </Tab>
2164 | </Tabs>
2165 | 
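In practice, the TypeScript SSE transport is wired into an HTTP framework. The sketch below uses Express and assumes the `server` instance from the example above; the route paths, the import path, and single-session handling are simplifying assumptions:

```typescript
import express from "express";
// Import path may vary with your SDK version
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";

const app = express();
let transport: SSEServerTransport | undefined;

// The client opens an SSE stream here
app.get("/sse", async (_req, res) => {
  transport = new SSEServerTransport("/messages", res);
  await server.connect(transport);  // "server" is the Server instance created above
});

// The client POSTs its JSON-RPC messages back here
app.post("/messages", async (req, res) => {
  if (transport) {
    await transport.handlePostMessage(req, res);
  } else {
    res.status(400).send("No active SSE connection");
  }
});

app.listen(3000);
```
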
2166 | ## Custom Transports
2167 | 
2168 | MCP makes it easy to implement custom transports for specific needs. Any transport implementation just needs to conform to the Transport interface shown below.
2169 | 
2170 | You can implement custom transports for:
2171 | 
2172 | *   Custom network protocols
2173 | *   Specialized communication channels
2174 | *   Integration with existing systems
2175 | *   Performance optimization
2176 | 
2177 | <Tabs>
2178 |   <Tab title="TypeScript">
2179 |     ```typescript
2180 |     interface Transport {
2181 |       // Start processing messages
2182 |       start(): Promise<void>;
2183 | 
2184 |       // Send a JSON-RPC message
2185 |       send(message: JSONRPCMessage): Promise<void>;
2186 | 
2187 |       // Close the connection
2188 |       close(): Promise<void>;
2189 | 
2190 |       // Callbacks
2191 |       onclose?: () => void;
2192 |       onerror?: (error: Error) => void;
2193 |       onmessage?: (message: JSONRPCMessage) => void;
2194 |     }
2195 |     ```
2196 |   </Tab>
2197 | 
2198 |   <Tab title="Python">
2199 |     Note that while MCP Servers are often implemented with asyncio, we recommend
2200 |     implementing low-level interfaces like transports with `anyio` for wider compatibility.
2201 | 
2202 |     ```python
2203 |     @asynccontextmanager
2204 |     async def create_transport(
2205 |         read_stream: MemoryObjectReceiveStream[JSONRPCMessage | Exception],
2206 |         write_stream: MemoryObjectSendStream[JSONRPCMessage]
2207 |     ):
2208 |         """
2209 |         Transport interface for MCP.
2210 | 
2211 |         Args:
2212 |             read_stream: Stream to read incoming messages from
2213 |             write_stream: Stream to write outgoing messages to
2214 |         """
2215 |         async with anyio.create_task_group() as tg:
2216 |             try:
2217 |                 # Start processing messages
2218 |                 tg.start_soon(lambda: process_messages(read_stream))
2219 | 
2220 |                 # Send messages
2221 |                 async with write_stream:
2222 |                     yield write_stream
2223 | 
2224 |             except Exception as exc:
2225 |                 # Handle errors
2226 |                 raise exc
2227 |             finally:
2228 |                 # Clean up
2229 |                 tg.cancel_scope.cancel()
2230 |                 await write_stream.aclose()
2231 |                 await read_stream.aclose()
2232 |     ```
2233 |   </Tab>
2234 | </Tabs>
2235 | 
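As an illustration, a minimal in-memory transport that satisfies the TypeScript interface above could look like the following sketch (import paths may differ between SDK versions, and a production transport would need more robust error handling):

```typescript
// Types come from the SDK; adjust the import paths to your SDK version
import type { Transport } from "@modelcontextprotocol/sdk/shared/transport.js";
import type { JSONRPCMessage } from "@modelcontextprotocol/sdk/types.js";

class InMemoryTransport implements Transport {
  onclose?: () => void;
  onerror?: (error: Error) => void;
  onmessage?: (message: JSONRPCMessage) => void;

  // The paired transport that receives whatever this one sends
  peer?: InMemoryTransport;

  async start(): Promise<void> {
    // Nothing to set up for an in-memory channel
  }

  async send(message: JSONRPCMessage): Promise<void> {
    // Deliver the message directly to the peer's message callback
    this.peer?.onmessage?.(message);
  }

  async close(): Promise<void> {
    this.onclose?.();
  }
}
```
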
2236 | ## Error Handling
2237 | 
2238 | Transport implementations should handle various error scenarios:
2239 | 
2240 | 1.  Connection errors
2241 | 2.  Message parsing errors
2242 | 3.  Protocol errors
2243 | 4.  Network timeouts
2244 | 5.  Resource cleanup
2245 | 
2246 | Example error handling:
2247 | 
2248 | <Tabs>
2249 |   <Tab title="TypeScript">
2250 |     ```typescript
2251 |     class ExampleTransport implements Transport {
2252 |       async start() {
2253 |         try {
2254 |           // Connection logic
2255 |         } catch (error) {
2256 |           this.onerror?.(new Error(`Failed to connect: ${error}`));
2257 |           throw error;
2258 |         }
2259 |       }
2260 | 
2261 |       async send(message: JSONRPCMessage) {
2262 |         try {
2263 |           // Sending logic
2264 |         } catch (error) {
2265 |           this.onerror?.(new Error(`Failed to send message: ${error}`));
2266 |           throw error;
2267 |         }
2268 |       }
2269 |     }
2270 |     ```
2271 |   </Tab>
2272 | 
2273 |   <Tab title="Python">
2274 |     Note that while MCP Servers are often implemented with asyncio, we recommend
2275 |     implementing low-level interfaces like transports with `anyio` for wider compatibility.
2276 | 
2277 |     ```python
2278 |     @asynccontextmanager
2279 |     async def example_transport(scope: Scope, receive: Receive, send: Send):
2280 |         try:
2281 |             # Create streams for bidirectional communication
2282 |             read_stream_writer, read_stream = anyio.create_memory_object_stream(0)
2283 |             write_stream, write_stream_reader = anyio.create_memory_object_stream(0)
2284 | 
2285 |             async def message_handler():
2286 |                 try:
2287 |                     async with read_stream_writer:
2288 |                         # Message handling logic
2289 |                         pass
2290 |                 except Exception as exc:
2291 |                     logger.error(f"Failed to handle message: {exc}")
2292 |                     raise exc
2293 | 
2294 |             async with anyio.create_task_group() as tg:
2295 |                 tg.start_soon(message_handler)
2296 |                 try:
2297 |                     # Yield streams for communication
2298 |                     yield read_stream, write_stream
2299 |                 except Exception as exc:
2300 |                     logger.error(f"Transport error: {exc}")
2301 |                     raise exc
2302 |                 finally:
2303 |                     tg.cancel_scope.cancel()
2304 |                     await write_stream.aclose()
2305 |                     await read_stream.aclose()
2306 |         except Exception as exc:
2307 |             logger.error(f"Failed to initialize transport: {exc}")
2308 |             raise exc
2309 |     ```
2310 |   </Tab>
2311 | </Tabs>
2312 | 
2313 | ## Best Practices
2314 | 
2315 | When implementing or using MCP transport:
2316 | 
2317 | 1.  Handle connection lifecycle properly
2318 | 2.  Implement proper error handling
2319 | 3.  Clean up resources on connection close
2320 | 4.  Use appropriate timeouts
2321 | 5.  Validate messages before sending
2322 | 6.  Log transport events for debugging
2323 | 7.  Implement reconnection logic when appropriate
2324 | 8.  Handle backpressure in message queues
2325 | 9.  Monitor connection health
2326 | 10. Implement proper security measures
2327 | 
2328 | ## Security Considerations
2329 | 
2330 | When implementing transport:
2331 | 
2332 | ### Authentication and Authorization
2333 | 
2334 | *   Implement proper authentication mechanisms
2335 | *   Validate client credentials
2336 | *   Use secure token handling
2337 | *   Implement authorization checks
2338 | 
2339 | ### Data Security
2340 | 
2341 | *   Use TLS for network transport
2342 | *   Encrypt sensitive data
2343 | *   Validate message integrity
2344 | *   Implement message size limits
2345 | *   Sanitize input data
2346 | 
2347 | ### Network Security
2348 | 
2349 | *   Implement rate limiting
2350 | *   Use appropriate timeouts
2351 | *   Handle denial of service scenarios
2352 | *   Monitor for unusual patterns
2353 | *   Implement proper firewall rules
2354 | 
2355 | ## Debugging Transport
2356 | 
2357 | Tips for debugging transport issues:
2358 | 
2359 | 1.  Enable debug logging
2360 | 2.  Monitor message flow
2361 | 3.  Check connection states
2362 | 4.  Validate message formats
2363 | 5.  Test error scenarios
2364 | 6.  Use network analysis tools
2365 | 7.  Implement health checks
2366 | 8.  Monitor resource usage
2367 | 9.  Test edge cases
2368 | 10. Use proper error tracking
2369 | 
2370 | 
2371 | # Debugging
2372 | 
2373 | A comprehensive guide to debugging Model Context Protocol (MCP) integrations
2374 | 
2375 | Effective debugging is essential when developing MCP servers or integrating them with applications. This guide covers the debugging tools and approaches available in the MCP ecosystem.
2376 | 
2377 | <Info>
2378 |   This guide is for macOS. Guides for other platforms are coming soon.
2379 | </Info>
2380 | 
2381 | ## Debugging tools overview
2382 | 
2383 | MCP provides several tools for debugging at different levels:
2384 | 
2385 | 1.  **MCP Inspector**
2386 |     *   Interactive debugging interface
2387 |     *   Direct server testing
2388 |     *   See the [Inspector guide](/docs/tools/inspector) for details
2389 | 
2390 | 2.  **Claude Desktop Developer Tools**
2391 |     *   Integration testing
2392 |     *   Log collection
2393 |     *   Chrome DevTools integration
2394 | 
2395 | 3.  **Server Logging**
2396 |     *   Custom logging implementations
2397 |     *   Error tracking
2398 |     *   Performance monitoring
2399 | 
2400 | ## Debugging in Claude Desktop
2401 | 
2402 | ### Checking server status
2403 | 
2404 | The Claude.app interface provides basic server status information:
2405 | 
2406 | 1.  Click the <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/claude-desktop-mcp-plug-icon.svg" style={{display: 'inline', margin: 0, height: '1.3em'}} /> icon to view:
2407 |     *   Connected servers
2408 |     *   Available prompts and resources
2409 | 
2410 | 2.  Click the <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/claude-desktop-mcp-hammer-icon.svg" style={{display: 'inline', margin: 0, height: '1.3em'}} /> icon to view:
2411 |     *   Tools made available to the model
2412 | 
2413 | ### Viewing logs
2414 | 
2415 | Review detailed MCP logs from Claude Desktop:
2416 | 
2417 | ```bash
2418 | # Follow logs in real-time
2419 | tail -n 20 -f ~/Library/Logs/Claude/mcp*.log
2420 | ```
2421 | 
2422 | The logs capture:
2423 | 
2424 | *   Server connection events
2425 | *   Configuration issues
2426 | *   Runtime errors
2427 | *   Message exchanges
2428 | 
2429 | ### Using Chrome DevTools
2430 | 
2431 | Access Chrome's developer tools inside Claude Desktop to investigate client-side errors:
2432 | 
2433 | 1.  Enable DevTools:
2434 | 
2435 | ```bash
2436 | jq '.allowDevTools = true' ~/Library/Application\ Support/Claude/developer_settings.json > tmp.json \
2437 |   && mv tmp.json ~/Library/Application\ Support/Claude/developer_settings.json
2438 | ```
2439 | 
2440 | 2.  Open DevTools: `Command-Option-Shift-i`
2441 | 
2442 | Note: You'll see two DevTools windows:
2443 | 
2444 | *   Main content window
2445 | *   App title bar window
2446 | 
2447 | Use the Console panel to inspect client-side errors.
2448 | 
2449 | Use the Network panel to inspect:
2450 | 
2451 | *   Message payloads
2452 | *   Connection timing
2453 | 
2454 | ## Common issues
2455 | 
2456 | ### Working directory
2457 | 
2458 | When using MCP servers with Claude Desktop:
2459 | 
2460 | *   The working directory for servers launched via `claude_desktop_config.json` may be undefined (like `/` on macOS) since Claude Desktop could be started from anywhere
2461 | *   Always use absolute paths in your configuration and `.env` files to ensure reliable operation
2462 | *   For testing servers directly via command line, the working directory will be where you run the command
2463 | 
2464 | For example in `claude_desktop_config.json`, use:
2465 | 
2466 | ```json
2467 | {
2468 |   "command": "npx",
2469 |   "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/username/data"]
2470 | }
2471 | ```
2472 | 
2473 | Instead of relative paths like `./data`
2474 | 
2475 | ### Environment variables
2476 | 
2477 | MCP servers inherit only a subset of environment variables automatically, like `USER`, `HOME`, and `PATH`.
2478 | 
2479 | To override the default variables or provide your own, you can specify an `env` key in `claude_desktop_config.json`:
2480 | 
2481 | ```json
2482 | {
2483 |   "myserver": {
2484 |     "command": "mcp-server-myapp",
2485 |     "env": {
2486 |       "MYAPP_API_KEY": "some_key",
2487 |     }
2488 |   }
2489 | }
2490 | ```
2491 | 
2492 | ### Server initialization
2493 | 
2494 | Common initialization problems:
2495 | 
2496 | 1.  **Path Issues**
2497 |     *   Incorrect server executable path
2498 |     *   Missing required files
2499 |     *   Permission problems
2500 |     *   Try using an absolute path for `command`
2501 | 
2502 | 2.  **Configuration Errors**
2503 |     *   Invalid JSON syntax
2504 |     *   Missing required fields
2505 |     *   Type mismatches
2506 | 
2507 | 3.  **Environment Problems**
2508 |     *   Missing environment variables
2509 |     *   Incorrect variable values
2510 |     *   Permission restrictions
2511 | 
2512 | ### Connection problems
2513 | 
2514 | When servers fail to connect:
2515 | 
2516 | 1.  Check Claude Desktop logs
2517 | 2.  Verify server process is running
2518 | 3.  Test standalone with [Inspector](/docs/tools/inspector)
2519 | 4.  Verify protocol compatibility
2520 | 
2521 | ## Implementing logging
2522 | 
2523 | ### Server-side logging
2524 | 
2525 | When building a server that uses the local stdio [transport](/docs/concepts/transports), all messages logged to stderr (standard error) will be captured by the host application (e.g., Claude Desktop) automatically.
2526 | 
2527 | <Warning>
2528 |   Local MCP servers should not log messages to stdout (standard out), as this will interfere with protocol operation.
2529 | </Warning>
2530 | 
2531 | For all [transports](/docs/concepts/transports), you can also provide logging to the client by sending a log message notification:
2532 | 
2533 | <Tabs>
2534 |   <Tab title="Python">
2535 |     ```python
2536 |     server.request_context.session.send_log_message(
2537 |       level="info",
2538 |       data="Server started successfully",
2539 |     )
2540 |     ```
2541 |   </Tab>
2542 | 
2543 |   <Tab title="TypeScript">
2544 |     ```typescript
2545 |     server.sendLoggingMessage({
2546 |       level: "info",
2547 |       data: "Server started successfully",
2548 |     });
2549 |     ```
2550 |   </Tab>
2551 | </Tabs>
2552 | 
2553 | Important events to log:
2554 | 
2555 | *   Initialization steps
2556 | *   Resource access
2557 | *   Tool execution
2558 | *   Error conditions
2559 | *   Performance metrics
2560 | 
2561 | ### Client-side logging
2562 | 
2563 | In client applications:
2564 | 
2565 | 1.  Enable debug logging
2566 | 2.  Monitor network traffic
2567 | 3.  Track message exchanges
2568 | 4.  Record error states
2569 | 
2570 | ## Debugging workflow
2571 | 
2572 | ### Development cycle
2573 | 
2574 | 1.  Initial Development
2575 |     *   Use [Inspector](/docs/tools/inspector) for basic testing
2576 |     *   Implement core functionality
2577 |     *   Add logging points
2578 | 
2579 | 2.  Integration Testing
2580 |     *   Test in Claude Desktop
2581 |     *   Monitor logs
2582 |     *   Check error handling
2583 | 
2584 | ### Testing changes
2585 | 
2586 | To test changes efficiently:
2587 | 
2588 | *   **Configuration changes**: Restart Claude Desktop
2589 | *   **Server code changes**: Use Command-R to reload
2590 | *   **Quick iteration**: Use [Inspector](/docs/tools/inspector) during development
2591 | 
2592 | ## Best practices
2593 | 
2594 | ### Logging strategy
2595 | 
2596 | 1.  **Structured Logging**
2597 |     *   Use consistent formats
2598 |     *   Include context
2599 |     *   Add timestamps
2600 |     *   Track request IDs
2601 | 
2602 | 2.  **Error Handling**
2603 |     *   Log stack traces
2604 |     *   Include error context
2605 |     *   Track error patterns
2606 |     *   Monitor recovery
2607 | 
2608 | 3.  **Performance Tracking**
2609 |     *   Log operation timing
2610 |     *   Monitor resource usage
2611 |     *   Track message sizes
2612 |     *   Measure latency
2613 | 
2614 | ### Security considerations
2615 | 
2616 | When debugging:
2617 | 
2618 | 1.  **Sensitive Data**
2619 |     *   Sanitize logs
2620 |     *   Protect credentials
2621 |     *   Mask personal information
2622 | 
2623 | 2.  **Access Control**
2624 |     *   Verify permissions
2625 |     *   Check authentication
2626 |     *   Monitor access patterns
2627 | 
2628 | ## Getting help
2629 | 
2630 | When encountering issues:
2631 | 
2632 | 1.  **First Steps**
2633 |     *   Check server logs
2634 |     *   Test with [Inspector](/docs/tools/inspector)
2635 |     *   Review configuration
2636 |     *   Verify environment
2637 | 
2638 | 2.  **Support Channels**
2639 |     *   GitHub issues
2640 |     *   GitHub discussions
2641 | 
2642 | 3.  **Providing Information**
2643 |     *   Log excerpts
2644 |     *   Configuration files
2645 |     *   Steps to reproduce
2646 |     *   Environment details
2647 | 
2648 | ## Next steps
2649 | 
2650 | <CardGroup cols={2}>
2651 |   <Card title="MCP Inspector" icon="magnifying-glass" href="/docs/tools/inspector">
2652 |     Learn to use the MCP Inspector
2653 |   </Card>
2654 | </CardGroup>
2655 | 
2656 | 
2657 | # Inspector
2658 | 
2659 | In-depth guide to using the MCP Inspector for testing and debugging Model Context Protocol servers
2660 | 
2661 | The [MCP Inspector](https://github.com/modelcontextprotocol/inspector) is an interactive developer tool for testing and debugging MCP servers. While the [Debugging Guide](/docs/tools/debugging) covers the Inspector as part of the overall debugging toolkit, this document provides a detailed exploration of the Inspector's features and capabilities.
2662 | 
2663 | ## Getting started
2664 | 
2665 | ### Installation and basic usage
2666 | 
2667 | The Inspector runs directly through `npx` without requiring installation:
2668 | 
2669 | ```bash
2670 | npx @modelcontextprotocol/inspector <command>
2671 | ```
2672 | 
2673 | ```bash
2674 | npx @modelcontextprotocol/inspector <command> <arg1> <arg2>
2675 | ```
2676 | 
2677 | #### Inspecting servers from NPM or PyPI
2678 | 
2679 | A common way to start server packages from [NPM](https://npmjs.com) or [PyPI](https://pypi.org) is:
2680 | 
2681 | <Tabs>
2682 |   <Tab title="NPM package">
2683 |     ```bash
2684 |     npx -y @modelcontextprotocol/inspector npx <package-name> <args>
2685 |     # For example
2686 |     npx -y @modelcontextprotocol/inspector npx server-postgres postgres://127.0.0.1/testdb
2687 |     ```
2688 |   </Tab>
2689 | 
2690 |   <Tab title="PyPi package">
2691 |     ```bash
2692 |     npx @modelcontextprotocol/inspector uvx <package-name> <args>
2693 |     # For example
2694 |     npx @modelcontextprotocol/inspector uvx mcp-server-git --repository ~/code/mcp/servers.git
2695 |     ```
2696 |   </Tab>
2697 | </Tabs>
2698 | 
2699 | #### Inspecting locally developed servers
2700 | 
2701 | To inspect servers that you have developed locally or downloaded as a repository, the
2702 | most common way is:
2703 | 
2704 | <Tabs>
2705 |   <Tab title="TypeScript">
2706 |     ```bash
2707 |     npx @modelcontextprotocol/inspector node path/to/server/index.js args...
2708 |     ```
2709 |   </Tab>
2710 | 
2711 |   <Tab title="Python">
2712 |     ```bash
2713 |     npx @modelcontextprotocol/inspector \
2714 |       uv \
2715 |       --directory path/to/server \
2716 |       run \
2717 |       package-name \
2718 |       args...
2719 |     ```
2720 |   </Tab>
2721 | </Tabs>
2722 | 
2723 | Please carefully read any attached README for the most accurate instructions.
2724 | 
2725 | ## Feature overview
2726 | 
2727 | <Frame caption="The MCP Inspector interface">
2728 |   <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/mcp-inspector.png" />
2729 | </Frame>
2730 | 
2731 | The Inspector provides several features for interacting with your MCP server:
2732 | 
2733 | ### Server connection pane
2734 | 
2735 | *   Allows selecting the [transport](/docs/concepts/transports) for connecting to the server
2736 | *   For local servers, supports customizing the command-line arguments and environment
2737 | 
2738 | ### Resources tab
2739 | 
2740 | *   Lists all available resources
2741 | *   Shows resource metadata (MIME types, descriptions)
2742 | *   Allows resource content inspection
2743 | *   Supports subscription testing
2744 | 
2745 | ### Prompts tab
2746 | 
2747 | *   Displays available prompt templates
2748 | *   Shows prompt arguments and descriptions
2749 | *   Enables prompt testing with custom arguments
2750 | *   Previews generated messages
2751 | 
2752 | ### Tools tab
2753 | 
2754 | *   Lists available tools
2755 | *   Shows tool schemas and descriptions
2756 | *   Enables tool testing with custom inputs
2757 | *   Displays tool execution results
2758 | 
2759 | ### Notifications pane
2760 | 
2761 | *   Presents all logs recorded from the server
2762 | *   Shows notifications received from the server
2763 | 
2764 | ## Best practices
2765 | 
2766 | ### Development workflow
2767 | 
2768 | 1.  Start Development
2769 |     *   Launch Inspector with your server
2770 |     *   Verify basic connectivity
2771 |     *   Check capability negotiation
2772 | 
2773 | 2.  Iterative testing
2774 |     *   Make server changes
2775 |     *   Rebuild the server
2776 |     *   Reconnect the Inspector
2777 |     *   Test affected features
2778 |     *   Monitor messages
2779 | 
2780 | 3.  Test edge cases
2781 |     *   Invalid inputs
2782 |     *   Missing prompt arguments
2783 |     *   Concurrent operations
2784 |     *   Verify error handling and error responses
2785 | 
2786 | ## Next steps
2787 | 
2788 | <CardGroup cols={2}>
2789 |   <Card title="Inspector Repository" icon="github" href="https://github.com/modelcontextprotocol/inspector">
2790 |     Check out the MCP Inspector source code
2791 |   </Card>
2792 | 
2793 |   <Card title="Debugging Guide" icon="bug" href="/docs/tools/debugging">
2794 |     Learn about broader debugging strategies
2795 |   </Card>
2796 | </CardGroup>
2797 | 
2798 | 
2799 | # Example Servers
2800 | 
2801 | A list of example servers and implementations
2802 | 
2803 | This page showcases various Model Context Protocol (MCP) servers that demonstrate the protocol's capabilities and versatility. These servers enable Large Language Models (LLMs) to securely access tools and data sources.
2804 | 
2805 | ## Reference implementations
2806 | 
2807 | These official reference servers demonstrate core MCP features and SDK usage:
2808 | 
2809 | ### Data and file systems
2810 | 
2811 | *   **[Filesystem](https://github.com/modelcontextprotocol/servers/tree/main/src/filesystem)** - Secure file operations with configurable access controls
2812 | *   **[PostgreSQL](https://github.com/modelcontextprotocol/servers/tree/main/src/postgres)** - Read-only database access with schema inspection capabilities
2813 | *   **[SQLite](https://github.com/modelcontextprotocol/servers/tree/main/src/sqlite)** - Database interaction and business intelligence features
2814 | *   **[Google Drive](https://github.com/modelcontextprotocol/servers/tree/main/src/gdrive)** - File access and search capabilities for Google Drive
2815 | 
2816 | ### Development tools
2817 | 
2818 | *   **[Git](https://github.com/modelcontextprotocol/servers/tree/main/src/git)** - Tools to read, search, and manipulate Git repositories
2819 | *   **[GitHub](https://github.com/modelcontextprotocol/servers/tree/main/src/github)** - Repository management, file operations, and GitHub API integration
2820 | *   **[GitLab](https://github.com/modelcontextprotocol/servers/tree/main/src/gitlab)** - GitLab API integration enabling project management
2821 | *   **[Sentry](https://github.com/modelcontextprotocol/servers/tree/main/src/sentry)** - Retrieving and analyzing issues from Sentry.io
2822 | 
2823 | ### Web and browser automation
2824 | 
2825 | *   **[Brave Search](https://github.com/modelcontextprotocol/servers/tree/main/src/brave-search)** - Web and local search using Brave's Search API
2826 | *   **[Fetch](https://github.com/modelcontextprotocol/servers/tree/main/src/fetch)** - Web content fetching and conversion optimized for LLM usage
2827 | *   **[Puppeteer](https://github.com/modelcontextprotocol/servers/tree/main/src/puppeteer)** - Browser automation and web scraping capabilities
2828 | 
2829 | ### Productivity and communication
2830 | 
2831 | *   **[Slack](https://github.com/modelcontextprotocol/servers/tree/main/src/slack)** - Channel management and messaging capabilities
2832 | *   **[Google Maps](https://github.com/modelcontextprotocol/servers/tree/main/src/google-maps)** - Location services, directions, and place details
2833 | *   **[Memory](https://github.com/modelcontextprotocol/servers/tree/main/src/memory)** - Knowledge graph-based persistent memory system
2834 | 
2835 | ### AI and specialized tools
2836 | 
2837 | *   **[EverArt](https://github.com/modelcontextprotocol/servers/tree/main/src/everart)** - AI image generation using various models
2838 | *   **[Sequential Thinking](https://github.com/modelcontextprotocol/servers/tree/main/src/sequentialthinking)** - Dynamic problem-solving through thought sequences
2839 | *   **[AWS KB Retrieval](https://github.com/modelcontextprotocol/servers/tree/main/src/aws-kb-retrieval-server)** - Retrieval from AWS Knowledge Base using Bedrock Agent Runtime
2840 | 
2841 | ## Official integrations
2842 | 
2843 | These MCP servers are maintained by companies for their platforms:
2844 | 
2845 | *   **[Axiom](https://github.com/axiomhq/mcp-server-axiom)** - Query and analyze logs, traces, and event data using natural language
2846 | *   **[Browserbase](https://github.com/browserbase/mcp-server-browserbase)** - Automate browser interactions in the cloud
2847 | *   **[Cloudflare](https://github.com/cloudflare/mcp-server-cloudflare)** - Deploy and manage resources on the Cloudflare developer platform
2848 | *   **[E2B](https://github.com/e2b-dev/mcp-server)** - Execute code in secure cloud sandboxes
2849 | *   **[Neon](https://github.com/neondatabase/mcp-server-neon)** - Interact with the Neon serverless Postgres platform
2850 | *   **[Obsidian Markdown Notes](https://github.com/calclavia/mcp-obsidian)** - Read and search through Markdown notes in Obsidian vaults
2851 | *   **[Qdrant](https://github.com/qdrant/mcp-server-qdrant/)** - Implement semantic memory using the Qdrant vector search engine
2852 | *   **[Raygun](https://github.com/MindscapeHQ/mcp-server-raygun)** - Access crash reporting and monitoring data
2853 | *   **[Search1API](https://github.com/fatwang2/search1api-mcp)** - Unified API for search, crawling, and sitemaps
2854 | *   **[Tinybird](https://github.com/tinybirdco/mcp-tinybird)** - Interface with the Tinybird serverless ClickHouse platform
2855 | 
2856 | ## Community highlights
2857 | 
2858 | A growing ecosystem of community-developed servers extends MCP's capabilities:
2859 | 
2860 | *   **[Docker](https://github.com/ckreiling/mcp-server-docker)** - Manage containers, images, volumes, and networks
2861 | *   **[Kubernetes](https://github.com/Flux159/mcp-server-kubernetes)** - Manage pods, deployments, and services
2862 | *   **[Linear](https://github.com/jerhadf/linear-mcp-server)** - Project management and issue tracking
2863 | *   **[Snowflake](https://github.com/datawiz168/mcp-snowflake-service)** - Interact with Snowflake databases
2864 | *   **[Spotify](https://github.com/varunneal/spotify-mcp)** - Control Spotify playback and manage playlists
2865 | *   **[Todoist](https://github.com/abhiz123/todoist-mcp-server)** - Task management integration
2866 | 
2867 | > **Note:** Community servers are untested and should be used at your own risk. They are not affiliated with or endorsed by Anthropic.
2868 | 
2869 | For a complete list of community servers, visit the [MCP Servers Repository](https://github.com/modelcontextprotocol/servers).
2870 | 
2871 | ## Getting started
2872 | 
2873 | ### Using reference servers
2874 | 
2875 | TypeScript-based servers can be used directly with `npx`:
2876 | 
2877 | ```bash
2878 | npx -y @modelcontextprotocol/server-memory
2879 | ```
2880 | 
2881 | Python-based servers can be used with `uvx` (recommended) or `pip`:
2882 | 
2883 | ```bash
2884 | # Using uvx
2885 | uvx mcp-server-git
2886 | 
2887 | # Using pip
2888 | pip install mcp-server-git
2889 | python -m mcp_server_git
2890 | ```
2891 | 
2892 | ### Configuring with Claude
2893 | 
2894 | To use an MCP server with Claude, add it to your configuration:
2895 | 
2896 | ```json
2897 | {
2898 |   "mcpServers": {
2899 |     "memory": {
2900 |       "command": "npx",
2901 |       "args": ["-y", "@modelcontextprotocol/server-memory"]
2902 |     },
2903 |     "filesystem": {
2904 |       "command": "npx",
2905 |       "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/files"]
2906 |     },
2907 |     "github": {
2908 |       "command": "npx",
2909 |       "args": ["-y", "@modelcontextprotocol/server-github"],
2910 |       "env": {
2911 |         "GITHUB_PERSONAL_ACCESS_TOKEN": "<YOUR_TOKEN>"
2912 |       }
2913 |     }
2914 |   }
2915 | }
2916 | ```
2917 | 
2918 | ## Additional resources
2919 | 
2920 | *   [MCP Servers Repository](https://github.com/modelcontextprotocol/servers) - Complete collection of reference implementations and community servers
2921 | *   [Awesome MCP Servers](https://github.com/punkpeye/awesome-mcp-servers) - Curated list of MCP servers
2922 | *   [MCP CLI](https://github.com/wong2/mcp-cli) - Command-line inspector for testing MCP servers
2923 | *   [MCP Get](https://mcp-get.com) - Tool for installing and managing MCP servers
2924 | *   [Supergateway](https://github.com/supercorp-ai/supergateway) - Run MCP stdio servers over SSE
2925 | 
2926 | Visit our [GitHub Discussions](https://github.com/orgs/modelcontextprotocol/discussions) to engage with the MCP community.
2927 | 
2928 | 
2929 | # Introduction
2930 | 
2931 | Get started with the Model Context Protocol (MCP)
2932 | 
2933 | MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.
2934 | 
2935 | ## Why MCP?
2936 | 
2937 | MCP helps you build agents and complex workflows on top of LLMs. LLMs frequently need to integrate with data and tools, and MCP provides:
2938 | 
2939 | *   A growing list of pre-built integrations that your LLM can directly plug into
2940 | *   The flexibility to switch between LLM providers and vendors
2941 | *   Best practices for securing your data within your infrastructure
2942 | 
2943 | ### General architecture
2944 | 
2945 | At its core, MCP follows a client-server architecture where a host application can connect to multiple servers:
2946 | 
2947 | ```mermaid
2948 | flowchart LR
2949 |     subgraph "Your Computer"
2950 |         Host["Host with MCP Client\n(Claude, IDEs, Tools)"]
2951 |         S1["MCP Server A"]
2952 |         S2["MCP Server B"]
2953 |         S3["MCP Server C"]
2954 |         Host <-->|"MCP Protocol"| S1
2955 |         Host <-->|"MCP Protocol"| S2
2956 |         Host <-->|"MCP Protocol"| S3
2957 |         S1 <--> D1[("Local\nData Source A")]
2958 |         S2 <--> D2[("Local\nData Source B")]
2959 |     end
2960 |     subgraph "Internet"
2961 |         S3 <-->|"Web APIs"| D3[("Remote\nService C")]
2962 |     end
2963 | ```
2964 | 
2965 | *   **MCP Hosts**: Programs like Claude Desktop, IDEs, or AI tools that want to access data through MCP
2966 | *   **MCP Clients**: Protocol clients that maintain 1:1 connections with servers
2967 | *   **MCP Servers**: Lightweight programs that each expose specific capabilities through the standardized Model Context Protocol
2968 | *   **Local Data Sources**: Your computer's files, databases, and services that MCP servers can securely access
2969 | *   **Remote Services**: External systems available over the internet (e.g., through APIs) that MCP servers can connect to
2970 | 
2971 | ## Get started
2972 | 
2973 | Choose the path that best fits your needs:
2974 | 
2975 | #### Quick Starts
2976 | 
2977 | <CardGroup cols={2}>
2978 |   <Card title="For Server Developers" icon="bolt" href="/quickstart/server">
2979 |     Get started building your own server to use in Claude for Desktop and other clients
2980 |   </Card>
2981 | 
2982 |   <Card title="For Client Developers" icon="bolt" href="/quickstart/client">
2983 |     Get started building your own client that can integrate with all MCP servers
2984 |   </Card>
2985 | 
2986 |   <Card title="For Claude Desktop Users" icon="bolt" href="/quickstart/user">
2987 |     Get started using pre-built servers in Claude for Desktop
2988 |   </Card>
2989 | </CardGroup>
2990 | 
2991 | #### Examples
2992 | 
2993 | <CardGroup cols={2}>
2994 |   <Card title="Example Servers" icon="grid" href="/examples">
2995 |     Check out our gallery of official MCP servers and implementations
2996 |   </Card>
2997 | 
2998 |   <Card title="Example Clients" icon="cubes" href="/clients">
2999 |     View the list of clients that support MCP integrations
3000 |   </Card>
3001 | </CardGroup>
3002 | 
3003 | ## Tutorials
3004 | 
3005 | <CardGroup cols={2}>
3006 |   <Card title="Building MCP with LLMs" icon="comments" href="/tutorials/building-mcp-with-llms">
3007 |     Learn how to use LLMs like Claude to speed up your MCP development
3008 |   </Card>
3009 | 
3010 |   <Card title="Debugging Guide" icon="bug" href="/docs/tools/debugging">
3011 |     Learn how to effectively debug MCP servers and integrations
3012 |   </Card>
3013 | 
3014 |   <Card title="MCP Inspector" icon="magnifying-glass" href="/docs/tools/inspector">
3015 |     Test and inspect your MCP servers with our interactive debugging tool
3016 |   </Card>
3017 | </CardGroup>
3018 | 
3019 | ## Explore MCP
3020 | 
3021 | Dive deeper into MCP's core concepts and capabilities:
3022 | 
3023 | <CardGroup cols={2}>
3024 |   <Card title="Core architecture" icon="sitemap" href="/docs/concepts/architecture">
3025 |     Understand how MCP connects clients, servers, and LLMs
3026 |   </Card>
3027 | 
3028 |   <Card title="Resources" icon="database" href="/docs/concepts/resources">
3029 |     Expose data and content from your servers to LLMs
3030 |   </Card>
3031 | 
3032 |   <Card title="Prompts" icon="message" href="/docs/concepts/prompts">
3033 |     Create reusable prompt templates and workflows
3034 |   </Card>
3035 | 
3036 |   <Card title="Tools" icon="wrench" href="/docs/concepts/tools">
3037 |     Enable LLMs to perform actions through your server
3038 |   </Card>
3039 | 
3040 |   <Card title="Sampling" icon="robot" href="/docs/concepts/sampling">
3041 |     Let your servers request completions from LLMs
3042 |   </Card>
3043 | 
3044 |   <Card title="Transports" icon="network-wired" href="/docs/concepts/transports">
3045 |     Learn about MCP's communication mechanism
3046 |   </Card>
3047 | </CardGroup>
3048 | 
3049 | ## Contributing
3050 | 
3051 | Want to contribute? Check out our [Contributing Guide](/development/contributing) to learn how you can help improve MCP.
3052 | 
3053 | 
3054 | # For Client Developers
3055 | 
3056 | Get started building your own client that can integrate with all MCP servers.
3057 | 
3058 | In this tutorial, you'll learn how to build an LLM-powered chatbot client that connects to MCP servers. It helps to have gone through the [Server quickstart](/quickstart/server), which guides you through the basics of building your first server.
3059 | 
3060 | <Tabs>
3061 |   <Tab title="Python">
3062 |     [You can find the complete code for this tutorial here.](https://github.com/modelcontextprotocol/quickstart-resources/tree/main/mcp-client)
3063 | 
3064 |     ## System Requirements
3065 | 
3066 |     Before starting, ensure your system meets these requirements:
3067 | 
3068 |     *   Mac or Windows computer
3069 |     *   Latest Python version installed
3070 |     *   Latest version of `uv` installed
3071 | 
3072 |     ## Setting Up Your Environment
3073 | 
3074 |     First, create a new Python project with `uv`:
3075 | 
3076 |     ```bash
3077 |     # Create project directory
3078 |     uv init mcp-client
3079 |     cd mcp-client
3080 | 
3081 |     # Create virtual environment
3082 |     uv venv
3083 | 
3084 |     # Activate virtual environment
3085 |     # On Windows:
3086 |     .venv\Scripts\activate
3087 |     # On Unix or MacOS:
3088 |     source .venv/bin/activate
3089 | 
3090 |     # Install required packages
3091 |     uv add mcp anthropic python-dotenv
3092 | 
3093 |     # Remove boilerplate files
3094 |     rm hello.py
3095 | 
3096 |     # Create our main file
3097 |     touch client.py
3098 |     ```
3099 | 
3100 |     ## Setting Up Your API Key
3101 | 
3102 |     You'll need an Anthropic API key from the [Anthropic Console](https://console.anthropic.com/settings/keys).
3103 | 
3104 |     Create a `.env` file to store it:
3105 | 
3106 |     ```bash
3107 |     # Create .env file
3108 |     touch .env
3109 |     ```
3110 | 
3111 |     Add your key to the `.env` file:
3112 | 
3113 |     ```bash
3114 |     ANTHROPIC_API_KEY=<your key here>
3115 |     ```
3116 | 
3117 |     Add `.env` to your `.gitignore`:
3118 | 
3119 |     ```bash
3120 |     echo ".env" >> .gitignore
3121 |     ```
3122 | 
3123 |     <Warning>
3124 |       Make sure you keep your `ANTHROPIC_API_KEY` secure!
3125 |     </Warning>
3126 | 
3127 |     ## Creating the Client
3128 | 
3129 |     ### Basic Client Structure
3130 | 
3131 |     First, let's set up our imports and create the basic client class:
3132 | 
3133 |     ```python
3134 |     import asyncio
3135 |     from typing import Optional
3136 |     from contextlib import AsyncExitStack
3137 | 
3138 |     from mcp import ClientSession, StdioServerParameters
3139 |     from mcp.client.stdio import stdio_client
3140 | 
3141 |     from anthropic import Anthropic
3142 |     from dotenv import load_dotenv
3143 | 
3144 |     load_dotenv()  # load environment variables from .env
3145 | 
3146 |     class MCPClient:
3147 |         def __init__(self):
3148 |             # Initialize session and client objects
3149 |             self.session: Optional[ClientSession] = None
3150 |             self.exit_stack = AsyncExitStack()
3151 |             self.anthropic = Anthropic()
3152 |         # methods will go here
3153 |     ```
3154 | 
3155 |     ### Server Connection Management
3156 | 
3157 |     Next, we'll implement the method to connect to an MCP server:
3158 | 
3159 |     ```python
3160 |     async def connect_to_server(self, server_script_path: str):
3161 |         """Connect to an MCP server
3162 |         
3163 |         Args:
3164 |             server_script_path: Path to the server script (.py or .js)
3165 |         """
3166 |         is_python = server_script_path.endswith('.py')
3167 |         is_js = server_script_path.endswith('.js')
3168 |         if not (is_python or is_js):
3169 |             raise ValueError("Server script must be a .py or .js file")
3170 |             
3171 |         command = "python" if is_python else "node"
3172 |         server_params = StdioServerParameters(
3173 |             command=command,
3174 |             args=[server_script_path],
3175 |             env=None
3176 |         )
3177 |         
3178 |         stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params))
3179 |         self.stdio, self.write = stdio_transport
3180 |         self.session = await self.exit_stack.enter_async_context(ClientSession(self.stdio, self.write))
3181 |         
3182 |         await self.session.initialize()
3183 |         
3184 |         # List available tools
3185 |         response = await self.session.list_tools()
3186 |         tools = response.tools
3187 |         print("\nConnected to server with tools:", [tool.name for tool in tools])
3188 |     ```
3189 | 
3190 |     ### Query Processing Logic
3191 | 
3192 |     Now let's add the core functionality for processing queries and handling tool calls:
3193 | 
3194 |     ```python
3195 |     async def process_query(self, query: str) -> str:
3196 |         """Process a query using Claude and available tools"""
3197 |         messages = [
3198 |             {
3199 |                 "role": "user",
3200 |                 "content": query
3201 |             }
3202 |         ]
3203 | 
3204 |         response = await self.session.list_tools()
3205 |         available_tools = [{ 
3206 |             "name": tool.name,
3207 |             "description": tool.description,
3208 |             "input_schema": tool.inputSchema
3209 |         } for tool in response.tools]
3210 | 
3211 |         # Initial Claude API call
3212 |         response = self.anthropic.messages.create(
3213 |             model="claude-3-5-sonnet-20241022",
3214 |             max_tokens=1000,
3215 |             messages=messages,
3216 |             tools=available_tools
3217 |         )
3218 | 
3219 |         # Process response and handle tool calls
3220 |         tool_results = []
3221 |         final_text = []
3222 | 
3223 |         for content in response.content:
3224 |             if content.type == 'text':
3225 |                 final_text.append(content.text)
3226 |             elif content.type == 'tool_use':
3227 |                 tool_name = content.name
3228 |                 tool_args = content.input
3229 |                 
3230 |                 # Execute tool call
3231 |                 result = await self.session.call_tool(tool_name, tool_args)
3232 |                 tool_results.append({"call": tool_name, "result": result})
3233 |                 final_text.append(f"[Calling tool {tool_name} with args {tool_args}]")
3234 | 
3235 |                 # Continue conversation with tool results
3236 |                 if hasattr(content, 'text') and content.text:
3237 |                     messages.append({
3238 |                       "role": "assistant",
3239 |                       "content": content.text
3240 |                     })
3241 |                 messages.append({
3242 |                     "role": "user", 
3243 |                     "content": result.content
3244 |                 })
3245 | 
3246 |                 # Get next response from Claude
3247 |                 response = self.anthropic.messages.create(
3248 |                     model="claude-3-5-sonnet-20241022",
3249 |                     max_tokens=1000,
3250 |                     messages=messages,
3251 |                 )
3252 | 
3253 |                 final_text.append(response.content[0].text)
3254 | 
3255 |         return "\n".join(final_text)
3256 |     ```
3257 | 
3258 |     ### Interactive Chat Interface
3259 | 
3260 |     Now we'll add the chat loop and cleanup functionality:
3261 | 
3262 |     ```python
3263 |     async def chat_loop(self):
3264 |         """Run an interactive chat loop"""
3265 |         print("\nMCP Client Started!")
3266 |         print("Type your queries or 'quit' to exit.")
3267 |         
3268 |         while True:
3269 |             try:
3270 |                 query = input("\nQuery: ").strip()
3271 |                 
3272 |                 if query.lower() == 'quit':
3273 |                     break
3274 |                     
3275 |                 response = await self.process_query(query)
3276 |                 print("\n" + response)
3277 |                     
3278 |             except Exception as e:
3279 |                 print(f"\nError: {str(e)}")
3280 | 
3281 |     async def cleanup(self):
3282 |         """Clean up resources"""
3283 |         await self.exit_stack.aclose()
3284 |     ```
3285 | 
3286 |     ### Main Entry Point
3287 | 
3288 |     Finally, we'll add the main execution logic:
3289 | 
3290 |     ```python
3291 |     async def main():
3292 |         if len(sys.argv) < 2:
3293 |             print("Usage: python client.py <path_to_server_script>")
3294 |             sys.exit(1)
3295 |             
3296 |         client = MCPClient()
3297 |         try:
3298 |             await client.connect_to_server(sys.argv[1])
3299 |             await client.chat_loop()
3300 |         finally:
3301 |             await client.cleanup()
3302 | 
3303 |     if __name__ == "__main__":
3304 |         import sys
3305 |         asyncio.run(main())
3306 |     ```
3307 | 
3308 |     You can find the complete `client.py` file [here.](https://gist.github.com/zckly/f3f28ea731e096e53b39b47bf0a2d4b1)
3309 | 
3310 |     ## Key Components Explained
3311 | 
3312 |     ### 1. Client Initialization
3313 | 
3314 |     *   The `MCPClient` class initializes with session management and API clients
3315 |     *   Uses `AsyncExitStack` for proper resource management
3316 |     *   Configures the Anthropic client for Claude interactions
3317 | 
3318 |     ### 2. Server Connection
3319 | 
3320 |     *   Supports both Python and Node.js servers
3321 |     *   Validates server script type
3322 |     *   Sets up proper communication channels
3323 |     *   Initializes the session and lists available tools
3324 | 
3325 |     ### 3. Query Processing
3326 | 
3327 |     *   Maintains conversation context
3328 |     *   Handles Claude's responses and tool calls
3329 |     *   Manages the message flow between Claude and tools
3330 |     *   Combines results into a coherent response
3331 | 
3332 |     ### 4. Interactive Interface
3333 | 
3334 |     *   Provides a simple command-line interface
3335 |     *   Handles user input and displays responses
3336 |     *   Includes basic error handling
3337 |     *   Allows graceful exit
3338 | 
3339 |     ### 5. Resource Management
3340 | 
3341 |     *   Proper cleanup of resources
3342 |     *   Error handling for connection issues
3343 |     *   Graceful shutdown procedures
3344 | 
3345 |     ## Common Customization Points
3346 | 
3347 |     1.  **Tool Handling**
3348 |         *   Modify `process_query()` to handle specific tool types
3349 |         *   Add custom error handling for tool calls (see the sketch after this list)
3350 |         *   Implement tool-specific response formatting
3351 | 
3352 |     2.  **Response Processing**
3353 |         *   Customize how tool results are formatted
3354 |         *   Add response filtering or transformation
3355 |         *   Implement custom logging
3356 | 
3357 |     3.  **User Interface**
3358 |         *   Add a GUI or web interface
3359 |         *   Implement rich console output
3360 |         *   Add command history or auto-completion
3361 | 
3362 |     ## Running the Client
3363 | 
3364 |     To run your client with any MCP server:
3365 | 
3366 |     ```bash
3367 |     uv run client.py path/to/server.py # python server
3368 |     uv run client.py path/to/build/index.js # node server
3369 |     ```
3370 | 
3371 |     <Note>
3372 |       If you're continuing the weather tutorial from the server quickstart, your command might look something like this: `python client.py .../weather/src/weather/server.py`
3373 |     </Note>
3374 | 
3375 |     The client will:
3376 | 
3377 |     1.  Connect to the specified server
3378 |     2.  List available tools
3379 |     3.  Start an interactive chat session where you can:
3380 |         *   Enter queries
3381 |         *   See tool executions
3382 |         *   Get responses from Claude
3383 | 
3384 |     Here's an example of what it should look like if connected to the weather server from the server quickstart:
3385 | 
3386 |     <Frame>
3387 |       <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/client-claude-cli-python.png" />
3388 |     </Frame>
3389 | 
3390 |     ## How It Works
3391 | 
3392 |     When you submit a query:
3393 | 
3394 |     1.  The client gets the list of available tools from the server
3395 |     2.  Your query is sent to Claude along with tool descriptions
3396 |     3.  Claude decides which tools (if any) to use
3397 |     4.  The client executes any requested tool calls through the server
3398 |     5.  Results are sent back to Claude
3399 |     6.  Claude provides a natural language response
3400 |     7.  The response is displayed to you
3401 | 
3402 |     ## Best practices
3403 | 
3404 |     1.  **Error Handling**
3405 |         *   Always wrap tool calls in try-catch blocks
3406 |         *   Provide meaningful error messages
3407 |         *   Gracefully handle connection issues
3408 | 
3409 |     2.  **Resource Management**
3410 |         *   Use `AsyncExitStack` for proper cleanup
3411 |         *   Close connections when done
3412 |         *   Handle server disconnections
3413 | 
3414 |     3.  **Security**
3415 |         *   Store API keys securely in `.env` (see the sketch after this list)
3416 |         *   Validate server responses
3417 |         *   Be cautious with tool permissions
3418 | 
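    As a small, hedged illustration of the security point, the snippet below fails fast when the key is missing. It reuses the `python-dotenv` setup from earlier in this tutorial; the error message is illustrative.

    ```python
    import os

    from dotenv import load_dotenv

    load_dotenv()  # read .env, as in the client setup above
    if not os.environ.get("ANTHROPIC_API_KEY"):
        raise RuntimeError("ANTHROPIC_API_KEY is not set; add it to your .env file")
    ```
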
3419 |     ## Troubleshooting
3420 | 
3421 |     ### Server Path Issues
3422 | 
3423 |     *   Double-check the path to your server script is correct
3424 |     *   Use the absolute path if the relative path isn't working
3425 |     *   For Windows users, make sure to use forward slashes (/) or escaped backslashes (\\) in the path
3426 |     *   Verify the server file has the correct extension (.py for Python or .js for Node.js)
3427 | 
3428 |     Example of correct path usage:
3429 | 
3430 |     ```bash
3431 |     # Relative path
3432 |     uv run client.py ./server/weather.py
3433 | 
3434 |     # Absolute path
3435 |     uv run client.py /Users/username/projects/mcp-server/weather.py
3436 | 
3437 |     # Windows path (either format works)
3438 |     uv run client.py C:/projects/mcp-server/weather.py
3439 |     uv run client.py C:\\projects\\mcp-server\\weather.py
3440 |     ```
3441 | 
3442 |     ### Response Timing
3443 | 
3444 |     *   The first response might take up to 30 seconds to return
3445 |     *   This is normal and happens while:
3446 |         *   The server initializes
3447 |         *   Claude processes the query
3448 |         *   Tools are being executed
3449 |     *   Subsequent responses are typically faster
3450 |     *   Don't interrupt the process during this initial waiting period
3451 | 
3452 |     ### Common Error Messages
3453 | 
3454 |     If you see:
3455 | 
3456 |     *   `FileNotFoundError`: Check your server path
3457 |     *   `Connection refused`: Ensure the server is running and the path is correct
3458 |     *   `Tool execution failed`: Verify the tool's required environment variables are set
3459 |     *   `Timeout error`: Consider increasing the timeout in your client configuration (one option is sketched below)
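
    For the timeout case specifically, one hedged option is to bound each tool call yourself with `asyncio.wait_for`. The 60-second figure below is an arbitrary example, not an SDK default.

    ```python
    import asyncio

    # Inside process_query(), wrapping the tool call:
    try:
        result = await asyncio.wait_for(
            self.session.call_tool(tool_name, tool_args),
            timeout=60.0,  # raise this bound if your tools legitimately run long
        )
    except asyncio.TimeoutError:
        final_text.append(f"[Tool {tool_name} timed out]")
    ```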
3460 |   </Tab>
3461 | </Tabs>
3462 | 
3463 | ## Next steps
3464 | 
3465 | <CardGroup cols={2}>
3466 |   <Card title="Example servers" icon="grid" href="/examples">
3467 |     Check out our gallery of official MCP servers and implementations
3468 |   </Card>
3469 | 
3470 |   <Card title="Clients" icon="cubes" href="/clients">
3471 |     View the list of clients that support MCP integrations
3472 |   </Card>
3473 | 
3474 |   <Card title="Building MCP with LLMs" icon="comments" href="/building-mcp-with-llms">
3475 |     Learn how to use LLMs like Claude to speed up your MCP development
3476 |   </Card>
3477 | 
3478 |   <Card title="Core architecture" icon="sitemap" href="/docs/concepts/architecture">
3479 |     Understand how MCP connects clients, servers, and LLMs
3480 |   </Card>
3481 | </CardGroup>
3482 | 
3483 | 
3484 | # For Server Developers
3485 | 
3486 | Get started building your own server to use in Claude for Desktop and other clients.
3487 | 
3488 | In this tutorial, we'll build a simple MCP weather server and connect it to a host, Claude for Desktop. We'll start with a basic setup, and then progress to more complex use cases.
3489 | 
3490 | ### What we'll be building
3491 | 
3492 | Many LLMs (including Claude) do not currently have the ability to fetch weather forecasts and severe weather alerts. Let's use MCP to solve that!
3493 | 
3494 | We'll build a server that exposes two tools: `get-alerts` and `get-forecast`. Then we'll connect the server to an MCP host (in this case, Claude for Desktop):
3495 | 
3496 | <Frame>
3497 |   <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/weather-alerts.png" />
3498 | </Frame>
3499 | 
3500 | <Frame>
3501 |   <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/current-weather.png" />
3502 | </Frame>
3503 | 
3504 | <Note>
3505 |   Servers can connect to any client. We've chosen Claude for Desktop here for simplicity, but we also have guides on [building your own client](/quickstart/client) as well as a [list of other clients here](/clients).
3506 | </Note>
3507 | 
3508 | <Accordion title="Why Claude for Desktop and not Claude.ai?">
3509 |   Because servers are locally run, MCP currently only supports desktop hosts. Remote hosts are in active development.
3510 | </Accordion>
3511 | 
3512 | ### Core MCP Concepts
3513 | 
3514 | MCP servers can provide three main types of capabilities:
3515 | 
3516 | 1.  **Resources**: File-like data that can be read by clients (like API responses or file contents)
3517 | 2.  **Tools**: Functions that can be called by the LLM (with user approval)
3518 | 3.  **Prompts**: Pre-written templates that help users accomplish specific tasks
3519 | 
3520 | This tutorial will primarily focus on tools; the short sketch below previews how all three capability types can look in code.
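
As a rough, hedged preview (the decorators follow the Python SDK's `FastMCP` helper used in the Python tab below; the URI scheme and function bodies are illustrative and may differ by SDK version), each capability type maps to a decorator:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo")

@mcp.resource("notes://readme")   # Resource: file-like data clients can read
def readme() -> str:
    return "Demo notes"

@mcp.tool()                       # Tool: a function the LLM can call (with user approval)
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

@mcp.prompt()                     # Prompt: a reusable template that helps users with a task
def summarize(text: str) -> str:
    return f"Please summarize the following:\n\n{text}"
```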
3521 | 
3522 | <Tabs>
3523 |   <Tab title="Python">
3524 |     Let's get started with building our weather server! [You can find the complete code for what we'll be building here.](https://github.com/modelcontextprotocol/quickstart-resources/tree/main/weather-server-python)
3525 | 
3526 |     ### Prerequisite knowledge
3527 | 
3528 |     This quickstart assumes you have familiarity with:
3529 | 
3530 |     *   Python
3531 |     *   LLMs like Claude
3532 | 
3533 |     ### System requirements
3534 | 
3535 |     *   Python 3.10 or higher installed.
3536 |     *   You must use the Python MCP SDK 1.2.0 or higher.
3537 | 
3538 |     ### Set up your environment
3539 | 
3540 |     First, let's install `uv` and set up our Python project and environment:
3541 | 
3542 |     <CodeGroup>
3543 |       ```bash MacOS/Linux
3544 |       curl -LsSf https://astral.sh/uv/install.sh | sh
3545 |       ```
3546 | 
3547 |       ```powershell Windows
3548 |       powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
3549 |       ```
3550 |     </CodeGroup>
3551 | 
3552 |     Make sure to restart your terminal afterwards to ensure that the `uv` command gets picked up.
3553 | 
3554 |     Now, let's create and set up our project:
3555 | 
3556 |     <CodeGroup>
3557 |       ```bash MacOS/Linux
3558 |       # Create a new directory for our project
3559 |       uv init weather
3560 |       cd weather
3561 | 
3562 |       # Create virtual environment and activate it
3563 |       uv venv
3564 |       source .venv/bin/activate
3565 | 
3566 |       # Install dependencies
3567 |       uv add "mcp[cli]" httpx
3568 | 
3569 |       # Create our server file
3570 |       touch weather.py
3571 |       ```
3572 | 
3573 |       ```powershell Windows
3574 |       # Create a new directory for our project
3575 |       uv init weather
3576 |       cd weather
3577 | 
3578 |       # Create virtual environment and activate it
3579 |       uv venv
3580 |       .venv\Scripts\activate
3581 | 
3582 |       # Install dependencies
3583 |       uv add mcp[cli] httpx
3584 | 
3585 |       # Create our server file
3586 |       new-item weather.py
3587 |       ```
3588 |     </CodeGroup>
3589 | 
3590 |     Now let's dive into building your server.
3591 | 
3592 |     ## Building your server
3593 | 
3594 |     ### Importing packages and setting up the instance
3595 | 
3596 |     Add these to the top of your `weather.py`:
3597 | 
3598 |     ```python
3599 |     from typing import Any
3600 |     import httpx
3601 |     from mcp.server.fastmcp import FastMCP
3602 | 
3603 |     # Initialize FastMCP server
3604 |     mcp = FastMCP("weather")
3605 | 
3606 |     # Constants
3607 |     NWS_API_BASE = "https://api.weather.gov"
3608 |     USER_AGENT = "weather-app/1.0"
3609 |     ```
3610 | 
3611 |     The FastMCP class uses Python type hints and docstrings to automatically generate tool definitions, making it easy to create and maintain MCP tools.
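
    For instance, a tiny hypothetical tool is enough to see this in action: the type hints and docstring below are all FastMCP needs to publish a tool named `echo`.

    ```python
    @mcp.tool()
    async def echo(message: str) -> str:
        """Echo a message back to the caller (illustrative only)."""
        return message
    ```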
3612 | 
3613 |     ### Helper functions
3614 | 
3615 |     Next, let's add our helper functions for querying and formatting the data from the National Weather Service API:
3616 | 
3617 |     ```python
3618 |     async def make_nws_request(url: str) -> dict[str, Any] | None:
3619 |         """Make a request to the NWS API with proper error handling."""
3620 |         headers = {
3621 |             "User-Agent": USER_AGENT,
3622 |             "Accept": "application/geo+json"
3623 |         }
3624 |         async with httpx.AsyncClient() as client:
3625 |             try:
3626 |                 response = await client.get(url, headers=headers, timeout=30.0)
3627 |                 response.raise_for_status()
3628 |                 return response.json()
3629 |             except Exception:
3630 |                 return None
3631 | 
3632 |     def format_alert(feature: dict) -> str:
3633 |         """Format an alert feature into a readable string."""
3634 |         props = feature["properties"]
3635 |         return f"""
3636 |     Event: {props.get('event', 'Unknown')}
3637 |     Area: {props.get('areaDesc', 'Unknown')}
3638 |     Severity: {props.get('severity', 'Unknown')}
3639 |     Description: {props.get('description', 'No description available')}
3640 |     Instructions: {props.get('instruction', 'No specific instructions provided')}
3641 |     """
3642 |     ```
3643 | 
3644 |     ### Implementing tool execution
3645 | 
3646 |     The tool execution handler is responsible for actually executing the logic of each tool. Let's add it:
3647 | 
3648 |     ```python
3649 |     @mcp.tool()
3650 |     async def get_alerts(state: str) -> str:
3651 |         """Get weather alerts for a US state.
3652 | 
3653 |         Args:
3654 |             state: Two-letter US state code (e.g. CA, NY)
3655 |         """
3656 |         url = f"{NWS_API_BASE}/alerts/active/area/{state}"
3657 |         data = await make_nws_request(url)
3658 | 
3659 |         if not data or "features" not in data:
3660 |             return "Unable to fetch alerts or no alerts found."
3661 | 
3662 |         if not data["features"]:
3663 |             return "No active alerts for this state."
3664 | 
3665 |         alerts = [format_alert(feature) for feature in data["features"]]
3666 |         return "\n---\n".join(alerts)
3667 | 
3668 |     @mcp.tool()
3669 |     async def get_forecast(latitude: float, longitude: float) -> str:
3670 |         """Get weather forecast for a location.
3671 | 
3672 |         Args:
3673 |             latitude: Latitude of the location
3674 |             longitude: Longitude of the location
3675 |         """
3676 |         # First get the forecast grid endpoint
3677 |         points_url = f"{NWS_API_BASE}/points/{latitude},{longitude}"
3678 |         points_data = await make_nws_request(points_url)
3679 | 
3680 |         if not points_data:
3681 |             return "Unable to fetch forecast data for this location."
3682 | 
3683 |         # Get the forecast URL from the points response
3684 |         forecast_url = points_data["properties"]["forecast"]
3685 |         forecast_data = await make_nws_request(forecast_url)
3686 | 
3687 |         if not forecast_data:
3688 |             return "Unable to fetch detailed forecast."
3689 | 
3690 |         # Format the periods into a readable forecast
3691 |         periods = forecast_data["properties"]["periods"]
3692 |         forecasts = []
3693 |         for period in periods[:5]:  # Only show next 5 periods
3694 |             forecast = f"""
3695 |     {period['name']}:
3696 |     Temperature: {period['temperature']}°{period['temperatureUnit']}
3697 |     Wind: {period['windSpeed']} {period['windDirection']}
3698 |     Forecast: {period['detailedForecast']}
3699 |     """
3700 |             forecasts.append(forecast)
3701 | 
3702 |         return "\n---\n".join(forecasts)
3703 |     ```
3704 | 
3705 |     ### Running the server
3706 | 
3707 |     Finally, let's initialize and run the server:
3708 | 
3709 |     ```python
3710 |     if __name__ == "__main__":
3711 |         # Initialize and run the server
3712 |         mcp.run(transport='stdio')
3713 |     ```
3714 | 
3715 |     Your server is complete! Run `uv run weather.py` to confirm that everything's working.
3716 | 
3717 |     Let's now test your server from an existing MCP host, Claude for Desktop.
3718 | 
3719 |     ## Testing your server with Claude for Desktop
3720 | 
3721 |     <Note>
3722 |       Claude for Desktop is not yet available on Linux. Linux users can proceed to the [Building a client](/quickstart/client) tutorial to build an MCP client that connects to the server we just built.
3723 |     </Note>
3724 | 
3725 |     First, make sure you have Claude for Desktop installed. [You can install the latest version
3726 |     here.](https://claude.ai/download) If you already have Claude for Desktop, **make sure it's updated to the latest version.**
3727 | 
3728 |     We'll need to configure Claude for Desktop for whichever MCP servers you want to use. To do this, open your Claude for Desktop App configuration at `~/Library/Application Support/Claude/claude_desktop_config.json` in a text editor. Make sure to create the file if it doesn't exist.
3729 | 
3730 |     For example, if you have [VS Code](https://code.visualstudio.com/) installed:
3731 | 
3732 |     <Tabs>
3733 |       <Tab title="MacOS/Linux">
3734 |         ```bash
3735 |         code ~/Library/Application\ Support/Claude/claude_desktop_config.json
3736 |         ```
3737 |       </Tab>
3738 | 
3739 |       <Tab title="Windows">
3740 |         ```powershell
3741 |         code $env:AppData\Claude\claude_desktop_config.json
3742 |         ```
3743 |       </Tab>
3744 |     </Tabs>
3745 | 
3746 |     You'll then add your servers in the `mcpServers` key. The MCP UI elements will only show up in Claude for Desktop if at least one server is properly configured.
3747 | 
3748 |     In this case, we'll add our single weather server like so:
3749 | 
3750 |     <Tabs>
3751 |       <Tab title="MacOS/Linux">
3752 |         ```json Python
3753 |         {
3754 |             "mcpServers": {
3755 |                 "weather": {
3756 |                     "command": "uv",
3757 |                     "args": [
3758 |                         "--directory",
3759 |                         "/ABSOLUTE/PATH/TO/PARENT/FOLDER/weather",
3760 |                         "run",
3761 |                         "weather.py"
3762 |                     ]
3763 |                 }
3764 |             }
3765 |         }
3766 |         ```
3767 |       </Tab>
3768 | 
3769 |       <Tab title="Windows">
3770 |         ```json Python
3771 |         {
3772 |             "mcpServers": {
3773 |                 "weather": {
3774 |                     "command": "uv",
3775 |                     "args": [
3776 |                         "--directory",
3777 |                         "C:\\ABSOLUTE\\PATH\\TO\\PARENT\\FOLDER\\weather",
3778 |                         "run",
3779 |                         "weather.py"
3780 |                     ]
3781 |                 }
3782 |             }
3783 |         }
3784 |         ```
3785 |       </Tab>
3786 |     </Tabs>
3787 | 
3788 |     <Warning>
3789 |       You may need to put the full path to the `uv` executable in the `command` field. You can get this by running `which uv` on MacOS/Linux or `where uv` on Windows.
3790 |     </Warning>
3791 | 
3792 |     <Note>
3793 |       Make sure you pass in the absolute path to your server.
3794 |     </Note>
3795 | 
3796 |     This tells Claude for Desktop:
3797 | 
3798 |     1.  There's an MCP server named "weather"
3799 |     2.  To launch it by running `uv --directory /ABSOLUTE/PATH/TO/PARENT/FOLDER/weather run weather.py`
3800 | 
3801 |     Save the file, and restart **Claude for Desktop**.
3802 |   </Tab>
3803 | 
3804 |   <Tab title="Node">
3805 |     Let's get started with building our weather server! [You can find the complete code for what we'll be building here.](https://github.com/modelcontextprotocol/quickstart-resources/tree/main/weather-server-typescript)
3806 | 
3807 |     ### Prerequisite knowledge
3808 | 
3809 |     This quickstart assumes you have familiarity with:
3810 | 
3811 |     *   TypeScript
3812 |     *   LLMs like Claude
3813 | 
3814 |     ### System requirements
3815 | 
3816 |     For TypeScript, make sure you have the latest version of Node installed.
3817 | 
3818 |     ### Set up your environment
3819 | 
3820 |     First, let's install Node.js and npm if you haven't already. You can download them from [nodejs.org](https://nodejs.org/).
3821 |     Verify your Node.js installation:
3822 | 
3823 |     ```bash
3824 |     node --version
3825 |     npm --version
3826 |     ```
3827 | 
3828 |     For this tutorial, you'll need Node.js version 16 or higher.
3829 | 
3830 |     Now, let's create and set up our project:
3831 | 
3832 |     <CodeGroup>
3833 |       ```bash MacOS/Linux
3834 |       # Create a new directory for our project
3835 |       mkdir weather
3836 |       cd weather
3837 | 
3838 |       # Initialize a new npm project
3839 |       npm init -y
3840 | 
3841 |       # Install dependencies
3842 |       npm install @modelcontextprotocol/sdk zod
3843 |       npm install -D @types/node typescript
3844 | 
3845 |       # Create our files
3846 |       mkdir src
3847 |       touch src/index.ts
3848 |       ```
3849 | 
3850 |       ```powershell Windows
3851 |       # Create a new directory for our project
3852 |       md weather
3853 |       cd weather
3854 | 
3855 |       # Initialize a new npm project
3856 |       npm init -y
3857 | 
3858 |       # Install dependencies
3859 |       npm install @modelcontextprotocol/sdk zod
3860 |       npm install -D @types/node typescript
3861 | 
3862 |       # Create our files
3863 |       md src
3864 |       new-item src\index.ts
3865 |       ```
3866 |     </CodeGroup>
3867 | 
3868 |     Update your package.json to add type: "module" and a build script:
3869 | 
3870 |     ```json package.json
3871 |     {
3872 |       "type": "module",
3873 |       "bin": {
3874 |         "weather": "./build/index.js"
3875 |       },
3876 |       "scripts": {
3877 |         "build": "tsc && node -e \"require('fs').chmodSync('build/index.js', '755')\""
3878 |       },
3879 |       "files": [
3880 |         "build"
3881 |       ]
3882 |     }
3883 |     ```
3884 | 
3885 |     Create a `tsconfig.json` in the root of your project:
3886 | 
3887 |     ```json tsconfig.json
3888 |     {
3889 |       "compilerOptions": {
3890 |         "target": "ES2022",
3891 |         "module": "Node16",
3892 |         "moduleResolution": "Node16",
3893 |         "outDir": "./build",
3894 |         "rootDir": "./src",
3895 |         "strict": true,
3896 |         "esModuleInterop": true,
3897 |         "skipLibCheck": true,
3898 |         "forceConsistentCasingInFileNames": true
3899 |       },
3900 |       "include": ["src/**/*"],
3901 |       "exclude": ["node_modules"]
3902 |     }
3903 |     ```
3904 | 
3905 |     Now let's dive into building your server.
3906 | 
3907 |     ## Building your server
3908 | 
3909 |     ### Importing packages
3910 | 
3911 |     Add these to the top of your `src/index.ts`:
3912 | 
3913 |     ```typescript
3914 |     import { Server } from "@modelcontextprotocol/sdk/server/index.js";
3915 |     import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
3916 |     import {
3917 |       CallToolRequestSchema,
3918 |       ListToolsRequestSchema,
3919 |     } from "@modelcontextprotocol/sdk/types.js";
3920 |     import { z } from "zod";
3921 |     ```
3922 | 
3923 |     ### Setting up the instance
3924 | 
3925 |     Then initialize the NWS API base URL, validation schemas, and server instance:
3926 | 
3927 |     ```typescript
3928 |     const NWS_API_BASE = "https://api.weather.gov";
3929 |     const USER_AGENT = "weather-app/1.0";
3930 | 
3931 |     // Define Zod schemas for validation
3932 |     const AlertsArgumentsSchema = z.object({
3933 |       state: z.string().length(2),
3934 |     });
3935 | 
3936 |     const ForecastArgumentsSchema = z.object({
3937 |       latitude: z.number().min(-90).max(90),
3938 |       longitude: z.number().min(-180).max(180),
3939 |     });
3940 | 
3941 |     // Create server instance
3942 |     const server = new Server(
3943 |       {
3944 |         name: "weather",
3945 |         version: "1.0.0",
3946 |       },
3947 |       {
3948 |         capabilities: {
3949 |           tools: {},
3950 |         },
3951 |       }
3952 |     );
3953 |     ```
3954 | 
3955 |     ### Implementing tool listing
3956 | 
3957 |     We need to tell clients what tools are available. This `server.setRequestHandler` call will register this list for us:
3958 | 
3959 |     ```typescript
3960 |     // List available tools
3961 |     server.setRequestHandler(ListToolsRequestSchema, async () => {
3962 |       return {
3963 |         tools: [
3964 |           {
3965 |             name: "get-alerts",
3966 |             description: "Get weather alerts for a state",
3967 |             inputSchema: {
3968 |               type: "object",
3969 |               properties: {
3970 |                 state: {
3971 |                   type: "string",
3972 |                   description: "Two-letter state code (e.g. CA, NY)",
3973 |                 },
3974 |               },
3975 |               required: ["state"],
3976 |             },
3977 |           },
3978 |           {
3979 |             name: "get-forecast",
3980 |             description: "Get weather forecast for a location",
3981 |             inputSchema: {
3982 |               type: "object",
3983 |               properties: {
3984 |                 latitude: {
3985 |                   type: "number",
3986 |                   description: "Latitude of the location",
3987 |                 },
3988 |                 longitude: {
3989 |                   type: "number",
3990 |                   description: "Longitude of the location",
3991 |                 },
3992 |               },
3993 |               required: ["latitude", "longitude"],
3994 |             },
3995 |           },
3996 |         ],
3997 |       };
3998 |     });
3999 |     ```
4000 | 
4001 |     This defines our two tools: `get-alerts` and `get-forecast`.
4002 | 
4003 |     ### Helper functions
4004 | 
4005 |     Next, let's add our helper functions for querying and formatting the data from the National Weather Service API:
4006 | 
4007 |     ```typescript
4008 |     // Helper function for making NWS API requests
4009 |     async function makeNWSRequest<T>(url: string): Promise<T | null> {
4010 |       const headers = {
4011 |         "User-Agent": USER_AGENT,
4012 |         Accept: "application/geo+json",
4013 |       };
4014 | 
4015 |       try {
4016 |         const response = await fetch(url, { headers });
4017 |         if (!response.ok) {
4018 |           throw new Error(`HTTP error! status: ${response.status}`);
4019 |         }
4020 |         return (await response.json()) as T;
4021 |       } catch (error) {
4022 |         console.error("Error making NWS request:", error);
4023 |         return null;
4024 |       }
4025 |     }
4026 | 
4027 |     interface AlertFeature {
4028 |       properties: {
4029 |         event?: string;
4030 |         areaDesc?: string;
4031 |         severity?: string;
4032 |         status?: string;
4033 |         headline?: string;
4034 |       };
4035 |     }
4036 | 
4037 |     // Format alert data
4038 |     function formatAlert(feature: AlertFeature): string {
4039 |       const props = feature.properties;
4040 |       return [
4041 |         `Event: ${props.event || "Unknown"}`,
4042 |         `Area: ${props.areaDesc || "Unknown"}`,
4043 |         `Severity: ${props.severity || "Unknown"}`,
4044 |         `Status: ${props.status || "Unknown"}`,
4045 |         `Headline: ${props.headline || "No headline"}`,
4046 |         "---",
4047 |       ].join("\n");
4048 |     }
4049 | 
4050 |     interface ForecastPeriod {
4051 |       name?: string;
4052 |       temperature?: number;
4053 |       temperatureUnit?: string;
4054 |       windSpeed?: string;
4055 |       windDirection?: string;
4056 |       shortForecast?: string;
4057 |     }
4058 | 
4059 |     interface AlertsResponse {
4060 |       features: AlertFeature[];
4061 |     }
4062 | 
4063 |     interface PointsResponse {
4064 |       properties: {
4065 |         forecast?: string;
4066 |       };
4067 |     }
4068 | 
4069 |     interface ForecastResponse {
4070 |       properties: {
4071 |         periods: ForecastPeriod[];
4072 |       };
4073 |     }
4074 |     ```
4075 | 
4076 |     ### Implementing tool execution
4077 | 
4078 |     The tool execution handler is responsible for actually executing the logic of each tool. Let's add it:
4079 | 
4080 |     ```typescript
4081 |     // Handle tool execution
4082 |     server.setRequestHandler(CallToolRequestSchema, async (request) => {
4083 |       const { name, arguments: args } = request.params;
4084 | 
4085 |       try {
4086 |         if (name === "get-alerts") {
4087 |           const { state } = AlertsArgumentsSchema.parse(args);
4088 |           const stateCode = state.toUpperCase();
4089 | 
4090 |           const alertsUrl = `${NWS_API_BASE}/alerts?area=${stateCode}`;
4091 |           const alertsData = await makeNWSRequest<AlertsResponse>(alertsUrl);
4092 | 
4093 |           if (!alertsData) {
4094 |             return {
4095 |               content: [
4096 |                 {
4097 |                   type: "text",
4098 |                   text: "Failed to retrieve alerts data",
4099 |                 },
4100 |               ],
4101 |             };
4102 |           }
4103 | 
4104 |           const features = alertsData.features || [];
4105 |           if (features.length === 0) {
4106 |             return {
4107 |               content: [
4108 |                 {
4109 |                   type: "text",
4110 |                   text: `No active alerts for ${stateCode}`,
4111 |                 },
4112 |               ],
4113 |             };
4114 |           }
4115 | 
4116 |           const formattedAlerts = features.map(formatAlert).slice(0, 20); // only take the first 20 alerts
4117 |           const alertsText = `Active alerts for ${stateCode}:\n\n${formattedAlerts.join(
4118 |             "\n"
4119 |           )}`;
4120 | 
4121 |           return {
4122 |             content: [
4123 |               {
4124 |                 type: "text",
4125 |                 text: alertsText,
4126 |               },
4127 |             ],
4128 |           };
4129 |         } else if (name === "get-forecast") {
4130 |           const { latitude, longitude } = ForecastArgumentsSchema.parse(args);
4131 | 
4132 |           // Get grid point data
4133 |           const pointsUrl = `${NWS_API_BASE}/points/${latitude.toFixed(
4134 |             4
4135 |           )},${longitude.toFixed(4)}`;
4136 |           const pointsData = await makeNWSRequest<PointsResponse>(pointsUrl);
4137 | 
4138 |           if (!pointsData) {
4139 |             return {
4140 |               content: [
4141 |                 {
4142 |                   type: "text",
4143 |                   text: `Failed to retrieve grid point data for coordinates: ${latitude}, ${longitude}. This location may not be supported by the NWS API (only US locations are supported).`,
4144 |                 },
4145 |               ],
4146 |             };
4147 |           }
4148 | 
4149 |           const forecastUrl = pointsData.properties?.forecast;
4150 |           if (!forecastUrl) {
4151 |             return {
4152 |               content: [
4153 |                 {
4154 |                   type: "text",
4155 |                   text: "Failed to get forecast URL from grid point data",
4156 |                 },
4157 |               ],
4158 |             };
4159 |           }
4160 | 
4161 |           // Get forecast data
4162 |           const forecastData = await makeNWSRequest<ForecastResponse>(forecastUrl);
4163 |           if (!forecastData) {
4164 |             return {
4165 |               content: [
4166 |                 {
4167 |                   type: "text",
4168 |                   text: "Failed to retrieve forecast data",
4169 |                 },
4170 |               ],
4171 |             };
4172 |           }
4173 | 
4174 |           const periods = forecastData.properties?.periods || [];
4175 |           if (periods.length === 0) {
4176 |             return {
4177 |               content: [
4178 |                 {
4179 |                   type: "text",
4180 |                   text: "No forecast periods available",
4181 |                 },
4182 |               ],
4183 |             };
4184 |           }
4185 | 
4186 |           // Format forecast periods
4187 |           const formattedForecast = periods.map((period: ForecastPeriod) =>
4188 |             [
4189 |               `${period.name || "Unknown"}:`,
4190 |               `Temperature: ${period.temperature || "Unknown"}°${
4191 |                 period.temperatureUnit || "F"
4192 |               }`,
4193 |               `Wind: ${period.windSpeed || "Unknown"} ${
4194 |                 period.windDirection || ""
4195 |               }`,
4196 |               `${period.shortForecast || "No forecast available"}`,
4197 |               "---",
4198 |             ].join("\n")
4199 |           );
4200 | 
4201 |           const forecastText = `Forecast for ${latitude}, ${longitude}:\n\n${formattedForecast.join(
4202 |             "\n"
4203 |           )}`;
4204 | 
4205 |           return {
4206 |             content: [
4207 |               {
4208 |                 type: "text",
4209 |                 text: forecastText,
4210 |               },
4211 |             ],
4212 |           };
4213 |         } else {
4214 |           throw new Error(`Unknown tool: ${name}`);
4215 |         }
4216 |       } catch (error) {
4217 |         if (error instanceof z.ZodError) {
4218 |           throw new Error(
4219 |             `Invalid arguments: ${error.errors
4220 |               .map((e) => `${e.path.join(".")}: ${e.message}`)
4221 |               .join(", ")}`
4222 |           );
4223 |         }
4224 |         throw error;
4225 |       }
4226 |     });
4227 |     ```
4228 | 
4229 |     ### Running the server
4230 | 
4231 |     Finally, implement the main function to run the server:
4232 | 
4233 |     ```typescript
4234 |     // Start the server
4235 |     async function main() {
4236 |       const transport = new StdioServerTransport();
4237 |       await server.connect(transport);
4238 |       console.error("Weather MCP Server running on stdio");
4239 |     }
4240 | 
4241 |     main().catch((error) => {
4242 |       console.error("Fatal error in main():", error);
4243 |       process.exit(1);
4244 |     });
4245 |     ```
4246 | 
4247 |     Make sure to run `npm run build` to build your server! This is a very important step in getting your server to connect.
4248 | 
4249 |     Let's now test your server from an existing MCP host, Claude for Desktop.
4250 | 
4251 |     ## Testing your server with Claude for Desktop
4252 | 
4253 |     <Note>
4254 |       Claude for Desktop is not yet available on Linux. Linux users can proceed to the [Building a client](/quickstart/client) tutorial to build an MCP client that connects to the server we just built.
4255 |     </Note>
4256 | 
4257 |     First, make sure you have Claude for Desktop installed. [You can install the latest version
4258 |     here.](https://claude.ai/download) If you already have Claude for Desktop, **make sure it's updated to the latest version.**
4259 | 
4260 |     We'll need to configure Claude for Desktop for whichever MCP servers you want to use. To do this, open your Claude for Desktop App configuration at `~/Library/Application Support/Claude/claude_desktop_config.json` in a text editor. Make sure to create the file if it doesn't exist.
4261 | 
4262 |     For example, if you have [VS Code](https://code.visualstudio.com/) installed:
4263 | 
4264 |     <Tabs>
4265 |       <Tab title="MacOS/Linux">
4266 |         ```bash
4267 |         code ~/Library/Application\ Support/Claude/claude_desktop_config.json
4268 |         ```
4269 |       </Tab>
4270 | 
4271 |       <Tab title="Windows">
4272 |         ```powershell
4273 |         code $env:AppData\Claude\claude_desktop_config.json
4274 |         ```
4275 |       </Tab>
4276 |     </Tabs>
4277 | 
4278 |     You'll then add your servers in the `mcpServers` key. The MCP UI elements will only show up in Claude for Desktop if at least one server is properly configured.
4279 | 
4280 |     In this case, we'll add our single weather server like so:
4281 | 
4282 |     <Tabs>
4283 |       <Tab title="MacOS/Linux">
4284 |         <CodeGroup>
4285 |           ```json Node
4286 |           {
4287 |               "mcpServers": {
4288 |                   "weather": {
4289 |                       "command": "node",
4290 |                       "args": [
4291 |                           "/ABSOLUTE/PATH/TO/PARENT/FOLDER/weather/build/index.js"
4292 |                       ]
4293 |                   }
4294 |               }
4295 |           }
4296 |           ```
4297 |         </CodeGroup>
4298 |       </Tab>
4299 | 
4300 |       <Tab title="Windows">
4301 |         <CodeGroup>
4302 |           ```json Node
4303 |           {
4304 |               "mcpServers": {
4305 |                   "weather": {
4306 |                       "command": "node",
4307 |                       "args": [
4308 |                           "C:\\PATH\\TO\\PARENT\\FOLDER\\weather\\build\\index.js"
4309 |                       ]
4310 |                   }
4311 |               }
4312 |           }
4313 |           ```
4314 |         </CodeGroup>
4315 |       </Tab>
4316 |     </Tabs>
4317 | 
4318 |     This tells Claude for Desktop:
4319 | 
4320 |     1.  There's an MCP server named "weather"
4321 |     2.  Launch it by running `node /ABSOLUTE/PATH/TO/PARENT/FOLDER/weather/build/index.js`
4322 | 
4323 |     Save the file, and restart **Claude for Desktop**.
4324 |   </Tab>
4325 | </Tabs>
4326 | 
4327 | ### Test with commands
4328 | 
4329 | Let's make sure Claude for Desktop is picking up the two tools we've exposed in our `weather` server. You can do this by looking for the hammer <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/claude-desktop-mcp-hammer-icon.svg" style={{display: 'inline', margin: 0, height: '1.3em'}} /> icon:
4330 | 
4331 | <Frame>
4332 |   <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/visual-indicator-mcp-tools.png" />
4333 | </Frame>
4334 | 
4335 | After clicking on the hammer icon, you should see two tools listed:
4336 | 
4337 | <Frame>
4338 |   <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/available-mcp-tools.png" />
4339 | </Frame>
4340 | 
4341 | If your server isn't being picked up by Claude for Desktop, proceed to the [Troubleshooting](#troubleshooting) section for debugging tips.
4342 | 
4343 | If the hammer icon has shown up, you can now test your server by running the following commands in Claude for Desktop:
4344 | 
4345 | *   What's the weather in Sacramento?
4346 | *   What are the active weather alerts in Texas?
4347 | 
4348 | <Frame>
4349 |   <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/current-weather.png" />
4350 | </Frame>
4351 | 
4352 | <Frame>
4353 |   <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/weather-alerts.png" />
4354 | </Frame>
4355 | 
4356 | <Note>
4357 |   Since this is the US National Weather Service, the queries will only work for US locations.
4358 | </Note>
4359 | 
4360 | ## What's happening under the hood
4361 | 
4362 | When you ask a question:
4363 | 
4364 | 1.  The client sends your question to Claude
4365 | 2.  Claude analyzes the available tools and decides which one(s) to use
4366 | 3.  The client executes the chosen tool(s) through the MCP server
4367 | 4.  The results are sent back to Claude
4368 | 5.  Claude formulates a natural language response
4369 | 6.  The response is displayed to you!
4370 | 
4371 | ## Troubleshooting
4372 | 
4373 | <AccordionGroup>
4374 |   <Accordion title="Claude for Desktop Integration Issues">
4375 |     **Getting logs from Claude for Desktop**
4376 | 
4377 |     Claude.app logging related to MCP is written to log files in `~/Library/Logs/Claude`:
4378 | 
4379 |     *   `mcp.log` will contain general logging about MCP connections and connection failures.
4380 |     *   Files named `mcp-server-SERVERNAME.log` will contain error (stderr) logging from the named server.
4381 | 
4382 |     You can run the following command to list recent logs and follow along with any new ones:
4383 | 
4384 |     ```bash
4385 |     # Check Claude's logs for errors
4386 |     tail -n 20 -f ~/Library/Logs/Claude/mcp*.log
4387 |     ```
4388 | 
4389 |     **Server not showing up in Claude**
4390 | 
4391 |     1.  Check your `claude_desktop_config.json` file syntax
4392 |     2.  Make sure the path to your project is absolute and not relative
4393 |     3.  Restart Claude for Desktop completely
4394 | 
4395 |     **Tool calls failing silently**
4396 | 
4397 |     If Claude attempts to use the tools but they fail:
4398 | 
4399 |     1.  Check Claude's logs for errors
4400 |     2.  Verify your server builds and runs without errors
4401 |     3.  Try restarting Claude for Desktop
4402 | 
4403 |     **None of this is working. What do I do?**
4404 | 
4405 |     Please refer to our [debugging guide](/docs/tools/debugging) for better debugging tools and more detailed guidance.
4406 |   </Accordion>
4407 | 
4408 |   <Accordion title="Weather API Issues">
4409 |     **Error: Failed to retrieve grid point data**
4410 | 
4411 |     This usually means either:
4412 | 
4413 |     1.  The coordinates are outside the US
4414 |     2.  The NWS API is having issues
4415 |     3.  You're being rate limited
4416 | 
4417 |     Fix:
4418 | 
4419 |     *   Verify you're using US coordinates
4420 |     *   Add a small delay between requests
4421 |     *   Check the NWS API status page
4422 | 
4423 |     **Error: No active alerts for \[STATE]**
4424 | 
4425 |     This isn't an error - it just means there are no current weather alerts for that state. Try a different state or check during severe weather.
4426 |   </Accordion>
4427 | </AccordionGroup>
4428 | 
4429 | <Note>
4430 |   For more advanced troubleshooting, check out our guide on [Debugging MCP](/docs/tools/debugging)
4431 | </Note>
4432 | 
4433 | ## Next steps
4434 | 
4435 | <CardGroup cols={2}>
4436 |   <Card title="Building a client" icon="outlet" href="/quickstart/client">
4437 |     Learn how to build your own MCP client that can connect to your server
4438 |   </Card>
4439 | 
4440 |   <Card title="Example servers" icon="grid" href="/examples">
4441 |     Check out our gallery of official MCP servers and implementations
4442 |   </Card>
4443 | 
4444 |   <Card title="Debugging Guide" icon="bug" href="/docs/tools/debugging">
4445 |     Learn how to effectively debug MCP servers and integrations
4446 |   </Card>
4447 | 
4448 |   <Card title="Building MCP with LLMs" icon="comments" href="/building-mcp-with-llms">
4449 |     Learn how to use LLMs like Claude to speed up your MCP development
4450 |   </Card>
4451 | </CardGroup>
4452 | 
4453 | 
4454 | # For Claude Desktop Users
4455 | 
4456 | Get started using pre-built servers in Claude for Desktop.
4457 | 
4458 | In this tutorial, you will extend [Claude for Desktop](https://claude.ai/download) so that it can read from your computer's file system, write new files, move files, and even search files.
4459 | 
4460 | <Frame>
4461 |   <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/quickstart-filesystem.png" />
4462 | </Frame>
4463 | 
4464 | Don't worry — it will ask you for your permission before executing these actions!
4465 | 
4466 | ## 1. Download Claude for Desktop
4467 | 
4468 | Start by downloading [Claude for Desktop](https://claude.ai/download), choosing either macOS or Windows. (Linux is not yet supported for Claude for Desktop.)
4469 | 
4470 | Follow the installation instructions.
4471 | 
4472 | If you already have Claude for Desktop, make sure it's on the latest version by clicking on the Claude menu on your computer and selecting "Check for Updates..."
4473 | 
4474 | <Accordion title="Why Claude for Desktop and not Claude.ai?">
4475 |   Because servers are locally run, MCP currently only supports desktop hosts. Remote hosts are in active development.
4476 | </Accordion>
4477 | 
4478 | ## 2. Add the Filesystem MCP Server
4479 | 
4480 | To add this filesystem functionality, we will be installing a pre-built [Filesystem MCP Server](https://github.com/modelcontextprotocol/servers/tree/main/src/filesystem) to Claude for Desktop. This is one of dozens of [servers](https://github.com/modelcontextprotocol/servers/tree/main) created by Anthropic and the community.
4481 | 
4482 | Get started by opening up the Claude menu on your computer and selecting "Settings..." Please note that these are not the Claude Account Settings found in the app window itself.
4483 | 
4484 | This is what it should look like on a Mac:
4485 | 
4486 | <Frame style={{ textAlign: 'center' }}>
4487 |   <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/quickstart-menu.png" width="400" />
4488 | </Frame>
4489 | 
4490 | Click on "Developer" in the lefthand bar of the Settings pane, and then click on "Edit Config":
4491 | 
4492 | <Frame>
4493 |   <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/quickstart-developer.png" />
4494 | </Frame>
4495 | 
4496 | This will create a configuration file at:
4497 | 
4498 | *   macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
4499 | *   Windows: `%APPDATA%\Claude\claude_desktop_config.json`
4500 | 
4501 | if you don't already have one, and will display the file in your file system.
4502 | 
4503 | Open up the configuration file in any text editor. Replace the file contents with this:
4504 | 
4505 | <Tabs>
4506 |   <Tab title="MacOS/Linux">
4507 |     ```json
4508 |     {
4509 |       "mcpServers": {
4510 |         "filesystem": {
4511 |           "command": "npx",
4512 |           "args": [
4513 |             "-y",
4514 |             "@modelcontextprotocol/server-filesystem",
4515 |             "/Users/username/Desktop",
4516 |             "/Users/username/Downloads"
4517 |           ]
4518 |         }
4519 |       }
4520 |     }
4521 |     ```
4522 |   </Tab>
4523 | 
4524 |   <Tab title="Windows">
4525 |     ```json
4526 |     {
4527 |       "mcpServers": {
4528 |         "filesystem": {
4529 |           "command": "npx",
4530 |           "args": [
4531 |             "-y",
4532 |             "@modelcontextprotocol/server-filesystem",
4533 |             "C:\\Users\\username\\Desktop",
4534 |             "C:\\Users\\username\\Downloads"
4535 |           ]
4536 |         }
4537 |       }
4538 |     }
4539 |     ```
4540 |   </Tab>
4541 | </Tabs>
4542 | 
4543 | Make sure to replace `username` with your computer's username. The paths should point to valid directories that you want Claude to be able to access and modify. It's set up to work for Desktop and Downloads, but you can add more paths as well.
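
For example, each additional directory is just another entry in the `args` array. A hypothetical macOS configuration that also exposes a `Documents` folder (an illustrative path, adjust it to your own setup) would look like this:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/username/Desktop",
        "/Users/username/Downloads",
        "/Users/username/Documents"
      ]
    }
  }
}
```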
4544 | 
4545 | You will also need [Node.js](https://nodejs.org) on your computer for this to run properly. To verify you have Node installed, open the command line on your computer.
4546 | 
4547 | *   On macOS, open the Terminal from your Applications folder
4548 | *   On Windows, press Windows + R, type "cmd", and press Enter
4549 | 
4550 | Once in the command line, verify you have Node installed by entering the following command:
4551 | 
4552 | ```bash
4553 | node --version
4554 | ```
4555 | 
4556 | If you get an error saying "command not found" or "node is not recognized", download Node from [nodejs.org](https://nodejs.org/).
4557 | 
4558 | <Tip>
4559 |   **How does the configuration file work?**
4560 | 
4561 |   This configuration file tells Claude for Desktop which MCP servers to start up every time you start the application. In this case, we have added one server called "filesystem" that will use the Node `npx` command to install and run `@modelcontextprotocol/server-filesystem`. This server, described [here](https://github.com/modelcontextprotocol/servers/tree/main/src/filesystem), will let you access your file system in Claude for Desktop.
4562 | </Tip>
4563 | 
4564 | <Warning>
4565 |   **Command Privileges**
4566 | 
4567 |   Claude for Desktop will run the commands in the configuration file with the permissions of your user account, and access to your local files. Only add commands if you understand and trust the source.
4568 | </Warning>
4569 | 
4570 | ## 3. Restart Claude
4571 | 
4572 | After updating your configuration file, you need to restart Claude for Desktop.
4573 | 
4574 | Upon restarting, you should see a hammer <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/claude-desktop-mcp-hammer-icon.svg" style={{display: 'inline', margin: 0, height: '1.3em'}} /> icon in the bottom right corner of the input box:
4575 | 
4576 | <Frame>
4577 |   <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/quickstart-hammer.png" />
4578 | </Frame>
4579 | 
4580 | After clicking on the hammer icon, you should see the tools that come with the Filesystem MCP Server:
4581 | 
4582 | <Frame style={{ textAlign: 'center' }}>
4583 |   <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/quickstart-tools.png" width="400" />
4584 | </Frame>
4585 | 
4586 | If your server isn't being picked up by Claude for Desktop, proceed to the [Troubleshooting](#troubleshooting) section for debugging tips.
4587 | 
4588 | ## 4. Try it out!
4589 | 
4590 | You can now talk to Claude and ask it about your filesystem. It should know when to call the relevant tools.
4591 | 
4592 | Things you might try asking Claude:
4593 | 
4594 | *   Can you write a poem and save it to my desktop?
4595 | *   What are some work-related files in my downloads folder?
4596 | *   Can you take all the images on my desktop and move them to a new folder called "Images"?
4597 | 
4598 | As needed, Claude will call the relevant tools and seek your approval before taking an action:
4599 | 
4600 | <Frame style={{ textAlign: 'center' }}>
4601 |   <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/quickstart-approve.png" width="500" />
4602 | </Frame>
4603 | 
4604 | ## Troubleshooting
4605 | 
4606 | <AccordionGroup>
4607 |   <Accordion title="Server not showing up in Claude / hammer icon missing">
4608 |     1.  Restart Claude for Desktop completely
4609 |     2.  Check your `claude_desktop_config.json` file syntax
4610 |     3.  Make sure the file paths included in `claude_desktop_config.json` are valid and that they are absolute and not relative
4611 |     4.  Look at [logs](#getting-logs-from-claude-for-desktop) to see why the server is not connecting
4612 |     5.  In your command line, try manually running the server (replacing `username` as you did in `claude_desktop_config.json`) to see if you get any errors:
4613 | 
4614 |     <Tabs>
4615 |       <Tab title="MacOS/Linux">
4616 |         ```bash
4617 |         npx -y @modelcontextprotocol/server-filesystem /Users/username/Desktop /Users/username/Downloads
4618 |         ```
4619 |       </Tab>
4620 | 
4621 |       <Tab title="Windows">
4622 |         ```bash
4623 |         npx -y @modelcontextprotocol/server-filesystem C:\Users\username\Desktop C:\Users\username\Downloads
4624 |         ```
4625 |       </Tab>
4626 |     </Tabs>
4627 |   </Accordion>
4628 | 
4629 |   <Accordion title="Getting logs from Claude for Desktop">
4630 |     Claude.app logging related to MCP is written to log files in:
4631 | 
4632 |     *   macOS: `~/Library/Logs/Claude`
4633 | 
4634 |     *   Windows: `%APPDATA%\Claude\logs`
4635 | 
4636 |     *   `mcp.log` will contain general logging about MCP connections and connection failures.
4637 | 
4638 |     *   Files named `mcp-server-SERVERNAME.log` will contain error (stderr) logging from the named server.
4639 | 
4640 |     You can run the following command to list recent logs and follow along with any new ones (on Windows, it will only show recent logs):
4641 | 
4642 |     <Tabs>
4643 |       <Tab title="MacOS/Linux">
4644 |         ```bash
4645 |         # Check Claude's logs for errors
4646 |         tail -n 20 -f ~/Library/Logs/Claude/mcp*.log
4647 |         ```
4648 |       </Tab>
4649 | 
4650 |       <Tab title="Windows">
4651 |         ```bash
4652 |         type "%APPDATA%\Claude\logs\mcp*.log"
4653 |         ```
4654 |       </Tab>
4655 |     </Tabs>
4656 |   </Accordion>
4657 | 
4658 |   <Accordion title="Tool calls failing silently">
4659 |     If Claude attempts to use the tools but they fail:
4660 | 
4661 |     1.  Check Claude's logs for errors
4662 |     2.  Verify your server builds and runs without errors
4663 |     3.  Try restarting Claude for Desktop
4664 |   </Accordion>
4665 | 
4666 |   <Accordion title="None of this is working. What do I do?">
4667 |     Please refer to our [debugging guide](/docs/tools/debugging) for better debugging tools and more detailed guidance.
4668 |   </Accordion>
4669 | </AccordionGroup>
4670 | 
4671 | ## Next steps
4672 | 
4673 | <CardGroup cols={2}>
4674 |   <Card title="Explore other servers" icon="grid" href="/examples">
4675 |     Check out our gallery of official MCP servers and implementations
4676 |   </Card>
4677 | 
4678 |   <Card title="Build your own server" icon="code" href="/quickstart/server">
4679 |     Now build your own custom server to use in Claude for Desktop and other clients
4680 |   </Card>
4681 | </CardGroup>
4682 | 
4683 | 
4684 | # Building MCP with LLMs
4685 | 
4686 | Speed up your MCP development using LLMs such as Claude!
4687 | 
4688 | This guide will show you how to use LLMs to help you build custom Model Context Protocol (MCP) servers and clients. We'll be focusing on Claude for this tutorial, but you can do this with any frontier LLM.
4689 | 
4690 | ## Preparing the documentation
4691 | 
4692 | Before starting, gather the necessary documentation to help Claude understand MCP:
4693 | 
4694 | 1.  Visit [https://modelcontextprotocol.io/llms-full.txt](https://modelcontextprotocol.io/llms-full.txt) and copy the full documentation text
4695 | 2.  Navigate to either the [MCP TypeScript SDK](https://github.com/modelcontextprotocol/typescript-sdk) or [Python SDK repository](https://github.com/modelcontextprotocol/python-sdk)
4696 | 3.  Copy the README files and other relevant documentation
4697 | 4.  Paste these documents into your conversation with Claude
4698 | 
4699 | ## Describing your server
4700 | 
4701 | Once you've provided the documentation, clearly describe to Claude what kind of server you want to build. Be specific about:
4702 | 
4703 | *   What resources your server will expose
4704 | *   What tools it will provide
4705 | *   Any prompts it should offer
4706 | *   What external systems it needs to interact with
4707 | 
4708 | For example:
4709 | 
4710 | ```
4711 | Build an MCP server that:
4712 | - Connects to my company's PostgreSQL database
4713 | - Exposes table schemas as resources
4714 | - Provides tools for running read-only SQL queries
4715 | - Includes prompts for common data analysis tasks
4716 | ```
4717 | 
4718 | ## Working with Claude
4719 | 
4720 | When working with Claude on MCP servers:
4721 | 
4722 | 1.  Start with the core functionality first, then iterate to add more features
4723 | 2.  Ask Claude to explain any parts of the code you don't understand
4724 | 3.  Request modifications or improvements as needed
4725 | 4.  Have Claude help you test the server and handle edge cases
4726 | 
4727 | Claude can help implement all the key MCP features (a minimal server sketch follows this list):
4728 | 
4729 | *   Resource management and exposure
4730 | *   Tool definitions and implementations
4731 | *   Prompt templates and handlers
4732 | *   Error handling and logging
4733 | *   Connection and transport setup
4734 | 
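For example, here is a minimal sketch of the kind of server skeleton Claude might produce, assuming the MCP TypeScript SDK over stdio and a single illustrative `run_query` tool (the tool name and logic are placeholders, not a finished implementation):

```typescript
// Minimal sketch only: a stdio MCP server exposing one illustrative tool.
// Assumes the MCP TypeScript SDK; tool name and logic are placeholders.
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "example-server", version: "0.1.0" },
  { capabilities: { tools: {} } }
);

// Advertise the tools this server offers.
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "run_query",
      description: "Run a read-only SQL query (placeholder)",
      inputSchema: {
        type: "object",
        properties: { sql: { type: "string" } },
        required: ["sql"],
      },
    },
  ],
}));

// Handle tool calls; a real server would validate input and query a database.
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name !== "run_query") {
    throw new Error(`Unknown tool: ${request.params.name}`);
  }
  const sql = String(request.params.arguments?.sql ?? "");
  return { content: [{ type: "text", text: `Would run: ${sql}` }] };
});

// Connect over stdio so hosts like Claude for Desktop can launch the server.
async function main() {
  await server.connect(new StdioServerTransport());
}

main().catch(console.error);
```

From a starting point like this, you can ask Claude to fill in the real database access, richer error handling, and any additional tools or resources.
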
4735 | ## Best practices
4736 | 
4737 | When building MCP servers with Claude:
4738 | 
4739 | *   Break down complex servers into smaller pieces
4740 | *   Test each component thoroughly before moving on
4741 | *   Keep security in mind: validate inputs and limit access appropriately (see the sketch after this list)
4742 | *   Document your code well for future maintenance
4743 | *   Follow MCP protocol specifications carefully
4744 | 
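As one illustration of the input-validation point above, a handler for the hypothetical `run_query` tool might refuse anything other than a single `SELECT` statement before touching a database (a rough sketch, not a complete SQL sanitizer):

```typescript
// Rough sketch of defensive input validation for the hypothetical run_query tool.
// A real server should also rely on database-level permissions, not string checks alone.
function assertReadOnlyQuery(sql: string): string {
  const trimmed = sql.trim();
  // Reject multi-statement input and anything that is not a SELECT.
  if (trimmed.includes(";") || !/^select\s/i.test(trimmed)) {
    throw new Error("Only single SELECT statements are allowed");
  }
  return trimmed;
}
```
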
4745 | ## Next steps
4746 | 
4747 | After Claude helps you build your server:
4748 | 
4749 | 1.  Review the generated code carefully
4750 | 2.  Test the server with the MCP Inspector tool (see the example command after this list)
4751 | 3.  Connect it to Claude.app or other MCP clients
4752 | 4.  Iterate based on real usage and feedback
4753 | 
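For the testing step, the MCP Inspector can launch your server directly from the command line. Assuming a Node server whose built entry point is `build/index.js` (adjust the path for your project), a command like this starts the Inspector against it:

```bash
npx @modelcontextprotocol/inspector node build/index.js
```
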
4754 | Remember that Claude can help you modify and improve your server as requirements change over time.
4755 | 
4756 | Need more guidance? Just ask Claude specific questions about implementing MCP features or troubleshooting issues that arise.
4757 | 
4758 | 
4759 | 
```