This is page 2 of 2. Use http://codebase.md/phialsbasement/mcp-webresearch-stealthified?lines=true&page={x} to view the full context.

# Directory Structure

```
├── .cursorrules
├── .gitignore
├── docs
│   └── mcp_spec
│       └── llms-full.txt
├── index.ts
├── LICENSE
├── package.json
├── pnpm-lock.yaml
├── README.md
└── tsconfig.json
```

# Files

--------------------------------------------------------------------------------
/docs/mcp_spec/llms-full.txt:
--------------------------------------------------------------------------------

```
   1 | # Clients
   2 | 
   3 | A list of applications that support MCP integrations
   4 | 
   5 | This page provides an overview of applications that support the Model Context Protocol (MCP). Each client may support different MCP features, allowing for varying levels of integration with MCP servers.
   6 | 
   7 | ## Feature support matrix
   8 | 
   9 | | Client                       | [Resources] | [Prompts] | [Tools] | [Sampling] | Roots | Notes                                            |
  10 | | ---------------------------- | ----------- | --------- | ------- | ---------- | ----- | ------------------------------------------------ |
  11 | | [Claude Desktop App][Claude] | ✅           | ✅         | ✅       | ❌          | ❌     | Full support for all MCP features                |
  12 | | [Zed][Zed]                   | ❌           | ✅         | ❌       | ❌          | ❌     | Prompts appear as slash commands                 |
  13 | | [Sourcegraph Cody][Cody]     | ✅           | ❌         | ❌       | ❌          | ❌     | Supports resources through OpenCTX               |
  14 | | [Firebase Genkit][Genkit]    | ⚠️          | ✅         | ✅       | ❌          | ❌     | Supports resource list and lookup through tools. |
  15 | | [Continue][Continue]         | ✅           | ✅         | ✅       | ❌          | ❌     | Full support for all MCP features                |
  16 | 
  17 | [Claude]: https://claude.ai/download
  18 | 
  19 | [Zed]: https://zed.dev
  20 | 
  21 | [Cody]: https://sourcegraph.com/cody
  22 | 
  23 | [Genkit]: https://github.com/firebase/genkit
  24 | 
  25 | [Continue]: https://github.com/continuedev/continue
  26 | 
  27 | [Resources]: https://modelcontextprotocol.io/docs/concepts/resources
  28 | 
  29 | [Prompts]: https://modelcontextprotocol.io/docs/concepts/prompts
  30 | 
  31 | [Tools]: https://modelcontextprotocol.io/docs/concepts/tools
  32 | 
  33 | [Sampling]: https://modelcontextprotocol.io/docs/concepts/sampling
  34 | 
  35 | ## Client details
  36 | 
  37 | ### Claude Desktop App
  38 | 
  39 | The Claude desktop application provides comprehensive support for MCP, enabling deep integration with local tools and data sources.
  40 | 
  41 | **Key features:**
  42 | 
  43 | *   Full support for resources, allowing attachment of local files and data
  44 | *   Support for prompt templates
  45 | *   Tool integration for executing commands and scripts
  46 | *   Local server connections for enhanced privacy and security
  47 | 
  48 | > ⓘ Note: The Claude.ai web application does not currently support MCP. MCP features are only available in the desktop application.
  49 | 
  50 | ### Zed
  51 | 
  52 | [Zed](https://zed.dev/docs/assistant/model-context-protocol) is a high-performance code editor with built-in MCP support, focusing on prompt templates and tool integration.
  53 | 
  54 | **Key features:**
  55 | 
  56 | *   Prompt templates surface as slash commands in the editor
  57 | *   Tool integration for enhanced coding workflows
  58 | *   Tight integration with editor features and workspace context
  59 | *   Does not support MCP resources
  60 | 
  61 | ### Sourcegraph Cody
  62 | 
  63 | [Cody](https://openctx.org/docs/providers/modelcontextprotocol) is Sourcegraph's AI coding assistant, which implements MCP through OpenCTX.
  64 | 
  65 | **Key features:**
  66 | 
  67 | *   Support for MCP resources
  68 | *   Integration with Sourcegraph's code intelligence
  69 | *   Uses OpenCTX as an abstraction layer
  70 | *   Future support planned for additional MCP features
  71 | 
  72 | ### Firebase Genkit
  73 | 
  74 | [Genkit](https://github.com/firebase/genkit) is Firebase's SDK for building and integrating GenAI features into applications. The [genkitx-mcp](https://github.com/firebase/genkit/tree/main/js/plugins/mcp) plugin enables consuming MCP servers as a client or creating MCP servers from Genkit tools and prompts.
  75 | 
  76 | **Key features:**
  77 | 
  78 | *   Client support for tools and prompts (resources partially supported)
  79 | *   Rich discovery with support in Genkit's Dev UI playground
  80 | *   Seamless interoperability with Genkit's existing tools and prompts
  81 | *   Works across a wide variety of GenAI models from top providers
  82 | 
  83 | ### Continue
  84 | 
  85 | [Continue](https://github.com/continuedev/continue) is an open-source AI code assistant, with built-in support for all MCP features.
  86 | 
  87 | **Key features:**
  88 | 
  89 | *   Type "@" to mention MCP resources
  90 | *   Prompt templates surface as slash commands
  91 | *   Use both built-in and MCP tools directly in chat
  92 | *   Supports VS Code and JetBrains IDEs, with any LLM
  93 | 
  94 | ## Adding MCP support to your application
  95 | 
  96 | If you've added MCP support to your application, we encourage you to submit a pull request to add it to this list. MCP integration can provide your users with powerful contextual AI capabilities and make your application part of the growing MCP ecosystem.
  97 | 
  98 | Benefits of adding MCP support:
  99 | 
 100 | *   Enable users to bring their own context and tools
 101 | *   Join a growing ecosystem of interoperable AI applications
 102 | *   Provide users with flexible integration options
 103 | *   Support local-first AI workflows
 104 | 
 105 | To get started with implementing MCP in your application, check out our [Python SDK](https://github.com/modelcontextprotocol/python-sdk) or [TypeScript SDK](https://github.com/modelcontextprotocol/typescript-sdk) documentation.
 106 | 
 107 | ## Updates and corrections
 108 | 
 109 | This list is maintained by the community. If you notice any inaccuracies or would like to update information about MCP support in your application, please submit a pull request or [open an issue in our documentation repository](https://github.com/modelcontextprotocol/docs/issues).
 110 | 
 111 | 
 112 | # Core architecture
 113 | 
 114 | Understand how MCP connects clients, servers, and LLMs
 115 | 
 116 | The Model Context Protocol (MCP) is built on a flexible, extensible architecture that enables seamless communication between LLM applications and integrations. This document covers the core architectural components and concepts.
 117 | 
 118 | ## Overview
 119 | 
 120 | MCP follows a client-server architecture where:
 121 | 
 122 | *   **Hosts** are LLM applications (like Claude Desktop or IDEs) that initiate connections
 123 | *   **Clients** maintain 1:1 connections with servers, inside the host application
 124 | *   **Servers** provide context, tools, and prompts to clients
 125 | 
 126 | ```mermaid
 127 | flowchart LR
 128 |     subgraph " Host (e.g., Claude Desktop) "
 129 |         client1[MCP Client]
 130 |         client2[MCP Client]
 131 |     end
 132 |     subgraph "Server Process"
 133 |         server1[MCP Server]
 134 |     end
 135 |     subgraph "Server Process"
 136 |         server2[MCP Server]
 137 |     end
 138 | 
 139 |     client1 <-->|Transport Layer| server1
 140 |     client2 <-->|Transport Layer| server2
 141 | ```
 142 | 
 143 | ## Core components
 144 | 
 145 | ### Protocol layer
 146 | 
 147 | The protocol layer handles message framing, request/response linking, and high-level communication patterns.
 148 | 
 149 | <Tabs>
 150 |   <Tab title="TypeScript">
 151 |     ```typescript
 152 |     class Protocol<Request, Notification, Result> {
 153 |         // Handle incoming requests
 154 |         setRequestHandler<T>(schema: T, handler: (request: T, extra: RequestHandlerExtra) => Promise<Result>): void
 155 | 
 156 |         // Handle incoming notifications
 157 |         setNotificationHandler<T>(schema: T, handler: (notification: T) => Promise<void>): void
 158 | 
 159 |         // Send requests and await responses
 160 |         request<T>(request: Request, schema: T, options?: RequestOptions): Promise<T>
 161 | 
 162 |         // Send one-way notifications
 163 |         notification(notification: Notification): Promise<void>
 164 |     }
 165 |     ```
 166 |   </Tab>
 167 | 
 168 |   <Tab title="Python">
 169 |     ```python
 170 |     class Session(BaseSession[RequestT, NotificationT, ResultT]):
 171 |         async def send_request(
 172 |             self,
 173 |             request: RequestT,
 174 |             result_type: type[Result]
 175 |         ) -> Result:
 176 |             """
 177 |             Send request and wait for response. Raises McpError if response contains error.
 178 |             """
 179 |             # Request handling implementation
 180 | 
 181 |         async def send_notification(
 182 |             self,
 183 |             notification: NotificationT
 184 |         ) -> None:
 185 |             """Send one-way notification that doesn't expect response."""
 186 |             # Notification handling implementation
 187 | 
 188 |         async def _received_request(
 189 |             self,
 190 |             responder: RequestResponder[ReceiveRequestT, ResultT]
 191 |         ) -> None:
 192 |             """Handle incoming request from other side."""
 193 |             # Request handling implementation
 194 | 
 195 |         async def _received_notification(
 196 |             self,
 197 |             notification: ReceiveNotificationT
 198 |         ) -> None:
 199 |             """Handle incoming notification from other side."""
 200 |             # Notification handling implementation
 201 |     ```
 202 |   </Tab>
 203 | </Tabs>
 204 | 
 205 | Key classes include:
 206 | 
 207 | *   `Protocol`
 208 | *   `Client`
 209 | *   `Server`
 210 | 
 211 | ### Transport layer
 212 | 
 213 | The transport layer handles the actual communication between clients and servers. MCP supports multiple transport mechanisms:
 214 | 
 215 | 1.  **Stdio transport**
 216 |     *   Uses standard input/output for communication
 217 |     *   Ideal for local processes
 218 | 
 219 | 2.  **HTTP with SSE transport**
 220 |     *   Uses Server-Sent Events for server-to-client messages
 221 |     *   HTTP POST for client-to-server messages
 222 | 
 223 | All transports use [JSON-RPC](https://www.jsonrpc.org/) 2.0 to exchange messages. See the [specification](https://spec.modelcontextprotocol.io) for detailed information about the Model Context Protocol message format.
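     | 
     | For instance, a `tools/list` request (a method described later in this document) carried over either transport is a plain JSON-RPC 2.0 message; the `id` value here is illustrative:
     | 
     | ```json
     | {
     |   "jsonrpc": "2.0",
     |   "id": 1,
     |   "method": "tools/list",
     |   "params": {}
     | }
     | ```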
 224 | 
 225 | ### Message types
 226 | 
 227 | MCP has these main types of messages:
 228 | 
 229 | 1.  **Requests** expect a response from the other side:
 230 |     ```typescript
 231 |     interface Request {
 232 |       method: string;
 233 |       params?: { ... };
 234 |     }
 235 |     ```
 236 | 
 237 | 2.  **Notifications** are one-way messages that don't expect a response:
 238 |     ```typescript
 239 |     interface Notification {
 240 |       method: string;
 241 |       params?: { ... };
 242 |     }
 243 |     ```
 244 | 
 245 | 3.  **Results** are successful responses to requests:
 246 |     ```typescript
 247 |     interface Result {
 248 |       [key: string]: unknown;
 249 |     }
 250 |     ```
 251 | 
 252 | 4.  **Errors** indicate that a request failed:
 253 |     ```typescript
 254 |     interface Error {
 255 |       code: number;
 256 |       message: string;
 257 |       data?: unknown;
 258 |     }
 259 |     ```
 260 | 
 261 | ## Connection lifecycle
 262 | 
 263 | ### 1. Initialization
 264 | 
 265 | ```mermaid
 266 | sequenceDiagram
 267 |     participant Client
 268 |     participant Server
 269 | 
 270 |     Client->>Server: initialize request
 271 |     Server->>Client: initialize response
 272 |     Client->>Server: initialized notification
 273 | 
 274 |     Note over Client,Server: Connection ready for use
 275 | ```
 276 | 
  277 | 1.  Client sends `initialize` request with protocol version and capabilities (see the example below)
 278 | 2.  Server responds with its protocol version and capabilities
 279 | 3.  Client sends `initialized` notification as acknowledgment
 280 | 4.  Normal message exchange begins
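     | 
     | As a concrete sketch of step 1, the client's `initialize` request might look like this (the protocol version string, client name, and empty capabilities object are placeholder values):
     | 
     | ```json
     | {
     |   "jsonrpc": "2.0",
     |   "id": 1,
     |   "method": "initialize",
     |   "params": {
     |     "protocolVersion": "2024-11-05",
     |     "capabilities": {},
     |     "clientInfo": { "name": "example-client", "version": "1.0.0" }
     |   }
     | }
     | ```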
 281 | 
 282 | ### 2. Message exchange
 283 | 
 284 | After initialization, the following patterns are supported:
 285 | 
 286 | *   **Request-Response**: Client or server sends requests, the other responds
 287 | *   **Notifications**: Either party sends one-way messages
 288 | 
 289 | ### 3. Termination
 290 | 
 291 | Either party can terminate the connection:
 292 | 
 293 | *   Clean shutdown via `close()`
 294 | *   Transport disconnection
 295 | *   Error conditions
 296 | 
 297 | ## Error handling
 298 | 
 299 | MCP defines these standard error codes:
 300 | 
 301 | ```typescript
 302 | enum ErrorCode {
 303 |   // Standard JSON-RPC error codes
 304 |   ParseError = -32700,
 305 |   InvalidRequest = -32600,
 306 |   MethodNotFound = -32601,
 307 |   InvalidParams = -32602,
 308 |   InternalError = -32603
 309 | }
 310 | ```
 311 | 
 312 | SDKs and applications can define their own error codes above -32000.
 313 | 
 314 | Errors are propagated through:
 315 | 
  316 | *   Error responses to requests (see the example below)
 317 | *   Error events on transports
 318 | *   Protocol-level error handlers
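     | 
     | For example, an error response to a request for an unknown method might look like this (the method name is invented for illustration; the code is `MethodNotFound` from the enum above):
     | 
     | ```json
     | {
     |   "jsonrpc": "2.0",
     |   "id": 2,
     |   "error": {
     |     "code": -32601,
     |     "message": "Method not found: example/unknown"
     |   }
     | }
     | ```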
 319 | 
 320 | ## Implementation example
 321 | 
 322 | Here's a basic example of implementing an MCP server:
 323 | 
 324 | <Tabs>
 325 |   <Tab title="TypeScript">
 326 |     ```typescript
 327 |     import { Server } from "@modelcontextprotocol/sdk/server/index.js";
  328 |     import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
     |     import { ListResourcesRequestSchema } from "@modelcontextprotocol/sdk/types.js";
 329 | 
 330 |     const server = new Server({
 331 |       name: "example-server",
 332 |       version: "1.0.0"
 333 |     }, {
 334 |       capabilities: {
 335 |         resources: {}
 336 |       }
 337 |     });
 338 | 
 339 |     // Handle requests
 340 |     server.setRequestHandler(ListResourcesRequestSchema, async () => {
 341 |       return {
 342 |         resources: [
 343 |           {
 344 |             uri: "example://resource",
 345 |             name: "Example Resource"
 346 |           }
 347 |         ]
 348 |       };
 349 |     });
 350 | 
 351 |     // Connect transport
 352 |     const transport = new StdioServerTransport();
 353 |     await server.connect(transport);
 354 |     ```
 355 |   </Tab>
 356 | 
 357 |   <Tab title="Python">
 358 |     ```python
 359 |     import asyncio
 360 |     import mcp.types as types
 361 |     from mcp.server import Server
 362 |     from mcp.server.stdio import stdio_server
 363 | 
 364 |     app = Server("example-server")
 365 | 
 366 |     @app.list_resources()
 367 |     async def list_resources() -> list[types.Resource]:
 368 |         return [
 369 |             types.Resource(
 370 |                 uri="example://resource",
 371 |                 name="Example Resource"
 372 |             )
 373 |         ]
 374 | 
 375 |     async def main():
 376 |         async with stdio_server() as streams:
 377 |             await app.run(
 378 |                 streams[0],
 379 |                 streams[1],
 380 |                 app.create_initialization_options()
 381 |             )
 382 | 
 383 |     if __name__ == "__main__":
  384 |         asyncio.run(main())
 385 |     ```
 386 |   </Tab>
 387 | </Tabs>
 388 | 
 389 | ## Best practices
 390 | 
 391 | ### Transport selection
 392 | 
 393 | 1.  **Local communication**
 394 |     *   Use stdio transport for local processes
 395 |     *   Efficient for same-machine communication
 396 |     *   Simple process management
 397 | 
 398 | 2.  **Remote communication**
 399 |     *   Use SSE for scenarios requiring HTTP compatibility
 400 |     *   Consider security implications including authentication and authorization
 401 | 
 402 | ### Message handling
 403 | 
 404 | 1.  **Request processing**
 405 |     *   Validate inputs thoroughly
 406 |     *   Use type-safe schemas
 407 |     *   Handle errors gracefully
 408 |     *   Implement timeouts
 409 | 
 410 | 2.  **Progress reporting**
 411 |     *   Use progress tokens for long operations
 412 |     *   Report progress incrementally
 413 |     *   Include total progress when known
 414 | 
 415 | 3.  **Error management**
 416 |     *   Use appropriate error codes
 417 |     *   Include helpful error messages
 418 |     *   Clean up resources on errors
 419 | 
 420 | ## Security considerations
 421 | 
 422 | 1.  **Transport security**
 423 |     *   Use TLS for remote connections
 424 |     *   Validate connection origins
 425 |     *   Implement authentication when needed
 426 | 
 427 | 2.  **Message validation**
 428 |     *   Validate all incoming messages
 429 |     *   Sanitize inputs
 430 |     *   Check message size limits
 431 |     *   Verify JSON-RPC format
 432 | 
 433 | 3.  **Resource protection**
 434 |     *   Implement access controls
 435 |     *   Validate resource paths
 436 |     *   Monitor resource usage
 437 |     *   Rate limit requests
 438 | 
 439 | 4.  **Error handling**
 440 |     *   Don't leak sensitive information
 441 |     *   Log security-relevant errors
 442 |     *   Implement proper cleanup
 443 |     *   Handle DoS scenarios
 444 | 
 445 | ## Debugging and monitoring
 446 | 
 447 | 1.  **Logging**
 448 |     *   Log protocol events
 449 |     *   Track message flow
 450 |     *   Monitor performance
 451 |     *   Record errors
 452 | 
 453 | 2.  **Diagnostics**
 454 |     *   Implement health checks
 455 |     *   Monitor connection state
 456 |     *   Track resource usage
 457 |     *   Profile performance
 458 | 
 459 | 3.  **Testing**
 460 |     *   Test different transports
 461 |     *   Verify error handling
 462 |     *   Check edge cases
 463 |     *   Load test servers
 464 | 
 465 | 
 466 | # Prompts
 467 | 
 468 | Create reusable prompt templates and workflows
 469 | 
 470 | Prompts enable servers to define reusable prompt templates and workflows that clients can easily surface to users and LLMs. They provide a powerful way to standardize and share common LLM interactions.
 471 | 
 472 | <Note>
 473 |   Prompts are designed to be **user-controlled**, meaning they are exposed from servers to clients with the intention of the user being able to explicitly select them for use.
 474 | </Note>
 475 | 
 476 | ## Overview
 477 | 
 478 | Prompts in MCP are predefined templates that can:
 479 | 
 480 | *   Accept dynamic arguments
 481 | *   Include context from resources
 482 | *   Chain multiple interactions
 483 | *   Guide specific workflows
 484 | *   Surface as UI elements (like slash commands)
 485 | 
 486 | ## Prompt structure
 487 | 
 488 | Each prompt is defined with:
 489 | 
 490 | ```typescript
 491 | {
 492 |   name: string;              // Unique identifier for the prompt
 493 |   description?: string;      // Human-readable description
 494 |   arguments?: [              // Optional list of arguments
 495 |     {
 496 |       name: string;          // Argument identifier
 497 |       description?: string;  // Argument description
 498 |       required?: boolean;    // Whether argument is required
 499 |     }
 500 |   ]
 501 | }
 502 | ```
 503 | 
 504 | ## Discovering prompts
 505 | 
 506 | Clients can discover available prompts through the `prompts/list` endpoint:
 507 | 
 508 | ```typescript
 509 | // Request
 510 | {
 511 |   method: "prompts/list"
 512 | }
 513 | 
 514 | // Response
 515 | {
 516 |   prompts: [
 517 |     {
 518 |       name: "analyze-code",
 519 |       description: "Analyze code for potential improvements",
 520 |       arguments: [
 521 |         {
 522 |           name: "language",
 523 |           description: "Programming language",
 524 |           required: true
 525 |         }
 526 |       ]
 527 |     }
 528 |   ]
 529 | }
 530 | ```
 531 | 
 532 | ## Using prompts
 533 | 
 534 | To use a prompt, clients make a `prompts/get` request:
 535 | 
 536 | ````typescript
 537 | // Request
 538 | {
 539 |   method: "prompts/get",
 540 |   params: {
 541 |     name: "analyze-code",
 542 |     arguments: {
 543 |       language: "python"
 544 |     }
 545 |   }
 546 | }
 547 | 
 548 | // Response
 549 | {
 550 |   description: "Analyze Python code for potential improvements",
 551 |   messages: [
 552 |     {
 553 |       role: "user",
 554 |       content: {
 555 |         type: "text",
 556 |         text: "Please analyze the following Python code for potential improvements:\n\n```python\ndef calculate_sum(numbers):\n    total = 0\n    for num in numbers:\n        total = total + num\n    return total\n\nresult = calculate_sum([1, 2, 3, 4, 5])\nprint(result)\n```"
 557 |       }
 558 |     }
 559 |   ]
 560 | }
 561 | ````
 562 | 
 563 | ## Dynamic prompts
 564 | 
 565 | Prompts can be dynamic and include:
 566 | 
 567 | ### Embedded resource context
 568 | 
 569 | ```json
 570 | {
 571 |   "name": "analyze-project",
 572 |   "description": "Analyze project logs and code",
 573 |   "arguments": [
 574 |     {
 575 |       "name": "timeframe",
 576 |       "description": "Time period to analyze logs",
 577 |       "required": true
 578 |     },
 579 |     {
 580 |       "name": "fileUri",
 581 |       "description": "URI of code file to review",
 582 |       "required": true
 583 |     }
 584 |   ]
 585 | }
 586 | ```
 587 | 
 588 | When handling the `prompts/get` request:
 589 | 
 590 | ```json
 591 | {
 592 |   "messages": [
 593 |     {
 594 |       "role": "user",
 595 |       "content": {
 596 |         "type": "text",
 597 |         "text": "Analyze these system logs and the code file for any issues:"
 598 |       }
 599 |     },
 600 |     {
 601 |       "role": "user",
 602 |       "content": {
 603 |         "type": "resource",
 604 |         "resource": {
 605 |           "uri": "logs://recent?timeframe=1h",
 606 |           "text": "[2024-03-14 15:32:11] ERROR: Connection timeout in network.py:127\n[2024-03-14 15:32:15] WARN: Retrying connection (attempt 2/3)\n[2024-03-14 15:32:20] ERROR: Max retries exceeded",
 607 |           "mimeType": "text/plain"
 608 |         }
 609 |       }
 610 |     },
 611 |     {
 612 |       "role": "user",
 613 |       "content": {
 614 |         "type": "resource",
 615 |         "resource": {
 616 |           "uri": "file:///path/to/code.py",
 617 |           "text": "def connect_to_service(timeout=30):\n    retries = 3\n    for attempt in range(retries):\n        try:\n            return establish_connection(timeout)\n        except TimeoutError:\n            if attempt == retries - 1:\n                raise\n            time.sleep(5)\n\ndef establish_connection(timeout):\n    # Connection implementation\n    pass",
 618 |           "mimeType": "text/x-python"
 619 |         }
 620 |       }
 621 |     }
 622 |   ]
 623 | }
 624 | ```
 625 | 
 626 | ### Multi-step workflows
 627 | 
 628 | ```typescript
 629 | const debugWorkflow = {
 630 |   name: "debug-error",
 631 |   async getMessages(error: string) {
 632 |     return [
 633 |       {
 634 |         role: "user",
 635 |         content: {
 636 |           type: "text",
 637 |           text: `Here's an error I'm seeing: ${error}`
 638 |         }
 639 |       },
 640 |       {
 641 |         role: "assistant",
 642 |         content: {
 643 |           type: "text",
 644 |           text: "I'll help analyze this error. What have you tried so far?"
 645 |         }
 646 |       },
 647 |       {
 648 |         role: "user",
 649 |         content: {
 650 |           type: "text",
 651 |           text: "I've tried restarting the service, but the error persists."
 652 |         }
 653 |       }
 654 |     ];
 655 |   }
 656 | };
 657 | ```
 658 | 
 659 | ## Example implementation
 660 | 
 661 | Here's a complete example of implementing prompts in an MCP server:
 662 | 
 663 | <Tabs>
 664 |   <Tab title="TypeScript">
 665 |     ```typescript
 666 |     import { Server } from "@modelcontextprotocol/sdk/server";
 667 |     import {
 668 |       ListPromptsRequestSchema,
 669 |       GetPromptRequestSchema
 670 |     } from "@modelcontextprotocol/sdk/types";
 671 | 
 672 |     const PROMPTS = {
 673 |       "git-commit": {
 674 |         name: "git-commit",
 675 |         description: "Generate a Git commit message",
 676 |         arguments: [
 677 |           {
 678 |             name: "changes",
 679 |             description: "Git diff or description of changes",
 680 |             required: true
 681 |           }
 682 |         ]
 683 |       },
 684 |       "explain-code": {
 685 |         name: "explain-code",
 686 |         description: "Explain how code works",
 687 |         arguments: [
 688 |           {
 689 |             name: "code",
 690 |             description: "Code to explain",
 691 |             required: true
 692 |           },
 693 |           {
 694 |             name: "language",
 695 |             description: "Programming language",
 696 |             required: false
 697 |           }
 698 |         ]
 699 |       }
 700 |     };
 701 | 
 702 |     const server = new Server({
 703 |       name: "example-prompts-server",
 704 |       version: "1.0.0"
 705 |     }, {
 706 |       capabilities: {
 707 |         prompts: {}
 708 |       }
 709 |     });
 710 | 
 711 |     // List available prompts
 712 |     server.setRequestHandler(ListPromptsRequestSchema, async () => {
 713 |       return {
 714 |         prompts: Object.values(PROMPTS)
 715 |       };
 716 |     });
 717 | 
 718 |     // Get specific prompt
 719 |     server.setRequestHandler(GetPromptRequestSchema, async (request) => {
 720 |       const prompt = PROMPTS[request.params.name];
 721 |       if (!prompt) {
 722 |         throw new Error(`Prompt not found: ${request.params.name}`);
 723 |       }
 724 | 
 725 |       if (request.params.name === "git-commit") {
 726 |         return {
 727 |           messages: [
 728 |             {
 729 |               role: "user",
 730 |               content: {
 731 |                 type: "text",
 732 |                 text: `Generate a concise but descriptive commit message for these changes:\n\n${request.params.arguments?.changes}`
 733 |               }
 734 |             }
 735 |           ]
 736 |         };
 737 |       }
 738 | 
 739 |       if (request.params.name === "explain-code") {
 740 |         const language = request.params.arguments?.language || "Unknown";
 741 |         return {
 742 |           messages: [
 743 |             {
 744 |               role: "user",
 745 |               content: {
 746 |                 type: "text",
 747 |                 text: `Explain how this ${language} code works:\n\n${request.params.arguments?.code}`
 748 |               }
 749 |             }
 750 |           ]
 751 |         };
 752 |       }
 753 | 
 754 |       throw new Error("Prompt implementation not found");
 755 |     });
 756 |     ```
 757 |   </Tab>
 758 | 
 759 |   <Tab title="Python">
 760 |     ```python
 761 |     from mcp.server import Server
 762 |     import mcp.types as types
 763 | 
 764 |     # Define available prompts
 765 |     PROMPTS = {
 766 |         "git-commit": types.Prompt(
 767 |             name="git-commit",
 768 |             description="Generate a Git commit message",
 769 |             arguments=[
 770 |                 types.PromptArgument(
 771 |                     name="changes",
 772 |                     description="Git diff or description of changes",
 773 |                     required=True
 774 |                 )
 775 |             ],
 776 |         ),
 777 |         "explain-code": types.Prompt(
 778 |             name="explain-code",
 779 |             description="Explain how code works",
 780 |             arguments=[
 781 |                 types.PromptArgument(
 782 |                     name="code",
 783 |                     description="Code to explain",
 784 |                     required=True
 785 |                 ),
 786 |                 types.PromptArgument(
 787 |                     name="language",
 788 |                     description="Programming language",
 789 |                     required=False
 790 |                 )
 791 |             ],
 792 |         )
 793 |     }
 794 | 
 795 |     # Initialize server
 796 |     app = Server("example-prompts-server")
 797 | 
 798 |     @app.list_prompts()
 799 |     async def list_prompts() -> list[types.Prompt]:
 800 |         return list(PROMPTS.values())
 801 | 
 802 |     @app.get_prompt()
 803 |     async def get_prompt(
 804 |         name: str, arguments: dict[str, str] | None = None
 805 |     ) -> types.GetPromptResult:
 806 |         if name not in PROMPTS:
 807 |             raise ValueError(f"Prompt not found: {name}")
 808 | 
 809 |         if name == "git-commit":
 810 |             changes = arguments.get("changes") if arguments else ""
 811 |             return types.GetPromptResult(
 812 |                 messages=[
 813 |                     types.PromptMessage(
 814 |                         role="user",
 815 |                         content=types.TextContent(
 816 |                             type="text",
 817 |                             text=f"Generate a concise but descriptive commit message "
 818 |                             f"for these changes:\n\n{changes}"
 819 |                         )
 820 |                     )
 821 |                 ]
 822 |             )
 823 | 
 824 |         if name == "explain-code":
 825 |             code = arguments.get("code") if arguments else ""
 826 |             language = arguments.get("language", "Unknown") if arguments else "Unknown"
 827 |             return types.GetPromptResult(
 828 |                 messages=[
 829 |                     types.PromptMessage(
 830 |                         role="user",
 831 |                         content=types.TextContent(
 832 |                             type="text",
 833 |                             text=f"Explain how this {language} code works:\n\n{code}"
 834 |                         )
 835 |                     )
 836 |                 ]
 837 |             )
 838 | 
 839 |         raise ValueError("Prompt implementation not found")
 840 |     ```
 841 |   </Tab>
 842 | </Tabs>
 843 | 
 844 | ## Best practices
 845 | 
 846 | When implementing prompts:
 847 | 
 848 | 1.  Use clear, descriptive prompt names
 849 | 2.  Provide detailed descriptions for prompts and arguments
 850 | 3.  Validate all required arguments
 851 | 4.  Handle missing arguments gracefully
 852 | 5.  Consider versioning for prompt templates
 853 | 6.  Cache dynamic content when appropriate
 854 | 7.  Implement error handling
 855 | 8.  Document expected argument formats
 856 | 9.  Consider prompt composability
 857 | 10. Test prompts with various inputs
 858 | 
 859 | ## UI integration
 860 | 
 861 | Prompts can be surfaced in client UIs as:
 862 | 
 863 | *   Slash commands
 864 | *   Quick actions
 865 | *   Context menu items
 866 | *   Command palette entries
 867 | *   Guided workflows
 868 | *   Interactive forms
 869 | 
 870 | ## Updates and changes
 871 | 
 872 | Servers can notify clients about prompt changes:
 873 | 
 874 | 1.  Server capability: `prompts.listChanged`
 875 | 2.  Notification: `notifications/prompts/list_changed`
 876 | 3.  Client re-fetches prompt list
 877 | 
 878 | ## Security considerations
 879 | 
 880 | When implementing prompts:
 881 | 
 882 | *   Validate all arguments
 883 | *   Sanitize user input
 884 | *   Consider rate limiting
 885 | *   Implement access controls
 886 | *   Audit prompt usage
 887 | *   Handle sensitive data appropriately
 888 | *   Validate generated content
 889 | *   Implement timeouts
 890 | *   Consider prompt injection risks
 891 | *   Document security requirements
 892 | 
 893 | 
 894 | # Resources
 895 | 
 896 | Expose data and content from your servers to LLMs
 897 | 
 898 | Resources are a core primitive in the Model Context Protocol (MCP) that allow servers to expose data and content that can be read by clients and used as context for LLM interactions.
 899 | 
 900 | <Note>
 901 |   Resources are designed to be **application-controlled**, meaning that the client application can decide how and when they should be used.
 902 |   Different MCP clients may handle resources differently. For example:
 903 | 
 904 |   *   Claude Desktop currently requires users to explicitly select resources before they can be used
 905 |   *   Other clients might automatically select resources based on heuristics
 906 |   *   Some implementations may even allow the AI model itself to determine which resources to use
 907 | 
 908 |   Server authors should be prepared to handle any of these interaction patterns when implementing resource support. In order to expose data to models automatically, server authors should use a **model-controlled** primitive such as [Tools](./tools).
 909 | </Note>
 910 | 
 911 | ## Overview
 912 | 
 913 | Resources represent any kind of data that an MCP server wants to make available to clients. This can include:
 914 | 
 915 | *   File contents
 916 | *   Database records
 917 | *   API responses
 918 | *   Live system data
 919 | *   Screenshots and images
 920 | *   Log files
 921 | *   And more
 922 | 
 923 | Each resource is identified by a unique URI and can contain either text or binary data.
 924 | 
 925 | ## Resource URIs
 926 | 
 927 | Resources are identified using URIs that follow this format:
 928 | 
 929 | ```
 930 | [protocol]://[host]/[path]
 931 | ```
 932 | 
 933 | For example:
 934 | 
 935 | *   `file:///home/user/documents/report.pdf`
 936 | *   `postgres://database/customers/schema`
 937 | *   `screen://localhost/display1`
 938 | 
 939 | The protocol and path structure is defined by the MCP server implementation. Servers can define their own custom URI schemes.
 940 | 
 941 | ## Resource types
 942 | 
 943 | Resources can contain two types of content:
 944 | 
 945 | ### Text resources
 946 | 
 947 | Text resources contain UTF-8 encoded text data. These are suitable for:
 948 | 
 949 | *   Source code
 950 | *   Configuration files
 951 | *   Log files
 952 | *   JSON/XML data
 953 | *   Plain text
 954 | 
 955 | ### Binary resources
 956 | 
 957 | Binary resources contain raw binary data encoded in base64. These are suitable for:
 958 | 
 959 | *   Images
 960 | *   PDFs
 961 | *   Audio files
 962 | *   Video files
 963 | *   Other non-text formats
 964 | 
 965 | ## Resource discovery
 966 | 
 967 | Clients can discover available resources through two main methods:
 968 | 
 969 | ### Direct resources
 970 | 
 971 | Servers expose a list of concrete resources via the `resources/list` endpoint. Each resource includes:
 972 | 
 973 | ```typescript
 974 | {
 975 |   uri: string;           // Unique identifier for the resource
 976 |   name: string;          // Human-readable name
 977 |   description?: string;  // Optional description
 978 |   mimeType?: string;     // Optional MIME type
 979 | }
 980 | ```
 981 | 
 982 | ### Resource templates
 983 | 
 984 | For dynamic resources, servers can expose [URI templates](https://datatracker.ietf.org/doc/html/rfc6570) that clients can use to construct valid resource URIs:
 985 | 
 986 | ```typescript
 987 | {
 988 |   uriTemplate: string;   // URI template following RFC 6570
 989 |   name: string;          // Human-readable name for this type
 990 |   description?: string;  // Optional description
 991 |   mimeType?: string;     // Optional MIME type for all matching resources
 992 | }
 993 | ```
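     | 
     | As a hypothetical illustration, a server exposing per-service log files might advertise a template such as the following, which clients expand (per RFC 6570) into concrete URIs like `file:///logs/app.log`:
     | 
     | ```typescript
     | // Hypothetical template for illustration
     | {
     |   uriTemplate: "file:///logs/{name}",
     |   name: "Log Files",
     |   description: "Log file for a given service name",
     |   mimeType: "text/plain"
     | }
     | ```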
 994 | 
 995 | ## Reading resources
 996 | 
 997 | To read a resource, clients make a `resources/read` request with the resource URI.
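     | 
     | For example, a request for the log resource used in the example implementation below might look like this:
     | 
     | ```typescript
     | // Request
     | {
     |   method: "resources/read",
     |   params: {
     |     uri: "file:///logs/app.log"
     |   }
     | }
     | ```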
 998 | 
 999 | The server responds with a list of resource contents:
1000 | 
1001 | ```typescript
1002 | {
1003 |   contents: [
1004 |     {
1005 |       uri: string;        // The URI of the resource
1006 |       mimeType?: string;  // Optional MIME type
1007 | 
1008 |       // One of:
1009 |       text?: string;      // For text resources
1010 |       blob?: string;      // For binary resources (base64 encoded)
1011 |     }
1012 |   ]
1013 | }
1014 | ```
1015 | 
1016 | <Tip>
1017 |   Servers may return multiple resources in response to one `resources/read` request. This could be used, for example, to return a list of files inside a directory when the directory is read.
1018 | </Tip>
1019 | 
1020 | ## Resource updates
1021 | 
1022 | MCP supports real-time updates for resources through two mechanisms:
1023 | 
1024 | ### List changes
1025 | 
1026 | Servers can notify clients when their list of available resources changes via the `notifications/resources/list_changed` notification.
1027 | 
1028 | ### Content changes
1029 | 
1030 | Clients can subscribe to updates for specific resources:
1031 | 
 1032 | 1.  Client sends `resources/subscribe` with resource URI (see the example below)
1033 | 2.  Server sends `notifications/resources/updated` when the resource changes
1034 | 3.  Client can fetch latest content with `resources/read`
1035 | 4.  Client can unsubscribe with `resources/unsubscribe`
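     | 
     | A sketch of steps 1 and 2 (the URI and the exact parameter shape are illustrative):
     | 
     | ```typescript
     | // Client -> Server: subscribe to a specific resource
     | {
     |   method: "resources/subscribe",
     |   params: { uri: "file:///logs/app.log" }
     | }
     | 
     | // Server -> Client: sent whenever the resource changes
     | {
     |   method: "notifications/resources/updated",
     |   params: { uri: "file:///logs/app.log" }
     | }
     | ```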
1036 | 
1037 | ## Example implementation
1038 | 
1039 | Here's a simple example of implementing resource support in an MCP server:
1040 | 
1041 | <Tabs>
1042 |   <Tab title="TypeScript">
1043 |     ```typescript
1044 |     const server = new Server({
1045 |       name: "example-server",
1046 |       version: "1.0.0"
1047 |     }, {
1048 |       capabilities: {
1049 |         resources: {}
1050 |       }
1051 |     });
1052 | 
1053 |     // List available resources
1054 |     server.setRequestHandler(ListResourcesRequestSchema, async () => {
1055 |       return {
1056 |         resources: [
1057 |           {
1058 |             uri: "file:///logs/app.log",
1059 |             name: "Application Logs",
1060 |             mimeType: "text/plain"
1061 |           }
1062 |         ]
1063 |       };
1064 |     });
1065 | 
1066 |     // Read resource contents
1067 |     server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
1068 |       const uri = request.params.uri;
1069 | 
1070 |       if (uri === "file:///logs/app.log") {
1071 |         const logContents = await readLogFile();
1072 |         return {
1073 |           contents: [
1074 |             {
1075 |               uri,
1076 |               mimeType: "text/plain",
1077 |               text: logContents
1078 |             }
1079 |           ]
1080 |         };
1081 |       }
1082 | 
1083 |       throw new Error("Resource not found");
1084 |     });
1085 |     ```
1086 |   </Tab>
1087 | 
1088 |   <Tab title="Python">
1089 |     ```python
1090 |     app = Server("example-server")
1091 | 
1092 |     @app.list_resources()
1093 |     async def list_resources() -> list[types.Resource]:
1094 |         return [
1095 |             types.Resource(
1096 |                 uri="file:///logs/app.log",
1097 |                 name="Application Logs",
1098 |                 mimeType="text/plain"
1099 |             )
1100 |         ]
1101 | 
1102 |     @app.read_resource()
1103 |     async def read_resource(uri: AnyUrl) -> str:
1104 |         if str(uri) == "file:///logs/app.log":
1105 |             log_contents = await read_log_file()
1106 |             return log_contents
1107 | 
1108 |         raise ValueError("Resource not found")
1109 | 
1110 |     # Start server
1111 |     async with stdio_server() as streams:
1112 |         await app.run(
1113 |             streams[0],
1114 |             streams[1],
1115 |             app.create_initialization_options()
1116 |         )
1117 |     ```
1118 |   </Tab>
1119 | </Tabs>
1120 | 
1121 | ## Best practices
1122 | 
1123 | When implementing resource support:
1124 | 
1125 | 1.  Use clear, descriptive resource names and URIs
1126 | 2.  Include helpful descriptions to guide LLM understanding
1127 | 3.  Set appropriate MIME types when known
1128 | 4.  Implement resource templates for dynamic content
1129 | 5.  Use subscriptions for frequently changing resources
1130 | 6.  Handle errors gracefully with clear error messages
1131 | 7.  Consider pagination for large resource lists
1132 | 8.  Cache resource contents when appropriate
1133 | 9.  Validate URIs before processing
1134 | 10. Document your custom URI schemes
1135 | 
1136 | ## Security considerations
1137 | 
1138 | When exposing resources:
1139 | 
1140 | *   Validate all resource URIs
1141 | *   Implement appropriate access controls
1142 | *   Sanitize file paths to prevent directory traversal
1143 | *   Be cautious with binary data handling
1144 | *   Consider rate limiting for resource reads
1145 | *   Audit resource access
1146 | *   Encrypt sensitive data in transit
1147 | *   Validate MIME types
1148 | *   Implement timeouts for long-running reads
1149 | *   Handle resource cleanup appropriately
1150 | 
1151 | 
1152 | # Sampling
1153 | 
1154 | Let your servers request completions from LLMs
1155 | 
1156 | Sampling is a powerful MCP feature that allows servers to request LLM completions through the client, enabling sophisticated agentic behaviors while maintaining security and privacy.
1157 | 
1158 | <Info>
1159 |   This feature of MCP is not yet supported in the Claude Desktop client.
1160 | </Info>
1161 | 
1162 | ## How sampling works
1163 | 
1164 | The sampling flow follows these steps:
1165 | 
1166 | 1.  Server sends a `sampling/createMessage` request to the client
1167 | 2.  Client reviews the request and can modify it
1168 | 3.  Client samples from an LLM
1169 | 4.  Client reviews the completion
1170 | 5.  Client returns the result to the server
1171 | 
1172 | This human-in-the-loop design ensures users maintain control over what the LLM sees and generates.
1173 | 
1174 | ## Message format
1175 | 
1176 | Sampling requests use a standardized message format:
1177 | 
1178 | ```typescript
1179 | {
1180 |   messages: [
1181 |     {
1182 |       role: "user" | "assistant",
1183 |       content: {
1184 |         type: "text" | "image",
1185 | 
1186 |         // For text:
1187 |         text?: string,
1188 | 
1189 |         // For images:
1190 |         data?: string,             // base64 encoded
1191 |         mimeType?: string
1192 |       }
1193 |     }
1194 |   ],
1195 |   modelPreferences?: {
1196 |     hints?: [{
1197 |       name?: string                // Suggested model name/family
1198 |     }],
1199 |     costPriority?: number,         // 0-1, importance of minimizing cost
1200 |     speedPriority?: number,        // 0-1, importance of low latency
1201 |     intelligencePriority?: number  // 0-1, importance of capabilities
1202 |   },
1203 |   systemPrompt?: string,
1204 |   includeContext?: "none" | "thisServer" | "allServers",
1205 |   temperature?: number,
1206 |   maxTokens: number,
1207 |   stopSequences?: string[],
1208 |   metadata?: Record<string, unknown>
1209 | }
1210 | ```
1211 | 
1212 | ## Request parameters
1213 | 
1214 | ### Messages
1215 | 
1216 | The `messages` array contains the conversation history to send to the LLM. Each message has:
1217 | 
1218 | *   `role`: Either "user" or "assistant"
1219 | *   `content`: The message content, which can be:
1220 |     *   Text content with a `text` field
1221 |     *   Image content with `data` (base64) and `mimeType` fields
1222 | 
1223 | ### Model preferences
1224 | 
1225 | The `modelPreferences` object allows servers to specify their model selection preferences:
1226 | 
1227 | *   `hints`: Array of model name suggestions that clients can use to select an appropriate model:
1228 |     *   `name`: String that can match full or partial model names (e.g. "claude-3", "sonnet")
1229 |     *   Clients may map hints to equivalent models from different providers
1230 |     *   Multiple hints are evaluated in preference order
1231 | 
1232 | *   Priority values (0-1 normalized):
1233 |     *   `costPriority`: Importance of minimizing costs
1234 |     *   `speedPriority`: Importance of low latency response
1235 |     *   `intelligencePriority`: Importance of advanced model capabilities
1236 | 
1237 | Clients make the final model selection based on these preferences and their available models.
1238 | 
1239 | ### System prompt
1240 | 
1241 | An optional `systemPrompt` field allows servers to request a specific system prompt. The client may modify or ignore this.
1242 | 
1243 | ### Context inclusion
1244 | 
1245 | The `includeContext` parameter specifies what MCP context to include:
1246 | 
1247 | *   `"none"`: No additional context
1248 | *   `"thisServer"`: Include context from the requesting server
1249 | *   `"allServers"`: Include context from all connected MCP servers
1250 | 
1251 | The client controls what context is actually included.
1252 | 
1253 | ### Sampling parameters
1254 | 
1255 | Fine-tune the LLM sampling with:
1256 | 
1257 | *   `temperature`: Controls randomness (0.0 to 1.0)
1258 | *   `maxTokens`: Maximum tokens to generate
1259 | *   `stopSequences`: Array of sequences that stop generation
1260 | *   `metadata`: Additional provider-specific parameters
1261 | 
1262 | ## Response format
1263 | 
1264 | The client returns a completion result:
1265 | 
1266 | ```typescript
1267 | {
1268 |   model: string,  // Name of the model used
1269 |   stopReason?: "endTurn" | "stopSequence" | "maxTokens" | string,
1270 |   role: "user" | "assistant",
1271 |   content: {
1272 |     type: "text" | "image",
1273 |     text?: string,
1274 |     data?: string,
1275 |     mimeType?: string
1276 |   }
1277 | }
1278 | ```
1279 | 
1280 | ## Example request
1281 | 
1282 | Here's an example of requesting sampling from a client:
1283 | 
1284 | ```json
1285 | {
1286 |   "method": "sampling/createMessage",
1287 |   "params": {
1288 |     "messages": [
1289 |       {
1290 |         "role": "user",
1291 |         "content": {
1292 |           "type": "text",
1293 |           "text": "What files are in the current directory?"
1294 |         }
1295 |       }
1296 |     ],
1297 |     "systemPrompt": "You are a helpful file system assistant.",
1298 |     "includeContext": "thisServer",
1299 |     "maxTokens": 100
1300 |   }
1301 | }
1302 | ```
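     | 
     | A corresponding completion result from the client might look roughly like this (the model name and response text are invented for illustration):
     | 
     | ```json
     | {
     |   "model": "claude-3-5-sonnet-20241022",
     |   "stopReason": "endTurn",
     |   "role": "assistant",
     |   "content": {
     |     "type": "text",
     |     "text": "The current directory contains README.md, index.ts, and package.json."
     |   }
     | }
     | ```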
1303 | 
1304 | ## Best practices
1305 | 
1306 | When implementing sampling:
1307 | 
1308 | 1.  Always provide clear, well-structured prompts
1309 | 2.  Handle both text and image content appropriately
1310 | 3.  Set reasonable token limits
1311 | 4.  Include relevant context through `includeContext`
1312 | 5.  Validate responses before using them
1313 | 6.  Handle errors gracefully
1314 | 7.  Consider rate limiting sampling requests
1315 | 8.  Document expected sampling behavior
1316 | 9.  Test with various model parameters
1317 | 10. Monitor sampling costs
1318 | 
 1319 | ## Human-in-the-loop controls
1320 | 
1321 | Sampling is designed with human oversight in mind:
1322 | 
1323 | ### For prompts
1324 | 
1325 | *   Clients should show users the proposed prompt
1326 | *   Users should be able to modify or reject prompts
1327 | *   System prompts can be filtered or modified
1328 | *   Context inclusion is controlled by the client
1329 | 
1330 | ### For completions
1331 | 
1332 | *   Clients should show users the completion
1333 | *   Users should be able to modify or reject completions
1334 | *   Clients can filter or modify completions
1335 | *   Users control which model is used
1336 | 
1337 | ## Security considerations
1338 | 
1339 | When implementing sampling:
1340 | 
1341 | *   Validate all message content
1342 | *   Sanitize sensitive information
1343 | *   Implement appropriate rate limits
1344 | *   Monitor sampling usage
1345 | *   Encrypt data in transit
1346 | *   Handle user data privacy
1347 | *   Audit sampling requests
1348 | *   Control cost exposure
1349 | *   Implement timeouts
1350 | *   Handle model errors gracefully
1351 | 
1352 | ## Common patterns
1353 | 
1354 | ### Agentic workflows
1355 | 
1356 | Sampling enables agentic patterns like:
1357 | 
1358 | *   Reading and analyzing resources
1359 | *   Making decisions based on context
1360 | *   Generating structured data
1361 | *   Handling multi-step tasks
1362 | *   Providing interactive assistance
1363 | 
1364 | ### Context management
1365 | 
1366 | Best practices for context:
1367 | 
1368 | *   Request minimal necessary context
1369 | *   Structure context clearly
1370 | *   Handle context size limits
1371 | *   Update context as needed
1372 | *   Clean up stale context
1373 | 
1374 | ### Error handling
1375 | 
1376 | Robust error handling should:
1377 | 
1378 | *   Catch sampling failures
1379 | *   Handle timeout errors
1380 | *   Manage rate limits
1381 | *   Validate responses
1382 | *   Provide fallback behaviors
1383 | *   Log errors appropriately
1384 | 
1385 | ## Limitations
1386 | 
1387 | Be aware of these limitations:
1388 | 
1389 | *   Sampling depends on client capabilities
1390 | *   Users control sampling behavior
1391 | *   Context size has limits
1392 | *   Rate limits may apply
1393 | *   Costs should be considered
1394 | *   Model availability varies
1395 | *   Response times vary
1396 | *   Not all content types supported
1397 | 
1398 | 
1399 | # Tools
1400 | 
1401 | Enable LLMs to perform actions through your server
1402 | 
1403 | Tools are a powerful primitive in the Model Context Protocol (MCP) that enable servers to expose executable functionality to clients. Through tools, LLMs can interact with external systems, perform computations, and take actions in the real world.
1404 | 
1405 | <Note>
1406 |   Tools are designed to be **model-controlled**, meaning that tools are exposed from servers to clients with the intention of the AI model being able to automatically invoke them (with a human in the loop to grant approval).
1407 | </Note>
1408 | 
1409 | ## Overview
1410 | 
1411 | Tools in MCP allow servers to expose executable functions that can be invoked by clients and used by LLMs to perform actions. Key aspects of tools include:
1412 | 
1413 | *   **Discovery**: Clients can list available tools through the `tools/list` endpoint
1414 | *   **Invocation**: Tools are called using the `tools/call` endpoint, where servers perform the requested operation and return results
1415 | *   **Flexibility**: Tools can range from simple calculations to complex API interactions
1416 | 
1417 | Like [resources](/docs/concepts/resources), tools are identified by unique names and can include descriptions to guide their usage. However, unlike resources, tools represent dynamic operations that can modify state or interact with external systems.
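     | 
     | For example, invoking the `calculate_sum` tool defined in the next section might look like this (the argument values are illustrative):
     | 
     | ```typescript
     | // Request
     | {
     |   method: "tools/call",
     |   params: {
     |     name: "calculate_sum",
     |     arguments: { a: 2, b: 3 }
     |   }
     | }
     | ```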
1418 | 
1419 | ## Tool definition structure
1420 | 
1421 | Each tool is defined with the following structure:
1422 | 
1423 | ```typescript
1424 | {
1425 |   name: string;          // Unique identifier for the tool
1426 |   description?: string;  // Human-readable description
1427 |   inputSchema: {         // JSON Schema for the tool's parameters
1428 |     type: "object",
1429 |     properties: { ... }  // Tool-specific parameters
1430 |   }
1431 | }
1432 | ```
1433 | 
1434 | ## Implementing tools
1435 | 
1436 | Here's an example of implementing a basic tool in an MCP server:
1437 | 
1438 | <Tabs>
1439 |   <Tab title="TypeScript">
1440 |     ```typescript
1441 |     const server = new Server({
1442 |       name: "example-server",
1443 |       version: "1.0.0"
1444 |     }, {
1445 |       capabilities: {
1446 |         tools: {}
1447 |       }
1448 |     });
1449 | 
1450 |     // Define available tools
1451 |     server.setRequestHandler(ListToolsRequestSchema, async () => {
1452 |       return {
1453 |         tools: [{
1454 |           name: "calculate_sum",
1455 |           description: "Add two numbers together",
1456 |           inputSchema: {
1457 |             type: "object",
1458 |             properties: {
1459 |               a: { type: "number" },
1460 |               b: { type: "number" }
1461 |             },
1462 |             required: ["a", "b"]
1463 |           }
1464 |         }]
1465 |       };
1466 |     });
1467 | 
1468 |     // Handle tool execution
1469 |     server.setRequestHandler(CallToolRequestSchema, async (request) => {
1470 |       if (request.params.name === "calculate_sum") {
1471 |         const { a, b } = request.params.arguments;
1472 |         return {
1473 |           toolResult: a + b
1474 |         };
1475 |       }
1476 |       throw new Error("Tool not found");
1477 |     });
1478 |     ```
1479 |   </Tab>
1480 | 
1481 |   <Tab title="Python">
1482 |     ```python
1483 |     app = Server("example-server")
1484 | 
1485 |     @app.list_tools()
1486 |     async def list_tools() -> list[types.Tool]:
1487 |         return [
1488 |             types.Tool(
1489 |                 name="calculate_sum",
1490 |                 description="Add two numbers together",
1491 |                 inputSchema={
1492 |                     "type": "object",
1493 |                     "properties": {
1494 |                         "a": {"type": "number"},
1495 |                         "b": {"type": "number"}
1496 |                     },
1497 |                     "required": ["a", "b"]
1498 |                 }
1499 |             )
1500 |         ]
1501 | 
1502 |     @app.call_tool()
1503 |     async def call_tool(
1504 |         name: str,
1505 |         arguments: dict
1506 |     ) -> list[types.TextContent | types.ImageContent | types.EmbeddedResource]:
1507 |         if name == "calculate_sum":
1508 |             a = arguments["a"]
1509 |             b = arguments["b"]
1510 |             result = a + b
1511 |             return [types.TextContent(type="text", text=str(result))]
1512 |         raise ValueError(f"Tool not found: {name}")
1513 |     ```
1514 |   </Tab>
1515 | </Tabs>
1516 | 
1517 | ## Example tool patterns
1518 | 
1519 | Here are some examples of types of tools that a server could provide:
1520 | 
1521 | ### System operations
1522 | 
1523 | Tools that interact with the local system:
1524 | 
1525 | ```typescript
1526 | {
1527 |   name: "execute_command",
1528 |   description: "Run a shell command",
1529 |   inputSchema: {
1530 |     type: "object",
1531 |     properties: {
1532 |       command: { type: "string" },
1533 |       args: { type: "array", items: { type: "string" } }
1534 |     }
1535 |   }
1536 | }
1537 | ```
1538 | 
1539 | ### API integrations
1540 | 
1541 | Tools that wrap external APIs:
1542 | 
1543 | ```typescript
1544 | {
1545 |   name: "github_create_issue",
1546 |   description: "Create a GitHub issue",
1547 |   inputSchema: {
1548 |     type: "object",
1549 |     properties: {
1550 |       title: { type: "string" },
1551 |       body: { type: "string" },
1552 |       labels: { type: "array", items: { type: "string" } }
1553 |     }
1554 |   }
1555 | }
1556 | ```
1557 | 
1558 | ### Data processing
1559 | 
1560 | Tools that transform or analyze data:
1561 | 
1562 | ```typescript
1563 | {
1564 |   name: "analyze_csv",
1565 |   description: "Analyze a CSV file",
1566 |   inputSchema: {
1567 |     type: "object",
1568 |     properties: {
1569 |       filepath: { type: "string" },
1570 |       operations: {
1571 |         type: "array",
1572 |         items: {
1573 |           enum: ["sum", "average", "count"]
1574 |         }
1575 |       }
1576 |     }
1577 |   }
1578 | }
1579 | ```
1580 | 
1581 | ## Best practices
1582 | 
1583 | When implementing tools:
1584 | 
1585 | 1.  Provide clear, descriptive names and descriptions
1586 | 2.  Use detailed JSON Schema definitions for parameters
1587 | 3.  Include examples in tool descriptions to demonstrate how the model should use them
1588 | 4.  Implement proper error handling and validation
1589 | 5.  Use progress reporting for long operations
1590 | 6.  Keep tool operations focused and atomic
1591 | 7.  Document expected return value structures
1592 | 8.  Implement proper timeouts
1593 | 9.  Consider rate limiting for resource-intensive operations
1594 | 10. Log tool usage for debugging and monitoring
1595 | 
1596 | ## Security considerations
1597 | 
1598 | When exposing tools:
1599 | 
1600 | ### Input validation
1601 | 
1602 | *   Validate all parameters against the schema
1603 | *   Sanitize file paths and system commands
1604 | *   Validate URLs and external identifiers
1605 | *   Check parameter sizes and ranges
1606 | *   Prevent command injection
1607 | 
1608 | ### Access control
1609 | 
1610 | *   Implement authentication where needed
1611 | *   Use appropriate authorization checks
1612 | *   Audit tool usage
1613 | *   Rate limit requests
1614 | *   Monitor for abuse
1615 | 
1616 | ### Error handling
1617 | 
1618 | *   Don't expose internal errors to clients
1619 | *   Log security-relevant errors
1620 | *   Handle timeouts appropriately
1621 | *   Clean up resources after errors
1622 | *   Validate return values
1623 | 
1624 | ## Tool discovery and updates
1625 | 
1626 | MCP supports dynamic tool discovery:
1627 | 
1628 | 1.  Clients can list available tools at any time
1629 | 2.  Servers can notify clients when tools change using `notifications/tools/list_changed`
1630 | 3.  Tools can be added or removed during runtime
1631 | 4.  Tool definitions can be updated (though this should be done carefully)
1632 | 
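For example, a server that adds or removes a tool at runtime can emit the change notification so connected clients know to refresh their tool list. The sketch below assumes the TypeScript SDK's generic `notification()` helper on the `Server` instance; the SDK may also provide a dedicated convenience method for this.

```typescript
import { Server } from "@modelcontextprotocol/sdk/server/index.js";

// Call this after the server's set of tools has changed at runtime
async function announceToolListChange(server: Server): Promise<void> {
  await server.notification({
    method: "notifications/tools/list_changed"
  });
}
```
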
1633 | ## Error handling
1634 | 
1635 | Tool errors should be reported within the result object, not as MCP protocol-level errors. This allows the LLM to see and potentially handle the error. When a tool encounters an error:
1636 | 
1637 | 1.  Set `isError` to `true` in the result
1638 | 2.  Include error details in the `content` array
1639 | 
1640 | Here's an example of proper error handling for tools:
1641 | 
1642 | <Tabs>
1643 |   <Tab title="TypeScript">
1644 |     ```typescript
1645 |     try {
1646 |       // Tool operation
1647 |       const result = performOperation();
1648 |       return {
1649 |         content: [
1650 |           {
1651 |             type: "text",
1652 |             text: `Operation successful: ${result}`
1653 |           }
1654 |         ]
1655 |       };
1656 |     } catch (error) {
1657 |       return {
1658 |         isError: true,
1659 |         content: [
1660 |           {
1661 |             type: "text",
1662 |             text: `Error: ${error.message}`
1663 |           }
1664 |         ]
1665 |       };
1666 |     }
1667 |     ```
1668 |   </Tab>
1669 | 
1670 |   <Tab title="Python">
1671 |     ```python
1672 |     try:
1673 |         # Tool operation
1674 |         result = perform_operation()
1675 |         return types.CallToolResult(
1676 |             content=[
1677 |                 types.TextContent(
1678 |                     type="text",
1679 |                     text=f"Operation successful: {result}"
1680 |                 )
1681 |             ]
1682 |         )
1683 |     except Exception as error:
1684 |         return types.CallToolResult(
1685 |             isError=True,
1686 |             content=[
1687 |                 types.TextContent(
1688 |                     type="text",
1689 |                     text=f"Error: {str(error)}"
1690 |                 )
1691 |             ]
1692 |         )
1693 |     ```
1694 |   </Tab>
1695 | </Tabs>
1696 | 
1697 | This approach allows the LLM to see that an error occurred and potentially take corrective action or request human intervention.
1698 | 
1699 | ## Testing tools
1700 | 
1701 | A comprehensive testing strategy for MCP tools should cover:
1702 | 
1703 | *   **Functional testing**: Verify tools execute correctly with valid inputs and handle invalid inputs appropriately
1704 | *   **Integration testing**: Test tool interaction with external systems using both real and mocked dependencies
1705 | *   **Security testing**: Validate authentication, authorization, input sanitization, and rate limiting
1706 | *   **Performance testing**: Check behavior under load, timeout handling, and resource cleanup
1707 | *   **Error handling**: Ensure tools properly report errors through the MCP protocol and clean up resources
1708 | 
1709 | 
1710 | # Transports
1711 | 
1712 | Learn about MCP's communication mechanisms
1713 | 
1714 | Transports in the Model Context Protocol (MCP) provide the foundation for communication between clients and servers. A transport handles the underlying mechanics of how messages are sent and received.
1715 | 
1716 | ## Message Format
1717 | 
1718 | MCP uses [JSON-RPC](https://www.jsonrpc.org/) 2.0 as its wire format. The transport layer is responsible for converting MCP protocol messages into JSON-RPC format for transmission and converting received JSON-RPC messages back into MCP protocol messages.
1719 | 
1720 | There are three types of JSON-RPC messages used:
1721 | 
1722 | ### Requests
1723 | 
1724 | ```typescript
1725 | {
1726 |   jsonrpc: "2.0",
1727 |   id: number | string,
1728 |   method: string,
1729 |   params?: object
1730 | }
1731 | ```
1732 | 
1733 | ### Responses
1734 | 
1735 | ```typescript
1736 | {
1737 |   jsonrpc: "2.0",
1738 |   id: number | string,
1739 |   result?: object,
1740 |   error?: {
1741 |     code: number,
1742 |     message: string,
1743 |     data?: unknown
1744 |   }
1745 | }
1746 | ```
1747 | 
1748 | ### Notifications
1749 | 
1750 | ```typescript
1751 | {
1752 |   jsonrpc: "2.0",
1753 |   method: string,
1754 |   params?: object
1755 | }
1756 | ```
1757 | 
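For instance, a client listing the server's tools and the corresponding reply would appear on the wire roughly as follows (a hedged illustration; the `id` value is arbitrary):

```typescript
// Request from the client
{ jsonrpc: "2.0", id: 1, method: "tools/list" }

// Response from the server
{ jsonrpc: "2.0", id: 1, result: { tools: [ /* tool definitions */ ] } }
```
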
1758 | ## Built-in Transport Types
1759 | 
1760 | MCP includes two standard transport implementations:
1761 | 
1762 | ### Standard Input/Output (stdio)
1763 | 
1764 | The stdio transport enables communication through standard input and output streams. This is particularly useful for local integrations and command-line tools.
1765 | 
1766 | Use stdio when:
1767 | 
1768 | *   Building command-line tools
1769 | *   Implementing local integrations
1770 | *   Needing simple process communication
1771 | *   Working with shell scripts
1772 | 
1773 | <Tabs>
1774 |   <Tab title="TypeScript (Server)">
1775 |     ```typescript
1776 |     const server = new Server({
1777 |       name: "example-server",
1778 |       version: "1.0.0"
1779 |     }, {
1780 |       capabilities: {}
1781 |     });
1782 | 
1783 |     const transport = new StdioServerTransport();
1784 |     await server.connect(transport);
1785 |     ```
1786 |   </Tab>
1787 | 
1788 |   <Tab title="TypeScript (Client)">
1789 |     ```typescript
1790 |     const client = new Client({
1791 |       name: "example-client",
1792 |       version: "1.0.0"
1793 |     }, {
1794 |       capabilities: {}
1795 |     });
1796 | 
1797 |     const transport = new StdioClientTransport({
1798 |       command: "./server",
1799 |       args: ["--option", "value"]
1800 |     });
1801 |     await client.connect(transport);
1802 |     ```
1803 |   </Tab>
1804 | 
1805 |   <Tab title="Python (Server)">
1806 |     ```python
1807 |     app = Server("example-server")
1808 | 
1809 |     async with stdio_server() as streams:
1810 |         await app.run(
1811 |             streams[0],
1812 |             streams[1],
1813 |             app.create_initialization_options()
1814 |         )
1815 |     ```
1816 |   </Tab>
1817 | 
1818 |   <Tab title="Python (Client)">
1819 |     ```python
1820 |     params = StdioServerParameters(
1821 |         command="./server",
1822 |         args=["--option", "value"]
1823 |     )
1824 | 
1825 |     async with stdio_client(params) as streams:
1826 |         async with ClientSession(streams[0], streams[1]) as session:
1827 |             await session.initialize()
1828 |     ```
1829 |   </Tab>
1830 | </Tabs>
1831 | 
1832 | ### Server-Sent Events (SSE)
1833 | 
1834 | The SSE transport enables server-to-client streaming, with HTTP POST requests used for client-to-server communication.
1835 | 
1836 | Use SSE when:
1837 | 
1838 | *   Only server-to-client streaming is needed
1839 | *   Working with restricted networks
1840 | *   Implementing simple updates
1841 | 
1842 | <Tabs>
1843 |   <Tab title="TypeScript (Server)">
1844 |     ```typescript
1845 |     const server = new Server({
1846 |       name: "example-server",
1847 |       version: "1.0.0"
1848 |     }, {
1849 |       capabilities: {}
1850 |     });
1851 | 
1852 |     const transport = new SSEServerTransport("/message", response);
1853 |     await server.connect(transport);
1854 |     ```
1855 |   </Tab>
1856 | 
1857 |   <Tab title="TypeScript (Client)">
1858 |     ```typescript
1859 |     const client = new Client({
1860 |       name: "example-client",
1861 |       version: "1.0.0"
1862 |     }, {
1863 |       capabilities: {}
1864 |     });
1865 | 
1866 |     const transport = new SSEClientTransport(
1867 |       new URL("http://localhost:3000/sse")
1868 |     );
1869 |     await client.connect(transport);
1870 |     ```
1871 |   </Tab>
1872 | 
1873 |   <Tab title="Python (Server)">
1874 |     ```python
1875 |     from mcp.server.sse import SseServerTransport
1876 |     from starlette.applications import Starlette
1877 |     from starlette.routing import Route
1878 | 
1879 |     app = Server("example-server")
1880 |     sse = SseServerTransport("/messages")
1881 | 
1882 |     async def handle_sse(scope, receive, send):
1883 |         async with sse.connect_sse(scope, receive, send) as streams:
1884 |             await app.run(streams[0], streams[1], app.create_initialization_options())
1885 | 
1886 |     async def handle_messages(scope, receive, send):
1887 |         await sse.handle_post_message(scope, receive, send)
1888 | 
1889 |     starlette_app = Starlette(
1890 |         routes=[
1891 |             Route("/sse", endpoint=handle_sse),
1892 |             Route("/messages", endpoint=handle_messages, methods=["POST"]),
1893 |         ]
1894 |     )
1895 |     ```
1896 |   </Tab>
1897 | 
1898 |   <Tab title="Python (Client)">
1899 |     ```python
1900 |     async with sse_client("http://localhost:8000/sse") as streams:
1901 |         async with ClientSession(streams[0], streams[1]) as session:
1902 |             await session.initialize()
1903 |     ```
1904 |   </Tab>
1905 | </Tabs>
1906 | 
1907 | ## Custom Transports
1908 | 
1909 | MCP makes it easy to implement custom transports for specific needs. Any transport implementation just needs to conform to the Transport interface shown below.
1910 | 
1911 | You can implement custom transports for:
1912 | 
1913 | *   Custom network protocols
1914 | *   Specialized communication channels
1915 | *   Integration with existing systems
1916 | *   Performance optimization
1917 | 
1918 | <Tabs>
1919 |   <Tab title="TypeScript">
1920 |     ```typescript
1921 |     interface Transport {
1922 |       // Start processing messages
1923 |       start(): Promise<void>;
1924 | 
1925 |       // Send a JSON-RPC message
1926 |       send(message: JSONRPCMessage): Promise<void>;
1927 | 
1928 |       // Close the connection
1929 |       close(): Promise<void>;
1930 | 
1931 |       // Callbacks
1932 |       onclose?: () => void;
1933 |       onerror?: (error: Error) => void;
1934 |       onmessage?: (message: JSONRPCMessage) => void;
1935 |     }
1936 |     ```
1937 |   </Tab>
1938 | 
1939 |   <Tab title="Python">
1940 |     Note that while MCP Servers are often implemented with asyncio, we recommend
1941 |     implementing low-level interfaces like transports with `anyio` for wider compatibility.
1942 | 
1943 |     ```python
1944 |     @asynccontextmanager
1945 |     async def create_transport(
1946 |         read_stream: MemoryObjectReceiveStream[JSONRPCMessage | Exception],
1947 |         write_stream: MemoryObjectSendStream[JSONRPCMessage]
1948 |     ):
1949 |         """
1950 |         Transport interface for MCP.
1951 | 
1952 |         Args:
1953 |             read_stream: Stream to read incoming messages from
1954 |             write_stream: Stream to write outgoing messages to
1955 |         """
1956 |         async with anyio.create_task_group() as tg:
1957 |             try:
1958 |                 # Start processing messages
1959 |                 tg.start_soon(lambda: process_messages(read_stream))
1960 | 
1961 |                 # Send messages
1962 |                 async with write_stream:
1963 |                     yield write_stream
1964 | 
1965 |             except Exception as exc:
1966 |                 # Handle errors
1967 |                 raise exc
1968 |             finally:
1969 |                 # Clean up
1970 |                 tg.cancel_scope.cancel()
1971 |                 await write_stream.aclose()
1972 |                 await read_stream.aclose()
1973 |     ```
1974 |   </Tab>
1975 | </Tabs>
1976 | 
1977 | ## Error Handling
1978 | 
1979 | Transport implementations should handle various error scenarios:
1980 | 
1981 | 1.  Connection errors
1982 | 2.  Message parsing errors
1983 | 3.  Protocol errors
1984 | 4.  Network timeouts
1985 | 5.  Resource cleanup
1986 | 
1987 | Example error handling:
1988 | 
1989 | <Tabs>
1990 |   <Tab title="TypeScript">
1991 |     ```typescript
1992 |     class ExampleTransport implements Transport {
1993 |       async start() {
1994 |         try {
1995 |           // Connection logic
1996 |         } catch (error) {
1997 |           this.onerror?.(new Error(`Failed to connect: ${error}`));
1998 |           throw error;
1999 |         }
2000 |       }
2001 | 
2002 |       async send(message: JSONRPCMessage) {
2003 |         try {
2004 |           // Sending logic
2005 |         } catch (error) {
2006 |           this.onerror?.(new Error(`Failed to send message: ${error}`));
2007 |           throw error;
2008 |         }
2009 |       }
2010 |     }
2011 |     ```
2012 |   </Tab>
2013 | 
2014 |   <Tab title="Python">
2015 |     Note that while MCP Servers are often implemented with asyncio, we recommend
2016 |     implementing low-level interfaces like transports with `anyio` for wider compatibility.
2017 | 
2018 |     ```python
2019 |     @asynccontextmanager
2020 |     async def example_transport(scope: Scope, receive: Receive, send: Send):
2021 |         try:
2022 |             # Create streams for bidirectional communication
2023 |             read_stream_writer, read_stream = anyio.create_memory_object_stream(0)
2024 |             write_stream, write_stream_reader = anyio.create_memory_object_stream(0)
2025 | 
2026 |             async def message_handler():
2027 |                 try:
2028 |                     async with read_stream_writer:
2029 |                         # Message handling logic
2030 |                         pass
2031 |                 except Exception as exc:
2032 |                     logger.error(f"Failed to handle message: {exc}")
2033 |                     raise exc
2034 | 
2035 |             async with anyio.create_task_group() as tg:
2036 |                 tg.start_soon(message_handler)
2037 |                 try:
2038 |                     # Yield streams for communication
2039 |                     yield read_stream, write_stream
2040 |                 except Exception as exc:
2041 |                     logger.error(f"Transport error: {exc}")
2042 |                     raise exc
2043 |                 finally:
2044 |                     tg.cancel_scope.cancel()
2045 |                     await write_stream.aclose()
2046 |                     await read_stream.aclose()
2047 |         except Exception as exc:
2048 |             logger.error(f"Failed to initialize transport: {exc}")
2049 |             raise exc
2050 |     ```
2051 |   </Tab>
2052 | </Tabs>
2053 | 
2054 | ## Best Practices
2055 | 
2056 | When implementing or using MCP transport:
2057 | 
2058 | 1.  Handle connection lifecycle properly
2059 | 2.  Implement proper error handling
2060 | 3.  Clean up resources on connection close
2061 | 4.  Use appropriate timeouts
2062 | 5.  Validate messages before sending
2063 | 6.  Log transport events for debugging
2064 | 7.  Implement reconnection logic when appropriate
2065 | 8.  Handle backpressure in message queues
2066 | 9.  Monitor connection health
2067 | 10. Implement proper security measures
2068 | 
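Reconnection logic (item 7 above), for example, can be as simple as retrying `start()` with exponential backoff. This is a hedged sketch that assumes the `Transport` interface shown in the Custom Transports section and an application-provided `makeTransport` factory:

```typescript
async function startWithRetry(
  makeTransport: () => Transport,
  maxAttempts = 5
): Promise<Transport> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const transport = makeTransport();
    try {
      await transport.start();
      return transport;
    } catch (error) {
      lastError = error;
      // Back off before the next attempt: 200ms, 400ms, 800ms, ...
      const delayMs = 100 * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError;
}
```
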
2069 | ## Security Considerations
2070 | 
2071 | When implementing transport:
2072 | 
2073 | ### Authentication and Authorization
2074 | 
2075 | *   Implement proper authentication mechanisms
2076 | *   Validate client credentials
2077 | *   Use secure token handling
2078 | *   Implement authorization checks
2079 | 
2080 | ### Data Security
2081 | 
2082 | *   Use TLS for network transport
2083 | *   Encrypt sensitive data
2084 | *   Validate message integrity
2085 | *   Implement message size limits
2086 | *   Sanitize input data
2087 | 
2088 | ### Network Security
2089 | 
2090 | *   Implement rate limiting
2091 | *   Use appropriate timeouts
2092 | *   Handle denial of service scenarios
2093 | *   Monitor for unusual patterns
2094 | *   Implement proper firewall rules
2095 | 
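Rate limiting, for example, can be enforced with a small token bucket checked before each incoming message is processed; the capacity and refill rate below are illustrative assumptions:

```typescript
class TokenBucket {
  private tokens: number;
  private lastRefill = Date.now();

  constructor(private capacity = 20, private refillPerSecond = 10) {
    this.tokens = capacity;
  }

  // Returns true if the message may be processed, false if it should be rejected
  tryConsume(): boolean {
    const now = Date.now();
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSeconds * this.refillPerSecond);
    this.lastRefill = now;
    if (this.tokens < 1) {
      return false;
    }
    this.tokens -= 1;
    return true;
  }
}
```

A transport or server wrapper would call `tryConsume()` once per incoming message and drop or reject traffic whenever it returns `false`.
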
2096 | ## Debugging Transport
2097 | 
2098 | Tips for debugging transport issues:
2099 | 
2100 | 1.  Enable debug logging
2101 | 2.  Monitor message flow
2102 | 3.  Check connection states
2103 | 4.  Validate message formats
2104 | 5.  Test error scenarios
2105 | 6.  Use network analysis tools
2106 | 7.  Implement health checks
2107 | 8.  Monitor resource usage
2108 | 9.  Test edge cases
2109 | 10. Use proper error tracking
2110 | 
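Tips 1 and 2 (debug logging and monitoring message flow) can be combined by wrapping an existing transport so every JSON-RPC message is mirrored to stderr. A hedged sketch, assuming the `Transport` and `JSONRPCMessage` types from the Custom Transports section above:

```typescript
function withLogging(inner: Transport): Transport {
  const wrapper: Transport = {
    async start() {
      await inner.start();
    },
    async send(message: JSONRPCMessage) {
      console.error("[mcp] ->", JSON.stringify(message));
      await inner.send(message);
    },
    async close() {
      await inner.close();
    },
  };

  // Mirror inbound traffic and forward events to the wrapper's callbacks
  inner.onmessage = (message) => {
    console.error("[mcp] <-", JSON.stringify(message));
    wrapper.onmessage?.(message);
  };
  inner.onerror = (error) => wrapper.onerror?.(error);
  inner.onclose = () => wrapper.onclose?.();

  return wrapper;
}
```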
2111 | 
2112 | # Python
2113 | 
2114 | Create a simple MCP server in Python in 15 minutes
2115 | 
2116 | Let's build your first MCP server in Python! We'll create a weather server that provides current weather data as a resource and lets Claude fetch forecasts using tools.
2117 | 
2118 | <Note>
2119 |   This guide uses the OpenWeatherMap API. You'll need a free API key from [OpenWeatherMap](https://openweathermap.org/api) to follow along.
2120 | </Note>
2121 | 
2122 | ## Prerequisites
2123 | 
2124 | <Info>
2125 |   The following steps are for macOS. Guides for other platforms are coming soon.
2126 | </Info>
2127 | 
2128 | <Steps>
2129 |   <Step title="Install Python">
2130 |     You'll need Python 3.10 or higher:
2131 | 
2132 |     ```bash
2133 |     python --version  # Should be 3.10 or higher
2134 |     ```
2135 |   </Step>
2136 | 
2137 |   <Step title="Install uv via homebrew">
2138 |     See [https://docs.astral.sh/uv/](https://docs.astral.sh/uv/) for more information.
2139 | 
2140 |     ```bash
2141 |     brew install uv
2142 |     uv --version # Should be 0.4.18 or higher
2143 |     ```
2144 |   </Step>
2145 | 
2146 |   <Step title="Create a new project using the MCP project creator">
2147 |     ```bash
2148 |     uvx create-mcp-server --path weather_service
2149 |     cd weather_service
2150 |     ```
2151 |   </Step>
2152 | 
2153 |   <Step title="Install additional dependencies">
2154 |     ```bash
2155 |     uv add httpx python-dotenv
2156 |     ```
2157 |   </Step>
2158 | 
2159 |   <Step title="Set up environment">
2160 |     Create `.env`:
2161 | 
2162 |     ```bash
2163 |     OPENWEATHER_API_KEY=your-api-key-here
2164 |     ```
2165 |   </Step>
2166 | </Steps>
2167 | 
2168 | ## Create your server
2169 | 
2170 | <Steps>
2171 |   <Step title="Add the base imports and setup">
2172 |     In `weather_service/src/weather_service/server.py`
2173 | 
2174 |     ```python
2175 |     import os
2176 |     import json
2177 |     import logging
2178 |     from datetime import datetime, timedelta
2179 |     from collections.abc import Sequence
2180 |     from functools import lru_cache
2181 |     from typing import Any
2182 | 
2183 |     import httpx
2184 |     import asyncio
2185 |     from dotenv import load_dotenv
2186 |     from mcp.server import Server
2187 |     from mcp.types import (
2188 |         Resource,
2189 |         Tool,
2190 |         TextContent,
2191 |         ImageContent,
2192 |         EmbeddedResource,
2193 |         LoggingLevel
2194 |     )
2195 |     from pydantic import AnyUrl
2196 | 
2197 |     # Load environment variables
2198 |     load_dotenv()
2199 | 
2200 |     # Configure logging
2201 |     logging.basicConfig(level=logging.INFO)
2202 |     logger = logging.getLogger("weather-server")
2203 | 
2204 |     # API configuration
2205 |     API_KEY = os.getenv("OPENWEATHER_API_KEY")
2206 |     if not API_KEY:
2207 |         raise ValueError("OPENWEATHER_API_KEY environment variable required")
2208 | 
2209 |     API_BASE_URL = "http://api.openweathermap.org/data/2.5"
2210 |     DEFAULT_CITY = "London"
2211 |     CURRENT_WEATHER_ENDPOINT = "weather"
2212 |     FORECAST_ENDPOINT = "forecast"
2213 | 
2214 |     # The rest of our server implementation will go here
2215 |     ```
2216 |   </Step>
2217 | 
2218 |   <Step title="Add weather fetching functionality">
2219 |     Add this functionality:
2220 | 
2221 |     ```python
2222 |     # Create reusable params
2223 |     http_params = {
2224 |         "appid": API_KEY,
2225 |         "units": "metric"
2226 |     }
2227 | 
2228 |     async def fetch_weather(city: str) -> dict[str, Any]:
2229 |         async with httpx.AsyncClient() as client:
2230 |             response = await client.get(
2231 |                 f"{API_BASE_URL}/weather",
2232 |                 params={"q": city, **http_params}
2233 |             )
2234 |             response.raise_for_status()
2235 |             data = response.json()
2236 | 
2237 |         return {
2238 |             "temperature": data["main"]["temp"],
2239 |             "conditions": data["weather"][0]["description"],
2240 |             "humidity": data["main"]["humidity"],
2241 |             "wind_speed": data["wind"]["speed"],
2242 |             "timestamp": datetime.now().isoformat()
2243 |         }
2244 | 
2245 | 
2246 |     app = Server("weather-server")
2247 |     ```
2248 |   </Step>
2249 | 
2250 |   <Step title="Implement resource handlers">
2251 |     Add these resource-related handlers to the server:
2252 | 
2253 |     ```python
2254 |     app = Server("weather-server")
2255 | 
2256 |     @app.list_resources()
2257 |     async def list_resources() -> list[Resource]:
2258 |         """List available weather resources."""
2259 |         uri = AnyUrl(f"weather://{DEFAULT_CITY}/current")
2260 |         return [
2261 |             Resource(
2262 |                 uri=uri,
2263 |                 name=f"Current weather in {DEFAULT_CITY}",
2264 |                 mimeType="application/json",
2265 |                 description="Real-time weather data"
2266 |             )
2267 |         ]
2268 | 
2269 |     @app.read_resource()
2270 |     async def read_resource(uri: AnyUrl) -> str:
2271 |         """Read current weather data for a city."""
2272 |         city = DEFAULT_CITY
2273 |         if str(uri).startswith("weather://") and str(uri).endswith("/current"):
2274 |             city = str(uri).split("/")[-2]
2275 |         else:
2276 |             raise ValueError(f"Unknown resource: {uri}")
2277 | 
2278 |         try:
2279 |             weather_data = await fetch_weather(city)
2280 |             return json.dumps(weather_data, indent=2)
2281 |         except httpx.HTTPError as e:
2282 |             raise RuntimeError(f"Weather API error: {str(e)}")
2283 | 
2284 |     ```
2285 |   </Step>
2286 | 
2287 |   <Step title="Implement tool handlers">
2288 |     Add these tool-related handlers:
2289 | 
2290 |     ```python
2291 |     app = Server("weather-server")
2292 | 
2293 |     # Resource implementation ...
2294 | 
2295 |     @app.list_tools()
2296 |     async def list_tools() -> list[Tool]:
2297 |         """List available weather tools."""
2298 |         return [
2299 |             Tool(
2300 |                 name="get_forecast",
2301 |                 description="Get weather forecast for a city",
2302 |                 inputSchema={
2303 |                     "type": "object",
2304 |                     "properties": {
2305 |                         "city": {
2306 |                             "type": "string",
2307 |                             "description": "City name"
2308 |                         },
2309 |                         "days": {
2310 |                             "type": "number",
2311 |                             "description": "Number of days (1-5)",
2312 |                             "minimum": 1,
2313 |                             "maximum": 5
2314 |                         }
2315 |                     },
2316 |                     "required": ["city"]
2317 |                 }
2318 |             )
2319 |         ]
2320 | 
2321 |     @app.call_tool()
2322 |     async def call_tool(name: str, arguments: Any) -> Sequence[TextContent | ImageContent | EmbeddedResource]:
2323 |         """Handle tool calls for weather forecasts."""
2324 |         if name != "get_forecast":
2325 |             raise ValueError(f"Unknown tool: {name}")
2326 | 
2327 |         if not isinstance(arguments, dict) or "city" not in arguments:
2328 |             raise ValueError("Invalid forecast arguments")
2329 | 
2330 |         city = arguments["city"]
2331 |         days = min(int(arguments.get("days", 3)), 5)
2332 | 
2333 |         try:
2334 |             async with httpx.AsyncClient() as client:
2335 |                 response = await client.get(
2336 |                     f"{API_BASE_URL}/{FORECAST_ENDPOINT}",
2337 |                     params={
2338 |                         "q": city,
2339 |                         "cnt": days * 8,  # API returns 3-hour intervals
2340 |                         **http_params,
2341 |                     }
2342 |                 )
2343 |                 response.raise_for_status()
2344 |                 data = response.json()
2345 | 
2346 |             forecasts = []
2347 |             for i in range(0, len(data["list"]), 8):
2348 |                 day_data = data["list"][i]
2349 |                 forecasts.append({
2350 |                     "date": day_data["dt_txt"].split()[0],
2351 |                     "temperature": day_data["main"]["temp"],
2352 |                     "conditions": day_data["weather"][0]["description"]
2353 |                 })
2354 | 
2355 |             return [
2356 |                 TextContent(
2357 |                     type="text",
2358 |                     text=json.dumps(forecasts, indent=2)
2359 |                 )
2360 |             ]
2361 |         except httpx.HTTPError as e:
2362 |             logger.error(f"Weather API error: {str(e)}")
2363 |             raise RuntimeError(f"Weather API error: {str(e)}")
2364 |     ```
2365 |   </Step>
2366 | 
2367 |   <Step title="Add the main function">
2368 |     Add this to the end of `weather_service/src/weather_service/server.py`:
2369 | 
2370 |     ```python
2371 |     async def main():
2372 |         # Import here to avoid issues with event loops
2373 |         from mcp.server.stdio import stdio_server
2374 | 
2375 |         async with stdio_server() as (read_stream, write_stream):
2376 |             await app.run(
2377 |                 read_stream,
2378 |                 write_stream,
2379 |                 app.create_initialization_options()
2380 |             )
2381 |     ```
2382 |   </Step>
2383 | 
2384 |   <Step title="Check your entry point in __init__.py">
2385 |     Add this to the end of `weather_service/src/weather_service/__init__.py`:
2386 | 
2387 |     ```python
2388 |     from . import server
2389 |     import asyncio
2390 | 
2391 |     def main():
2392 |        """Main entry point for the package."""
2393 |        asyncio.run(server.main())
2394 | 
2395 |     # Optionally expose other important items at package level
2396 |     __all__ = ['main', 'server']
2397 |     ```
2398 |   </Step>
2399 | </Steps>
2400 | 
2401 | ## Connect to Claude Desktop
2402 | 
2403 | <Steps>
2404 |   <Step title="Update Claude config">
2405 |     Add to `claude_desktop_config.json`:
2406 | 
2407 |     ```json
2408 |     {
2409 |       "mcpServers": {
2410 |         "weather": {
2411 |           "command": "uv",
2412 |           "args": [
2413 |             "--directory",
2414 |             "path/to/your/project",
2415 |             "run",
2416 |             "weather-service"
2417 |           ],
2418 |           "env": {
2419 |             "OPENWEATHER_API_KEY": "your-api-key"
2420 |           }
2421 |         }
2422 |       }
2423 |     }
2424 |     ```
2425 |   </Step>
2426 | 
2427 |   <Step title="Restart Claude">
2428 |     1.  Quit Claude completely
2429 | 
2430 |     2.  Start Claude again
2431 | 
2432 |     3.  Look for your weather server in the 🔌 menu
2433 |   </Step>
2434 | </Steps>
2435 | 
2436 | ## Try it out!
2437 | 
2438 | <AccordionGroup>
2439 |   <Accordion title="Check Current Weather" active>
2440 |     Ask Claude:
2441 | 
2442 |     ```
2443 |     What's the current weather in San Francisco? Can you analyze the conditions and tell me if it's a good day for outdoor activities?
2444 |     ```
2445 |   </Accordion>
2446 | 
2447 |   <Accordion title="Get a Forecast">
2448 |     Ask Claude:
2449 | 
2450 |     ```
2451 |     Can you get me a 5-day forecast for Tokyo and help me plan what clothes to pack for my trip?
2452 |     ```
2453 |   </Accordion>
2454 | 
2455 |   <Accordion title="Compare Weather">
2456 |     Ask Claude:
2457 | 
2458 |     ```
2459 |     Can you analyze the forecast for both Tokyo and San Francisco and tell me which city would be better for outdoor photography this week?
2460 |     ```
2461 |   </Accordion>
2462 | </AccordionGroup>
2463 | 
2464 | ## Understanding the code
2465 | 
2466 | <Tabs>
2467 |   <Tab title="Type Hints">
2468 |     ```python
2469 |     async def read_resource(uri: AnyUrl) -> str:
2470 |         # ...
2471 |     ```
2472 | 
2473 |     Python type hints help catch errors early and improve code maintainability.
2474 |   </Tab>
2475 | 
2476 |   <Tab title="Resources">
2477 |     ```python
2478 |     @app.list_resources()
2479 |     async def list_resources(self) -> ListResourcesResult:
2480 |         return ListResourcesResult(
2481 |             resources=[
2482 |                 Resource(
2483 |                     uri=f"weather://{DEFAULT_CITY}/current",
2484 |                     name=f"Current weather in {DEFAULT_CITY}",
2485 |                     mimeType="application/json",
2486 |                     description="Real-time weather data"
2487 |                 )
2488 |             ]
2489 |         )
2490 |     ```
2491 | 
2492 |     Resources provide data that Claude can access as context.
2493 |   </Tab>
2494 | 
2495 |   <Tab title="Tools">
2496 |     ```python
2497 |     Tool(
2498 |         name="get_forecast",
2499 |         description="Get weather forecast for a city",
2500 |         inputSchema={
2501 |             "type": "object",
2502 |             "properties": {
2503 |                 "city": {
2504 |                     "type": "string",
2505 |                     "description": "City name"
2506 |                 },
2507 |                 "days": {
2508 |                     "type": "number",
2509 |                     "description": "Number of days (1-5)",
2510 |                     "minimum": 1,
2511 |                     "maximum": 5
2512 |                 }
2513 |             },
2514 |             "required": ["city"]
2515 |         }
2516 |     )
2517 |     ```
2518 | 
2519 |     Tools let Claude take actions through your server with validated inputs.
2520 |   </Tab>
2521 | 
2522 |   <Tab title="Server Structure">
2523 |     ```python
2524 |     # Create server instance with name
2525 |     app = Server("weather-server")
2526 | 
2527 |     # Register resource handler
2528 |     @app.list_resources()
2529 |     async def list_resources() -> list[Resource]:
2530 |         """List available resources"""
2531 |         return [...]
2532 | 
2533 |     # Register tool handler
2534 |     @app.call_tool()
2535 |     async def call_tool(name: str, arguments: Any) -> Sequence[TextContent]:
2536 |         """Handle tool execution"""
2537 |         return [...]
2538 | 
2539 |     # Register additional handlers
2540 |     @app.read_resource()
2541 |     ...
2542 |     @app.list_tools()
2543 |     ...
2544 |     ```
2545 | 
2546 |     The MCP server uses a simple app pattern - create a Server instance and register handlers with decorators. Each handler maps to a specific MCP protocol operation.
2547 |   </Tab>
2548 | </Tabs>
2549 | 
2550 | ## Best practices
2551 | 
2552 | <CardGroup cols={1}>
2553 |   <Card title="Error Handling" icon="shield">
2554 |     ```python
2555 |     try:
2556 |         async with httpx.AsyncClient() as client:
2557 |             response = await client.get(..., params={..., **http_params})
2558 |             response.raise_for_status()
2559 |     except httpx.HTTPError as e:
2560 |         raise McpError(
2561 |             ErrorCode.INTERNAL_ERROR,
2562 |             f"API error: {str(e)}"
2563 |         )
2564 |     ```
2565 |   </Card>
2566 | 
2567 |   <Card title="Type Validation" icon="check">
2568 |     ```python
2569 |     if not isinstance(args, dict) or "city" not in args:
2570 |         raise McpError(
2571 |             ErrorCode.INVALID_PARAMS,
2572 |             "Invalid forecast arguments"
2573 |         )
2574 |     ```
2575 |   </Card>
2576 | 
2577 |   <Card title="Environment Variables" icon="gear">
2578 |     ```python
2579 |     if not API_KEY:
2580 |         raise ValueError("OPENWEATHER_API_KEY is required")
2581 |     ```
2582 |   </Card>
2583 | </CardGroup>
2584 | 
2585 | ## Available transports
2586 | 
2587 | While this guide uses stdio transport, MCP supports additional transport options:
2588 | 
2589 | ### SSE (Server-Sent Events)
2590 | 
2591 | ```python
2592 | from mcp.server.sse import SseServerTransport
2593 | from starlette.applications import Starlette
2594 | from starlette.routing import Route
2595 | 
2596 | # Create SSE transport with endpoint
2597 | sse = SseServerTransport("/messages")
2598 | 
2599 | # Handler for SSE connections
2600 | async def handle_sse(scope, receive, send):
2601 |     async with sse.connect_sse(scope, receive, send) as streams:
2602 |         await app.run(
2603 |             streams[0], streams[1], app.create_initialization_options()
2604 |         )
2605 | 
2606 | # Handler for client messages
2607 | async def handle_messages(scope, receive, send):
2608 |     await sse.handle_post_message(scope, receive, send)
2609 | 
2610 | # Create Starlette app with routes
2611 | app = Starlette(
2612 |     debug=True,
2613 |     routes=[
2614 |         Route("/sse", endpoint=handle_sse),
2615 |         Route("/messages", endpoint=handle_messages, methods=["POST"]),
2616 |     ],
2617 | )
2618 | 
2619 | # Run with any ASGI server
2620 | import uvicorn
2621 | uvicorn.run(app, host="0.0.0.0", port=8000)
2622 | ```
2623 | 
2624 | ## Advanced features
2625 | 
2626 | <Steps>
2627 |   <Step title="Understanding Request Context">
2628 |     The request context provides access to the current request's metadata and the active client session. Access it through `server.request_context`:
2629 | 
2630 |     ```python
2631 |     @app.call_tool()
2632 |     async def call_tool(name: str, arguments: Any) -> Sequence[TextContent]:
2633 |         # Access the current request context
2634 |         ctx = app.request_context
2635 | 
2636 |         # Get request metadata like progress tokens
2637 |         if progress_token := ctx.meta.progressToken:
2638 |             # Send progress notifications via the session
2639 |             await ctx.session.send_progress_notification(
2640 |                 progress_token=progress_token,
2641 |                 progress=0.5,
2642 |                 total=1.0
2643 |             )
2644 | 
2645 |         # Sample from the LLM client
2646 |         result = await ctx.session.create_message(
2647 |             messages=[
2648 |                 SamplingMessage(
2649 |                     role="user",
2650 |                     content=TextContent(
2651 |                         type="text",
2652 |                         text="Analyze this weather data: " + json.dumps(arguments)
2653 |                     )
2654 |                 )
2655 |             ],
2656 |             max_tokens=100
2657 |         )
2658 | 
2659 |         return [TextContent(type="text", text=result.content.text)]
2660 |     ```
2661 |   </Step>
2662 | 
2663 |   <Step title="Add caching">
2664 |     ```python
2665 |     # Cache settings
2666 |     cache_timeout = timedelta(minutes=15)
2667 |     last_cache_time = None
2668 |     cached_weather = None
2669 | 
2670 |     async def fetch_weather(city: str) -> dict[str, Any]:
2671 |         global cached_weather, last_cache_time
2672 | 
2673 |         now = datetime.now()
2674 |         if (cached_weather is None or
2675 |             last_cache_time is None or
2676 |             now - last_cache_time > cache_timeout):
2677 | 
2678 |             async with httpx.AsyncClient() as client:
2679 |                 response = await client.get(
2680 |                     f"{API_BASE_URL}/{CURRENT_WEATHER_ENDPOINT}",
2681 |                     params={"q": city, **http_params}
2682 |                 )
2683 |                 response.raise_for_status()
2684 |                 data = response.json()
2685 | 
2686 |             cached_weather = {
2687 |                 "temperature": data["main"]["temp"],
2688 |                 "conditions": data["weather"][0]["description"],
2689 |                 "humidity": data["main"]["humidity"],
2690 |                 "wind_speed": data["wind"]["speed"],
2691 |                 "timestamp": datetime.now().isoformat()
2692 |             }
2693 |             last_cache_time = now
2694 | 
2695 |         return cached_weather
2696 |     ```
2697 |   </Step>
2698 | 
2699 |   <Step title="Add progress notifications">
2700 |     ```python
2701 |     @app.call_tool()
2702 |     async def call_tool(name: str, arguments: Any) -> Sequence[TextContent | ImageContent | EmbeddedResource]:
2703 |         if progress_token := app.request_context.meta.progressToken:
2704 |             # Send progress notifications
2705 |             await app.request_context.session.send_progress_notification(
2706 |                 progress_token=progress_token,
2707 |                 progress=1,
2708 |                 total=2
2709 |             )
2710 | 
2711 |             # Fetch data...
2712 | 
2713 |             await app.request_context.session.send_progress_notification(
2714 |                 progress_token=progress_token,
2715 |                 progress=2,
2716 |                 total=2
2717 |             )
2718 | 
2719 |         # Rest of the method implementation...
2720 |     ```
2721 |   </Step>
2722 | 
2723 |   <Step title="Add logging support">
2724 |     ```python
2725 |     # Set up logging
2726 |     logger = logging.getLogger("weather-server")
2727 |     logger.setLevel(logging.INFO)
2728 | 
2729 |     @app.set_logging_level()
2730 |     async def set_logging_level(level: LoggingLevel) -> EmptyResult:
2731 |         logger.setLevel(level.upper())
2732 |         await app.request_context.session.send_log_message(
2733 |             level="info",
2734 |             data=f"Log level set to {level}",
2735 |             logger="weather-server"
2736 |         )
2737 |         return EmptyResult()
2738 | 
2739 |     # Use logger throughout the code
2740 |     # For example:
2741 |     # logger.info("Weather data fetched successfully")
2742 |     # logger.error(f"Error fetching weather data: {str(e)}")
2743 |     ```
2744 |   </Step>
2745 | 
2746 |   <Step title="Add resource templates">
2747 |     ```python
2748 |     @app.list_resource_templates()
2749 |     async def list_resource_templates() -> list[ResourceTemplate]:
2750 |         return [
2751 |             ResourceTemplate(
2752 |                 uriTemplate="weather://{city}/current",
2753 |                 name="Current weather for any city",
2754 |                 mimeType="application/json"
2755 |             )
2756 |         ]
2757 |     ```
2758 |   </Step>
2759 | </Steps>
2760 | 
2761 | ## Testing
2762 | 
2763 | <Steps>
2764 |   <Step title="Create test file">
2765 |     Create `tests/weather_test.py`:
2766 | 
2767 |     ```python
2768 |     import pytest
2769 |     import os
2770 |     from unittest.mock import patch, Mock
2771 |     from datetime import datetime
2772 |     import json
2773 |     from pydantic import AnyUrl
2774 |     os.environ["OPENWEATHER_API_KEY"] = "TEST"
2775 | 
2776 |     from weather_service.server import (
2777 |         fetch_weather,
2778 |         read_resource,
2779 |         call_tool,
2780 |         list_resources,
2781 |         list_tools,
2782 |         DEFAULT_CITY
2783 |     )
2784 | 
2785 |     @pytest.fixture
2786 |     def anyio_backend():
2787 |         return "asyncio"
2788 | 
2789 |     @pytest.fixture
2790 |     def mock_weather_response():
2791 |         return {
2792 |             "main": {
2793 |                 "temp": 20.5,
2794 |                 "humidity": 65
2795 |             },
2796 |             "weather": [
2797 |                 {"description": "scattered clouds"}
2798 |             ],
2799 |             "wind": {
2800 |                 "speed": 3.6
2801 |             }
2802 |         }
2803 | 
2804 |     @pytest.fixture
2805 |     def mock_forecast_response():
2806 |         return {
2807 |             "list": [
2808 |                 {
2809 |                     "dt_txt": "2024-01-01 12:00:00",
2810 |                     "main": {"temp": 18.5},
2811 |                     "weather": [{"description": "sunny"}]
2812 |                 },
2813 |                 {
2814 |                     "dt_txt": "2024-01-02 12:00:00",
2815 |                     "main": {"temp": 17.2},
2816 |                     "weather": [{"description": "cloudy"}]
2817 |                 }
2818 |             ]
2819 |         }
2820 | 
2821 |     @pytest.mark.anyio
2822 |     async def test_fetch_weather(mock_weather_response):
2823 |         with patch('httpx.AsyncClient.get') as mock_get:
2824 |             mock_get.return_value.json.return_value = mock_weather_response
2825 |             mock_get.return_value.raise_for_status = Mock()
2826 | 
2827 |             weather = await fetch_weather("London")
2828 | 
2829 |             assert weather["temperature"] == 20.5
2830 |             assert weather["conditions"] == "scattered clouds"
2831 |             assert weather["humidity"] == 65
2832 |             assert weather["wind_speed"] == 3.6
2833 |             assert "timestamp" in weather
2834 | 
2835 |     @pytest.mark.anyio
2836 |     async def test_read_resource():
2837 |         with patch('weather_service.server.fetch_weather') as mock_fetch:
2838 |             mock_fetch.return_value = {
2839 |                 "temperature": 20.5,
2840 |                 "conditions": "clear sky",
2841 |                 "timestamp": datetime.now().isoformat()
2842 |             }
2843 | 
2844 |             uri = AnyUrl("weather://London/current")
2845 |             result = await read_resource(uri)
2846 | 
2847 |             assert isinstance(result, str)
2848 |             assert "temperature" in result
2849 |             assert "clear sky" in result
2850 | 
2851 |     @pytest.mark.anyio
2852 |     async def test_call_tool(mock_forecast_response):
2853 |         class Response():
2854 |             def raise_for_status(self):
2855 |                 pass
2856 | 
2857 |             def json(self):
2858 |                 return mock_forecast_response
2859 | 
2860 |         class AsyncClient():
2861 |             async def __aenter__(self):
2862 |                 return self
2863 | 
2864 |             async def __aexit__(self, *exc_info):
2865 |                 pass
2866 | 
2867 |             async def get(self, *args, **kwargs):
2868 |                 return Response()
2869 | 
2870 |         with patch('httpx.AsyncClient', new=AsyncClient) as mock_client:
2871 |             result = await call_tool("get_forecast", {"city": "London", "days": 2})
2872 | 
2873 |             assert len(result) == 1
2874 |             assert result[0].type == "text"
2875 |             forecast_data = json.loads(result[0].text)
2876 |             assert len(forecast_data) == 1
2877 |             assert forecast_data[0]["temperature"] == 18.5
2878 |             assert forecast_data[0]["conditions"] == "sunny"
2879 | 
2880 |     @pytest.mark.anyio
2881 |     async def test_list_resources():
2882 |         resources = await list_resources()
2883 |         assert len(resources) == 1
2884 |         assert resources[0].name == f"Current weather in {DEFAULT_CITY}"
2885 |         assert resources[0].mimeType == "application/json"
2886 | 
2887 |     @pytest.mark.anyio
2888 |     async def test_list_tools():
2889 |         tools = await list_tools()
2890 |         assert len(tools) == 1
2891 |         assert tools[0].name == "get_forecast"
2892 |         assert "city" in tools[0].inputSchema["properties"]
2893 |     ```
2894 |   </Step>
2895 | 
2896 |   <Step title="Run tests">
2897 |     ```bash
2898 |     uv add --dev pytest
2899 |     uv run pytest
2900 |     ```
2901 |   </Step>
2902 | </Steps>
2903 | 
2904 | ## Troubleshooting
2905 | 
2906 | ### Installation issues
2907 | 
2908 | ```bash
2909 | # Check Python version
2910 | python --version
2911 | 
2912 | # Reinstall dependencies
2913 | uv sync --reinstall
2914 | ```
2915 | 
2916 | ### Type checking
2917 | 
2918 | ```bash
2919 | # Install pyright
2920 | uv add --dev pyright
2921 | 
2922 | # Run type checker
2923 | uv run pyright src
2924 | ```
2925 | 
2926 | ## Next steps
2927 | 
2928 | <CardGroup cols={2}>
2929 |   <Card title="Architecture overview" icon="sitemap" href="/docs/concepts/architecture">
2930 |     Learn more about the MCP architecture
2931 |   </Card>
2932 | 
2933 |   <Card title="Python SDK" icon="python" href="https://github.com/modelcontextprotocol/python-sdk">
2934 |     Check out the Python SDK on GitHub
2935 |   </Card>
2936 | </CardGroup>
2937 | 
2938 | 
2939 | # TypeScript
2940 | 
2941 | Create a simple MCP server in TypeScript in 15 minutes
2942 | 
2943 | Let's build your first MCP server in TypeScript! We'll create a weather server that provides current weather data as a resource and lets Claude fetch forecasts using tools.
2944 | 
2945 | <Note>
2946 |   This guide uses the OpenWeatherMap API. You'll need a free API key from [OpenWeatherMap](https://openweathermap.org/api) to follow along.
2947 | </Note>
2948 | 
2949 | ## Prerequisites
2950 | 
2951 | <Steps>
2952 |   <Step title="Install Node.js">
2953 |     You'll need Node.js 18 or higher:
2954 | 
2955 |     ```bash
2956 |     node --version  # Should be v18 or higher
2957 |     npm --version
2958 |     ```
2959 |   </Step>
2960 | 
2961 |   <Step title="Create a new project">
2962 |     You can use our [create-typescript-server](https://github.com/modelcontextprotocol/create-typescript-server) tool to bootstrap a new project:
2963 | 
2964 |     ```bash
2965 |     npx @modelcontextprotocol/create-server weather-server
2966 |     cd weather-server
2967 |     ```
2968 |   </Step>
2969 | 
2970 |   <Step title="Install dependencies">
2971 |     ```bash
2972 |     npm install --save axios dotenv
2973 |     ```
2974 |   </Step>
2975 | 
2976 |   <Step title="Set up environment">
2977 |     Create `.env`:
2978 | 
2979 |     ```bash
2980 |     OPENWEATHER_API_KEY=your-api-key-here
2981 |     ```
2982 | 
2983 |     Make sure to add your environment file to `.gitignore`
2984 | 
2985 |     ```bash
2986 |     .env
2987 |     ```
2988 |   </Step>
2989 | </Steps>
2990 | 
2991 | ## Create your server
2992 | 
2993 | <Steps>
2994 |   <Step title="Define types">
2995 |     Create a file `src/types.ts`, and add the following:
2996 | 
2997 |     ```typescript
2998 |     export interface OpenWeatherResponse {
2999 |       main: {
3000 |         temp: number;
3001 |         humidity: number;
3002 |       };
3003 |       weather: Array<{
3004 |         description: string;
3005 |       }>;
3006 |       wind: {
3007 |         speed: number;
3008 |       };
3009 |       dt_txt?: string;
3010 |     }
3011 | 
3012 |     export interface WeatherData {
3013 |       temperature: number;
3014 |       conditions: string;
3015 |       humidity: number;
3016 |       wind_speed: number;
3017 |       timestamp: string;
3018 |     }
3019 | 
3020 |     export interface ForecastDay {
3021 |       date: string;
3022 |       temperature: number;
3023 |       conditions: string;
3024 |     }
3025 | 
3026 |     export interface GetForecastArgs {
3027 |       city: string;
3028 |       days?: number;
3029 |     }
3030 | 
3031 |     // Type guard for forecast arguments
3032 |     export function isValidForecastArgs(args: any): args is GetForecastArgs {
3033 |       return (
3034 |         typeof args === "object" && 
3035 |         args !== null && 
3036 |         "city" in args &&
3037 |         typeof args.city === "string" &&
3038 |         (args.days === undefined || typeof args.days === "number")
3039 |       );
3040 |     }
3041 |     ```
3042 |   </Step>
3043 | 
3044 |   <Step title="Add the base code">
3045 |     Replace `src/index.ts` with the following:
3046 | 
3047 |     ```typescript
3048 |     #!/usr/bin/env node
3049 |     import { Server } from "@modelcontextprotocol/sdk/server/index.js";
3050 |     import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
3051 |     import {
3052 |       ListResourcesRequestSchema,
3053 |       ReadResourceRequestSchema,
3054 |       ListToolsRequestSchema,
3055 |       CallToolRequestSchema,
3056 |       ErrorCode,
3057 |       McpError
3058 |     } from "@modelcontextprotocol/sdk/types.js";
3059 |     import axios from "axios";
3060 |     import dotenv from "dotenv";
3061 |     import { 
3062 |       WeatherData, 
3063 |       ForecastDay, 
3064 |       OpenWeatherResponse,
3065 |       isValidForecastArgs 
3066 |     } from "./types.js";
3067 | 
3068 |     dotenv.config();
3069 | 
3070 |     const API_KEY = process.env.OPENWEATHER_API_KEY;
3071 |     if (!API_KEY) {
3072 |       throw new Error("OPENWEATHER_API_KEY environment variable is required");
3073 |     }
3074 | 
3075 |     const API_CONFIG = {
3076 |       BASE_URL: 'http://api.openweathermap.org/data/2.5',
3077 |       DEFAULT_CITY: 'San Francisco',
3078 |       ENDPOINTS: {
3079 |         CURRENT: 'weather',
3080 |         FORECAST: 'forecast'
3081 |       }
3082 |     } as const;
3083 | 
3084 |     class WeatherServer {
3085 |       private server: Server;
3086 |       private axiosInstance;
3087 | 
3088 |       constructor() {
3089 |         this.server = new Server({
3090 |           name: "example-weather-server",
3091 |           version: "0.1.0"
3092 |         }, {
3093 |           capabilities: {
3094 |             resources: {},
3095 |             tools: {}
3096 |           }
3097 |         });
3098 | 
3099 |         // Configure axios with defaults
3100 |         this.axiosInstance = axios.create({
3101 |           baseURL: API_CONFIG.BASE_URL,
3102 |           params: {
3103 |             appid: API_KEY,
3104 |             units: "metric"
3105 |           }
3106 |         });
3107 | 
3108 |         this.setupHandlers();
3109 |         this.setupErrorHandling();
3110 |       }
3111 | 
3112 |       private setupErrorHandling(): void {
3113 |         this.server.onerror = (error) => {
3114 |           console.error("[MCP Error]", error);
3115 |         };
3116 | 
3117 |         process.on('SIGINT', async () => {
3118 |           await this.server.close();
3119 |           process.exit(0);
3120 |         });
3121 |       }
3122 | 
3123 |       private setupHandlers(): void {
3124 |         this.setupResourceHandlers();
3125 |         this.setupToolHandlers();
3126 |       }
3127 | 
3128 |       private setupResourceHandlers(): void {
3129 |         // Implementation continues in next section
3130 |       }
3131 | 
3132 |       private setupToolHandlers(): void {
3133 |         // Implementation continues in next section
3134 |       }
3135 | 
3136 |       async run(): Promise<void> {
3137 |         const transport = new StdioServerTransport();
3138 |         await this.server.connect(transport);
3139 |         
3140 |         // Although this is just an informative message, we must log to stderr,
3141 |         // to avoid interfering with MCP communication that happens on stdout
3142 |         console.error("Weather MCP server running on stdio");
3143 |       }
3144 |     }
3145 | 
3146 |     const server = new WeatherServer();
3147 |     server.run().catch(console.error);
3148 |     ```
3149 |   </Step>
3150 | 
3151 |   <Step title="Add resource handlers">
3152 |     Add this to the `setupResourceHandlers` method:
3153 | 
3154 |     ```typescript
3155 |     private setupResourceHandlers(): void {
3156 |       this.server.setRequestHandler(
3157 |         ListResourcesRequestSchema,
3158 |         async () => ({
3159 |           resources: [{
3160 |             uri: `weather://${API_CONFIG.DEFAULT_CITY}/current`,
3161 |             name: `Current weather in ${API_CONFIG.DEFAULT_CITY}`,
3162 |             mimeType: "application/json",
3163 |             description: "Real-time weather data including temperature, conditions, humidity, and wind speed"
3164 |           }]
3165 |         })
3166 |       );
3167 | 
3168 |       this.server.setRequestHandler(
3169 |         ReadResourceRequestSchema,
3170 |         async (request) => {
3171 |           const city = API_CONFIG.DEFAULT_CITY;
3172 |           if (request.params.uri !== `weather://${city}/current`) {
3173 |             throw new McpError(
3174 |               ErrorCode.InvalidRequest,
3175 |               `Unknown resource: ${request.params.uri}`
3176 |             );
3177 |           }
3178 | 
3179 |           try {
3180 |             const response = await this.axiosInstance.get<OpenWeatherResponse>(
3181 |               API_CONFIG.ENDPOINTS.CURRENT,
3182 |               {
3183 |                 params: { q: city }
3184 |               }
3185 |             );
3186 | 
3187 |             const weatherData: WeatherData = {
3188 |               temperature: response.data.main.temp,
3189 |               conditions: response.data.weather[0].description,
3190 |               humidity: response.data.main.humidity,
3191 |               wind_speed: response.data.wind.speed,
3192 |               timestamp: new Date().toISOString()
3193 |             };
3194 | 
3195 |             return {
3196 |               contents: [{
3197 |                 uri: request.params.uri,
3198 |                 mimeType: "application/json",
3199 |                 text: JSON.stringify(weatherData, null, 2)
3200 |               }]
3201 |             };
3202 |           } catch (error) {
3203 |             if (axios.isAxiosError(error)) {
3204 |               throw new McpError(
3205 |                 ErrorCode.InternalError,
3206 |                 `Weather API error: ${error.response?.data.message ?? error.message}`
3207 |               );
3208 |             }
3209 |             throw error;
3210 |           }
3211 |         }
3212 |       );
3213 |     }
3214 |     ```
3215 |   </Step>
3216 | 
3217 |   <Step title="Add tool handlers">
3218 |     Add these handlers to the `setupToolHandlers` method:
3219 | 
3220 |     ```typescript
3221 |     private setupToolHandlers(): void {
3222 |       this.server.setRequestHandler(
3223 |         ListToolsRequestSchema,
3224 |         async () => ({
3225 |           tools: [{
3226 |             name: "get_forecast",
3227 |             description: "Get weather forecast for a city",
3228 |             inputSchema: {
3229 |               type: "object",
3230 |               properties: {
3231 |                 city: {
3232 |                   type: "string",
3233 |                   description: "City name"
3234 |                 },
3235 |                 days: {
3236 |                   type: "number",
3237 |                   description: "Number of days (1-5)",
3238 |                   minimum: 1,
3239 |                   maximum: 5
3240 |                 }
3241 |               },
3242 |               required: ["city"]
3243 |             }
3244 |           }]
3245 |         })
3246 |       );
3247 | 
3248 |       this.server.setRequestHandler(
3249 |         CallToolRequestSchema,
3250 |         async (request) => {
3251 |           if (request.params.name !== "get_forecast") {
3252 |             throw new McpError(
3253 |               ErrorCode.MethodNotFound,
3254 |               `Unknown tool: ${request.params.name}`
3255 |             );
3256 |           }
3257 | 
3258 |           if (!isValidForecastArgs(request.params.arguments)) {
3259 |             throw new McpError(
3260 |               ErrorCode.InvalidParams,
3261 |               "Invalid forecast arguments"
3262 |             );
3263 |           }
3264 | 
3265 |           const city = request.params.arguments.city;
3266 |           const days = Math.min(request.params.arguments.days || 3, 5);
3267 | 
3268 |           try {
3269 |             const response = await this.axiosInstance.get<{
3270 |               list: OpenWeatherResponse[]
3271 |             }>(API_CONFIG.ENDPOINTS.FORECAST, {
3272 |               params: {
3273 |                 q: city,
3274 |                 cnt: days * 8 // API returns 3-hour intervals
3275 |               }
3276 |             });
3277 | 
3278 |             const forecasts: ForecastDay[] = [];
3279 |             for (let i = 0; i < response.data.list.length; i += 8) {
3280 |               const dayData = response.data.list[i];
3281 |               forecasts.push({
3282 |                 date: dayData.dt_txt?.split(' ')[0] ?? new Date().toISOString().split('T')[0],
3283 |                 temperature: dayData.main.temp,
3284 |                 conditions: dayData.weather[0].description
3285 |               });
3286 |             }
3287 | 
3288 |             return {
3289 |               content: [{
3290 |                 type: "text",
3291 |                 text: JSON.stringify(forecasts, null, 2)
3292 |               }]
3293 |             };
3294 |           } catch (error) {
3295 |             if (axios.isAxiosError(error)) {
3296 |               return {
3297 |                 content: [{
3298 |                   type: "text",
3299 |                   text: `Weather API error: ${error.response?.data.message ?? error.message}`
3300 |                 }],
3301 |                 isError: true,
3302 |               }
3303 |             }
3304 |             throw error;
3305 |           }
3306 |         }
3307 |       );
3308 |     }
3309 |     ```
3310 |   </Step>
3311 | 
3312 |   <Step title="Build and test">
3313 |     ```bash
3314 |     npm run build
3315 |     ```
3316 |   </Step>
3317 | </Steps>
3318 | 
3319 | ## Connect to Claude Desktop
3320 | 
3321 | <Steps>
3322 |   <Step title="Update Claude config">
3323 |     If you didn't already connect to Claude Desktop during project setup, add to `claude_desktop_config.json`:
3324 | 
3325 |     ```json
3326 |     {
3327 |       "mcpServers": {
3328 |         "weather": {
3329 |           "command": "node",
3330 |           "args": ["/path/to/weather-server/build/index.js"],
3331 |           "env": {
3332 |             "OPENWEATHER_API_KEY": "your-api-key"
3333 |           }
3334 |         }
3335 |       }
3336 |     }
3337 |     ```
3338 |   </Step>
3339 | 
3340 |   <Step title="Restart Claude">
3341 |     1.  Quit Claude completely
3342 |     2.  Start Claude again
3343 |     3.  Look for your weather server in the 🔌 menu
3344 |   </Step>
3345 | </Steps>
3346 | 
3347 | ## Try it out!
3348 | 
3349 | <AccordionGroup>
3350 |   <Accordion title="Check Current Weather" active>
3351 |     Ask Claude:
3352 | 
3353 |     ```
3354 |     What's the current weather in San Francisco? Can you analyze the conditions?
3355 |     ```
3356 |   </Accordion>
3357 | 
3358 |   <Accordion title="Get a Forecast">
3359 |     Ask Claude:
3360 | 
3361 |     ```
3362 |     Can you get me a 5-day forecast for Tokyo and tell me if I should pack an umbrella?
3363 |     ```
3364 |   </Accordion>
3365 | 
3366 |   <Accordion title="Compare Weather">
3367 |     Ask Claude:
3368 | 
3369 |     ```
3370 |     Can you analyze the forecast for both Tokyo and San Francisco and tell me which city will be warmer this week?
3371 |     ```
3372 |   </Accordion>
3373 | </AccordionGroup>
3374 | 
3375 | ## Understanding the code
3376 | 
3377 | <Tabs>
3378 |   <Tab title="Type Safety">
3379 |     ```typescript
3380 |     interface WeatherData {
3381 |       temperature: number;
3382 |       conditions: string;
3383 |       humidity: number;
3384 |       wind_speed: number;
3385 |       timestamp: string;
3386 |     }
3387 |     ```
3388 | 
3389 |     TypeScript adds type safety to our MCP server, making it more reliable and easier to maintain.
3390 |   </Tab>
3391 | 
3392 |   <Tab title="Resources">
3393 |     ```typescript
3394 |     this.server.setRequestHandler(
3395 |       ListResourcesRequestSchema,
3396 |       async () => ({
3397 |         resources: [{
3398 |           uri: `weather://${DEFAULT_CITY}/current`,
3399 |           name: `Current weather in ${DEFAULT_CITY}`,
3400 |           mimeType: "application/json"
3401 |         }]
3402 |       })
3403 |     );
3404 |     ```
3405 | 
3406 |     Resources provide data that Claude can access as context.
3407 |   </Tab>
3408 | 
3409 |   <Tab title="Tools">
3410 |     ```typescript
3411 |     {
3412 |       name: "get_forecast",
3413 |       description: "Get weather forecast for a city",
3414 |       inputSchema: {
3415 |         type: "object",
3416 |         properties: {
3417 |           city: { type: "string" },
3418 |           days: { type: "number" }
3419 |         }
3420 |       }
3421 |     }
3422 |     ```
3423 | 
3424 |     Tools let Claude take actions through your server with type-safe inputs.
3425 |   </Tab>
3426 | </Tabs>
3427 | 
3428 | ## Best practices
3429 | 
3430 | <CardGroup cols={1}>
3431 |   <Card title="Error Handling" icon="shield">
3432 |     When a tool encounters an error, return the error message with `isError: true`, so the model can self-correct:
3433 | 
3434 |     ```typescript
3435 |     try {
3436 |       const response = await axiosInstance.get(...);
3437 |     } catch (error) {
3438 |       if (axios.isAxiosError(error)) {
3439 |         return {
3440 |           content: [{
3441 |             type: "text",
3442 |             text: `Weather API error: ${error.response?.data.message ?? error.message}`
3443 |           }],
3444 |           isError: true,
3445 |         }
3446 |       }
3447 |       throw error;
3448 |     }
3449 |     ```
3450 | 
3451 |     For other handlers, throw an error, so the application can notify the user:
3452 | 
3453 |     ```typescript
3454 |     try {
3455 |       const response = await this.axiosInstance.get(...);
3456 |     } catch (error) {
3457 |       if (axios.isAxiosError(error)) {
3458 |         throw new McpError(
3459 |           ErrorCode.InternalError,
3460 |           `Weather API error: ${error.response?.data.message ?? error.message}`
3461 |         );
3462 |       }
3463 |       throw error;
3464 |     }
3465 |     ```
3466 |   </Card>
3467 | 
3468 |   <Card title="Type Validation" icon="check">
3469 |     ```typescript
3470 |     function isValidForecastArgs(args: any): args is GetForecastArgs {
3471 |       return (
3472 |         typeof args === "object" && 
3473 |         args !== null && 
3474 |         "city" in args &&
3475 |         typeof args.city === "string"
3476 |       );
3477 |     }
3478 |     ```
3479 | 
3480 |     <Tip>You can also use libraries like [Zod](https://zod.dev/) to perform this validation automatically.</Tip>
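
    For example, a minimal sketch of the same check using [Zod](https://zod.dev/) (an assumed extra dependency, added with `npm install zod`); the schema mirrors the `GetForecastArgs` shape used in this guide, and `McpError`/`ErrorCode` come from the SDK imports shown earlier:

    ```typescript
    import { z } from "zod";

    // Schema mirroring GetForecastArgs: a required city plus an optional 1-5 day count
    const ForecastArgsSchema = z.object({
      city: z.string(),
      days: z.number().min(1).max(5).optional()
    });

    // Inside the CallTool handler: safeParse validates and narrows the type in one step
    const parsed = ForecastArgsSchema.safeParse(request.params.arguments);
    if (!parsed.success) {
      throw new McpError(ErrorCode.InvalidParams, "Invalid forecast arguments");
    }
    const { city, days = 3 } = parsed.data;
    ```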
3481 |   </Card>
3482 | </CardGroup>
3483 | 
3484 | ## Available transports
3485 | 
3486 | While this guide uses stdio to run the MCP server as a local process, MCP supports other [transports](/docs/concepts/transports) as well.
3487 | 
3488 | ## Troubleshooting
3489 | 
3490 | <Info>
3491 |   The following troubleshooting tips are for macOS. Guides for other platforms are coming soon.
3492 | </Info>
3493 | 
3494 | ### Build errors
3495 | 
3496 | ```bash
3497 | # Check TypeScript version
3498 | npx tsc --version
3499 | 
3500 | # Clean and rebuild
3501 | rm -rf build/
3502 | npm run build
3503 | ```
3504 | 
3505 | ### Runtime errors
3506 | 
3507 | Look for detailed error messages in the Claude Desktop logs:
3508 | 
3509 | ```bash
3510 | # Monitor logs
3511 | tail -n 20 -f ~/Library/Logs/Claude/mcp*.log
3512 | ```
3513 | 
3514 | ### Type errors
3515 | 
3516 | ```bash
3517 | # Check types without building
3518 | npx tsc --noEmit
3519 | ```
3520 | 
3521 | ## Next steps
3522 | 
3523 | <CardGroup cols={2}>
3524 |   <Card title="Architecture overview" icon="sitemap" href="/docs/concepts/architecture">
3525 |     Learn more about the MCP architecture
3526 |   </Card>
3527 | 
3528 |   <Card title="TypeScript SDK" icon="square-js" href="https://github.com/modelcontextprotocol/typescript-sdk">
3529 |     Check out the TypeScript SDK on GitHub
3530 |   </Card>
3531 | </CardGroup>
3532 | 
3533 | <Note>
3534 |   Need help? Ask Claude! Since it has access to the MCP SDK documentation, it can help you debug issues and suggest improvements to your server.
3535 | </Note>
3536 | 
3537 | 
3538 | # Debugging
3539 | 
3540 | A comprehensive guide to debugging Model Context Protocol (MCP) integrations
3541 | 
3542 | Effective debugging is essential when developing MCP servers or integrating them with applications. This guide covers the debugging tools and approaches available in the MCP ecosystem.
3543 | 
3544 | <Info>
3545 |   This guide is for macOS. Guides for other platforms are coming soon.
3546 | </Info>
3547 | 
3548 | ## Debugging tools overview
3549 | 
3550 | MCP provides several tools for debugging at different levels:
3551 | 
3552 | 1.  **MCP Inspector**
3553 |     *   Interactive debugging interface
3554 |     *   Direct server testing
3555 |     *   See the [Inspector guide](/docs/tools/inspector) for details
3556 | 
3557 | 2.  **Claude Desktop Developer Tools**
3558 |     *   Integration testing
3559 |     *   Log collection
3560 |     *   Chrome DevTools integration
3561 | 
3562 | 3.  **Server Logging**
3563 |     *   Custom logging implementations
3564 |     *   Error tracking
3565 |     *   Performance monitoring
3566 | 
3567 | ## Debugging in Claude Desktop
3568 | 
3569 | ### Checking server status
3570 | 
3571 | The Claude.app interface provides basic server status information:
3572 | 
3573 | 1.  Click the 🔌 icon to view:
3574 |     *   Connected servers
3575 |     *   Available prompts and resources
3576 | 
3577 | 2.  Click the 🔨 icon to view:
3578 |     *   Tools made available to the model
3579 | 
3580 | ### Viewing logs
3581 | 
3582 | Review detailed MCP logs from Claude Desktop:
3583 | 
3584 | ```bash
3585 | # Follow logs in real-time
3586 | tail -n 20 -f ~/Library/Logs/Claude/mcp*.log
3587 | ```
3588 | 
3589 | The logs capture:
3590 | 
3591 | *   Server connection events
3592 | *   Configuration issues
3593 | *   Runtime errors
3594 | *   Message exchanges
3595 | 
3596 | ### Using Chrome DevTools
3597 | 
3598 | Access Chrome's developer tools inside Claude Desktop to investigate client-side errors:
3599 | 
3600 | 1.  Enable DevTools:
3601 | 
3602 | ```bash
3603 | jq '.allowDevTools = true' ~/Library/Application\ Support/Claude/developer_settings.json > tmp.json \
3604 |   && mv tmp.json ~/Library/Application\ Support/Claude/developer_settings.json
3605 | ```
3606 | 
3607 | 2.  Open DevTools: `Command-Option-Shift-i`
3608 | 
3609 | Note: You'll see two DevTools windows:
3610 | 
3611 | *   Main content window
3612 | *   App title bar window
3613 | 
3614 | Use the Console panel to inspect client-side errors.
3615 | 
3616 | Use the Network panel to inspect:
3617 | 
3618 | *   Message payloads
3619 | *   Connection timing
3620 | 
3621 | ## Common issues
3622 | 
3623 | ### Environment variables
3624 | 
3625 | MCP servers inherit only a subset of environment variables automatically, like `USER`, `HOME`, and `PATH`.
3626 | 
3627 | To override the default variables or provide your own, you can specify an `env` key in `claude_desktop_config.json`:
3628 | 
3629 | ```json
3630 | {
3631 |   "myserver": {
3632 |     "command": "mcp-server-myapp",
3633 |     "env": {
3634 |       "MYAPP_API_KEY": "some_key"
3635 |     }
3636 |   }
3637 | }
3638 | ```
3639 | 
3640 | ### Server initialization
3641 | 
3642 | Common initialization problems:
3643 | 
3644 | 1.  **Path Issues**
3645 |     *   Incorrect server executable path
3646 |     *   Missing required files
3647 |     *   Permission problems
3648 | 
3649 | 2.  **Configuration Errors**
3650 |     *   Invalid JSON syntax
3651 |     *   Missing required fields
3652 |     *   Type mismatches
3653 | 
3654 | 3.  **Environment Problems**
3655 |     *   Missing environment variables
3656 |     *   Incorrect variable values
3657 |     *   Permission restrictions
3658 | 
3659 | ### Connection problems
3660 | 
3661 | When servers fail to connect:
3662 | 
3663 | 1.  Check Claude Desktop logs
3664 | 2.  Verify server process is running
3665 | 3.  Test standalone with [Inspector](/docs/tools/inspector)
3666 | 4.  Verify protocol compatibility
3667 | 
3668 | ## Implementing logging
3669 | 
3670 | ### Server-side logging
3671 | 
3672 | When building a server that uses the local stdio [transport](/docs/concepts/transports), all messages logged to stderr (standard error) will be captured by the host application (e.g., Claude Desktop) automatically.
3673 | 
3674 | <Warning>
3675 |   Local MCP servers should not log messages to stdout (standard out), as this will interfere with protocol operation.
3676 | </Warning>
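
For example, in a TypeScript server using the stdio transport, writing to stderr with `console.error` is enough for a message to appear in the host's MCP logs, while `console.log` must be avoided. A minimal sketch (the log prefix is illustrative):

```typescript
// Safe: console.error writes to stderr, which the host application
// (e.g. Claude Desktop) captures in its MCP log files
console.error(`[weather-server] ${new Date().toISOString()} server starting`);

// Not safe in a stdio server: console.log writes to stdout,
// which is reserved for MCP protocol messages
// console.log("debug output");
```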
3677 | 
3678 | For all [transports](/docs/concepts/transports), you can also provide logging to the client by sending a log message notification:
3679 | 
3680 | <Tabs>
3681 |   <Tab title="Python">
3682 |     ```python
3683 |     server.request_context.session.send_log_message(
3684 |       level="info",
3685 |       data="Server started successfully",
3686 |     )
3687 |     ```
3688 |   </Tab>
3689 | 
3690 |   <Tab title="TypeScript">
3691 |     ```typescript
3692 |     server.sendLoggingMessage({
3693 |       level: "info",
3694 |       data: "Server started successfully",
3695 |     });
3696 |     ```
3697 |   </Tab>
3698 | </Tabs>
3699 | 
3700 | Important events to log (see the sketch after this list):
3701 | 
3702 | *   Initialization steps
3703 | *   Resource access
3704 | *   Tool execution
3705 | *   Error conditions
3706 | *   Performance metrics
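
As an illustration, here is a hypothetical wrapper around tool execution that covers several of these events, reusing the `server.sendLoggingMessage` API shown above (the helper name and messages are illustrative):

```typescript
import { Server } from "@modelcontextprotocol/sdk/server/index.js";

// Hypothetical helper: logs the start, duration, and failure of a tool call
async function withToolLogging<T>(
  server: Server,
  toolName: string,
  run: () => Promise<T>
): Promise<T> {
  const startedAt = Date.now();
  await server.sendLoggingMessage({ level: "info", data: `Tool ${toolName} started` });
  try {
    const result = await run();
    await server.sendLoggingMessage({
      level: "info",
      data: `Tool ${toolName} finished in ${Date.now() - startedAt}ms`
    });
    return result;
  } catch (error) {
    await server.sendLoggingMessage({
      level: "error",
      data: `Tool ${toolName} failed: ${String(error)}`
    });
    throw error;
  }
}
```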
3707 | 
3708 | ### Client-side logging
3709 | 
3710 | In client applications:
3711 | 
3712 | 1.  Enable debug logging
3713 | 2.  Monitor network traffic
3714 | 3.  Track message exchanges
3715 | 4.  Record error states
3716 | 
3717 | ## Debugging workflow
3718 | 
3719 | ### Development cycle
3720 | 
3721 | 1.  Initial Development
3722 |     *   Use [Inspector](/docs/tools/inspector) for basic testing
3723 |     *   Implement core functionality
3724 |     *   Add logging points
3725 | 
3726 | 2.  Integration Testing
3727 |     *   Test in Claude Desktop
3728 |     *   Monitor logs
3729 |     *   Check error handling
3730 | 
3731 | ### Testing changes
3732 | 
3733 | To test changes efficiently:
3734 | 
3735 | *   **Configuration changes**: Restart Claude Desktop
3736 | *   **Server code changes**: Use Command-R to reload
3737 | *   **Quick iteration**: Use [Inspector](/docs/tools/inspector) during development
3738 | 
3739 | ## Best practices
3740 | 
3741 | ### Logging strategy
3742 | 
3743 | 1.  **Structured Logging** (see the sketch after this list)
3744 |     *   Use consistent formats
3745 |     *   Include context
3746 |     *   Add timestamps
3747 |     *   Track request IDs
3748 | 
3749 | 2.  **Error Handling**
3750 |     *   Log stack traces
3751 |     *   Include error context
3752 |     *   Track error patterns
3753 |     *   Monitor recovery
3754 | 
3755 | 3.  **Performance Tracking**
3756 |     *   Log operation timing
3757 |     *   Monitor resource usage
3758 |     *   Track message sizes
3759 |     *   Measure latency
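
As one possible approach, a minimal sketch of a structured logger (all names are illustrative) that writes JSON lines to stderr with a timestamp, level, and request ID:

```typescript
// Minimal structured-logging sketch: one JSON object per line on stderr
type LogLevel = "debug" | "info" | "warning" | "error";

function logEvent(
  level: LogLevel,
  message: string,
  context: Record<string, unknown> = {},
  requestId?: string
): void {
  console.error(JSON.stringify({
    timestamp: new Date().toISOString(),
    level,
    requestId,
    message,
    ...context
  }));
}

// Example usage inside a tool handler
logEvent("info", "get_forecast called", { city: "Tokyo", days: 5 }, "req-123");
```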
3760 | 
3761 | ### Security considerations
3762 | 
3763 | When debugging:
3764 | 
3765 | 1.  **Sensitive Data**
3766 |     *   Sanitize logs
3767 |     *   Protect credentials
3768 |     *   Mask personal information
3769 | 
3770 | 2.  **Access Control**
3771 |     *   Verify permissions
3772 |     *   Check authentication
3773 |     *   Monitor access patterns
3774 | 
3775 | ## Getting help
3776 | 
3777 | When encountering issues:
3778 | 
3779 | 1.  **First Steps**
3780 |     *   Check server logs
3781 |     *   Test with [Inspector](/docs/tools/inspector)
3782 |     *   Review configuration
3783 |     *   Verify environment
3784 | 
3785 | 2.  **Support Channels**
3786 |     *   GitHub issues
3787 |     *   GitHub discussions
3788 | 
3789 | 3.  **Providing Information**
3790 |     *   Log excerpts
3791 |     *   Configuration files
3792 |     *   Steps to reproduce
3793 |     *   Environment details
3794 | 
3795 | ## Next steps
3796 | 
3797 | <CardGroup cols={2}>
3798 |   <Card title="MCP Inspector" icon="magnifying-glass" href="/docs/tools/inspector">
3799 |     Learn to use the MCP Inspector
3800 |   </Card>
3801 | </CardGroup>
3802 | 
3803 | 
3804 | # Inspector
3805 | 
3806 | In-depth guide to using the MCP Inspector for testing and debugging Model Context Protocol servers
3807 | 
3808 | The [MCP Inspector](https://github.com/modelcontextprotocol/inspector) is an interactive developer tool for testing and debugging MCP servers. While the [Debugging Guide](/docs/tools/debugging) covers the Inspector as part of the overall debugging toolkit, this document provides a detailed exploration of the Inspector's features and capabilities.
3809 | 
3810 | ## Getting started
3811 | 
3812 | ### Installation and basic usage
3813 | 
3814 | The Inspector runs directly through `npx` without requiring installation:
3815 | 
3816 | ```bash
3817 | npx @modelcontextprotocol/inspector <command>
3818 | ```
3819 | 
3820 | ```bash
3821 | npx @modelcontextprotocol/inspector <command> <arg1> <arg2>
3822 | ```
3823 | 
3824 | #### Inspecting servers from NPM or PyPI
3825 | 
3826 | A common way to start server packages is directly from [NPM](https://npmjs.com) or [PyPI](https://pypi.org):
3827 | 
3828 | <Tabs>
3829 |   <Tab title="NPM package">
3830 |     ```bash
3831 |     npx -y @modelcontextprotocol/inspector npx <package-name> <args>
3832 |     # For example
3833 |     npx -y @modelcontextprotocol/inspector npx @modelcontextprotocol/server-postgres postgres://127.0.0.1/testdb
3834 |     ```
3835 |   </Tab>
3836 | 
3837 |   <Tab title="PyPI package">
3838 |     ```bash
3839 |     npx @modelcontextprotocol/inspector uvx <package-name> <args>
3840 |     # For example
3841 |     npx @modelcontextprotocol/inspector uvx mcp-server-git --repository ~/code/mcp/servers.git
3842 |     ```
3843 |   </Tab>
3844 | </Tabs>
3845 | 
3846 | #### Inspecting locally developed servers
3847 | 
3848 | To inspect a server you are developing locally, or one you have downloaded as a
3849 | repository, the most common way is:
3850 | 
3851 | <Tabs>
3852 |   <Tab title="TypeScript">
3853 |     ```bash
3854 |     npx @modelcontextprotocol/inspector node path/to/server/index.js args...
3855 |     ```
3856 |   </Tab>
3857 | 
3858 |   <Tab title="Python">
3859 |     ```bash
3860 |     npx @modelcontextprotocol/inspector \
3861 |       uv \
3862 |       --directory path/to/server \
3863 |       run \
3864 |       package-name \
3865 |       args...
3866 |     ```
3867 |   </Tab>
3868 | </Tabs>
3869 | 
3870 | Please carefully read any attached README for the most accurate instructions.
3871 | 
3872 | ## Feature overview
3873 | 
3874 | <Frame caption="The MCP Inspector interface">
3875 |   <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/mcp-inspector.png" />
3876 | </Frame>
3877 | 
3878 | The Inspector provides several features for interacting with your MCP server:
3879 | 
3880 | ### Server connection pane
3881 | 
3882 | *   Allows selecting the [transport](/docs/concepts/transports) for connecting to the server
3883 | *   For local servers, supports customizing the command-line arguments and environment
3884 | 
3885 | ### Resources tab
3886 | 
3887 | *   Lists all available resources
3888 | *   Shows resource metadata (MIME types, descriptions)
3889 | *   Allows resource content inspection
3890 | *   Supports subscription testing
3891 | 
3892 | ### Prompts tab
3893 | 
3894 | *   Displays available prompt templates
3895 | *   Shows prompt arguments and descriptions
3896 | *   Enables prompt testing with custom arguments
3897 | *   Previews generated messages
3898 | 
3899 | ### Tools tab
3900 | 
3901 | *   Lists available tools
3902 | *   Shows tool schemas and descriptions
3903 | *   Enables tool testing with custom inputs
3904 | *   Displays tool execution results
3905 | 
3906 | ### Notifications pane
3907 | 
3908 | *   Presents all logs recorded from the server
3909 | *   Shows notifications received from the server
3910 | 
3911 | ## Best practices
3912 | 
3913 | ### Development workflow
3914 | 
3915 | 1.  Start Development
3916 |     *   Launch Inspector with your server
3917 |     *   Verify basic connectivity
3918 |     *   Check capability negotiation
3919 | 
3920 | 2.  Iterative testing
3921 |     *   Make server changes
3922 |     *   Rebuild the server
3923 |     *   Reconnect the Inspector
3924 |     *   Test affected features
3925 |     *   Monitor messages
3926 | 
3927 | 3.  Test edge cases
3928 |     *   Invalid inputs
3929 |     *   Missing prompt arguments
3930 |     *   Concurrent operations
3931 |     *   Verify error handling and error responses
3932 | 
3933 | ## Next steps
3934 | 
3935 | <CardGroup cols={2}>
3936 |   <Card title="Inspector Repository" icon="github" href="https://github.com/modelcontextprotocol/inspector">
3937 |     Check out the MCP Inspector source code
3938 |   </Card>
3939 | 
3940 |   <Card title="Debugging Guide" icon="bug" href="/docs/tools/debugging">
3941 |     Learn about broader debugging strategies
3942 |   </Card>
3943 | </CardGroup>
3944 | 
3945 | 
3946 | # Introduction
3947 | 
3948 | Get started with the Model Context Protocol (MCP)
3949 | 
3950 | The Model Context Protocol (MCP) is an open protocol that enables seamless integration between LLM applications and external data sources and tools. Whether you're building an AI-powered IDE, enhancing a chat interface, or creating custom AI workflows, MCP provides a standardized way to connect LLMs with the context they need.
3951 | 
3952 | ## Get started with MCP
3953 | 
3954 | Choose the path that best fits your needs:
3955 | 
3956 | <CardGroup cols={1}>
3957 |   <Card title="Quickstart" icon="bolt" href="/quickstart">
3958 |     The fastest way to see MCP in action—connect example servers to Claude Desktop
3959 |   </Card>
3960 | 
3961 |   <Card title="Build your first server (Python)" icon="python" href="/docs/first-server/python">
3962 |     Create a simple MCP server in Python to understand the basics
3963 |   </Card>
3964 | 
3965 |   <Card title="Build your first server (TypeScript)" icon="square-js" href="/docs/first-server/typescript">
3966 |     Create a simple MCP server in TypeScript to understand the basics
3967 |   </Card>
3968 | </CardGroup>
3969 | 
3970 | ## Development tools
3971 | 
3972 | Essential tools for building and debugging MCP servers:
3973 | 
3974 | <CardGroup cols={2}>
3975 |   <Card title="Debugging Guide" icon="bug" href="/docs/tools/debugging">
3976 |     Learn how to effectively debug MCP servers and integrations
3977 |   </Card>
3978 | 
3979 |   <Card title="MCP Inspector" icon="magnifying-glass" href="/docs/tools/inspector">
3980 |     Test and inspect your MCP servers with our interactive debugging tool
3981 |   </Card>
3982 | </CardGroup>
3983 | 
3984 | ## Explore MCP
3985 | 
3986 | Dive deeper into MCP's core concepts and capabilities:
3987 | 
3988 | <CardGroup cols={2}>
3989 |   <Card title="Core Architecture" icon="sitemap" href="/docs/concepts/architecture">
3990 |     Understand how MCP connects clients, servers, and LLMs
3991 |   </Card>
3992 | 
3993 |   <Card title="Resources" icon="database" href="/docs/concepts/resources">
3994 |     Expose data and content from your servers to LLMs
3995 |   </Card>
3996 | 
3997 |   <Card title="Prompts" icon="message" href="/docs/concepts/prompts">
3998 |     Create reusable prompt templates and workflows
3999 |   </Card>
4000 | 
4001 |   <Card title="Tools" icon="wrench" href="/docs/concepts/tools">
4002 |     Enable LLMs to perform actions through your server
4003 |   </Card>
4004 | 
4005 |   <Card title="Sampling" icon="robot" href="/docs/concepts/sampling">
4006 |     Let your servers request completions from LLMs
4007 |   </Card>
4008 | 
4009 |   <Card title="Transports" icon="network-wired" href="/docs/concepts/transports">
4010 |     Learn about MCP's communication mechanism
4011 |   </Card>
4012 | </CardGroup>
4013 | 
4014 | ## Contributing
4015 | 
4016 | Want to contribute? Check out [@modelcontextprotocol](https://github.com/modelcontextprotocol) on GitHub to join our growing community of developers building with MCP.
4017 | 
4018 | 
4019 | # Quickstart
4020 | 
4021 | Get started with MCP in less than 5 minutes
4022 | 
4023 | MCP is a protocol that enables secure connections between host applications, such as [Claude Desktop](https://claude.ai/download), and local services. In this quickstart guide, you'll learn how to:
4024 | 
4025 | *   Set up a local SQLite database
4026 | *   Connect Claude Desktop to it through MCP
4027 | *   Query and analyze your data securely
4028 | 
4029 | <Note>
4030 |   While this guide focuses on using Claude Desktop as an example MCP host, the protocol is open and can be integrated by any application. IDEs, AI tools, and other software can all use MCP to connect to local integrations in a standardized way.
4031 | </Note>
4032 | 
4033 | <Warning>
4034 |   Claude Desktop's MCP support is currently in developer preview and only supports connecting to local MCP servers running on your machine. Remote MCP connections are not yet supported. This integration is only available in the Claude Desktop app, not the Claude web interface (claude.ai).
4035 | </Warning>
4036 | 
4037 | ## How MCP works
4038 | 
4039 | MCP (Model Context Protocol) is an open protocol that enables secure, controlled interactions between AI applications and local or remote resources. Let's break down how it works, then look at how we'll use it in this guide.
4040 | 
4041 | ### General Architecture
4042 | 
4043 | At its core, MCP follows a client-server architecture where a host application can connect to multiple servers:
4044 | 
4045 | ```mermaid
4046 | flowchart LR
4047 |     subgraph "Your Computer"
4048 |         Host["MCP Host\n(Claude, IDEs, Tools)"]
4049 |         S1["MCP Server A"]
4050 |         S2["MCP Server B"]
4051 |         S3["MCP Server C"]
4052 | 
4053 |         Host <-->|"MCP Protocol"| S1
4054 |         Host <-->|"MCP Protocol"| S2
4055 |         Host <-->|"MCP Protocol"| S3
4056 | 
4057 |         S1 <--> R1[("Local\nResource A")]
4058 |         S2 <--> R2[("Local\nResource B")]
4059 |     end
4060 | 
4061 |     subgraph "Internet"
4062 |         S3 <-->|"Web APIs"| R3[("Remote\nResource C")]
4063 |     end
4064 | ```
4065 | 
4066 | *   **MCP Hosts**: Programs like Claude Desktop, IDEs, or AI tools that want to access resources through MCP
4067 | *   **MCP Clients**: Protocol clients that maintain 1:1 connections with servers
4068 | *   **MCP Servers**: Lightweight programs that each expose specific capabilities through the standardized Model Context Protocol
4069 | *   **Local Resources**: Your computer's resources (databases, files, services) that MCP servers can securely access
4070 | *   **Remote Resources**: Resources available over the internet (e.g., through APIs) that MCP servers can connect to
4071 | 
4072 | ### In This Guide
4073 | 
4074 | For this quickstart, we'll implement a focused example using SQLite:
4075 | 
4076 | ```mermaid
4077 | flowchart LR
4078 |     subgraph "Your Computer"
4079 |         direction LR
4080 |         Claude["Claude Desktop"]
4081 |         MCP["SQLite MCP Server"]
4082 |         DB[(SQLite Database\n~/test.db)]
4083 | 
4084 |         Claude <-->|"MCP Protocol\n(Queries & Results)"| MCP
4085 |         MCP <-->|"Local Access\n(SQL Operations)"| DB
4086 |     end
4087 | ```
4088 | 
4089 | 1.  Claude Desktop acts as our MCP client
4090 | 2.  A SQLite MCP Server provides secure database access
4091 | 3.  Your local SQLite database stores the actual data
4092 | 
4093 | The communication between the SQLite MCP server and your local SQLite database happens entirely on your machine—your SQLite database is not exposed to the internet. The Model Context Protocol ensures that Claude Desktop can only perform approved database operations through well-defined interfaces. This gives you a secure way to let Claude analyze and interact with your local data while maintaining complete control over what it can access.
4094 | 
4095 | ## Prerequisites
4096 | 
4097 | *   macOS or Windows
4098 | *   The latest version of [Claude Desktop](https://claude.ai/download) installed
4099 | *   [uv](https://docs.astral.sh/uv/) 0.4.18 or higher (`uv --version` to check)
4100 | *   Git (`git --version` to check)
4101 | *   SQLite (`sqlite3 --version` to check)
4102 | 
4103 | <AccordionGroup>
4104 |   <Accordion title="Installing prerequisites (macOS)">
4105 |     ```bash
4106 |     # Using Homebrew
4107 |     brew install uv git sqlite3
4108 | 
4109 |     # Or download directly:
4110 |     # uv: https://docs.astral.sh/uv/
4111 |     # Git: https://git-scm.com
4112 |     # SQLite: https://www.sqlite.org/download.html
4113 |     ```
4114 |   </Accordion>
4115 | 
4116 |   <Accordion title="Installing prerequisites (Windows)">
4117 |     ```powershell
4118 |     # Using winget
4119 |     winget install --id=astral-sh.uv -e
4120 |     winget install git.git sqlite.sqlite
4121 | 
4122 |     # Or download directly:
4123 |     # uv: https://docs.astral.sh/uv/
4124 |     # Git: https://git-scm.com
4125 |     # SQLite: https://www.sqlite.org/download.html
4126 |     ```
4127 |   </Accordion>
4128 | </AccordionGroup>
4129 | 
4130 | ## Installation
4131 | 
4132 | <Tabs>
4133 |   <Tab title="macOS">
4134 |     <Steps>
4135 |       <Step title="Create a sample database">
4136 |         Let's create a simple SQLite database for testing:
4137 | 
4138 |         ```bash
4139 |         # Create a new SQLite database
4140 |         sqlite3 ~/test.db <<EOF
4141 |         CREATE TABLE products (
4142 |           id INTEGER PRIMARY KEY,
4143 |           name TEXT,
4144 |           price REAL
4145 |         );
4146 | 
4147 |         INSERT INTO products (name, price) VALUES
4148 |           ('Widget', 19.99),
4149 |           ('Gadget', 29.99),
4150 |           ('Gizmo', 39.99),
4151 |           ('Smart Watch', 199.99),
4152 |           ('Wireless Earbuds', 89.99),
4153 |           ('Portable Charger', 24.99),
4154 |           ('Bluetooth Speaker', 79.99),
4155 |           ('Phone Stand', 15.99),
4156 |           ('Laptop Sleeve', 34.99),
4157 |           ('Mini Drone', 299.99),
4158 |           ('LED Desk Lamp', 45.99),
4159 |           ('Keyboard', 129.99),
4160 |           ('Mouse Pad', 12.99),
4161 |           ('USB Hub', 49.99),
4162 |           ('Webcam', 69.99),
4163 |           ('Screen Protector', 9.99),
4164 |           ('Travel Adapter', 27.99),
4165 |           ('Gaming Headset', 159.99),
4166 |           ('Fitness Tracker', 119.99),
4167 |           ('Portable SSD', 179.99);
4168 |         EOF
4169 |         ```
4170 |       </Step>
4171 | 
4172 |       <Step title="Configure Claude Desktop">
4173 |         Open your Claude Desktop App configuration at `~/Library/Application Support/Claude/claude_desktop_config.json` in a text editor.
4174 | 
4175 |         For example, if you have [VS Code](https://code.visualstudio.com/) installed:
4176 | 
4177 |         ```bash
4178 |         code ~/Library/Application\ Support/Claude/claude_desktop_config.json
4179 |         ```
4180 | 
4181 |         Add this configuration (replace YOUR\_USERNAME with your actual username):
4182 | 
4183 |         ```json
4184 |         {
4185 |           "mcpServers": {
4186 |             "sqlite": {
4187 |               "command": "uvx",
4188 |               "args": ["mcp-server-sqlite", "--db-path", "/Users/YOUR_USERNAME/test.db"]
4189 |             }
4190 |           }
4191 |         }
4192 |         ```
4193 | 
4194 |         This tells Claude Desktop:
4195 | 
4196 |         1.  There's an MCP server named "sqlite"
4197 |         2.  Launch it by running `uvx mcp-server-sqlite`
4198 |         3.  Connect it to your test database
4199 | 
4200 |         Save the file, and restart **Claude Desktop**.
4201 |       </Step>
4202 |     </Steps>
4203 |   </Tab>
4204 | 
4205 |   <Tab title="Windows">
4206 |     <Steps>
4207 |       <Step title="Create a sample database">
4208 |         Let's create a simple SQLite database for testing:
4209 | 
4210 |         ```powershell
4211 |         # Create a new SQLite database
4212 |         $sql = @'
4213 |         CREATE TABLE products (
4214 |           id INTEGER PRIMARY KEY,
4215 |           name TEXT,
4216 |           price REAL
4217 |         );
4218 | 
4219 |         INSERT INTO products (name, price) VALUES
4220 |           ('Widget', 19.99),
4221 |           ('Gadget', 29.99),
4222 |           ('Gizmo', 39.99),
4223 |           ('Smart Watch', 199.99),
4224 |           ('Wireless Earbuds', 89.99),
4225 |           ('Portable Charger', 24.99),
4226 |           ('Bluetooth Speaker', 79.99),
4227 |           ('Phone Stand', 15.99),
4228 |           ('Laptop Sleeve', 34.99),
4229 |           ('Mini Drone', 299.99),
4230 |           ('LED Desk Lamp', 45.99),
4231 |           ('Keyboard', 129.99),
4232 |           ('Mouse Pad', 12.99),
4233 |           ('USB Hub', 49.99),
4234 |           ('Webcam', 69.99),
4235 |           ('Screen Protector', 9.99),
4236 |           ('Travel Adapter', 27.99),
4237 |           ('Gaming Headset', 159.99),
4238 |           ('Fitness Tracker', 119.99),
4239 |           ('Portable SSD', 179.99);
4240 |         '@
4241 | 
4242 |         cd ~
4243 |         & sqlite3 test.db $sql
4244 |         ```
4245 |       </Step>
4246 | 
4247 |       <Step title="Configure Claude Desktop">
4248 |         Open your Claude Desktop App configuration at `%APPDATA%\Claude\claude_desktop_config.json` in a text editor.
4249 | 
4250 |         For example, if you have [VS Code](https://code.visualstudio.com/) installed:
4251 | 
4252 |         ```powershell
4253 |         code $env:AppData\Claude\claude_desktop_config.json
4254 |         ```
4255 | 
4256 |         Add this configuration (replace YOUR\_USERNAME with your actual username):
4257 | 
4258 |         ```json
4259 |         {
4260 |           "mcpServers": {
4261 |             "sqlite": {
4262 |               "command": "uvx",
4263 |               "args": [
4264 |                 "mcp-server-sqlite",
4265 |                 "--db-path",
4266 |                 "C:\\Users\\YOUR_USERNAME\\test.db"
4267 |               ]
4268 |             }
4269 |           }
4270 |         }
4271 |         ```
4272 | 
4273 |         This tells Claude Desktop:
4274 | 
4275 |         1.  There's an MCP server named "sqlite"
4276 |         2.  Launch it by running `uvx mcp-server-sqlite`
4277 |         3.  Connect it to your test database
4278 | 
4279 |         Save the file, and restart **Claude Desktop**.
4280 |       </Step>
4281 |     </Steps>
4282 |   </Tab>
4283 | </Tabs>
4284 | 
4285 | ## Test it out
4286 | 
4287 | Let's verify everything is working. Try sending this prompt to Claude Desktop:
4288 | 
4289 | ```
4290 | Can you connect to my SQLite database and tell me what products are available, and their prices?
4291 | ```
4292 | 
4293 | Claude Desktop will:
4294 | 
4295 | 1.  Connect to the SQLite MCP server
4296 | 2.  Query your local database
4297 | 3.  Format and present the results
4298 | 
4299 | <Frame caption="Claude Desktop successfully queries our SQLite database 🎉">
4300 |   <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/quickstart-screenshot.png" alt="Example Claude Desktop conversation showing database query results" />
4301 | </Frame>
4302 | 
4303 | ## What's happening under the hood?
4304 | 
4305 | When you interact with Claude Desktop using MCP:
4306 | 
4307 | 1.  **Server Discovery**: Claude Desktop connects to your configured MCP servers on startup
4308 | 
4309 | 2.  **Protocol Handshake**: When you ask about data, Claude Desktop:
4310 |     *   Identifies which MCP server can help (sqlite in this case)
4311 |     *   Negotiates capabilities through the protocol
4312 |     *   Requests data or actions from the MCP server
4313 | 
4314 | 3.  **Interaction Flow**:
4315 |     ```mermaid
4316 |     sequenceDiagram
4317 |         participant C as Claude Desktop
4318 |         participant M as MCP Server
4319 |         participant D as SQLite DB
4320 | 
4321 |         C->>M: Initialize connection
4322 |         M-->>C: Available capabilities
4323 | 
4324 |         C->>M: Query request
4325 |         M->>D: SQL query
4326 |         D-->>M: Results
4327 |         M-->>C: Formatted results
4328 |     ```
4329 | 
4330 | 4.  **Security**:
4331 |     *   MCP servers only expose specific, controlled capabilities
4332 |     *   MCP servers run locally on your machine, and the resources they access are not exposed to the internet
4333 |     *   Claude Desktop requires user confirmation for sensitive operations
4334 | 
4335 | ## Try these examples
4336 | 
4337 | Now that MCP is working, try these increasingly powerful examples:
4338 | 
4339 | <AccordionGroup>
4340 |   <Accordion title="Basic Queries" active>
4341 |     ```
4342 |     What's the average price of all products in the database?
4343 |     ```
4344 |   </Accordion>
4345 | 
4346 |   <Accordion title="Data Analysis">
4347 |     ```
4348 |     Can you analyze the price distribution and suggest any pricing optimizations?
4349 |     ```
4350 |   </Accordion>
4351 | 
4352 |   <Accordion title="Complex Operations">
4353 |     ```
4354 |     Could you help me design and create a new table for storing customer orders?
4355 |     ```
4356 |   </Accordion>
4357 | </AccordionGroup>
4358 | 
4359 | ## Add more capabilities
4360 | 
4361 | Want to give Claude Desktop more local integration capabilities? Add these servers to your configuration:
4362 | 
4363 | <Note>
4364 |   Note that these MCP servers will require [Node.js](https://nodejs.org/en) to be installed on your machine.
4365 | </Note>
4366 | 
4367 | <AccordionGroup>
4368 |   <Accordion title="File System Access" icon="folder-open">
4369 |     Add this to your config to let Claude Desktop read and analyze files:
4370 | 
4371 |     ```json
4372 |     "filesystem": {
4373 |       "command": "npx",
4374 |       "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/YOUR_USERNAME/Desktop"]
4375 |     }
4376 |     ```
4377 |   </Accordion>
4378 | 
4379 |   <Accordion title="PostgreSQL Connection" icon="database">
4380 |     Connect Claude Desktop to your PostgreSQL database:
4381 | 
4382 |     ```json
4383 |     "postgres": {
4384 |       "command": "npx",
4385 |       "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost/mydb"]
4386 |     }
4387 |     ```
4388 |   </Accordion>
4389 | </AccordionGroup>
4390 | 
4391 | ## More MCP Clients
4392 | 
4393 | While this guide demonstrates MCP using Claude Desktop as a client, several other applications support MCP integration:
4394 | 
4395 | <CardGroup cols={2}>
4396 |   <Card title="Zed Editor" icon="pen-to-square" href="https://zed.dev">
4397 |     A high-performance, multiplayer code editor with built-in MCP support for AI-powered coding assistance
4398 |   </Card>
4399 | 
4400 |   <Card title="Cody" icon="magnifying-glass" href="https://sourcegraph.com/cody">
4401 |     Code intelligence platform featuring MCP integration for enhanced code search and analysis capabilities
4402 |   </Card>
4403 | </CardGroup>
4404 | 
4405 | Each host application may implement MCP features differently or support different capabilities. Check their respective documentation for specific setup instructions and supported features.
4406 | 
4407 | ## Troubleshooting
4408 | 
4409 | <AccordionGroup>
4410 |   <Accordion title="Nothing showing up in Claude Desktop?">
4411 |     1.  Check if MCP is enabled:
4412 |         *   Click the 🔌 icon in Claude Desktop, next to the chat box
4413 |         *   Expand "Installed MCP Servers"
4414 |         *   You should see your configured servers
4415 | 
4416 |     2.  Verify your config:
4417 |         *   From Claude Desktop, go to Claude > Settings…
4418 |         *   Open the "Developer" tab to see your configuration
4419 | 
4420 |     3.  Restart Claude Desktop completely:
4421 |         *   Quit the app (not just close the window)
4422 |         *   Start it again
4423 |   </Accordion>
4424 | 
4425 |   <Accordion title="MCP or database errors?">
4426 |     1.  Check Claude Desktop's logs:
4427 |         ```bash
4428 |         tail -n 20 -f ~/Library/Logs/Claude/mcp*.log
4429 |         ```
4430 | 
4431 |     2.  Verify database access:
4432 |         ```bash
4433 |         # Test database connection
4434 |         sqlite3 ~/test.db ".tables"
4435 |         ```
4436 | 
4437 |     3.  Common fixes:
4438 |         *   Check file paths in your config
4439 |         *   Verify database file permissions
4440 |         *   Ensure SQLite is installed properly
4441 |   </Accordion>
4442 | </AccordionGroup>
4443 | 
4444 | ## Next steps
4445 | 
4446 | <CardGroup cols={2}>
4447 |   <Card title="Build your first MCP server" icon="code" href="/docs/first-server/python">
4448 |     Create your own MCP servers to give your LLM clients new capabilities.
4449 |   </Card>
4450 | 
4451 |   <Card title="Explore examples" icon="github" href="https://github.com/modelcontextprotocol/servers">
4452 |     Browse our collection of example servers to see what's possible.
4453 |   </Card>
4454 | </CardGroup>
4455 | 
4456 | 
4457 | 
```