This is page 2 of 2. Use http://codebase.md/evalstate/mcp-miro?lines=true&page={x} to view the full context. # Directory Structure ``` ├── .gitignore ├── 2024-12-02-screenshot_1.png ├── LICENSE ├── package-lock.json ├── package.json ├── prompts │ ├── 01-oauthtoken.md │ ├── 02-boards-as-resources.md │ └── ref │ ├── mcp-llms.txt.md │ └── mcp-types.ts ├── README.md ├── resources │ └── boards-key-facts.md ├── src │ ├── index.ts │ └── MiroClient.ts └── tsconfig.json ``` # Files -------------------------------------------------------------------------------- /prompts/ref/mcp-llms.txt.md: -------------------------------------------------------------------------------- ```markdown 1 | # Clients 2 | 3 | A list of applications that support MCP integrations 4 | 5 | This page provides an overview of applications that support the Model Context Protocol (MCP). Each client may support different MCP features, allowing for varying levels of integration with MCP servers. 6 | 7 | ## Feature support matrix 8 | 9 | | Client | [Resources] | [Prompts] | [Tools] | [Sampling] | Roots | Notes | 10 | | ---------------------------- | ----------- | --------- | ------- | ---------- | ----- | ---------------------------------- | 11 | | [Claude Desktop App][Claude] | ✅ | ✅ | ✅ | ❌ | ❌ | Full support for all MCP features | 12 | | [Zed][Zed] | ❌ | ✅ | ❌ | ❌ | ❌ | Prompts appear as slash commands | 13 | | [Sourcegraph Cody][Cody] | ✅ | ❌ | ❌ | ❌ | ❌ | Supports resources through OpenCTX | 14 | 15 | [Claude]: https://claude.ai/download 16 | 17 | [Zed]: https://zed.dev 18 | 19 | [Cody]: https://sourcegraph.com/cody 20 | 21 | [Resources]: https://modelcontextprotocol.io/docs/concepts/resources 22 | 23 | [Prompts]: https://modelcontextprotocol.io/docs/concepts/prompts 24 | 25 | [Tools]: https://modelcontextprotocol.io/docs/concepts/tools 26 | 27 | [Sampling]: https://modelcontextprotocol.io/docs/concepts/sampling 28 | 29 | ## Client details 30 | 31 | ### Claude Desktop App 32 | 33 | The Claude desktop application provides comprehensive support for MCP, enabling deep integration with local tools and data sources. 34 | 35 | **Key features:** 36 | 37 | * Full support for resources, allowing attachment of local files and data 38 | * Support for prompt templates 39 | * Tool integration for executing commands and scripts 40 | * Local server connections for enhanced privacy and security 41 | 42 | > ⓘ Note: The Claude.ai web application does not currently support MCP. MCP features are only available in the desktop application. 43 | 44 | ### Zed 45 | 46 | [Zed](https://zed.dev/docs/assistant/model-context-protocol) is a high-performance code editor with built-in MCP support, focusing on prompt templates and tool integration. 47 | 48 | **Key features:** 49 | 50 | * Prompt templates surface as slash commands in the editor 51 | * Tool integration for enhanced coding workflows 52 | * Tight integration with editor features and workspace context 53 | * Does not support MCP resources 54 | 55 | ### Sourcegraph Cody 56 | 57 | [Cody](https://openctx.org/docs/providers/modelcontextprotocol) is Sourcegraph's AI coding assistant, which implements MCP through OpenCTX. 
58 | 59 | **Key features:** 60 | 61 | * Support for MCP resources 62 | * Integration with Sourcegraph's code intelligence 63 | * Uses OpenCTX as an abstraction layer 64 | * Future support planned for additional MCP features 65 | 66 | ## Adding MCP support to your application 67 | 68 | If you've added MCP support to your application, we encourage you to submit a pull request to add it to this list. MCP integration can provide your users with powerful contextual AI capabilities and make your application part of the growing MCP ecosystem. 69 | 70 | Benefits of adding MCP support: 71 | 72 | * Enable users to bring their own context and tools 73 | * Join a growing ecosystem of interoperable AI applications 74 | * Provide users with flexible integration options 75 | * Support local-first AI workflows 76 | 77 | To get started with implementing MCP in your application, check out our [Python](https://github.com/modelcontextprotocol/python-sdk) or [TypeScript SDK Documentation](https://github.com/modelcontextprotocol/typescript-sdk) 78 | 79 | ## Updates and corrections 80 | 81 | This list is maintained by the community. If you notice any inaccuracies or would like to update information about MCP support in your application, please submit a pull request or [open an issue in our documentation repository](https://github.com/modelcontextprotocol/docs/issues). 82 | 83 | 84 | # Core architecture 85 | 86 | Understand how MCP connects clients, servers, and LLMs 87 | 88 | The Model Context Protocol (MCP) is built on a flexible, extensible architecture that enables seamless communication between LLM applications and integrations. This document covers the core architectural components and concepts. 89 | 90 | ## Overview 91 | 92 | MCP follows a client-server architecture where: 93 | 94 | * **Hosts** are LLM applications (like Claude Desktop or IDEs) that initiate connections 95 | * **Clients** maintain 1:1 connections with servers, inside the host application 96 | * **Servers** provide context, tools, and prompts to clients 97 | 98 | ```mermaid 99 | flowchart LR 100 | subgraph " Host (e.g., Claude Desktop) " 101 | client1[MCP Client] 102 | client2[MCP Client] 103 | end 104 | subgraph "Server Process" 105 | server1[MCP Server] 106 | end 107 | subgraph "Server Process" 108 | server2[MCP Server] 109 | end 110 | 111 | client1 <-->|Transport Layer| server1 112 | client2 <-->|Transport Layer| server2 113 | ``` 114 | 115 | ## Core components 116 | 117 | ### Protocol layer 118 | 119 | The protocol layer handles message framing, request/response linking, and high-level communication patterns. 
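Concretely, request/response linking relies on the JSON-RPC `id`: a response carries the same `id` as the request it answers, which is how the protocol layer matches replies to pending requests. An illustrative pair (the values are arbitrary):

```typescript
// Request (id chosen by the sender)
{
  jsonrpc: "2.0",
  id: 1,
  method: "resources/list"
}

// Response (same id, matched back to the pending request)
{
  jsonrpc: "2.0",
  id: 1,
  result: { resources: [] }
}
```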
120 | 121 | <Tabs> 122 | <Tab title="TypeScript"> 123 | ```typescript 124 | class Protocol<Request, Notification, Result> { 125 | // Handle incoming requests 126 | setRequestHandler<T>(schema: T, handler: (request: T, extra: RequestHandlerExtra) => Promise<Result>): void 127 | 128 | // Handle incoming notifications 129 | setNotificationHandler<T>(schema: T, handler: (notification: T) => Promise<void>): void 130 | 131 | // Send requests and await responses 132 | request<T>(request: Request, schema: T, options?: RequestOptions): Promise<T> 133 | 134 | // Send one-way notifications 135 | notification(notification: Notification): Promise<void> 136 | } 137 | ``` 138 | </Tab> 139 | 140 | <Tab title="Python"> 141 | ```python 142 | class Session(BaseSession[RequestT, NotificationT, ResultT]): 143 | async def send_request( 144 | self, 145 | request: RequestT, 146 | result_type: type[Result] 147 | ) -> Result: 148 | """ 149 | Send request and wait for response. Raises McpError if response contains error. 150 | """ 151 | # Request handling implementation 152 | 153 | async def send_notification( 154 | self, 155 | notification: NotificationT 156 | ) -> None: 157 | """Send one-way notification that doesn't expect response.""" 158 | # Notification handling implementation 159 | 160 | async def _received_request( 161 | self, 162 | responder: RequestResponder[ReceiveRequestT, ResultT] 163 | ) -> None: 164 | """Handle incoming request from other side.""" 165 | # Request handling implementation 166 | 167 | async def _received_notification( 168 | self, 169 | notification: ReceiveNotificationT 170 | ) -> None: 171 | """Handle incoming notification from other side.""" 172 | # Notification handling implementation 173 | ``` 174 | </Tab> 175 | </Tabs> 176 | 177 | Key classes include: 178 | 179 | * `Protocol` 180 | * `Client` 181 | * `Server` 182 | 183 | ### Transport layer 184 | 185 | The transport layer handles the actual communication between clients and servers. MCP supports multiple transport mechanisms: 186 | 187 | 1. **Stdio transport** 188 | * Uses standard input/output for communication 189 | * Ideal for local processes 190 | 191 | 2. **HTTP with SSE transport** 192 | * Uses Server-Sent Events for server-to-client messages 193 | * HTTP POST for client-to-server messages 194 | 195 | All transports use [JSON-RPC](https://www.jsonrpc.org/) 2.0 to exchange messages. See the [specification](https://spec.modelcontextprotocol.io) for detailed information about the Model Context Protocol message format. 196 | 197 | ### Message types 198 | 199 | MCP has these main types of messages: 200 | 201 | 1. **Requests** expect a response from the other side: 202 | ```typescript 203 | interface Request { 204 | method: string; 205 | params?: { ... }; 206 | } 207 | ``` 208 | 209 | 2. **Notifications** are one-way messages that don't expect a response: 210 | ```typescript 211 | interface Notification { 212 | method: string; 213 | params?: { ... }; 214 | } 215 | ``` 216 | 217 | 3. **Results** are successful responses to requests: 218 | ```typescript 219 | interface Result { 220 | [key: string]: unknown; 221 | } 222 | ``` 223 | 224 | 4. **Errors** indicate that a request failed: 225 | ```typescript 226 | interface Error { 227 | code: number; 228 | message: string; 229 | data?: unknown; 230 | } 231 | ``` 232 | 233 | ## Connection lifecycle 234 | 235 | ### 1. 
Initialization 236 | 237 | ```mermaid 238 | sequenceDiagram 239 | participant Client 240 | participant Server 241 | 242 | Client->>Server: initialize request 243 | Server->>Client: initialize response 244 | Client->>Server: initialized notification 245 | 246 | Note over Client,Server: Connection ready for use 247 | ``` 248 | 249 | 1. Client sends `initialize` request with protocol version and capabilities 250 | 2. Server responds with its protocol version and capabilities 251 | 3. Client sends `initialized` notification as acknowledgment 252 | 4. Normal message exchange begins 253 | 254 | ### 2. Message exchange 255 | 256 | After initialization, the following patterns are supported: 257 | 258 | * **Request-Response**: Client or server sends requests, the other responds 259 | * **Notifications**: Either party sends one-way messages 260 | 261 | ### 3. Termination 262 | 263 | Either party can terminate the connection: 264 | 265 | * Clean shutdown via `close()` 266 | * Transport disconnection 267 | * Error conditions 268 | 269 | ## Error handling 270 | 271 | MCP defines these standard error codes: 272 | 273 | ```typescript 274 | enum ErrorCode { 275 | // Standard JSON-RPC error codes 276 | ParseError = -32700, 277 | InvalidRequest = -32600, 278 | MethodNotFound = -32601, 279 | InvalidParams = -32602, 280 | InternalError = -32603 281 | } 282 | ``` 283 | 284 | SDKs and applications can define their own error codes above -32000. 285 | 286 | Errors are propagated through: 287 | 288 | * Error responses to requests 289 | * Error events on transports 290 | * Protocol-level error handlers 291 | 292 | ## Implementation example 293 | 294 | Here's a basic example of implementing an MCP server: 295 | 296 | <Tabs> 297 | <Tab title="TypeScript"> 298 | ```typescript 299 | import { Server } from "@modelcontextprotocol/sdk/server/index.js"; 300 | import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js"; 301 | 302 | const server = new Server({ 303 | name: "example-server", 304 | version: "1.0.0" 305 | }, { 306 | capabilities: { 307 | resources: {} 308 | } 309 | }); 310 | 311 | // Handle requests 312 | server.setRequestHandler(ListResourcesRequestSchema, async () => { 313 | return { 314 | resources: [ 315 | { 316 | uri: "example://resource", 317 | name: "Example Resource" 318 | } 319 | ] 320 | }; 321 | }); 322 | 323 | // Connect transport 324 | const transport = new StdioServerTransport(); 325 | await server.connect(transport); 326 | ``` 327 | </Tab> 328 | 329 | <Tab title="Python"> 330 | ```python 331 | import asyncio 332 | import mcp.types as types 333 | from mcp.server import Server 334 | from mcp.server.stdio import stdio_server 335 | 336 | app = Server("example-server") 337 | 338 | @app.list_resources() 339 | async def list_resources() -> list[types.Resource]: 340 | return [ 341 | types.Resource( 342 | uri="example://resource", 343 | name="Example Resource" 344 | ) 345 | ] 346 | 347 | async def main(): 348 | async with stdio_server() as streams: 349 | await app.run( 350 | streams[0], 351 | streams[1], 352 | app.create_initialization_options() 353 | ) 354 | 355 | if __name__ == "__main__": 356 | asyncio.run(main) 357 | ``` 358 | </Tab> 359 | </Tabs> 360 | 361 | ## Best practices 362 | 363 | ### Transport selection 364 | 365 | 1. **Local communication** 366 | * Use stdio transport for local processes 367 | * Efficient for same-machine communication 368 | * Simple process management 369 | 370 | 2. 
**Remote communication** 371 | * Use SSE for scenarios requiring HTTP compatibility 372 | * Consider security implications including authentication and authorization 373 | 374 | ### Message handling 375 | 376 | 1. **Request processing** 377 | * Validate inputs thoroughly 378 | * Use type-safe schemas 379 | * Handle errors gracefully 380 | * Implement timeouts 381 | 382 | 2. **Progress reporting** 383 | * Use progress tokens for long operations 384 | * Report progress incrementally 385 | * Include total progress when known 386 | 387 | 3. **Error management** 388 | * Use appropriate error codes 389 | * Include helpful error messages 390 | * Clean up resources on errors 391 | 392 | ## Security considerations 393 | 394 | 1. **Transport security** 395 | * Use TLS for remote connections 396 | * Validate connection origins 397 | * Implement authentication when needed 398 | 399 | 2. **Message validation** 400 | * Validate all incoming messages 401 | * Sanitize inputs 402 | * Check message size limits 403 | * Verify JSON-RPC format 404 | 405 | 3. **Resource protection** 406 | * Implement access controls 407 | * Validate resource paths 408 | * Monitor resource usage 409 | * Rate limit requests 410 | 411 | 4. **Error handling** 412 | * Don't leak sensitive information 413 | * Log security-relevant errors 414 | * Implement proper cleanup 415 | * Handle DoS scenarios 416 | 417 | ## Debugging and monitoring 418 | 419 | 1. **Logging** 420 | * Log protocol events 421 | * Track message flow 422 | * Monitor performance 423 | * Record errors 424 | 425 | 2. **Diagnostics** 426 | * Implement health checks 427 | * Monitor connection state 428 | * Track resource usage 429 | * Profile performance 430 | 431 | 3. **Testing** 432 | * Test different transports 433 | * Verify error handling 434 | * Check edge cases 435 | * Load test servers 436 | 437 | 438 | # Prompts 439 | 440 | Create reusable prompt templates and workflows 441 | 442 | Prompts enable servers to define reusable prompt templates and workflows that clients can easily surface to users and LLMs. They provide a powerful way to standardize and share common LLM interactions. 443 | 444 | <Note> 445 | Prompts are designed to be **user-controlled**, meaning they are exposed from servers to clients with the intention of the user being able to explicitly select them for use. 
446 | </Note> 447 | 448 | ## Overview 449 | 450 | Prompts in MCP are predefined templates that can: 451 | 452 | * Accept dynamic arguments 453 | * Include context from resources 454 | * Chain multiple interactions 455 | * Guide specific workflows 456 | * Surface as UI elements (like slash commands) 457 | 458 | ## Prompt structure 459 | 460 | Each prompt is defined with: 461 | 462 | ```typescript 463 | { 464 | name: string; // Unique identifier for the prompt 465 | description?: string; // Human-readable description 466 | arguments?: [ // Optional list of arguments 467 | { 468 | name: string; // Argument identifier 469 | description?: string; // Argument description 470 | required?: boolean; // Whether argument is required 471 | } 472 | ] 473 | } 474 | ``` 475 | 476 | ## Discovering prompts 477 | 478 | Clients can discover available prompts through the `prompts/list` endpoint: 479 | 480 | ```typescript 481 | // Request 482 | { 483 | method: "prompts/list" 484 | } 485 | 486 | // Response 487 | { 488 | prompts: [ 489 | { 490 | name: "analyze-code", 491 | description: "Analyze code for potential improvements", 492 | arguments: [ 493 | { 494 | name: "language", 495 | description: "Programming language", 496 | required: true 497 | } 498 | ] 499 | } 500 | ] 501 | } 502 | ``` 503 | 504 | ## Using prompts 505 | 506 | To use a prompt, clients make a `prompts/get` request: 507 | 508 | ````typescript 509 | // Request 510 | { 511 | method: "prompts/get", 512 | params: { 513 | name: "analyze-code", 514 | arguments: { 515 | language: "python" 516 | } 517 | } 518 | } 519 | 520 | // Response 521 | { 522 | description: "Analyze Python code for potential improvements", 523 | messages: [ 524 | { 525 | role: "user", 526 | content: { 527 | type: "text", 528 | text: "Please analyze the following Python code for potential improvements:\n\n```python\ndef calculate_sum(numbers):\n total = 0\n for num in numbers:\n total = total + num\n return total\n\nresult = calculate_sum([1, 2, 3, 4, 5])\nprint(result)\n```" 529 | } 530 | } 531 | ] 532 | } 533 | ```` 534 | 535 | ## Dynamic prompts 536 | 537 | Prompts can be dynamic and include: 538 | 539 | ### Embedded resource context 540 | 541 | ```json 542 | { 543 | "name": "analyze-project", 544 | "description": "Analyze project logs and code", 545 | "arguments": [ 546 | { 547 | "name": "timeframe", 548 | "description": "Time period to analyze logs", 549 | "required": true 550 | }, 551 | { 552 | "name": "fileUri", 553 | "description": "URI of code file to review", 554 | "required": true 555 | } 556 | ] 557 | } 558 | ``` 559 | 560 | When handling the `prompts/get` request: 561 | 562 | ```json 563 | { 564 | "messages": [ 565 | { 566 | "role": "user", 567 | "content": { 568 | "type": "text", 569 | "text": "Analyze these system logs and the code file for any issues:" 570 | } 571 | }, 572 | { 573 | "role": "user", 574 | "content": { 575 | "type": "resource", 576 | "resource": { 577 | "uri": "logs://recent?timeframe=1h", 578 | "text": "[2024-03-14 15:32:11] ERROR: Connection timeout in network.py:127\n[2024-03-14 15:32:15] WARN: Retrying connection (attempt 2/3)\n[2024-03-14 15:32:20] ERROR: Max retries exceeded", 579 | "mimeType": "text/plain" 580 | } 581 | } 582 | }, 583 | { 584 | "role": "user", 585 | "content": { 586 | "type": "resource", 587 | "resource": { 588 | "uri": "file:///path/to/code.py", 589 | "text": "def connect_to_service(timeout=30):\n retries = 3\n for attempt in range(retries):\n try:\n return establish_connection(timeout)\n except TimeoutError:\n if attempt == 
retries - 1:\n raise\n time.sleep(5)\n\ndef establish_connection(timeout):\n # Connection implementation\n pass", 590 | "mimeType": "text/x-python" 591 | } 592 | } 593 | } 594 | ] 595 | } 596 | ``` 597 | 598 | ### Multi-step workflows 599 | 600 | ```typescript 601 | const debugWorkflow = { 602 | name: "debug-error", 603 | async getMessages(error: string) { 604 | return [ 605 | { 606 | role: "user", 607 | content: { 608 | type: "text", 609 | text: `Here's an error I'm seeing: ${error}` 610 | } 611 | }, 612 | { 613 | role: "assistant", 614 | content: { 615 | type: "text", 616 | text: "I'll help analyze this error. What have you tried so far?" 617 | } 618 | }, 619 | { 620 | role: "user", 621 | content: { 622 | type: "text", 623 | text: "I've tried restarting the service, but the error persists." 624 | } 625 | } 626 | ]; 627 | } 628 | }; 629 | ``` 630 | 631 | ## Example implementation 632 | 633 | Here's a complete example of implementing prompts in an MCP server: 634 | 635 | <Tabs> 636 | <Tab title="TypeScript"> 637 | ```typescript 638 | import { Server } from "@modelcontextprotocol/sdk/server"; 639 | import { 640 | ListPromptsRequestSchema, 641 | GetPromptRequestSchema 642 | } from "@modelcontextprotocol/sdk/types"; 643 | 644 | const PROMPTS = { 645 | "git-commit": { 646 | name: "git-commit", 647 | description: "Generate a Git commit message", 648 | arguments: [ 649 | { 650 | name: "changes", 651 | description: "Git diff or description of changes", 652 | required: true 653 | } 654 | ] 655 | }, 656 | "explain-code": { 657 | name: "explain-code", 658 | description: "Explain how code works", 659 | arguments: [ 660 | { 661 | name: "code", 662 | description: "Code to explain", 663 | required: true 664 | }, 665 | { 666 | name: "language", 667 | description: "Programming language", 668 | required: false 669 | } 670 | ] 671 | } 672 | }; 673 | 674 | const server = new Server({ 675 | name: "example-prompts-server", 676 | version: "1.0.0" 677 | }, { 678 | capabilities: { 679 | prompts: {} 680 | } 681 | }); 682 | 683 | // List available prompts 684 | server.setRequestHandler(ListPromptsRequestSchema, async () => { 685 | return { 686 | prompts: Object.values(PROMPTS) 687 | }; 688 | }); 689 | 690 | // Get specific prompt 691 | server.setRequestHandler(GetPromptRequestSchema, async (request) => { 692 | const prompt = PROMPTS[request.params.name]; 693 | if (!prompt) { 694 | throw new Error(`Prompt not found: ${request.params.name}`); 695 | } 696 | 697 | if (request.params.name === "git-commit") { 698 | return { 699 | messages: [ 700 | { 701 | role: "user", 702 | content: { 703 | type: "text", 704 | text: `Generate a concise but descriptive commit message for these changes:\n\n${request.params.arguments?.changes}` 705 | } 706 | } 707 | ] 708 | }; 709 | } 710 | 711 | if (request.params.name === "explain-code") { 712 | const language = request.params.arguments?.language || "Unknown"; 713 | return { 714 | messages: [ 715 | { 716 | role: "user", 717 | content: { 718 | type: "text", 719 | text: `Explain how this ${language} code works:\n\n${request.params.arguments?.code}` 720 | } 721 | } 722 | ] 723 | }; 724 | } 725 | 726 | throw new Error("Prompt implementation not found"); 727 | }); 728 | ``` 729 | </Tab> 730 | 731 | <Tab title="Python"> 732 | ```python 733 | from mcp.server import Server 734 | import mcp.types as types 735 | 736 | # Define available prompts 737 | PROMPTS = { 738 | "git-commit": types.Prompt( 739 | name="git-commit", 740 | description="Generate a Git commit message", 741 | arguments=[ 742 | 
types.PromptArgument( 743 | name="changes", 744 | description="Git diff or description of changes", 745 | required=True 746 | ) 747 | ], 748 | ), 749 | "explain-code": types.Prompt( 750 | name="explain-code", 751 | description="Explain how code works", 752 | arguments=[ 753 | types.PromptArgument( 754 | name="code", 755 | description="Code to explain", 756 | required=True 757 | ), 758 | types.PromptArgument( 759 | name="language", 760 | description="Programming language", 761 | required=False 762 | ) 763 | ], 764 | ) 765 | } 766 | 767 | # Initialize server 768 | app = Server("example-prompts-server") 769 | 770 | @app.list_prompts() 771 | async def list_prompts() -> list[types.Prompt]: 772 | return list(PROMPTS.values()) 773 | 774 | @app.get_prompt() 775 | async def get_prompt( 776 | name: str, arguments: dict[str, str] | None = None 777 | ) -> types.GetPromptResult: 778 | if name not in PROMPTS: 779 | raise ValueError(f"Prompt not found: {name}") 780 | 781 | if name == "git-commit": 782 | changes = arguments.get("changes") if arguments else "" 783 | return types.GetPromptResult( 784 | messages=[ 785 | types.PromptMessage( 786 | role="user", 787 | content=types.TextContent( 788 | type="text", 789 | text=f"Generate a concise but descriptive commit message " 790 | f"for these changes:\n\n{changes}" 791 | ) 792 | ) 793 | ] 794 | ) 795 | 796 | if name == "explain-code": 797 | code = arguments.get("code") if arguments else "" 798 | language = arguments.get("language", "Unknown") if arguments else "Unknown" 799 | return types.GetPromptResult( 800 | messages=[ 801 | types.PromptMessage( 802 | role="user", 803 | content=types.TextContent( 804 | type="text", 805 | text=f"Explain how this {language} code works:\n\n{code}" 806 | ) 807 | ) 808 | ] 809 | ) 810 | 811 | raise ValueError("Prompt implementation not found") 812 | ``` 813 | </Tab> 814 | </Tabs> 815 | 816 | ## Best practices 817 | 818 | When implementing prompts: 819 | 820 | 1. Use clear, descriptive prompt names 821 | 2. Provide detailed descriptions for prompts and arguments 822 | 3. Validate all required arguments 823 | 4. Handle missing arguments gracefully 824 | 5. Consider versioning for prompt templates 825 | 6. Cache dynamic content when appropriate 826 | 7. Implement error handling 827 | 8. Document expected argument formats 828 | 9. Consider prompt composability 829 | 10. Test prompts with various inputs 830 | 831 | ## UI integration 832 | 833 | Prompts can be surfaced in client UIs as: 834 | 835 | * Slash commands 836 | * Quick actions 837 | * Context menu items 838 | * Command palette entries 839 | * Guided workflows 840 | * Interactive forms 841 | 842 | ## Updates and changes 843 | 844 | Servers can notify clients about prompt changes: 845 | 846 | 1. Server capability: `prompts.listChanged` 847 | 2. Notification: `notifications/prompts/list_changed` 848 | 3. 
Client re-fetches prompt list 849 | 850 | ## Security considerations 851 | 852 | When implementing prompts: 853 | 854 | * Validate all arguments 855 | * Sanitize user input 856 | * Consider rate limiting 857 | * Implement access controls 858 | * Audit prompt usage 859 | * Handle sensitive data appropriately 860 | * Validate generated content 861 | * Implement timeouts 862 | * Consider prompt injection risks 863 | * Document security requirements 864 | 865 | 866 | # Resources 867 | 868 | Expose data and content from your servers to LLMs 869 | 870 | Resources are a core primitive in the Model Context Protocol (MCP) that allow servers to expose data and content that can be read by clients and used as context for LLM interactions. 871 | 872 | <Note> 873 | Resources are designed to be **application-controlled**, meaning that the client application can decide how and when they should be used. 874 | 875 | For example, one application may require users to explicitly select resources, while another could automatically select them based on heuristics or even at the discretion of the AI model itself. 876 | </Note> 877 | 878 | ## Overview 879 | 880 | Resources represent any kind of data that an MCP server wants to make available to clients. This can include: 881 | 882 | * File contents 883 | * Database records 884 | * API responses 885 | * Live system data 886 | * Screenshots and images 887 | * Log files 888 | * And more 889 | 890 | Each resource is identified by a unique URI and can contain either text or binary data. 891 | 892 | ## Resource URIs 893 | 894 | Resources are identified using URIs that follow this format: 895 | 896 | ``` 897 | [protocol]://[host]/[path] 898 | ``` 899 | 900 | For example: 901 | 902 | * `file:///home/user/documents/report.pdf` 903 | * `postgres://database/customers/schema` 904 | * `screen://localhost/display1` 905 | 906 | The protocol and path structure is defined by the MCP server implementation. Servers can define their own custom URI schemes. 907 | 908 | ## Resource types 909 | 910 | Resources can contain two types of content: 911 | 912 | ### Text resources 913 | 914 | Text resources contain UTF-8 encoded text data. These are suitable for: 915 | 916 | * Source code 917 | * Configuration files 918 | * Log files 919 | * JSON/XML data 920 | * Plain text 921 | 922 | ### Binary resources 923 | 924 | Binary resources contain raw binary data encoded in base64. These are suitable for: 925 | 926 | * Images 927 | * PDFs 928 | * Audio files 929 | * Video files 930 | * Other non-text formats 931 | 932 | ## Resource discovery 933 | 934 | Clients can discover available resources through two main methods: 935 | 936 | ### Direct resources 937 | 938 | Servers expose a list of concrete resources via the `resources/list` endpoint. 
Each resource includes: 939 | 940 | ```typescript 941 | { 942 | uri: string; // Unique identifier for the resource 943 | name: string; // Human-readable name 944 | description?: string; // Optional description 945 | mimeType?: string; // Optional MIME type 946 | } 947 | ``` 948 | 949 | ### Resource templates 950 | 951 | For dynamic resources, servers can expose [URI templates](https://datatracker.ietf.org/doc/html/rfc6570) that clients can use to construct valid resource URIs: 952 | 953 | ```typescript 954 | { 955 | uriTemplate: string; // URI template following RFC 6570 956 | name: string; // Human-readable name for this type 957 | description?: string; // Optional description 958 | mimeType?: string; // Optional MIME type for all matching resources 959 | } 960 | ``` 961 | 962 | ## Reading resources 963 | 964 | To read a resource, clients make a `resources/read` request with the resource URI. 965 | 966 | The server responds with a list of resource contents: 967 | 968 | ```typescript 969 | { 970 | contents: [ 971 | { 972 | uri: string; // The URI of the resource 973 | mimeType?: string; // Optional MIME type 974 | 975 | // One of: 976 | text?: string; // For text resources 977 | blob?: string; // For binary resources (base64 encoded) 978 | } 979 | ] 980 | } 981 | ``` 982 | 983 | <Tip> 984 | Servers may return multiple resources in response to one `resources/read` request. This could be used, for example, to return a list of files inside a directory when the directory is read. 985 | </Tip> 986 | 987 | ## Resource updates 988 | 989 | MCP supports real-time updates for resources through two mechanisms: 990 | 991 | ### List changes 992 | 993 | Servers can notify clients when their list of available resources changes via the `notifications/resources/list_changed` notification. 994 | 995 | ### Content changes 996 | 997 | Clients can subscribe to updates for specific resources: 998 | 999 | 1. Client sends `resources/subscribe` with resource URI 1000 | 2. Server sends `notifications/resources/updated` when the resource changes 1001 | 3. Client can fetch latest content with `resources/read` 1002 | 4. 
Client can unsubscribe with `resources/unsubscribe` 1003 | 1004 | ## Example implementation 1005 | 1006 | Here's a simple example of implementing resource support in an MCP server: 1007 | 1008 | <Tabs> 1009 | <Tab title="TypeScript"> 1010 | ```typescript 1011 | const server = new Server({ 1012 | name: "example-server", 1013 | version: "1.0.0" 1014 | }, { 1015 | capabilities: { 1016 | resources: {} 1017 | } 1018 | }); 1019 | 1020 | // List available resources 1021 | server.setRequestHandler(ListResourcesRequestSchema, async () => { 1022 | return { 1023 | resources: [ 1024 | { 1025 | uri: "file:///logs/app.log", 1026 | name: "Application Logs", 1027 | mimeType: "text/plain" 1028 | } 1029 | ] 1030 | }; 1031 | }); 1032 | 1033 | // Read resource contents 1034 | server.setRequestHandler(ReadResourceRequestSchema, async (request) => { 1035 | const uri = request.params.uri; 1036 | 1037 | if (uri === "file:///logs/app.log") { 1038 | const logContents = await readLogFile(); 1039 | return { 1040 | contents: [ 1041 | { 1042 | uri, 1043 | mimeType: "text/plain", 1044 | text: logContents 1045 | } 1046 | ] 1047 | }; 1048 | } 1049 | 1050 | throw new Error("Resource not found"); 1051 | }); 1052 | ``` 1053 | </Tab> 1054 | 1055 | <Tab title="Python"> 1056 | ```python 1057 | app = Server("example-server") 1058 | 1059 | @app.list_resources() 1060 | async def list_resources() -> list[types.Resource]: 1061 | return [ 1062 | types.Resource( 1063 | uri="file:///logs/app.log", 1064 | name="Application Logs", 1065 | mimeType="text/plain" 1066 | ) 1067 | ] 1068 | 1069 | @app.read_resource() 1070 | async def read_resource(uri: AnyUrl) -> str: 1071 | if str(uri) == "file:///logs/app.log": 1072 | log_contents = await read_log_file() 1073 | return log_contents 1074 | 1075 | raise ValueError("Resource not found") 1076 | 1077 | # Start server 1078 | async with stdio_server() as streams: 1079 | await app.run( 1080 | streams[0], 1081 | streams[1], 1082 | app.create_initialization_options() 1083 | ) 1084 | ``` 1085 | </Tab> 1086 | </Tabs> 1087 | 1088 | ## Best practices 1089 | 1090 | When implementing resource support: 1091 | 1092 | 1. Use clear, descriptive resource names and URIs 1093 | 2. Include helpful descriptions to guide LLM understanding 1094 | 3. Set appropriate MIME types when known 1095 | 4. Implement resource templates for dynamic content 1096 | 5. Use subscriptions for frequently changing resources 1097 | 6. Handle errors gracefully with clear error messages 1098 | 7. Consider pagination for large resource lists 1099 | 8. Cache resource contents when appropriate 1100 | 9. Validate URIs before processing 1101 | 10. Document your custom URI schemes 1102 | 1103 | ## Security considerations 1104 | 1105 | When exposing resources: 1106 | 1107 | * Validate all resource URIs 1108 | * Implement appropriate access controls 1109 | * Sanitize file paths to prevent directory traversal 1110 | * Be cautious with binary data handling 1111 | * Consider rate limiting for resource reads 1112 | * Audit resource access 1113 | * Encrypt sensitive data in transit 1114 | * Validate MIME types 1115 | * Implement timeouts for long-running reads 1116 | * Handle resource cleanup appropriately 1117 | 1118 | 1119 | # Sampling 1120 | 1121 | Let your servers request completions from LLMs 1122 | 1123 | Sampling is a powerful MCP feature that allows servers to request LLM completions through the client, enabling sophisticated agentic behaviors while maintaining security and privacy. 
1124 | 1125 | <Info> 1126 | This feature of MCP is not yet supported in the Claude Desktop client. 1127 | </Info> 1128 | 1129 | ## How sampling works 1130 | 1131 | The sampling flow follows these steps: 1132 | 1133 | 1. Server sends a `sampling/createMessage` request to the client 1134 | 2. Client reviews the request and can modify it 1135 | 3. Client samples from an LLM 1136 | 4. Client reviews the completion 1137 | 5. Client returns the result to the server 1138 | 1139 | This human-in-the-loop design ensures users maintain control over what the LLM sees and generates. 1140 | 1141 | ## Message format 1142 | 1143 | Sampling requests use a standardized message format: 1144 | 1145 | ```typescript 1146 | { 1147 | messages: [ 1148 | { 1149 | role: "user" | "assistant", 1150 | content: { 1151 | type: "text" | "image", 1152 | 1153 | // For text: 1154 | text?: string, 1155 | 1156 | // For images: 1157 | data?: string, // base64 encoded 1158 | mimeType?: string 1159 | } 1160 | } 1161 | ], 1162 | modelPreferences?: { 1163 | hints?: [{ 1164 | name?: string // Suggested model name/family 1165 | }], 1166 | costPriority?: number, // 0-1, importance of minimizing cost 1167 | speedPriority?: number, // 0-1, importance of low latency 1168 | intelligencePriority?: number // 0-1, importance of capabilities 1169 | }, 1170 | systemPrompt?: string, 1171 | includeContext?: "none" | "thisServer" | "allServers", 1172 | temperature?: number, 1173 | maxTokens: number, 1174 | stopSequences?: string[], 1175 | metadata?: Record<string, unknown> 1176 | } 1177 | ``` 1178 | 1179 | ## Request parameters 1180 | 1181 | ### Messages 1182 | 1183 | The `messages` array contains the conversation history to send to the LLM. Each message has: 1184 | 1185 | * `role`: Either "user" or "assistant" 1186 | * `content`: The message content, which can be: 1187 | * Text content with a `text` field 1188 | * Image content with `data` (base64) and `mimeType` fields 1189 | 1190 | ### Model preferences 1191 | 1192 | The `modelPreferences` object allows servers to specify their model selection preferences: 1193 | 1194 | * `hints`: Array of model name suggestions that clients can use to select an appropriate model: 1195 | * `name`: String that can match full or partial model names (e.g. "claude-3", "sonnet") 1196 | * Clients may map hints to equivalent models from different providers 1197 | * Multiple hints are evaluated in preference order 1198 | 1199 | * Priority values (0-1 normalized): 1200 | * `costPriority`: Importance of minimizing costs 1201 | * `speedPriority`: Importance of low latency response 1202 | * `intelligencePriority`: Importance of advanced model capabilities 1203 | 1204 | Clients make the final model selection based on these preferences and their available models. 1205 | 1206 | ### System prompt 1207 | 1208 | An optional `systemPrompt` field allows servers to request a specific system prompt. The client may modify or ignore this. 1209 | 1210 | ### Context inclusion 1211 | 1212 | The `includeContext` parameter specifies what MCP context to include: 1213 | 1214 | * `"none"`: No additional context 1215 | * `"thisServer"`: Include context from the requesting server 1216 | * `"allServers"`: Include context from all connected MCP servers 1217 | 1218 | The client controls what context is actually included. 
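Putting these request parameters together, a server might ask for a capable model while leaving context selection to the client. A hypothetical `sampling/createMessage` params object (all values are illustrative, not prescribed by the protocol):

```typescript
{
  messages: [
    {
      role: "user",
      content: { type: "text", text: "Summarize the recent error logs." }
    }
  ],
  modelPreferences: {
    hints: [{ name: "claude-3" }],   // preferred model family, evaluated in order
    intelligencePriority: 0.8,       // favor capability...
    speedPriority: 0.3,              // ...over latency
    costPriority: 0.3                // ...and cost
  },
  systemPrompt: "You are a concise log analyst.",
  includeContext: "thisServer",      // the client still controls what is actually attached
  maxTokens: 200
}
```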
1219 | 1220 | ### Sampling parameters 1221 | 1222 | Fine-tune the LLM sampling with: 1223 | 1224 | * `temperature`: Controls randomness (0.0 to 1.0) 1225 | * `maxTokens`: Maximum tokens to generate 1226 | * `stopSequences`: Array of sequences that stop generation 1227 | * `metadata`: Additional provider-specific parameters 1228 | 1229 | ## Response format 1230 | 1231 | The client returns a completion result: 1232 | 1233 | ```typescript 1234 | { 1235 | model: string, // Name of the model used 1236 | stopReason?: "endTurn" | "stopSequence" | "maxTokens" | string, 1237 | role: "user" | "assistant", 1238 | content: { 1239 | type: "text" | "image", 1240 | text?: string, 1241 | data?: string, 1242 | mimeType?: string 1243 | } 1244 | } 1245 | ``` 1246 | 1247 | ## Example request 1248 | 1249 | Here's an example of requesting sampling from a client: 1250 | 1251 | ```json 1252 | { 1253 | "method": "sampling/createMessage", 1254 | "params": { 1255 | "messages": [ 1256 | { 1257 | "role": "user", 1258 | "content": { 1259 | "type": "text", 1260 | "text": "What files are in the current directory?" 1261 | } 1262 | } 1263 | ], 1264 | "systemPrompt": "You are a helpful file system assistant.", 1265 | "includeContext": "thisServer", 1266 | "maxTokens": 100 1267 | } 1268 | } 1269 | ``` 1270 | 1271 | ## Best practices 1272 | 1273 | When implementing sampling: 1274 | 1275 | 1. Always provide clear, well-structured prompts 1276 | 2. Handle both text and image content appropriately 1277 | 3. Set reasonable token limits 1278 | 4. Include relevant context through `includeContext` 1279 | 5. Validate responses before using them 1280 | 6. Handle errors gracefully 1281 | 7. Consider rate limiting sampling requests 1282 | 8. Document expected sampling behavior 1283 | 9. Test with various model parameters 1284 | 10. 
Monitor sampling costs 1285 | 1286 | ## Human in the loop controls 1287 | 1288 | Sampling is designed with human oversight in mind: 1289 | 1290 | ### For prompts 1291 | 1292 | * Clients should show users the proposed prompt 1293 | * Users should be able to modify or reject prompts 1294 | * System prompts can be filtered or modified 1295 | * Context inclusion is controlled by the client 1296 | 1297 | ### For completions 1298 | 1299 | * Clients should show users the completion 1300 | * Users should be able to modify or reject completions 1301 | * Clients can filter or modify completions 1302 | * Users control which model is used 1303 | 1304 | ## Security considerations 1305 | 1306 | When implementing sampling: 1307 | 1308 | * Validate all message content 1309 | * Sanitize sensitive information 1310 | * Implement appropriate rate limits 1311 | * Monitor sampling usage 1312 | * Encrypt data in transit 1313 | * Handle user data privacy 1314 | * Audit sampling requests 1315 | * Control cost exposure 1316 | * Implement timeouts 1317 | * Handle model errors gracefully 1318 | 1319 | ## Common patterns 1320 | 1321 | ### Agentic workflows 1322 | 1323 | Sampling enables agentic patterns like: 1324 | 1325 | * Reading and analyzing resources 1326 | * Making decisions based on context 1327 | * Generating structured data 1328 | * Handling multi-step tasks 1329 | * Providing interactive assistance 1330 | 1331 | ### Context management 1332 | 1333 | Best practices for context: 1334 | 1335 | * Request minimal necessary context 1336 | * Structure context clearly 1337 | * Handle context size limits 1338 | * Update context as needed 1339 | * Clean up stale context 1340 | 1341 | ### Error handling 1342 | 1343 | Robust error handling should: 1344 | 1345 | * Catch sampling failures 1346 | * Handle timeout errors 1347 | * Manage rate limits 1348 | * Validate responses 1349 | * Provide fallback behaviors 1350 | * Log errors appropriately 1351 | 1352 | ## Limitations 1353 | 1354 | Be aware of these limitations: 1355 | 1356 | * Sampling depends on client capabilities 1357 | * Users control sampling behavior 1358 | * Context size has limits 1359 | * Rate limits may apply 1360 | * Costs should be considered 1361 | * Model availability varies 1362 | * Response times vary 1363 | * Not all content types supported 1364 | 1365 | 1366 | # Tools 1367 | 1368 | Enable LLMs to perform actions through your server 1369 | 1370 | Tools are a powerful primitive in the Model Context Protocol (MCP) that enable servers to expose executable functionality to clients. Through tools, LLMs can interact with external systems, perform computations, and take actions in the real world. 1371 | 1372 | <Note> 1373 | Tools are designed to be **model-controlled**, meaning that tools are exposed from servers to clients with the intention of the AI model being able to automatically invoke them (with a human in the loop to grant approval). 1374 | </Note> 1375 | 1376 | ## Overview 1377 | 1378 | Tools in MCP allow servers to expose executable functions that can be invoked by clients and used by LLMs to perform actions. 
Key aspects of tools include: 1379 | 1380 | * **Discovery**: Clients can list available tools through the `tools/list` endpoint 1381 | * **Invocation**: Tools are called using the `tools/call` endpoint, where servers perform the requested operation and return results 1382 | * **Flexibility**: Tools can range from simple calculations to complex API interactions 1383 | 1384 | Like [resources](/docs/concepts/resources), tools are identified by unique names and can include descriptions to guide their usage. However, unlike resources, tools represent dynamic operations that can modify state or interact with external systems. 1385 | 1386 | ## Tool definition structure 1387 | 1388 | Each tool is defined with the following structure: 1389 | 1390 | ```typescript 1391 | { 1392 | name: string; // Unique identifier for the tool 1393 | description?: string; // Human-readable description 1394 | inputSchema: { // JSON Schema for the tool's parameters 1395 | type: "object", 1396 | properties: { ... } // Tool-specific parameters 1397 | } 1398 | } 1399 | ``` 1400 | 1401 | ## Implementing tools 1402 | 1403 | Here's an example of implementing a basic tool in an MCP server: 1404 | 1405 | <Tabs> 1406 | <Tab title="TypeScript"> 1407 | ```typescript 1408 | const server = new Server({ 1409 | name: "example-server", 1410 | version: "1.0.0" 1411 | }, { 1412 | capabilities: { 1413 | tools: {} 1414 | } 1415 | }); 1416 | 1417 | // Define available tools 1418 | server.setRequestHandler(ListToolsRequestSchema, async () => { 1419 | return { 1420 | tools: [{ 1421 | name: "calculate_sum", 1422 | description: "Add two numbers together", 1423 | inputSchema: { 1424 | type: "object", 1425 | properties: { 1426 | a: { type: "number" }, 1427 | b: { type: "number" } 1428 | }, 1429 | required: ["a", "b"] 1430 | } 1431 | }] 1432 | }; 1433 | }); 1434 | 1435 | // Handle tool execution 1436 | server.setRequestHandler(CallToolRequestSchema, async (request) => { 1437 | if (request.params.name === "calculate_sum") { 1438 | const { a, b } = request.params.arguments; 1439 | return { 1440 | toolResult: a + b 1441 | }; 1442 | } 1443 | throw new Error("Tool not found"); 1444 | }); 1445 | ``` 1446 | </Tab> 1447 | 1448 | <Tab title="Python"> 1449 | ```python 1450 | app = Server("example-server") 1451 | 1452 | @app.list_tools() 1453 | async def list_tools() -> list[types.Tool]: 1454 | return [ 1455 | types.Tool( 1456 | name="calculate_sum", 1457 | description="Add two numbers together", 1458 | inputSchema={ 1459 | "type": "object", 1460 | "properties": { 1461 | "a": {"type": "number"}, 1462 | "b": {"type": "number"} 1463 | }, 1464 | "required": ["a", "b"] 1465 | } 1466 | ) 1467 | ] 1468 | 1469 | @app.call_tool() 1470 | async def call_tool( 1471 | name: str, 1472 | arguments: dict 1473 | ) -> list[types.TextContent | types.ImageContent | types.EmbeddedResource]: 1474 | if name == "calculate_sum": 1475 | a = arguments["a"] 1476 | b = arguments["b"] 1477 | result = a + b 1478 | return [types.TextContent(type="text", text=str(result))] 1479 | raise ValueError(f"Tool not found: {name}") 1480 | ``` 1481 | </Tab> 1482 | </Tabs> 1483 | 1484 | ## Example tool patterns 1485 | 1486 | Here are some examples of types of tools that a server could provide: 1487 | 1488 | ### System operations 1489 | 1490 | Tools that interact with the local system: 1491 | 1492 | ```typescript 1493 | { 1494 | name: "execute_command", 1495 | description: "Run a shell command", 1496 | inputSchema: { 1497 | type: "object", 1498 | properties: { 1499 | command: { type: "string" }, 1500 
| args: { type: "array", items: { type: "string" } } 1501 | } 1502 | } 1503 | } 1504 | ``` 1505 | 1506 | ### API integrations 1507 | 1508 | Tools that wrap external APIs: 1509 | 1510 | ```typescript 1511 | { 1512 | name: "github_create_issue", 1513 | description: "Create a GitHub issue", 1514 | inputSchema: { 1515 | type: "object", 1516 | properties: { 1517 | title: { type: "string" }, 1518 | body: { type: "string" }, 1519 | labels: { type: "array", items: { type: "string" } } 1520 | } 1521 | } 1522 | } 1523 | ``` 1524 | 1525 | ### Data processing 1526 | 1527 | Tools that transform or analyze data: 1528 | 1529 | ```typescript 1530 | { 1531 | name: "analyze_csv", 1532 | description: "Analyze a CSV file", 1533 | inputSchema: { 1534 | type: "object", 1535 | properties: { 1536 | filepath: { type: "string" }, 1537 | operations: { 1538 | type: "array", 1539 | items: { 1540 | enum: ["sum", "average", "count"] 1541 | } 1542 | } 1543 | } 1544 | } 1545 | } 1546 | ``` 1547 | 1548 | ## Best practices 1549 | 1550 | When implementing tools: 1551 | 1552 | 1. Provide clear, descriptive names and descriptions 1553 | 2. Use detailed JSON Schema definitions for parameters 1554 | 3. Include examples in tool descriptions to demonstrate how the model should use them 1555 | 4. Implement proper error handling and validation 1556 | 5. Use progress reporting for long operations 1557 | 6. Keep tool operations focused and atomic 1558 | 7. Document expected return value structures 1559 | 8. Implement proper timeouts 1560 | 9. Consider rate limiting for resource-intensive operations 1561 | 10. Log tool usage for debugging and monitoring 1562 | 1563 | ## Security considerations 1564 | 1565 | When exposing tools: 1566 | 1567 | ### Input validation 1568 | 1569 | * Validate all parameters against the schema 1570 | * Sanitize file paths and system commands 1571 | * Validate URLs and external identifiers 1572 | * Check parameter sizes and ranges 1573 | * Prevent command injection 1574 | 1575 | ### Access control 1576 | 1577 | * Implement authentication where needed 1578 | * Use appropriate authorization checks 1579 | * Audit tool usage 1580 | * Rate limit requests 1581 | * Monitor for abuse 1582 | 1583 | ### Error handling 1584 | 1585 | * Don't expose internal errors to clients 1586 | * Log security-relevant errors 1587 | * Handle timeouts appropriately 1588 | * Clean up resources after errors 1589 | * Validate return values 1590 | 1591 | ## Tool discovery and updates 1592 | 1593 | MCP supports dynamic tool discovery: 1594 | 1595 | 1. Clients can list available tools at any time 1596 | 2. Servers can notify clients when tools change using `notifications/tools/list_changed` 1597 | 3. Tools can be added or removed during runtime 1598 | 4. Tool definitions can be updated (though this should be done carefully) 1599 | 1600 | ## Error handling 1601 | 1602 | Tool errors should be reported within the result object, not as MCP protocol-level errors. This allows the LLM to see and potentially handle the error. When a tool encounters an error: 1603 | 1604 | 1. Set `isError` to `true` in the result 1605 | 2. 
Include error details in the `content` array 1606 | 1607 | Here's an example of proper error handling for tools: 1608 | 1609 | <Tabs> 1610 | <Tab title="TypeScript"> 1611 | ```typescript 1612 | try { 1613 | // Tool operation 1614 | const result = performOperation(); 1615 | return { 1616 | content: [ 1617 | { 1618 | type: "text", 1619 | text: `Operation successful: ${result}` 1620 | } 1621 | ] 1622 | }; 1623 | } catch (error) { 1624 | return { 1625 | isError: true, 1626 | content: [ 1627 | { 1628 | type: "text", 1629 | text: `Error: ${error.message}` 1630 | } 1631 | ] 1632 | }; 1633 | } 1634 | ``` 1635 | </Tab> 1636 | 1637 | <Tab title="Python"> 1638 | ```python 1639 | try: 1640 | # Tool operation 1641 | result = perform_operation() 1642 | return types.CallToolResult( 1643 | content=[ 1644 | types.TextContent( 1645 | type="text", 1646 | text=f"Operation successful: {result}" 1647 | ) 1648 | ] 1649 | ) 1650 | except Exception as error: 1651 | return types.CallToolResult( 1652 | isError=True, 1653 | content=[ 1654 | types.TextContent( 1655 | type="text", 1656 | text=f"Error: {str(error)}" 1657 | ) 1658 | ] 1659 | ) 1660 | ``` 1661 | </Tab> 1662 | </Tabs> 1663 | 1664 | This approach allows the LLM to see that an error occurred and potentially take corrective action or request human intervention. 1665 | 1666 | ## Testing tools 1667 | 1668 | A comprehensive testing strategy for MCP tools should cover: 1669 | 1670 | * **Functional testing**: Verify tools execute correctly with valid inputs and handle invalid inputs appropriately 1671 | * **Integration testing**: Test tool interaction with external systems using both real and mocked dependencies 1672 | * **Security testing**: Validate authentication, authorization, input sanitization, and rate limiting 1673 | * **Performance testing**: Check behavior under load, timeout handling, and resource cleanup 1674 | * **Error handling**: Ensure tools properly report errors through the MCP protocol and clean up resources 1675 | 1676 | 1677 | # Transports 1678 | 1679 | Learn about MCP's communication mechanisms 1680 | 1681 | Transports in the Model Context Protocol (MCP) provide the foundation for communication between clients and servers. A transport handles the underlying mechanics of how messages are sent and received. 1682 | 1683 | ## Message Format 1684 | 1685 | MCP uses [JSON-RPC](https://www.jsonrpc.org/) 2.0 as its wire format. The transport layer is responsible for converting MCP protocol messages into JSON-RPC format for transmission and converting received JSON-RPC messages back into MCP protocol messages. 1686 | 1687 | There are three types of JSON-RPC messages used: 1688 | 1689 | ### Requests 1690 | 1691 | ```typescript 1692 | { 1693 | jsonrpc: "2.0", 1694 | id: number | string, 1695 | method: string, 1696 | params?: object 1697 | } 1698 | ``` 1699 | 1700 | ### Responses 1701 | 1702 | ```typescript 1703 | { 1704 | jsonrpc: "2.0", 1705 | id: number | string, 1706 | result?: object, 1707 | error?: { 1708 | code: number, 1709 | message: string, 1710 | data?: unknown 1711 | } 1712 | } 1713 | ``` 1714 | 1715 | ### Notifications 1716 | 1717 | ```typescript 1718 | { 1719 | jsonrpc: "2.0", 1720 | method: string, 1721 | params?: object 1722 | } 1723 | ``` 1724 | 1725 | ## Built-in Transport Types 1726 | 1727 | MCP includes two standard transport implementations: 1728 | 1729 | ### Standard Input/Output (stdio) 1730 | 1731 | The stdio transport enables communication through standard input and output streams. 
This is particularly useful for local integrations and command-line tools. 1732 | 1733 | Use stdio when: 1734 | 1735 | * Building command-line tools 1736 | * Implementing local integrations 1737 | * Needing simple process communication 1738 | * Working with shell scripts 1739 | 1740 | <Tabs> 1741 | <Tab title="TypeScript (Server)"> 1742 | ```typescript 1743 | const server = new Server({ 1744 | name: "example-server", 1745 | version: "1.0.0" 1746 | }, { 1747 | capabilities: {} 1748 | }); 1749 | 1750 | const transport = new StdioServerTransport(); 1751 | await server.connect(transport); 1752 | ``` 1753 | </Tab> 1754 | 1755 | <Tab title="TypeScript (Client)"> 1756 | ```typescript 1757 | const client = new Client({ 1758 | name: "example-client", 1759 | version: "1.0.0" 1760 | }, { 1761 | capabilities: {} 1762 | }); 1763 | 1764 | const transport = new StdioClientTransport({ 1765 | command: "./server", 1766 | args: ["--option", "value"] 1767 | }); 1768 | await client.connect(transport); 1769 | ``` 1770 | </Tab> 1771 | 1772 | <Tab title="Python (Server)"> 1773 | ```python 1774 | app = Server("example-server") 1775 | 1776 | async with stdio_server() as streams: 1777 | await app.run( 1778 | streams[0], 1779 | streams[1], 1780 | app.create_initialization_options() 1781 | ) 1782 | ``` 1783 | </Tab> 1784 | 1785 | <Tab title="Python (Client)"> 1786 | ```python 1787 | params = StdioServerParameters( 1788 | command="./server", 1789 | args=["--option", "value"] 1790 | ) 1791 | 1792 | async with stdio_client(params) as streams: 1793 | async with ClientSession(streams[0], streams[1]) as session: 1794 | await session.initialize() 1795 | ``` 1796 | </Tab> 1797 | </Tabs> 1798 | 1799 | ### Server-Sent Events (SSE) 1800 | 1801 | SSE transport enables server-to-client streaming with HTTP POST requests for client-to-server communication. 
1802 | 1803 | Use SSE when: 1804 | 1805 | * Only server-to-client streaming is needed 1806 | * Working with restricted networks 1807 | * Implementing simple updates 1808 | 1809 | <Tabs> 1810 | <Tab title="TypeScript (Server)"> 1811 | ```typescript 1812 | const server = new Server({ 1813 | name: "example-server", 1814 | version: "1.0.0" 1815 | }, { 1816 | capabilities: {} 1817 | }); 1818 | 1819 | const transport = new SSEServerTransport("/message", response); 1820 | await server.connect(transport); 1821 | ``` 1822 | </Tab> 1823 | 1824 | <Tab title="TypeScript (Client)"> 1825 | ```typescript 1826 | const client = new Client({ 1827 | name: "example-client", 1828 | version: "1.0.0" 1829 | }, { 1830 | capabilities: {} 1831 | }); 1832 | 1833 | const transport = new SSEClientTransport( 1834 | new URL("http://localhost:3000/sse") 1835 | ); 1836 | await client.connect(transport); 1837 | ``` 1838 | </Tab> 1839 | 1840 | <Tab title="Python (Server)"> 1841 | ```python 1842 | from mcp.server.sse import SseServerTransport 1843 | from starlette.applications import Starlette 1844 | from starlette.routing import Route 1845 | 1846 | app = Server("example-server") 1847 | sse = SseServerTransport("/messages") 1848 | 1849 | async def handle_sse(scope, receive, send): 1850 | async with sse.connect_sse(scope, receive, send) as streams: 1851 | await app.run(streams[0], streams[1], app.create_initialization_options()) 1852 | 1853 | async def handle_messages(scope, receive, send): 1854 | await sse.handle_post_message(scope, receive, send) 1855 | 1856 | starlette_app = Starlette( 1857 | routes=[ 1858 | Route("/sse", endpoint=handle_sse), 1859 | Route("/messages", endpoint=handle_messages, methods=["POST"]), 1860 | ] 1861 | ) 1862 | ``` 1863 | </Tab> 1864 | 1865 | <Tab title="Python (Client)"> 1866 | ```python 1867 | async with sse_client("http://localhost:8000/sse") as streams: 1868 | async with ClientSession(streams[0], streams[1]) as session: 1869 | await session.initialize() 1870 | ``` 1871 | </Tab> 1872 | </Tabs> 1873 | 1874 | ## Custom Transports 1875 | 1876 | MCP makes it easy to implement custom transports for specific needs. Any transport implementation just needs to conform to the Transport interface: 1877 | 1878 | You can implement custom transports for: 1879 | 1880 | * Custom network protocols 1881 | * Specialized communication channels 1882 | * Integration with existing systems 1883 | * Performance optimization 1884 | 1885 | <Tabs> 1886 | <Tab title="TypeScript"> 1887 | ```typescript 1888 | interface Transport { 1889 | // Start processing messages 1890 | start(): Promise<void>; 1891 | 1892 | // Send a JSON-RPC message 1893 | send(message: JSONRPCMessage): Promise<void>; 1894 | 1895 | // Close the connection 1896 | close(): Promise<void>; 1897 | 1898 | // Callbacks 1899 | onclose?: () => void; 1900 | onerror?: (error: Error) => void; 1901 | onmessage?: (message: JSONRPCMessage) => void; 1902 | } 1903 | ``` 1904 | </Tab> 1905 | 1906 | <Tab title="Python"> 1907 | Note that while MCP Servers are often implemented with asyncio, we recommend 1908 | implementing low-level interfaces like transports with `anyio` for wider compatibility. 1909 | 1910 | ```python 1911 | @contextmanager 1912 | async def create_transport( 1913 | read_stream: MemoryObjectReceiveStream[JSONRPCMessage | Exception], 1914 | write_stream: MemoryObjectSendStream[JSONRPCMessage] 1915 | ): 1916 | """ 1917 | Transport interface for MCP. 
1918 | 1919 | Args: 1920 | read_stream: Stream to read incoming messages from 1921 | write_stream: Stream to write outgoing messages to 1922 | """ 1923 | async with anyio.create_task_group() as tg: 1924 | try: 1925 | # Start processing messages 1926 | tg.start_soon(lambda: process_messages(read_stream)) 1927 | 1928 | # Send messages 1929 | async with write_stream: 1930 | yield write_stream 1931 | 1932 | except Exception as exc: 1933 | # Handle errors 1934 | raise exc 1935 | finally: 1936 | # Clean up 1937 | tg.cancel_scope.cancel() 1938 | await write_stream.aclose() 1939 | await read_stream.aclose() 1940 | ``` 1941 | </Tab> 1942 | </Tabs> 1943 | 1944 | ## Error Handling 1945 | 1946 | Transport implementations should handle various error scenarios: 1947 | 1948 | 1. Connection errors 1949 | 2. Message parsing errors 1950 | 3. Protocol errors 1951 | 4. Network timeouts 1952 | 5. Resource cleanup 1953 | 1954 | Example error handling: 1955 | 1956 | <Tabs> 1957 | <Tab title="TypeScript"> 1958 | ```typescript 1959 | class ExampleTransport implements Transport { 1960 | async start() { 1961 | try { 1962 | // Connection logic 1963 | } catch (error) { 1964 | this.onerror?.(new Error(`Failed to connect: ${error}`)); 1965 | throw error; 1966 | } 1967 | } 1968 | 1969 | async send(message: JSONRPCMessage) { 1970 | try { 1971 | // Sending logic 1972 | } catch (error) { 1973 | this.onerror?.(new Error(`Failed to send message: ${error}`)); 1974 | throw error; 1975 | } 1976 | } 1977 | } 1978 | ``` 1979 | </Tab> 1980 | 1981 | <Tab title="Python"> 1982 | Note that while MCP Servers are often implemented with asyncio, we recommend 1983 | implementing low-level interfaces like transports with `anyio` for wider compatibility. 1984 | 1985 | ```python 1986 | @contextmanager 1987 | async def example_transport(scope: Scope, receive: Receive, send: Send): 1988 | try: 1989 | # Create streams for bidirectional communication 1990 | read_stream_writer, read_stream = anyio.create_memory_object_stream(0) 1991 | write_stream, write_stream_reader = anyio.create_memory_object_stream(0) 1992 | 1993 | async def message_handler(): 1994 | try: 1995 | async with read_stream_writer: 1996 | # Message handling logic 1997 | pass 1998 | except Exception as exc: 1999 | logger.error(f"Failed to handle message: {exc}") 2000 | raise exc 2001 | 2002 | async with anyio.create_task_group() as tg: 2003 | tg.start_soon(message_handler) 2004 | try: 2005 | # Yield streams for communication 2006 | yield read_stream, write_stream 2007 | except Exception as exc: 2008 | logger.error(f"Transport error: {exc}") 2009 | raise exc 2010 | finally: 2011 | tg.cancel_scope.cancel() 2012 | await write_stream.aclose() 2013 | await read_stream.aclose() 2014 | except Exception as exc: 2015 | logger.error(f"Failed to initialize transport: {exc}") 2016 | raise exc 2017 | ``` 2018 | </Tab> 2019 | </Tabs> 2020 | 2021 | ## Best Practices 2022 | 2023 | When implementing or using MCP transport: 2024 | 2025 | 1. Handle connection lifecycle properly 2026 | 2. Implement proper error handling 2027 | 3. Clean up resources on connection close 2028 | 4. Use appropriate timeouts 2029 | 5. Validate messages before sending 2030 | 6. Log transport events for debugging 2031 | 7. Implement reconnection logic when appropriate 2032 | 8. Handle backpressure in message queues 2033 | 9. Monitor connection health 2034 | 10. 
Implement proper security measures 2035 | 2036 | ## Security Considerations 2037 | 2038 | When implementing transport: 2039 | 2040 | ### Authentication and Authorization 2041 | 2042 | * Implement proper authentication mechanisms 2043 | * Validate client credentials 2044 | * Use secure token handling 2045 | * Implement authorization checks 2046 | 2047 | ### Data Security 2048 | 2049 | * Use TLS for network transport 2050 | * Encrypt sensitive data 2051 | * Validate message integrity 2052 | * Implement message size limits 2053 | * Sanitize input data 2054 | 2055 | ### Network Security 2056 | 2057 | * Implement rate limiting 2058 | * Use appropriate timeouts 2059 | * Handle denial of service scenarios 2060 | * Monitor for unusual patterns 2061 | * Implement proper firewall rules 2062 | 2063 | ## Debugging Transport 2064 | 2065 | Tips for debugging transport issues: 2066 | 2067 | 1. Enable debug logging 2068 | 2. Monitor message flow 2069 | 3. Check connection states 2070 | 4. Validate message formats 2071 | 5. Test error scenarios 2072 | 6. Use network analysis tools 2073 | 7. Implement health checks 2074 | 8. Monitor resource usage 2075 | 9. Test edge cases 2076 | 10. Use proper error tracking 2077 | 2078 | 2079 | # Python 2080 | 2081 | Create a simple MCP server in Python in 15 minutes 2082 | 2083 | Let's build your first MCP server in Python! We'll create a weather server that provides current weather data as a resource and lets Claude fetch forecasts using tools. 2084 | 2085 | <Note> 2086 | This guide uses the OpenWeatherMap API. You'll need a free API key from [OpenWeatherMap](https://openweathermap.org/api) to follow along. 2087 | </Note> 2088 | 2089 | ## Prerequisites 2090 | 2091 | <Info> 2092 | The following steps are for macOS. Guides for other platforms are coming soon. 2093 | </Info> 2094 | 2095 | <Steps> 2096 | <Step title="Install Python"> 2097 | You'll need Python 3.10 or higher: 2098 | 2099 | ```bash 2100 | python --version # Should be 3.10 or higher 2101 | ``` 2102 | </Step> 2103 | 2104 | <Step title="Install uv via homebrew"> 2105 | See [https://docs.astral.sh/uv/](https://docs.astral.sh/uv/) for more information. 
2106 | 2107 | ```bash 2108 | brew install uv 2109 | uv --version # Should be 0.4.18 or higher 2110 | ``` 2111 | </Step> 2112 | 2113 | <Step title="Create a new project using the MCP project creator"> 2114 | ```bash 2115 | uvx create-mcp-server --path weather_service 2116 | cd weather_service 2117 | ``` 2118 | </Step> 2119 | 2120 | <Step title="Install additional dependencies"> 2121 | ```bash 2122 | uv add httpx python-dotenv 2123 | ``` 2124 | </Step> 2125 | 2126 | <Step title="Set up environment"> 2127 | Create `.env`: 2128 | 2129 | ```bash 2130 | OPENWEATHER_API_KEY=your-api-key-here 2131 | ``` 2132 | </Step> 2133 | </Steps> 2134 | 2135 | ## Create your server 2136 | 2137 | <Steps> 2138 | <Step title="Add the base imports and setup"> 2139 | In `weather_service/src/weather_service/server.py` 2140 | 2141 | ```python 2142 | import os 2143 | import json 2144 | import logging 2145 | from datetime import datetime, timedelta 2146 | from collections.abc import Sequence 2147 | from functools import lru_cache 2148 | from typing import Any 2149 | 2150 | import httpx 2151 | import asyncio 2152 | from dotenv import load_dotenv 2153 | from mcp.server import Server 2154 | from mcp.types import ( 2155 | Resource, 2156 | Tool, 2157 | TextContent, 2158 | ImageContent, 2159 | EmbeddedResource, 2160 | LoggingLevel 2161 | ) 2162 | from pydantic import AnyUrl 2163 | 2164 | # Load environment variables 2165 | load_dotenv() 2166 | 2167 | # Configure logging 2168 | logging.basicConfig(level=logging.INFO) 2169 | logger = logging.getLogger("weather-server") 2170 | 2171 | # API configuration 2172 | API_KEY = os.getenv("OPENWEATHER_API_KEY") 2173 | if not API_KEY: 2174 | raise ValueError("OPENWEATHER_API_KEY environment variable required") 2175 | 2176 | API_BASE_URL = "http://api.openweathermap.org/data/2.5" 2177 | DEFAULT_CITY = "London" 2178 | CURRENT_WEATHER_ENDPOINT = "weather" 2179 | FORECAST_ENDPOINT = "forecast" 2180 | 2181 | # The rest of our server implementation will go here 2182 | ``` 2183 | </Step> 2184 | 2185 | <Step title="Add weather fetching functionality"> 2186 | Add this functionality: 2187 | 2188 | ```python 2189 | # Create reusable params 2190 | http_params = { 2191 | "appid": API_KEY, 2192 | "units": "metric" 2193 | } 2194 | 2195 | async def fetch_weather(city: str) -> dict[str, Any]: 2196 | async with httpx.AsyncClient() as client: 2197 | response = await client.get( 2198 | f"{API_BASE_URL}/weather", 2199 | params={"q": city, **http_params} 2200 | ) 2201 | response.raise_for_status() 2202 | data = response.json() 2203 | 2204 | return { 2205 | "temperature": data["main"]["temp"], 2206 | "conditions": data["weather"][0]["description"], 2207 | "humidity": data["main"]["humidity"], 2208 | "wind_speed": data["wind"]["speed"], 2209 | "timestamp": datetime.now().isoformat() 2210 | } 2211 | 2212 | 2213 | app = Server("weather-server") 2214 | ``` 2215 | </Step> 2216 | 2217 | <Step title="Implement resource handlers"> 2218 | Add these resource-related handlers to our main function: 2219 | 2220 | ```python 2221 | app = Server("weather-server") 2222 | 2223 | @app.list_resources() 2224 | async def list_resources() -> list[Resource]: 2225 | """List available weather resources.""" 2226 | uri = AnyUrl(f"weather://{DEFAULT_CITY}/current") 2227 | return [ 2228 | Resource( 2229 | uri=uri, 2230 | name=f"Current weather in {DEFAULT_CITY}", 2231 | mimeType="application/json", 2232 | description="Real-time weather data" 2233 | ) 2234 | ] 2235 | 2236 | @app.read_resource() 2237 | async def read_resource(uri: AnyUrl) -> 
str: 2238 | """Read current weather data for a city.""" 2239 | city = DEFAULT_CITY 2240 | if str(uri).startswith("weather://") and str(uri).endswith("/current"): 2241 | city = str(uri).split("/")[-2] 2242 | else: 2243 | raise ValueError(f"Unknown resource: {uri}") 2244 | 2245 | try: 2246 | weather_data = await fetch_weather(city) 2247 | return json.dumps(weather_data, indent=2) 2248 | except httpx.HTTPError as e: 2249 | raise RuntimeError(f"Weather API error: {str(e)}") 2250 | 2251 | ``` 2252 | </Step> 2253 | 2254 | <Step title="Implement tool handlers"> 2255 | Add these tool-related handlers: 2256 | 2257 | ```python 2258 | app = Server("weather-server") 2259 | 2260 | # Resource implementation ... 2261 | 2262 | @app.list_tools() 2263 | async def list_tools() -> list[Tool]: 2264 | """List available weather tools.""" 2265 | return [ 2266 | Tool( 2267 | name="get_forecast", 2268 | description="Get weather forecast for a city", 2269 | inputSchema={ 2270 | "type": "object", 2271 | "properties": { 2272 | "city": { 2273 | "type": "string", 2274 | "description": "City name" 2275 | }, 2276 | "days": { 2277 | "type": "number", 2278 | "description": "Number of days (1-5)", 2279 | "minimum": 1, 2280 | "maximum": 5 2281 | } 2282 | }, 2283 | "required": ["city"] 2284 | } 2285 | ) 2286 | ] 2287 | 2288 | @app.call_tool() 2289 | async def call_tool(name: str, arguments: Any) -> Sequence[TextContent | ImageContent | EmbeddedResource]: 2290 | """Handle tool calls for weather forecasts.""" 2291 | if name != "get_forecast": 2292 | raise ValueError(f"Unknown tool: {name}") 2293 | 2294 | if not isinstance(arguments, dict) or "city" not in arguments: 2295 | raise ValueError("Invalid forecast arguments") 2296 | 2297 | city = arguments["city"] 2298 | days = min(int(arguments.get("days", 3)), 5) 2299 | 2300 | try: 2301 | async with httpx.AsyncClient() as client: 2302 | response = await client.get( 2303 | f"{API_BASE_URL}/{FORECAST_ENDPOINT}", 2304 | params={ 2305 | "q": city, 2306 | "cnt": days * 8, # API returns 3-hour intervals 2307 | **http_params, 2308 | } 2309 | ) 2310 | response.raise_for_status() 2311 | data = response.json() 2312 | 2313 | forecasts = [] 2314 | for i in range(0, len(data["list"]), 8): 2315 | day_data = data["list"][i] 2316 | forecasts.append({ 2317 | "date": day_data["dt_txt"].split()[0], 2318 | "temperature": day_data["main"]["temp"], 2319 | "conditions": day_data["weather"][0]["description"] 2320 | }) 2321 | 2322 | return [ 2323 | TextContent( 2324 | type="text", 2325 | text=json.dumps(forecasts, indent=2) 2326 | ) 2327 | ] 2328 | except httpx.HTTPError as e: 2329 | logger.error(f"Weather API error: {str(e)}") 2330 | raise RuntimeError(f"Weather API error: {str(e)}") 2331 | ``` 2332 | </Step> 2333 | 2334 | <Step title="Add the main function"> 2335 | Add this to the end of `weather_service/src/weather_service/server.py`: 2336 | 2337 | ```python 2338 | async def main(): 2339 | # Import here to avoid issues with event loops 2340 | from mcp.server.stdio import stdio_server 2341 | 2342 | async with stdio_server() as (read_stream, write_stream): 2343 | await app.run( 2344 | read_stream, 2345 | write_stream, 2346 | app.create_initialization_options() 2347 | ) 2348 | ``` 2349 | </Step> 2350 | 2351 | <Step title="Check your entry point in __init__.py"> 2352 | Add this to the end of `weather_service/src/weather_service/__init__.py`: 2353 | 2354 | ```python 2355 | from . 
import server 2356 | import asyncio 2357 | 2358 | def main(): 2359 | """Main entry point for the package.""" 2360 | asyncio.run(server.main()) 2361 | 2362 | # Optionally expose other important items at package level 2363 | __all__ = ['main', 'server'] 2364 | ``` 2365 | </Step> 2366 | </Steps> 2367 | 2368 | ## Connect to Claude Desktop 2369 | 2370 | <Steps> 2371 | <Step title="Update Claude config"> 2372 | Add to `claude_desktop_config.json`: 2373 | 2374 | ```json 2375 | { 2376 | "mcpServers": { 2377 | "weather": { 2378 | "command": "uv", 2379 | "args": [ 2380 | "--directory", 2381 | "path/to/your/project", 2382 | "run", 2383 | "weather-service" 2384 | ], 2385 | "env": { 2386 | "OPENWEATHER_API_KEY": "your-api-key" 2387 | } 2388 | } 2389 | } 2390 | } 2391 | ``` 2392 | </Step> 2393 | 2394 | <Step title="Restart Claude"> 2395 | 1. Quit Claude completely 2396 | 2397 | 2. Start Claude again 2398 | 2399 | 3. Look for your weather server in the 🔌 menu 2400 | </Step> 2401 | </Steps> 2402 | 2403 | ## Try it out! 2404 | 2405 | <AccordionGroup> 2406 | <Accordion title="Check Current Weather" active> 2407 | Ask Claude: 2408 | 2409 | ``` 2410 | What's the current weather in San Francisco? Can you analyze the conditions and tell me if it's a good day for outdoor activities? 2411 | ``` 2412 | </Accordion> 2413 | 2414 | <Accordion title="Get a Forecast"> 2415 | Ask Claude: 2416 | 2417 | ``` 2418 | Can you get me a 5-day forecast for Tokyo and help me plan what clothes to pack for my trip? 2419 | ``` 2420 | </Accordion> 2421 | 2422 | <Accordion title="Compare Weather"> 2423 | Ask Claude: 2424 | 2425 | ``` 2426 | Can you analyze the forecast for both Tokyo and San Francisco and tell me which city would be better for outdoor photography this week? 2427 | ``` 2428 | </Accordion> 2429 | </AccordionGroup> 2430 | 2431 | ## Understanding the code 2432 | 2433 | <Tabs> 2434 | <Tab title="Type Hints"> 2435 | ```python 2436 | async def read_resource(self, uri: str) -> ReadResourceResult: 2437 | # ... 2438 | ``` 2439 | 2440 | Python type hints help catch errors early and improve code maintainability. 2441 | </Tab> 2442 | 2443 | <Tab title="Resources"> 2444 | ```python 2445 | @app.list_resources() 2446 | async def list_resources(self) -> ListResourcesResult: 2447 | return ListResourcesResult( 2448 | resources=[ 2449 | Resource( 2450 | uri=f"weather://{DEFAULT_CITY}/current", 2451 | name=f"Current weather in {DEFAULT_CITY}", 2452 | mimeType="application/json", 2453 | description="Real-time weather data" 2454 | ) 2455 | ] 2456 | ) 2457 | ``` 2458 | 2459 | Resources provide data that Claude can access as context. 2460 | </Tab> 2461 | 2462 | <Tab title="Tools"> 2463 | ```python 2464 | Tool( 2465 | name="get_forecast", 2466 | description="Get weather forecast for a city", 2467 | inputSchema={ 2468 | "type": "object", 2469 | "properties": { 2470 | "city": { 2471 | "type": "string", 2472 | "description": "City name" 2473 | }, 2474 | "days": { 2475 | "type": "number", 2476 | "description": "Number of days (1-5)", 2477 | "minimum": 1, 2478 | "maximum": 5 2479 | } 2480 | }, 2481 | "required": ["city"] 2482 | } 2483 | ) 2484 | ``` 2485 | 2486 | Tools let Claude take actions through your server with validated inputs. 
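For instance, a well-formed `get_forecast` call carries arguments that satisfy this schema. The short sketch below is illustrative (the values are hypothetical, and Claude builds the actual `tools/call` request); it shows the argument shape and the capping applied by the handler shown earlier:

```python
# Illustrative arguments for the get_forecast tool (hypothetical values)
arguments = {"city": "Tokyo", "days": 7}

# The call_tool handler shown earlier caps "days" at the schema maximum of 5
days = min(int(arguments.get("days", 3)), 5)
print(days)  # 5
```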
2487 | </Tab> 2488 | 2489 | <Tab title="Server Structure"> 2490 | ```python 2491 | # Create server instance with name 2492 | app = Server("weather-server") 2493 | 2494 | # Register resource handler 2495 | @app.list_resources() 2496 | async def list_resources() -> list[Resource]: 2497 | """List available resources""" 2498 | return [...] 2499 | 2500 | # Register tool handler 2501 | @app.call_tool() 2502 | async def call_tool(name: str, arguments: Any) -> Sequence[TextContent]: 2503 | """Handle tool execution""" 2504 | return [...] 2505 | 2506 | # Register additional handlers 2507 | @app.read_resource() 2508 | ... 2509 | @app.list_tools() 2510 | ... 2511 | ``` 2512 | 2513 | The MCP server uses a simple app pattern - create a Server instance and register handlers with decorators. Each handler maps to a specific MCP protocol operation. 2514 | </Tab> 2515 | </Tabs> 2516 | 2517 | ## Best practices 2518 | 2519 | <CardGroup cols={1}> 2520 | <Card title="Error Handling" icon="shield"> 2521 | ```python 2522 | try: 2523 | async with httpx.AsyncClient() as client: 2524 | response = await client.get(..., params={..., **http_params}) 2525 | response.raise_for_status() 2526 | except httpx.HTTPError as e: 2527 | raise McpError( 2528 | ErrorCode.INTERNAL_ERROR, 2529 | f"API error: {str(e)}" 2530 | ) 2531 | ``` 2532 | </Card> 2533 | 2534 | <Card title="Type Validation" icon="check"> 2535 | ```python 2536 | if not isinstance(args, dict) or "city" not in args: 2537 | raise McpError( 2538 | ErrorCode.INVALID_PARAMS, 2539 | "Invalid forecast arguments" 2540 | ) 2541 | ``` 2542 | </Card> 2543 | 2544 | <Card title="Environment Variables" icon="gear"> 2545 | ```python 2546 | if not API_KEY: 2547 | raise ValueError("OPENWEATHER_API_KEY is required") 2548 | ``` 2549 | </Card> 2550 | </CardGroup> 2551 | 2552 | ## Available transports 2553 | 2554 | While this guide uses stdio transport, MCP supports additonal transport options: 2555 | 2556 | ### SSE (Server-Sent Events) 2557 | 2558 | ```python 2559 | from mcp.server.sse import SseServerTransport 2560 | from starlette.applications import Starlette 2561 | from starlette.routing import Route 2562 | 2563 | # Create SSE transport with endpoint 2564 | sse = SseServerTransport("/messages") 2565 | 2566 | # Handler for SSE connections 2567 | async def handle_sse(scope, receive, send): 2568 | async with sse.connect_sse(scope, receive, send) as streams: 2569 | await app.run( 2570 | streams[0], streams[1], app.create_initialization_options() 2571 | ) 2572 | 2573 | # Handler for client messages 2574 | async def handle_messages(scope, receive, send): 2575 | await sse.handle_post_message(scope, receive, send) 2576 | 2577 | # Create Starlette app with routes 2578 | app = Starlette( 2579 | debug=True, 2580 | routes=[ 2581 | Route("/sse", endpoint=handle_sse), 2582 | Route("/messages", endpoint=handle_messages, methods=["POST"]), 2583 | ], 2584 | ) 2585 | 2586 | # Run with any ASGI server 2587 | import uvicorn 2588 | uvicorn.run(app, host="0.0.0.0", port=8000) 2589 | ``` 2590 | 2591 | ## Advanced features 2592 | 2593 | <Steps> 2594 | <Step title="Understanding Request Context"> 2595 | The request context provides access to the current request's metadata and the active client session. 
Access it through `server.request_context`: 2596 | 2597 | ```python 2598 | @app.call_tool() 2599 | async def call_tool(name: str, arguments: Any) -> Sequence[TextContent]: 2600 | # Access the current request context 2601 | ctx = self.request_context 2602 | 2603 | # Get request metadata like progress tokens 2604 | if progress_token := ctx.meta.progressToken: 2605 | # Send progress notifications via the session 2606 | await ctx.session.send_progress_notification( 2607 | progress_token=progress_token, 2608 | progress=0.5, 2609 | total=1.0 2610 | ) 2611 | 2612 | # Sample from the LLM client 2613 | result = await ctx.session.create_message( 2614 | messages=[ 2615 | SamplingMessage( 2616 | role="user", 2617 | content=TextContent( 2618 | type="text", 2619 | text="Analyze this weather data: " + json.dumps(arguments) 2620 | ) 2621 | ) 2622 | ], 2623 | max_tokens=100 2624 | ) 2625 | 2626 | return [TextContent(type="text", text=result.content.text)] 2627 | ``` 2628 | </Step> 2629 | 2630 | <Step title="Add caching"> 2631 | ```python 2632 | # Cache settings 2633 | cache_timeout = timedelta(minutes=15) 2634 | last_cache_time = None 2635 | cached_weather = None 2636 | 2637 | async def fetch_weather(city: str) -> dict[str, Any]: 2638 | global cached_weather, last_cache_time 2639 | 2640 | now = datetime.now() 2641 | if (cached_weather is None or 2642 | last_cache_time is None or 2643 | now - last_cache_time > cache_timeout): 2644 | 2645 | async with httpx.AsyncClient() as client: 2646 | response = await client.get( 2647 | f"{API_BASE_URL}/{CURRENT_WEATHER_ENDPOINT}", 2648 | params={"q": city, **http_params} 2649 | ) 2650 | response.raise_for_status() 2651 | data = response.json() 2652 | 2653 | cached_weather = { 2654 | "temperature": data["main"]["temp"], 2655 | "conditions": data["weather"][0]["description"], 2656 | "humidity": data["main"]["humidity"], 2657 | "wind_speed": data["wind"]["speed"], 2658 | "timestamp": datetime.now().isoformat() 2659 | } 2660 | last_cache_time = now 2661 | 2662 | return cached_weather 2663 | ``` 2664 | </Step> 2665 | 2666 | <Step title="Add progress notifications"> 2667 | ```python 2668 | @self.call_tool() 2669 | async def call_tool(self, name: str, arguments: Any) -> CallToolResult: 2670 | if progress_token := self.request_context.meta.progressToken: 2671 | # Send progress notifications 2672 | await self.request_context.session.send_progress_notification( 2673 | progress_token=progress_token, 2674 | progress=1, 2675 | total=2 2676 | ) 2677 | 2678 | # Fetch data... 2679 | 2680 | await self.request_context.session.send_progress_notification( 2681 | progress_token=progress_token, 2682 | progress=2, 2683 | total=2 2684 | ) 2685 | 2686 | # Rest of the method implementation... 
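    # (Illustrative note, not part of the original snippet:) after the final
    # progress update, build and return the tool result here, for example the
    # fetched forecast serialized as text content, mirroring the basic
    # call_tool handler shown earlier in this guide.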
2687 | ``` 2688 | </Step> 2689 | 2690 | <Step title="Add logging support"> 2691 | ```python 2692 | # Set up logging 2693 | logger = logging.getLogger("weather-server") 2694 | logger.setLevel(logging.INFO) 2695 | 2696 | @app.set_logging_level() 2697 | async def set_logging_level(level: LoggingLevel) -> EmptyResult: 2698 | logger.setLevel(level.upper()) 2699 | await app.request_context.session.send_log_message( 2700 | level="info", 2701 | data=f"Log level set to {level}", 2702 | logger="weather-server" 2703 | ) 2704 | return EmptyResult() 2705 | 2706 | # Use logger throughout the code 2707 | # For example: 2708 | # logger.info("Weather data fetched successfully") 2709 | # logger.error(f"Error fetching weather data: {str(e)}") 2710 | ``` 2711 | </Step> 2712 | 2713 | <Step title="Add resource templates"> 2714 | ```python 2715 | @app.list_resource_templates() 2716 | async def list_resource_templates() -> list[ResourceTemplate]: 2717 | return [ 2718 | ResourceTemplate( 2719 | uriTemplate="weather://{city}/current", 2720 | name="Current weather for any city", 2721 | mimeType="application/json" 2722 | ) 2723 | ] 2724 | ``` 2725 | </Step> 2726 | </Steps> 2727 | 2728 | ## Testing 2729 | 2730 | <Steps> 2731 | <Step title="Create test file"> 2732 | Create `tests/weather_test.py`: 2733 | 2734 | ```python 2735 | import pytest 2736 | import os 2737 | from unittest.mock import patch, Mock 2738 | from datetime import datetime 2739 | import json 2740 | from pydantic import AnyUrl 2741 | os.environ["OPENWEATHER_API_KEY"] = "TEST" 2742 | 2743 | from weather_service.server import ( 2744 | fetch_weather, 2745 | read_resource, 2746 | call_tool, 2747 | list_resources, 2748 | list_tools, 2749 | DEFAULT_CITY 2750 | ) 2751 | 2752 | @pytest.fixture 2753 | def anyio_backend(): 2754 | return "asyncio" 2755 | 2756 | @pytest.fixture 2757 | def mock_weather_response(): 2758 | return { 2759 | "main": { 2760 | "temp": 20.5, 2761 | "humidity": 65 2762 | }, 2763 | "weather": [ 2764 | {"description": "scattered clouds"} 2765 | ], 2766 | "wind": { 2767 | "speed": 3.6 2768 | } 2769 | } 2770 | 2771 | @pytest.fixture 2772 | def mock_forecast_response(): 2773 | return { 2774 | "list": [ 2775 | { 2776 | "dt_txt": "2024-01-01 12:00:00", 2777 | "main": {"temp": 18.5}, 2778 | "weather": [{"description": "sunny"}] 2779 | }, 2780 | { 2781 | "dt_txt": "2024-01-02 12:00:00", 2782 | "main": {"temp": 17.2}, 2783 | "weather": [{"description": "cloudy"}] 2784 | } 2785 | ] 2786 | } 2787 | 2788 | @pytest.mark.anyio 2789 | async def test_fetch_weather(mock_weather_response): 2790 | with patch('requests.Session.get') as mock_get: 2791 | mock_get.return_value.json.return_value = mock_weather_response 2792 | mock_get.return_value.raise_for_status = Mock() 2793 | 2794 | weather = await fetch_weather("London") 2795 | 2796 | assert weather["temperature"] == 20.5 2797 | assert weather["conditions"] == "scattered clouds" 2798 | assert weather["humidity"] == 65 2799 | assert weather["wind_speed"] == 3.6 2800 | assert "timestamp" in weather 2801 | 2802 | @pytest.mark.anyio 2803 | async def test_read_resource(): 2804 | with patch('weather_service.server.fetch_weather') as mock_fetch: 2805 | mock_fetch.return_value = { 2806 | "temperature": 20.5, 2807 | "conditions": "clear sky", 2808 | "timestamp": datetime.now().isoformat() 2809 | } 2810 | 2811 | uri = AnyUrl("weather://London/current") 2812 | result = await read_resource(uri) 2813 | 2814 | assert isinstance(result, str) 2815 | assert "temperature" in result 2816 | assert "clear sky" in result 2817 | 2818 
| @pytest.mark.anyio 2819 | async def test_call_tool(mock_forecast_response): 2820 | class Response(): 2821 | def raise_for_status(self): 2822 | pass 2823 | 2824 | def json(self): 2825 | return mock_forecast_response 2826 | 2827 | class AsyncClient(): 2828 | def __aenter__(self): 2829 | return self 2830 | 2831 | async def __aexit__(self, *exc_info): 2832 | pass 2833 | 2834 | async def get(self, *args, **kwargs): 2835 | return Response() 2836 | 2837 | with patch('httpx.AsyncClient', new=AsyncClient) as mock_client: 2838 | result = await call_tool("get_forecast", {"city": "London", "days": 2}) 2839 | 2840 | assert len(result) == 1 2841 | assert result[0].type == "text" 2842 | forecast_data = json.loads(result[0].text) 2843 | assert len(forecast_data) == 1 2844 | assert forecast_data[0]["temperature"] == 18.5 2845 | assert forecast_data[0]["conditions"] == "sunny" 2846 | 2847 | @pytest.mark.anyio 2848 | async def test_list_resources(): 2849 | resources = await list_resources() 2850 | assert len(resources) == 1 2851 | assert resources[0].name == f"Current weather in {DEFAULT_CITY}" 2852 | assert resources[0].mimeType == "application/json" 2853 | 2854 | @pytest.mark.anyio 2855 | async def test_list_tools(): 2856 | tools = await list_tools() 2857 | assert len(tools) == 1 2858 | assert tools[0].name == "get_forecast" 2859 | assert "city" in tools[0].inputSchema["properties"] 2860 | ``` 2861 | </Step> 2862 | 2863 | <Step title="Run tests"> 2864 | ```bash 2865 | uv add --dev pytest 2866 | uv run pytest 2867 | ``` 2868 | </Step> 2869 | </Steps> 2870 | 2871 | ## Troubleshooting 2872 | 2873 | ### Installation issues 2874 | 2875 | ```bash 2876 | # Check Python version 2877 | python --version 2878 | 2879 | # Reinstall dependencies 2880 | uv sync --reinstall 2881 | ``` 2882 | 2883 | ### Type checking 2884 | 2885 | ```bash 2886 | # Install mypy 2887 | uv add --dev pyright 2888 | 2889 | # Run type checker 2890 | uv run pyright src 2891 | ``` 2892 | 2893 | ## Next steps 2894 | 2895 | <CardGroup cols={2}> 2896 | <Card title="Architecture overview" icon="sitemap" href="/docs/concepts/architecture"> 2897 | Learn more about the MCP architecture 2898 | </Card> 2899 | 2900 | <Card title="Python SDK" icon="python" href="https://github.com/modelcontextprotocol/python-sdk"> 2901 | Check out the Python SDK on GitHub 2902 | </Card> 2903 | </CardGroup> 2904 | 2905 | 2906 | # TypeScript 2907 | 2908 | Create a simple MCP server in TypeScript in 15 minutes 2909 | 2910 | Let's build your first MCP server in TypeScript! We'll create a weather server that provides current weather data as a resource and lets Claude fetch forecasts using tools. 2911 | 2912 | <Note> 2913 | This guide uses the OpenWeatherMap API. You'll need a free API key from [OpenWeatherMap](https://openweathermap.org/api) to follow along. 
2914 | </Note> 2915 | 2916 | ## Prerequisites 2917 | 2918 | <Steps> 2919 | <Step title="Install Node.js"> 2920 | You'll need Node.js 18 or higher: 2921 | 2922 | ```bash 2923 | node --version # Should be v18 or higher 2924 | npm --version 2925 | ``` 2926 | </Step> 2927 | 2928 | <Step title="Create a new project"> 2929 | You can use our [create-typescript-server](https://github.com/modelcontextprotocol/create-typescript-server) tool to bootstrap a new project: 2930 | 2931 | ```bash 2932 | npx @modelcontextprotocol/create-server weather-server 2933 | cd weather-server 2934 | ``` 2935 | </Step> 2936 | 2937 | <Step title="Install dependencies"> 2938 | ```bash 2939 | npm install --save axios dotenv 2940 | ``` 2941 | </Step> 2942 | 2943 | <Step title="Set up environment"> 2944 | Create `.env`: 2945 | 2946 | ```bash 2947 | OPENWEATHER_API_KEY=your-api-key-here 2948 | ``` 2949 | 2950 | Make sure to add your environment file to `.gitignore` 2951 | 2952 | ```bash 2953 | .env 2954 | ``` 2955 | </Step> 2956 | </Steps> 2957 | 2958 | ## Create your server 2959 | 2960 | <Steps> 2961 | <Step title="Define types"> 2962 | Create a file `src/types.ts`, and add the following: 2963 | 2964 | ```typescript 2965 | export interface OpenWeatherResponse { 2966 | main: { 2967 | temp: number; 2968 | humidity: number; 2969 | }; 2970 | weather: Array<{ 2971 | description: string; 2972 | }>; 2973 | wind: { 2974 | speed: number; 2975 | }; 2976 | dt_txt?: string; 2977 | } 2978 | 2979 | export interface WeatherData { 2980 | temperature: number; 2981 | conditions: string; 2982 | humidity: number; 2983 | wind_speed: number; 2984 | timestamp: string; 2985 | } 2986 | 2987 | export interface ForecastDay { 2988 | date: string; 2989 | temperature: number; 2990 | conditions: string; 2991 | } 2992 | 2993 | export interface GetForecastArgs { 2994 | city: string; 2995 | days?: number; 2996 | } 2997 | 2998 | // Type guard for forecast arguments 2999 | export function isValidForecastArgs(args: any): args is GetForecastArgs { 3000 | return ( 3001 | typeof args === "object" && 3002 | args !== null && 3003 | "city" in args && 3004 | typeof args.city === "string" && 3005 | (args.days === undefined || typeof args.days === "number") 3006 | ); 3007 | } 3008 | ``` 3009 | </Step> 3010 | 3011 | <Step title="Add the base code"> 3012 | Replace `src/index.ts` with the following: 3013 | 3014 | ```typescript 3015 | #!/usr/bin/env node 3016 | import { Server } from "@modelcontextprotocol/sdk/server/index.js"; 3017 | import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js"; 3018 | import { 3019 | ListResourcesRequestSchema, 3020 | ReadResourceRequestSchema, 3021 | ListToolsRequestSchema, 3022 | CallToolRequestSchema, 3023 | ErrorCode, 3024 | McpError 3025 | } from "@modelcontextprotocol/sdk/types.js"; 3026 | import axios from "axios"; 3027 | import dotenv from "dotenv"; 3028 | import { 3029 | WeatherData, 3030 | ForecastDay, 3031 | OpenWeatherResponse, 3032 | isValidForecastArgs 3033 | } from "./types.js"; 3034 | 3035 | dotenv.config(); 3036 | 3037 | const API_KEY = process.env.OPENWEATHER_API_KEY; 3038 | if (!API_KEY) { 3039 | throw new Error("OPENWEATHER_API_KEY environment variable is required"); 3040 | } 3041 | 3042 | const API_CONFIG = { 3043 | BASE_URL: 'http://api.openweathermap.org/data/2.5', 3044 | DEFAULT_CITY: 'San Francisco', 3045 | ENDPOINTS: { 3046 | CURRENT: 'weather', 3047 | FORECAST: 'forecast' 3048 | } 3049 | } as const; 3050 | 3051 | class WeatherServer { 3052 | private server: Server; 3053 | private 
axiosInstance; 3054 | 3055 | constructor() { 3056 | this.server = new Server({ 3057 | name: "example-weather-server", 3058 | version: "0.1.0" 3059 | }, { 3060 | capabilities: { 3061 | resources: {}, 3062 | tools: {} 3063 | } 3064 | }); 3065 | 3066 | // Configure axios with defaults 3067 | this.axiosInstance = axios.create({ 3068 | baseURL: API_CONFIG.BASE_URL, 3069 | params: { 3070 | appid: API_KEY, 3071 | units: "metric" 3072 | } 3073 | }); 3074 | 3075 | this.setupHandlers(); 3076 | this.setupErrorHandling(); 3077 | } 3078 | 3079 | private setupErrorHandling(): void { 3080 | this.server.onerror = (error) => { 3081 | console.error("[MCP Error]", error); 3082 | }; 3083 | 3084 | process.on('SIGINT', async () => { 3085 | await this.server.close(); 3086 | process.exit(0); 3087 | }); 3088 | } 3089 | 3090 | private setupHandlers(): void { 3091 | this.setupResourceHandlers(); 3092 | this.setupToolHandlers(); 3093 | } 3094 | 3095 | private setupResourceHandlers(): void { 3096 | // Implementation continues in next section 3097 | } 3098 | 3099 | private setupToolHandlers(): void { 3100 | // Implementation continues in next section 3101 | } 3102 | 3103 | async run(): Promise<void> { 3104 | const transport = new StdioServerTransport(); 3105 | await this.server.connect(transport); 3106 | 3107 | // Although this is just an informative message, we must log to stderr, 3108 | // to avoid interfering with MCP communication that happens on stdout 3109 | console.error("Weather MCP server running on stdio"); 3110 | } 3111 | } 3112 | 3113 | const server = new WeatherServer(); 3114 | server.run().catch(console.error); 3115 | ``` 3116 | </Step> 3117 | 3118 | <Step title="Add resource handlers"> 3119 | Add this to the `setupResourceHandlers` method: 3120 | 3121 | ```typescript 3122 | private setupResourceHandlers(): void { 3123 | this.server.setRequestHandler( 3124 | ListResourcesRequestSchema, 3125 | async () => ({ 3126 | resources: [{ 3127 | uri: `weather://${API_CONFIG.DEFAULT_CITY}/current`, 3128 | name: `Current weather in ${API_CONFIG.DEFAULT_CITY}`, 3129 | mimeType: "application/json", 3130 | description: "Real-time weather data including temperature, conditions, humidity, and wind speed" 3131 | }] 3132 | }) 3133 | ); 3134 | 3135 | this.server.setRequestHandler( 3136 | ReadResourceRequestSchema, 3137 | async (request) => { 3138 | const city = API_CONFIG.DEFAULT_CITY; 3139 | if (request.params.uri !== `weather://${city}/current`) { 3140 | throw new McpError( 3141 | ErrorCode.InvalidRequest, 3142 | `Unknown resource: ${request.params.uri}` 3143 | ); 3144 | } 3145 | 3146 | try { 3147 | const response = await this.axiosInstance.get<OpenWeatherResponse>( 3148 | API_CONFIG.ENDPOINTS.CURRENT, 3149 | { 3150 | params: { q: city } 3151 | } 3152 | ); 3153 | 3154 | const weatherData: WeatherData = { 3155 | temperature: response.data.main.temp, 3156 | conditions: response.data.weather[0].description, 3157 | humidity: response.data.main.humidity, 3158 | wind_speed: response.data.wind.speed, 3159 | timestamp: new Date().toISOString() 3160 | }; 3161 | 3162 | return { 3163 | contents: [{ 3164 | uri: request.params.uri, 3165 | mimeType: "application/json", 3166 | text: JSON.stringify(weatherData, null, 2) 3167 | }] 3168 | }; 3169 | } catch (error) { 3170 | if (axios.isAxiosError(error)) { 3171 | throw new McpError( 3172 | ErrorCode.InternalError, 3173 | `Weather API error: ${error.response?.data.message ?? 
error.message}` 3174 | ); 3175 | } 3176 | throw error; 3177 | } 3178 | } 3179 | ); 3180 | } 3181 | ``` 3182 | </Step> 3183 | 3184 | <Step title="Add tool handlers"> 3185 | Add these handlers to the `setupToolHandlers` method: 3186 | 3187 | ```typescript 3188 | private setupToolHandlers(): void { 3189 | this.server.setRequestHandler( 3190 | ListToolsRequestSchema, 3191 | async () => ({ 3192 | tools: [{ 3193 | name: "get_forecast", 3194 | description: "Get weather forecast for a city", 3195 | inputSchema: { 3196 | type: "object", 3197 | properties: { 3198 | city: { 3199 | type: "string", 3200 | description: "City name" 3201 | }, 3202 | days: { 3203 | type: "number", 3204 | description: "Number of days (1-5)", 3205 | minimum: 1, 3206 | maximum: 5 3207 | } 3208 | }, 3209 | required: ["city"] 3210 | } 3211 | }] 3212 | }) 3213 | ); 3214 | 3215 | this.server.setRequestHandler( 3216 | CallToolRequestSchema, 3217 | async (request) => { 3218 | if (request.params.name !== "get_forecast") { 3219 | throw new McpError( 3220 | ErrorCode.MethodNotFound, 3221 | `Unknown tool: ${request.params.name}` 3222 | ); 3223 | } 3224 | 3225 | if (!isValidForecastArgs(request.params.arguments)) { 3226 | throw new McpError( 3227 | ErrorCode.InvalidParams, 3228 | "Invalid forecast arguments" 3229 | ); 3230 | } 3231 | 3232 | const city = request.params.arguments.city; 3233 | const days = Math.min(request.params.arguments.days || 3, 5); 3234 | 3235 | try { 3236 | const response = await this.axiosInstance.get<{ 3237 | list: OpenWeatherResponse[] 3238 | }>(API_CONFIG.ENDPOINTS.FORECAST, { 3239 | params: { 3240 | q: city, 3241 | cnt: days * 8 // API returns 3-hour intervals 3242 | } 3243 | }); 3244 | 3245 | const forecasts: ForecastDay[] = []; 3246 | for (let i = 0; i < response.data.list.length; i += 8) { 3247 | const dayData = response.data.list[i]; 3248 | forecasts.push({ 3249 | date: dayData.dt_txt?.split(' ')[0] ?? new Date().toISOString().split('T')[0], 3250 | temperature: dayData.main.temp, 3251 | conditions: dayData.weather[0].description 3252 | }); 3253 | } 3254 | 3255 | return { 3256 | content: [{ 3257 | type: "text", 3258 | text: JSON.stringify(forecasts, null, 2) 3259 | }] 3260 | }; 3261 | } catch (error) { 3262 | if (axios.isAxiosError(error)) { 3263 | return { 3264 | content: [{ 3265 | type: "text", 3266 | text: `Weather API error: ${error.response?.data.message ?? error.message}` 3267 | }], 3268 | isError: true, 3269 | } 3270 | } 3271 | throw error; 3272 | } 3273 | } 3274 | ); 3275 | } 3276 | ``` 3277 | </Step> 3278 | 3279 | <Step title="Build and test"> 3280 | ```bash 3281 | npm run build 3282 | ``` 3283 | </Step> 3284 | </Steps> 3285 | 3286 | ## Connect to Claude Desktop 3287 | 3288 | <Steps> 3289 | <Step title="Update Claude config"> 3290 | If you didn't already connect to Claude Desktop during project setup, add to `claude_desktop_config.json`: 3291 | 3292 | ```json 3293 | { 3294 | "mcpServers": { 3295 | "weather": { 3296 | "command": "node", 3297 | "args": ["/path/to/weather-server/build/index.js"], 3298 | "env": { 3299 | "OPENWEATHER_API_KEY": "your-api-key", 3300 | } 3301 | } 3302 | } 3303 | } 3304 | ``` 3305 | </Step> 3306 | 3307 | <Step title="Restart Claude"> 3308 | 1. Quit Claude completely 3309 | 2. Start Claude again 3310 | 3. Look for your weather server in the 🔌 menu 3311 | </Step> 3312 | </Steps> 3313 | 3314 | ## Try it out! 
3315 | 3316 | <AccordionGroup> 3317 | <Accordion title="Check Current Weather" active> 3318 | Ask Claude: 3319 | 3320 | ``` 3321 | What's the current weather in San Francisco? Can you analyze the conditions? 3322 | ``` 3323 | </Accordion> 3324 | 3325 | <Accordion title="Get a Forecast"> 3326 | Ask Claude: 3327 | 3328 | ``` 3329 | Can you get me a 5-day forecast for Tokyo and tell me if I should pack an umbrella? 3330 | ``` 3331 | </Accordion> 3332 | 3333 | <Accordion title="Compare Weather"> 3334 | Ask Claude: 3335 | 3336 | ``` 3337 | Can you analyze the forecast for both Tokyo and San Francisco and tell me which city will be warmer this week? 3338 | ``` 3339 | </Accordion> 3340 | </AccordionGroup> 3341 | 3342 | ## Understanding the code 3343 | 3344 | <Tabs> 3345 | <Tab title="Type Safety"> 3346 | ```typescript 3347 | interface WeatherData { 3348 | temperature: number; 3349 | conditions: string; 3350 | humidity: number; 3351 | wind_speed: number; 3352 | timestamp: string; 3353 | } 3354 | ``` 3355 | 3356 | TypeScript adds type safety to our MCP server, making it more reliable and easier to maintain. 3357 | </Tab> 3358 | 3359 | <Tab title="Resources"> 3360 | ```typescript 3361 | this.server.setRequestHandler( 3362 | ListResourcesRequestSchema, 3363 | async () => ({ 3364 | resources: [{ 3365 | uri: `weather://${DEFAULT_CITY}/current`, 3366 | name: `Current weather in ${DEFAULT_CITY}`, 3367 | mimeType: "application/json" 3368 | }] 3369 | }) 3370 | ); 3371 | ``` 3372 | 3373 | Resources provide data that Claude can access as context. 3374 | </Tab> 3375 | 3376 | <Tab title="Tools"> 3377 | ```typescript 3378 | { 3379 | name: "get_forecast", 3380 | description: "Get weather forecast for a city", 3381 | inputSchema: { 3382 | type: "object", 3383 | properties: { 3384 | city: { type: "string" }, 3385 | days: { type: "number" } 3386 | } 3387 | } 3388 | } 3389 | ``` 3390 | 3391 | Tools let Claude take actions through your server with type-safe inputs. 3392 | </Tab> 3393 | </Tabs> 3394 | 3395 | ## Best practices 3396 | 3397 | <CardGroup cols={1}> 3398 | <Card title="Error Handling" icon="shield"> 3399 | When a tool encounters an error, return the error message with `isError: true`, so the model can self-correct: 3400 | 3401 | ```typescript 3402 | try { 3403 | const response = await axiosInstance.get(...); 3404 | } catch (error) { 3405 | if (axios.isAxiosError(error)) { 3406 | return { 3407 | content: { 3408 | mimeType: "text/plain", 3409 | text: `Weather API error: ${error.response?.data.message ?? 
error.message}` 3410 | }, 3411 | isError: true, 3412 | } 3413 | } 3414 | throw error; 3415 | } 3416 | ``` 3417 | 3418 | For other handlers, throw an error, so the application can notify the user: 3419 | 3420 | ```typescript 3421 | try { 3422 | const response = await this.axiosInstance.get(...); 3423 | } catch (error) { 3424 | if (axios.isAxiosError(error)) { 3425 | throw new McpError( 3426 | ErrorCode.InternalError, 3427 | `Weather API error: ${error.response?.data.message}` 3428 | ); 3429 | } 3430 | throw error; 3431 | } 3432 | ``` 3433 | </Card> 3434 | 3435 | <Card title="Type Validation" icon="check"> 3436 | ```typescript 3437 | function isValidForecastArgs(args: any): args is GetForecastArgs { 3438 | return ( 3439 | typeof args === "object" && 3440 | args !== null && 3441 | "city" in args && 3442 | typeof args.city === "string" 3443 | ); 3444 | } 3445 | ``` 3446 | 3447 | <Tip>You can also use libraries like [Zod](https://zod.dev/) to perform this validation automatically.</Tip> 3448 | </Card> 3449 | </CardGroup> 3450 | 3451 | ## Available transports 3452 | 3453 | While this guide uses stdio to run the MCP server as a local process, MCP supports other [transports](/docs/concepts/transports) as well. 3454 | 3455 | ## Troubleshooting 3456 | 3457 | <Info> 3458 | The following troubleshooting tips are for macOS. Guides for other platforms are coming soon. 3459 | </Info> 3460 | 3461 | ### Build errors 3462 | 3463 | ```bash 3464 | # Check TypeScript version 3465 | npx tsc --version 3466 | 3467 | # Clean and rebuild 3468 | rm -rf build/ 3469 | npm run build 3470 | ``` 3471 | 3472 | ### Runtime errors 3473 | 3474 | Look for detailed error messages in the Claude Desktop logs: 3475 | 3476 | ```bash 3477 | # Monitor logs 3478 | tail -n 20 -f ~/Library/Logs/Claude/mcp*.log 3479 | ``` 3480 | 3481 | ### Type errors 3482 | 3483 | ```bash 3484 | # Check types without building 3485 | npx tsc --noEmit 3486 | ``` 3487 | 3488 | ## Next steps 3489 | 3490 | <CardGroup cols={2}> 3491 | <Card title="Architecture overview" icon="sitemap" href="/docs/concepts/architecture"> 3492 | Learn more about the MCP architecture 3493 | </Card> 3494 | 3495 | <Card title="TypeScript SDK" icon="square-js" href="https://github.com/modelcontextprotocol/typescript-sdk"> 3496 | Check out the TypeScript SDK on GitHub 3497 | </Card> 3498 | </CardGroup> 3499 | 3500 | <Note> 3501 | Need help? Ask Claude! Since it has access to the MCP SDK documentation, it can help you debug issues and suggest improvements to your server. 3502 | </Note> 3503 | 3504 | 3505 | # Debugging 3506 | 3507 | A comprehensive guide to debugging Model Context Protocol (MCP) integrations 3508 | 3509 | Effective debugging is essential when developing MCP servers or integrating them with applications. This guide covers the debugging tools and approaches available in the MCP ecosystem. 3510 | 3511 | <Info> 3512 | This guide is for macOS. Guides for other platforms are coming soon. 3513 | </Info> 3514 | 3515 | ## Debugging tools overview 3516 | 3517 | MCP provides several tools for debugging at different levels: 3518 | 3519 | 1. **MCP Inspector** 3520 | * Interactive debugging interface 3521 | * Direct server testing 3522 | * See the [Inspector guide](/docs/tools/inspector) for details 3523 | 3524 | 2. **Claude Desktop Developer Tools** 3525 | * Integration testing 3526 | * Log collection 3527 | * Chrome DevTools integration 3528 | 3529 | 3. 
**Server Logging** 3530 | * Custom logging implementations 3531 | * Error tracking 3532 | * Performance monitoring 3533 | 3534 | ## Debugging in Claude Desktop 3535 | 3536 | ### Checking server status 3537 | 3538 | The Claude.app interface provides basic server status information: 3539 | 3540 | 1. Click the 🔌 icon to view: 3541 | * Connected servers 3542 | * Available prompts and resources 3543 | 3544 | 2. Click the 🔨 icon to view: 3545 | * Tools made available to the model 3546 | 3547 | ### Viewing logs 3548 | 3549 | Review detailed MCP logs from Claude Desktop: 3550 | 3551 | ```bash 3552 | # Follow logs in real-time 3553 | tail -n 20 -f ~/Library/Logs/Claude/mcp*.log 3554 | ``` 3555 | 3556 | The logs capture: 3557 | 3558 | * Server connection events 3559 | * Configuration issues 3560 | * Runtime errors 3561 | * Message exchanges 3562 | 3563 | ### Using Chrome DevTools 3564 | 3565 | Access Chrome's developer tools inside Claude Desktop to investigate client-side errors: 3566 | 3567 | 1. Enable DevTools: 3568 | 3569 | ```bash 3570 | jq '.allowDevTools = true' ~/Library/Application\ Support/Claude/developer_settings.json > tmp.json \ 3571 | && mv tmp.json ~/Library/Application\ Support/Claude/developer_settings.json 3572 | ``` 3573 | 3574 | 2. Open DevTools: `Command-Option-Shift-i` 3575 | 3576 | Note: You'll see two DevTools windows: 3577 | 3578 | * Main content window 3579 | * App title bar window 3580 | 3581 | Use the Console panel to inspect client-side errors. 3582 | 3583 | Use the Network panel to inspect: 3584 | 3585 | * Message payloads 3586 | * Connection timing 3587 | 3588 | ## Common issues 3589 | 3590 | ### Environment variables 3591 | 3592 | MCP servers inherit only a subset of environment variables automatically, like `USER`, `HOME`, and `PATH`. 3593 | 3594 | To override the default variables or provide your own, you can specify an `env` key in `claude_desktop_config.json`: 3595 | 3596 | ```json 3597 | { 3598 | "myserver": { 3599 | "command": "mcp-server-myapp", 3600 | "env": { 3601 | "MYAPP_API_KEY": "some_key", 3602 | } 3603 | } 3604 | } 3605 | ``` 3606 | 3607 | ### Server initialization 3608 | 3609 | Common initialization problems: 3610 | 3611 | 1. **Path Issues** 3612 | * Incorrect server executable path 3613 | * Missing required files 3614 | * Permission problems 3615 | 3616 | 2. **Configuration Errors** 3617 | * Invalid JSON syntax 3618 | * Missing required fields 3619 | * Type mismatches 3620 | 3621 | 3. **Environment Problems** 3622 | * Missing environment variables 3623 | * Incorrect variable values 3624 | * Permission restrictions 3625 | 3626 | ### Connection problems 3627 | 3628 | When servers fail to connect: 3629 | 3630 | 1. Check Claude Desktop logs 3631 | 2. Verify server process is running 3632 | 3. Test standalone with [Inspector](/docs/tools/inspector) 3633 | 4. Verify protocol compatibility 3634 | 3635 | ## Implementing logging 3636 | 3637 | ### Server-side logging 3638 | 3639 | When building a server that uses the local stdio [transport](/docs/concepts/transports), all messages logged to stderr (standard error) will be captured by the host application (e.g., Claude Desktop) automatically. 3640 | 3641 | <Warning> 3642 | Local MCP servers should not log messages to stdout (standard out), as this will interfere with protocol operation. 
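For example, a Python stdio server can route its logging to stderr, which the host still captures (a minimal sketch; the logger name is illustrative):

```python
import logging
import sys

# stderr is captured by the host application (e.g. Claude Desktop) and stays
# separate from the JSON-RPC traffic flowing over stdout
logging.basicConfig(stream=sys.stderr, level=logging.INFO)
logging.getLogger("my-server").info("Server started")

# Avoid print(...) here: it writes to stdout and can corrupt the protocol stream
```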
3643 | </Warning> 3644 | 3645 | For all [transports](/docs/concepts/transports), you can also provide logging to the client by sending a log message notification: 3646 | 3647 | <Tabs> 3648 | <Tab title="Python"> 3649 | ```python 3650 | server.request_context.session.send_log_message( 3651 | level="info", 3652 | data="Server started successfully", 3653 | ) 3654 | ``` 3655 | </Tab> 3656 | 3657 | <Tab title="TypeScript"> 3658 | ```typescript 3659 | server.sendLoggingMessage({ 3660 | level: "info", 3661 | data: "Server started successfully", 3662 | }); 3663 | ``` 3664 | </Tab> 3665 | </Tabs> 3666 | 3667 | Important events to log: 3668 | 3669 | * Initialization steps 3670 | * Resource access 3671 | * Tool execution 3672 | * Error conditions 3673 | * Performance metrics 3674 | 3675 | ### Client-side logging 3676 | 3677 | In client applications: 3678 | 3679 | 1. Enable debug logging 3680 | 2. Monitor network traffic 3681 | 3. Track message exchanges 3682 | 4. Record error states 3683 | 3684 | ## Debugging workflow 3685 | 3686 | ### Development cycle 3687 | 3688 | 1. Initial Development 3689 | * Use [Inspector](/docs/tools/inspector) for basic testing 3690 | * Implement core functionality 3691 | * Add logging points 3692 | 3693 | 2. Integration Testing 3694 | * Test in Claude Desktop 3695 | * Monitor logs 3696 | * Check error handling 3697 | 3698 | ### Testing changes 3699 | 3700 | To test changes efficiently: 3701 | 3702 | * **Configuration changes**: Restart Claude Desktop 3703 | * **Server code changes**: Use Command-R to reload 3704 | * **Quick iteration**: Use [Inspector](/docs/tools/inspector) during development 3705 | 3706 | ## Best practices 3707 | 3708 | ### Logging strategy 3709 | 3710 | 1. **Structured Logging** 3711 | * Use consistent formats 3712 | * Include context 3713 | * Add timestamps 3714 | * Track request IDs 3715 | 3716 | 2. **Error Handling** 3717 | * Log stack traces 3718 | * Include error context 3719 | * Track error patterns 3720 | * Monitor recovery 3721 | 3722 | 3. **Performance Tracking** 3723 | * Log operation timing 3724 | * Monitor resource usage 3725 | * Track message sizes 3726 | * Measure latency 3727 | 3728 | ### Security considerations 3729 | 3730 | When debugging: 3731 | 3732 | 1. **Sensitive Data** 3733 | * Sanitize logs 3734 | * Protect credentials 3735 | * Mask personal information 3736 | 3737 | 2. **Access Control** 3738 | * Verify permissions 3739 | * Check authentication 3740 | * Monitor access patterns 3741 | 3742 | ## Getting help 3743 | 3744 | When encountering issues: 3745 | 3746 | 1. **First Steps** 3747 | * Check server logs 3748 | * Test with [Inspector](/docs/tools/inspector) 3749 | * Review configuration 3750 | * Verify environment 3751 | 3752 | 2. **Support Channels** 3753 | * GitHub issues 3754 | * GitHub discussions 3755 | 3756 | 3. **Providing Information** 3757 | * Log excerpts 3758 | * Configuration files 3759 | * Steps to reproduce 3760 | * Environment details 3761 | 3762 | ## Next steps 3763 | 3764 | <CardGroup cols={2}> 3765 | <Card title="MCP Inspector" icon="magnifying-glass" href="/docs/tools/inspector"> 3766 | Learn to use the MCP Inspector 3767 | </Card> 3768 | </CardGroup> 3769 | 3770 | 3771 | # Inspector 3772 | 3773 | In-depth guide to using the MCP Inspector for testing and debugging Model Context Protocol servers 3774 | 3775 | The [MCP Inspector](https://github.com/modelcontextprotocol/inspector) is an interactive developer tool for testing and debugging MCP servers. 
While the [Debugging Guide](/docs/tools/debugging) covers the Inspector as part of the overall debugging toolkit, this document provides a detailed exploration of the Inspector's features and capabilities. 3776 | 3777 | ## Getting started 3778 | 3779 | ### Installation and basic usage 3780 | 3781 | The Inspector runs directly through `npx` without requiring installation: 3782 | 3783 | ```bash 3784 | npx @modelcontextprotocol/inspector <command> 3785 | ``` 3786 | 3787 | ```bash 3788 | npx @modelcontextprotocol/inspector <command> <arg1> <arg2> 3789 | ``` 3790 | 3791 | #### Inspecting servers from NPM or PyPi 3792 | 3793 | A common way to start server packages from [NPM](https://npmjs.com) or [PyPi](https://pypi.com). 3794 | 3795 | <Tabs> 3796 | <Tab title="NPM package"> 3797 | ```bash 3798 | npx -y @modelcontextprotocol/inspector npx <package-name> <args> 3799 | # For example 3800 | npx -y @modelcontextprotocol/inspector npx server-postgres postgres://127.0.0.1/testdb 3801 | ``` 3802 | </Tab> 3803 | 3804 | <Tab title="PyPi package"> 3805 | ```bash 3806 | npx @modelcontextprotocol/inspector uvx <package-name> <args> 3807 | # For example 3808 | npx @modelcontextprotocol/inspector uvx mcp-server-git --repository ~/code/mcp/servers.git 3809 | ``` 3810 | </Tab> 3811 | </Tabs> 3812 | 3813 | #### Inspecting locally developed servers 3814 | 3815 | To inspect servers locally developed or downloaded as a repository, the most common 3816 | way is: 3817 | 3818 | <Tabs> 3819 | <Tab title="TypeScript"> 3820 | ```bash 3821 | npx @modelcontextprotocol/inspector node path/to/server/index.js args... 3822 | ``` 3823 | </Tab> 3824 | 3825 | <Tab title="Python"> 3826 | ```bash 3827 | npx @modelcontextprotocol/inspector \ 3828 | uv \ 3829 | --directory path/to/server \ 3830 | run \ 3831 | package-name \ 3832 | args... 3833 | ``` 3834 | </Tab> 3835 | </Tabs> 3836 | 3837 | Please carefully read any attached README for the most accurate instructions. 3838 | 3839 | ## Feature overview 3840 | 3841 | <Frame caption="The MCP Inspector interface"> 3842 | <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/mcp-inspector.png" /> 3843 | </Frame> 3844 | 3845 | The Inspector provides several features for interacting with your MCP server: 3846 | 3847 | ### Server connection pane 3848 | 3849 | * Allows selecting the [transport](/docs/concepts/transports) for connecting to the server 3850 | * For local servers, supports customizing the command-line arguments and environment 3851 | 3852 | ### Resources tab 3853 | 3854 | * Lists all available resources 3855 | * Shows resource metadata (MIME types, descriptions) 3856 | * Allows resource content inspection 3857 | * Supports subscription testing 3858 | 3859 | ### Prompts tab 3860 | 3861 | * Displays available prompt templates 3862 | * Shows prompt arguments and descriptions 3863 | * Enables prompt testing with custom arguments 3864 | * Previews generated messages 3865 | 3866 | ### Tools tab 3867 | 3868 | * Lists available tools 3869 | * Shows tool schemas and descriptions 3870 | * Enables tool testing with custom inputs 3871 | * Displays tool execution results 3872 | 3873 | ### Notifications pane 3874 | 3875 | * Presents all logs recorded from the server 3876 | * Shows notifications received from the server 3877 | 3878 | ## Best practices 3879 | 3880 | ### Development workflow 3881 | 3882 | 1. Start Development 3883 | * Launch Inspector with your server 3884 | * Verify basic connectivity 3885 | * Check capability negotiation 3886 | 3887 | 2. 
Iterative testing 3888 | * Make server changes 3889 | * Rebuild the server 3890 | * Reconnect the Inspector 3891 | * Test affected features 3892 | * Monitor messages 3893 | 3894 | 3. Test edge cases 3895 | * Invalid inputs 3896 | * Missing prompt arguments 3897 | * Concurrent operations 3898 | * Verify error handling and error responses 3899 | 3900 | ## Next steps 3901 | 3902 | <CardGroup cols={2}> 3903 | <Card title="Inspector Repository" icon="github" href="https://github.com/modelcontextprotocol/inspector"> 3904 | Check out the MCP Inspector source code 3905 | </Card> 3906 | 3907 | <Card title="Debugging Guide" icon="bug" href="/docs/tools/debugging"> 3908 | Learn about broader debugging strategies 3909 | </Card> 3910 | </CardGroup> 3911 | 3912 | 3913 | # Introduction 3914 | 3915 | Get started with the Model Context Protocol (MCP) 3916 | 3917 | The Model Context Protocol (MCP) is an open protocol that enables seamless integration between LLM applications and external data sources and tools. Whether you're building an AI-powered IDE, enhancing a chat interface, or creating custom AI workflows, MCP provides a standardized way to connect LLMs with the context they need. 3918 | 3919 | ## Get started with MCP 3920 | 3921 | Choose the path that best fits your needs: 3922 | 3923 | <CardGroup cols={1}> 3924 | <Card title="Quickstart" icon="bolt" href="/quickstart"> 3925 | The fastest way to see MCP in action—connect example servers to Claude Desktop 3926 | </Card> 3927 | 3928 | <Card title="Build your first server (Python)" icon="python" href="/docs/first-server/python"> 3929 | Create a simple MCP server in Python to understand the basics 3930 | </Card> 3931 | 3932 | <Card title="Build your first server (TypeScript)" icon="square-js" href="/docs/first-server/typescript"> 3933 | Create a simple MCP server in TypeScript to understand the basics 3934 | </Card> 3935 | </CardGroup> 3936 | 3937 | ## Development tools 3938 | 3939 | Essential tools for building and debugging MCP servers: 3940 | 3941 | <CardGroup cols={2}> 3942 | <Card title="Debugging Guide" icon="bug" href="/docs/tools/debugging"> 3943 | Learn how to effectively debug MCP servers and integrations 3944 | </Card> 3945 | 3946 | <Card title="MCP Inspector" icon="magnifying-glass" href="/docs/tools/inspector"> 3947 | Test and inspect your MCP servers with our interactive debugging tool 3948 | </Card> 3949 | </CardGroup> 3950 | 3951 | ## Explore MCP 3952 | 3953 | Dive deeper into MCP's core concepts and capabilities: 3954 | 3955 | <CardGroup cols={2}> 3956 | <Card title="Core Architecture" icon="sitemap" href="/docs/concepts/architecture"> 3957 | Understand how MCP connects clients, servers, and LLMs 3958 | </Card> 3959 | 3960 | <Card title="Resources" icon="database" href="/docs/concepts/resources"> 3961 | Expose data and content from your servers to LLMs 3962 | </Card> 3963 | 3964 | <Card title="Prompts" icon="message" href="/docs/concepts/prompts"> 3965 | Create reusable prompt templates and workflows 3966 | </Card> 3967 | 3968 | <Card title="Tools" icon="wrench" href="/docs/concepts/tools"> 3969 | Enable LLMs to perform actions through your server 3970 | </Card> 3971 | 3972 | <Card title="Sampling" icon="robot" href="/docs/concepts/sampling"> 3973 | Let your servers request completions from LLMs 3974 | </Card> 3975 | 3976 | <Card title="Transports" icon="network-wired" href="/docs/concepts/transports"> 3977 | Learn about MCP's communication mechanism 3978 | </Card> 3979 | </CardGroup> 3980 | 3981 | ## Contributing 3982 | 3983 | Want to 
contribute? Check out [@modelcontextprotocol](https://github.com/modelcontextprotocol) on GitHub to join our growing community of developers building with MCP. 3984 | 3985 | 3986 | # Quickstart 3987 | 3988 | Get started with MCP in less than 5 minutes 3989 | 3990 | MCP is a protocol that enables secure connections between host applications, such as [Claude Desktop](https://claude.ai/download), and local services. In this quickstart guide, you'll learn how to: 3991 | 3992 | * Set up a local SQLite database 3993 | * Connect Claude Desktop to it through MCP 3994 | * Query and analyze your data securely 3995 | 3996 | <Note> 3997 | While this guide focuses on using Claude Desktop as an example MCP host, the protocol is open and can be integrated by any application. IDEs, AI tools, and other software can all use MCP to connect to local integrations in a standardized way. 3998 | </Note> 3999 | 4000 | <Warning> 4001 | Claude Desktop's MCP support is currently in developer preview and only supports connecting to local MCP servers running on your machine. Remote MCP connections are not yet supported. This integration is only available in the Claude Desktop app, not the Claude web interface (claude.ai). 4002 | </Warning> 4003 | 4004 | ## How MCP works 4005 | 4006 | MCP (Model Context Protocol) is an open protocol that enables secure, controlled interactions between AI applications and local or remote resources. Let's break down how it works, then look at how we'll use it in this guide. 4007 | 4008 | ### General Architecture 4009 | 4010 | At its core, MCP follows a client-server architecture where a host application can connect to multiple servers: 4011 | 4012 | ```mermaid 4013 | flowchart LR 4014 | subgraph "Your Computer" 4015 | Host["MCP Host\n(Claude, IDEs, Tools)"] 4016 | S1["MCP Server A"] 4017 | S2["MCP Server B"] 4018 | S3["MCP Server C"] 4019 | 4020 | Host <-->|"MCP Protocol"| S1 4021 | Host <-->|"MCP Protocol"| S2 4022 | Host <-->|"MCP Protocol"| S3 4023 | 4024 | S1 <--> R1[("Local\nResource A")] 4025 | S2 <--> R2[("Local\nResource B")] 4026 | end 4027 | 4028 | subgraph "Internet" 4029 | S3 <-->|"Web APIs"| R3[("Remote\nResource C")] 4030 | end 4031 | ``` 4032 | 4033 | * **MCP Hosts**: Programs like Claude Desktop, IDEs, or AI tools that want to access resources through MCP 4034 | * **MCP Clients**: Protocol clients that maintain 1:1 connections with servers 4035 | * **MCP Servers**: Lightweight programs that each expose specific capabilities through the standardized Model Context Protocol 4036 | * **Local Resources**: Your computer's resources (databases, files, services) that MCP servers can securely access 4037 | * **Remote Resources**: Resources available over the internet (e.g., through APIs) that MCP servers can connect to 4038 | 4039 | ### In This Guide 4040 | 4041 | For this quickstart, we'll implement a focused example using SQLite: 4042 | 4043 | ```mermaid 4044 | flowchart LR 4045 | subgraph "Your Computer" 4046 | direction LR 4047 | Claude["Claude Desktop"] 4048 | MCP["SQLite MCP Server"] 4049 | DB[(SQLite Database\n~/test.db)] 4050 | 4051 | Claude <-->|"MCP Protocol\n(Queries & Results)"| MCP 4052 | MCP <-->|"Local Access\n(SQL Operations)"| DB 4053 | end 4054 | ``` 4055 | 4056 | 1. Claude Desktop acts as our MCP client 4057 | 2. A SQLite MCP Server provides secure database access 4058 | 3. 
Your local SQLite database stores the actual data 4059 | 4060 | The communication between the SQLite MCP server and your local SQLite database happens entirely on your machine—your SQLite database is not exposed to the internet. The Model Context Protocol ensures that Claude Desktop can only perform approved database operations through well-defined interfaces. This gives you a secure way to let Claude analyze and interact with your local data while maintaining complete control over what it can access. 4061 | 4062 | ## Prerequisites 4063 | 4064 | * macOS or Windows 4065 | * The latest version of [Claude Desktop](https://claude.ai/download) installed 4066 | * [uv](https://docs.astral.sh/uv/) 0.4.18 or higher (`uv --version` to check) 4067 | * Git (`git --version` to check) 4068 | * SQLite (`sqlite3 --version` to check) 4069 | 4070 | <AccordionGroup> 4071 | <Accordion title="Installing prerequisites (macOS)"> 4072 | ```bash 4073 | # Using Homebrew 4074 | brew install uv git sqlite3 4075 | 4076 | # Or download directly: 4077 | # uv: https://docs.astral.sh/uv/ 4078 | # Git: https://git-scm.com 4079 | # SQLite: https://www.sqlite.org/download.html 4080 | ``` 4081 | </Accordion> 4082 | 4083 | <Accordion title="Installing prerequisites (Windows)"> 4084 | ```powershell 4085 | # Using winget 4086 | winget install --id=astral-sh.uv -e 4087 | winget install git.git sqlite.sqlite 4088 | 4089 | # Or download directly: 4090 | # uv: https://docs.astral.sh/uv/ 4091 | # Git: https://git-scm.com 4092 | # SQLite: https://www.sqlite.org/download.html 4093 | ``` 4094 | </Accordion> 4095 | </AccordionGroup> 4096 | 4097 | ## Installation 4098 | 4099 | <Tabs> 4100 | <Tab title="macOS"> 4101 | <Steps> 4102 | <Step title="Create a sample database"> 4103 | Let's create a simple SQLite database for testing: 4104 | 4105 | ```bash 4106 | # Create a new SQLite database 4107 | sqlite3 ~/test.db <<EOF 4108 | CREATE TABLE products ( 4109 | id INTEGER PRIMARY KEY, 4110 | name TEXT, 4111 | price REAL 4112 | ); 4113 | 4114 | INSERT INTO products (name, price) VALUES 4115 | ('Widget', 19.99), 4116 | ('Gadget', 29.99), 4117 | ('Gizmo', 39.99), 4118 | ('Smart Watch', 199.99), 4119 | ('Wireless Earbuds', 89.99), 4120 | ('Portable Charger', 24.99), 4121 | ('Bluetooth Speaker', 79.99), 4122 | ('Phone Stand', 15.99), 4123 | ('Laptop Sleeve', 34.99), 4124 | ('Mini Drone', 299.99), 4125 | ('LED Desk Lamp', 45.99), 4126 | ('Keyboard', 129.99), 4127 | ('Mouse Pad', 12.99), 4128 | ('USB Hub', 49.99), 4129 | ('Webcam', 69.99), 4130 | ('Screen Protector', 9.99), 4131 | ('Travel Adapter', 27.99), 4132 | ('Gaming Headset', 159.99), 4133 | ('Fitness Tracker', 119.99), 4134 | ('Portable SSD', 179.99); 4135 | EOF 4136 | ``` 4137 | </Step> 4138 | 4139 | <Step title="Configure Claude Desktop"> 4140 | Open your Claude Desktop App configuration at `~/Library/Application Support/Claude/claude_desktop_config.json` in a text editor. 4141 | 4142 | For example, if you have [VS Code](https://code.visualstudio.com/) installed: 4143 | 4144 | ```bash 4145 | code ~/Library/Application\ Support/Claude/claude_desktop_config.json 4146 | ``` 4147 | 4148 | Add this configuration (replace YOUR\_USERNAME with your actual username): 4149 | 4150 | ```json 4151 | { 4152 | "mcpServers": { 4153 | "sqlite": { 4154 | "command": "uvx", 4155 | "args": ["mcp-server-sqlite", "--db-path", "/Users/YOUR_USERNAME/test.db"] 4156 | } 4157 | } 4158 | } 4159 | ``` 4160 | 4161 | This tells Claude Desktop: 4162 | 4163 | 1. There's an MCP server named "sqlite" 4164 | 2. 
Launch it by running `uvx mcp-server-sqlite` 4165 | 3. Connect it to your test database 4166 | 4167 | Save the file, and restart **Claude Desktop**. 4168 | </Step> 4169 | </Steps> 4170 | </Tab> 4171 | 4172 | <Tab title="Windows"> 4173 | <Steps> 4174 | <Step title="Create a sample database"> 4175 | Let's create a simple SQLite database for testing: 4176 | 4177 | ```powershell 4178 | # Create a new SQLite database 4179 | $sql = @' 4180 | CREATE TABLE products ( 4181 | id INTEGER PRIMARY KEY, 4182 | name TEXT, 4183 | price REAL 4184 | ); 4185 | 4186 | INSERT INTO products (name, price) VALUES 4187 | ('Widget', 19.99), 4188 | ('Gadget', 29.99), 4189 | ('Gizmo', 39.99), 4190 | ('Smart Watch', 199.99), 4191 | ('Wireless Earbuds', 89.99), 4192 | ('Portable Charger', 24.99), 4193 | ('Bluetooth Speaker', 79.99), 4194 | ('Phone Stand', 15.99), 4195 | ('Laptop Sleeve', 34.99), 4196 | ('Mini Drone', 299.99), 4197 | ('LED Desk Lamp', 45.99), 4198 | ('Keyboard', 129.99), 4199 | ('Mouse Pad', 12.99), 4200 | ('USB Hub', 49.99), 4201 | ('Webcam', 69.99), 4202 | ('Screen Protector', 9.99), 4203 | ('Travel Adapter', 27.99), 4204 | ('Gaming Headset', 159.99), 4205 | ('Fitness Tracker', 119.99), 4206 | ('Portable SSD', 179.99); 4207 | '@ 4208 | 4209 | cd ~ 4210 | & sqlite3 test.db $sql 4211 | ``` 4212 | </Step> 4213 | 4214 | <Step title="Configure Claude Desktop"> 4215 | Open your Claude Desktop App configuration at `%APPDATA%\Claude\claude_desktop_config.json` in a text editor. 4216 | 4217 | For example, if you have [VS Code](https://code.visualstudio.com/) installed: 4218 | 4219 | ```powershell 4220 | code $env:AppData\Claude\claude_desktop_config.json 4221 | ``` 4222 | 4223 | Add this configuration (replace YOUR\_USERNAME with your actual username): 4224 | 4225 | ```json 4226 | { 4227 | "mcpServers": { 4228 | "sqlite": { 4229 | "command": "uvx", 4230 | "args": [ 4231 | "mcp-server-sqlite", 4232 | "--db-path", 4233 | "C:\\Users\\YOUR_USERNAME\\test.db" 4234 | ] 4235 | } 4236 | } 4237 | } 4238 | ``` 4239 | 4240 | This tells Claude Desktop: 4241 | 4242 | 1. There's an MCP server named "sqlite" 4243 | 2. Launch it by running `uvx mcp-server-sqlite` 4244 | 3. Connect it to your test database 4245 | 4246 | Save the file, and restart **Claude Desktop**. 4247 | </Step> 4248 | </Steps> 4249 | </Tab> 4250 | </Tabs> 4251 | 4252 | ## Test it out 4253 | 4254 | Let's verify everything is working. Try sending this prompt to Claude Desktop: 4255 | 4256 | ``` 4257 | Can you connect to my SQLite database and tell me what products are available, and their prices? 4258 | ``` 4259 | 4260 | Claude Desktop will: 4261 | 4262 | 1. Connect to the SQLite MCP server 4263 | 2. Query your local database 4264 | 3. Format and present the results 4265 | 4266 | <Frame caption="Claude Desktop successfully queries our SQLite database 🎉"> 4267 | <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/quickstart-screenshot.png" alt="Example Claude Desktop conversation showing database query results" /> 4268 | </Frame> 4269 | 4270 | ## What's happening under the hood? 4271 | 4272 | When you interact with Claude Desktop using MCP: 4273 | 4274 | 1. **Server Discovery**: Claude Desktop connects to your configured MCP servers on startup 4275 | 4276 | 2. **Protocol Handshake**: When you ask about data, Claude Desktop: 4277 | * Identifies which MCP server can help (sqlite in this case) 4278 | * Negotiates capabilities through the protocol 4279 | * Requests data or actions from the MCP server 4280 | 4281 | 3. 
**Interaction Flow**: 4282 | ```mermaid 4283 | sequenceDiagram 4284 | participant C as Claude Desktop 4285 | participant M as MCP Server 4286 | participant D as SQLite DB 4287 | 4288 | C->>M: Initialize connection 4289 | M-->>C: Available capabilities 4290 | 4291 | C->>M: Query request 4292 | M->>D: SQL query 4293 | D-->>M: Results 4294 | M-->>C: Formatted results 4295 | ``` 4296 | 4297 | 4. **Security**: 4298 | * MCP servers only expose specific, controlled capabilities 4299 | * MCP servers run locally on your machine, and the resources they access are not exposed to the internet 4300 | * Claude Desktop requires user confirmation for sensitive operations 4301 | 4302 | ## Try these examples 4303 | 4304 | Now that MCP is working, try these increasingly powerful examples: 4305 | 4306 | <AccordionGroup> 4307 | <Accordion title="Basic Queries" active> 4308 | ``` 4309 | What's the average price of all products in the database? 4310 | ``` 4311 | </Accordion> 4312 | 4313 | <Accordion title="Data Analysis"> 4314 | ``` 4315 | Can you analyze the price distribution and suggest any pricing optimizations? 4316 | ``` 4317 | </Accordion> 4318 | 4319 | <Accordion title="Complex Operations"> 4320 | ``` 4321 | Could you help me design and create a new table for storing customer orders? 4322 | ``` 4323 | </Accordion> 4324 | </AccordionGroup> 4325 | 4326 | ## Add more capabilities 4327 | 4328 | Want to give Claude Desktop more local integration capabilities? Add these servers to your configuration: 4329 | 4330 | <Note> 4331 | Note that these MCP servers will require [Node.js](https://nodejs.org/en) to be installed on your machine. 4332 | </Note> 4333 | 4334 | <AccordionGroup> 4335 | <Accordion title="File System Access" icon="folder-open"> 4336 | Add this to your config to let Claude Desktop read and analyze files: 4337 | 4338 | ```json 4339 | "filesystem": { 4340 | "command": "npx", 4341 | "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/YOUR_USERNAME/Desktop"] 4342 | } 4343 | ``` 4344 | </Accordion> 4345 | 4346 | <Accordion title="PostgreSQL Connection" icon="database"> 4347 | Connect Claude Desktop to your PostgreSQL database: 4348 | 4349 | ```json 4350 | "postgres": { 4351 | "command": "npx", 4352 | "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost/mydb"] 4353 | } 4354 | ``` 4355 | </Accordion> 4356 | </AccordionGroup> 4357 | 4358 | ## More MCP Clients 4359 | 4360 | While this guide demonstrates MCP using Claude Desktop as a client, several other applications support MCP integration: 4361 | 4362 | <CardGroup cols={2}> 4363 | <Card title="Zed Editor" icon="pen-to-square" href="https://zed.dev"> 4364 | A high-performance, multiplayer code editor with built-in MCP support for AI-powered coding assistance 4365 | </Card> 4366 | 4367 | <Card title="Cody" icon="magnifying-glass" href="https://sourcegraph.com/cody"> 4368 | Code intelligence platform featuring MCP integration for enhanced code search and analysis capabilities 4369 | </Card> 4370 | </CardGroup> 4371 | 4372 | Each host application may implement MCP features differently or support different capabilities. Check their respective documentation for specific setup instructions and supported features. 4373 | 4374 | ## Troubleshooting 4375 | 4376 | <AccordionGroup> 4377 | <Accordion title="Nothing showing up in Claude Desktop?"> 4378 | 1. 
Check if MCP is enabled: 4379 | * Click the 🔌 icon in Claude Desktop, next to the chat box 4380 | * Expand "Installed MCP Servers" 4381 | * You should see your configured servers 4382 | 4383 | 2. Verify your config: 4384 | * From Claude Desktop, go to Claude > Settings… 4385 | * Open the "Developer" tab to see your configuration 4386 | 4387 | 3. Restart Claude Desktop completely: 4388 | * Quit the app (not just close the window) 4389 | * Start it again 4390 | </Accordion> 4391 | 4392 | <Accordion title="MCP or database errors?"> 4393 | 1. Check Claude Desktop's logs: 4394 | ```bash 4395 | tail -n 20 -f ~/Library/Logs/Claude/mcp*.log 4396 | ``` 4397 | 4398 | 2. Verify database access: 4399 | ```bash 4400 | # Test database connection 4401 | sqlite3 ~/test.db ".tables" 4402 | ``` 4403 | 4404 | 3. Common fixes: 4405 | * Check file paths in your config 4406 | * Verify database file permissions 4407 | * Ensure SQLite is installed properly 4408 | </Accordion> 4409 | </AccordionGroup> 4410 | 4411 | ## Next steps 4412 | 4413 | <CardGroup cols={2}> 4414 | <Card title="Build your first MCP server" icon="code" href="/docs/first-server/python"> 4415 | Create your own MCP servers to give your LLM clients new capabilities. 4416 | </Card> 4417 | 4418 | <Card title="Explore examples" icon="github" href="https://github.com/modelcontextprotocol/servers"> 4419 | Browse our collection of example servers to see what's possible. 4420 | </Card> 4421 | </CardGroup> 4422 | 4423 | ```
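For reference, the extra servers shown in the quickstart's "Add more capabilities" section are configured alongside the `sqlite` entry, as siblings under the single `mcpServers` key of `claude_desktop_config.json`. A rough sketch combining the macOS examples from the guide above (placeholder paths and the `postgresql://localhost/mydb` connection string are the guide's own examples; substitute your actual username and database):

```json
{
  "mcpServers": {
    "sqlite": {
      "command": "uvx",
      "args": ["mcp-server-sqlite", "--db-path", "/Users/YOUR_USERNAME/test.db"]
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/YOUR_USERNAME/Desktop"]
    },
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost/mydb"]
    }
  }
}
```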