This is page 2 of 2. Use http://codebase.md/caue397/google-calendar-mcp?lines=true&page={x} to view the full context.
# Directory Structure
```
├── .gitignore
├── llm
│ ├── example_server.ts
│ └── mcp-llms-full.txt
├── package-lock.json
├── package.json
├── README.md
├── scripts
│ └── build.js
├── src
│ ├── auth-server.ts
│ ├── index.ts
│ └── token-manager.ts
└── tsconfig.json
```
# Files
--------------------------------------------------------------------------------
/llm/mcp-llms-full.txt:
--------------------------------------------------------------------------------
```
1 | # Clients
2 |
3 | A list of applications that support MCP integrations
4 |
5 | This page provides an overview of applications that support the Model Context Protocol (MCP). Each client may support different MCP features, allowing for varying levels of integration with MCP servers.
6 |
7 | ## Feature support matrix
8 |
9 | | Client | [Resources] | [Prompts] | [Tools] | [Sampling] | Roots | Notes |
10 | | ---------------------------- | ----------- | --------- | ------- | ---------- | ----- | ------------------------------------------------ |
11 | | [Claude Desktop App][Claude] | ✅          | ✅        | ✅      | ❌         | ❌    | Supports resources, prompts, and tools           |
12 | | [Zed][Zed] | ❌ | ✅ | ❌ | ❌ | ❌ | Prompts appear as slash commands |
13 | | [Sourcegraph Cody][Cody] | ✅ | ❌ | ❌ | ❌ | ❌ | Supports resources through OpenCTX |
14 | | [Firebase Genkit][Genkit] | ⚠️ | ✅ | ✅ | ❌ | ❌ | Supports resource list and lookup through tools. |
15 | | [Continue][Continue]         | ✅          | ✅        | ✅      | ❌         | ❌    | Supports resources, prompts, and tools           |
16 | | [GenAIScript][GenAIScript] | ❌ | ❌ | ✅ | ❌ | ❌ | Supports tools. |
17 |
18 | [Claude]: https://claude.ai/download
19 |
20 | [Zed]: https://zed.dev
21 |
22 | [Cody]: https://sourcegraph.com/cody
23 |
24 | [Genkit]: https://github.com/firebase/genkit
25 |
26 | [Continue]: https://github.com/continuedev/continue
27 |
28 | [GenAIScript]: https://microsoft.github.io/genaiscript/reference/scripts/mcp-tools/
29 |
30 | [Resources]: https://modelcontextprotocol.io/docs/concepts/resources
31 |
32 | [Prompts]: https://modelcontextprotocol.io/docs/concepts/prompts
33 |
34 | [Tools]: https://modelcontextprotocol.io/docs/concepts/tools
35 |
36 | [Sampling]: https://modelcontextprotocol.io/docs/concepts/sampling
37 |
38 | ## Client details
39 |
40 | ### Claude Desktop App
41 |
42 | The Claude desktop application provides comprehensive support for MCP, enabling deep integration with local tools and data sources.
43 |
44 | **Key features:**
45 |
46 | * Full support for resources, allowing attachment of local files and data
47 | * Support for prompt templates
48 | * Tool integration for executing commands and scripts
49 | * Local server connections for enhanced privacy and security
50 |
51 | > ⓘ Note: The Claude.ai web application does not currently support MCP. MCP features are only available in the desktop application.
52 |
53 | ### Zed
54 |
55 | [Zed](https://zed.dev/docs/assistant/model-context-protocol) is a high-performance code editor with built-in MCP support, focusing on prompt templates and tool integration.
56 |
57 | **Key features:**
58 |
59 | * Prompt templates surface as slash commands in the editor
60 | * Tool integration for enhanced coding workflows
61 | * Tight integration with editor features and workspace context
62 | * Does not support MCP resources
63 |
64 | ### Sourcegraph Cody
65 |
66 | [Cody](https://openctx.org/docs/providers/modelcontextprotocol) is Sourcegraph's AI coding assistant, which implements MCP through OpenCTX.
67 |
68 | **Key features:**
69 |
70 | * Support for MCP resources
71 | * Integration with Sourcegraph's code intelligence
72 | * Uses OpenCTX as an abstraction layer
73 | * Future support planned for additional MCP features
74 |
75 | ### Firebase Genkit
76 |
77 | [Genkit](https://github.com/firebase/genkit) is Firebase's SDK for building and integrating GenAI features into applications. The [genkitx-mcp](https://github.com/firebase/genkit/tree/main/js/plugins/mcp) plugin enables consuming MCP servers as a client or creating MCP servers from Genkit tools and prompts.
78 |
79 | **Key features:**
80 |
81 | * Client support for tools and prompts (resources partially supported)
82 | * Rich discovery with support in Genkit's Dev UI playground
83 | * Seamless interoperability with Genkit's existing tools and prompts
84 | * Works across a wide variety of GenAI models from top providers
85 |
86 | ### Continue
87 |
88 | [Continue](https://github.com/continuedev/continue) is an open-source AI code assistant with built-in support for MCP resources, prompts, and tools.
89 |
90 | **Key features:**
91 |
92 | * Type "@" to mention MCP resources
93 | * Prompt templates surface as slash commands
94 | * Use both built-in and MCP tools directly in chat
95 | * Supports VS Code and JetBrains IDEs, with any LLM
96 |
97 | ### GenAIScript
98 |
99 | Programmatically assemble prompts for LLMs using [GenAIScript](https://microsoft.github.io/genaiscript/). Orchestrate LLMs, tools, and data in JavaScript.
100 |
101 | **Key features:**
102 |
103 | * JavaScript toolbox to work with prompts
104 | * Abstractions that make prompt assembly easy and productive
105 | * Seamless Visual Studio Code integration
106 |
107 | ## Adding MCP support to your application
108 |
109 | If you've added MCP support to your application, we encourage you to submit a pull request to add it to this list. MCP integration can provide your users with powerful contextual AI capabilities and make your application part of the growing MCP ecosystem.
110 |
111 | Benefits of adding MCP support:
112 |
113 | * Enable users to bring their own context and tools
114 | * Join a growing ecosystem of interoperable AI applications
115 | * Provide users with flexible integration options
116 | * Support local-first AI workflows
117 |
118 | To get started with implementing MCP in your application, check out our [Python SDK](https://github.com/modelcontextprotocol/python-sdk) or [TypeScript SDK](https://github.com/modelcontextprotocol/typescript-sdk) documentation.
119 |
120 | ## Updates and corrections
121 |
122 | This list is maintained by the community. If you notice any inaccuracies or would like to update information about MCP support in your application, please submit a pull request or [open an issue in our documentation repository](https://github.com/modelcontextprotocol/docs/issues).
123 |
124 |
125 | # Core architecture
126 |
127 | Understand how MCP connects clients, servers, and LLMs
128 |
129 | The Model Context Protocol (MCP) is built on a flexible, extensible architecture that enables seamless communication between LLM applications and integrations. This document covers the core architectural components and concepts.
130 |
131 | ## Overview
132 |
133 | MCP follows a client-server architecture where:
134 |
135 | * **Hosts** are LLM applications (like Claude Desktop or IDEs) that initiate connections
136 | * **Clients** maintain 1:1 connections with servers, inside the host application
137 | * **Servers** provide context, tools, and prompts to clients
138 |
139 | ```mermaid
140 | flowchart LR
141 |     subgraph "Host (e.g., Claude Desktop)"
142 |         client1[MCP Client]
143 |         client2[MCP Client]
144 |     end
145 |     subgraph ServerA["Server Process"]
146 |         server1[MCP Server]
147 |     end
148 |     subgraph ServerB["Server Process"]
149 |         server2[MCP Server]
150 |     end
151 |
152 | client1 <-->|Transport Layer| server1
153 | client2 <-->|Transport Layer| server2
154 | ```
155 |
156 | ## Core components
157 |
158 | ### Protocol layer
159 |
160 | The protocol layer handles message framing, request/response linking, and high-level communication patterns.
161 |
162 | <Tabs>
163 | <Tab title="TypeScript">
164 | ```typescript
165 | class Protocol<Request, Notification, Result> {
166 | // Handle incoming requests
167 | setRequestHandler<T>(schema: T, handler: (request: T, extra: RequestHandlerExtra) => Promise<Result>): void
168 |
169 | // Handle incoming notifications
170 | setNotificationHandler<T>(schema: T, handler: (notification: T) => Promise<void>): void
171 |
172 | // Send requests and await responses
173 | request<T>(request: Request, schema: T, options?: RequestOptions): Promise<T>
174 |
175 | // Send one-way notifications
176 | notification(notification: Notification): Promise<void>
177 | }
178 | ```
179 | </Tab>
180 |
181 | <Tab title="Python">
182 | ```python
183 | class Session(BaseSession[RequestT, NotificationT, ResultT]):
184 | async def send_request(
185 | self,
186 | request: RequestT,
187 | result_type: type[Result]
188 | ) -> Result:
189 | """
190 | Send request and wait for response. Raises McpError if response contains error.
191 | """
192 | # Request handling implementation
193 |
194 | async def send_notification(
195 | self,
196 | notification: NotificationT
197 | ) -> None:
198 | """Send one-way notification that doesn't expect response."""
199 | # Notification handling implementation
200 |
201 | async def _received_request(
202 | self,
203 | responder: RequestResponder[ReceiveRequestT, ResultT]
204 | ) -> None:
205 | """Handle incoming request from other side."""
206 | # Request handling implementation
207 |
208 | async def _received_notification(
209 | self,
210 | notification: ReceiveNotificationT
211 | ) -> None:
212 | """Handle incoming notification from other side."""
213 | # Notification handling implementation
214 | ```
215 | </Tab>
216 | </Tabs>
217 |
218 | Key classes include:
219 |
220 | * `Protocol`
221 | * `Client`
222 | * `Server`
223 |
224 | ### Transport layer
225 |
226 | The transport layer handles the actual communication between clients and servers. MCP supports multiple transport mechanisms:
227 |
228 | 1. **Stdio transport**
229 | * Uses standard input/output for communication
230 | * Ideal for local processes
231 |
232 | 2. **HTTP with SSE transport**
233 | * Uses Server-Sent Events for server-to-client messages
234 | * HTTP POST for client-to-server messages
235 |
236 | All transports use [JSON-RPC](https://www.jsonrpc.org/) 2.0 to exchange messages. See the [specification](https://spec.modelcontextprotocol.io) for detailed information about the Model Context Protocol message format.
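
As a concrete illustration of the wire format, here is a minimal sketch of a JSON-RPC 2.0 request and its matching response as they would appear on any MCP transport (the method name `resources/list` comes from the Resources section; the `id` value is arbitrary):

```python
import json

# A JSON-RPC 2.0 request: "id" links the eventual response back to it.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "resources/list",
    "params": {},
}

# A successful response carries the same "id" and a "result" object.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"resources": []},
}

# Both sides exchange these as serialized JSON text.
wire_request = json.dumps(request)
```

The `id` field is what lets the protocol layer match responses to in-flight requests.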
237 |
238 | ### Message types
239 |
240 | MCP has these main types of messages:
241 |
242 | 1. **Requests** expect a response from the other side:
243 | ```typescript
244 | interface Request {
245 | method: string;
246 | params?: { ... };
247 | }
248 | ```
249 |
250 | 2. **Notifications** are one-way messages that don't expect a response:
251 | ```typescript
252 | interface Notification {
253 | method: string;
254 | params?: { ... };
255 | }
256 | ```
257 |
258 | 3. **Results** are successful responses to requests:
259 | ```typescript
260 | interface Result {
261 | [key: string]: unknown;
262 | }
263 | ```
264 |
265 | 4. **Errors** indicate that a request failed:
266 | ```typescript
267 | interface Error {
268 | code: number;
269 | message: string;
270 | data?: unknown;
271 | }
272 | ```
273 |
274 | ## Connection lifecycle
275 |
276 | ### 1. Initialization
277 |
278 | ```mermaid
279 | sequenceDiagram
280 | participant Client
281 | participant Server
282 |
283 | Client->>Server: initialize request
284 | Server->>Client: initialize response
285 | Client->>Server: initialized notification
286 |
287 | Note over Client,Server: Connection ready for use
288 | ```
289 |
290 | 1. Client sends `initialize` request with protocol version and capabilities
291 | 2. Server responds with its protocol version and capabilities
292 | 3. Client sends `initialized` notification as acknowledgment
293 | 4. Normal message exchange begins
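
The handshake above can be sketched as plain JSON-RPC payloads. This is an illustrative sketch, not SDK code; the version string, names, and capability contents are placeholders:

```python
# Step 1: client announces its protocol version and capabilities.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",  # illustrative version string
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "1.0.0"},
    },
}

# Step 2: server answers with its own version and capabilities.
initialize_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "protocolVersion": "2024-11-05",
        "capabilities": {"resources": {}},
        "serverInfo": {"name": "example-server", "version": "1.0.0"},
    },
}

# Step 3: client acknowledges; notifications carry no "id".
initialized_notification = {
    "jsonrpc": "2.0",
    "method": "notifications/initialized",
}
```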
294 |
295 | ### 2. Message exchange
296 |
297 | After initialization, the following patterns are supported:
298 |
299 | * **Request-Response**: Client or server sends requests, the other responds
300 | * **Notifications**: Either party sends one-way messages
301 |
302 | ### 3. Termination
303 |
304 | Either party can terminate the connection:
305 |
306 | * Clean shutdown via `close()`
307 | * Transport disconnection
308 | * Error conditions
309 |
310 | ## Error handling
311 |
312 | MCP defines these standard error codes:
313 |
314 | ```typescript
315 | enum ErrorCode {
316 | // Standard JSON-RPC error codes
317 | ParseError = -32700,
318 | InvalidRequest = -32600,
319 | MethodNotFound = -32601,
320 | InvalidParams = -32602,
321 | InternalError = -32603
322 | }
323 | ```
324 |
325 | SDKs and applications can define their own error codes above -32000.
326 |
327 | Errors are propagated through:
328 |
329 | * Error responses to requests
330 | * Error events on transports
331 | * Protocol-level error handlers
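
For example, a failed request produces a JSON-RPC error response in place of a result. A minimal sketch, reusing the `MethodNotFound` code from the enum above (the method name `resources/write` is made up for illustration):

```python
METHOD_NOT_FOUND = -32601  # standard JSON-RPC code from the enum above

# Error response to request id 2: "error" replaces "result".
error_response = {
    "jsonrpc": "2.0",
    "id": 2,
    "error": {
        "code": METHOD_NOT_FOUND,
        "message": "Method not found: resources/write",
        "data": {"method": "resources/write"},  # optional extra context
    },
}
```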
332 |
333 | ## Implementation example
334 |
335 | Here's a basic example of implementing an MCP server:
336 |
337 | <Tabs>
338 | <Tab title="TypeScript">
339 | ```typescript
340 | import { Server } from "@modelcontextprotocol/sdk/server/index.js";
    | import { ListResourcesRequestSchema } from "@modelcontextprotocol/sdk/types.js";
341 | import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
342 |
343 | const server = new Server({
344 | name: "example-server",
345 | version: "1.0.0"
346 | }, {
347 | capabilities: {
348 | resources: {}
349 | }
350 | });
351 |
352 | // Handle requests
353 | server.setRequestHandler(ListResourcesRequestSchema, async () => {
354 | return {
355 | resources: [
356 | {
357 | uri: "example://resource",
358 | name: "Example Resource"
359 | }
360 | ]
361 | };
362 | });
363 |
364 | // Connect transport
365 | const transport = new StdioServerTransport();
366 | await server.connect(transport);
367 | ```
368 | </Tab>
369 |
370 | <Tab title="Python">
371 | ```python
372 | import asyncio
373 | import mcp.types as types
374 | from mcp.server import Server
375 | from mcp.server.stdio import stdio_server
376 |
377 | app = Server("example-server")
378 |
379 | @app.list_resources()
380 | async def list_resources() -> list[types.Resource]:
381 | return [
382 | types.Resource(
383 | uri="example://resource",
384 | name="Example Resource"
385 | )
386 | ]
387 |
388 | async def main():
389 | async with stdio_server() as streams:
390 | await app.run(
391 | streams[0],
392 | streams[1],
393 | app.create_initialization_options()
394 | )
395 |
396 | if __name__ == "__main__":
397 |     asyncio.run(main())
398 | ```
399 | </Tab>
400 | </Tabs>
401 |
402 | ## Best practices
403 |
404 | ### Transport selection
405 |
406 | 1. **Local communication**
407 | * Use stdio transport for local processes
408 | * Efficient for same-machine communication
409 | * Simple process management
410 |
411 | 2. **Remote communication**
412 | * Use SSE for scenarios requiring HTTP compatibility
413 | * Consider security implications including authentication and authorization
414 |
415 | ### Message handling
416 |
417 | 1. **Request processing**
418 | * Validate inputs thoroughly
419 | * Use type-safe schemas
420 | * Handle errors gracefully
421 | * Implement timeouts
422 |
423 | 2. **Progress reporting**
424 | * Use progress tokens for long operations
425 | * Report progress incrementally
426 | * Include total progress when known
427 |
428 | 3. **Error management**
429 | * Use appropriate error codes
430 | * Include helpful error messages
431 | * Clean up resources on errors
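
The progress-token pattern from point 2 can be sketched as raw messages: the requester attaches a token under `_meta`, and the receiver emits `notifications/progress` updates referencing it (per the MCP specification; the token value, URI, and totals here are illustrative):

```python
# Request opts in to progress updates via a token in _meta.
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "resources/read",
    "params": {
        "uri": "file:///logs/app.log",
        "_meta": {"progressToken": "op-123"},
    },
}

# The receiver reports progress against that token as work proceeds.
progress_notification = {
    "jsonrpc": "2.0",
    "method": "notifications/progress",
    "params": {"progressToken": "op-123", "progress": 50, "total": 100},
}
```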
432 |
433 | ## Security considerations
434 |
435 | 1. **Transport security**
436 | * Use TLS for remote connections
437 | * Validate connection origins
438 | * Implement authentication when needed
439 |
440 | 2. **Message validation**
441 | * Validate all incoming messages
442 | * Sanitize inputs
443 | * Check message size limits
444 | * Verify JSON-RPC format
445 |
446 | 3. **Resource protection**
447 | * Implement access controls
448 | * Validate resource paths
449 | * Monitor resource usage
450 | * Rate limit requests
451 |
452 | 4. **Error handling**
453 | * Don't leak sensitive information
454 | * Log security-relevant errors
455 | * Implement proper cleanup
456 | * Handle DoS scenarios
457 |
458 | ## Debugging and monitoring
459 |
460 | 1. **Logging**
461 | * Log protocol events
462 | * Track message flow
463 | * Monitor performance
464 | * Record errors
465 |
466 | 2. **Diagnostics**
467 | * Implement health checks
468 | * Monitor connection state
469 | * Track resource usage
470 | * Profile performance
471 |
472 | 3. **Testing**
473 | * Test different transports
474 | * Verify error handling
475 | * Check edge cases
476 | * Load test servers
477 |
478 |
479 | # Prompts
480 |
481 | Create reusable prompt templates and workflows
482 |
483 | Prompts enable servers to define reusable prompt templates and workflows that clients can easily surface to users and LLMs. They provide a powerful way to standardize and share common LLM interactions.
484 |
485 | <Note>
486 | Prompts are designed to be **user-controlled**, meaning they are exposed from servers to clients with the intention of the user being able to explicitly select them for use.
487 | </Note>
488 |
489 | ## Overview
490 |
491 | Prompts in MCP are predefined templates that can:
492 |
493 | * Accept dynamic arguments
494 | * Include context from resources
495 | * Chain multiple interactions
496 | * Guide specific workflows
497 | * Surface as UI elements (like slash commands)
498 |
499 | ## Prompt structure
500 |
501 | Each prompt is defined with:
502 |
503 | ```typescript
504 | {
505 | name: string; // Unique identifier for the prompt
506 | description?: string; // Human-readable description
507 | arguments?: [ // Optional list of arguments
508 | {
509 | name: string; // Argument identifier
510 | description?: string; // Argument description
511 | required?: boolean; // Whether argument is required
512 | }
513 | ]
514 | }
515 | ```
516 |
517 | ## Discovering prompts
518 |
519 | Clients can discover available prompts through the `prompts/list` endpoint:
520 |
521 | ```typescript
522 | // Request
523 | {
524 | method: "prompts/list"
525 | }
526 |
527 | // Response
528 | {
529 | prompts: [
530 | {
531 | name: "analyze-code",
532 | description: "Analyze code for potential improvements",
533 | arguments: [
534 | {
535 | name: "language",
536 | description: "Programming language",
537 | required: true
538 | }
539 | ]
540 | }
541 | ]
542 | }
543 | ```
544 |
545 | ## Using prompts
546 |
547 | To use a prompt, clients make a `prompts/get` request:
548 |
549 | ````typescript
550 | // Request
551 | {
552 | method: "prompts/get",
553 | params: {
554 | name: "analyze-code",
555 | arguments: {
556 | language: "python"
557 | }
558 | }
559 | }
560 |
561 | // Response
562 | {
563 | description: "Analyze Python code for potential improvements",
564 | messages: [
565 | {
566 | role: "user",
567 | content: {
568 | type: "text",
569 | text: "Please analyze the following Python code for potential improvements:\n\n```python\ndef calculate_sum(numbers):\n total = 0\n for num in numbers:\n total = total + num\n return total\n\nresult = calculate_sum([1, 2, 3, 4, 5])\nprint(result)\n```"
570 | }
571 | }
572 | ]
573 | }
574 | ````
575 |
576 | ## Dynamic prompts
577 |
578 | Prompts can be dynamic and include:
579 |
580 | ### Embedded resource context
581 |
582 | ```json
583 | {
584 | "name": "analyze-project",
585 | "description": "Analyze project logs and code",
586 | "arguments": [
587 | {
588 | "name": "timeframe",
589 | "description": "Time period to analyze logs",
590 | "required": true
591 | },
592 | {
593 | "name": "fileUri",
594 | "description": "URI of code file to review",
595 | "required": true
596 | }
597 | ]
598 | }
599 | ```
600 |
601 | When handling the `prompts/get` request:
602 |
603 | ```json
604 | {
605 | "messages": [
606 | {
607 | "role": "user",
608 | "content": {
609 | "type": "text",
610 | "text": "Analyze these system logs and the code file for any issues:"
611 | }
612 | },
613 | {
614 | "role": "user",
615 | "content": {
616 | "type": "resource",
617 | "resource": {
618 | "uri": "logs://recent?timeframe=1h",
619 | "text": "[2024-03-14 15:32:11] ERROR: Connection timeout in network.py:127\n[2024-03-14 15:32:15] WARN: Retrying connection (attempt 2/3)\n[2024-03-14 15:32:20] ERROR: Max retries exceeded",
620 | "mimeType": "text/plain"
621 | }
622 | }
623 | },
624 | {
625 | "role": "user",
626 | "content": {
627 | "type": "resource",
628 | "resource": {
629 | "uri": "file:///path/to/code.py",
630 | "text": "def connect_to_service(timeout=30):\n retries = 3\n for attempt in range(retries):\n try:\n return establish_connection(timeout)\n except TimeoutError:\n if attempt == retries - 1:\n raise\n time.sleep(5)\n\ndef establish_connection(timeout):\n # Connection implementation\n pass",
631 | "mimeType": "text/x-python"
632 | }
633 | }
634 | }
635 | ]
636 | }
637 | ```
638 |
639 | ### Multi-step workflows
640 |
641 | ```typescript
642 | const debugWorkflow = {
643 | name: "debug-error",
644 | async getMessages(error: string) {
645 | return [
646 | {
647 | role: "user",
648 | content: {
649 | type: "text",
650 | text: `Here's an error I'm seeing: ${error}`
651 | }
652 | },
653 | {
654 | role: "assistant",
655 | content: {
656 | type: "text",
657 | text: "I'll help analyze this error. What have you tried so far?"
658 | }
659 | },
660 | {
661 | role: "user",
662 | content: {
663 | type: "text",
664 | text: "I've tried restarting the service, but the error persists."
665 | }
666 | }
667 | ];
668 | }
669 | };
670 | ```
671 |
672 | ## Example implementation
673 |
674 | Here's a complete example of implementing prompts in an MCP server:
675 |
676 | <Tabs>
677 | <Tab title="TypeScript">
678 | ```typescript
679 | import { Server } from "@modelcontextprotocol/sdk/server/index.js";
680 | import {
681 | ListPromptsRequestSchema,
682 | GetPromptRequestSchema
683 | } from "@modelcontextprotocol/sdk/types.js";
684 |
685 | const PROMPTS = {
686 | "git-commit": {
687 | name: "git-commit",
688 | description: "Generate a Git commit message",
689 | arguments: [
690 | {
691 | name: "changes",
692 | description: "Git diff or description of changes",
693 | required: true
694 | }
695 | ]
696 | },
697 | "explain-code": {
698 | name: "explain-code",
699 | description: "Explain how code works",
700 | arguments: [
701 | {
702 | name: "code",
703 | description: "Code to explain",
704 | required: true
705 | },
706 | {
707 | name: "language",
708 | description: "Programming language",
709 | required: false
710 | }
711 | ]
712 | }
713 | };
714 |
715 | const server = new Server({
716 | name: "example-prompts-server",
717 | version: "1.0.0"
718 | }, {
719 | capabilities: {
720 | prompts: {}
721 | }
722 | });
723 |
724 | // List available prompts
725 | server.setRequestHandler(ListPromptsRequestSchema, async () => {
726 | return {
727 | prompts: Object.values(PROMPTS)
728 | };
729 | });
730 |
731 | // Get specific prompt
732 | server.setRequestHandler(GetPromptRequestSchema, async (request) => {
733 | const prompt = PROMPTS[request.params.name];
734 | if (!prompt) {
735 | throw new Error(`Prompt not found: ${request.params.name}`);
736 | }
737 |
738 | if (request.params.name === "git-commit") {
739 | return {
740 | messages: [
741 | {
742 | role: "user",
743 | content: {
744 | type: "text",
745 | text: `Generate a concise but descriptive commit message for these changes:\n\n${request.params.arguments?.changes}`
746 | }
747 | }
748 | ]
749 | };
750 | }
751 |
752 | if (request.params.name === "explain-code") {
753 | const language = request.params.arguments?.language || "Unknown";
754 | return {
755 | messages: [
756 | {
757 | role: "user",
758 | content: {
759 | type: "text",
760 | text: `Explain how this ${language} code works:\n\n${request.params.arguments?.code}`
761 | }
762 | }
763 | ]
764 | };
765 | }
766 |
767 | throw new Error("Prompt implementation not found");
768 | });
769 | ```
770 | </Tab>
771 |
772 | <Tab title="Python">
773 | ```python
774 | from mcp.server import Server
775 | import mcp.types as types
776 |
777 | # Define available prompts
778 | PROMPTS = {
779 | "git-commit": types.Prompt(
780 | name="git-commit",
781 | description="Generate a Git commit message",
782 | arguments=[
783 | types.PromptArgument(
784 | name="changes",
785 | description="Git diff or description of changes",
786 | required=True
787 | )
788 | ],
789 | ),
790 | "explain-code": types.Prompt(
791 | name="explain-code",
792 | description="Explain how code works",
793 | arguments=[
794 | types.PromptArgument(
795 | name="code",
796 | description="Code to explain",
797 | required=True
798 | ),
799 | types.PromptArgument(
800 | name="language",
801 | description="Programming language",
802 | required=False
803 | )
804 | ],
805 | )
806 | }
807 |
808 | # Initialize server
809 | app = Server("example-prompts-server")
810 |
811 | @app.list_prompts()
812 | async def list_prompts() -> list[types.Prompt]:
813 | return list(PROMPTS.values())
814 |
815 | @app.get_prompt()
816 | async def get_prompt(
817 | name: str, arguments: dict[str, str] | None = None
818 | ) -> types.GetPromptResult:
819 | if name not in PROMPTS:
820 | raise ValueError(f"Prompt not found: {name}")
821 |
822 | if name == "git-commit":
823 | changes = arguments.get("changes") if arguments else ""
824 | return types.GetPromptResult(
825 | messages=[
826 | types.PromptMessage(
827 | role="user",
828 | content=types.TextContent(
829 | type="text",
830 | text=f"Generate a concise but descriptive commit message "
831 | f"for these changes:\n\n{changes}"
832 | )
833 | )
834 | ]
835 | )
836 |
837 | if name == "explain-code":
838 | code = arguments.get("code") if arguments else ""
839 | language = arguments.get("language", "Unknown") if arguments else "Unknown"
840 | return types.GetPromptResult(
841 | messages=[
842 | types.PromptMessage(
843 | role="user",
844 | content=types.TextContent(
845 | type="text",
846 | text=f"Explain how this {language} code works:\n\n{code}"
847 | )
848 | )
849 | ]
850 | )
851 |
852 | raise ValueError("Prompt implementation not found")
853 | ```
854 | </Tab>
855 | </Tabs>
856 |
857 | ## Best practices
858 |
859 | When implementing prompts:
860 |
861 | 1. Use clear, descriptive prompt names
862 | 2. Provide detailed descriptions for prompts and arguments
863 | 3. Validate all required arguments
864 | 4. Handle missing arguments gracefully
865 | 5. Consider versioning for prompt templates
866 | 6. Cache dynamic content when appropriate
867 | 7. Implement error handling
868 | 8. Document expected argument formats
869 | 9. Consider prompt composability
870 | 10. Test prompts with various inputs
871 |
872 | ## UI integration
873 |
874 | Prompts can be surfaced in client UIs as:
875 |
876 | * Slash commands
877 | * Quick actions
878 | * Context menu items
879 | * Command palette entries
880 | * Guided workflows
881 | * Interactive forms
882 |
883 | ## Updates and changes
884 |
885 | Servers can notify clients about prompt changes:
886 |
887 | 1. Server capability: `prompts.listChanged`
888 | 2. Notification: `notifications/prompts/list_changed`
889 | 3. Client re-fetches prompt list
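
As a sketch, the notification in step 2 is a plain one-way message (no `id`, so no response is expected), and the client reacts by re-issuing `prompts/list`:

```python
# Server-side notification that the prompt list has changed.
list_changed = {
    "jsonrpc": "2.0",
    "method": "notifications/prompts/list_changed",
}

# Client-side reaction: fetch the fresh prompt list.
refetch = {"jsonrpc": "2.0", "id": 7, "method": "prompts/list"}
```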
890 |
891 | ## Security considerations
892 |
893 | When implementing prompts:
894 |
895 | * Validate all arguments
896 | * Sanitize user input
897 | * Consider rate limiting
898 | * Implement access controls
899 | * Audit prompt usage
900 | * Handle sensitive data appropriately
901 | * Validate generated content
902 | * Implement timeouts
903 | * Consider prompt injection risks
904 | * Document security requirements
905 |
906 |
907 | # Resources
908 |
909 | Expose data and content from your servers to LLMs
910 |
911 | Resources are a core primitive in the Model Context Protocol (MCP) that allow servers to expose data and content that can be read by clients and used as context for LLM interactions.
912 |
913 | <Note>
914 | Resources are designed to be **application-controlled**, meaning that the client application can decide how and when they should be used.
915 | Different MCP clients may handle resources differently. For example:
916 |
917 | * Claude Desktop currently requires users to explicitly select resources before they can be used
918 | * Other clients might automatically select resources based on heuristics
919 | * Some implementations may even allow the AI model itself to determine which resources to use
920 |
921 | Server authors should be prepared to handle any of these interaction patterns when implementing resource support. In order to expose data to models automatically, server authors should use a **model-controlled** primitive such as [Tools](./tools).
922 | </Note>
923 |
924 | ## Overview
925 |
926 | Resources represent any kind of data that an MCP server wants to make available to clients. This can include:
927 |
928 | * File contents
929 | * Database records
930 | * API responses
931 | * Live system data
932 | * Screenshots and images
933 | * Log files
934 | * And more
935 |
936 | Each resource is identified by a unique URI and can contain either text or binary data.
937 |
938 | ## Resource URIs
939 |
940 | Resources are identified using URIs that follow this format:
941 |
942 | ```
943 | [protocol]://[host]/[path]
944 | ```
945 |
946 | For example:
947 |
948 | * `file:///home/user/documents/report.pdf`
949 | * `postgres://database/customers/schema`
950 | * `screen://localhost/display1`
951 |
952 | The protocol and path structure is defined by the MCP server implementation. Servers can define their own custom URI schemes.
953 |
954 | ## Resource types
955 |
956 | Resources can contain two types of content:
957 |
958 | ### Text resources
959 |
960 | Text resources contain UTF-8 encoded text data. These are suitable for:
961 |
962 | * Source code
963 | * Configuration files
964 | * Log files
965 | * JSON/XML data
966 | * Plain text
967 |
968 | ### Binary resources
969 |
970 | Binary resources contain raw binary data encoded in base64. These are suitable for:
971 |
972 | * Images
973 | * PDFs
974 | * Audio files
975 | * Video files
976 | * Other non-text formats
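
For instance, encoding a byte string into the `blob` field of a resource content entry might look like this (the URI and MIME type are illustrative; the sample bytes are the start of a PNG header):

```python
import base64

raw_bytes = b"\x89PNG\r\n\x1a\n"  # first bytes of a PNG file, as sample data

# Binary resource contents carry base64 text in "blob" instead of "text".
content = {
    "uri": "screen://localhost/display1",
    "mimeType": "image/png",
    "blob": base64.b64encode(raw_bytes).decode("ascii"),
}
```

The client decodes `blob` back to the original bytes with `base64.b64decode`.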
977 |
978 | ## Resource discovery
979 |
980 | Clients can discover available resources through two main methods:
981 |
982 | ### Direct resources
983 |
984 | Servers expose a list of concrete resources via the `resources/list` endpoint. Each resource includes:
985 |
986 | ```typescript
987 | {
988 | uri: string; // Unique identifier for the resource
989 | name: string; // Human-readable name
990 | description?: string; // Optional description
991 | mimeType?: string; // Optional MIME type
992 | }
993 | ```
994 |
995 | ### Resource templates
996 |
997 | For dynamic resources, servers can expose [URI templates](https://datatracker.ietf.org/doc/html/rfc6570) that clients can use to construct valid resource URIs:
998 |
999 | ```typescript
1000 | {
1001 | uriTemplate: string; // URI template following RFC 6570
1002 | name: string; // Human-readable name for this type
1003 | description?: string; // Optional description
1004 | mimeType?: string; // Optional MIME type for all matching resources
1005 | }
1006 | ```
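
As an illustration, expanding a simple template into a concrete URI might look like this (the template and variable name are made up; a full implementation would follow RFC 6570 rather than plain `str.format`):

```python
# Hypothetical template a server might advertise.
template = {
    "uriTemplate": "file:///logs/{date}.log",
    "name": "Daily log file",
    "mimeType": "text/plain",
}

# Simple {var} substitution covers RFC 6570 level 1 templates.
uri = template["uriTemplate"].format(date="2024-03-14")
```

The resulting `uri` can then be passed to a `resources/read` request like any concrete resource URI.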
1007 |
1008 | ## Reading resources
1009 |
1010 | To read a resource, clients make a `resources/read` request with the resource URI.
1011 |
1012 | The server responds with a list of resource contents:
1013 |
1014 | ```typescript
1015 | {
1016 | contents: [
1017 | {
1018 | uri: string; // The URI of the resource
1019 | mimeType?: string; // Optional MIME type
1020 |
1021 | // One of:
1022 | text?: string; // For text resources
1023 | blob?: string; // For binary resources (base64 encoded)
1024 | }
1025 | ]
1026 | }
1027 | ```
1028 |
1029 | <Tip>
1030 | Servers may return multiple resources in response to one `resources/read` request. This could be used, for example, to return a list of files inside a directory when the directory is read.
1031 | </Tip>
1032 |
1033 | ## Resource updates
1034 |
1035 | MCP supports real-time updates for resources through two mechanisms:
1036 |
1037 | ### List changes
1038 |
1039 | Servers can notify clients when their list of available resources changes via the `notifications/resources/list_changed` notification.
1040 |
1041 | ### Content changes
1042 |
1043 | Clients can subscribe to updates for specific resources:
1044 |
1045 | 1. Client sends `resources/subscribe` with resource URI
1046 | 2. Server sends `notifications/resources/updated` when the resource changes
1047 | 3. Client can fetch latest content with `resources/read`
1048 | 4. Client can unsubscribe with `resources/unsubscribe`
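For example, after a client subscribes to `file:///logs/app.log`, the server signals each change with a notification like:

```json
{
  "jsonrpc": "2.0",
  "method": "notifications/resources/updated",
  "params": {
    "uri": "file:///logs/app.log"
  }
}
```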
1049 |
1050 | ## Example implementation
1051 |
1052 | Here's a simple example of implementing resource support in an MCP server:
1053 |
1054 | <Tabs>
1055 | <Tab title="TypeScript">
1056 | ```typescript
1057 | const server = new Server({
1058 | name: "example-server",
1059 | version: "1.0.0"
1060 | }, {
1061 | capabilities: {
1062 | resources: {}
1063 | }
1064 | });
1065 |
1066 | // List available resources
1067 | server.setRequestHandler(ListResourcesRequestSchema, async () => {
1068 | return {
1069 | resources: [
1070 | {
1071 | uri: "file:///logs/app.log",
1072 | name: "Application Logs",
1073 | mimeType: "text/plain"
1074 | }
1075 | ]
1076 | };
1077 | });
1078 |
1079 | // Read resource contents
1080 | server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
1081 | const uri = request.params.uri;
1082 |
1083 | if (uri === "file:///logs/app.log") {
1084 | const logContents = await readLogFile();
1085 | return {
1086 | contents: [
1087 | {
1088 | uri,
1089 | mimeType: "text/plain",
1090 | text: logContents
1091 | }
1092 | ]
1093 | };
1094 | }
1095 |
1096 | throw new Error("Resource not found");
1097 | });
1098 | ```
1099 | </Tab>
1100 |
1101 | <Tab title="Python">
1102 | ```python
1103 | app = Server("example-server")
1104 |
1105 | @app.list_resources()
1106 | async def list_resources() -> list[types.Resource]:
1107 | return [
1108 | types.Resource(
1109 | uri="file:///logs/app.log",
1110 | name="Application Logs",
1111 | mimeType="text/plain"
1112 | )
1113 | ]
1114 |
1115 | @app.read_resource()
1116 | async def read_resource(uri: AnyUrl) -> str:
1117 | if str(uri) == "file:///logs/app.log":
1118 | log_contents = await read_log_file()
1119 | return log_contents
1120 |
1121 | raise ValueError("Resource not found")
1122 |
1123 | # Start server
1124 | async with stdio_server() as streams:
1125 | await app.run(
1126 | streams[0],
1127 | streams[1],
1128 | app.create_initialization_options()
1129 | )
1130 | ```
1131 | </Tab>
1132 | </Tabs>
1133 |
1134 | ## Best practices
1135 |
1136 | When implementing resource support:
1137 |
1138 | 1. Use clear, descriptive resource names and URIs
1139 | 2. Include helpful descriptions to guide LLM understanding
1140 | 3. Set appropriate MIME types when known
1141 | 4. Implement resource templates for dynamic content
1142 | 5. Use subscriptions for frequently changing resources
1143 | 6. Handle errors gracefully with clear error messages
1144 | 7. Consider pagination for large resource lists
1145 | 8. Cache resource contents when appropriate
1146 | 9. Validate URIs before processing
1147 | 10. Document your custom URI schemes
1148 |
1149 | ## Security considerations
1150 |
1151 | When exposing resources:
1152 |
1153 | * Validate all resource URIs
1154 | * Implement appropriate access controls
1155 | * Sanitize file paths to prevent directory traversal
1156 | * Be cautious with binary data handling
1157 | * Consider rate limiting for resource reads
1158 | * Audit resource access
1159 | * Encrypt sensitive data in transit
1160 | * Validate MIME types
1161 | * Implement timeouts for long-running reads
1162 | * Handle resource cleanup appropriately
1163 |
1164 |
1165 | # Sampling
1166 |
1167 | Let your servers request completions from LLMs
1168 |
1169 | Sampling is a powerful MCP feature that allows servers to request LLM completions through the client, enabling sophisticated agentic behaviors while maintaining security and privacy.
1170 |
1171 | <Info>
1172 | This feature of MCP is not yet supported in the Claude Desktop client.
1173 | </Info>
1174 |
1175 | ## How sampling works
1176 |
1177 | The sampling flow follows these steps:
1178 |
1179 | 1. Server sends a `sampling/createMessage` request to the client
1180 | 2. Client reviews the request and can modify it
1181 | 3. Client samples from an LLM
1182 | 4. Client reviews the completion
1183 | 5. Client returns the result to the server
1184 |
1185 | This human-in-the-loop design ensures users maintain control over what the LLM sees and generates.
1186 |
1187 | ## Message format
1188 |
1189 | Sampling requests use a standardized message format:
1190 |
1191 | ```typescript
1192 | {
1193 | messages: [
1194 | {
1195 | role: "user" | "assistant",
1196 | content: {
1197 | type: "text" | "image",
1198 |
1199 | // For text:
1200 | text?: string,
1201 |
1202 | // For images:
1203 | data?: string, // base64 encoded
1204 | mimeType?: string
1205 | }
1206 | }
1207 | ],
1208 | modelPreferences?: {
1209 | hints?: [{
1210 | name?: string // Suggested model name/family
1211 | }],
1212 | costPriority?: number, // 0-1, importance of minimizing cost
1213 | speedPriority?: number, // 0-1, importance of low latency
1214 | intelligencePriority?: number // 0-1, importance of capabilities
1215 | },
1216 | systemPrompt?: string,
1217 | includeContext?: "none" | "thisServer" | "allServers",
1218 | temperature?: number,
1219 | maxTokens: number,
1220 | stopSequences?: string[],
1221 | metadata?: Record<string, unknown>
1222 | }
1223 | ```
1224 |
1225 | ## Request parameters
1226 |
1227 | ### Messages
1228 |
1229 | The `messages` array contains the conversation history to send to the LLM. Each message has:
1230 |
1231 | * `role`: Either "user" or "assistant"
1232 | * `content`: The message content, which can be:
1233 | * Text content with a `text` field
1234 | * Image content with `data` (base64) and `mimeType` fields
1235 |
1236 | ### Model preferences
1237 |
1238 | The `modelPreferences` object allows servers to specify their model selection preferences:
1239 |
1240 | * `hints`: Array of model name suggestions that clients can use to select an appropriate model:
1241 | * `name`: String that can match full or partial model names (e.g. "claude-3", "sonnet")
1242 | * Clients may map hints to equivalent models from different providers
1243 | * Multiple hints are evaluated in preference order
1244 |
1245 | * Priority values (0-1 normalized):
1246 | * `costPriority`: Importance of minimizing costs
1247 | * `speedPriority`: Importance of low latency response
1248 | * `intelligencePriority`: Importance of advanced model capabilities
1249 |
1250 | Clients make the final model selection based on these preferences and their available models.
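For example, a server that cares most about low latency, but also wants to keep costs down, might send preferences like these (the hint names are illustrative):

```typescript
// Hints are tried in order; priorities are normalized to 0-1.
const modelPreferences = {
  hints: [
    { name: "claude-3-sonnet" }, // prefer a Sonnet-class model
    { name: "claude" }           // otherwise any Claude-family model
  ],
  costPriority: 0.3,        // cost matters somewhat
  speedPriority: 0.8,       // low latency matters most
  intelligencePriority: 0.5 // moderate capability needs
};
```

The client is still free to map these hints onto whatever models it actually has available.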
1251 |
1252 | ### System prompt
1253 |
1254 | An optional `systemPrompt` field allows servers to request a specific system prompt. The client may modify or ignore this.
1255 |
1256 | ### Context inclusion
1257 |
1258 | The `includeContext` parameter specifies what MCP context to include:
1259 |
1260 | * `"none"`: No additional context
1261 | * `"thisServer"`: Include context from the requesting server
1262 | * `"allServers"`: Include context from all connected MCP servers
1263 |
1264 | The client controls what context is actually included.
1265 |
1266 | ### Sampling parameters
1267 |
1268 | Fine-tune the LLM sampling with:
1269 |
1270 | * `temperature`: Controls randomness (0.0 to 1.0)
1271 | * `maxTokens`: Maximum tokens to generate
1272 | * `stopSequences`: Array of sequences that stop generation
1273 | * `metadata`: Additional provider-specific parameters
1274 |
1275 | ## Response format
1276 |
1277 | The client returns a completion result:
1278 |
1279 | ```typescript
1280 | {
1281 | model: string, // Name of the model used
1282 | stopReason?: "endTurn" | "stopSequence" | "maxTokens" | string,
1283 | role: "user" | "assistant",
1284 | content: {
1285 | type: "text" | "image",
1286 | text?: string,
1287 | data?: string,
1288 | mimeType?: string
1289 | }
1290 | }
1291 | ```
1292 |
1293 | ## Example request
1294 |
1295 | Here's an example of requesting sampling from a client:
1296 |
1297 | ```json
1298 | {
1299 | "method": "sampling/createMessage",
1300 | "params": {
1301 | "messages": [
1302 | {
1303 | "role": "user",
1304 | "content": {
1305 | "type": "text",
1306 | "text": "What files are in the current directory?"
1307 | }
1308 | }
1309 | ],
1310 | "systemPrompt": "You are a helpful file system assistant.",
1311 | "includeContext": "thisServer",
1312 | "maxTokens": 100
1313 | }
1314 | }
1315 | ```
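A response to the request above might look like this (the model name and text are illustrative):

```json
{
  "model": "claude-3-5-sonnet-20241022",
  "stopReason": "endTurn",
  "role": "assistant",
  "content": {
    "type": "text",
    "text": "The current directory contains three files: app.log, config.json, and README.md."
  }
}
```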
1316 |
1317 | ## Best practices
1318 |
1319 | When implementing sampling:
1320 |
1321 | 1. Always provide clear, well-structured prompts
1322 | 2. Handle both text and image content appropriately
1323 | 3. Set reasonable token limits
1324 | 4. Include relevant context through `includeContext`
1325 | 5. Validate responses before using them
1326 | 6. Handle errors gracefully
1327 | 7. Consider rate limiting sampling requests
1328 | 8. Document expected sampling behavior
1329 | 9. Test with various model parameters
1330 | 10. Monitor sampling costs
1331 |
1332 | ## Human in the loop controls
1333 |
1334 | Sampling is designed with human oversight in mind:
1335 |
1336 | ### For prompts
1337 |
1338 | * Clients should show users the proposed prompt
1339 | * Users should be able to modify or reject prompts
1340 | * System prompts can be filtered or modified
1341 | * Context inclusion is controlled by the client
1342 |
1343 | ### For completions
1344 |
1345 | * Clients should show users the completion
1346 | * Users should be able to modify or reject completions
1347 | * Clients can filter or modify completions
1348 | * Users control which model is used
1349 |
1350 | ## Security considerations
1351 |
1352 | When implementing sampling:
1353 |
1354 | * Validate all message content
1355 | * Sanitize sensitive information
1356 | * Implement appropriate rate limits
1357 | * Monitor sampling usage
1358 | * Encrypt data in transit
1359 | * Handle user data privacy
1360 | * Audit sampling requests
1361 | * Control cost exposure
1362 | * Implement timeouts
1363 | * Handle model errors gracefully
1364 |
1365 | ## Common patterns
1366 |
1367 | ### Agentic workflows
1368 |
1369 | Sampling enables agentic patterns like:
1370 |
1371 | * Reading and analyzing resources
1372 | * Making decisions based on context
1373 | * Generating structured data
1374 | * Handling multi-step tasks
1375 | * Providing interactive assistance
1376 |
1377 | ### Context management
1378 |
1379 | Best practices for context:
1380 |
1381 | * Request minimal necessary context
1382 | * Structure context clearly
1383 | * Handle context size limits
1384 | * Update context as needed
1385 | * Clean up stale context
1386 |
1387 | ### Error handling
1388 |
1389 | Robust error handling should:
1390 |
1391 | * Catch sampling failures
1392 | * Handle timeout errors
1393 | * Manage rate limits
1394 | * Validate responses
1395 | * Provide fallback behaviors
1396 | * Log errors appropriately
1397 |
1398 | ## Limitations
1399 |
1400 | Be aware of these limitations:
1401 |
1402 | * Sampling depends on client capabilities
1403 | * Users control sampling behavior
1404 | * Context size has limits
1405 | * Rate limits may apply
1406 | * Costs should be considered
1407 | * Model availability varies
1408 | * Response times vary
1409 | * Not all content types are supported
1410 |
1411 |
1412 | # Tools
1413 |
1414 | Enable LLMs to perform actions through your server
1415 |
1416 | Tools are a powerful primitive in the Model Context Protocol (MCP) that enable servers to expose executable functionality to clients. Through tools, LLMs can interact with external systems, perform computations, and take actions in the real world.
1417 |
1418 | <Note>
1419 | Tools are designed to be **model-controlled**, meaning that tools are exposed from servers to clients with the intention of the AI model being able to automatically invoke them (with a human in the loop to grant approval).
1420 | </Note>
1421 |
1422 | ## Overview
1423 |
1424 | Tools in MCP allow servers to expose executable functions that can be invoked by clients and used by LLMs to perform actions. Key aspects of tools include:
1425 |
1426 | * **Discovery**: Clients can list available tools through the `tools/list` endpoint
1427 | * **Invocation**: Tools are called using the `tools/call` endpoint, where servers perform the requested operation and return results
1428 | * **Flexibility**: Tools can range from simple calculations to complex API interactions
1429 |
1430 | Like [resources](/docs/concepts/resources), tools are identified by unique names and can include descriptions to guide their usage. However, unlike resources, tools represent dynamic operations that can modify state or interact with external systems.
1431 |
1432 | ## Tool definition structure
1433 |
1434 | Each tool is defined with the following structure:
1435 |
1436 | ```typescript
1437 | {
1438 | name: string; // Unique identifier for the tool
1439 | description?: string; // Human-readable description
1440 | inputSchema: { // JSON Schema for the tool's parameters
1441 | type: "object",
1442 | properties: { ... } // Tool-specific parameters
1443 | }
1444 | }
1445 | ```
1446 |
1447 | ## Implementing tools
1448 |
1449 | Here's an example of implementing a basic tool in an MCP server:
1450 |
1451 | <Tabs>
1452 | <Tab title="TypeScript">
1453 | ```typescript
1454 | const server = new Server({
1455 | name: "example-server",
1456 | version: "1.0.0"
1457 | }, {
1458 | capabilities: {
1459 | tools: {}
1460 | }
1461 | });
1462 |
1463 | // Define available tools
1464 | server.setRequestHandler(ListToolsRequestSchema, async () => {
1465 | return {
1466 | tools: [{
1467 | name: "calculate_sum",
1468 | description: "Add two numbers together",
1469 | inputSchema: {
1470 | type: "object",
1471 | properties: {
1472 | a: { type: "number" },
1473 | b: { type: "number" }
1474 | },
1475 | required: ["a", "b"]
1476 | }
1477 | }]
1478 | };
1479 | });
1480 |
1481 | // Handle tool execution
1482 | server.setRequestHandler(CallToolRequestSchema, async (request) => {
1483 | if (request.params.name === "calculate_sum") {
1484 | const { a, b } = request.params.arguments;
1485 | return {
1486 | content: [{ type: "text", text: String(a + b) }]
1487 | };
1488 | }
1489 | throw new Error("Tool not found");
1490 | });
1491 | ```
1492 | </Tab>
1493 |
1494 | <Tab title="Python">
1495 | ```python
1496 | app = Server("example-server")
1497 |
1498 | @app.list_tools()
1499 | async def list_tools() -> list[types.Tool]:
1500 | return [
1501 | types.Tool(
1502 | name="calculate_sum",
1503 | description="Add two numbers together",
1504 | inputSchema={
1505 | "type": "object",
1506 | "properties": {
1507 | "a": {"type": "number"},
1508 | "b": {"type": "number"}
1509 | },
1510 | "required": ["a", "b"]
1511 | }
1512 | )
1513 | ]
1514 |
1515 | @app.call_tool()
1516 | async def call_tool(
1517 | name: str,
1518 | arguments: dict
1519 | ) -> list[types.TextContent | types.ImageContent | types.EmbeddedResource]:
1520 | if name == "calculate_sum":
1521 | a = arguments["a"]
1522 | b = arguments["b"]
1523 | result = a + b
1524 | return [types.TextContent(type="text", text=str(result))]
1525 | raise ValueError(f"Tool not found: {name}")
1526 | ```
1527 | </Tab>
1528 | </Tabs>
1529 |
1530 | ## Example tool patterns
1531 |
1532 | Here are some examples of types of tools that a server could provide:
1533 |
1534 | ### System operations
1535 |
1536 | Tools that interact with the local system:
1537 |
1538 | ```typescript
1539 | {
1540 | name: "execute_command",
1541 | description: "Run a shell command",
1542 | inputSchema: {
1543 | type: "object",
1544 | properties: {
1545 | command: { type: "string" },
1546 | args: { type: "array", items: { type: "string" } }
1547 | }
1548 | }
1549 | }
1550 | ```
1551 |
1552 | ### API integrations
1553 |
1554 | Tools that wrap external APIs:
1555 |
1556 | ```typescript
1557 | {
1558 | name: "github_create_issue",
1559 | description: "Create a GitHub issue",
1560 | inputSchema: {
1561 | type: "object",
1562 | properties: {
1563 | title: { type: "string" },
1564 | body: { type: "string" },
1565 | labels: { type: "array", items: { type: "string" } }
1566 | }
1567 | }
1568 | }
1569 | ```
1570 |
1571 | ### Data processing
1572 |
1573 | Tools that transform or analyze data:
1574 |
1575 | ```typescript
1576 | {
1577 | name: "analyze_csv",
1578 | description: "Analyze a CSV file",
1579 | inputSchema: {
1580 | type: "object",
1581 | properties: {
1582 | filepath: { type: "string" },
1583 | operations: {
1584 | type: "array",
1585 | items: {
1586 | enum: ["sum", "average", "count"]
1587 | }
1588 | }
1589 | }
1590 | }
1591 | }
1592 | ```
1593 |
1594 | ## Best practices
1595 |
1596 | When implementing tools:
1597 |
1598 | 1. Provide clear, descriptive names and descriptions
1599 | 2. Use detailed JSON Schema definitions for parameters
1600 | 3. Include examples in tool descriptions to demonstrate how the model should use them
1601 | 4. Implement proper error handling and validation
1602 | 5. Use progress reporting for long operations
1603 | 6. Keep tool operations focused and atomic
1604 | 7. Document expected return value structures
1605 | 8. Implement proper timeouts
1606 | 9. Consider rate limiting for resource-intensive operations
1607 | 10. Log tool usage for debugging and monitoring
1608 |
1609 | ## Security considerations
1610 |
1611 | When exposing tools:
1612 |
1613 | ### Input validation
1614 |
1615 | * Validate all parameters against the schema
1616 | * Sanitize file paths and system commands
1617 | * Validate URLs and external identifiers
1618 | * Check parameter sizes and ranges
1619 | * Prevent command injection
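As a sketch, the `calculate_sum` tool from earlier could guard its inputs with an explicit check before doing any work (`validateSumArgs` is a hypothetical helper standing in for full JSON Schema validation):

```typescript
// Reject anything that does not match the declared inputSchema:
// an object with finite numeric properties `a` and `b`.
function validateSumArgs(args: unknown): { a: number; b: number } {
  if (typeof args !== "object" || args === null) {
    throw new Error("Arguments must be an object");
  }
  const { a, b } = args as Record<string, unknown>;
  if (typeof a !== "number" || typeof b !== "number") {
    throw new Error("Parameters 'a' and 'b' must be numbers");
  }
  if (!Number.isFinite(a) || !Number.isFinite(b)) {
    throw new Error("Parameters 'a' and 'b' must be finite");
  }
  return { a, b };
}
```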
1620 |
1621 | ### Access control
1622 |
1623 | * Implement authentication where needed
1624 | * Use appropriate authorization checks
1625 | * Audit tool usage
1626 | * Rate limit requests
1627 | * Monitor for abuse
1628 |
1629 | ### Error handling
1630 |
1631 | * Don't expose internal errors to clients
1632 | * Log security-relevant errors
1633 | * Handle timeouts appropriately
1634 | * Clean up resources after errors
1635 | * Validate return values
1636 |
1637 | ## Tool discovery and updates
1638 |
1639 | MCP supports dynamic tool discovery:
1640 |
1641 | 1. Clients can list available tools at any time
1642 | 2. Servers can notify clients when tools change using `notifications/tools/list_changed`
1643 | 3. Tools can be added or removed during runtime
1644 | 4. Tool definitions can be updated (though this should be done carefully)
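The change notification itself is minimal:

```json
{
  "jsonrpc": "2.0",
  "method": "notifications/tools/list_changed"
}
```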
1645 |
1646 | ## Error handling
1647 |
1648 | Tool errors should be reported within the result object, not as MCP protocol-level errors. This allows the LLM to see and potentially handle the error. When a tool encounters an error:
1649 |
1650 | 1. Set `isError` to `true` in the result
1651 | 2. Include error details in the `content` array
1652 |
1653 | Here's an example of proper error handling for tools:
1654 |
1655 | <Tabs>
1656 | <Tab title="TypeScript">
1657 | ```typescript
1658 | try {
1659 | // Tool operation
1660 | const result = performOperation();
1661 | return {
1662 | content: [
1663 | {
1664 | type: "text",
1665 | text: `Operation successful: ${result}`
1666 | }
1667 | ]
1668 | };
1669 | } catch (error) {
1670 | return {
1671 | isError: true,
1672 | content: [
1673 | {
1674 | type: "text",
1675 | text: `Error: ${error.message}`
1676 | }
1677 | ]
1678 | };
1679 | }
1680 | ```
1681 | </Tab>
1682 |
1683 | <Tab title="Python">
1684 | ```python
1685 | try:
1686 | # Tool operation
1687 | result = perform_operation()
1688 | return types.CallToolResult(
1689 | content=[
1690 | types.TextContent(
1691 | type="text",
1692 | text=f"Operation successful: {result}"
1693 | )
1694 | ]
1695 | )
1696 | except Exception as error:
1697 | return types.CallToolResult(
1698 | isError=True,
1699 | content=[
1700 | types.TextContent(
1701 | type="text",
1702 | text=f"Error: {str(error)}"
1703 | )
1704 | ]
1705 | )
1706 | ```
1707 | </Tab>
1708 | </Tabs>
1709 |
1710 | This approach allows the LLM to see that an error occurred and potentially take corrective action or request human intervention.
1711 |
1712 | ## Testing tools
1713 |
1714 | A comprehensive testing strategy for MCP tools should cover:
1715 |
1716 | * **Functional testing**: Verify tools execute correctly with valid inputs and handle invalid inputs appropriately
1717 | * **Integration testing**: Test tool interaction with external systems using both real and mocked dependencies
1718 | * **Security testing**: Validate authentication, authorization, input sanitization, and rate limiting
1719 | * **Performance testing**: Check behavior under load, timeout handling, and resource cleanup
1720 | * **Error handling**: Ensure tools properly report errors through the MCP protocol and clean up resources
1721 |
1722 |
1723 | # Transports
1724 |
1725 | Learn about MCP's communication mechanisms
1726 |
1727 | Transports in the Model Context Protocol (MCP) provide the foundation for communication between clients and servers. A transport handles the underlying mechanics of how messages are sent and received.
1728 |
1729 | ## Message Format
1730 |
1731 | MCP uses [JSON-RPC](https://www.jsonrpc.org/) 2.0 as its wire format. The transport layer is responsible for converting MCP protocol messages into JSON-RPC format for transmission and converting received JSON-RPC messages back into MCP protocol messages.
1732 |
1733 | There are three types of JSON-RPC messages used:
1734 |
1735 | ### Requests
1736 |
1737 | ```typescript
1738 | {
1739 | jsonrpc: "2.0",
1740 | id: number | string,
1741 | method: string,
1742 | params?: object
1743 | }
1744 | ```
1745 |
1746 | ### Responses
1747 |
1748 | ```typescript
1749 | {
1750 | jsonrpc: "2.0",
1751 | id: number | string,
1752 | result?: object,
1753 | error?: {
1754 | code: number,
1755 | message: string,
1756 | data?: unknown
1757 | }
1758 | }
1759 | ```
1760 |
1761 | ### Notifications
1762 |
1763 | ```typescript
1764 | {
1765 | jsonrpc: "2.0",
1766 | method: string,
1767 | params?: object
1768 | }
1769 | ```
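For example, a `tools/list` request and its reply are correlated by `id` (the payload shown is illustrative):

```typescript
// The response reuses the request's id; notifications have no id at all.
const request = { jsonrpc: "2.0", id: 1, method: "tools/list" };
const response = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    tools: [{ name: "calculate_sum", inputSchema: { type: "object" } }]
  }
};
```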
1770 |
1771 | ## Built-in Transport Types
1772 |
1773 | MCP includes two standard transport implementations:
1774 |
1775 | ### Standard Input/Output (stdio)
1776 |
1777 | The stdio transport enables communication through standard input and output streams. This is particularly useful for local integrations and command-line tools.
1778 |
1779 | Use stdio when:
1780 |
1781 | * Building command-line tools
1782 | * Implementing local integrations
1783 | * Needing simple process communication
1784 | * Working with shell scripts
1785 |
1786 | <Tabs>
1787 | <Tab title="TypeScript (Server)">
1788 | ```typescript
1789 | const server = new Server({
1790 | name: "example-server",
1791 | version: "1.0.0"
1792 | }, {
1793 | capabilities: {}
1794 | });
1795 |
1796 | const transport = new StdioServerTransport();
1797 | await server.connect(transport);
1798 | ```
1799 | </Tab>
1800 |
1801 | <Tab title="TypeScript (Client)">
1802 | ```typescript
1803 | const client = new Client({
1804 | name: "example-client",
1805 | version: "1.0.0"
1806 | }, {
1807 | capabilities: {}
1808 | });
1809 |
1810 | const transport = new StdioClientTransport({
1811 | command: "./server",
1812 | args: ["--option", "value"]
1813 | });
1814 | await client.connect(transport);
1815 | ```
1816 | </Tab>
1817 |
1818 | <Tab title="Python (Server)">
1819 | ```python
1820 | app = Server("example-server")
1821 |
1822 | async with stdio_server() as streams:
1823 | await app.run(
1824 | streams[0],
1825 | streams[1],
1826 | app.create_initialization_options()
1827 | )
1828 | ```
1829 | </Tab>
1830 |
1831 | <Tab title="Python (Client)">
1832 | ```python
1833 | params = StdioServerParameters(
1834 | command="./server",
1835 | args=["--option", "value"]
1836 | )
1837 |
1838 | async with stdio_client(params) as streams:
1839 | async with ClientSession(streams[0], streams[1]) as session:
1840 | await session.initialize()
1841 | ```
1842 | </Tab>
1843 | </Tabs>
1844 |
1845 | ### Server-Sent Events (SSE)
1846 |
1847 | SSE transport enables server-to-client streaming with HTTP POST requests for client-to-server communication.
1848 |
1849 | Use SSE when:
1850 |
1851 | * Only server-to-client streaming is needed
1852 | * Working with restricted networks
1853 | * Implementing simple updates
1854 |
1855 | <Tabs>
1856 | <Tab title="TypeScript (Server)">
1857 | ```typescript
1858 | const server = new Server({
1859 | name: "example-server",
1860 | version: "1.0.0"
1861 | }, {
1862 | capabilities: {}
1863 | });
1864 |
1865 | const transport = new SSEServerTransport("/message", response);
1866 | await server.connect(transport);
1867 | ```
1868 | </Tab>
1869 |
1870 | <Tab title="TypeScript (Client)">
1871 | ```typescript
1872 | const client = new Client({
1873 | name: "example-client",
1874 | version: "1.0.0"
1875 | }, {
1876 | capabilities: {}
1877 | });
1878 |
1879 | const transport = new SSEClientTransport(
1880 | new URL("http://localhost:3000/sse")
1881 | );
1882 | await client.connect(transport);
1883 | ```
1884 | </Tab>
1885 |
1886 | <Tab title="Python (Server)">
1887 | ```python
1888 | from mcp.server.sse import SseServerTransport
1889 | from starlette.applications import Starlette
1890 | from starlette.routing import Route
1891 |
1892 | app = Server("example-server")
1893 | sse = SseServerTransport("/messages")
1894 |
1895 | async def handle_sse(scope, receive, send):
1896 | async with sse.connect_sse(scope, receive, send) as streams:
1897 | await app.run(streams[0], streams[1], app.create_initialization_options())
1898 |
1899 | async def handle_messages(scope, receive, send):
1900 | await sse.handle_post_message(scope, receive, send)
1901 |
1902 | starlette_app = Starlette(
1903 | routes=[
1904 | Route("/sse", endpoint=handle_sse),
1905 | Route("/messages", endpoint=handle_messages, methods=["POST"]),
1906 | ]
1907 | )
1908 | ```
1909 | </Tab>
1910 |
1911 | <Tab title="Python (Client)">
1912 | ```python
1913 | async with sse_client("http://localhost:8000/sse") as streams:
1914 | async with ClientSession(streams[0], streams[1]) as session:
1915 | await session.initialize()
1916 | ```
1917 | </Tab>
1918 | </Tabs>
1919 |
1920 | ## Custom Transports
1921 |
1922 | MCP makes it easy to implement custom transports for specific needs. Any transport implementation just needs to conform to the Transport interface.
1923 |
1924 | You can implement custom transports for:
1925 |
1926 | * Custom network protocols
1927 | * Specialized communication channels
1928 | * Integration with existing systems
1929 | * Performance optimization
1930 |
1931 | <Tabs>
1932 | <Tab title="TypeScript">
1933 | ```typescript
1934 | interface Transport {
1935 | // Start processing messages
1936 | start(): Promise<void>;
1937 |
1938 | // Send a JSON-RPC message
1939 | send(message: JSONRPCMessage): Promise<void>;
1940 |
1941 | // Close the connection
1942 | close(): Promise<void>;
1943 |
1944 | // Callbacks
1945 | onclose?: () => void;
1946 | onerror?: (error: Error) => void;
1947 | onmessage?: (message: JSONRPCMessage) => void;
1948 | }
1949 | ```
1950 | </Tab>
1951 |
1952 | <Tab title="Python">
1953 | Note that while MCP servers are often implemented with asyncio, we recommend
1954 | implementing low-level interfaces like transports with `anyio` for wider compatibility.
1955 |
1956 | ```python
1957 | @asynccontextmanager
1958 | async def create_transport(
1959 | read_stream: MemoryObjectReceiveStream[JSONRPCMessage | Exception],
1960 | write_stream: MemoryObjectSendStream[JSONRPCMessage]
1961 | ):
1962 | """
1963 | Transport interface for MCP.
1964 |
1965 | Args:
1966 | read_stream: Stream to read incoming messages from
1967 | write_stream: Stream to write outgoing messages to
1968 | """
1969 | async with anyio.create_task_group() as tg:
1970 | try:
1971 | # Start processing messages
1972 | tg.start_soon(process_messages, read_stream)
1973 |
1974 | # Send messages
1975 | async with write_stream:
1976 | yield write_stream
1977 |
1978 | except Exception as exc:
1979 | # Handle errors
1980 | raise exc
1981 | finally:
1982 | # Clean up
1983 | tg.cancel_scope.cancel()
1984 | await write_stream.aclose()
1985 | await read_stream.aclose()
1986 | ```
1987 | </Tab>
1988 | </Tabs>
1989 |
1990 | ## Error Handling
1991 |
1992 | Transport implementations should handle various error scenarios:
1993 |
1994 | 1. Connection errors
1995 | 2. Message parsing errors
1996 | 3. Protocol errors
1997 | 4. Network timeouts
1998 | 5. Resource cleanup
1999 |
2000 | Example error handling:
2001 |
2002 | <Tabs>
2003 | <Tab title="TypeScript">
2004 | ```typescript
2005 | class ExampleTransport implements Transport {
2006 | async start() {
2007 | try {
2008 | // Connection logic
2009 | } catch (error) {
2010 | this.onerror?.(new Error(`Failed to connect: ${error}`));
2011 | throw error;
2012 | }
2013 | }
2014 |
2015 | async send(message: JSONRPCMessage) {
2016 | try {
2017 | // Sending logic
2018 | } catch (error) {
2019 | this.onerror?.(new Error(`Failed to send message: ${error}`));
2020 | throw error;
2021 | }
2022 | }
2023 | }
2024 | ```
2025 | </Tab>
2026 |
2027 | <Tab title="Python">
2028 | Note that while MCP servers are often implemented with asyncio, we recommend
2029 | implementing low-level interfaces like transports with `anyio` for wider compatibility.
2030 |
2031 | ```python
2032 | @asynccontextmanager
2033 | async def example_transport(scope: Scope, receive: Receive, send: Send):
2034 | try:
2035 | # Create streams for bidirectional communication
2036 | read_stream_writer, read_stream = anyio.create_memory_object_stream(0)
2037 | write_stream, write_stream_reader = anyio.create_memory_object_stream(0)
2038 |
2039 | async def message_handler():
2040 | try:
2041 | async with read_stream_writer:
2042 | # Message handling logic
2043 | pass
2044 | except Exception as exc:
2045 | logger.error(f"Failed to handle message: {exc}")
2046 | raise exc
2047 |
2048 | async with anyio.create_task_group() as tg:
2049 | tg.start_soon(message_handler)
2050 | try:
2051 | # Yield streams for communication
2052 | yield read_stream, write_stream
2053 | except Exception as exc:
2054 | logger.error(f"Transport error: {exc}")
2055 | raise exc
2056 | finally:
2057 | tg.cancel_scope.cancel()
2058 | await write_stream.aclose()
2059 | await read_stream.aclose()
2060 | except Exception as exc:
2061 | logger.error(f"Failed to initialize transport: {exc}")
2062 | raise exc
2063 | ```
2064 | </Tab>
2065 | </Tabs>
2066 |
2067 | ## Best Practices
2068 |
2069 | When implementing or using MCP transport:
2070 |
2071 | 1. Handle connection lifecycle properly
2072 | 2. Implement proper error handling
2073 | 3. Clean up resources on connection close
2074 | 4. Use appropriate timeouts
2075 | 5. Validate messages before sending
2076 | 6. Log transport events for debugging
2077 | 7. Implement reconnection logic when appropriate
2078 | 8. Handle backpressure in message queues
2079 | 9. Monitor connection health
2080 | 10. Implement proper security measures
2081 |
2082 | ## Security Considerations
2083 |
2084 | When implementing transport:
2085 |
2086 | ### Authentication and Authorization
2087 |
2088 | * Implement proper authentication mechanisms
2089 | * Validate client credentials
2090 | * Use secure token handling
2091 | * Implement authorization checks
2092 |
2093 | ### Data Security
2094 |
2095 | * Use TLS for network transport
2096 | * Encrypt sensitive data
2097 | * Validate message integrity
2098 | * Implement message size limits
2099 | * Sanitize input data
2100 |
2101 | ### Network Security
2102 |
2103 | * Implement rate limiting
2104 | * Use appropriate timeouts
2105 | * Handle denial of service scenarios
2106 | * Monitor for unusual patterns
2107 | * Implement proper firewall rules
2108 |
2109 | ## Debugging Transport
2110 |
2111 | Tips for debugging transport issues:
2112 |
2113 | 1. Enable debug logging
2114 | 2. Monitor message flow
2115 | 3. Check connection states
2116 | 4. Validate message formats
2117 | 5. Test error scenarios
2118 | 6. Use network analysis tools
2119 | 7. Implement health checks
2120 | 8. Monitor resource usage
2121 | 9. Test edge cases
2122 | 10. Use proper error tracking
2123 |
2124 |
2125 | # Debugging
2126 |
2127 | A comprehensive guide to debugging Model Context Protocol (MCP) integrations
2128 |
2129 | Effective debugging is essential when developing MCP servers or integrating them with applications. This guide covers the debugging tools and approaches available in the MCP ecosystem.
2130 |
2131 | <Info>
2132 | This guide is for macOS. Guides for other platforms are coming soon.
2133 | </Info>
2134 |
2135 | ## Debugging tools overview
2136 |
2137 | MCP provides several tools for debugging at different levels:
2138 |
2139 | 1. **MCP Inspector**
2140 | * Interactive debugging interface
2141 | * Direct server testing
2142 | * See the [Inspector guide](/docs/tools/inspector) for details
2143 |
2144 | 2. **Claude Desktop Developer Tools**
2145 | * Integration testing
2146 | * Log collection
2147 | * Chrome DevTools integration
2148 |
2149 | 3. **Server Logging**
2150 | * Custom logging implementations
2151 | * Error tracking
2152 | * Performance monitoring
2153 |
2154 | ## Debugging in Claude Desktop
2155 |
2156 | ### Checking server status
2157 |
2158 | The Claude.app interface provides basic server status information:
2159 |
2160 | 1. Click the <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/claude-desktop-mcp-plug-icon.svg" style={{display: 'inline', margin: 0, height: '1.3em'}} /> icon to view:
2161 | * Connected servers
2162 | * Available prompts and resources
2163 |
2164 | 2. Click the <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/claude-desktop-mcp-hammer-icon.svg" style={{display: 'inline', margin: 0, height: '1.3em'}} /> icon to view:
2165 | * Tools made available to the model
2166 |
2167 | ### Viewing logs
2168 |
2169 | Review detailed MCP logs from Claude Desktop:
2170 |
2171 | ```bash
2172 | # Follow logs in real-time
2173 | tail -n 20 -f ~/Library/Logs/Claude/mcp*.log
2174 | ```
2175 |
2176 | The logs capture:
2177 |
2178 | * Server connection events
2179 | * Configuration issues
2180 | * Runtime errors
2181 | * Message exchanges
2182 |
2183 | ### Using Chrome DevTools
2184 |
2185 | Access Chrome's developer tools inside Claude Desktop to investigate client-side errors:
2186 |
2187 | 1. Enable DevTools:
2188 |
2189 | ```bash
2190 | jq '.allowDevTools = true' ~/Library/Application\ Support/Claude/developer_settings.json > tmp.json \
2191 | && mv tmp.json ~/Library/Application\ Support/Claude/developer_settings.json
2192 | ```
2193 |
2194 | 2. Open DevTools: `Command-Option-Shift-i`
2195 |
2196 | Note: You'll see two DevTools windows:
2197 |
2198 | * Main content window
2199 | * App title bar window
2200 |
2201 | Use the Console panel to inspect client-side errors.
2202 |
2203 | Use the Network panel to inspect:
2204 |
2205 | * Message payloads
2206 | * Connection timing
2207 |
2208 | ## Common issues
2209 |
2210 | ### Environment variables
2211 |
2212 | MCP servers inherit only a subset of environment variables automatically, like `USER`, `HOME`, and `PATH`.
2213 |
2214 | To override the default variables or provide your own, you can specify an `env` key in `claude_desktop_config.json`:
2215 |
2216 | ```json
2217 | {
2218 | "myserver": {
2219 | "command": "mcp-server-myapp",
2220 | "env": {
2221 |         "MYAPP_API_KEY": "some_key"
2222 | }
2223 | }
2224 | }
2225 | ```
2226 |
2227 | ### Server initialization
2228 |
2229 | Common initialization problems:
2230 |
2231 | 1. **Path Issues**
2232 | * Incorrect server executable path
2233 | * Missing required files
2234 | * Permission problems
2235 |
2236 | 2. **Configuration Errors**
2237 | * Invalid JSON syntax
2238 | * Missing required fields
2239 | * Type mismatches
2240 |
2241 | 3. **Environment Problems**
2242 | * Missing environment variables
2243 | * Incorrect variable values
2244 | * Permission restrictions
2245 |
2246 | ### Connection problems
2247 |
2248 | When servers fail to connect:
2249 |
2250 | 1. Check Claude Desktop logs
2251 | 2. Verify server process is running
2252 | 3. Test standalone with [Inspector](/docs/tools/inspector)
2253 | 4. Verify protocol compatibility
2254 |
2255 | ## Implementing logging
2256 |
2257 | ### Server-side logging
2258 |
2259 | When building a server that uses the local stdio [transport](/docs/concepts/transports), all messages logged to stderr (standard error) will be captured by the host application (e.g., Claude Desktop) automatically.
2260 |
2261 | <Warning>
2262 | Local MCP servers should not log messages to stdout (standard out), as this will interfere with protocol operation.
2263 | </Warning>
2264 |
2265 | For all [transports](/docs/concepts/transports), you can also provide logging to the client by sending a log message notification:
2266 |
2267 | <Tabs>
2268 | <Tab title="Python">
2269 | ```python
2270 | server.request_context.session.send_log_message(
2271 | level="info",
2272 | data="Server started successfully",
2273 | )
2274 | ```
2275 | </Tab>
2276 |
2277 | <Tab title="TypeScript">
2278 | ```typescript
2279 | server.sendLoggingMessage({
2280 | level: "info",
2281 | data: "Server started successfully",
2282 | });
2283 | ```
2284 | </Tab>
2285 | </Tabs>
2286 |
2287 | Important events to log:
2288 |
2289 | * Initialization steps
2290 | * Resource access
2291 | * Tool execution
2292 | * Error conditions
2293 | * Performance metrics
2294 |
2295 | ### Client-side logging
2296 |
2297 | In client applications:
2298 |
2299 | 1. Enable debug logging
2300 | 2. Monitor network traffic
2301 | 3. Track message exchanges
2302 | 4. Record error states
2303 |
2304 | ## Debugging workflow
2305 |
2306 | ### Development cycle
2307 |
2308 | 1. Initial Development
2309 | * Use [Inspector](/docs/tools/inspector) for basic testing
2310 | * Implement core functionality
2311 | * Add logging points
2312 |
2313 | 2. Integration Testing
2314 | * Test in Claude Desktop
2315 | * Monitor logs
2316 | * Check error handling
2317 |
2318 | ### Testing changes
2319 |
2320 | To test changes efficiently:
2321 |
2322 | * **Configuration changes**: Restart Claude Desktop
2323 | * **Server code changes**: Use Command-R to reload
2324 | * **Quick iteration**: Use [Inspector](/docs/tools/inspector) during development
2325 |
2326 | ## Best practices
2327 |
2328 | ### Logging strategy
2329 |
2330 | 1. **Structured Logging**
2331 | * Use consistent formats
2332 | * Include context
2333 | * Add timestamps
2334 | * Track request IDs
2335 |
2336 | 2. **Error Handling**
2337 | * Log stack traces
2338 | * Include error context
2339 | * Track error patterns
2340 | * Monitor recovery
2341 |
2342 | 3. **Performance Tracking**
2343 | * Log operation timing
2344 | * Monitor resource usage
2345 | * Track message sizes
2346 | * Measure latency
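
The structured-logging points above (consistent format, timestamps, request IDs) can be sketched with Python's standard `logging` module. This is one possible shape, not a prescribed MCP format; note it writes to stderr so stdio transports are undisturbed:

```python
import json
import logging
import sys
import time

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per line with timestamp, level, and request id."""

    def format(self, record):
        return json.dumps({
            "ts": time.time(),
            "level": record.levelname,
            "msg": record.getMessage(),
            # request_id is attached via `extra=`; None if absent
            "request_id": getattr(record, "request_id", None),
        })

handler = logging.StreamHandler(sys.stderr)  # never stdout for stdio servers
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("my-mcp-server")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("tool executed", extra={"request_id": "abc-123"})
```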
2347 |
2348 | ### Security considerations
2349 |
2350 | When debugging:
2351 |
2352 | 1. **Sensitive Data**
2353 | * Sanitize logs
2354 | * Protect credentials
2355 | * Mask personal information
2356 |
2357 | 2. **Access Control**
2358 | * Verify permissions
2359 | * Check authentication
2360 | * Monitor access patterns
2361 |
2362 | ## Getting help
2363 |
2364 | When encountering issues:
2365 |
2366 | 1. **First Steps**
2367 | * Check server logs
2368 | * Test with [Inspector](/docs/tools/inspector)
2369 | * Review configuration
2370 | * Verify environment
2371 |
2372 | 2. **Support Channels**
2373 | * GitHub issues
2374 | * GitHub discussions
2375 |
2376 | 3. **Providing Information**
2377 | * Log excerpts
2378 | * Configuration files
2379 | * Steps to reproduce
2380 | * Environment details
2381 |
2382 | ## Next steps
2383 |
2384 | <CardGroup cols={2}>
2385 | <Card title="MCP Inspector" icon="magnifying-glass" href="/docs/tools/inspector">
2386 | Learn to use the MCP Inspector
2387 | </Card>
2388 | </CardGroup>
2389 |
2390 |
2391 | # Inspector
2392 |
2393 | In-depth guide to using the MCP Inspector for testing and debugging Model Context Protocol servers
2394 |
2395 | The [MCP Inspector](https://github.com/modelcontextprotocol/inspector) is an interactive developer tool for testing and debugging MCP servers. While the [Debugging Guide](/docs/tools/debugging) covers the Inspector as part of the overall debugging toolkit, this document provides a detailed exploration of the Inspector's features and capabilities.
2396 |
2397 | ## Getting started
2398 |
2399 | ### Installation and basic usage
2400 |
2401 | The Inspector runs directly through `npx` without requiring installation:
2402 |
2403 | ```bash
2404 | npx @modelcontextprotocol/inspector <command>
2405 | ```
2406 |
2407 | ```bash
2408 | npx @modelcontextprotocol/inspector <command> <arg1> <arg2>
2409 | ```
2410 |
2411 | #### Inspecting servers from NPM or PyPI
2412 |
2413 | A common way is to start server packages directly from [NPM](https://npmjs.com) or [PyPI](https://pypi.org):
2414 |
2415 | <Tabs>
2416 | <Tab title="NPM package">
2417 | ```bash
2418 | npx -y @modelcontextprotocol/inspector npx <package-name> <args>
2419 | # For example
2420 | npx -y @modelcontextprotocol/inspector npx @modelcontextprotocol/server-postgres postgres://127.0.0.1/testdb
2421 | ```
2422 | </Tab>
2423 |
2424 |   <Tab title="PyPI package">
2425 | ```bash
2426 | npx @modelcontextprotocol/inspector uvx <package-name> <args>
2427 | # For example
2428 | npx @modelcontextprotocol/inspector uvx mcp-server-git --repository ~/code/mcp/servers.git
2429 | ```
2430 | </Tab>
2431 | </Tabs>
2432 |
2433 | #### Inspecting locally developed servers
2434 |
2435 | To inspect a locally developed server, or one downloaded as a repository, the most
2436 | common way is:
2437 |
2438 | <Tabs>
2439 | <Tab title="TypeScript">
2440 | ```bash
2441 | npx @modelcontextprotocol/inspector node path/to/server/index.js args...
2442 | ```
2443 | </Tab>
2444 |
2445 | <Tab title="Python">
2446 | ```bash
2447 | npx @modelcontextprotocol/inspector \
2448 | uv \
2449 | --directory path/to/server \
2450 | run \
2451 | package-name \
2452 | args...
2453 | ```
2454 | </Tab>
2455 | </Tabs>
2456 |
2457 | Please carefully read any attached README for the most accurate instructions.
2458 |
2459 | ## Feature overview
2460 |
2461 | <Frame caption="The MCP Inspector interface">
2462 | <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/mcp-inspector.png" />
2463 | </Frame>
2464 |
2465 | The Inspector provides several features for interacting with your MCP server:
2466 |
2467 | ### Server connection pane
2468 |
2469 | * Allows selecting the [transport](/docs/concepts/transports) for connecting to the server
2470 | * For local servers, supports customizing the command-line arguments and environment
2471 |
2472 | ### Resources tab
2473 |
2474 | * Lists all available resources
2475 | * Shows resource metadata (MIME types, descriptions)
2476 | * Allows resource content inspection
2477 | * Supports subscription testing
2478 |
2479 | ### Prompts tab
2480 |
2481 | * Displays available prompt templates
2482 | * Shows prompt arguments and descriptions
2483 | * Enables prompt testing with custom arguments
2484 | * Previews generated messages
2485 |
2486 | ### Tools tab
2487 |
2488 | * Lists available tools
2489 | * Shows tool schemas and descriptions
2490 | * Enables tool testing with custom inputs
2491 | * Displays tool execution results
2492 |
2493 | ### Notifications pane
2494 |
2495 | * Presents all logs recorded from the server
2496 | * Shows notifications received from the server
2497 |
2498 | ## Best practices
2499 |
2500 | ### Development workflow
2501 |
2502 | 1. Start Development
2503 | * Launch Inspector with your server
2504 | * Verify basic connectivity
2505 | * Check capability negotiation
2506 |
2507 | 2. Iterative testing
2508 | * Make server changes
2509 | * Rebuild the server
2510 | * Reconnect the Inspector
2511 | * Test affected features
2512 | * Monitor messages
2513 |
2514 | 3. Test edge cases
2515 | * Invalid inputs
2516 | * Missing prompt arguments
2517 | * Concurrent operations
2518 | * Verify error handling and error responses
2519 |
2520 | ## Next steps
2521 |
2522 | <CardGroup cols={2}>
2523 | <Card title="Inspector Repository" icon="github" href="https://github.com/modelcontextprotocol/inspector">
2524 | Check out the MCP Inspector source code
2525 | </Card>
2526 |
2527 | <Card title="Debugging Guide" icon="bug" href="/docs/tools/debugging">
2528 | Learn about broader debugging strategies
2529 | </Card>
2530 | </CardGroup>
2531 |
2532 |
2533 | # Examples
2534 |
2535 | A list of example servers and implementations
2536 |
2537 | This page showcases various Model Context Protocol (MCP) servers that demonstrate the protocol's capabilities and versatility. These servers enable Large Language Models (LLMs) to securely access tools and data sources.
2538 |
2539 | ## Reference implementations
2540 |
2541 | These official reference servers demonstrate core MCP features and SDK usage:
2542 |
2543 | ### Data and file systems
2544 |
2545 | * **[Filesystem](https://github.com/modelcontextprotocol/servers/tree/main/src/filesystem)** - Secure file operations with configurable access controls
2546 | * **[PostgreSQL](https://github.com/modelcontextprotocol/servers/tree/main/src/postgres)** - Read-only database access with schema inspection capabilities
2547 | * **[SQLite](https://github.com/modelcontextprotocol/servers/tree/main/src/sqlite)** - Database interaction and business intelligence features
2548 | * **[Google Drive](https://github.com/modelcontextprotocol/servers/tree/main/src/gdrive)** - File access and search capabilities for Google Drive
2549 |
2550 | ### Development tools
2551 |
2552 | * **[Git](https://github.com/modelcontextprotocol/servers/tree/main/src/git)** - Tools to read, search, and manipulate Git repositories
2553 | * **[GitHub](https://github.com/modelcontextprotocol/servers/tree/main/src/github)** - Repository management, file operations, and GitHub API integration
2554 | * **[GitLab](https://github.com/modelcontextprotocol/servers/tree/main/src/gitlab)** - GitLab API integration enabling project management
2555 | * **[Sentry](https://github.com/modelcontextprotocol/servers/tree/main/src/sentry)** - Retrieving and analyzing issues from Sentry.io
2556 |
2557 | ### Web and browser automation
2558 |
2559 | * **[Brave Search](https://github.com/modelcontextprotocol/servers/tree/main/src/brave-search)** - Web and local search using Brave's Search API
2560 | * **[Fetch](https://github.com/modelcontextprotocol/servers/tree/main/src/fetch)** - Web content fetching and conversion optimized for LLM usage
2561 | * **[Puppeteer](https://github.com/modelcontextprotocol/servers/tree/main/src/puppeteer)** - Browser automation and web scraping capabilities
2562 |
2563 | ### Productivity and communication
2564 |
2565 | * **[Slack](https://github.com/modelcontextprotocol/servers/tree/main/src/slack)** - Channel management and messaging capabilities
2566 | * **[Google Maps](https://github.com/modelcontextprotocol/servers/tree/main/src/google-maps)** - Location services, directions, and place details
2567 | * **[Memory](https://github.com/modelcontextprotocol/servers/tree/main/src/memory)** - Knowledge graph-based persistent memory system
2568 |
2569 | ### AI and specialized tools
2570 |
2571 | * **[EverArt](https://github.com/modelcontextprotocol/servers/tree/main/src/everart)** - AI image generation using various models
2572 | * **[Sequential Thinking](https://github.com/modelcontextprotocol/servers/tree/main/src/sequentialthinking)** - Dynamic problem-solving through thought sequences
2573 | * **[AWS KB Retrieval](https://github.com/modelcontextprotocol/servers/tree/main/src/aws-kb-retrieval-server)** - Retrieval from AWS Knowledge Base using Bedrock Agent Runtime
2574 |
2575 | ## Official integrations
2576 |
2577 | These MCP servers are maintained by companies for their platforms:
2578 |
2579 | * **[Axiom](https://github.com/axiomhq/mcp-server-axiom)** - Query and analyze logs, traces, and event data using natural language
2580 | * **[Browserbase](https://github.com/browserbase/mcp-server-browserbase)** - Automate browser interactions in the cloud
2581 | * **[Cloudflare](https://github.com/cloudflare/mcp-server-cloudflare)** - Deploy and manage resources on the Cloudflare developer platform
2582 | * **[E2B](https://github.com/e2b-dev/mcp-server)** - Execute code in secure cloud sandboxes
2583 | * **[Neon](https://github.com/neondatabase/mcp-server-neon)** - Interact with the Neon serverless Postgres platform
2584 | * **[Obsidian Markdown Notes](https://github.com/calclavia/mcp-obsidian)** - Read and search through Markdown notes in Obsidian vaults
2585 | * **[Qdrant](https://github.com/qdrant/mcp-server-qdrant/)** - Implement semantic memory using the Qdrant vector search engine
2586 | * **[Raygun](https://github.com/MindscapeHQ/mcp-server-raygun)** - Access crash reporting and monitoring data
2587 | * **[Search1API](https://github.com/fatwang2/search1api-mcp)** - Unified API for search, crawling, and sitemaps
2588 | * **[Tinybird](https://github.com/tinybirdco/mcp-tinybird)** - Interface with the Tinybird serverless ClickHouse platform
2589 |
2590 | ## Community highlights
2591 |
2592 | A growing ecosystem of community-developed servers extends MCP's capabilities:
2593 |
2594 | * **[Docker](https://github.com/ckreiling/mcp-server-docker)** - Manage containers, images, volumes, and networks
2595 | * **[Kubernetes](https://github.com/Flux159/mcp-server-kubernetes)** - Manage pods, deployments, and services
2596 | * **[Linear](https://github.com/jerhadf/linear-mcp-server)** - Project management and issue tracking
2597 | * **[Snowflake](https://github.com/datawiz168/mcp-snowflake-service)** - Interact with Snowflake databases
2598 | * **[Spotify](https://github.com/varunneal/spotify-mcp)** - Control Spotify playback and manage playlists
2599 | * **[Todoist](https://github.com/abhiz123/todoist-mcp-server)** - Task management integration
2600 |
2601 | > **Note:** Community servers are untested and should be used at your own risk. They are not affiliated with or endorsed by Anthropic.
2602 |
2603 | For a complete list of community servers, visit the [MCP Servers Repository](https://github.com/modelcontextprotocol/servers).
2604 |
2605 | ## Getting started
2606 |
2607 | ### Using reference servers
2608 |
2609 | TypeScript-based servers can be used directly with `npx`:
2610 |
2611 | ```bash
2612 | npx -y @modelcontextprotocol/server-memory
2613 | ```
2614 |
2615 | Python-based servers can be used with `uvx` (recommended) or `pip`:
2616 |
2617 | ```bash
2618 | # Using uvx
2619 | uvx mcp-server-git
2620 |
2621 | # Using pip
2622 | pip install mcp-server-git
2623 | python -m mcp_server_git
2624 | ```
2625 |
2626 | ### Configuring with Claude
2627 |
2628 | To use an MCP server with Claude, add it to your configuration:
2629 |
2630 | ```json
2631 | {
2632 | "mcpServers": {
2633 | "memory": {
2634 | "command": "npx",
2635 | "args": ["-y", "@modelcontextprotocol/server-memory"]
2636 | },
2637 | "filesystem": {
2638 | "command": "npx",
2639 | "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/files"]
2640 | },
2641 | "github": {
2642 | "command": "npx",
2643 | "args": ["-y", "@modelcontextprotocol/server-github"],
2644 | "env": {
2645 | "GITHUB_PERSONAL_ACCESS_TOKEN": "<YOUR_TOKEN>"
2646 | }
2647 | }
2648 | }
2649 | }
2650 | ```
2651 |
2652 | ## Additional resources
2653 |
2654 | * [MCP Servers Repository](https://github.com/modelcontextprotocol/servers) - Complete collection of reference implementations and community servers
2655 | * [Awesome MCP Servers](https://github.com/punkpeye/awesome-mcp-servers) - Curated list of MCP servers
2656 | * [MCP CLI](https://github.com/wong2/mcp-cli) - Command-line inspector for testing MCP servers
2657 | * [MCP Get](https://mcp-get.com) - Tool for installing and managing MCP servers
2658 |
2659 | Visit our [GitHub Discussions](https://github.com/orgs/modelcontextprotocol/discussions) to engage with the MCP community.
2660 |
2661 |
2662 | # Introduction
2663 |
2664 | Get started with the Model Context Protocol (MCP)
2665 |
2666 | MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.
2667 |
2668 | ## Why MCP?
2669 |
2670 | MCP helps you build agents and complex workflows on top of LLMs. LLMs frequently need to integrate with data and tools, and MCP provides:
2671 |
2672 | * A growing list of pre-built integrations that your LLM can directly plug into
2673 | * The flexibility to switch between LLM providers and vendors
2674 | * Best practices for securing your data within your infrastructure
2675 |
2676 | ### General architecture
2677 |
2678 | At its core, MCP follows a client-server architecture where a host application can connect to multiple servers:
2679 |
2680 | ```mermaid
2681 | flowchart LR
2682 | subgraph "Your Computer"
2683 | Host["MCP Host\n(Claude, IDEs, Tools)"]
2684 | S1["MCP Server A"]
2685 | S2["MCP Server B"]
2686 | S3["MCP Server C"]
2687 | Host <-->|"MCP Protocol"| S1
2688 | Host <-->|"MCP Protocol"| S2
2689 | Host <-->|"MCP Protocol"| S3
2690 | S1 <--> D1[("Local\nData Source A")]
2691 | S2 <--> D2[("Local\nData Source B")]
2692 | end
2693 | subgraph "Internet"
2694 | S3 <-->|"Web APIs"| D3[("Remote\nService C")]
2695 | end
2696 | ```
2697 |
2698 | * **MCP Hosts**: Programs like Claude Desktop, IDEs, or AI tools that want to access data through MCP
2699 | * **MCP Clients**: Protocol clients that maintain 1:1 connections with servers
2700 | * **MCP Servers**: Lightweight programs that each expose specific capabilities through the standardized Model Context Protocol
2701 | * **Local Data Sources**: Your computer's files, databases, and services that MCP servers can securely access
2702 | * **Remote Services**: External systems available over the internet (e.g., through APIs) that MCP servers can connect to
2703 |
2704 | ## Get started
2705 |
2706 | Choose the path that best fits your needs:
2707 |
2708 | <CardGroup cols={2}>
2709 | <Card title="Quickstart" icon="bolt" href="/quickstart">
2710 | Build and connect to your first MCP server
2711 | </Card>
2712 |
2713 | <Card title="Examples" icon="grid" href="/examples">
2714 | Check out our gallery of official MCP servers and implementations
2715 | </Card>
2716 |
2717 | <Card title="Clients" icon="cubes" href="/clients">
2718 | View the list of clients that support MCP integrations
2719 | </Card>
2720 | </CardGroup>
2721 |
2722 | ## Tutorials
2723 |
2724 | <CardGroup cols={2}>
2725 | <Card title="Building a MCP client" icon="outlet" href="/tutorials/building-a-client">
2726 | Learn how to build your first MCP client
2727 | </Card>
2728 |
2729 | <Card title="Building MCP with LLMs" icon="comments" href="/tutorials/building-mcp-with-llms">
2730 | Learn how to use LLMs like Claude to speed up your MCP development
2731 | </Card>
2732 |
2733 | <Card title="Debugging Guide" icon="bug" href="/docs/tools/debugging">
2734 | Learn how to effectively debug MCP servers and integrations
2735 | </Card>
2736 |
2737 | <Card title="MCP Inspector" icon="magnifying-glass" href="/docs/tools/inspector">
2738 | Test and inspect your MCP servers with our interactive debugging tool
2739 | </Card>
2740 | </CardGroup>
2741 |
2742 | ## Explore MCP
2743 |
2744 | Dive deeper into MCP's core concepts and capabilities:
2745 |
2746 | <CardGroup cols={2}>
2747 | <Card title="Core architecture" icon="sitemap" href="/docs/concepts/architecture">
2748 | Understand how MCP connects clients, servers, and LLMs
2749 | </Card>
2750 |
2751 | <Card title="Resources" icon="database" href="/docs/concepts/resources">
2752 | Expose data and content from your servers to LLMs
2753 | </Card>
2754 |
2755 | <Card title="Prompts" icon="message" href="/docs/concepts/prompts">
2756 | Create reusable prompt templates and workflows
2757 | </Card>
2758 |
2759 | <Card title="Tools" icon="wrench" href="/docs/concepts/tools">
2760 | Enable LLMs to perform actions through your server
2761 | </Card>
2762 |
2763 | <Card title="Sampling" icon="robot" href="/docs/concepts/sampling">
2764 | Let your servers request completions from LLMs
2765 | </Card>
2766 |
2767 | <Card title="Transports" icon="network-wired" href="/docs/concepts/transports">
2768 | Learn about MCP's communication mechanism
2769 | </Card>
2770 | </CardGroup>
2771 |
2772 | ## Contributing
2773 |
2774 | Want to contribute? Check out [@modelcontextprotocol](https://github.com/modelcontextprotocol) on GitHub to join our growing community of developers building with MCP.
2775 |
2776 |
2777 | # Quickstart
2778 |
2779 | Get started with building your first MCP server and connecting it to a host
2780 |
2781 | In this tutorial, we'll build a simple MCP weather server and connect it to a host, Claude for Desktop. We'll start with a basic setup, and then progress to more complex use cases.
2782 |
2783 | ### What we'll be building
2784 |
2785 | Many LLMs (including Claude) do not currently have the ability to fetch forecasts and severe weather alerts. Let's use MCP to solve that!
2786 |
2787 | We'll build a server that exposes two tools: `get-alerts` and `get-forecast`. Then we'll connect the server to an MCP host (in this case, Claude for Desktop):
2788 |
2789 | <Frame>
2790 | <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/weather-alerts.png" />
2791 | </Frame>
2792 |
2793 | <Frame>
2794 | <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/current-weather.png" />
2795 | </Frame>
2796 |
2797 | <Note>
2798 | Servers can connect to any client. We've chosen Claude desktop here for simplicity, but we also have guides on [building your own client](/tutorials/building-a-client).
2799 | </Note>
2800 |
2801 | <Accordion title="Why Claude for Desktop and not Claude.ai?">
2802 | Because servers are locally run, MCP currently only supports desktop hosts. Remote hosts are in active development.
2803 | </Accordion>
2804 |
2805 | ### Core MCP Concepts
2806 |
2807 | MCP servers can provide three main types of capabilities:
2808 |
2809 | 1. **Resources**: File-like data that can be read by clients (like API responses or file contents)
2810 | 2. **Tools**: Functions that can be called by the LLM (with user approval)
2811 | 3. **Prompts**: Pre-written templates that help users accomplish specific tasks
2812 |
2813 | This tutorial focuses on tools, but we have intermediate tutorials if you'd like to learn more about Resources and Prompts.
2814 |
2815 | <Tabs>
2816 | <Tab title="Python">
2817 | ### Prerequisite knowledge
2818 |
2819 | This quickstart assumes you have familiarity with:
2820 |
2821 | * Python
2822 | * LLMs like Claude
2823 |
2824 | ### System requirements
2825 |
2826 |     For Python, make sure you have Python 3.10 or higher installed.
2827 |
2828 | ### Set up your environment
2829 |
2830 | First, let's install `uv` and set up our Python project and environment:
2831 |
2832 | <CodeGroup>
2833 | ```bash MacOS/Linux
2834 | curl -LsSf https://astral.sh/uv/install.sh | sh
2835 | ```
2836 |
2837 | ```powershell Windows
2838 | powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
2839 | ```
2840 | </CodeGroup>
2841 |
2842 | Make sure to restart your terminal afterwards to ensure that the `uv` command gets picked up.
2843 |
2844 | Now, let's create and set up our project:
2845 |
2846 | <CodeGroup>
2847 | ```bash MacOS/Linux
2848 | # Create a new directory for our project
2849 | uv init weather
2850 | cd weather
2851 |
2852 | # Create virtual environment and activate it
2853 | uv venv
2854 | source .venv/bin/activate
2855 |
2856 | # Install dependencies
2857 | uv add mcp httpx
2858 |
2859 | # Remove template file
2860 | rm hello.py
2861 |
2862 | # Create our files
2863 | mkdir -p src/weather
2864 | touch src/weather/__init__.py
2865 | touch src/weather/server.py
2866 | ```
2867 |
2868 | ```powershell Windows
2869 | # Create a new directory for our project
2870 | uv init weather
2871 | cd weather
2872 |
2873 | # Create virtual environment and activate it
2874 | uv venv
2875 | .venv\Scripts\activate
2876 |
2877 | # Install dependencies
2878 | uv add mcp httpx
2879 |
2880 | # Clean up boilerplate code
2881 | rm hello.py
2882 |
2883 | # Create our files
2884 | md src
2885 | md src\weather
2886 | new-item src\weather\__init__.py
2887 | new-item src\weather\server.py
2888 | ```
2889 | </CodeGroup>
2890 |
2891 | Add this code to `pyproject.toml`:
2892 |
2893 | ```toml
2894 | ...rest of config
2895 |
2896 | [build-system]
2897 | requires = [ "hatchling",]
2898 | build-backend = "hatchling.build"
2899 |
2900 | [project.scripts]
2901 | weather = "weather:main"
2902 | ```
2903 |
2904 | Add this code to `__init__.py`:
2905 |
2906 | ```python src/weather/__init__.py
2907 | from . import server
2908 | import asyncio
2909 |
2910 | def main():
2911 | """Main entry point for the package."""
2912 | asyncio.run(server.main())
2913 |
2914 | # Optionally expose other important items at package level
2915 | __all__ = ['main', 'server']
2916 | ```
2917 |
2918 | Now let's dive into building your server.
2919 |
2920 | ## Building your server
2921 |
2922 | ### Importing packages
2923 |
2924 | Add these to the top of your `server.py`:
2925 |
2926 | ```python
2927 | from typing import Any
2928 | import asyncio
2929 | import httpx
2930 | from mcp.server.models import InitializationOptions
2931 | import mcp.types as types
2932 | from mcp.server import NotificationOptions, Server
2933 | import mcp.server.stdio
2934 | ```
2935 |
2936 | ### Setting up the instance
2937 |
2938 | Then initialize the server instance and the base URL for the NWS API:
2939 |
2940 | ```python
2941 | NWS_API_BASE = "https://api.weather.gov"
2942 | USER_AGENT = "weather-app/1.0"
2943 |
2944 | server = Server("weather")
2945 | ```
2946 |
2947 | ### Implementing tool listing
2948 |
2949 | We need to tell clients what tools are available. The `list_tools()` decorator registers this handler:
2950 |
2951 | ```python
2952 | @server.list_tools()
2953 | async def handle_list_tools() -> list[types.Tool]:
2954 | """
2955 | List available tools.
2956 | Each tool specifies its arguments using JSON Schema validation.
2957 | """
2958 | return [
2959 | types.Tool(
2960 | name="get-alerts",
2961 | description="Get weather alerts for a state",
2962 | inputSchema={
2963 | "type": "object",
2964 | "properties": {
2965 | "state": {
2966 | "type": "string",
2967 | "description": "Two-letter state code (e.g. CA, NY)",
2968 | },
2969 | },
2970 | "required": ["state"],
2971 | },
2972 | ),
2973 | types.Tool(
2974 | name="get-forecast",
2975 | description="Get weather forecast for a location",
2976 | inputSchema={
2977 | "type": "object",
2978 | "properties": {
2979 | "latitude": {
2980 | "type": "number",
2981 | "description": "Latitude of the location",
2982 | },
2983 | "longitude": {
2984 | "type": "number",
2985 | "description": "Longitude of the location",
2986 | },
2987 | },
2988 | "required": ["latitude", "longitude"],
2989 | },
2990 | ),
2991 | ]
2992 |
2993 | ```
2994 |
2995 | This defines our two tools: `get-alerts` and `get-forecast`.
2996 |
2997 | ### Helper functions
2998 |
2999 | Next, let's add our helper functions for querying and formatting the data from the National Weather Service API:
3000 |
3001 | ```python
3002 | async def make_nws_request(client: httpx.AsyncClient, url: str) -> dict[str, Any] | None:
3003 | """Make a request to the NWS API with proper error handling."""
3004 | headers = {
3005 | "User-Agent": USER_AGENT,
3006 | "Accept": "application/geo+json"
3007 | }
3008 |
3009 | try:
3010 | response = await client.get(url, headers=headers, timeout=30.0)
3011 | response.raise_for_status()
3012 | return response.json()
3013 | except Exception:
3014 | return None
3015 |
3016 | def format_alert(feature: dict) -> str:
3017 | """Format an alert feature into a concise string."""
3018 | props = feature["properties"]
3019 | return (
3020 | f"Event: {props.get('event', 'Unknown')}\n"
3021 | f"Area: {props.get('areaDesc', 'Unknown')}\n"
3022 | f"Severity: {props.get('severity', 'Unknown')}\n"
3023 | f"Status: {props.get('status', 'Unknown')}\n"
3024 | f"Headline: {props.get('headline', 'No headline')}\n"
3025 | "---"
3026 | )
3027 | ```
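
To see what `format_alert` produces, we can run it against a hand-written feature dict (sample data shaped like the NWS GeoJSON, not a live response; the helper is repeated here so the snippet runs on its own):

```python
def format_alert(feature: dict) -> str:
    """Format an alert feature into a concise string (same as above)."""
    props = feature["properties"]
    return (
        f"Event: {props.get('event', 'Unknown')}\n"
        f"Area: {props.get('areaDesc', 'Unknown')}\n"
        f"Severity: {props.get('severity', 'Unknown')}\n"
        f"Status: {props.get('status', 'Unknown')}\n"
        f"Headline: {props.get('headline', 'No headline')}\n"
        "---"
    )

# A hand-written feature shaped like the NWS alerts GeoJSON (sample values)
sample = {"properties": {
    "event": "Flood Warning",
    "areaDesc": "Sacramento County",
    "severity": "Moderate",
    "status": "Actual",
    "headline": "Flood Warning issued for Sacramento County, CA",
}}

print(format_alert(sample))
```

Missing fields simply fall back to `"Unknown"` thanks to `dict.get`, so a sparse alert payload won't raise.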
3028 |
3029 | ### Implementing tool execution
3030 |
3031 | The tool execution handler is responsible for actually executing the logic of each tool. Let's add it:
3032 |
3033 | ```python
3034 | @server.call_tool()
3035 | async def handle_call_tool(
3036 | name: str, arguments: dict | None
3037 | ) -> list[types.TextContent | types.ImageContent | types.EmbeddedResource]:
3038 | """
3039 | Handle tool execution requests.
3040 | Tools can fetch weather data and notify clients of changes.
3041 | """
3042 | if not arguments:
3043 | raise ValueError("Missing arguments")
3044 |
3045 | if name == "get-alerts":
3046 | state = arguments.get("state")
3047 | if not state:
3048 | raise ValueError("Missing state parameter")
3049 |
3050 | # Convert state to uppercase to ensure consistent format
3051 | state = state.upper()
3052 | if len(state) != 2:
3053 | raise ValueError("State must be a two-letter code (e.g. CA, NY)")
3054 |
3055 | async with httpx.AsyncClient() as client:
3056 | alerts_url = f"{NWS_API_BASE}/alerts?area={state}"
3057 | alerts_data = await make_nws_request(client, alerts_url)
3058 |
3059 | if not alerts_data:
3060 | return [types.TextContent(type="text", text="Failed to retrieve alerts data")]
3061 |
3062 | features = alerts_data.get("features", [])
3063 | if not features:
3064 | return [types.TextContent(type="text", text=f"No active alerts for {state}")]
3065 |
3066 | # Format each alert into a concise string
3067 | formatted_alerts = [format_alert(feature) for feature in features[:20]] # only take the first 20 alerts
3068 | alerts_text = f"Active alerts for {state}:\n\n" + "\n".join(formatted_alerts)
3069 |
3070 | return [
3071 | types.TextContent(
3072 | type="text",
3073 | text=alerts_text
3074 | )
3075 | ]
3076 | elif name == "get-forecast":
3077 | try:
3078 | latitude = float(arguments.get("latitude"))
3079 | longitude = float(arguments.get("longitude"))
3080 | except (TypeError, ValueError):
3081 | return [types.TextContent(
3082 | type="text",
3083 | text="Invalid coordinates. Please provide valid numbers for latitude and longitude."
3084 | )]
3085 |
3086 | # Basic coordinate validation
3087 | if not (-90 <= latitude <= 90) or not (-180 <= longitude <= 180):
3088 | return [types.TextContent(
3089 | type="text",
3090 | text="Invalid coordinates. Latitude must be between -90 and 90, longitude between -180 and 180."
3091 | )]
3092 |
3093 | async with httpx.AsyncClient() as client:
3094 | # First get the grid point
3095 | lat_str = f"{latitude}"
3096 | lon_str = f"{longitude}"
3097 | points_url = f"{NWS_API_BASE}/points/{lat_str},{lon_str}"
3098 | points_data = await make_nws_request(client, points_url)
3099 |
3100 | if not points_data:
3101 | return [types.TextContent(type="text", text=f"Failed to retrieve grid point data for coordinates: {latitude}, {longitude}. This location may not be supported by the NWS API (only US locations are supported).")]
3102 |
3103 | # Extract forecast URL from the response
3104 | properties = points_data.get("properties", {})
3105 | forecast_url = properties.get("forecast")
3106 |
3107 | if not forecast_url:
3108 | return [types.TextContent(type="text", text="Failed to get forecast URL from grid point data")]
3109 |
3110 | # Get the forecast
3111 | forecast_data = await make_nws_request(client, forecast_url)
3112 |
3113 | if not forecast_data:
3114 | return [types.TextContent(type="text", text="Failed to retrieve forecast data")]
3115 |
3116 | # Format the forecast periods
3117 | periods = forecast_data.get("properties", {}).get("periods", [])
3118 | if not periods:
3119 | return [types.TextContent(type="text", text="No forecast periods available")]
3120 |
3121 | # Format each period into a concise string
3122 | formatted_forecast = []
3123 | for period in periods:
3124 | forecast_text = (
3125 | f"{period.get('name', 'Unknown')}:\n"
3126 | f"Temperature: {period.get('temperature', 'Unknown')}°{period.get('temperatureUnit', 'F')}\n"
3127 | f"Wind: {period.get('windSpeed', 'Unknown')} {period.get('windDirection', '')}\n"
3128 | f"{period.get('shortForecast', 'No forecast available')}\n"
3129 | "---"
3130 | )
3131 | formatted_forecast.append(forecast_text)
3132 |
3133 | forecast_text = f"Forecast for {latitude}, {longitude}:\n\n" + "\n".join(formatted_forecast)
3134 |
3135 | return [types.TextContent(
3136 | type="text",
3137 | text=forecast_text
3138 | )]
3139 | else:
3140 | raise ValueError(f"Unknown tool: {name}")
3141 | ```
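
Note that the `get-forecast` branch makes two requests: first to `/points/{latitude},{longitude}` to locate the forecast grid, then to the forecast URL found in that response. Here is the URL plumbing in isolation, with a hand-written payload standing in for the real `/points` response (the gridpoint path is a made-up example):

```python
NWS_API_BASE = "https://api.weather.gov"

latitude, longitude = 38.58, -121.49  # example coordinates (Sacramento area)
points_url = f"{NWS_API_BASE}/points/{latitude},{longitude}"

# Hand-written /points payload fragment; real responses carry many more fields
points_data = {
    "properties": {"forecast": f"{NWS_API_BASE}/gridpoints/STO/43,98/forecast"}
}
forecast_url = points_data.get("properties", {}).get("forecast")

print(points_url)
print(forecast_url)
```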
3142 |
3143 | ### Running the server
3144 |
3145 | Finally, implement the main function to run the server:
3146 |
3147 | ```python
3148 | async def main():
3149 | # Run the server using stdin/stdout streams
3150 | async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
3151 | await server.run(
3152 | read_stream,
3153 | write_stream,
3154 | InitializationOptions(
3155 | server_name="weather",
3156 | server_version="0.1.0",
3157 | capabilities=server.get_capabilities(
3158 | notification_options=NotificationOptions(),
3159 | experimental_capabilities={},
3160 | ),
3161 | ),
3162 | )
3163 |
3164 | # This is needed if you'd like to connect to a custom client
3165 | if __name__ == "__main__":
3166 | asyncio.run(main())
3167 | ```
3168 |
3169 | Your server is complete! Run `uv run src/weather/server.py` to confirm that everything's working.
3170 |
3171 | Let's now test your server from an existing MCP host, Claude for Desktop.
3172 |
3173 | ## Testing your server with Claude for Desktop
3174 |
3175 | <Note>
3176 | Claude for Desktop is not yet available on Linux. Linux users can proceed to the [Building a client](/tutorials/building-a-client) tutorial to build an MCP client that connects to the server we just built.
3177 | </Note>
3178 |
3179 | First, make sure you have Claude for Desktop installed. [You can install the latest version here.](https://claude.ai/download)
3180 |
3181 | Next, open your Claude for Desktop App configuration at `~/Library/Application Support/Claude/claude_desktop_config.json` in a text editor.
3182 |
3183 | For example, if you have [VS Code](https://code.visualstudio.com/) installed:
3184 |
3185 | <Tabs>
3186 | <Tab title="MacOS/Linux">
3187 | ```bash
3188 | code ~/Library/Application\ Support/Claude/claude_desktop_config.json
3189 | ```
3190 | </Tab>
3191 |
3192 | <Tab title="Windows">
3193 | ```powershell
3194 | code $env:AppData\Claude\claude_desktop_config.json
3195 | ```
3196 | </Tab>
3197 | </Tabs>
3198 |
3199 | Add this configuration (replace the parent folder path):
3200 |
3201 | <Tabs>
3202 | <Tab title="MacOS/Linux">
3203 | ```json Python
3204 | {
3205 | "mcpServers": {
3206 | "weather": {
3207 | "command": "uv",
3208 | "args": [
3209 | "--directory",
3210 | "/ABSOLUTE/PATH/TO/PARENT/FOLDER/weather",
3211 | "run",
3212 | "weather"
3213 | ]
3214 | }
3215 | }
3216 | }
3217 | ```
3218 | </Tab>
3219 |
3220 | <Tab title="Windows">
3221 | ```json Python
3222 | {
3223 | "mcpServers": {
3224 | "weather": {
3225 | "command": "uv",
3226 | "args": [
3227 | "--directory",
3228 |         "C:\\ABSOLUTE\\PATH\\TO\\PARENT\\FOLDER\\weather",
3229 | "run",
3230 | "weather"
3231 | ]
3232 | }
3233 | }
3234 | }
3235 | ```
3236 | </Tab>
3237 | </Tabs>
3238 |
3239 | This tells Claude for Desktop:
3240 |
3241 | 1. There's an MCP server named "weather"
3242 | 2. Launch it by running `uv --directory /ABSOLUTE/PATH/TO/PARENT/FOLDER/weather run weather`
3243 |
3244 | Save the file, and restart **Claude for Desktop**.
3245 | </Tab>
3246 |
3247 | <Tab title="Node">
3248 | ### Prerequisite knowledge
3249 |
3250 | This quickstart assumes you have familiarity with:
3251 |
3252 | * TypeScript
3253 | * LLMs like Claude
3254 |
3255 | ### System requirements
3256 |
3257 | For TypeScript, make sure you have the latest version of Node installed.
3258 |
3259 | ### Set up your environment
3260 |
3261 | First, let's install Node.js and npm if you haven't already. You can download them from [nodejs.org](https://nodejs.org/).
3262 | Verify your Node.js installation:
3263 |
3264 | ```bash
3265 | node --version
3266 | npm --version
3267 | ```
3268 |
3269 | For this tutorial, you'll need Node.js version 16 or higher.
3270 |
3271 | Now, let's create and set up our project:
3272 |
3273 | <CodeGroup>
3274 | ```bash MacOS/Linux
3275 | # Create a new directory for our project
3276 | mkdir weather
3277 | cd weather
3278 |
3279 | # Initialize a new npm project
3280 | npm init -y
3281 |
3282 | # Install dependencies
3283 | npm install @modelcontextprotocol/sdk zod
3284 | npm install -D @types/node typescript
3285 |
3286 | # Create our files
3287 | mkdir src
3288 | touch src/index.ts
3289 | ```
3290 |
3291 | ```powershell Windows
3292 | # Create a new directory for our project
3293 | md weather
3294 | cd weather
3295 |
3296 | # Initialize a new npm project
3297 | npm init -y
3298 |
3299 | # Install dependencies
3300 | npm install @modelcontextprotocol/sdk zod
3301 | npm install -D @types/node typescript
3302 |
3303 | # Create our files
3304 | md src
3305 | new-item src\index.ts
3306 | ```
3307 | </CodeGroup>
3308 |
3309 | Update your package.json to add type: "module" and a build script:
3310 |
3311 | ```json package.json
3312 | {
3313 | "type": "module",
3314 | "bin": {
3315 | "weather": "./build/index.js"
3316 | },
3317 | "scripts": {
3318 |     "build": "tsc && node -e \"require('fs').chmodSync('build/index.js', '755')\""
3319 |   },
3320 |   "files": [
3321 |     "build"
3322 |   ]
3323 | }
3324 | ```
3325 |
3326 | Create a `tsconfig.json` in the root of your project:
3327 |
3328 | ```json tsconfig.json
3329 | {
3330 | "compilerOptions": {
3331 | "target": "ES2022",
3332 | "module": "Node16",
3333 | "moduleResolution": "Node16",
3334 | "outDir": "./build",
3335 | "rootDir": "./src",
3336 | "strict": true,
3337 | "esModuleInterop": true,
3338 | "skipLibCheck": true,
3339 | "forceConsistentCasingInFileNames": true
3340 | },
3341 | "include": ["src/**/*"],
3342 | "exclude": ["node_modules"]
3343 | }
3344 | ```
3345 |
3346 | Now let's dive into building your server.
3347 |
3348 | ## Building your server
3349 |
3350 | ### Importing packages
3351 |
3352 | Add these to the top of your `src/index.ts`:
3353 |
3354 | ```typescript
3355 | import { Server } from "@modelcontextprotocol/sdk/server/index.js";
3356 | import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
3357 | import {
3358 | CallToolRequestSchema,
3359 | ListToolsRequestSchema,
3360 | } from "@modelcontextprotocol/sdk/types.js";
3361 | import { z } from "zod";
3362 | ```
3363 |
3364 | ### Setting up the instance
3365 |
3366 | Then initialize the NWS API base URL, validation schemas, and server instance:
3367 |
3368 | ```typescript
3369 | const NWS_API_BASE = "https://api.weather.gov";
3370 | const USER_AGENT = "weather-app/1.0";
3371 |
3372 | // Define Zod schemas for validation
3373 | const AlertsArgumentsSchema = z.object({
3374 | state: z.string().length(2),
3375 | });
3376 |
3377 | const ForecastArgumentsSchema = z.object({
3378 | latitude: z.number().min(-90).max(90),
3379 | longitude: z.number().min(-180).max(180),
3380 | });
3381 |
3382 | // Create server instance
3383 | const server = new Server(
3384 | {
3385 | name: "weather",
3386 | version: "1.0.0",
3387 | },
3388 | {
3389 | capabilities: {
3390 | tools: {},
3391 | },
3392 | }
3393 | );
3394 | ```
3395 |
3396 | ### Implementing tool listing
3397 |
3398 | We need to tell clients what tools are available. This `server.setRequestHandler` call will register this list for us:
3399 |
3400 | ```typescript
3401 | // List available tools
3402 | server.setRequestHandler(ListToolsRequestSchema, async () => {
3403 | return {
3404 | tools: [
3405 | {
3406 | name: "get-alerts",
3407 | description: "Get weather alerts for a state",
3408 | inputSchema: {
3409 | type: "object",
3410 | properties: {
3411 | state: {
3412 | type: "string",
3413 | description: "Two-letter state code (e.g. CA, NY)",
3414 | },
3415 | },
3416 | required: ["state"],
3417 | },
3418 | },
3419 | {
3420 | name: "get-forecast",
3421 | description: "Get weather forecast for a location",
3422 | inputSchema: {
3423 | type: "object",
3424 | properties: {
3425 | latitude: {
3426 | type: "number",
3427 | description: "Latitude of the location",
3428 | },
3429 | longitude: {
3430 | type: "number",
3431 | description: "Longitude of the location",
3432 | },
3433 | },
3434 | required: ["latitude", "longitude"],
3435 | },
3436 | },
3437 | ],
3438 | };
3439 | });
3440 | ```
3441 |
3442 | This defines our two tools: `get-alerts` and `get-forecast`.
3443 |
3444 | ### Helper functions
3445 |
3446 | Next, let's add our helper functions for querying and formatting the data from the National Weather Service API:
3447 |
3448 | ```typescript
3449 | // Helper function for making NWS API requests
3450 | async function makeNWSRequest<T>(url: string): Promise<T | null> {
3451 | const headers = {
3452 | "User-Agent": USER_AGENT,
3453 | Accept: "application/geo+json",
3454 | };
3455 |
3456 | try {
3457 | const response = await fetch(url, { headers });
3458 | if (!response.ok) {
3459 | throw new Error(`HTTP error! status: ${response.status}`);
3460 | }
3461 | return (await response.json()) as T;
3462 | } catch (error) {
3463 | console.error("Error making NWS request:", error);
3464 | return null;
3465 | }
3466 | }
3467 |
3468 | interface AlertFeature {
3469 | properties: {
3470 | event?: string;
3471 | areaDesc?: string;
3472 | severity?: string;
3473 | status?: string;
3474 | headline?: string;
3475 | };
3476 | }
3477 |
3478 | // Format alert data
3479 | function formatAlert(feature: AlertFeature): string {
3480 | const props = feature.properties;
3481 | return [
3482 | `Event: ${props.event || "Unknown"}`,
3483 | `Area: ${props.areaDesc || "Unknown"}`,
3484 | `Severity: ${props.severity || "Unknown"}`,
3485 | `Status: ${props.status || "Unknown"}`,
3486 | `Headline: ${props.headline || "No headline"}`,
3487 | "---",
3488 | ].join("\n");
3489 | }
3490 |
3491 | interface ForecastPeriod {
3492 | name?: string;
3493 | temperature?: number;
3494 | temperatureUnit?: string;
3495 | windSpeed?: string;
3496 | windDirection?: string;
3497 | shortForecast?: string;
3498 | }
3499 |
3500 | interface AlertsResponse {
3501 | features: AlertFeature[];
3502 | }
3503 |
3504 | interface PointsResponse {
3505 | properties: {
3506 | forecast?: string;
3507 | };
3508 | }
3509 |
3510 | interface ForecastResponse {
3511 | properties: {
3512 | periods: ForecastPeriod[];
3513 | };
3514 | }
3515 | ```
3516 |
3517 | ### Implementing tool execution
3518 |
3519 | The tool execution handler is responsible for actually executing the logic of each tool. Let's add it:
3520 |
3521 | ```typescript
3522 | // Handle tool execution
3523 | server.setRequestHandler(CallToolRequestSchema, async (request) => {
3524 | const { name, arguments: args } = request.params;
3525 |
3526 | try {
3527 | if (name === "get-alerts") {
3528 | const { state } = AlertsArgumentsSchema.parse(args);
3529 | const stateCode = state.toUpperCase();
3530 |
3531 | const alertsUrl = `${NWS_API_BASE}/alerts?area=${stateCode}`;
3532 | const alertsData = await makeNWSRequest<AlertsResponse>(alertsUrl);
3533 |
3534 | if (!alertsData) {
3535 | return {
3536 | content: [
3537 | {
3538 | type: "text",
3539 | text: "Failed to retrieve alerts data",
3540 | },
3541 | ],
3542 | };
3543 | }
3544 |
3545 | const features = alertsData.features || [];
3546 | if (features.length === 0) {
3547 | return {
3548 | content: [
3549 | {
3550 | type: "text",
3551 | text: `No active alerts for ${stateCode}`,
3552 | },
3553 | ],
3554 | };
3555 | }
3556 |
3557 |     const formattedAlerts = features.slice(0, 20).map(formatAlert); // only take the first 20 alerts
3558 | const alertsText = `Active alerts for ${stateCode}:\n\n${formattedAlerts.join(
3559 | "\n"
3560 | )}`;
3561 |
3562 | return {
3563 | content: [
3564 | {
3565 | type: "text",
3566 | text: alertsText,
3567 | },
3568 | ],
3569 | };
3570 | } else if (name === "get-forecast") {
3571 | const { latitude, longitude } = ForecastArgumentsSchema.parse(args);
3572 |
3573 | // Get grid point data
3574 | const pointsUrl = `${NWS_API_BASE}/points/${latitude.toFixed(
3575 | 4
3576 | )},${longitude.toFixed(4)}`;
3577 | const pointsData = await makeNWSRequest<PointsResponse>(pointsUrl);
3578 |
3579 | if (!pointsData) {
3580 | return {
3581 | content: [
3582 | {
3583 | type: "text",
3584 | text: `Failed to retrieve grid point data for coordinates: ${latitude}, ${longitude}. This location may not be supported by the NWS API (only US locations are supported).`,
3585 | },
3586 | ],
3587 | };
3588 | }
3589 |
3590 | const forecastUrl = pointsData.properties?.forecast;
3591 | if (!forecastUrl) {
3592 | return {
3593 | content: [
3594 | {
3595 | type: "text",
3596 | text: "Failed to get forecast URL from grid point data",
3597 | },
3598 | ],
3599 | };
3600 | }
3601 |
3602 | // Get forecast data
3603 | const forecastData = await makeNWSRequest<ForecastResponse>(forecastUrl);
3604 | if (!forecastData) {
3605 | return {
3606 | content: [
3607 | {
3608 | type: "text",
3609 | text: "Failed to retrieve forecast data",
3610 | },
3611 | ],
3612 | };
3613 | }
3614 |
3615 | const periods = forecastData.properties?.periods || [];
3616 | if (periods.length === 0) {
3617 | return {
3618 | content: [
3619 | {
3620 | type: "text",
3621 | text: "No forecast periods available",
3622 | },
3623 | ],
3624 | };
3625 | }
3626 |
3627 | // Format forecast periods
3628 | const formattedForecast = periods.map((period: ForecastPeriod) =>
3629 | [
3630 | `${period.name || "Unknown"}:`,
3631 |         `Temperature: ${period.temperature ?? "Unknown"}°${
3632 | period.temperatureUnit || "F"
3633 | }`,
3634 | `Wind: ${period.windSpeed || "Unknown"} ${
3635 | period.windDirection || ""
3636 | }`,
3637 | `${period.shortForecast || "No forecast available"}`,
3638 | "---",
3639 | ].join("\n")
3640 | );
3641 |
3642 | const forecastText = `Forecast for ${latitude}, ${longitude}:\n\n${formattedForecast.join(
3643 | "\n"
3644 | )}`;
3645 |
3646 | return {
3647 | content: [
3648 | {
3649 | type: "text",
3650 | text: forecastText,
3651 | },
3652 | ],
3653 | };
3654 | } else {
3655 | throw new Error(`Unknown tool: ${name}`);
3656 | }
3657 | } catch (error) {
3658 | if (error instanceof z.ZodError) {
3659 | throw new Error(
3660 | `Invalid arguments: ${error.errors
3661 | .map((e) => `${e.path.join(".")}: ${e.message}`)
3662 | .join(", ")}`
3663 | );
3664 | }
3665 | throw error;
3666 | }
3667 | });
3668 | ```
3669 |
3670 | ### Running the server
3671 |
3672 | Finally, implement the main function to run the server:
3673 |
3674 | ```typescript
3675 | // Start the server
3676 | async function main() {
3677 | const transport = new StdioServerTransport();
3678 | await server.connect(transport);
3679 | console.error("Weather MCP Server running on stdio");
3680 | }
3681 |
3682 | main().catch((error) => {
3683 | console.error("Fatal error in main():", error);
3684 | process.exit(1);
3685 | });
3686 | ```
3687 |
3688 | Make sure to run `npm run build` to build your server! This is a very important step in getting your server to connect.
3689 |
3690 | Let's now test your server from an existing MCP host, Claude for Desktop.
3691 |
3692 | ## Testing your server with Claude for Desktop
3693 |
3694 | <Note>
3695 | Unfortunately, Claude for Desktop is not yet available on Linux. Linux users can proceed to the [Building a client](/tutorials/building-a-client) tutorial for a workaround.
3696 | </Note>
3697 |
3698 | First, make sure you have Claude for Desktop installed. [You can install the latest version here.](https://claude.ai/download)
3699 |
3700 | Next, open your Claude for Desktop App configuration at `~/Library/Application Support/Claude/claude_desktop_config.json` in a text editor.
3701 |
3702 | For example, if you have [VS Code](https://code.visualstudio.com/) installed:
3703 |
3704 | <Tabs>
3705 | <Tab title="MacOS/Linux">
3706 | ```bash
3707 | code ~/Library/Application\ Support/Claude/claude_desktop_config.json
3708 | ```
3709 | </Tab>
3710 |
3711 | <Tab title="Windows">
3712 | ```powershell
3713 | code $env:AppData\Claude\claude_desktop_config.json
3714 | ```
3715 | </Tab>
3716 | </Tabs>
3717 |
3718 | Add this configuration (replace the parent folder path):
3719 |
3720 | <Tabs>
3721 | <Tab title="MacOS/Linux">
3722 | <CodeGroup>
3723 | ```json Node
3724 | {
3725 | "mcpServers": {
3726 | "weather": {
3727 | "command": "node",
3728 | "args": [
3729 | "/ABSOLUTE/PATH/TO/PARENT/FOLDER/weather/build/index.js"
3730 | ]
3731 | }
3732 | }
3733 | }
3734 | ```
3735 | </CodeGroup>
3736 | </Tab>
3737 |
3738 | <Tab title="Windows">
3739 | <CodeGroup>
3740 | ```json Node
3741 | {
3742 | "mcpServers": {
3743 | "weather": {
3744 | "command": "node",
3745 | "args": [
3746 |           "C:\\PATH\\TO\\PARENT\\FOLDER\\weather\\build\\index.js"
3747 | ]
3748 | }
3749 | }
3750 | }
3751 | ```
3752 | </CodeGroup>
3753 | </Tab>
3754 | </Tabs>
3755 |
3756 | This tells Claude for Desktop:
3757 |
3758 | 1. There's an MCP server named "weather"
3759 | 2. Launch it by running `node /ABSOLUTE/PATH/TO/PARENT/FOLDER/weather/build/index.js`
3760 |
3761 | Save the file, and restart **Claude for Desktop**.
3762 | </Tab>
3763 | </Tabs>
3764 |
3765 | ### Test with commands
3766 |
3767 | First, make sure Claude for Desktop is picking up the two tools we've exposed in our `weather` server. You can do this by looking for the hammer <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/claude-desktop-mcp-hammer-icon.svg" style={{display: 'inline', margin: 0, height: '1.3em'}} /> icon:
3768 |
3769 | <Frame>
3770 | <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/visual-indicator-mcp-tools.png" />
3771 | </Frame>
3772 |
3773 | After clicking on the hammer icon, you should see two tools listed:
3774 |
3775 | <Frame>
3776 | <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/available-mcp-tools.png" />
3777 | </Frame>
3778 |
3779 | If your server isn't being picked up by Claude for Desktop, proceed to the [Troubleshooting](#troubleshooting) section for debugging advice.
3780 |
3781 | You can now test your server by running the following commands in Claude for Desktop:
3782 |
3783 | * What's the weather in Sacramento?
3784 | * What are the active weather alerts in Texas?
3785 |
3786 | <Frame>
3787 | <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/current-weather.png" />
3788 | </Frame>
3789 |
3790 | <Frame>
3791 | <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/weather-alerts.png" />
3792 | </Frame>
3793 |
3794 | <Note>
3795 |   Since this is the US National Weather Service, the queries will only work for US locations.
3796 | </Note>
3797 |
3798 | ## What's happening under the hood
3799 |
3800 | When you ask a question:
3801 |
3802 | 1. The client sends your question to Claude
3803 | 2. Claude analyzes the available tools and decides which one(s) to use
3804 | 3. The client executes the chosen tool(s) through the MCP server
3805 | 4. The results are sent back to Claude
3806 | 5. Claude formulates a natural language response
3807 | 6. The response is displayed to you!
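
In Python-flavored pseudocode, the loop above looks roughly like this (the `call_claude` and `call_tool` helpers are stubs for illustration, not a real SDK, and the real message format differs):

```python
# Stubbed host-side loop: stand-ins for the Claude API and an MCP tool call
def call_claude(messages, tools):
    # Pretend Claude calls get-forecast once, then answers in plain text
    if not any(m["role"] == "tool" for m in messages):
        return {"type": "tool_use", "name": "get-forecast",
                "input": {"latitude": 38.58, "longitude": -121.49}}
    return {"type": "text", "text": "It's sunny in Sacramento."}

def call_tool(name, args):
    return f"(result of {name} with {args})"

def answer(question, tools):
    messages = [{"role": "user", "content": question}]
    response = call_claude(messages, tools)              # steps 1-2
    while response["type"] == "tool_use":
        result = call_tool(response["name"], response["input"])  # step 3
        messages.append({"role": "tool", "content": result})     # step 4
        response = call_claude(messages, tools)                  # step 5
    return response["text"]                              # step 6

print(answer("What's the weather in Sacramento?", tools=[]))
```

The client tutorial below implements this same loop against the real Anthropic API and a live MCP session.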
3808 |
3809 | ## Troubleshooting
3810 |
3811 | <AccordionGroup>
3812 | <Accordion title="Weather API Issues">
3813 | **Error: Failed to retrieve grid point data**
3814 |
3815 | This usually means either:
3816 |
3817 | 1. The coordinates are outside the US
3818 | 2. The NWS API is having issues
3819 | 3. You're being rate limited
3820 |
3821 | Fix:
3822 |
3823 | * Verify you're using US coordinates
3824 | * Add a small delay between requests
3825 | * Check the NWS API status page
3826 |
3827 | **Error: No active alerts for \[STATE]**
3828 |
3829 | This isn't an error - it just means there are no current weather alerts for that state. Try a different state or check during severe weather.
3830 | </Accordion>
3831 |
3832 | <Accordion title="Claude for Desktop Integration Issues">
3833 | **Server not showing up in Claude**
3834 |
3835 | 1. Check your configuration file syntax
3836 | 2. Make sure the path to your project is correct
3837 | 3. Restart Claude for Desktop completely
3838 |
3839 | You can also check Claude's logs for errors like so:
3840 |
3841 | ```bash
3842 | # Check Claude's logs for errors
3843 | tail -n 20 -f ~/Library/Logs/Claude/mcp*.log
3844 | ```
3845 |
3846 | **Tool calls failing silently**
3847 |
3848 | If Claude attempts to use the tools but they fail:
3849 |
3850 | 1. Check Claude's logs for errors
3851 | 2. Verify your server runs without errors
3852 | 3. Try restarting Claude for Desktop
3853 | </Accordion>
3854 | </AccordionGroup>
3855 |
3856 | <Note>
3857 | For more advanced troubleshooting, check out our guide on [Debugging MCP](/docs/tools/debugging)
3858 | </Note>
3859 |
3860 | ## Next steps
3861 |
3862 | <CardGroup cols={2}>
3863 | <Card title="Building a client" icon="outlet" href="/tutorials/building-a-client">
3864 |     Learn how to build your own MCP client that can connect to your server
3865 | </Card>
3866 |
3867 | <Card title="Example servers" icon="grid" href="/examples">
3868 | Check out our gallery of official MCP servers and implementations
3869 | </Card>
3870 |
3871 | <Card title="Debugging Guide" icon="bug" href="/docs/tools/debugging">
3872 | Learn how to effectively debug MCP servers and integrations
3873 | </Card>
3874 |
3875 | <Card title="Building MCP with LLMs" icon="comments" href="/building-mcp-with-llms">
3876 | Learn how to use LLMs like Claude to speed up your MCP development
3877 | </Card>
3878 | </CardGroup>
3879 |
3880 |
3881 | # Building MCP clients
3882 |
3883 | Learn how to build your first client in MCP
3884 |
3885 | In this tutorial, you'll learn how to build an LLM-powered chatbot client that connects to MCP servers. It helps to have gone through the [Quickstart tutorial](/quickstart) that guides you through the basics of building your first server.
3886 |
3887 | <Tabs>
3888 | <Tab title="Python">
3889 | You can find the complete code for this tutorial [here.](https://gist.github.com/zckly/f3f28ea731e096e53b39b47bf0a2d4b1)
3890 |
3891 | ## System Requirements
3892 |
3893 | Before starting, ensure your system meets these requirements:
3894 |
3895 | * Mac or Windows computer
3896 | * Latest Python version installed
3897 | * Latest version of `uv` installed
3898 |
3899 | ## Setting Up Your Environment
3900 |
3901 | First, create a new Python project with `uv`:
3902 |
3903 | ```bash
3904 | # Create project directory
3905 | uv init mcp-client
3906 | cd mcp-client
3907 |
3908 | # Create virtual environment
3909 | uv venv
3910 |
3911 | # Activate virtual environment
3912 | # On Windows:
3913 | .venv\Scripts\activate
3914 | # On Unix or MacOS:
3915 | source .venv/bin/activate
3916 |
3917 | # Install required packages
3918 | uv add mcp anthropic python-dotenv
3919 |
3920 | # Remove boilerplate files
3921 | rm hello.py
3922 |
3923 | # Create our main file
3924 | touch client.py
3925 | ```
3926 |
3927 | ## Setting Up Your API Key
3928 |
3929 | You'll need an Anthropic API key from the [Anthropic Console](https://console.anthropic.com/settings/keys).
3930 |
3931 | Create a `.env` file to store it:
3932 |
3933 | ```bash
3934 | # Create .env file
3935 | touch .env
3936 | ```
3937 |
3938 | Add your key to the `.env` file:
3939 |
3940 | ```bash
3941 | ANTHROPIC_API_KEY=<your key here>
3942 | ```
3943 |
3944 | Add `.env` to your `.gitignore`:
3945 |
3946 | ```bash
3947 | echo ".env" >> .gitignore
3948 | ```
3949 |
3950 | <Warning>
3951 | Make sure you keep your `ANTHROPIC_API_KEY` secure!
3952 | </Warning>
3953 |
3954 | ## Creating the Client
3955 |
3956 | ### Basic Client Structure
3957 |
3958 | First, let's set up our imports and create the basic client class:
3959 |
3960 | ```python
3961 | import asyncio
3962 | from typing import Optional
3963 | from contextlib import AsyncExitStack
3964 |
3965 | from mcp import ClientSession, StdioServerParameters
3966 | from mcp.client.stdio import stdio_client
3967 |
3968 | from anthropic import Anthropic
3969 | from dotenv import load_dotenv
3970 |
3971 | load_dotenv() # load environment variables from .env
3972 |
3973 | class MCPClient:
3974 | def __init__(self):
3975 | # Initialize session and client objects
3976 | self.session: Optional[ClientSession] = None
3977 | self.exit_stack = AsyncExitStack()
3978 | self.anthropic = Anthropic()
3979 | # methods will go here
3980 | ```
3981 |
3982 | ### Server Connection Management
3983 |
3984 | Next, we'll implement the method to connect to an MCP server:
3985 |
3986 | ```python
3987 | async def connect_to_server(self, server_script_path: str):
3988 | """Connect to an MCP server
3989 |
3990 | Args:
3991 | server_script_path: Path to the server script (.py or .js)
3992 | """
3993 | is_python = server_script_path.endswith('.py')
3994 | is_js = server_script_path.endswith('.js')
3995 | if not (is_python or is_js):
3996 | raise ValueError("Server script must be a .py or .js file")
3997 |
3998 | command = "python" if is_python else "node"
3999 | server_params = StdioServerParameters(
4000 | command=command,
4001 | args=[server_script_path],
4002 | env=None
4003 | )
4004 |
4005 | stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params))
4006 | self.stdio, self.write = stdio_transport
4007 | self.session = await self.exit_stack.enter_async_context(ClientSession(self.stdio, self.write))
4008 |
4009 | await self.session.initialize()
4010 |
4011 | # List available tools
4012 | response = await self.session.list_tools()
4013 | tools = response.tools
4014 | print("\nConnected to server with tools:", [tool.name for tool in tools])
4015 | ```
4016 |
4017 | ### Query Processing Logic
4018 |
4019 | Now let's add the core functionality for processing queries and handling tool calls:
4020 |
4021 | ```python
4022 | async def process_query(self, query: str) -> str:
4023 | """Process a query using Claude and available tools"""
4024 | messages = [
4025 | {
4026 | "role": "user",
4027 | "content": query
4028 | }
4029 | ]
4030 |
4031 | response = await self.session.list_tools()
4032 | available_tools = [{
4033 | "name": tool.name,
4034 | "description": tool.description,
4035 | "input_schema": tool.inputSchema
4036 | } for tool in response.tools]
4037 |
4038 | # Initial Claude API call
4039 | response = self.anthropic.messages.create(
4040 | model="claude-3-5-sonnet-20241022",
4041 | max_tokens=1000,
4042 | messages=messages,
4043 | tools=available_tools
4044 | )
4045 |
4046 | # Process response and handle tool calls
4047 | tool_results = []
4048 | final_text = []
4049 |
4050 | for content in response.content:
4051 | if content.type == 'text':
4052 | final_text.append(content.text)
4053 | elif content.type == 'tool_use':
4054 | tool_name = content.name
4055 | tool_args = content.input
4056 |
4057 | # Execute tool call
4058 | result = await self.session.call_tool(tool_name, tool_args)
4059 | tool_results.append({"call": tool_name, "result": result})
4060 | final_text.append(f"[Calling tool {tool_name} with args {tool_args}]")
4061 |
4062 | # Continue conversation with tool results
4063 | if hasattr(content, 'text') and content.text:
4064 | messages.append({
4065 | "role": "assistant",
4066 | "content": content.text
4067 | })
4068 | messages.append({
4069 | "role": "user",
4070 | "content": result.content
4071 | })
4072 |
4073 | # Get next response from Claude
4074 | response = self.anthropic.messages.create(
4075 | model="claude-3-5-sonnet-20241022",
4076 | max_tokens=1000,
4077 | messages=messages,
4078 | )
4079 |
4080 | final_text.append(response.content[0].text)
4081 |
4082 | return "\n".join(final_text)
4083 | ```
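
The main reshaping between the two APIs is the `inputSchema` → `input_schema` rename that the Anthropic Messages API expects; a concrete instance with sample values:

```python
# An MCP Tool as returned by list_tools(), written out as a dict (sample values)
mcp_tool = {
    "name": "get-alerts",
    "description": "Get weather alerts for a state",
    "inputSchema": {
        "type": "object",
        "properties": {"state": {"type": "string"}},
        "required": ["state"],
    },
}

# Reshaped for the Anthropic Messages API: inputSchema -> input_schema
anthropic_tool = {
    "name": mcp_tool["name"],
    "description": mcp_tool["description"],
    "input_schema": mcp_tool["inputSchema"],
}
```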
4084 |
4085 | ### Interactive Chat Interface
4086 |
4087 | Now we'll add the chat loop and cleanup functionality:
4088 |
4089 | ```python
4090 | async def chat_loop(self):
4091 | """Run an interactive chat loop"""
4092 | print("\nMCP Client Started!")
4093 | print("Type your queries or 'quit' to exit.")
4094 |
4095 | while True:
4096 | try:
4097 | query = input("\nQuery: ").strip()
4098 |
4099 | if query.lower() == 'quit':
4100 | break
4101 |
4102 | response = await self.process_query(query)
4103 | print("\n" + response)
4104 |
4105 | except Exception as e:
4106 | print(f"\nError: {str(e)}")
4107 |
4108 | async def cleanup(self):
4109 | """Clean up resources"""
4110 | await self.exit_stack.aclose()
4111 | ```
4112 |
4113 | ### Main Entry Point
4114 |
4115 | Finally, we'll add the main execution logic:
4116 |
4117 | ```python
4118 | async def main():
4119 | if len(sys.argv) < 2:
4120 | print("Usage: python client.py <path_to_server_script>")
4121 | sys.exit(1)
4122 |
4123 | client = MCPClient()
4124 | try:
4125 | await client.connect_to_server(sys.argv[1])
4126 | await client.chat_loop()
4127 | finally:
4128 | await client.cleanup()
4129 |
4130 | if __name__ == "__main__":
4131 | import sys
4132 | asyncio.run(main())
4133 | ```
4134 |
4135 | You can find the complete `client.py` file [here.](https://gist.github.com/zckly/f3f28ea731e096e53b39b47bf0a2d4b1)
4136 |
4137 | ## Key Components Explained
4138 |
4139 | ### 1. Client Initialization
4140 |
4141 | * The `MCPClient` class initializes with session management and API clients
4142 | * Uses `AsyncExitStack` for proper resource management
4143 | * Configures the Anthropic client for Claude interactions
4144 |
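The initialization described above can be sketched as follows. This is a trimmed illustration only; the full class, including construction of the Anthropic client, appears earlier in this tutorial:

```python
import asyncio
from contextlib import AsyncExitStack
from typing import Optional

class MCPClient:
    def __init__(self):
        # The session is created later, when connect_to_server() runs
        self.session: Optional[object] = None
        # The exit stack owns the transport and session lifetimes, so a
        # single aclose() tears everything down in reverse order
        self.exit_stack = AsyncExitStack()
        # self.anthropic = Anthropic()  # omitted here; needs ANTHROPIC_API_KEY

    async def cleanup(self):
        await self.exit_stack.aclose()

client = MCPClient()
asyncio.run(client.cleanup())
```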
4145 | ### 2. Server Connection
4146 |
4147 | * Supports both Python and Node.js servers
4148 | * Validates server script type
4149 | * Sets up proper communication channels
4150 | * Initializes the session and lists available tools
4151 |
4152 | ### 3. Query Processing
4153 |
4154 | * Maintains conversation context
4155 | * Handles Claude's responses and tool calls
4156 | * Manages the message flow between Claude and tools
4157 | * Combines results into a coherent response
4158 |
4159 | ### 4. Interactive Interface
4160 |
4161 | * Provides a simple command-line interface
4162 | * Handles user input and displays responses
4163 | * Includes basic error handling
4164 | * Allows graceful exit
4165 |
4166 | ### 5. Resource Management
4167 |
4168 | * Proper cleanup of resources
4169 | * Error handling for connection issues
4170 | * Graceful shutdown procedures
4171 |
4172 | ## Common Customization Points
4173 |
4174 | 1. **Tool Handling**
4175 | * Modify `process_query()` to handle specific tool types
4176 | * Add custom error handling for tool calls
4177 | * Implement tool-specific response formatting
4178 |
4179 | 2. **Response Processing**
4180 | * Customize how tool results are formatted
4181 | * Add response filtering or transformation
4182 | * Implement custom logging
4183 |
4184 | 3. **User Interface**
4185 | * Add a GUI or web interface
4186 | * Implement rich console output
4187 | * Add command history or auto-completion
4188 |
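As one example of the first point, tool execution can be wrapped so that a single failing tool reports a readable error instead of crashing the chat loop. `safe_call_tool` and `FakeSession` are hypothetical names used only for this sketch, not part of the tutorial code:

```python
import asyncio

async def safe_call_tool(session, name: str, args: dict) -> str:
    """Run a tool call, converting any failure into a readable message."""
    try:
        result = await session.call_tool(name, args)
        return str(result.content)
    except Exception as e:
        return f"[Tool '{name}' failed: {e}]"

# Stand-in session whose tool always raises, to demonstrate the error path
class FakeSession:
    async def call_tool(self, name, args):
        raise RuntimeError("missing API key")

output = asyncio.run(safe_call_tool(FakeSession(), "get_forecast", {"city": "SF"}))
print(output)  # [Tool 'get_forecast' failed: missing API key]
```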
4189 | ## Running the Client
4190 |
4191 | To run your client with any MCP server:
4192 |
4193 | ```bash
4194 | uv run client.py path/to/server.py # python server
4195 | uv run client.py path/to/build/index.js # node server
4196 | ```
4197 |
4198 | <Note>
4199 | If you're continuing the weather tutorial from the quickstart, your command might look something like this: `python client.py .../weather/src/weather/server.py`
4200 | </Note>
4201 |
4202 | The client will:
4203 |
4204 | 1. Connect to the specified server
4205 | 2. List available tools
4206 | 3. Start an interactive chat session where you can:
4207 | * Enter queries
4208 | * See tool executions
4209 | * Get responses from Claude
4210 |
4211 | Here's an example of what it should look like if connected to the weather server from the quickstart:
4212 |
4213 | <Frame>
4214 | <img src="https://mintlify.s3.us-west-1.amazonaws.com/mcp/images/client-claude-cli-python.png" />
4215 | </Frame>
4216 |
4217 | ## How It Works
4218 |
4219 | When you submit a query:
4220 |
4221 | 1. The client gets the list of available tools from the server
4222 | 2. Your query is sent to Claude along with tool descriptions
4223 | 3. Claude decides which tools (if any) to use
4224 | 4. The client executes any requested tool calls through the server
4225 | 5. Results are sent back to Claude
4226 | 6. Claude provides a natural language response
4227 | 7. The response is displayed to you
4228 |
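Steps 4–5 rely on the Messages API's pairing rule: a tool's output goes back to Claude as a `tool_result` content block whose `tool_use_id` matches the `id` of the `tool_use` block that requested it. The values below are illustrative only:

```python
# Shape of one tool round-trip in the messages list (illustrative values)
messages = [
    {"role": "user", "content": "What's the weather in SF?"},
    # Claude's turn: a tool_use block carrying a unique id
    {"role": "assistant", "content": [
        {"type": "tool_use", "id": "toolu_123",
         "name": "get_forecast", "input": {"city": "SF"}},
    ]},
    # The client's turn: a tool_result echoing that id
    {"role": "user", "content": [
        {"type": "tool_result", "tool_use_id": "toolu_123",
         "content": "Sunny, 18°C"},
    ]},
]

# The ids must match, or the API rejects the request
assert messages[2]["content"][0]["tool_use_id"] == messages[1]["content"][0]["id"]
```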
4229 | ## Best practices
4230 |
4231 | 1. **Error Handling**
4232 |    * Always wrap tool calls in try/except blocks
4233 | * Provide meaningful error messages
4234 | * Gracefully handle connection issues
4235 |
4236 | 2. **Resource Management**
4237 | * Use `AsyncExitStack` for proper cleanup
4238 | * Close connections when done
4239 | * Handle server disconnections
4240 |
4241 | 3. **Security**
4242 | * Store API keys securely in `.env`
4243 | * Validate server responses
4244 | * Be cautious with tool permissions
4245 |
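On the `.env` point: such files are usually loaded with the `python-dotenv` package, but the idea is small enough to sketch with the standard library. `load_env` here is an illustrative stand-in, not the library's API:

```python
import os
import tempfile
from pathlib import Path

def load_env(path: str) -> None:
    """Tiny stand-in for python-dotenv: export KEY=VALUE lines, skipping
    comments, without overwriting variables already set in the environment."""
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# Demonstrate against a throwaway file; a real client would ship a .env
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("# local secrets\nANTHROPIC_API_KEY=sk-placeholder\n")
load_env(f.name)
print("ANTHROPIC_API_KEY" in os.environ)  # True
```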
4246 | ## Troubleshooting
4247 |
4248 | ### Server Path Issues
4249 |
4250 | * Double-check the path to your server script is correct
4251 | * Use the absolute path if the relative path isn't working
4252 | * For Windows users, make sure to use forward slashes (/) or escaped backslashes (\\) in the path
4253 | * Verify the server file has the correct extension (.py for Python or .js for Node.js)
4254 |
4255 | Example of correct path usage:
4256 |
4257 | ```bash
4258 | # Relative path
4259 | uv run client.py ./server/weather.py
4260 |
4261 | # Absolute path
4262 | uv run client.py /Users/username/projects/mcp-server/weather.py
4263 |
4264 | # Windows path (either format works)
4265 | uv run client.py C:/projects/mcp-server/weather.py
4266 | uv run client.py C:\\projects\\mcp-server\\weather.py
4267 | ```
4268 |
4269 | ### Response Timing
4270 |
4271 | * The first response might take up to 30 seconds to return
4272 | * This is normal and happens while:
4273 | * The server initializes
4274 | * Claude processes the query
4275 | * Tools are being executed
4276 | * Subsequent responses are typically faster
4277 | * Don't interrupt the process during this initial waiting period
4278 |
4279 | ### Common Error Messages
4280 |
4281 | If you see:
4282 |
4283 | * `FileNotFoundError`: Check your server path
4284 | * `Connection refused`: Ensure the server is running and the path is correct
4285 | * `Tool execution failed`: Verify the tool's required environment variables are set
4286 | * `Timeout error`: Consider increasing the timeout in your client configuration
4287 | </Tab>
4288 | </Tabs>
4289 |
4290 | ## Next steps
4291 |
4292 | <CardGroup cols={2}>
4293 | <Card title="Example servers" icon="grid" href="/examples">
4294 | Check out our gallery of official MCP servers and implementations
4295 | </Card>
4296 |
4297 | <Card title="Clients" icon="cubes" href="/clients">
4298 | View the list of clients that support MCP integrations
4299 | </Card>
4300 |
4301 | <Card title="Building MCP with LLMs" icon="comments" href="/building-mcp-with-llms">
4302 | Learn how to use LLMs like Claude to speed up your MCP development
4303 | </Card>
4304 |
4305 | <Card title="Core architecture" icon="sitemap" href="/docs/concepts/architecture">
4306 | Understand how MCP connects clients, servers, and LLMs
4307 | </Card>
4308 | </CardGroup>
4309 |
4310 |
4311 | # Building MCP with LLMs
4312 |
4313 | Speed up your MCP development using LLMs such as Claude!
4314 |
4315 | This guide will help you use LLMs to build custom Model Context Protocol (MCP) servers and clients. We'll focus on Claude for this tutorial, but you can do this with any frontier LLM.
4316 |
4317 | ## Preparing the documentation
4318 |
4319 | Before starting, gather the necessary documentation to help Claude understand MCP:
4320 |
4321 | 1. Visit [https://modelcontextprotocol.io/llms-full.txt](https://modelcontextprotocol.io/llms-full.txt) and copy the full documentation text
4322 | 2. Navigate to either the [MCP TypeScript SDK](https://github.com/modelcontextprotocol/typescript-sdk) or [Python SDK repository](https://github.com/modelcontextprotocol/python-sdk)
4323 | 3. Copy the README files and other relevant documentation
4324 | 4. Paste these documents into your conversation with Claude
4325 |
4326 | ## Describing your server
4327 |
4328 | Once you've provided the documentation, clearly describe to Claude what kind of server you want to build. Be specific about:
4329 |
4330 | * What resources your server will expose
4331 | * What tools it will provide
4332 | * Any prompts it should offer
4333 | * What external systems it needs to interact with
4334 |
4335 | For example:
4336 |
4337 | ```
4338 | Build an MCP server that:
4339 | - Connects to my company's PostgreSQL database
4340 | - Exposes table schemas as resources
4341 | - Provides tools for running read-only SQL queries
4342 | - Includes prompts for common data analysis tasks
4343 | ```
4344 |
4345 | ## Working with Claude
4346 |
4347 | When working with Claude on MCP servers:
4348 |
4349 | 1. Start with the core functionality first, then iterate to add more features
4350 | 2. Ask Claude to explain any parts of the code you don't understand
4351 | 3. Request modifications or improvements as needed
4352 | 4. Have Claude help you test the server and handle edge cases
4353 |
4354 | Claude can help implement all the key MCP features:
4355 |
4356 | * Resource management and exposure
4357 | * Tool definitions and implementations
4358 | * Prompt templates and handlers
4359 | * Error handling and logging
4360 | * Connection and transport setup
4361 |
4362 | ## Best practices
4363 |
4364 | When building MCP servers with Claude:
4365 |
4366 | * Break down complex servers into smaller pieces
4367 | * Test each component thoroughly before moving on
4368 | * Keep security in mind - validate inputs and limit access appropriately
4369 | * Document your code well for future maintenance
4370 | * Follow MCP protocol specifications carefully
4371 |
4372 | ## Next steps
4373 |
4374 | After Claude helps you build your server:
4375 |
4376 | 1. Review the generated code carefully
4377 | 2. Test the server with the MCP Inspector tool
4378 | 3. Connect it to Claude.app or other MCP clients
4379 | 4. Iterate based on real usage and feedback
4380 |
4381 | Remember that Claude can help you modify and improve your server as requirements change over time.
4382 |
4383 | Need more guidance? Just ask Claude specific questions about implementing MCP features or troubleshooting issues that arise.
4384 |
4385 |
```