# Directory Structure
```
├── .env
├── .gitattributes
├── .gitignore
├── docs
│ ├── flux_docs.md
│ ├── mcp_server.md
│ ├── mcp.md
│ └── openai_docs.md
├── flux-server
│ ├── .gitignore
│ ├── package-lock.json
│ ├── package.json
│ ├── README.md
│ ├── src
│ │ ├── index.ts
│ │ └── types.ts
│ └── tsconfig.json
├── LICENSE
├── openai-server
│ ├── .gitignore
│ ├── package-lock.json
│ ├── package.json
│ ├── README.md
│ ├── src
│ │ ├── index.ts
│ │ └── types.ts
│ └── tsconfig.json
└── README.md
```
# Files
--------------------------------------------------------------------------------
/.env:
--------------------------------------------------------------------------------
```
1 | REPLICATE_API_TOKEN=your_replicate_token_here
2 | OPENAI_API_KEY=your_openai_key_here
3 |
```
--------------------------------------------------------------------------------
/flux-server/.gitignore:
--------------------------------------------------------------------------------
```
1 | node_modules/
2 | build/
3 | *.log
4 | .env*
```
--------------------------------------------------------------------------------
/openai-server/.gitignore:
--------------------------------------------------------------------------------
```
1 | node_modules/
2 | build/
3 | *.log
4 | .env*
```
--------------------------------------------------------------------------------
/.gitattributes:
--------------------------------------------------------------------------------
```
1 | package-lock.json linguist-generated=true
2 |
```
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
```
1 | # Logs
2 | logs
3 | *.log
4 | npm-debug.log*
5 | yarn-debug.log*
6 | yarn-error.log*
7 | lerna-debug.log*
8 | .pnpm-debug.log*
9 |
10 | # Diagnostic reports (https://nodejs.org/api/report.html)
11 | report.[0-9]*.[0-9]*.[0-9]*.[0-9]*.json
12 |
13 | # Runtime data
14 | pids
15 | *.pid
16 | *.seed
17 | *.pid.lock
18 |
19 | # Directory for instrumented libs generated by jscoverage/JSCover
20 | lib-cov
21 |
22 | # Coverage directory used by tools like istanbul
23 | coverage
24 | *.lcov
25 |
26 | # nyc test coverage
27 | .nyc_output
28 |
29 | # Grunt intermediate storage (https://gruntjs.com/creating-plugins#storing-task-files)
30 | .grunt
31 |
32 | # Bower dependency directory (https://bower.io/)
33 | bower_components
34 |
35 | # node-waf configuration
36 | .lock-wscript
37 |
38 | # Compiled binary addons (https://nodejs.org/api/addons.html)
39 | build/Release
40 |
41 | # Dependency directories
42 | node_modules/
43 | jspm_packages/
44 |
45 | # Snowpack dependency directory (https://snowpack.dev/)
46 | web_modules/
47 |
48 | # TypeScript cache
49 | *.tsbuildinfo
50 |
51 | # Optional npm cache directory
52 | .npm
53 |
54 | # Optional eslint cache
55 | .eslintcache
56 |
57 | # Optional stylelint cache
58 | .stylelintcache
59 |
60 | # Microbundle cache
61 | .rpt2_cache/
62 | .rts2_cache_cjs/
63 | .rts2_cache_es/
64 | .rts2_cache_umd/
65 |
66 | # Optional REPL history
67 | .node_repl_history
68 |
69 | # Output of 'npm pack'
70 | *.tgz
71 |
72 | # Yarn Integrity file
73 | .yarn-integrity
74 |
75 | # dotenv environment variable files
76 | .env
77 | .env.development.local
78 | .env.test.local
79 | .env.production.local
80 | .env.local
81 |
82 | # parcel-bundler cache (https://parceljs.org/)
83 | .cache
84 | .parcel-cache
85 |
86 | # Next.js build output
87 | .next
88 | out
89 |
90 | # Nuxt.js build / generate output
91 | .nuxt
92 | dist
93 |
94 | # Gatsby files
95 | .cache/
96 | # Comment in the public line in if your project uses Gatsby and not Next.js
97 | # https://nextjs.org/blog/next-9-1#public-directory-support
98 | # public
99 |
100 | # vuepress build output
101 | .vuepress/dist
102 |
103 | # vuepress v2.x temp and cache directory
104 | .temp
105 | .cache
106 |
107 | # Docusaurus cache and generated files
108 | .docusaurus
109 |
110 | # Serverless directories
111 | .serverless/
112 |
113 | # FuseBox cache
114 | .fusebox/
115 |
116 | # DynamoDB Local files
117 | .dynamodb/
118 |
119 | # TernJS port file
120 | .tern-port
121 |
122 | # Stores VSCode versions used for testing VSCode extensions
123 | .vscode-test
124 |
125 | # yarn v2
126 | .yarn/cache
127 | .yarn/unplugged
128 | .yarn/build-state.yml
129 | .yarn/install-state.gz
130 | .pnp.*
131 |
132 | build/
133 |
134 | gcp-oauth.keys.json
135 | .*-server-credentials.json
136 |
137 | # Byte-compiled / optimized / DLL files
138 | __pycache__/
139 | *.py[cod]
140 | *$py.class
141 |
142 | # C extensions
143 | *.so
144 |
145 | # Distribution / packaging
146 | .Python
147 | build/
148 | develop-eggs/
149 | dist/
150 | downloads/
151 | eggs/
152 | .eggs/
153 | lib/
154 | lib64/
155 | parts/
156 | sdist/
157 | var/
158 | wheels/
159 | share/python-wheels/
160 | *.egg-info/
161 | .installed.cfg
162 | *.egg
163 | MANIFEST
164 |
165 | # PyInstaller
166 | # Usually these files are written by a python script from a template
167 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
168 | *.manifest
169 | *.spec
170 |
171 | # Installer logs
172 | pip-log.txt
173 | pip-delete-this-directory.txt
174 |
175 | # Unit test / coverage reports
176 | htmlcov/
177 | .tox/
178 | .nox/
179 | .coverage
180 | .coverage.*
181 | .cache
182 | nosetests.xml
183 | coverage.xml
184 | *.cover
185 | *.py,cover
186 | .hypothesis/
187 | .pytest_cache/
188 | cover/
189 |
190 | # Translations
191 | *.mo
192 | *.pot
193 |
194 | # Django stuff:
195 | *.log
196 | local_settings.py
197 | db.sqlite3
198 | db.sqlite3-journal
199 |
200 | # Flask stuff:
201 | instance/
202 | .webassets-cache
203 |
204 | # Scrapy stuff:
205 | .scrapy
206 |
207 | # Sphinx documentation
208 | docs/_build/
209 |
210 | # PyBuilder
211 | .pybuilder/
212 | target/
213 |
214 | # Jupyter Notebook
215 | .ipynb_checkpoints
216 |
217 | # IPython
218 | profile_default/
219 | ipython_config.py
220 |
221 | # pyenv
222 | # For a library or package, you might want to ignore these files since the code is
223 | # intended to run in multiple environments; otherwise, check them in:
224 | # .python-version
225 |
226 | # pipenv
227 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
228 | # However, in case of collaboration, if having platform-specific dependencies or dependencies
229 | # having no cross-platform support, pipenv may install dependencies that don't work, or not
230 | # install all needed dependencies.
231 | #Pipfile.lock
232 |
233 | # poetry
234 | # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
235 | # This is especially recommended for binary packages to ensure reproducibility, and is more
236 | # commonly ignored for libraries.
237 | # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
238 | #poetry.lock
239 |
240 | # pdm
241 | # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
242 | #pdm.lock
243 | # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
244 | # in version control.
245 | # https://pdm.fming.dev/latest/usage/project/#working-with-version-control
246 | .pdm.toml
247 | .pdm-python
248 | .pdm-build/
249 |
250 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
251 | __pypackages__/
252 |
253 | # Celery stuff
254 | celerybeat-schedule
255 | celerybeat.pid
256 |
257 | # SageMath parsed files
258 | *.sage.py
259 |
260 | # Environments
261 | .env
262 | .venv
263 | env/
264 | venv/
265 | ENV/
266 | env.bak/
267 | venv.bak/
268 |
269 | # Spyder project settings
270 | .spyderproject
271 | .spyproject
272 |
273 | # Rope project settings
274 | .ropeproject
275 |
276 | # mkdocs documentation
277 | /site
278 |
279 | # mypy
280 | .mypy_cache/
281 | .dmypy.json
282 | dmypy.json
283 |
284 | # Pyre type checker
285 | .pyre/
286 |
287 | # pytype static type analyzer
288 | .pytype/
289 |
290 | # Cython debug symbols
291 | cython_debug/
292 |
293 | .DS_Store
294 |
295 | # PyCharm
296 | # JetBrains specific template is maintained in a separate JetBrains.gitignore that can
297 | # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
298 | # and can be added to the global gitignore or merged into this file. For a more nuclear
299 | # option (not recommended) you can uncomment the following to ignore the entire idea folder.
300 | #.idea/
301 |
```
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
```markdown
1 | # MCP Servers - OpenAI and Flux Integration
2 |
3 | This repository contains MCP (Model Context Protocol) servers for integrating with OpenAI's o1 model and the Flux image-generation model (via Replicate).
4 |
5 | ## Server Configurations
6 |
7 | ### OpenAI o1 MCP Server
8 |
9 | The o1 server enables interaction with OpenAI's o1-preview model through the MCP protocol.
10 |
11 | ```json
12 | {
13 | "mcpServers": {
14 | "openai": {
15 | "command": "openai-server",
16 | "env": {
17 | "OPENAI_API_KEY": "apikey"
18 | }
19 | }
20 | }
21 | }
22 |
23 | ```
24 |
25 | Key features:
26 | - Direct access to the o1-preview model (default), with per-request model selection
27 | - Simple prompt-in, text-out interface exposed as a `chat_completion` tool
28 | - API key supplied via the OPENAI_API_KEY environment variable
29 | - Errors returned to the client as tool results rather than crashing the server
30 |
31 | ### Flux MCP Server
32 |
33 | The Flux server provides image generation with the black-forest-labs/flux-dev model on Replicate through MCP.
34 |
35 | ```json
36 | {
37 | "mcpServers": {
38 | "flux": {
39 | "command": "flux-server",
40 | "env": {
41 | "REPLICATE_API_TOKEN": "your-replicate-token"
42 | }
43 | }
44 | }
45 | }
46 | ```
47 |
48 | Key features:
49 | - State-of-the-art image generation via a `generate_image` tool, with prompt, guidance, aspect-ratio, and output controls
50 |
51 | ## Usage
52 |
53 | 1. Clone or fork the repository:
54 | ```bash
55 | git clone https://github.com/AllAboutAI-YT/mcp-servers.git
56 | ```
57 |
58 | 2. Set up environment variables in your .env file (each server reads its own key):
59 | ```env
60 | REPLICATE_API_TOKEN=your_replicate_token_here
61 | OPENAI_API_KEY=your_openai_key_here
62 | ```
63 | 3. Start the servers using the configurations above.
64 |
65 | ## Security
66 |
67 | - Store API keys securely
68 | - Use environment variables for sensitive data
69 | - Never commit .env files; the provided .gitignore files already exclude them
70 |
71 | ## License
72 |
73 | MIT License - See LICENSE file for details.
74 |
```
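Since the two config fragments above live in the same `mcpServers` map, they can be combined into a single Claude Desktop config. A minimal sketch, assuming both servers were built with `npm run build`; the `/path/to` segments are placeholders for your local checkout, and the env var names match what each server actually reads:

```json
{
  "mcpServers": {
    "openai": {
      "command": "node",
      "args": ["/path/to/openai-server/build/index.js"],
      "env": { "OPENAI_API_KEY": "your-openai-api-key" }
    },
    "flux": {
      "command": "node",
      "args": ["/path/to/flux-server/build/index.js"],
      "env": { "REPLICATE_API_TOKEN": "your-replicate-token" }
    }
  }
}
```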
--------------------------------------------------------------------------------
/flux-server/README.md:
--------------------------------------------------------------------------------
```markdown
1 | # flux-server MCP Server
2 |
3 | A Model Context Protocol server for image generation
4 |
5 | This is a TypeScript-based MCP server that generates images with the Flux model on Replicate. It demonstrates core MCP concepts by providing:
6 |
7 | - A tool for generating images from text prompts
8 | - Input validation via a TypeScript type guard
9 | - Image results returned as MCP image content
10 |
11 | ## Features
12 |
13 | ### Tools
14 | - `generate_image` - Generate an image using the black-forest-labs/flux-dev model
15 | - Takes a text `prompt` as a required parameter
16 | - Optional parameters: `go_fast`, `guidance`, `megapixels`, `aspect_ratio`, and other Flux inputs
17 | - Returns the image as base64-encoded webp data plus its source URL
18 |
19 | ### Requirements
20 | - A Replicate API token in the `REPLICATE_API_TOKEN` environment variable (set it in your shell or a local .env file)
27 |
28 | ## Development
29 |
30 | Install dependencies:
31 | ```bash
32 | npm install
33 | ```
34 |
35 | Build the server:
36 | ```bash
37 | npm run build
38 | ```
39 |
40 | For development with auto-rebuild:
41 | ```bash
42 | npm run watch
43 | ```
44 |
45 | ## Installation
46 |
47 | To use with Claude Desktop, add the server config:
48 |
49 | On macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
50 | On Windows: `%APPDATA%/Claude/claude_desktop_config.json`
51 |
52 | ```json
53 | {
54 |   "mcpServers": {
55 |     "flux-server": {
56 |       "command": "/path/to/flux-server/build/index.js",
57 |       "env": { "REPLICATE_API_TOKEN": "your-replicate-token" }
58 |     }
59 |   }
60 | }
61 | ```
62 | ### Debugging
63 |
64 | Since MCP servers communicate over stdio, debugging can be challenging. We recommend using the [MCP Inspector](https://github.com/modelcontextprotocol/inspector), which is available as a package script:
65 |
66 | ```bash
67 | npm run inspector
68 | ```
69 |
70 | The Inspector will provide a URL to access debugging tools in your browser.
71 |
```
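The server exits on startup if `REPLICATE_API_TOKEN` is unset, so export it (or place it in a local `.env`) before launching the Inspector. A minimal sketch of a local test run; the token value is a placeholder:

```bash
export REPLICATE_API_TOKEN=r8_your_token_here  # placeholder token
npm run build
npm run inspector
```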
--------------------------------------------------------------------------------
/openai-server/README.md:
--------------------------------------------------------------------------------
```markdown
1 | # openai-server MCP Server
2 |
3 | A Model Context Protocol server for OpenAI chat completions
4 |
5 | This is a TypeScript-based MCP server that wraps OpenAI's chat completion API. It demonstrates core MCP concepts by providing:
6 |
7 | - A tool for generating text from a prompt
8 | - Input validation via a TypeScript type guard
9 | - Errors returned as tool results so clients can recover
10 |
11 | ## Features
12 |
13 | ### Tools
14 | - `chat_completion` - Generate text using OpenAI's chat completion API
15 | - Takes a text `prompt` as a required parameter
16 | - Optional `model` parameter (defaults to `o1-preview`)
17 | - Returns the model's reply as text content
18 |
19 | ### Requirements
20 | - An OpenAI API key in the `OPENAI_API_KEY` environment variable (set it in your shell or a local .env file)
27 |
28 | ## Development
29 |
30 | Install dependencies:
31 | ```bash
32 | npm install
33 | ```
34 |
35 | Build the server:
36 | ```bash
37 | npm run build
38 | ```
39 |
40 | For development with auto-rebuild:
41 | ```bash
42 | npm run watch
43 | ```
44 |
45 | ## Installation
46 |
47 | To use with Claude Desktop, add the server config:
48 |
49 | On macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
50 | On Windows: `%APPDATA%/Claude/claude_desktop_config.json`
51 |
52 | ```json
53 | {
54 |   "mcpServers": {
55 |     "openai-server": {
56 |       "command": "/path/to/openai-server/build/index.js",
57 |       "env": { "OPENAI_API_KEY": "your-openai-api-key" }
58 |     }
59 |   }
60 | }
61 | ```
62 | ### Debugging
63 |
64 | Since MCP servers communicate over stdio, debugging can be challenging. We recommend using the [MCP Inspector](https://github.com/modelcontextprotocol/inspector), which is available as a package script:
65 |
66 | ```bash
67 | npm run inspector
68 | ```
69 |
70 | The Inspector will provide a URL to access debugging tools in your browser.
71 |
```
--------------------------------------------------------------------------------
/flux-server/tsconfig.json:
--------------------------------------------------------------------------------
```json
1 | {
2 | "compilerOptions": {
3 | "target": "ES2022",
4 | "module": "Node16",
5 | "moduleResolution": "Node16",
6 | "outDir": "./build",
7 | "rootDir": "./src",
8 | "strict": true,
9 | "esModuleInterop": true,
10 | "skipLibCheck": true,
11 | "forceConsistentCasingInFileNames": true
12 | },
13 | "include": ["src/**/*"],
14 | "exclude": ["node_modules"]
15 | }
16 |
```
--------------------------------------------------------------------------------
/openai-server/tsconfig.json:
--------------------------------------------------------------------------------
```json
1 | {
2 | "compilerOptions": {
3 | "target": "ES2022",
4 | "module": "Node16",
5 | "moduleResolution": "Node16",
6 | "outDir": "./build",
7 | "rootDir": "./src",
8 | "strict": true,
9 | "esModuleInterop": true,
10 | "skipLibCheck": true,
11 | "forceConsistentCasingInFileNames": true
12 | },
13 | "include": ["src/**/*"],
14 | "exclude": ["node_modules"]
15 | }
16 |
```
--------------------------------------------------------------------------------
/openai-server/src/types.ts:
--------------------------------------------------------------------------------
```typescript
1 | export interface ChatCompletionArgs {
2 | prompt: string;
3 | model?: string;
4 | temperature?: number;
5 | max_tokens?: number;
6 | system_message?: string;
7 | }
8 |
9 | export function isValidChatArgs(args: any): args is ChatCompletionArgs {
10 | return (
11 | typeof args === "object" &&
12 | args !== null &&
13 | typeof args.prompt === "string" &&
14 | (args.model === undefined || typeof args.model === "string") &&
15 | (args.temperature === undefined || typeof args.temperature === "number") &&
16 | (args.max_tokens === undefined || typeof args.max_tokens === "number") &&
17 | (args.system_message === undefined || typeof args.system_message === "string")
18 | );
19 | }
20 |
```
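For illustration, this is how the guard narrows an untyped payload before use; `rawArgs` here is a hypothetical stand-in for the `request.params.arguments` value the server receives:

```typescript
import { ChatCompletionArgs, isValidChatArgs } from "./types.js";

// Simulate an untyped payload arriving over the wire
const rawArgs: unknown = JSON.parse('{"prompt": "Say hello", "temperature": 0.2}');

if (isValidChatArgs(rawArgs)) {
  // Inside this branch, TypeScript treats rawArgs as ChatCompletionArgs
  const args: ChatCompletionArgs = rawArgs;
  console.log(args.prompt, args.temperature ?? "(default temperature)");
} else {
  throw new Error("Invalid chat completion arguments");
}
```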
--------------------------------------------------------------------------------
/openai-server/package.json:
--------------------------------------------------------------------------------
```json
1 | {
2 | "name": "openai-server",
3 | "version": "0.1.0",
4 | "description": "A Model Context Protocol server",
5 | "private": true,
6 | "type": "module",
7 | "bin": {
8 | "openai-server": "./build/index.js"
9 | },
10 | "files": [
11 | "build"
12 | ],
13 | "scripts": {
14 | "build": "tsc && node -e \"require('fs').chmodSync('build/index.js', '755')\"",
15 | "prepare": "npm run build",
16 | "watch": "tsc --watch",
17 | "inspector": "npx @modelcontextprotocol/inspector build/index.js"
18 | },
19 | "dependencies": {
20 | "@modelcontextprotocol/sdk": "0.6.0",
21 | "dotenv": "^16.4.5",
22 | "openai": "^4.73.1"
23 | },
24 | "devDependencies": {
25 | "@types/node": "^20.11.24",
26 | "typescript": "^5.3.3"
27 | }
28 | }
29 |
```
--------------------------------------------------------------------------------
/flux-server/package.json:
--------------------------------------------------------------------------------
```json
1 | {
2 | "name": "flux-server",
3 | "version": "0.1.0",
4 | "description": "A Model Context Protocol server",
5 | "private": true,
6 | "type": "module",
7 | "bin": {
8 | "flux-server": "./build/index.js"
9 | },
10 | "files": [
11 | "build"
12 | ],
13 | "scripts": {
14 | "build": "tsc && node -e \"require('fs').chmodSync('build/index.js', '755')\"",
15 | "prepare": "npm run build",
16 | "watch": "tsc --watch",
17 | "inspector": "npx @modelcontextprotocol/inspector build/index.js"
18 | },
19 | "dependencies": {
20 | "@modelcontextprotocol/sdk": "0.6.0",
21 | "dotenv": "^16.4.5",
22 | "node-fetch": "^3.3.2",
23 | "replicate": "^1.0.1"
24 | },
25 | "devDependencies": {
26 | "@types/node": "^20.11.24",
27 | "typescript": "^5.3.3"
28 | }
29 | }
30 |
```
--------------------------------------------------------------------------------
/flux-server/src/types.ts:
--------------------------------------------------------------------------------
```typescript
1 | export interface FluxGenerateArgs {
2 | prompt: string;
3 | go_fast?: boolean;
4 | guidance?: number;
5 | megapixels?: string;
6 | num_outputs?: number;
7 | aspect_ratio?: string;
8 | output_format?: string;
9 | output_quality?: number;
10 | prompt_strength?: number;
11 | num_inference_steps?: number;
12 | }
13 |
14 | export function isValidFluxArgs(args: any): args is FluxGenerateArgs {
15 | return (
16 | typeof args === "object" &&
17 | args !== null &&
18 | typeof args.prompt === "string" &&
19 | (args.go_fast === undefined || typeof args.go_fast === "boolean") &&
20 | (args.guidance === undefined || typeof args.guidance === "number") &&
21 | (args.megapixels === undefined || typeof args.megapixels === "string") &&
22 | (args.num_outputs === undefined || typeof args.num_outputs === "number") &&
23 | (args.aspect_ratio === undefined || typeof args.aspect_ratio === "string") &&
24 | (args.output_format === undefined || typeof args.output_format === "string") &&
25 | (args.output_quality === undefined || typeof args.output_quality === "number") &&
26 | (args.prompt_strength === undefined || typeof args.prompt_strength === "number") &&
27 | (args.num_inference_steps === undefined || typeof args.num_inference_steps === "number")
28 | );
29 | }
30 |
```
--------------------------------------------------------------------------------
/docs/openai_docs.md:
--------------------------------------------------------------------------------
```markdown
1 | Developer quickstart
2 | Learn how to make your first API request.
3 | The OpenAI API provides a simple interface to state-of-the-art AI models for natural language processing, image generation, semantic search, and speech recognition. Follow this guide to learn how to generate human-like responses to natural language prompts, create vector embeddings for semantic search, and generate images from textual descriptions.
4 |
5 | Create and export an API key
6 | Create an API key in the dashboard here, which you’ll use to securely access the API. Store the key in a safe location, like a .zshrc file or another text file on your computer. Once you’ve generated an API key, export it as an environment variable in your terminal.
7 |
8 | Export an environment variable on macOS or Linux systems
9 |
10 | export OPENAI_API_KEY="your_api_key_here"
11 | Make your first API request
12 | With your OpenAI API key exported as an environment variable, you're ready to make your first API request. You can either use the REST API directly with the HTTP client of your choice, or use one of our official SDKs as shown below.
13 |
14 | To use the OpenAI API in server-side JavaScript environments like Node.js, Deno, or Bun, you can use the official OpenAI SDK for TypeScript and JavaScript. Get started by installing the SDK using npm or your preferred package manager:
15 |
16 | Install the OpenAI SDK with npm
17 |
18 | npm install openai
19 | With the OpenAI SDK installed, create a file called example.mjs and copy one of the following examples into it:
20 |
21 | Create a human-like response to a prompt
22 |
23 | import OpenAI from "openai";
24 | const openai = new OpenAI();
25 |
26 | const completion = await openai.chat.completions.create({
27 | model: "gpt-4o-mini",
28 | messages: [
29 | { role: "system", content: "You are a helpful assistant." },
30 | {
31 | role: "user",
32 | content: "Write a haiku about recursion in programming.",
33 | },
34 | ],
35 | });
36 |
37 | console.log(completion.choices[0].message);
38 | Execute the code with node example.mjs (or the equivalent command for Deno or Bun). In a few moments, you should see the output of your API request!
39 |
```
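The openai-server in this repo applies the same SDK pattern, but with a single user message and a per-request model name defaulting to `o1-preview`. A minimal sketch of that call, assuming `OPENAI_API_KEY` is exported as described above:

```typescript
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// The server forwards the caller's prompt as one user message
const completion = await openai.chat.completions.create({
  model: "o1-preview",
  messages: [{ role: "user", content: "Write a haiku about recursion in programming." }],
});

console.log(completion.choices[0].message.content ?? "No response generated");
```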
--------------------------------------------------------------------------------
/openai-server/src/index.ts:
--------------------------------------------------------------------------------
```typescript
1 | #!/usr/bin/env node
2 | import { Server } from "@modelcontextprotocol/sdk/server/index.js";
3 | import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
4 | import {
5 | ListToolsRequestSchema,
6 | CallToolRequestSchema,
7 | ErrorCode,
8 | McpError
9 | } from "@modelcontextprotocol/sdk/types.js";
10 | import OpenAI from "openai";
11 | import dotenv from "dotenv";
12 | import { isValidChatArgs } from "./types.js";
13 |
14 | dotenv.config();
15 |
16 | const API_KEY = process.env.OPENAI_API_KEY;
17 | if (!API_KEY) {
18 | throw new Error("OPENAI_API_KEY environment variable is required");
19 | }
20 |
21 | class OpenAIServer {
22 | private server: Server;
23 | private openai: OpenAI;
24 |
25 | constructor() {
26 | this.server = new Server({
27 | name: "openai-server",
28 | version: "0.1.0"
29 | }, {
30 | capabilities: {
31 | tools: {}
32 | }
33 | });
34 |
35 | this.openai = new OpenAI({
36 | apiKey: API_KEY
37 | });
38 |
39 | this.setupHandlers();
40 | this.setupErrorHandling();
41 | }
42 |
43 | private setupErrorHandling(): void {
44 | this.server.onerror = (error) => {
45 | console.error("[MCP Error]", error);
46 | };
47 |
48 | process.on('SIGINT', async () => {
49 | await this.server.close();
50 | process.exit(0);
51 | });
52 | }
53 |
54 | private setupHandlers(): void {
55 | this.server.setRequestHandler(
56 | ListToolsRequestSchema,
57 | async () => ({
58 | tools: [{
59 | name: "chat_completion",
60 | description: "Generate text using OpenAI's chat completion API",
61 | inputSchema: {
62 | type: "object",
63 | properties: {
64 | prompt: {
65 | type: "string",
66 | description: "The prompt to send to the model"
67 | },
68 | model: {
69 | type: "string",
70 | description: "The model to use (default: o1-preview)",
71 | default: "o1-preview"
72 | }
73 | },
74 | required: ["prompt"]
75 | }
76 | }]
77 | })
78 | );
79 |
80 | this.server.setRequestHandler(
81 | CallToolRequestSchema,
82 | async (request) => {
83 | if (request.params.name !== "chat_completion") {
84 | throw new McpError(
85 | ErrorCode.MethodNotFound,
86 | `Unknown tool: ${request.params.name}`
87 | );
88 | }
89 |
90 | if (!isValidChatArgs(request.params.arguments)) {
91 | throw new McpError(
92 | ErrorCode.InvalidParams,
93 | "Invalid chat completion arguments"
94 | );
95 | }
96 |
97 | try {
98 | const completion = await this.openai.chat.completions.create({
99 | model: request.params.arguments.model || "o1-preview",
100 | messages: [
101 | {
102 | role: "user",
103 | content: request.params.arguments.prompt
104 | }
105 | ]
106 | });
107 |
108 | return {
109 | content: [
110 | {
111 | type: "text",
112 | text: completion.choices[0].message.content || "No response generated"
113 | }
114 | ]
115 | };
116 | } catch (error) {
117 | return {
118 | content: [
119 | {
120 | type: "text",
121 | text: `OpenAI API error: ${error instanceof Error ? error.message : String(error)}`
122 | }
123 | ],
124 | isError: true
125 | };
126 | }
127 | }
128 | );
129 | }
130 |
131 | async run(): Promise<void> {
132 | const transport = new StdioServerTransport();
133 | await this.server.connect(transport);
134 | console.error("OpenAI MCP server running on stdio");
135 | }
136 | }
137 |
138 | const server = new OpenAIServer();
139 | server.run().catch(console.error);
140 |
```
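Because the transport is stdio, the tool can be exercised with raw JSON-RPC once the MCP initialize handshake has completed. An illustrative `tools/call` exchange; the id and prompt are arbitrary, and each message is a single line on stdin/stdout in practice:

```json
{"jsonrpc": "2.0", "id": 2, "method": "tools/call",
 "params": {"name": "chat_completion", "arguments": {"prompt": "Write a haiku about recursion"}}}
```

On success, the server replies with a result shaped like:

```json
{"jsonrpc": "2.0", "id": 2,
 "result": {"content": [{"type": "text", "text": "..."}]}}
```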
--------------------------------------------------------------------------------
/docs/flux_docs.md:
--------------------------------------------------------------------------------
```markdown
1 | Install Replicate’s Node.js client library
2 | npm install replicate
3 |
5 | Set the REPLICATE_API_TOKEN environment variable
6 | export REPLICATE_API_TOKEN=<paste-your-token-here>
7 |
11 | Find your API token in your account settings.
12 |
13 | Import and set up the client
14 | import Replicate from "replicate";
15 |
16 | const replicate = new Replicate({
17 | auth: process.env.REPLICATE_API_TOKEN,
18 | });
19 |
21 | Run black-forest-labs/flux-dev using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
22 |
23 | const input = {
24 | prompt: "black forest gateau cake spelling out the words \"FLUX DEV\", tasty, food photography, dynamic shot",
25 | go_fast: true,
26 | guidance: 3.5,
27 | megapixels: "1",
28 | num_outputs: 1,
29 | aspect_ratio: "1:1",
30 | output_format: "webp",
31 | output_quality: 80,
32 | prompt_strength: 0.8,
33 | num_inference_steps: 28
34 | };
35 |
36 | const output = await replicate.run("black-forest-labs/flux-dev", { input });
37 | console.log(output);
38 |
39 | Run a model from Node.js
52 | Learn how to run a model on Replicate using Node.js.
53 |
54 | This guide includes a quickstart to scaffold a new project with a single command in your terminal, followed by a step-by-step tutorial for setting up a project from scratch. By the end, you'll have a working Node.js project that can run any model on Replicate.
55 |
56 | Prerequisites
57 | Node.js 16 or greater: The simplest way to install Node.js is using the installer at nodejs.org.
58 |
59 | 🐇 Quickstart: Scaffold a project with a one-liner
60 | To get up and running as quickly as possible, you can use create-replicate, an npm package that creates a project directory for you, writes some starter code, installs the dependencies, and runs the code.
61 |
62 | Run the following command to scaffold a new project:
63 |
64 |
66 | npx create-replicate
67 | That's it. You should now have a working Node.js project that generates images with the SDXL model using Replicate's API.
68 |
69 | If you want to use a different model than SDXL, specify it when creating your project:
70 |
71 |
73 | npx create-replicate --model black-forest-labs/flux-schnell
74 | To learn more about scaffolding new Node.js projects, check out the create-replicate documentation.
75 |
76 | 🐢 Slowstart: Set up a project from scratch
77 | If you prefer to manually set up your Node.js project step by step, follow the instructions below.
78 |
79 | Step 1: Authenticate
80 | Authenticate by setting your Replicate API token in an environment variable:
81 |
82 |
84 | export REPLICATE_API_TOKEN=r8_******
85 | Step 2: Create a new Node.js project
86 |
88 | # create the directory
89 | mkdir my-replicate-app
90 | cd my-replicate-app
91 |
92 | # set up package.json
93 | npm init -y
94 | npm pkg set type=module
95 | Step 3: Install the Replicate JavaScript client
96 | Use npm to install the Replicate JavaScript client:
97 |
98 |
100 | npm install replicate
101 | Step 4: Write some code
102 | Create a file called index.js and add the following code:
103 |
104 |
106 | import Replicate from "replicate";
107 | const replicate = new Replicate();
108 |
109 | console.log("Running the model...");
110 | const [output] = await replicate.run(
111 | "black-forest-labs/flux-schnell",
112 | {
113 | input: {
114 | prompt: "An astronaut riding a rainbow unicorn, cinematic, dramatic",
115 | },
116 | }
117 | );
118 |
119 | // Save the generated image
120 | import { writeFile } from "node:fs/promises";
121 |
122 | await writeFile("./output.png", output);
123 | console.log("Image saved as output.png");
124 | Step 5: Run your code
125 | Next, run your code from your terminal:
126 |
127 |
129 | node index.js
130 | You should see output indicating the model is running and the image has been saved:
131 |
132 |
134 | Running the model...
135 | Image saved as output.png
```
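The flux-server in this repo follows the flux-dev example above, then downloads the returned URL and base64-encodes the bytes for MCP's image content type. A standalone sketch of that download step, assuming the model call yielded a URL string; the URL below is a placeholder:

```typescript
import fetch from "node-fetch";

// In the server, imageUrl comes from the replicate.run(...) output
const imageUrl = "https://replicate.delivery/example/output.webp"; // placeholder

const response = await fetch(imageUrl);
const imageBuffer = await response.arrayBuffer();
const base64 = Buffer.from(imageBuffer).toString("base64");

console.log(`Fetched ${base64.length} base64 characters of image data`);
```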
--------------------------------------------------------------------------------
/flux-server/src/index.ts:
--------------------------------------------------------------------------------
```typescript
1 | #!/usr/bin/env node
2 | import { Server } from "@modelcontextprotocol/sdk/server/index.js";
3 | import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
4 | import {
5 | ListToolsRequestSchema,
6 | CallToolRequestSchema,
7 | ErrorCode,
8 | McpError
9 | } from "@modelcontextprotocol/sdk/types.js";
10 | import Replicate from "replicate";
11 | import dotenv from "dotenv";
12 | import { FluxGenerateArgs, isValidFluxArgs } from "./types.js";
13 | import fetch from 'node-fetch';
14 |
15 | dotenv.config();
16 |
17 | const API_TOKEN = process.env.REPLICATE_API_TOKEN;
18 | if (!API_TOKEN) {
19 | throw new Error("REPLICATE_API_TOKEN environment variable is required");
20 | }
21 |
22 | class FluxServer {
23 | private server: Server;
24 | private replicate: Replicate;
25 |
26 | constructor() {
27 | this.server = new Server({
28 | name: "flux-image-server",
29 | version: "0.1.0"
30 | }, {
31 | capabilities: {
32 | tools: {}
33 | }
34 | });
35 |
36 | this.replicate = new Replicate({
37 | auth: API_TOKEN
38 | });
39 |
40 | this.setupHandlers();
41 | this.setupErrorHandling();
42 | }
43 |
44 | private setupErrorHandling(): void {
45 | this.server.onerror = (error) => {
46 | console.error("[MCP Error]", error);
47 | };
48 |
49 | process.on('SIGINT', async () => {
50 | await this.server.close();
51 | process.exit(0);
52 | });
53 | }
54 |
55 | private setupHandlers(): void {
56 | this.server.setRequestHandler(
57 | ListToolsRequestSchema,
58 | async () => ({
59 | tools: [{
60 | name: "generate_image",
61 | description: "Generate an image using the Flux model",
62 | inputSchema: {
63 | type: "object",
64 | properties: {
65 | prompt: {
66 | type: "string",
67 | description: "Text description of the image to generate"
68 | },
69 | go_fast: {
70 | type: "boolean",
71 | description: "Enable fast mode",
72 | default: true
73 | },
74 | guidance: {
75 | type: "number",
76 | description: "Guidance scale",
77 | default: 3.5
78 | },
79 | megapixels: {
80 | type: "string",
81 | description: "Image resolution in megapixels",
82 | default: "1"
83 | },
84 | aspect_ratio: {
85 | type: "string",
86 | description: "Image aspect ratio",
87 | default: "4:5"
88 | }
89 | },
90 | required: ["prompt"]
91 | }
92 | }]
93 | })
94 | );
95 |
96 | this.server.setRequestHandler(
97 | CallToolRequestSchema,
98 | async (request) => {
99 | if (request.params.name !== "generate_image") {
100 | throw new McpError(
101 | ErrorCode.MethodNotFound,
102 | `Unknown tool: ${request.params.name}`
103 | );
104 | }
105 |
106 | if (!isValidFluxArgs(request.params.arguments)) {
107 | throw new McpError(
108 | ErrorCode.InvalidParams,
109 | "Invalid generation arguments"
110 | );
111 | }
112 |
113 | try {
114 | const output = await this.replicate.run(
115 | "black-forest-labs/flux-dev",
116 | {
117 | input: request.params.arguments
118 | }
119 | );
120 |
121 | const imageUrl = Array.isArray(output) ? String(output[0]) : String(output);
122 |
123 | // Fetch the image data from the URL
124 | const response = await fetch(imageUrl);
125 | const imageBuffer = await response.arrayBuffer();
126 |
127 | return {
128 | content: [
129 | {
130 | type: "text",
131 | text: "Generated image:"
132 | },
133 | {
134 | type: "image",
135 | data: Buffer.from(imageBuffer).toString('base64'),
136 | mimeType: "image/webp"
137 | },
138 | {
139 | type: "text",
140 | text: `Image URL: ${imageUrl}`
141 | }
142 | ]
143 | };
144 | } catch (error) {
145 | return {
146 | content: [
147 | {
148 | type: "text",
149 | text: `Flux API error: ${error instanceof Error ? error.message : String(error)}`
150 | }
151 | ],
152 | isError: true
153 | };
154 | }
155 | }
156 | );
157 | }
158 |
159 | async run(): Promise<void> {
160 | const transport = new StdioServerTransport();
161 | await this.server.connect(transport);
162 | console.error("Flux MCP server running on stdio");
163 | }
164 | }
165 |
166 | const server = new FluxServer();
167 | server.run().catch(console.error);
168 |
```
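For reference, a successful `generate_image` call returns an MCP content array mixing text and image parts, roughly as below (base64 payload truncated; note the server hardcodes `image/webp`, matching flux-dev's default output format):

```json
{
  "content": [
    { "type": "text", "text": "Generated image:" },
    { "type": "image", "data": "<base64-encoded webp bytes>", "mimeType": "image/webp" },
    { "type": "text", "text": "Image URL: https://replicate.delivery/..." }
  ]
}
```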
--------------------------------------------------------------------------------
/docs/mcp_server.md:
--------------------------------------------------------------------------------
```markdown
1 | Your First MCP Server
2 | TypeScript
3 | Create a simple MCP server in TypeScript in 15 minutes
4 |
5 | Let’s build your first MCP server in TypeScript! We’ll create a weather server that provides current weather data as a resource and lets Claude fetch forecasts using tools.
6 |
7 | This guide uses the OpenWeatherMap API. You’ll need a free API key from OpenWeatherMap to follow along.
8 |
9 |
10 | Prerequisites
11 | 1. Install Node.js
13 |
14 | You’ll need Node.js 18 or higher:
15 |
16 |
17 | node --version # Should be v18 or higher
18 | npm --version
19 | 2. Create a new project
21 |
22 | You can use our create-typescript-server tool to bootstrap a new project:
23 |
24 |
25 | npx @modelcontextprotocol/create-server weather-server
26 | cd weather-server
27 | 3. Install dependencies
29 |
30 |
31 | npm install --save axios dotenv
32 | 4. Set up environment
34 |
35 | Create .env:
36 |
37 |
38 | OPENWEATHER_API_KEY=your-api-key-here
39 | Make sure to add your environment file to .gitignore
40 |
41 |
42 | .env
43 |
44 | Create your server
45 | 1. Define types
47 |
48 | Create a file src/types.ts, and add the following:
49 |
50 |
51 | export interface OpenWeatherResponse {
52 | main: {
53 | temp: number;
54 | humidity: number;
55 | };
56 | weather: Array<{
57 | description: string;
58 | }>;
59 | wind: {
60 | speed: number;
61 | };
62 | dt_txt?: string;
63 | }
64 |
65 | export interface WeatherData {
66 | temperature: number;
67 | conditions: string;
68 | humidity: number;
69 | wind_speed: number;
70 | timestamp: string;
71 | }
72 |
73 | export interface ForecastDay {
74 | date: string;
75 | temperature: number;
76 | conditions: string;
77 | }
78 |
79 | export interface GetForecastArgs {
80 | city: string;
81 | days?: number;
82 | }
83 |
84 | // Type guard for forecast arguments
85 | export function isValidForecastArgs(args: any): args is GetForecastArgs {
86 | return (
87 | typeof args === "object" &&
88 | args !== null &&
89 | "city" in args &&
90 | typeof args.city === "string" &&
91 | (args.days === undefined || typeof args.days === "number")
92 | );
93 | }
94 | 2. Add the base code
96 |
97 | Replace src/index.ts with the following:
98 |
99 |
100 | #!/usr/bin/env node
101 | import { Server } from "@modelcontextprotocol/sdk/server/index.js";
102 | import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
103 | import {
104 | ListResourcesRequestSchema,
105 | ReadResourceRequestSchema,
106 | ListToolsRequestSchema,
107 | CallToolRequestSchema,
108 | ErrorCode,
109 | McpError
110 | } from "@modelcontextprotocol/sdk/types.js";
111 | import axios from "axios";
112 | import dotenv from "dotenv";
113 | import {
114 | WeatherData,
115 | ForecastDay,
116 | OpenWeatherResponse,
117 | isValidForecastArgs
118 | } from "./types.js";
119 |
120 | dotenv.config();
121 |
122 | const API_KEY = process.env.OPENWEATHER_API_KEY;
123 | if (!API_KEY) {
124 | throw new Error("OPENWEATHER_API_KEY environment variable is required");
125 | }
126 |
127 | const API_CONFIG = {
128 | BASE_URL: 'http://api.openweathermap.org/data/2.5',
129 | DEFAULT_CITY: 'San Francisco',
130 | ENDPOINTS: {
131 | CURRENT: 'weather',
132 | FORECAST: 'forecast'
133 | }
134 | } as const;
135 |
136 | class WeatherServer {
137 | private server: Server;
138 | private axiosInstance;
139 |
140 | constructor() {
141 | this.server = new Server({
142 | name: "example-weather-server",
143 | version: "0.1.0"
144 | }, {
145 | capabilities: {
146 | resources: {},
147 | tools: {}
148 | }
149 | });
150 |
151 | // Configure axios with defaults
152 | this.axiosInstance = axios.create({
153 | baseURL: API_CONFIG.BASE_URL,
154 | params: {
155 | appid: API_KEY,
156 | units: "metric"
157 | }
158 | });
159 |
160 | this.setupHandlers();
161 | this.setupErrorHandling();
162 | }
163 |
164 | private setupErrorHandling(): void {
165 | this.server.onerror = (error) => {
166 | console.error("[MCP Error]", error);
167 | };
168 |
169 | process.on('SIGINT', async () => {
170 | await this.server.close();
171 | process.exit(0);
172 | });
173 | }
174 |
175 | private setupHandlers(): void {
176 | this.setupResourceHandlers();
177 | this.setupToolHandlers();
178 | }
179 |
180 | private setupResourceHandlers(): void {
181 | // Implementation continues in next section
182 | }
183 |
184 | private setupToolHandlers(): void {
185 | // Implementation continues in next section
186 | }
187 |
188 | async run(): Promise<void> {
189 | const transport = new StdioServerTransport();
190 | await this.server.connect(transport);
191 |
192 | // Although this is just an informative message, we must log to stderr,
193 | // to avoid interfering with MCP communication that happens on stdout
194 | console.error("Weather MCP server running on stdio");
195 | }
196 | }
197 |
198 | const server = new WeatherServer();
199 | server.run().catch(console.error);
200 | 3. Add resource handlers
202 |
203 | Add this to the setupResourceHandlers method:
204 |
205 |
206 | private setupResourceHandlers(): void {
207 | this.server.setRequestHandler(
208 | ListResourcesRequestSchema,
209 | async () => ({
210 | resources: [{
211 | uri: `weather://${API_CONFIG.DEFAULT_CITY}/current`,
212 | name: `Current weather in ${API_CONFIG.DEFAULT_CITY}`,
213 | mimeType: "application/json",
214 | description: "Real-time weather data including temperature, conditions, humidity, and wind speed"
215 | }]
216 | })
217 | );
218 |
219 | this.server.setRequestHandler(
220 | ReadResourceRequestSchema,
221 | async (request) => {
222 | const city = API_CONFIG.DEFAULT_CITY;
223 | if (request.params.uri !== `weather://${city}/current`) {
224 | throw new McpError(
225 | ErrorCode.InvalidRequest,
226 | `Unknown resource: ${request.params.uri}`
227 | );
228 | }
229 |
230 | try {
231 | const response = await this.axiosInstance.get<OpenWeatherResponse>(
232 | API_CONFIG.ENDPOINTS.CURRENT,
233 | {
234 | params: { q: city }
235 | }
236 | );
237 |
238 | const weatherData: WeatherData = {
239 | temperature: response.data.main.temp,
240 | conditions: response.data.weather[0].description,
241 | humidity: response.data.main.humidity,
242 | wind_speed: response.data.wind.speed,
243 | timestamp: new Date().toISOString()
244 | };
245 |
246 | return {
247 | contents: [{
248 | uri: request.params.uri,
249 | mimeType: "application/json",
250 | text: JSON.stringify(weatherData, null, 2)
251 | }]
252 | };
253 | } catch (error) {
254 | if (axios.isAxiosError(error)) {
255 | throw new McpError(
256 | ErrorCode.InternalError,
257 | `Weather API error: ${error.response?.data.message ?? error.message}`
258 | );
259 | }
260 | throw error;
261 | }
262 | }
263 | );
264 | }
265 | 4. Add tool handlers
267 |
268 | Add these handlers to the setupToolHandlers method:
269 |
270 |
271 | private setupToolHandlers(): void {
272 | this.server.setRequestHandler(
273 | ListToolsRequestSchema,
274 | async () => ({
275 | tools: [{
276 | name: "get_forecast",
277 | description: "Get weather forecast for a city",
278 | inputSchema: {
279 | type: "object",
280 | properties: {
281 | city: {
282 | type: "string",
283 | description: "City name"
284 | },
285 | days: {
286 | type: "number",
287 | description: "Number of days (1-5)",
288 | minimum: 1,
289 | maximum: 5
290 | }
291 | },
292 | required: ["city"]
293 | }
294 | }]
295 | })
296 | );
297 |
298 | this.server.setRequestHandler(
299 | CallToolRequestSchema,
300 | async (request) => {
301 | if (request.params.name !== "get_forecast") {
302 | throw new McpError(
303 | ErrorCode.MethodNotFound,
304 | `Unknown tool: ${request.params.name}`
305 | );
306 | }
307 |
308 | if (!isValidForecastArgs(request.params.arguments)) {
309 | throw new McpError(
310 | ErrorCode.InvalidParams,
311 | "Invalid forecast arguments"
312 | );
313 | }
314 |
315 | const city = request.params.arguments.city;
316 | const days = Math.min(request.params.arguments.days || 3, 5);
317 |
318 | try {
319 | const response = await this.axiosInstance.get<{
320 | list: OpenWeatherResponse[]
321 | }>(API_CONFIG.ENDPOINTS.FORECAST, {
322 | params: {
323 | q: city,
324 | cnt: days * 8 // API returns 3-hour intervals
325 | }
326 | });
327 |
328 | const forecasts: ForecastDay[] = [];
329 | for (let i = 0; i < response.data.list.length; i += 8) {
330 | const dayData = response.data.list[i];
331 | forecasts.push({
332 | date: dayData.dt_txt?.split(' ')[0] ?? new Date().toISOString().split('T')[0],
333 | temperature: dayData.main.temp,
334 | conditions: dayData.weather[0].description
335 | });
336 | }
337 |
338 | return {
339 | content: {
340 | mimeType: "application/json",
341 | text: JSON.stringify(forecasts, null, 2)
342 | }
343 | };
344 | } catch (error) {
345 | if (axios.isAxiosError(error)) {
346 | return {
347 | content: {
348 | mimeType: "text/plain",
349 | text: `Weather API error: ${error.response?.data.message ?? error.message}`
350 | },
351 | isError: true,
352 | }
353 | }
354 | throw error;
355 | }
356 | }
357 | );
358 | }
359 | 5. Build and test
361 |
362 |
363 | npm run build
364 | npm link
365 |
366 | Connect to Claude Desktop
367 | 1. Update Claude config
369 |
370 | If you didn’t already connect to Claude Desktop during project setup, add to claude_desktop_config.json:
371 |
372 |
373 | {
374 | "mcpServers": {
375 | "weather": {
376 | "command": "weather-server",
377 | "env": {
378 | "OPENWEATHER_API_KEY": "your-api-key",
379 | }
380 | }
381 | }
382 | }
383 | 2. Restart Claude
385 |
386 | Quit Claude completely
387 | Start Claude again
388 | Look for your weather server in the 🔌 menu
389 |
390 | Try it out! Ask Claude to:
391 |
392 | - Check Current Weather
393 | - Get a Forecast
394 | - Compare Weather
399 |
400 |
401 | Understanding the code
402 | Type Safety
403 | Resources
404 | Tools
405 |
406 | interface WeatherData {
407 | temperature: number;
408 | conditions: string;
409 | humidity: number;
410 | wind_speed: number;
411 | timestamp: string;
412 | }
413 | TypeScript adds type safety to our MCP server, making it more reliable and easier to maintain.
414 |
415 |
416 | Best practices
417 | Error Handling
418 | When a tool encounters an error, return the error message with isError: true, so the model can self-correct:
419 |
420 |
421 | try {
422 | const response = await axiosInstance.get(...);
423 | } catch (error) {
424 | if (axios.isAxiosError(error)) {
425 | return {
426 | content: {
427 | mimeType: "text/plain",
428 | text: `Weather API error: ${error.response?.data.message ?? error.message}`
429 | },
430 | isError: true,
431 | }
432 | }
433 | throw error;
434 | }
435 | For other handlers, throw an error, so the application can notify the user:
436 |
437 |
438 | try {
439 | const response = await this.axiosInstance.get(...);
440 | } catch (error) {
441 | if (axios.isAxiosError(error)) {
442 | throw new McpError(
443 | ErrorCode.InternalError,
444 | `Weather API error: ${error.response?.data.message}`
445 | );
446 | }
447 | throw error;
448 | }
449 | Type Validation
450 |
451 | function isValidForecastArgs(args: any): args is GetForecastArgs {
452 | return (
453 | typeof args === "object" &&
454 | args !== null &&
455 | "city" in args &&
456 | typeof args.city === "string"
457 | );
458 | }
459 | You can also use libraries like Zod to perform this validation automatically.
460 |
461 | Available transports
462 | While this guide uses stdio to run the MCP server as a local process, MCP supports other transports as well.
463 |
464 |
465 | Troubleshooting
466 | The following troubleshooting tips are for macOS. Guides for other platforms are coming soon.
467 |
468 |
469 | Build errors
470 |
471 | # Check TypeScript version
472 | npx tsc --version
473 |
474 | # Clean and rebuild
475 | rm -rf build/
476 | npm run build
477 |
478 | Runtime errors
479 | Look for detailed error messages in the Claude Desktop logs:
480 |
481 |
482 | # Monitor logs
483 | tail -n 20 -f ~/Library/Application\ Support/Claude/mcp*.log
484 |
485 | Type errors
486 |
487 | # Check types without building
488 | npx tsc --noEmit
489 |
490 |
```
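As the guide's Type Validation section notes, a library like Zod can replace the hand-written guard. A minimal sketch of the same forecast-argument validation; zod is an assumed extra dependency, not part of the tutorial's setup:

```typescript
import { z } from "zod";

// Mirrors the GetForecastArgs interface from the tutorial
const GetForecastArgsSchema = z.object({
  city: z.string(),
  days: z.number().min(1).max(5).optional(),
});

type GetForecastArgs = z.infer<typeof GetForecastArgsSchema>;

// safeParse returns a tagged result instead of throwing
const parsed = GetForecastArgsSchema.safeParse({ city: "Oslo", days: 3 });
if (parsed.success) {
  const args: GetForecastArgs = parsed.data;
  console.log(args.city, args.days);
}
```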
--------------------------------------------------------------------------------
/docs/mcp.md:
--------------------------------------------------------------------------------
```markdown
1 | Introducing the Model Context Protocol
2 | Nov 25, 2024
3 |
4 | [Illustration: critical context connecting to a central hub]
6 | Today, we're open-sourcing the Model Context Protocol (MCP), a new standard for connecting AI assistants to the systems where data lives, including content repositories, business tools, and development environments. Its aim is to help frontier models produce better, more relevant responses.
7 |
8 | As AI assistants gain mainstream adoption, the industry has invested heavily in model capabilities, achieving rapid advances in reasoning and quality. Yet even the most sophisticated models are constrained by their isolation from data—trapped behind information silos and legacy systems. Every new data source requires its own custom implementation, making truly connected systems difficult to scale.
9 |
10 | MCP addresses this challenge. It provides a universal, open standard for connecting AI systems with data sources, replacing fragmented integrations with a single protocol. The result is a simpler, more reliable way to give AI systems access to the data they need.
11 |
12 | Model Context Protocol
13 | The Model Context Protocol is an open standard that enables developers to build secure, two-way connections between their data sources and AI-powered tools. The architecture is straightforward: developers can either expose their data through MCP servers or build AI applications (MCP clients) that connect to these servers.
14 |
15 | Today, we're introducing three major components of the Model Context Protocol for developers:
16 |
17 | The Model Context Protocol specification and SDKs
18 | Local MCP server support in the Claude Desktop apps
19 | An open-source repository of MCP servers
20 | Claude 3.5 Sonnet is adept at quickly building MCP server implementations, making it easy for organizations and individuals to rapidly connect their most important datasets with a range of AI-powered tools. To help developers start exploring, we’re sharing pre-built MCP servers for popular enterprise systems like Google Drive, Slack, GitHub, Git, Postgres, and Puppeteer.
21 |
22 | Early adopters like Block and Apollo have integrated MCP into their systems, while development tools companies including Zed, Replit, Codeium, and Sourcegraph are working with MCP to enhance their platforms—enabling AI agents to better retrieve relevant information to further understand the context around a coding task and produce more nuanced and functional code with fewer attempts.
23 |
24 | "At Block, open source is more than a development model—it’s the foundation of our work and a commitment to creating technology that drives meaningful change and serves as a public good for all,” said Dhanji R. Prasanna, Chief Technology Officer at Block. “Open technologies like the Model Context Protocol are the bridges that connect AI to real-world applications, ensuring innovation is accessible, transparent, and rooted in collaboration. We are excited to partner on a protocol and use it to build agentic systems, which remove the burden of the mechanical so people can focus on the creative.”
25 |
26 | Instead of maintaining separate connectors for each data source, developers can now build against a standard protocol. As the ecosystem matures, AI systems will maintain context as they move between different tools and datasets, replacing today's fragmented integrations with a more sustainable architecture.
27 |
28 | Getting started
29 | Developers can start building and testing MCP connectors today. Existing Claude for Work customers can begin testing MCP servers locally, connecting Claude to internal systems and datasets. We'll soon provide developer toolkits for deploying remote production MCP servers that can serve your entire Claude for Work organization.
30 |
31 | To start building:
32 |
33 | Install pre-built MCP servers through the Claude Desktop app
34 | Follow our quickstart guide to build your first MCP server
35 | Contribute to our open-source repositories of connectors and implementations
36 | An open community
37 | We’re committed to building MCP as a collaborative, open-source project and ecosystem, and we’re eager to hear your feedback. Whether you’re an AI tool developer, an enterprise looking to leverage existing data, or an early adopter exploring the frontier, we invite you to build the future of context-aware AI together.
38 |
39 | Get Started
40 | Quickstart
41 | Get started with MCP in less than 5 minutes
42 |
43 | MCP is a protocol that enables secure connections between host applications, such as Claude Desktop, and local services. In this quickstart guide, you’ll learn how to:
44 |
45 | Set up a local SQLite database
46 | Connect Claude Desktop to it through MCP
47 | Query and analyze your data securely
48 | While this guide focuses on using Claude Desktop as an example MCP host, the protocol is open and can be integrated by any application. IDEs, AI tools, and other software can all use MCP to connect to local integrations in a standardized way.
49 |
50 | Claude Desktop’s MCP support is currently in developer preview and only supports connecting to local MCP servers running on your machine. Remote MCP connections are not yet supported. This integration is only available in the Claude Desktop app, not the Claude web interface (claude.ai).
51 |
52 |
53 | How MCP works
54 | MCP (Model Context Protocol) is an open protocol that enables secure, controlled interactions between AI applications and local or remote resources. Let’s break down how it works, then look at how we’ll use it in this guide.
55 |
56 |
57 | General Architecture
58 | At its core, MCP follows a client-server architecture where a host application can connect to multiple servers:
59 |
60 | [Architecture diagram: an MCP host (Claude Desktop, IDEs, tools) on your computer connects over the MCP protocol to MCP servers A, B, and C; servers A and B access local resources A and B, while server C reaches a remote resource over web APIs on the internet]
77 | MCP Hosts: Programs like Claude Desktop, IDEs, or AI tools that want to access resources through MCP
78 | MCP Clients: Protocol clients that maintain 1:1 connections with servers
79 | MCP Servers: Lightweight programs that each expose specific capabilities through the standardized Model Context Protocol
80 | Local Resources: Your computer’s resources (databases, files, services) that MCP servers can securely access
81 | Remote Resources: Resources available over the internet (e.g., through APIs) that MCP servers can connect to
82 |
83 | In This Guide
84 | For this quickstart, we’ll implement a focused example using SQLite:
85 |
86 | [Diagram: Claude Desktop communicates with the SQLite MCP Server over the MCP protocol (queries and results); the server performs local SQL operations against the SQLite database at ~/test.db]
95 | Claude Desktop acts as our MCP client
96 | A SQLite MCP Server provides secure database access
97 | Your local SQLite database stores the actual data
98 | The communication between the SQLite MCP server and your local SQLite database happens entirely on your machine—your SQLite database is not exposed to the internet. The Model Context Protocol ensures that Claude Desktop can only perform approved database operations through well-defined interfaces. This gives you a secure way to let Claude analyze and interact with your local data while maintaining complete control over what it can access.
99 |
100 |
101 | Prerequisites
102 | macOS or Windows
103 | The latest version of Claude Desktop installed
104 | uv 0.4.18 or higher (uv --version to check)
105 | Git (git --version to check)
106 | SQLite (sqlite3 --version to check)
107 |
108 | Installing prerequisites (macOS)
109 |
110 |
111 | Installing prerequisites (Windows)
112 |
113 |
114 | Installation
115 | macOS
116 | Windows
117 | 1. Create a sample database
119 |
120 | Let’s create a simple SQLite database for testing:
121 |
122 |
123 | # Create a new SQLite database
124 | sqlite3 ~/test.db <<EOF
125 | CREATE TABLE products (
126 | id INTEGER PRIMARY KEY,
127 | name TEXT,
128 | price REAL
129 | );
130 |
131 | INSERT INTO products (name, price) VALUES
132 | ('Widget', 19.99),
133 | ('Gadget', 29.99),
134 | ('Gizmo', 39.99),
135 | ('Smart Watch', 199.99),
136 | ('Wireless Earbuds', 89.99),
137 | ('Portable Charger', 24.99),
138 | ('Bluetooth Speaker', 79.99),
139 | ('Phone Stand', 15.99),
140 | ('Laptop Sleeve', 34.99),
141 | ('Mini Drone', 299.99),
142 | ('LED Desk Lamp', 45.99),
143 | ('Keyboard', 129.99),
144 | ('Mouse Pad', 12.99),
145 | ('USB Hub', 49.99),
146 | ('Webcam', 69.99),
147 | ('Screen Protector', 9.99),
148 | ('Travel Adapter', 27.99),
149 | ('Gaming Headset', 159.99),
150 | ('Fitness Tracker', 119.99),
151 | ('Portable SSD', 179.99);
152 | EOF
153 | 2. Configure Claude Desktop
155 |
156 | Open your Claude Desktop App configuration at ~/Library/Application Support/Claude/claude_desktop_config.json in a text editor.
157 |
158 | For example, if you have VS Code installed:
159 |
160 |
161 | code ~/Library/Application\ Support/Claude/claude_desktop_config.json
162 | Add this configuration (replace YOUR_USERNAME with your actual username):
163 |
164 |
165 | {
166 | "mcpServers": {
167 | "sqlite": {
168 | "command": "uvx",
169 | "args": ["mcp-server-sqlite", "--db-path", "/Users/YOUR_USERNAME/test.db"]
170 | }
171 | }
172 | }
173 | This tells Claude Desktop:
174 |
175 | There’s an MCP server named “sqlite”
176 | Launch it by running uvx mcp-server-sqlite
177 | Connect it to your test database
178 | Save the file, and restart Claude Desktop.
179 |
180 |
181 | Test it out
182 | Let’s verify everything is working. Try sending this prompt to Claude Desktop:
183 |
184 |
185 | Can you connect to my SQLite database and tell me what products are available, and their prices?
186 | Claude Desktop will:
187 |
188 | Connect to the SQLite MCP server
189 | Query your local database
190 | Format and present the results
191 | [Screenshot: example Claude Desktop conversation showing database query results]
192 | Claude Desktop successfully queries our SQLite database 🎉
193 |
194 |
195 | What’s happening under the hood?
196 | When you interact with Claude Desktop using MCP:
197 |
198 | Server Discovery: Claude Desktop connects to your configured MCP servers on startup
199 |
200 | Protocol Handshake: When you ask about data, Claude Desktop:
201 |
202 | Identifies which MCP server can help (sqlite in this case)
203 | Negotiates capabilities through the protocol
204 | Requests data or actions from the MCP server
205 | Interaction Flow:
206 |
207 | [Sequence diagram: Claude Desktop initializes a connection with the MCP server and receives its available capabilities; a query request is turned into a SQL query against the SQLite DB, and the results flow back to Claude Desktop as formatted results]
219 | Security:
220 |
221 | MCP servers only expose specific, controlled capabilities
222 | MCP servers run locally on your machine, and the resources they access are not exposed to the internet
223 | Claude Desktop requires user confirmation for sensitive operations
224 |
225 | Try these examples
226 | Now that MCP is working, try these increasingly powerful examples:
227 |
228 |
229 | Basic Queries
230 |
231 |
232 | Data Analysis
233 |
234 |
235 | Complex Operations
236 |
237 |
238 | Add more capabilities
239 | Want to give Claude Desktop more local integration capabilities? Add these servers to your configuration:
240 |
241 | Note that these MCP servers will require Node.js to be installed on your machine.
242 |
243 |
244 | File System Access
245 |
246 |
247 | PostgreSQL Connection
248 |
249 |
250 | More MCP Clients
251 | While this guide demonstrates MCP using Claude Desktop as a client, several other applications support MCP integration:
252 |
253 | Zed Editor
254 | A high-performance, multiplayer code editor with built-in MCP support for AI-powered coding assistance
255 |
256 | Cody
257 | Code intelligence platform featuring MCP integration for enhanced code search and analysis capabilities
258 |
259 | Each host application may implement MCP features differently or support different capabilities. Check their respective documentation for specific setup instructions and supported features.
260 |
261 |
262 | Troubleshooting
263 |
264 | Nothing showing up in Claude Desktop?
265 |
266 |
267 | MCP or database errors?
268 |
269 |
270 | Next steps
271 |
272 |
273 | Your First MCP Server
274 | Python
275 | Create a simple MCP server in Python in 15 minutes
276 |
277 | Let’s build your first MCP server in Python! We’ll create a weather server that provides current weather data as a resource and lets Claude fetch forecasts using tools.
278 |
279 | This guide uses the OpenWeatherMap API. You’ll need a free API key from OpenWeatherMap to follow along.
280 |
281 |
282 | Prerequisites
283 | The following steps are for macOS. Guides for other platforms are coming soon.
284 |
285 | 1
286 | Install Python
287 |
288 | You’ll need Python 3.10 or higher:
289 |
290 |
291 | python --version # Should be 3.10 or higher
292 | 2
293 | Install uv via homebrew
294 |
295 | See https://docs.astral.sh/uv/ for more information.
296 |
297 |
298 | brew install uv
299 | uv --version # Should be 0.4.18 or higher
300 | 3
301 | Create a new project using the MCP project creator
302 |
303 |
304 | uvx create-mcp-server --path weather_service
305 | cd weather_service
306 | 4
307 | Install additional dependencies
308 |
309 |
310 | uv add httpx python-dotenv
311 | 5
312 | Set up environment
313 |
314 | Create .env:
315 |
316 |
317 | OPENWEATHER_API_KEY=your-api-key-here
318 |
319 | Create your server
320 | 1
321 | Add the base imports and setup
322 |
323 | In weather_service/src/weather_service/server.py
324 |
325 |
326 | import os
327 | import json
328 | import logging
329 | from datetime import datetime, timedelta
330 | from collections.abc import Sequence
331 | from functools import lru_cache
332 | from typing import Any
333 |
334 | import httpx
335 | import asyncio
336 | from dotenv import load_dotenv
337 | from mcp.server import Server
338 | from mcp.types import (
339 | Resource,
340 | Tool,
341 | TextContent,
342 | ImageContent,
343 | EmbeddedResource,
344 | LoggingLevel
345 | )
346 | from pydantic import AnyUrl
347 |
348 | # Load environment variables
349 | load_dotenv()
350 |
351 | # Configure logging
352 | logging.basicConfig(level=logging.INFO)
353 | logger = logging.getLogger("weather-server")
354 |
355 | # API configuration
356 | API_KEY = os.getenv("OPENWEATHER_API_KEY")
357 | if not API_KEY:
358 | raise ValueError("OPENWEATHER_API_KEY environment variable required")
359 |
360 | API_BASE_URL = "http://api.openweathermap.org/data/2.5"
361 | DEFAULT_CITY = "London"
362 | CURRENT_WEATHER_ENDPOINT = "weather"
363 | FORECAST_ENDPOINT = "forecast"
364 |
365 | # The rest of our server implementation will go here
366 | 2
367 | Add weather fetching functionality
368 |
369 | Add this functionality:
370 |
371 |
372 | # Create reusable params
373 | http_params = {
374 | "appid": API_KEY,
375 | "units": "metric"
376 | }
377 |
378 | async def fetch_weather(city: str) -> dict[str, Any]:
379 | async with httpx.AsyncClient() as client:
380 | response = await client.get(
381 |             f"{API_BASE_URL}/{CURRENT_WEATHER_ENDPOINT}",
382 | params={"q": city, **http_params}
383 | )
384 | response.raise_for_status()
385 | data = response.json()
386 |
387 | return {
388 | "temperature": data["main"]["temp"],
389 | "conditions": data["weather"][0]["description"],
390 | "humidity": data["main"]["humidity"],
391 | "wind_speed": data["wind"]["speed"],
392 | "timestamp": datetime.now().isoformat()
393 | }
394 |
395 |
396 | app = Server("weather-server")
397 | 3
398 | Implement resource handlers
399 |
400 | Add these resource-related handlers to the server module:
401 |
402 |
403 | app = Server("weather-server")
404 |
405 | @app.list_resources()
406 | async def list_resources() -> list[Resource]:
407 | """List available weather resources."""
408 | uri = AnyUrl(f"weather://{DEFAULT_CITY}/current")
409 | return [
410 | Resource(
411 | uri=uri,
412 | name=f"Current weather in {DEFAULT_CITY}",
413 | mimeType="application/json",
414 | description="Real-time weather data"
415 | )
416 | ]
417 |
418 | @app.read_resource()
419 | async def read_resource(uri: AnyUrl) -> str:
420 | """Read current weather data for a city."""
421 | city = DEFAULT_CITY
422 | if str(uri).startswith("weather://") and str(uri).endswith("/current"):
423 | city = str(uri).split("/")[-2]
424 | else:
425 | raise ValueError(f"Unknown resource: {uri}")
426 |
427 | try:
428 | weather_data = await fetch_weather(city)
429 | return json.dumps(weather_data, indent=2)
430 | except httpx.HTTPError as e:
431 | raise RuntimeError(f"Weather API error: {str(e)}")
432 |
433 | 4
434 | Implement tool handlers
435 |
436 | Add these tool-related handlers:
437 |
438 |
439 | app = Server("weather-server")
440 |
441 | # Resource implementation ...
442 |
443 | @app.list_tools()
444 | async def list_tools() -> list[Tool]:
445 | """List available weather tools."""
446 | return [
447 | Tool(
448 | name="get_forecast",
449 | description="Get weather forecast for a city",
450 | inputSchema={
451 | "type": "object",
452 | "properties": {
453 | "city": {
454 | "type": "string",
455 | "description": "City name"
456 | },
457 | "days": {
458 | "type": "number",
459 | "description": "Number of days (1-5)",
460 | "minimum": 1,
461 | "maximum": 5
462 | }
463 | },
464 | "required": ["city"]
465 | }
466 | )
467 | ]
468 |
469 | @app.call_tool()
470 | async def call_tool(name: str, arguments: Any) -> Sequence[TextContent | ImageContent | EmbeddedResource]:
471 | """Handle tool calls for weather forecasts."""
472 | if name != "get_forecast":
473 | raise ValueError(f"Unknown tool: {name}")
474 |
475 | if not isinstance(arguments, dict) or "city" not in arguments:
476 | raise ValueError("Invalid forecast arguments")
477 |
478 | city = arguments["city"]
479 | days = min(int(arguments.get("days", 3)), 5)
480 |
481 | try:
482 | async with httpx.AsyncClient() as client:
483 | response = await client.get(
484 | f"{API_BASE_URL}/{FORECAST_ENDPOINT}",
485 | params={
486 | "q": city,
487 | "cnt": days * 8, # API returns 3-hour intervals
488 | **http_params,
489 | }
490 | )
491 | response.raise_for_status()
492 | data = response.json()
493 |
494 | forecasts = []
495 | for i in range(0, len(data["list"]), 8):
496 | day_data = data["list"][i]
497 | forecasts.append({
498 | "date": day_data["dt_txt"].split()[0],
499 | "temperature": day_data["main"]["temp"],
500 | "conditions": day_data["weather"][0]["description"]
501 | })
502 |
503 | return [
504 | TextContent(
505 | type="text",
506 | text=json.dumps(forecasts, indent=2)
507 | )
508 | ]
509 |     except httpx.HTTPError as e:
510 | logger.error(f"Weather API error: {str(e)}")
511 | raise RuntimeError(f"Weather API error: {str(e)}")
512 | 5
513 | Add the main function
514 |
515 | Add this to the end of weather_service/src/weather_service/server.py:
516 |
517 |
518 | async def main():
519 | # Import here to avoid issues with event loops
520 | from mcp.server.stdio import stdio_server
521 |
522 | async with stdio_server() as (read_stream, write_stream):
523 | await app.run(
524 | read_stream,
525 | write_stream,
526 | app.create_initialization_options()
527 | )
528 | 6
529 | Check your entry point in __init__.py
530 |
531 | Add this to the end of weather_service/src/weather_service/__init__.py:
532 |
533 |
534 | from . import server
535 | import asyncio
536 |
537 | def main():
538 | """Main entry point for the package."""
539 | asyncio.run(server.main())
540 |
541 | # Optionally expose other important items at package level
542 | __all__ = ['main', 'server']
543 |
544 | Connect to Claude Desktop
545 | 1
546 | Update Claude config
547 |
548 | Add to claude_desktop_config.json:
549 |
550 |
551 | {
552 | "mcpServers": {
553 | "weather": {
554 | "command": "uv",
555 | "args": [
556 | "--directory",
557 | "path/to/your/project",
558 | "run",
559 | "weather-service"
560 | ],
561 | "env": {
562 | "OPENWEATHER_API_KEY": "your-api-key"
563 | }
564 | }
565 | }
566 | }
567 | 2
568 | Restart Claude
569 |
570 | Quit Claude completely
571 |
572 | Start Claude again
573 |
574 | Look for your weather server in the 🔌 menu
575 |
576 |
577 | Try it out!
578 |
579 | Check Current Weather
580 |
581 |
582 | Get a Forecast
583 |
584 |
585 | Compare Weather
586 |
587 |
588 | Understanding the code
589 | Type Hints
590 | Resources
591 | Tools
592 | Server Structure
593 |
594 | async def read_resource(self, uri: str) -> ReadResourceResult:
595 | # ...
596 | Python type hints help catch errors early and improve code maintainability.
597 |
598 |
599 | Best practices
600 | Error Handling
601 |
602 | try:
603 | async with httpx.AsyncClient() as client:
604 | response = await client.get(..., params={..., **http_params})
605 | response.raise_for_status()
606 | except httpx.HTTPError as e:
607 | raise McpError(
608 | ErrorCode.INTERNAL_ERROR,
609 | f"API error: {str(e)}"
610 | )
611 | Type Validation
612 |
613 | if not isinstance(args, dict) or "city" not in args:
614 | raise McpError(
615 | ErrorCode.INVALID_PARAMS,
616 | "Invalid forecast arguments"
617 | )
618 | Environment Variables
619 |
620 | if not API_KEY:
621 | raise ValueError("OPENWEATHER_API_KEY is required")
622 |
623 | Available transports
624 | While this guide uses stdio transport, MCP supports additional transport options:
625 |
626 |
627 | SSE (Server-Sent Events)
628 |
629 | from mcp.server.sse import SseServerTransport
630 | from starlette.applications import Starlette
631 | from starlette.routing import Route
632 |
633 | # Create SSE transport with endpoint
634 | sse = SseServerTransport("/messages")
635 |
636 | # Handler for SSE connections
637 | async def handle_sse(scope, receive, send):
638 | async with sse.connect_sse(scope, receive, send) as streams:
639 | await app.run(
640 | streams[0], streams[1], app.create_initialization_options()
641 | )
642 |
643 | # Handler for client messages
644 | async def handle_messages(scope, receive, send):
645 | await sse.handle_post_message(scope, receive, send)
646 |
647 | # Create the Starlette app (renamed so it doesn't shadow the MCP Server)
648 | starlette_app = Starlette(
649 |     debug=True,
650 |     routes=[
651 |         Route("/sse", endpoint=handle_sse),
652 |         Route("/messages", endpoint=handle_messages, methods=["POST"]),
653 |     ],
654 | )
655 | 
656 | # Run with any ASGI server
657 | import uvicorn
658 | uvicorn.run(starlette_app, host="0.0.0.0", port=8000)
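On the client side, the Python SDK provides a matching SSE connector. A minimal sketch, assuming the SDK's sse_client helper and the /sse endpoint registered above:

from mcp import ClientSession
from mcp.client.sse import sse_client

async def connect():
    # Connect to the SSE endpoint exposed by the Starlette app above
    async with sse_client("http://localhost:8000/sse") as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            # Exchange messages with the server as usual...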
659 |
660 | Advanced features
661 | 1
662 | Understanding Request Context
663 |
664 | The request context provides access to the current request’s metadata and the active client session. Access it through the server’s request_context property (app.request_context below):
665 |
666 |
667 | @app.call_tool()
668 | async def call_tool(name: str, arguments: Any) -> Sequence[TextContent]:
669 | # Access the current request context
670 |     ctx = app.request_context
671 |
672 | # Get request metadata like progress tokens
673 | if progress_token := ctx.meta.progressToken:
674 | # Send progress notifications via the session
675 | await ctx.session.send_progress_notification(
676 | progress_token=progress_token,
677 | progress=0.5,
678 | total=1.0
679 | )
680 |
681 |     # Sample from the LLM client (SamplingMessage comes from mcp.types)
682 | result = await ctx.session.create_message(
683 | messages=[
684 | SamplingMessage(
685 | role="user",
686 | content=TextContent(
687 | type="text",
688 | text="Analyze this weather data: " + json.dumps(arguments)
689 | )
690 | )
691 | ],
692 | max_tokens=100
693 | )
694 |
695 | return [TextContent(type="text", text=result.content.text)]
696 | 2
697 | Add caching
698 |
699 |
700 | # Cache settings
701 | cache_timeout = timedelta(minutes=15)
702 | last_cache_time = None
703 | cached_weather = None
704 |
705 | async def fetch_weather(city: str) -> dict[str, Any]:
706 | global cached_weather, last_cache_time
707 |
708 | now = datetime.now()
709 | if (cached_weather is None or
710 | last_cache_time is None or
711 | now - last_cache_time > cache_timeout):
712 |
713 | async with httpx.AsyncClient() as client:
714 | response = await client.get(
715 | f"{API_BASE_URL}/{CURRENT_WEATHER_ENDPOINT}",
716 | params={"q": city, **http_params}
717 | )
718 | response.raise_for_status()
719 | data = response.json()
720 |
721 | cached_weather = {
722 | "temperature": data["main"]["temp"],
723 | "conditions": data["weather"][0]["description"],
724 | "humidity": data["main"]["humidity"],
725 | "wind_speed": data["wind"]["speed"],
726 | "timestamp": datetime.now().isoformat()
727 | }
728 | last_cache_time = now
729 |
730 | return cached_weather
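Note that this cache holds a single entry, so a request for a different city inside the timeout window would return the previously cached city's data. A per-city variant fixes that (a sketch; fetch_weather_uncached is a hypothetical rename of the uncached fetch from step 2):

# Per-city cache: maps city name -> (fetch time, weather data)
weather_cache: dict[str, tuple[datetime, dict[str, Any]]] = {}

async def fetch_weather(city: str) -> dict[str, Any]:
    now = datetime.now()
    cached = weather_cache.get(city)
    if cached and now - cached[0] <= cache_timeout:
        return cached[1]
    data = await fetch_weather_uncached(city)  # hypothetical: the original, uncached fetch
    weather_cache[city] = (now, data)
    return data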
731 | 3
732 | Add progress notifications
733 |
734 |
735 | @app.call_tool()
736 | async def call_tool(name: str, arguments: Any) -> CallToolResult:
737 |     if progress_token := app.request_context.meta.progressToken:
738 |         # Send progress notifications
739 |         await app.request_context.session.send_progress_notification(
740 | progress_token=progress_token,
741 | progress=1,
742 | total=2
743 | )
744 |
745 | # Fetch data...
746 |
747 |     await app.request_context.session.send_progress_notification(
748 | progress_token=progress_token,
749 | progress=2,
750 | total=2
751 | )
752 |
753 | # Rest of the method implementation...
754 | 4
755 | Add logging support
756 |
757 |
758 | # Set up logging
759 | logger = logging.getLogger("weather-server")
760 | logger.setLevel(logging.INFO)
761 |
762 | @app.set_logging_level()
763 | async def set_logging_level(level: LoggingLevel) -> EmptyResult:  # EmptyResult: from mcp.types
764 | logger.setLevel(level.upper())
765 | await app.request_context.session.send_log_message(
766 | level="info",
767 | data=f"Log level set to {level}",
768 | logger="weather-server"
769 | )
770 | return EmptyResult()
771 |
772 | # Use logger throughout the code
773 | # For example:
774 | # logger.info("Weather data fetched successfully")
775 | # logger.error(f"Error fetching weather data: {str(e)}")
776 | 5
777 | Add resource templates
778 |
779 |
780 | @app.list_resources()
781 | async def list_resources() -> ListResourcesResult:
782 | return ListResourcesResult(
783 | resources=[...],
784 | resourceTemplates=[
785 | ResourceTemplate(
786 | uriTemplate="weather://{city}/current",
787 | name="Current weather for any city",
788 | mimeType="application/json"
789 | )
790 | ]
791 | )
792 |
793 | Testing
794 | 1
795 | Create test file
796 |
797 | Create tests/weather_test.py:
798 |
799 |
800 | import pytest
801 | import os
802 | from unittest.mock import patch, Mock, AsyncMock
803 | from datetime import datetime
804 | import json
805 | from pydantic import AnyUrl
806 | os.environ["OPENWEATHER_API_KEY"] = "TEST"  # must be set before the server module is imported
807 |
808 | from weather_service.server import (
809 | fetch_weather,
810 | read_resource,
811 | call_tool,
812 | list_resources,
813 | list_tools,
814 | DEFAULT_CITY
815 | )
816 |
817 | @pytest.fixture
818 | def anyio_backend():
819 | return "asyncio"
820 |
821 | @pytest.fixture
822 | def mock_weather_response():
823 | return {
824 | "main": {
825 | "temp": 20.5,
826 | "humidity": 65
827 | },
828 | "weather": [
829 | {"description": "scattered clouds"}
830 | ],
831 | "wind": {
832 | "speed": 3.6
833 | }
834 | }
835 |
836 | @pytest.fixture
837 | def mock_forecast_response():
838 | return {
839 | "list": [
840 | {
841 | "dt_txt": "2024-01-01 12:00:00",
842 | "main": {"temp": 18.5},
843 | "weather": [{"description": "sunny"}]
844 | },
845 | {
846 | "dt_txt": "2024-01-02 12:00:00",
847 | "main": {"temp": 17.2},
848 | "weather": [{"description": "cloudy"}]
849 | }
850 | ]
851 | }
852 |
853 | @pytest.mark.anyio
854 | async def test_fetch_weather(mock_weather_response):
855 |     with patch('httpx.AsyncClient.get', new_callable=AsyncMock) as mock_get:
856 | mock_get.return_value.json.return_value = mock_weather_response
857 | mock_get.return_value.raise_for_status = Mock()
858 |
859 | weather = await fetch_weather("London")
860 |
861 | assert weather["temperature"] == 20.5
862 | assert weather["conditions"] == "scattered clouds"
863 | assert weather["humidity"] == 65
864 | assert weather["wind_speed"] == 3.6
865 | assert "timestamp" in weather
866 |
867 | @pytest.mark.anyio
868 | async def test_read_resource():
869 | with patch('weather_service.server.fetch_weather') as mock_fetch:
870 | mock_fetch.return_value = {
871 | "temperature": 20.5,
872 | "conditions": "clear sky",
873 | "timestamp": datetime.now().isoformat()
874 | }
875 |
876 | uri = AnyUrl("weather://London/current")
877 | result = await read_resource(uri)
878 |
879 | assert isinstance(result, str)
880 | assert "temperature" in result
881 | assert "clear sky" in result
882 |
883 | @pytest.mark.anyio
884 | async def test_call_tool(mock_forecast_response):
885 | class Response():
886 | def raise_for_status(self):
887 | pass
888 |
889 | def json(self):
890 |             return mock_forecast_response
891 |
892 | class AsyncClient():
893 |         async def __aenter__(self):
894 | return self
895 |
896 | async def __aexit__(self, *exc_info):
897 | pass
898 |
899 | async def get(self, *args, **kwargs):
900 | return Response()
901 |
902 |     with patch('httpx.AsyncClient', new=AsyncClient):
903 | result = await call_tool("get_forecast", {"city": "London", "days": 2})
904 |
905 | assert len(result) == 1
906 | assert result[0].type == "text"
907 | forecast_data = json.loads(result[0].text)
908 | assert len(forecast_data) == 1
909 | assert forecast_data[0]["temperature"] == 18.5
910 | assert forecast_data[0]["conditions"] == "sunny"
911 |
912 | @pytest.mark.anyio
913 | async def test_list_resources():
914 | resources = await list_resources()
915 | assert len(resources) == 1
916 | assert resources[0].name == f"Current weather in {DEFAULT_CITY}"
917 | assert resources[0].mimeType == "application/json"
918 |
919 | @pytest.mark.anyio
920 | async def test_list_tools():
921 | tools = await list_tools()
922 | assert len(tools) == 1
923 | assert tools[0].name == "get_forecast"
924 | assert "city" in tools[0].inputSchema["properties"]
925 | 2
926 | Run tests
927 |
928 |
929 | uv add --dev pytest
930 | uv run pytest
931 |
932 | Troubleshooting
933 |
934 | Installation issues
935 |
936 | # Check Python version
937 | python --version
938 |
939 | # Reinstall dependencies
940 | uv sync --reinstall
941 |
942 | Type checking
943 |
944 | # Install pyright
945 | uv add --dev pyright
946 |
947 | # Run type checker
948 | uv run pyright src
949 |
950 |
951 | Clients
952 | A list of applications that support MCP integrations
953 |
954 | This page provides an overview of applications that support the Model Context Protocol (MCP). Each client may support different MCP features, allowing for varying levels of integration with MCP servers.
955 |
956 |
957 | Feature support matrix
958 | Client              Resources  Prompts  Tools  Sampling  Roots  Notes
959 | Claude Desktop App  ✅         ✅       ✅     ❌        ❌     Full support for all MCP features
960 | Zed                 ❌         ✅       ❌     ❌        ❌     Prompts appear as slash commands
961 | Sourcegraph Cody    ✅         ❌       ❌     ❌        ❌     Supports resources through OpenCTX
962 |
963 | Client details
964 |
965 | Claude Desktop App
966 | The Claude desktop application provides comprehensive support for MCP, enabling deep integration with local tools and data sources.
967 |
968 | Key features:
969 |
970 | Full support for resources, allowing attachment of local files and data
971 | Support for prompt templates
972 | Tool integration for executing commands and scripts
973 | Local server connections for enhanced privacy and security
974 | ⓘ Note: The Claude.ai web application does not currently support MCP. MCP features are only available in the desktop application.
975 |
976 |
977 | Zed
978 | Zed is a high-performance code editor with built-in MCP support, focusing on prompt templates and tool integration.
979 |
980 | Key features:
981 |
982 | Prompt templates surface as slash commands in the editor
983 | Tool integration for enhanced coding workflows
984 | Tight integration with editor features and workspace context
985 | Does not support MCP resources
986 |
987 | Sourcegraph Cody
988 | Cody is Sourcegraph’s AI coding assistant, which implements MCP through OpenCTX.
989 |
990 | Key features:
991 |
992 | Support for MCP resources
993 | Integration with Sourcegraph’s code intelligence
994 | Uses OpenCTX as an abstraction layer
995 | Future support planned for additional MCP features
996 |
997 | Adding MCP support to your application
998 | If you’ve added MCP support to your application, we encourage you to submit a pull request to add it to this list. MCP integration can provide your users with powerful contextual AI capabilities and make your application part of the growing MCP ecosystem.
999 |
1000 | Benefits of adding MCP support:
1001 |
1002 | Enable users to bring their own context and tools
1003 | Join a growing ecosystem of interoperable AI applications
1004 | Provide users with flexible integration options
1005 | Support local-first AI workflows
1006 | To get started with implementing MCP in your application, check out our Python or TypeScript SDK documentation.
1007 |
1008 |
1009 | Updates and corrections
1010 | This list is maintained by the community. If you notice any inaccuracies or would like to update information about MCP support in your application, please submit a pull request or open an issue in our documentation repository.
1011 |
1012 | Concepts
1013 | Core architecture
1014 | Understand how MCP connects clients, servers, and LLMs
1015 |
1016 | The Model Context Protocol (MCP) is built on a flexible, extensible architecture that enables seamless communication between LLM applications and integrations. This document covers the core architectural components and concepts.
1017 |
1018 |
1019 | Overview
1020 | MCP follows a client-server architecture where:
1021 |
1022 | Hosts are LLM applications (like Claude Desktop or IDEs) that initiate connections
1023 | Clients maintain 1:1 connections with servers, inside the host application
1024 | Servers provide context, tools, and prompts to clients
1025 | [Diagram: the host process (e.g., Claude Desktop) contains MCP clients that talk through the transport layer to MCP servers, each running in its own server process]
1034 |
1035 | Core components
1036 |
1037 | Protocol layer
1038 | The protocol layer handles message framing, request/response linking, and high-level communication patterns.
1039 |
1040 | TypeScript
1041 | Python
1042 |
1043 | class Protocol<Request, Notification, Result> {
1044 | // Handle incoming requests
1045 | setRequestHandler<T>(schema: T, handler: (request: T, extra: RequestHandlerExtra) => Promise<Result>): void
1046 |
1047 | // Handle incoming notifications
1048 | setNotificationHandler<T>(schema: T, handler: (notification: T) => Promise<void>): void
1049 |
1050 | // Send requests and await responses
1051 | request<T>(request: Request, schema: T, options?: RequestOptions): Promise<T>
1052 |
1053 | // Send one-way notifications
1054 | notification(notification: Notification): Promise<void>
1055 | }
1056 | Key classes include:
1057 |
1058 | Protocol
1059 | Client
1060 | Server
1061 |
1062 | Transport layer
1063 | The transport layer handles the actual communication between clients and servers. MCP supports multiple transport mechanisms:
1064 |
1065 | Stdio transport
1066 |
1067 | Uses standard input/output for communication
1068 | Ideal for local processes
1069 | HTTP with SSE transport
1070 |
1071 | Uses Server-Sent Events for server-to-client messages
1072 | HTTP POST for client-to-server messages
1073 | All transports use JSON-RPC 2.0 to exchange messages. See the specification for detailed information about the Model Context Protocol message format.
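For instance, a request and its successful result are wrapped in JSON-RPC 2.0 envelopes like these (illustrative payloads shown as Python dicts; the SDKs handle this framing for you):

# A request from client to server, and the matching result (same "id")
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "resources/list",
    "params": {},
}
result = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"resources": []},
}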
1074 |
1075 |
1076 | Message types
1077 | MCP has these main types of messages:
1078 |
1079 | Requests expect a response from the other side:
1080 |
1081 |
1082 | interface Request {
1083 | method: string;
1084 | params?: { ... };
1085 | }
1086 | Notifications are one-way messages that don’t expect a response:
1087 |
1088 |
1089 | interface Notification {
1090 | method: string;
1091 | params?: { ... };
1092 | }
1093 | Results are successful responses to requests:
1094 |
1095 |
1096 | interface Result {
1097 | [key: string]: unknown;
1098 | }
1099 | Errors indicate that a request failed:
1100 |
1101 |
1102 | interface Error {
1103 | code: number;
1104 | message: string;
1105 | data?: unknown;
1106 | }
1107 |
1108 | Connection lifecycle
1109 |
1110 | 1. Initialization
1111 | [Sequence diagram: the client sends an initialize request, the server returns an initialize response, the client replies with an initialized notification, and the connection is ready for use]
1119 | Client sends initialize request with protocol version and capabilities
1120 | Server responds with its protocol version and capabilities
1121 | Client sends initialized notification as acknowledgment
1122 | Normal message exchange begins
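To make the handshake concrete, here is roughly what it looks like from the client side with the Python SDK (a sketch; it assumes the SDK's ClientSession, StdioServerParameters, and stdio_client helpers):

import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch the server as a subprocess and connect over stdio
    params = StdioServerParameters(
        command="uvx", args=["mcp-server-sqlite", "--db-path", "test.db"]
    )
    async with stdio_client(params) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            # Sends initialize, awaits the response, and sends the
            # initialized notification described above
            await session.initialize()
            # Normal message exchange can now begin
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())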
1123 |
1124 | 2. Message exchange
1125 | After initialization, the following patterns are supported:
1126 |
1127 | Request-Response: Client or server sends requests, the other responds
1128 | Notifications: Either party sends one-way messages
1129 |
1130 | 3. Termination
1131 | Either party can terminate the connection:
1132 |
1133 | Clean shutdown via close()
1134 | Transport disconnection
1135 | Error conditions
1136 |
1137 | Error handling
1138 | MCP defines these standard error codes:
1139 |
1140 |
1141 | enum ErrorCode {
1142 | // Standard JSON-RPC error codes
1143 | ParseError = -32700,
1144 | InvalidRequest = -32600,
1145 | MethodNotFound = -32601,
1146 | InvalidParams = -32602,
1147 | InternalError = -32603
1148 | }
1149 | SDKs and applications can define their own error codes above -32000.
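For reference, the same codes as a Python enum (a sketch; the SDKs define their own constants for these):

from enum import IntEnum

class ErrorCode(IntEnum):
    # Standard JSON-RPC error codes
    PARSE_ERROR = -32700
    INVALID_REQUEST = -32600
    METHOD_NOT_FOUND = -32601
    INVALID_PARAMS = -32602
    INTERNAL_ERROR = -32603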
1150 |
1151 | Errors are propagated through:
1152 |
1153 | Error responses to requests
1154 | Error events on transports
1155 | Protocol-level error handlers
1156 |
1157 | Implementation example
1158 | Here’s a basic example of implementing an MCP server:
1159 |
1160 | TypeScript
1161 | Python
1162 |
1163 | import asyncio
1164 | import mcp.types as types
1165 | from mcp.server import Server
1166 | from mcp.server.stdio import stdio_server
1167 |
1168 | app = Server("example-server")
1169 |
1170 | @app.list_resources()
1171 | async def list_resources() -> list[types.Resource]:
1172 | return [
1173 | types.Resource(
1174 | uri="example://resource",
1175 | name="Example Resource"
1176 | )
1177 | ]
1178 |
1179 | async def main():
1180 | async with stdio_server() as streams:
1181 | await app.run(
1182 | streams[0],
1183 | streams[1],
1184 | app.create_initialization_options()
1185 | )
1186 |
1187 | if __name__ == "__main__":
1188 |     asyncio.run(main())
1189 |
1190 | Best practices
1191 |
1192 | Transport selection
1193 | Local communication
1194 |
1195 | Use stdio transport for local processes
1196 | Efficient for same-machine communication
1197 | Simple process management
1198 | Remote communication
1199 |
1200 | Use SSE for scenarios requiring HTTP compatibility
1201 | Consider security implications including authentication and authorization
1202 |
1203 | Message handling
1204 | Request processing
1205 |
1206 | Validate inputs thoroughly
1207 | Use type-safe schemas
1208 | Handle errors gracefully
1209 | Implement timeouts
1210 | Progress reporting
1211 |
1212 | Use progress tokens for long operations
1213 | Report progress incrementally
1214 | Include total progress when known
1215 | Error management
1216 |
1217 | Use appropriate error codes
1218 | Include helpful error messages
1219 | Clean up resources on errors
1220 |
1221 | Security considerations
1222 | Transport security
1223 |
1224 | Use TLS for remote connections
1225 | Validate connection origins
1226 | Implement authentication when needed
1227 | Message validation
1228 |
1229 | Validate all incoming messages
1230 | Sanitize inputs
1231 | Check message size limits
1232 | Verify JSON-RPC format
1233 | Resource protection
1234 |
1235 | Implement access controls
1236 | Validate resource paths
1237 | Monitor resource usage
1238 | Rate limit requests
1239 | Error handling
1240 |
1241 | Don’t leak sensitive information
1242 | Log security-relevant errors
1243 | Implement proper cleanup
1244 | Handle DoS scenarios
1245 |
1246 | Debugging and monitoring
1247 | Logging
1248 |
1249 | Log protocol events
1250 | Track message flow
1251 | Monitor performance
1252 | Record errors
1253 | Diagnostics
1254 |
1255 | Implement health checks
1256 | Monitor connection state
1257 | Track resource usage
1258 | Profile performance
1259 | Testing
1260 |
1261 | Test different transports
1262 | Verify error handling
1263 | Check edge cases
1264 | Load test servers
1265 | Inspector
1266 |
1267 | Concepts
1268 | Resources
1269 | Expose data and content from your servers to LLMs
1270 |
1271 | Resources are a core primitive in the Model Context Protocol (MCP) that allow servers to expose data and content that can be read by clients and used as context for LLM interactions.
1272 |
1273 | Resources are designed to be application-controlled, meaning that the client application can decide how and when they should be used.
1274 |
1275 | For example, one application may require users to explicitly select resources, while another could automatically select them based on heuristics or even at the discretion of the AI model itself.
1276 |
1277 |
1278 | Overview
1279 | Resources represent any kind of data that an MCP server wants to make available to clients. This can include:
1280 |
1281 | File contents
1282 | Database records
1283 | API responses
1284 | Live system data
1285 | Screenshots and images
1286 | Log files
1287 | And more
1288 | Each resource is identified by a unique URI and can contain either text or binary data.
1289 |
1290 |
1291 | Resource URIs
1292 | Resources are identified using URIs that follow this format:
1293 |
1294 |
1295 | [protocol]://[host]/[path]
1296 | For example:
1297 |
1298 | file:///home/user/documents/report.pdf
1299 | postgres://database/customers/schema
1300 | screen://localhost/display1
1301 | The protocol and path structure is defined by the MCP server implementation. Servers can define their own custom URI schemes.
1302 |
1303 |
1304 | Resource types
1305 | Resources can contain two types of content:
1306 |
1307 |
1308 | Text resources
1309 | Text resources contain UTF-8 encoded text data. These are suitable for:
1310 |
1311 | Source code
1312 | Configuration files
1313 | Log files
1314 | JSON/XML data
1315 | Plain text
1316 |
1317 | Binary resources
1318 | Binary resources contain raw binary data encoded in base64. These are suitable for:
1319 |
1320 | Images
1321 | PDFs
1322 | Audio files
1323 | Video files
1324 | Other non-text formats
1325 |
1326 | Resource discovery
1327 | Clients can discover available resources through two main methods:
1328 |
1329 |
1330 | Direct resources
1331 | Servers expose a list of concrete resources via the resources/list endpoint. Each resource includes:
1332 |
1333 |
1334 | {
1335 | uri: string; // Unique identifier for the resource
1336 | name: string; // Human-readable name
1337 | description?: string; // Optional description
1338 | mimeType?: string; // Optional MIME type
1339 | }
1340 |
1341 | Resource templates
1342 | For dynamic resources, servers can expose URI templates that clients can use to construct valid resource URIs:
1343 |
1344 |
1345 | {
1346 | uriTemplate: string; // URI template following RFC 6570
1347 | name: string; // Human-readable name for this type
1348 | description?: string; // Optional description
1349 | mimeType?: string; // Optional MIME type for all matching resources
1350 | }
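Expanding such a template into a concrete URI is plain string substitution in the single-variable case (a minimal sketch; servers handling full RFC 6570 syntax may prefer a dedicated library such as uritemplate):

def expand_uri_template(template: str, **values: str) -> str:
    """Naive RFC 6570 'simple string expansion' for {var} placeholders."""
    for name, value in values.items():
        template = template.replace("{" + name + "}", value)
    return template

print(expand_uri_template("weather://{city}/current", city="London"))
# -> weather://London/current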
1351 |
1352 | Reading resources
1353 | To read a resource, clients make a resources/read request with the resource URI.
1354 |
1355 | The server responds with a list of resource contents:
1356 |
1357 |
1358 | {
1359 | contents: [
1360 | {
1361 | uri: string; // The URI of the resource
1362 | mimeType?: string; // Optional MIME type
1363 |
1364 | // One of:
1365 | text?: string; // For text resources
1366 | blob?: string; // For binary resources (base64 encoded)
1367 | }
1368 | ]
1369 | }
1370 | Servers may return multiple resources in response to one resources/read request. This could be used, for example, to return a list of files inside a directory when the directory is read.
1371 |
1372 |
1373 | Resource updates
1374 | MCP supports real-time updates for resources through two mechanisms:
1375 |
1376 |
1377 | List changes
1378 | Servers can notify clients when their list of available resources changes via the notifications/resources/list_changed notification.
1379 |
1380 |
1381 | Content changes
1382 | Clients can subscribe to updates for specific resources:
1383 |
1384 | Client sends resources/subscribe with resource URI
1385 | Server sends notifications/resources/updated when the resource changes
1386 | Client can fetch latest content with resources/read
1387 | Client can unsubscribe with resources/unsubscribe
1388 |
1389 | Example implementation
1390 | Here’s a simple example of implementing resource support in an MCP server:
1391 |
1392 | TypeScript
1393 | Python
1394 |
1395 | app = Server("example-server")
1396 |
1397 | @app.list_resources()
1398 | async def list_resources() -> list[types.Resource]:
1399 | return [
1400 | types.Resource(
1401 | uri="file:///logs/app.log",
1402 | name="Application Logs",
1403 | mimeType="text/plain"
1404 | )
1405 | ]
1406 |
1407 | @app.read_resource()
1408 | async def read_resource(uri: AnyUrl) -> str:
1409 | if str(uri) == "file:///logs/app.log":
1410 |         log_contents = await read_log_file()  # read_log_file() is a stand-in for your own loader
1411 | return log_contents
1412 |
1413 | raise ValueError("Resource not found")
1414 |
1415 | # Start server
1416 | async with stdio_server() as streams:
1417 | await app.run(
1418 | streams[0],
1419 | streams[1],
1420 | app.create_initialization_options()
1421 | )
1422 |
1423 | Best practices
1424 | When implementing resource support:
1425 |
1426 | Use clear, descriptive resource names and URIs
1427 | Include helpful descriptions to guide LLM understanding
1428 | Set appropriate MIME types when known
1429 | Implement resource templates for dynamic content
1430 | Use subscriptions for frequently changing resources
1431 | Handle errors gracefully with clear error messages
1432 | Consider pagination for large resource lists
1433 | Cache resource contents when appropriate
1434 | Validate URIs before processing
1435 | Document your custom URI schemes
1436 |
1437 | Security considerations
1438 | When exposing resources:
1439 |
1440 | Validate all resource URIs
1441 | Implement appropriate access controls
1442 | Sanitize file paths to prevent directory traversal
1443 | Be cautious with binary data handling
1444 | Consider rate limiting for resource reads
1445 | Audit resource access
1446 | Encrypt sensitive data in transit
1447 | Validate MIME types
1448 | Implement timeouts for long-running reads
1449 | Handle resource cleanup appropriately
1450 |
1451 | Concepts
1452 | Prompts
1453 | Create reusable prompt templates and workflows
1454 |
1455 | Prompts enable servers to define reusable prompt templates and workflows that clients can easily surface to users and LLMs. They provide a powerful way to standardize and share common LLM interactions.
1456 |
1457 | Prompts are designed to be user-controlled, meaning they are exposed from servers to clients with the intention of the user being able to explicitly select them for use.
1458 |
1459 |
1460 | Overview
1461 | Prompts in MCP are predefined templates that can:
1462 |
1463 | Accept dynamic arguments
1464 | Include context from resources
1465 | Chain multiple interactions
1466 | Guide specific workflows
1467 | Surface as UI elements (like slash commands)
1468 |
1469 | Prompt structure
1470 | Each prompt is defined with:
1471 |
1472 |
1473 | {
1474 | name: string; // Unique identifier for the prompt
1475 | description?: string; // Human-readable description
1476 | arguments?: [ // Optional list of arguments
1477 | {
1478 | name: string; // Argument identifier
1479 | description?: string; // Argument description
1480 | required?: boolean; // Whether argument is required
1481 | }
1482 | ]
1483 | }
1484 |
1485 | Discovering prompts
1486 | Clients can discover available prompts through the prompts/list endpoint:
1487 |
1488 |
1489 | // Request
1490 | {
1491 | method: "prompts/list"
1492 | }
1493 |
1494 | // Response
1495 | {
1496 | prompts: [
1497 | {
1498 | name: "analyze-code",
1499 | description: "Analyze code for potential improvements",
1500 | arguments: [
1501 | {
1502 | name: "language",
1503 | description: "Programming language",
1504 | required: true
1505 | }
1506 | ]
1507 | }
1508 | ]
1509 | }
1510 |
1511 | Using prompts
1512 | To use a prompt, clients make a prompts/get request:
1513 |
1514 |
1515 | // Request
1516 | {
1517 | method: "prompts/get",
1518 | params: {
1519 | name: "analyze-code",
1520 | arguments: {
1521 | language: "python"
1522 | }
1523 | }
1524 | }
1525 |
1526 | // Response
1527 | {
1528 | description: "Analyze Python code for potential improvements",
1529 | messages: [
1530 | {
1531 | role: "user",
1532 | content: {
1533 | type: "text",
1534 | text: "Please analyze the following Python code for potential improvements:\n\n```python\ndef calculate_sum(numbers):\n total = 0\n for num in numbers:\n total = total + num\n return total\n\nresult = calculate_sum([1, 2, 3, 4, 5])\nprint(result)\n```"
1535 | }
1536 | }
1537 | ]
1538 | }
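From a Python client this is a single call (a sketch; it assumes an established ClientSession, as in the core architecture docs):

# Fetch the rendered prompt with its arguments filled in
result = await session.get_prompt(
    "analyze-code", arguments={"language": "python"}
)
for message in result.messages:
    print(message.role, message.content)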
1539 |
1540 | Dynamic prompts
1541 | Prompts can be dynamic and include:
1542 |
1543 |
1544 | Embedded resource context
1545 |
1546 | {
1547 | "name": "analyze-project",
1548 | "description": "Analyze project logs and code",
1549 | "arguments": [
1550 | {
1551 | "name": "timeframe",
1552 | "description": "Time period to analyze logs",
1553 | "required": true
1554 | },
1555 | {
1556 | "name": "fileUri",
1557 | "description": "URI of code file to review",
1558 | "required": true
1559 | }
1560 | ]
1561 | }
1562 | When handling the prompts/get request:
1563 |
1564 |
1565 | {
1566 | "messages": [
1567 | {
1568 | "role": "user",
1569 | "content": {
1570 | "type": "text",
1571 | "text": "Analyze these system logs and the code file for any issues:"
1572 | }
1573 | },
1574 | {
1575 | "role": "user",
1576 | "content": {
1577 | "type": "resource",
1578 | "resource": {
1579 | "uri": "logs://recent?timeframe=1h",
1580 | "text": "[2024-03-14 15:32:11] ERROR: Connection timeout in network.py:127\n[2024-03-14 15:32:15] WARN: Retrying connection (attempt 2/3)\n[2024-03-14 15:32:20] ERROR: Max retries exceeded",
1581 | "mimeType": "text/plain"
1582 | }
1583 | }
1584 | },
1585 | {
1586 | "role": "user",
1587 | "content": {
1588 | "type": "resource",
1589 | "resource": {
1590 | "uri": "file:///path/to/code.py",
1591 | "text": "def connect_to_service(timeout=30):\n retries = 3\n for attempt in range(retries):\n try:\n return establish_connection(timeout)\n except TimeoutError:\n if attempt == retries - 1:\n raise\n time.sleep(5)\n\ndef establish_connection(timeout):\n # Connection implementation\n pass",
1592 | "mimeType": "text/x-python"
1593 | }
1594 | }
1595 | }
1596 | ]
1597 | }
1598 |
1599 | Multi-step workflows
1600 |
1601 | const debugWorkflow = {
1602 | name: "debug-error",
1603 | async getMessages(error: string) {
1604 | return [
1605 | {
1606 | role: "user",
1607 | content: {
1608 | type: "text",
1609 | text: `Here's an error I'm seeing: ${error}`
1610 | }
1611 | },
1612 | {
1613 | role: "assistant",
1614 | content: {
1615 | type: "text",
1616 | text: "I'll help analyze this error. What have you tried so far?"
1617 | }
1618 | },
1619 | {
1620 | role: "user",
1621 | content: {
1622 | type: "text",
1623 | text: "I've tried restarting the service, but the error persists."
1624 | }
1625 | }
1626 | ];
1627 | }
1628 | };
1629 |
1630 | Example implementation
1631 | Here’s a complete example of implementing prompts in an MCP server:
1632 |
1633 | TypeScript
1634 | Python
1635 |
1636 | from mcp.server import Server
1637 | import mcp.types as types
1638 |
1639 | # Define available prompts
1640 | PROMPTS = {
1641 | "git-commit": types.Prompt(
1642 | name="git-commit",
1643 | description="Generate a Git commit message",
1644 | arguments=[
1645 | types.PromptArgument(
1646 | name="changes",
1647 | description="Git diff or description of changes",
1648 | required=True
1649 | )
1650 | ],
1651 | ),
1652 | "explain-code": types.Prompt(
1653 | name="explain-code",
1654 | description="Explain how code works",
1655 | arguments=[
1656 | types.PromptArgument(
1657 | name="code",
1658 | description="Code to explain",
1659 | required=True
1660 | ),
1661 | types.PromptArgument(
1662 | name="language",
1663 | description="Programming language",
1664 | required=False
1665 | )
1666 | ],
1667 | )
1668 | }
1669 |
1670 | # Initialize server
1671 | app = Server("example-prompts-server")
1672 |
1673 | @app.list_prompts()
1674 | async def list_prompts() -> list[types.Prompt]:
1675 | return list(PROMPTS.values())
1676 |
1677 | @app.get_prompt()
1678 | async def get_prompt(
1679 | name: str, arguments: dict[str, str] | None = None
1680 | ) -> types.GetPromptResult:
1681 | if name not in PROMPTS:
1682 | raise ValueError(f"Prompt not found: {name}")
1683 |
1684 | if name == "git-commit":
1685 | changes = arguments.get("changes") if arguments else ""
1686 | return types.GetPromptResult(
1687 | messages=[
1688 | types.PromptMessage(
1689 | role="user",
1690 | content=types.TextContent(
1691 | type="text",
1692 | text=f"Generate a concise but descriptive commit message "
1693 | f"for these changes:\n\n{changes}"
1694 | )
1695 | )
1696 | ]
1697 | )
1698 |
1699 | if name == "explain-code":
1700 | code = arguments.get("code") if arguments else ""
1701 | language = arguments.get("language", "Unknown") if arguments else "Unknown"
1702 | return types.GetPromptResult(
1703 | messages=[
1704 | types.PromptMessage(
1705 | role="user",
1706 | content=types.TextContent(
1707 | type="text",
1708 | text=f"Explain how this {language} code works:\n\n{code}"
1709 | )
1710 | )
1711 | ]
1712 | )
1713 |
1714 | raise ValueError("Prompt implementation not found")
1715 |
1716 | Best practices
1717 | When implementing prompts:
1718 |
1719 | Use clear, descriptive prompt names
1720 | Provide detailed descriptions for prompts and arguments
1721 | Validate all required arguments
1722 | Handle missing arguments gracefully
1723 | Consider versioning for prompt templates
1724 | Cache dynamic content when appropriate
1725 | Implement error handling
1726 | Document expected argument formats
1727 | Consider prompt composability
1728 | Test prompts with various inputs
1729 |
1730 | UI integration
1731 | Prompts can be surfaced in client UIs as:
1732 |
1733 | Slash commands
1734 | Quick actions
1735 | Context menu items
1736 | Command palette entries
1737 | Guided workflows
1738 | Interactive forms
1739 |
1740 | Updates and changes
1741 | Servers can notify clients about prompt changes:
1742 |
1743 | Server capability: prompts.listChanged
1744 | Notification: notifications/prompts/list_changed
1745 | Client re-fetches prompt list
1746 |
1747 | Security considerations
1748 | When implementing prompts:
1749 |
1750 | Validate all arguments
1751 | Sanitize user input
1752 | Consider rate limiting
1753 | Implement access controls
1754 | Audit prompt usage
1755 | Handle sensitive data appropriately
1756 | Validate generated content
1757 | Implement timeouts
1758 | Consider prompt injection risks
1759 | Document security requirements
1760 |
1761 | Concepts
1762 | Tools
1763 | Enable LLMs to perform actions through your server
1764 |
1765 | Tools are a powerful primitive in the Model Context Protocol (MCP) that enable servers to expose executable functionality to clients. Through tools, LLMs can interact with external systems, perform computations, and take actions in the real world.
1766 |
1767 | Tools are designed to be model-controlled, meaning that tools are exposed from servers to clients with the intention of the AI model being able to automatically invoke them (with a human in the loop to grant approval).
1768 |
1769 |
1770 | Overview
1771 | Tools in MCP allow servers to expose executable functions that can be invoked by clients and used by LLMs to perform actions. Key aspects of tools include:
1772 |
1773 | Discovery: Clients can list available tools through the tools/list endpoint
1774 | Invocation: Tools are called using the tools/call endpoint, where servers perform the requested operation and return results
1775 | Flexibility: Tools can range from simple calculations to complex API interactions
1776 | Like resources, tools are identified by unique names and can include descriptions to guide their usage. However, unlike resources, tools represent dynamic operations that can modify state or interact with external systems.
1777 |
1778 |
1779 | Tool definition structure
1780 | Each tool is defined with the following structure:
1781 |
1782 |
1783 | {
1784 | name: string; // Unique identifier for the tool
1785 | description?: string; // Human-readable description
1786 | inputSchema: { // JSON Schema for the tool's parameters
1787 | type: "object",
1788 | properties: { ... } // Tool-specific parameters
1789 | }
1790 | }
1791 |
1792 | Implementing tools
1793 | Here’s an example of implementing a basic tool in an MCP server:
1794 |
1795 | TypeScript
1796 | Python
1797 |
1798 | app = Server("example-server")
1799 |
1800 | @app.list_tools()
1801 | async def list_tools() -> list[types.Tool]:
1802 | return [
1803 | types.Tool(
1804 | name="calculate_sum",
1805 | description="Add two numbers together",
1806 | inputSchema={
1807 | "type": "object",
1808 | "properties": {
1809 | "a": {"type": "number"},
1810 | "b": {"type": "number"}
1811 | },
1812 | "required": ["a", "b"]
1813 | }
1814 | )
1815 | ]
1816 |
1817 | @app.call_tool()
1818 | async def call_tool(
1819 | name: str,
1820 | arguments: dict
1821 | ) -> list[types.TextContent | types.ImageContent | types.EmbeddedResource]:
1822 | if name == "calculate_sum":
1823 | a = arguments["a"]
1824 | b = arguments["b"]
1825 | result = a + b
1826 | return [types.TextContent(type="text", text=str(result))]
1827 | raise ValueError(f"Tool not found: {name}")
1828 |
1829 | Example tool patterns
1830 | Here are some examples of types of tools that a server could provide:
1831 |
1832 |
1833 | System operations
1834 | Tools that interact with the local system:
1835 |
1836 |
1837 | {
1838 | name: "execute_command",
1839 | description: "Run a shell command",
1840 | inputSchema: {
1841 | type: "object",
1842 | properties: {
1843 | command: { type: "string" },
1844 | args: { type: "array", items: { type: "string" } }
1845 | }
1846 | }
1847 | }
1848 |
1849 | API integrations
1850 | Tools that wrap external APIs:
1851 |
1852 |
1853 | {
1854 | name: "github_create_issue",
1855 | description: "Create a GitHub issue",
1856 | inputSchema: {
1857 | type: "object",
1858 | properties: {
1859 | title: { type: "string" },
1860 | body: { type: "string" },
1861 | labels: { type: "array", items: { type: "string" } }
1862 | }
1863 | }
1864 | }
1865 |
1866 | Data processing
1867 | Tools that transform or analyze data:
1868 |
1869 |
1870 | {
1871 | name: "analyze_csv",
1872 | description: "Analyze a CSV file",
1873 | inputSchema: {
1874 | type: "object",
1875 | properties: {
1876 | filepath: { type: "string" },
1877 | operations: {
1878 | type: "array",
1879 | items: {
1880 | enum: ["sum", "average", "count"]
1881 | }
1882 | }
1883 | }
1884 | }
1885 | }
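As a sketch of how a server might dispatch the execute_command pattern above (illustrative only; app is the Server instance from the earlier example, and a real implementation must whitelist commands per the security notes below):

import asyncio
import mcp.types as types

@app.call_tool()
async def call_tool(name: str, arguments: dict) -> list[types.TextContent]:
    if name != "execute_command":
        raise ValueError(f"Tool not found: {name}")

    # Illustrative only: validate and whitelist before executing anything
    proc = await asyncio.create_subprocess_exec(
        arguments["command"],
        *arguments.get("args", []),
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    stdout, stderr = await proc.communicate()
    return [types.TextContent(type="text", text=stdout.decode() or stderr.decode())]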
1886 |
1887 | Best practices
1888 | When implementing tools:
1889 |
1890 | Provide clear, descriptive names and descriptions
1891 | Use detailed JSON Schema definitions for parameters
1892 | Include examples in tool descriptions to demonstrate how the model should use them
1893 | Implement proper error handling and validation
1894 | Use progress reporting for long operations
1895 | Keep tool operations focused and atomic
1896 | Document expected return value structures
1897 | Implement proper timeouts
1898 | Consider rate limiting for resource-intensive operations
1899 | Log tool usage for debugging and monitoring
1900 |
1901 | Security considerations
1902 | When exposing tools:
1903 |
1904 |
1905 | Input validation
1906 | Validate all parameters against the schema
1907 | Sanitize file paths and system commands
1908 | Validate URLs and external identifiers
1909 | Check parameter sizes and ranges
1910 | Prevent command injection
1911 |
1912 | Access control
1913 | Implement authentication where needed
1914 | Use appropriate authorization checks
1915 | Audit tool usage
1916 | Rate limit requests
1917 | Monitor for abuse
1918 |
1919 | Error handling
1920 | Don’t expose internal errors to clients
1921 | Log security-relevant errors
1922 | Handle timeouts appropriately
1923 | Clean up resources after errors
1924 | Validate return values
1925 |
1926 | Tool discovery and updates
1927 | MCP supports dynamic tool discovery:
1928 |
1929 | Clients can list available tools at any time
1930 | Servers can notify clients when tools change using notifications/tools/list_changed
1931 | Tools can be added or removed during runtime
1932 | Tool definitions can be updated (though this should be done carefully)
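In the Python SDK, this notification can be sent from an active session (a sketch; the method name is assumed from the SDK's server session API and may differ across versions):

# After adding or removing tools at runtime, tell connected clients
# to re-fetch the tool list
await app.request_context.session.send_tool_list_changed()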
1933 |
1934 | Error handling
1935 | Tool errors should be reported within the result object, not as MCP protocol-level errors. This allows the LLM to see and potentially handle the error. When a tool encounters an error:
1936 |
1937 | Set isError to true in the result
1938 | Include error details in the content array
1939 | Here’s an example of proper error handling for tools:
1940 |
1941 | TypeScript
1942 | Python
1943 |
1944 | try:
1945 | # Tool operation
1946 | result = perform_operation()
1947 | return types.CallToolResult(
1948 | content=[
1949 | types.TextContent(
1950 | type="text",
1951 | text=f"Operation successful: {result}"
1952 | )
1953 | ]
1954 | )
1955 | except Exception as error:
1956 | return types.CallToolResult(
1957 | isError=True,
1958 | content=[
1959 | types.TextContent(
1960 | type="text",
1961 | text=f"Error: {str(error)}"
1962 | )
1963 | ]
1964 | )
1965 | This approach allows the LLM to see that an error occurred and potentially take corrective action or request human intervention.
1966 |
1967 |
1968 | Testing tools
1969 | A comprehensive testing strategy for MCP tools should cover:
1970 |
1971 | Functional testing: Verify tools execute correctly with valid inputs and handle invalid inputs appropriately
1972 | Integration testing: Test tool interaction with external systems using both real and mocked dependencies
1973 | Security testing: Validate authentication, authorization, input sanitization, and rate limiting
1974 | Performance testing: Check behavior under load, timeout handling, and resource cleanup
1975 | Error handling: Ensure tools properly report errors through the MCP protocol and clean up resources
1976 |
1977 | Concepts
1978 | Sampling
1979 | Let your servers request completions from LLMs
1980 |
1981 | Sampling is a powerful MCP feature that allows servers to request LLM completions through the client, enabling sophisticated agentic behaviors while maintaining security and privacy.
1982 |
1983 | This feature of MCP is not yet supported in the Claude Desktop client.
1984 |
1985 |
1986 | How sampling works
1987 | The sampling flow follows these steps:
1988 |
1989 | Server sends a sampling/createMessage request to the client
1990 | Client reviews the request and can modify it
1991 | Client samples from an LLM
1992 | Client reviews the completion
1993 | Client returns the result to the server
1994 | This human-in-the-loop design ensures users maintain control over what the LLM sees and generates.
1995 |
1996 |
1997 | Message format
1998 | Sampling requests use a standardized message format:
1999 |
2000 |
2001 | {
2002 | messages: [
2003 | {
2004 | role: "user" | "assistant",
2005 | content: {
2006 | type: "text" | "image",
2007 |
2008 | // For text:
2009 | text?: string,
2010 |
2011 | // For images:
2012 | data?: string, // base64 encoded
2013 | mimeType?: string
2014 | }
2015 | }
2016 | ],
2017 | modelPreferences?: {
2018 | hints?: [{
2019 | name?: string // Suggested model name/family
2020 | }],
2021 | costPriority?: number, // 0-1, importance of minimizing cost
2022 | speedPriority?: number, // 0-1, importance of low latency
2023 | intelligencePriority?: number // 0-1, importance of capabilities
2024 | },
2025 | systemPrompt?: string,
2026 | includeContext?: "none" | "thisServer" | "allServers",
2027 | temperature?: number,
2028 | maxTokens: number,
2029 | stopSequences?: string[],
2030 | metadata?: Record<string, unknown>
2031 | }
2032 |
2033 | Request parameters
2034 |
2035 | Messages
2036 | The messages array contains the conversation history to send to the LLM. Each message has:
2037 |
2038 | role: Either “user” or “assistant”
2039 | content: The message content, which can be:
2040 | Text content with a text field
2041 | Image content with data (base64) and mimeType fields
2042 |
2043 | Model preferences
2044 | The modelPreferences object allows servers to specify their model selection preferences:
2045 |
2046 | hints: Array of model name suggestions that clients can use to select an appropriate model:
2047 |
2048 | name: String that can match full or partial model names (e.g. “claude-3”, “sonnet”)
2049 | Clients may map hints to equivalent models from different providers
2050 | Multiple hints are evaluated in preference order
2051 | Priority values (0-1 normalized):
2052 |
2053 | costPriority: Importance of minimizing costs
2054 | speedPriority: Importance of low latency response
2055 | intelligencePriority: Importance of advanced model capabilities
2056 | Clients make the final model selection based on these preferences and their available models.
2057 |
2058 |
2059 | System prompt
2060 | An optional systemPrompt field allows servers to request a specific system prompt. The client may modify or ignore this.
2061 |
2062 |
2063 | Context inclusion
2064 | The includeContext parameter specifies what MCP context to include:
2065 |
2066 | "none": No additional context
2067 | "thisServer": Include context from the requesting server
2068 | "allServers": Include context from all connected MCP servers
2069 | The client controls what context is actually included.
2070 |
2071 |
2072 | Sampling parameters
2073 | Fine-tune the LLM sampling with:
2074 |
2075 | - temperature: Controls randomness (0.0 to 1.0)
2076 | - maxTokens: Maximum tokens to generate
2077 | - stopSequences: Array of sequences that stop generation
2078 | - metadata: Additional provider-specific parameters
2079 |
2080 | Response format
2081 | The client returns a completion result:
2082 |
2083 |
{
  model: string,  // Name of the model used
  stopReason?: "endTurn" | "stopSequence" | "maxTokens" | string,
  role: "user" | "assistant",
  content: {
    type: "text" | "image",
    text?: string,
    data?: string,
    mimeType?: string
  }
}
2095 |
2096 | Example request
2097 | Here’s an example of requesting sampling from a client:
2098 |
2099 |
{
  "method": "sampling/createMessage",
  "params": {
    "messages": [
      {
        "role": "user",
        "content": {
          "type": "text",
          "text": "What files are in the current directory?"
        }
      }
    ],
    "systemPrompt": "You are a helpful file system assistant.",
    "includeContext": "thisServer",
    "maxTokens": 100
  }
}
2117 |
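In a TypeScript server built on the official SDK, the same request can be issued through the low-level request helper. This is a minimal sketch rather than a verbatim SDK example: it assumes an already-connected Server instance whose client declared the sampling capability, and askAboutDirectory is a hypothetical helper name.

import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { CreateMessageResultSchema } from "@modelcontextprotocol/sdk/types.js";

async function askAboutDirectory(server: Server): Promise<string> {
  // Send sampling/createMessage to the client and validate the reply shape
  const result = await server.request(
    {
      method: "sampling/createMessage",
      params: {
        messages: [{
          role: "user",
          content: { type: "text", text: "What files are in the current directory?" }
        }],
        systemPrompt: "You are a helpful file system assistant.",
        includeContext: "thisServer",
        maxTokens: 100
      }
    },
    CreateMessageResultSchema
  );

  // Validate the completion before using it (see Best practices below)
  if (result.content.type !== "text") {
    throw new Error(`Unexpected content type: ${result.content.type}`);
  }
  return result.content.text;
}
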
2118 | Best practices
2119 | When implementing sampling:
2120 |
2121 | - Always provide clear, well-structured prompts
2122 | - Handle both text and image content appropriately
2123 | - Set reasonable token limits
2124 | - Include relevant context through includeContext
2125 | - Validate responses before using them
2126 | - Handle errors gracefully
2127 | - Consider rate limiting sampling requests
2128 | - Document expected sampling behavior
2129 | - Test with various model parameters
2130 | - Monitor sampling costs
2131 |
2132 | Human in the loop controls
2133 | Sampling is designed with human oversight in mind:
2134 |
2135 |
2136 | For prompts
2137 | - Clients should show users the proposed prompt
2138 | - Users should be able to modify or reject prompts
2139 | - System prompts can be filtered or modified
2140 | - Context inclusion is controlled by the client
2141 |
2142 | For completions
2143 | - Clients should show users the completion
2144 | - Users should be able to modify or reject completions
2145 | - Clients can filter or modify completions
2146 | - Users control which model is used
2147 |
2148 | Security considerations
2149 | When implementing sampling:
2150 |
2151 | - Validate all message content
2152 | - Sanitize sensitive information
2153 | - Implement appropriate rate limits
2154 | - Monitor sampling usage
2155 | - Encrypt data in transit
2156 | - Handle user data privacy
2157 | - Audit sampling requests
2158 | - Control cost exposure
2159 | - Implement timeouts
2160 | - Handle model errors gracefully
2161 |
2162 | Common patterns
2163 |
2164 | Agentic workflows
2165 | Sampling enables agentic patterns like:
2166 |
2167 | - Reading and analyzing resources
2168 | - Making decisions based on context
2169 | - Generating structured data
2170 | - Handling multi-step tasks
2171 | - Providing interactive assistance
2172 |
2173 | Context management
2174 | Best practices for context:
2175 |
2176 | - Request minimal necessary context
2177 | - Structure context clearly
2178 | - Handle context size limits
2179 | - Update context as needed
2180 | - Clean up stale context
2181 |
2182 | Error handling
2183 | Robust error handling should:
2184 |
2185 | - Catch sampling failures
2186 | - Handle timeout errors
2187 | - Manage rate limits
2188 | - Validate responses
2189 | - Provide fallback behaviors
2190 | - Log errors appropriately
2191 |
2192 | Limitations
2193 | Be aware of these limitations:
2194 |
2195 | - Sampling depends on client capabilities
2196 | - Users control sampling behavior
2197 | - Context size has limits
2198 | - Rate limits may apply
2199 | - Costs should be considered
2200 | - Model availability varies
2201 | - Response times vary
2202 | - Not all content types are supported
2203 |
2205 | Transports
2206 | Learn about MCP’s communication mechanisms
2207 |
2208 | Transports in the Model Context Protocol (MCP) provide the foundation for communication between clients and servers. A transport handles the underlying mechanics of how messages are sent and received.
2209 |
2210 |
2211 | Message Format
2212 | MCP uses JSON-RPC 2.0 as its wire format. The transport layer is responsible for converting MCP protocol messages into JSON-RPC format for transmission and converting received JSON-RPC messages back into MCP protocol messages.
2213 |
2214 | There are three types of JSON-RPC messages used:
2215 |
2216 |
2217 | Requests
2218 |
{
  jsonrpc: "2.0",
  id: number | string,
  method: string,
  params?: object
}
2225 |
2226 | Responses
2227 |
{
  jsonrpc: "2.0",
  id: number | string,
  result?: object,
  error?: {
    code: number,
    message: string,
    data?: unknown
  }
}
2238 |
2239 | Notifications
2240 |
{
  jsonrpc: "2.0",
  method: string,
  params?: object
}
2246 |
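A transport can tell the three apart by which fields are present: a request carries both an id and a method, a response carries an id but no method, and a notification carries a method but no id. A rough TypeScript sketch (the type and function here are illustrative, not SDK API):

type JsonRpcMessage = {
  jsonrpc: "2.0";
  id?: number | string;
  method?: string;
  params?: object;
  result?: object;
  error?: { code: number; message: string; data?: unknown };
};

function classify(msg: JsonRpcMessage): "request" | "response" | "notification" {
  if (msg.id !== undefined && msg.method !== undefined) return "request";
  if (msg.id !== undefined) return "response";  // carries result or error
  return "notification";
}
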
2247 | Built-in Transport Types
2248 | MCP includes two standard transport implementations:
2249 |
2250 |
2251 | Standard Input/Output (stdio)
2252 | The stdio transport enables communication through standard input and output streams. This is particularly useful for local integrations and command-line tools.
2253 |
2254 | Use stdio when:
2255 |
2256 | - Building command-line tools
2257 | - Implementing local integrations
2258 | - Needing simple process communication
2259 | - Working with shell scripts
2260 | TypeScript (Server):
2264 |
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

const server = new Server({
  name: "example-server",
  version: "1.0.0"
}, {
  capabilities: {}
});

const transport = new StdioServerTransport();
await server.connect(transport);
2274 |
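The client side mirrors this. A minimal sketch using the SDK's client classes; the command and args values are placeholders for however the server is actually launched:

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const client = new Client({
  name: "example-client",
  version: "1.0.0"
}, {
  capabilities: {}
});

// Spawns the server as a child process and talks to it over stdin/stdout
const transport = new StdioClientTransport({
  command: "node",
  args: ["build/index.js"]  // placeholder path to the server entry point
});
await client.connect(transport);
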
2275 | Server-Sent Events (SSE)
2276 | SSE transport enables server-to-client streaming with HTTP POST requests for client-to-server communication.
2277 |
2278 | Use SSE when:
2279 |
2280 | - Only server-to-client streaming is needed
2281 | - Working with restricted networks
2282 | - Implementing simple updates
2283 | Python (Server):
2287 |
from mcp.server import Server
from mcp.server.sse import SseServerTransport
from starlette.applications import Starlette
from starlette.routing import Route

app = Server("example-server")
sse = SseServerTransport("/messages")

async def handle_sse(scope, receive, send):
    async with sse.connect_sse(scope, receive, send) as streams:
        await app.run(streams[0], streams[1], app.create_initialization_options())

async def handle_messages(scope, receive, send):
    await sse.handle_post_message(scope, receive, send)

starlette_app = Starlette(
    routes=[
        Route("/sse", endpoint=handle_sse),
        Route("/messages", endpoint=handle_messages, methods=["POST"]),
    ]
)
2308 |
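A TypeScript server follows the same shape, pairing SSEServerTransport with an HTTP framework. The sketch below uses Express and keeps a single global transport for brevity; a real server would track one transport per connected session:

import express from "express";
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";

const server = new Server({
  name: "example-server",
  version: "1.0.0"
}, {
  capabilities: {}
});

const app = express();
let transport: SSEServerTransport | undefined;

// Server-to-client stream: the client opens this with an HTTP GET
app.get("/sse", async (req, res) => {
  transport = new SSEServerTransport("/messages", res);
  await server.connect(transport);
});

// Client-to-server messages arrive as HTTP POSTs
app.post("/messages", async (req, res) => {
  await transport?.handlePostMessage(req, res);
});

app.listen(3001);
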
2309 | Custom Transports
2310 | MCP makes it easy to implement custom transports for specific needs. Any transport implementation just needs to conform to the Transport interface:
2311 |
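In the TypeScript SDK the interface looks like this (member list as documented for the SDK; JSONRPCMessage comes from @modelcontextprotocol/sdk/types.js):

interface Transport {
  // Start processing messages
  start(): Promise<void>;

  // Send a JSON-RPC message
  send(message: JSONRPCMessage): Promise<void>;

  // Close the connection
  close(): Promise<void>;

  // Callbacks invoked by the transport, set by the protocol layer
  onclose?: () => void;
  onerror?: (error: Error) => void;
  onmessage?: (message: JSONRPCMessage) => void;
}
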
2312 | You can implement custom transports for:
2313 |
2314 | - Custom network protocols
2315 | - Specialized communication channels
2316 | - Integration with existing systems
2317 | - Performance optimization
2318 | Python:
2320 | Note that while MCP Servers are often implemented with asyncio, we recommend implementing low-level interfaces like transports with anyio for wider compatibility.
2321 |
2322 |
from contextlib import asynccontextmanager

import anyio
from anyio.streams.memory import MemoryObjectReceiveStream, MemoryObjectSendStream
from mcp.types import JSONRPCMessage

# Note: an async generator needs asynccontextmanager, not contextmanager
@asynccontextmanager
async def create_transport(
    read_stream: MemoryObjectReceiveStream[JSONRPCMessage | Exception],
    write_stream: MemoryObjectSendStream[JSONRPCMessage]
):
    """
    Transport interface for MCP.

    Args:
        read_stream: Stream to read incoming messages from
        write_stream: Stream to write outgoing messages to
    """
    async with anyio.create_task_group() as tg:
        try:
            # Start processing incoming messages in the background
            # (process_messages is the application-defined handler)
            tg.start_soon(process_messages, read_stream)

            # Hand the write stream to the caller for sending messages
            async with write_stream:
                yield write_stream

        except Exception as exc:
            # Handle errors
            raise exc
        finally:
            # Clean up
            tg.cancel_scope.cancel()
            await write_stream.aclose()
            await read_stream.aclose()
2352 |
2353 | Error Handling
2354 | Transport implementations should handle various error scenarios:
2355 |
2356 | - Connection errors
2357 | - Message parsing errors
2358 | - Protocol errors
2359 | - Network timeouts
2360 | - Resource cleanup
2361 | Example error handling:
2362 |
2363 | Python:
2366 |
2367 |
from contextlib import asynccontextmanager
import logging

import anyio
from starlette.types import Scope, Receive, Send

logger = logging.getLogger(__name__)

# Note: an async generator needs asynccontextmanager, not contextmanager
@asynccontextmanager
async def example_transport(scope: Scope, receive: Receive, send: Send):
    try:
        # Create streams for bidirectional communication
        read_stream_writer, read_stream = anyio.create_memory_object_stream(0)
        write_stream, write_stream_reader = anyio.create_memory_object_stream(0)

        async def message_handler():
            try:
                async with read_stream_writer:
                    # Message handling logic
                    pass
            except Exception as exc:
                logger.error(f"Failed to handle message: {exc}")
                raise exc

        async with anyio.create_task_group() as tg:
            tg.start_soon(message_handler)
            try:
                # Yield streams for communication
                yield read_stream, write_stream
            except Exception as exc:
                logger.error(f"Transport error: {exc}")
                raise exc
            finally:
                tg.cancel_scope.cancel()
                await write_stream.aclose()
                await read_stream.aclose()
    except Exception as exc:
        logger.error(f"Failed to initialize transport: {exc}")
        raise exc
2399 |
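A TypeScript counterpart might surface failures through the onerror callback before rethrowing. A sketch assuming the Transport interface shown earlier; the class name and channel details are placeholders:

import { JSONRPCMessage } from "@modelcontextprotocol/sdk/types.js";

class ExampleTransport implements Transport {
  onclose?: () => void;
  onerror?: (error: Error) => void;
  onmessage?: (message: JSONRPCMessage) => void;

  async start(): Promise<void> {
    try {
      // Connect to the underlying channel here
    } catch (error) {
      this.onerror?.(new Error(`Failed to connect: ${error}`));
      throw error;
    }
  }

  async send(message: JSONRPCMessage): Promise<void> {
    try {
      // Write `message` to the channel here
    } catch (error) {
      this.onerror?.(new Error(`Failed to send message: ${error}`));
      throw error;
    }
  }

  async close(): Promise<void> {
    // Release sockets, timers, and other resources, then notify the protocol layer
    this.onclose?.();
  }
}
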
2400 | Best Practices
2401 | When implementing or using MCP transport:
2402 |
2403 | - Handle connection lifecycle properly
2404 | - Implement proper error handling
2405 | - Clean up resources on connection close
2406 | - Use appropriate timeouts
2407 | - Validate messages before sending
2408 | - Log transport events for debugging
2409 | - Implement reconnection logic when appropriate
2410 | - Handle backpressure in message queues
2411 | - Monitor connection health
2412 | - Implement proper security measures
2413 |
2414 | Security Considerations
2415 | When implementing transport:
2416 |
2417 |
2418 | Authentication and Authorization
2419 | - Implement proper authentication mechanisms
2420 | - Validate client credentials
2421 | - Use secure token handling
2422 | - Implement authorization checks
2423 |
2424 | Data Security
2425 | - Use TLS for network transport
2426 | - Encrypt sensitive data
2427 | - Validate message integrity
2428 | - Implement message size limits
2429 | - Sanitize input data
2430 |
2431 | Network Security
2432 | - Implement rate limiting
2433 | - Use appropriate timeouts
2434 | - Handle denial of service scenarios
2435 | - Monitor for unusual patterns
2436 | - Implement proper firewall rules
2437 |
2438 | Debugging Transport
2439 | Tips for debugging transport issues:
2440 |
2441 | - Enable debug logging
2442 | - Monitor message flow
2443 | - Check connection states
2444 | - Validate message formats
2445 | - Test error scenarios
2446 | - Use network analysis tools
2447 | - Implement health checks
2448 | - Monitor resource usage
2449 | - Test edge cases
2450 | - Use proper error tracking
2451 |
2452 | Python (Client) example for connecting to an SSE server:

from mcp import ClientSession
from mcp.client.sse import sse_client

async with sse_client("http://localhost:8000/sse") as streams:
    async with ClientSession(streams[0], streams[1]) as session:
        await session.initialize()
```