# Directory Structure

```
├── .gitignore
├── client
│   ├── app.py
│   ├── requirements.txt
│   ├── servers_config_example.json
│   └── src
│       ├── __init__.py
│       ├── api_server.py
│       ├── api.py
│       ├── auth.py
│       ├── config.py
│       └── server.py
├── example_llm_mcp
│   ├── .env.example
│   ├── main.py
│   ├── requirements.txt
│   └── servers_config_example.json
├── files
│   ├── client.gif
│   └── llm_mcp_example.gif
├── LICENSE
├── README.md
└── server
    ├── package.json
    ├── src
    │   ├── auth.ts
    │   ├── client.ts
    │   ├── config.ts
    │   ├── index.ts
    │   ├── mcp-proxy.ts
    │   └── sse.ts
    └── tsconfig.json
```

# Files

--------------------------------------------------------------------------------
/example_llm_mcp/.env.example:
--------------------------------------------------------------------------------

```
1 | OPENAI_API_KEY="<YOUR_OPENAI_API_KEY>"
```

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
1 | .venv/
2 | .env
3 | servers_config.json
4 | .idea/
5 | node_modules/
6 | package-lock.json
7 | api_keys.json
8 | build/
9 | __pycache__/
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Pocket MCP Manager
  2 | 
  3 | A flexible and user-friendly management system for Model Context Protocol (MCP) servers, consisting of a client-server architecture that simplifies handling multiple MCP servers through a central interface.
  4 | 
  5 | ## Overview
  6 | 
  7 | The Pocket MCP Manager streamlines the process of working with multiple MCP servers by allowing you to:
  8 | 
  9 | - Add all your MCP servers to a central management UI
 10 | - Selectively launch specific servers through an intuitive interface
 11 | - Generate API keys linked to these running servers
 12 | - Connect to them through a single proxy MCP server in clients like Claude or Cursor
 13 | 
 14 | This approach means you only need to update a single API key in your AI tools when changing which MCP servers you want to use, rather than reconfiguring multiple connection settings.
 15 | 
 16 | ## Server Component
 17 | 
 18 | The server component is an MCP proxy server that:
 19 | 
 20 | 1. Accepts an API key from the client
 21 | 2. Connects to the already running MCP servers authorized by that key
 22 | 3. Exposes all connected servers' capabilities through a unified MCP interface
 23 | 4. Routes requests to the appropriate backend servers
 24 | 
 25 | ### Server Installation
 26 | 
 27 | ```bash
 28 | # Clone the repository
 29 | git clone git@github.com:dailydaniel/pocket-mcp.git
 30 | cd pocket-mcp/server
 31 | 
 32 | # Install dependencies
 33 | npm install
 34 | 
 35 | # The build step runs automatically during installation
 36 | ```
 37 | 
 38 | ### Connecting to Claude Desktop / Cursor
 39 | 
 40 | Add the following configuration to your Claude Desktop (or Cursor) settings:
 41 | 
 42 | ```json
 43 | {
 44 |   "mcpServers": {
 45 |     "mcp-proxy": {
 46 |       "command": "node",
 47 |       "args": ["/full/path/to/pocket-mcp/server/build/index.js"],
 48 |       "env": {
 49 |         "MCP_API_KEY": "api_key_from_client",
 50 |         "CLIENT_API_URL": "http://localhost:<port>/api"
 51 |       }
 52 |     }
 53 |   }
 54 | }
 55 | ```
 56 | 
 57 | Replace:
 58 | - `/full/path/to/pocket-mcp/server/build/index.js` with the absolute path to your server's build/index.js file
 59 | - `api_key_from_client` with the API key generated from the client UI
 60 | - `<port>` with the port shown in the API server logs (typically 8000)
 61 | 
 62 | ## Client Component
 63 | 
 64 | The client provides a web-based UI built with Streamlit for:
 65 | 
 66 | - Viewing all configured MCP servers
 67 | - Launching selected servers as a group
 68 | - Generating API keys for launched servers
 69 | - Managing existing API keys
 70 | 
 71 | ### Client Setup
 72 | 
 73 | ```bash
 74 | # Navigate to the client directory
 75 | cd pocket-mcp/client
 76 | 
 77 | # Create and activate a virtual environment
 78 | python -m venv .venv --prompt "mcp-venv"
 79 | source .venv/bin/activate
 80 | 
 81 | # Install requirements
 82 | pip install -r requirements.txt
 83 | 
 84 | # Copy the example config
 85 | cp servers_config_example.json servers_config.json
 86 | 
 87 | # Edit the configuration with your MCP servers
 88 | vim servers_config.json
 89 | 
 90 | # Run the client
 91 | streamlit run app.py
 92 | ```
 93 | 
 94 | ### Server Configuration Example
 95 | 
 96 | Create a `servers_config.json` file in the client directory with your MCP servers:
 97 | 
 98 | ```json
 99 | {
100 |   "mcpServers": {
101 |     "jetbrains": {
102 |       "command": "npx",
103 |       "args": ["-y", "@jetbrains/mcp-proxy"]
104 |     },
105 |     "logseq": {
106 |       "command": "uvx",
107 |       "args": ["mcp-server-logseq"],
108 |       "env": {
109 |         "LOGSEQ_API_TOKEN": "API_KEY",
110 |         "LOGSEQ_API_URL": "http://127.0.0.1:<port>"
111 |       }
112 |     },
113 |     "brave-search": {
114 |       "command": "npx",
115 |       "args": [
116 |         "-y",
117 |         "@modelcontextprotocol/server-brave-search"
118 |       ],
119 |       "env": {
120 |         "BRAVE_API_KEY": "API_KEY"
121 |       }
122 |     }
123 |   }
124 | }
125 | ```
126 | 
127 | Replace `API_KEY` and `<port>` with your actual values.
128 | 
129 | ### Client Demo
130 | ![Client Demo](files/client.gif)
131 | 
132 | ## Example LLM MCP Client
133 | 
134 | The repository includes an example client for chatting with LLMs via the OpenAI API and MCP servers.
135 | 
136 | ### Setup and Usage
137 | 
138 | ```bash
139 | # Navigate to the example client directory
140 | cd pocket-mcp/example_llm_mcp
141 | 
142 | # Copy the example environment file
143 | cp .env.example .env
144 | 
145 | # Edit the .env file and add your OpenAI API key
146 | vim .env
147 | 
148 | # Add server configurations to the servers_config.json file
149 | cp servers_config_example.json servers_config.json
150 | 
151 | # Add API key from the client
152 | vim servers_config.json
153 | 
154 | # If you are not already in a virtual environment,
155 | # you can reuse the client's virtual environment:
156 | source ../client/.venv/bin/activate
157 | 
158 | # Install requirements
159 | pip install -r requirements.txt
160 | 
161 | # Run the client
162 | python3 main.py
163 | ```
164 | 
165 | The example client will connect to your running MCP servers and allow you to chat with an LLM while utilizing MCP capabilities.
166 | 
167 | ### Chat Demo
168 | ![Chat Demo](files/llm_mcp_example.gif)
169 | 
170 | ## Acknowledgments
171 | 
172 | - Server component based on [mcp-proxy-server](https://github.com/adamwattis/mcp-proxy-server)
173 | - SSE implementation with [mcp-proxy](https://github.com/sparfenyuk/mcp-proxy)
174 | 
175 | ## TODO
176 | 
177 | - Add additional functionality to the client component
178 | - Convert the server into a standalone npm module that can be installed with npx
179 | - Delete cringe copyright notice from streamlit app :)
```
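
The Claude Desktop snippet in the README above is plain JSON, so it can be generated rather than hand-edited. A minimal sketch — the `proxy_config` helper and its argument values are illustrative, not part of this repo:

```python
import json

def proxy_config(index_js: str, api_key: str, port: int) -> str:
    """Render the mcpServers proxy entry described in the README."""
    return json.dumps({
        "mcpServers": {
            "mcp-proxy": {
                "command": "node",
                "args": [index_js],
                "env": {
                    "MCP_API_KEY": api_key,
                    "CLIENT_API_URL": f"http://localhost:{port}/api",
                },
            }
        }
    }, indent=2)

print(proxy_config("/full/path/to/pocket-mcp/server/build/index.js",
                   "api_key_from_client", 8000))
```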

--------------------------------------------------------------------------------
/client/src/__init__.py:
--------------------------------------------------------------------------------

```python
1 | 
```

--------------------------------------------------------------------------------
/example_llm_mcp/requirements.txt:
--------------------------------------------------------------------------------

```
1 | python-dotenv
2 | mcp
3 | openai
```

--------------------------------------------------------------------------------
/client/requirements.txt:
--------------------------------------------------------------------------------

```
1 | streamlit>=1.25.0
2 | mcp>=0.6.0
3 | python-dotenv>=1.0.0
4 | pyperclip>=1.8.2
5 | fastapi>=0.95.0
6 | uvicorn>=0.21.0
7 | mcp-proxy>=0.5.1
```

--------------------------------------------------------------------------------
/example_llm_mcp/servers_config_example.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "mcpServers": {
 3 |     "mcp-proxy": {
 4 |       "command": "node",
 5 |       "args": ["/full/path/to/pocket-mcp/server/build/index.js"],
 6 |       "env": {
 7 |         "MCP_API_KEY": "API_KEY",
 8 |         "CLIENT_API_URL": "http://localhost:<PORT>/api"
 9 |       }
10 |     }
11 |   }
12 | }
```

--------------------------------------------------------------------------------
/server/tsconfig.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "compilerOptions": {
 3 |     "target": "ES2022",
 4 |     "module": "Node16",
 5 |     "moduleResolution": "Node16",
 6 |     "outDir": "./build",
 7 |     "rootDir": "./src",
 8 |     "strict": true,
 9 |     "esModuleInterop": true,
10 |     "skipLibCheck": true,
11 |     "forceConsistentCasingInFileNames": true
12 |   },
13 |   "include": ["src/**/*"],
14 |   "exclude": ["node_modules"]
15 | }
```

--------------------------------------------------------------------------------
/client/servers_config_example.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "mcpServers": {
 3 |     "jetbrains": {
 4 |       "command": "npx",
 5 |       "args": ["-y", "@jetbrains/mcp-proxy"]
 6 |     },
 7 |     "logseq": {
 8 |       "command": "uvx",
 9 |       "args": ["mcp-server-logseq"],
10 |       "env": {
11 |         "LOGSEQ_API_TOKEN": "API_KEY",
12 |         "LOGSEQ_API_URL": "http://127.0.0.1:<port>"
13 |       }
14 |     },
15 |     "brave-search": {
16 |       "command": "npx",
17 |       "args": [
18 |         "-y",
19 |         "@modelcontextprotocol/server-brave-search"
20 |       ],
21 |       "env": {
22 |         "BRAVE_API_KEY": "API_KEY"
23 |       }
24 |     }
25 |   }
26 | }
```

--------------------------------------------------------------------------------
/server/src/index.ts:
--------------------------------------------------------------------------------

```typescript
 1 | #!/usr/bin/env node
 2 | 
 3 | import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
 4 | import { createServer } from "./mcp-proxy.js";
 5 | 
 6 | async function main() {
 7 |   const apiKey = process.env.MCP_API_KEY;
 8 |   
 9 |   const transport = new StdioServerTransport();
10 |   
11 |   try {
12 |     const { server, cleanup } = await createServer(apiKey);
13 | 
14 |     await server.connect(transport);
15 | 
16 |     process.on("SIGINT", async () => {
17 |       await cleanup();
18 |       await server.close();
19 |       process.exit(0);
20 |     });
21 |   } catch (error) {
22 |     console.error("Server error:", error);
23 |     process.exit(1);
24 |   }
25 | }
26 | 
27 | main().catch((error) => {
28 |   console.error("Error:", error);
29 |   process.exit(1);
30 | });
31 | 
```

--------------------------------------------------------------------------------
/client/src/api.py:
--------------------------------------------------------------------------------

```python
 1 | import json
 2 | import os
 3 | from typing import Any, Dict, List, Optional
 4 | 
 5 | 
 6 | class ApiClient:
 7 |     """Client for generating server configurations for MCP API server."""
 8 | 
 9 |     @staticmethod
10 |     def generate_server_config(
11 |             api_key: str,
12 |             server_configs: Dict[str, Dict[str, Any]]
13 |     ) -> Dict[str, Any]:
14 |         """Generate MCP Proxy Server configuration based on API key.
15 | 
16 |         Args:
17 |             api_key: API key for authentication
18 |             server_configs: Dictionary of server configurations
19 | 
20 |         Returns:
21 |             Dictionary with instructions for proxy server
22 |         """
23 |         return {
24 |             "api_key": api_key,
25 |             "instructions": {
26 |                 "env_variable": "MCP_API_KEY",
27 |                 "command": "mcp-proxy-server",
28 |                 "sse_command": "node build/sse.js"
29 |             }
30 |         }
```

--------------------------------------------------------------------------------
/server/src/auth.ts:
--------------------------------------------------------------------------------

```typescript
 1 | // src/auth.ts
 2 | import axios from 'axios';
 3 | import { ServerConfig } from './config.js';
 4 | 
 5 | const CLIENT_API_URL = process.env.CLIENT_API_URL || 'http://localhost:8000/api';
 6 | 
 7 | export interface AuthResponse {
 8 |   success: boolean;
 9 |   servers: ServerConfig[];
10 |   message?: string;
11 | }
12 | 
13 | export async function authenticateAndGetServers(apiKey: string): Promise<ServerConfig[]> {
14 |   try {
15 |     const response = await axios.get<AuthResponse>(`${CLIENT_API_URL}/servers`, {
16 |       headers: {
17 |         Authorization: `Bearer ${apiKey}`
18 |       }
19 |     });
20 | 
21 |     if (!response.data.success) {
22 |       throw new Error(response.data.message || 'Auth failed');
23 |     }
24 | 
25 |     return response.data.servers;
26 |   } catch (error) {
27 |     if (axios.isAxiosError(error)) {
28 |       if (error.response?.status === 401) {
29 |         throw new Error('Invalid API key');
30 |       } else if (error.response?.status === 403) {
31 |         throw new Error('No permission');
32 |       }
33 |       throw new Error(`Auth error: ${error.message}`);
34 |     }
35 |     throw error;
36 |   }
37 | }
38 | 
```

--------------------------------------------------------------------------------
/server/src/config.ts:
--------------------------------------------------------------------------------

```typescript
 1 | // src/config.ts
 2 | import { readFile } from 'fs/promises';
 3 | import { resolve } from 'path';
 4 | import { authenticateAndGetServers } from './auth.js';
 5 | 
 6 | export type TransportConfigStdio = {
 7 |   type?: 'stdio'
 8 |   command: string;
 9 |   args?: string[];
10 |   env?: string[]
11 | }
12 | 
13 | export type TransportConfigSSE = {
14 |   type: 'sse'
15 |   url: string
16 | }
17 | 
18 | export type TransportConfig = TransportConfigSSE | TransportConfigStdio
19 | export interface ServerConfig {
20 |   name: string;
21 |   transport: TransportConfig;
22 | }
23 | 
24 | export interface Config {
25 |   servers: ServerConfig[];
26 | }
27 | 
28 | export const loadConfig = async (apiKey?: string): Promise<Config> => {
29 |   if (apiKey) {
30 |     try {
31 |       const servers = await authenticateAndGetServers(apiKey);
32 |       return { servers };
33 |     } catch (error) {
34 |       console.error('Auth error with API key:', error);
35 |       throw error;
36 |     }
37 |   }
38 | 
39 |   try {
40 |     const configPath = process.env.MCP_CONFIG_PATH || resolve(process.cwd(), 'config.json');
41 |     console.log(`Load config from: ${configPath}`);
42 |     const fileContents = await readFile(configPath, 'utf-8');
43 |     return JSON.parse(fileContents);
44 |   } catch (error) {
45 |     console.error('Error in loading config.json:', error);
46 |     return { servers: [] };
47 |   }
48 | };
49 | 
```
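
For illustration, the file-reading fallback that `loadConfig` implements (read `config.json` if it exists and parses, otherwise return an empty server list instead of raising) looks like this in Python — the path is hypothetical:

```python
import json

def load_config(path: str = "config.json") -> dict:
    # Mirror config.ts: any read or parse failure falls back to no servers.
    try:
        with open(path) as f:
            return json.load(f)
    except (OSError, json.JSONDecodeError) as e:
        print(f"Error in loading {path}: {e}")
        return {"servers": []}

print(load_config("definitely-missing.json"))
```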

--------------------------------------------------------------------------------
/server/package.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "name": "mcp-proxy-server",
 3 |   "version": "0.0.1",
 4 |   "author": "Your Name",
 5 |   "license": "MIT",
 6 |   "description": "An MCP proxy server that aggregates and serves multiple MCP resource servers through a single interface with API key support",
 7 |   "private": false,
 8 |   "type": "module",
 9 |   "bin": {
10 |     "mcp-proxy-server": "./build/index.js"
11 |   },
12 |   "files": [
13 |     "build"
14 |   ],
15 |   "scripts": {
16 |     "dev": "nodemon --watch 'src/**' --ext 'ts,json' --ignore 'src/**/*.spec.ts' --exec 'tsx src/index.ts'",
17 |     "dev:sse": "nodemon --watch 'src/**' --ext 'ts,json' --ignore 'src/**/*.spec.ts' --exec 'tsx src/sse.ts'",
18 |     "build": "tsc",
19 |     "postbuild": "chmod +x build/index.js",
20 |     "prepare": "npm run build",
21 |     "watch": "tsc --watch"
22 |   },
23 |   "dependencies": {
24 |     "@modelcontextprotocol/sdk": "0.6.0",
25 |     "@types/cors": "^2.8.17",
26 |     "axios": "^1.6.0",
27 |     "cors": "^2.8.5",
28 |     "eventsource": "^3.0.2",
29 |     "express": "^4.21.1",
30 |     "zod": "^3.23.8",
31 |     "zod-to-json-schema": "^3.23.5"
32 |   },
33 |   "devDependencies": {
34 |     "@types/express": "^5.0.0",
35 |     "@types/node": "^20.11.24",
36 |     "nodemon": "^3.1.9",
37 |     "tsx": "^4.19.2",
38 |     "typescript": "^5.3.3"
39 |   },
40 |   "repository": {
41 |     "type": "git",
42 |     "url": "https://github.com/dailydaniel/pocket-mcp",
43 |     "directory": "server"
44 |   }
45 | }
46 | 
```

--------------------------------------------------------------------------------
/server/src/sse.ts:
--------------------------------------------------------------------------------

```typescript
 1 | import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";
 2 | import express from "express";
 3 | import { createServer } from "./mcp-proxy.js";
 4 | import cors from "cors";
 5 | 
 6 | const app = express();
 7 | app.use(cors());
 8 | 
 9 | const apiKey = process.env.MCP_API_KEY;
10 | 
11 | let transport: SSEServerTransport;
12 | let cleanupFunction: () => Promise<void>;
13 | 
14 | async function initServer() {
15 |   try {
16 |     const { server, cleanup } = await createServer(apiKey);
17 |     cleanupFunction = cleanup;
18 | 
19 |     app.get("/sse", async (req, res) => {
20 |       console.log("Got connection");
21 |       transport = new SSEServerTransport("/message", res);
22 |       await server.connect(transport);
23 | 
24 |       server.onerror = (err) => {
25 |         console.error(`Server error: ${err.stack}`);
26 |       };
27 | 
28 |       server.onclose = async () => {
29 |         console.log('Server closed');
30 |         if (process.env.KEEP_SERVER_OPEN !== "1") {
31 |           await cleanup();
32 |           await server.close();
33 |           process.exit(0);
34 |         }
35 |       };
36 |     });
37 | 
38 |     app.post("/message", async (req, res) => {
39 |       console.log("Got message");
40 |       await transport.handlePostMessage(req, res);
41 |     });
42 |   } catch (error) {
43 |     console.error("Error in server initialization:", error);
44 |     process.exit(1);
45 |   }
46 | }
47 | 
48 | initServer();
49 | 
50 | const PORT = process.env.PORT || 3006;
51 | app.listen(PORT, () => {
52 |   console.log(`Server listening on port ${PORT}`);
53 | });
54 | 
55 | process.on("SIGINT", async () => {
56 |   if (cleanupFunction) {
57 |     await cleanupFunction();
58 |   }
59 |   process.exit(0);
60 | });
61 | 
```

--------------------------------------------------------------------------------
/client/src/config.py:
--------------------------------------------------------------------------------

```python
 1 | import json
 2 | import os
 3 | from typing import Any, Dict, Optional
 4 | 
 5 | from dotenv import load_dotenv
 6 | 
 7 | 
 8 | class Configuration:
 9 |     """Manages configuration and environment variables for the MCP client."""
10 | 
11 |     def __init__(self, config_path: str = "servers_config.json") -> None:
12 |         """Initialize the configuration manager.
13 | 
14 |         Args:
15 |             config_path: Path to the configuration file
16 |         """
17 |         self.config_path = config_path
18 |         self.load_env()
19 |         self.config = self.load_config()
20 | 
21 |     @staticmethod
22 |     def load_env() -> None:
23 |         """Load environment variables from .env file."""
24 |         load_dotenv()
25 | 
26 |     def load_config(self, file_path: Optional[str] = None) -> Dict[str, Any]:
27 |         """Load configuration from the JSON file.
28 | 
29 |         Args:
30 |             file_path: Optional path to the configuration file (overrides config_path)
31 | 
32 |         Returns:
33 |             Dictionary containing the configuration
34 |         """
35 |         try:
36 |             path = file_path or self.config_path
37 |             if os.path.exists(path):
38 |                 with open(path, "r") as f:
39 |                     return json.load(f)
40 |             return {"mcpServers": {}}
41 |         except Exception as e:
42 |             print(f"Error loading configuration: {e}")
43 |             return {"mcpServers": {}}
44 | 
45 |     def save_config(self, config: Dict[str, Any]) -> None:
46 |         """Save configuration to the JSON file.
47 | 
48 |         Args:
49 |             config: Configuration dictionary to save
50 |         """
51 |         try:
52 |             with open(self.config_path, "w") as f:
53 |                 json.dump(config, f, indent=2)
54 |             self.config = config
55 |         except Exception as e:
56 |             print(f"Error saving configuration: {e}")
57 | 
58 |     def get_server_config(self, server_name: str) -> Optional[Dict[str, Any]]:
59 |         """Get configuration for a specific server.
60 | 
61 |         Args:
62 |             server_name: Name of the server
63 | 
64 |         Returns:
65 |             Server configuration or None if not found
66 |         """
67 |         return self.config.get("mcpServers", {}).get(server_name)
68 | 
69 |     def get_servers(self) -> Dict[str, Dict[str, Any]]:
70 |         """Get all server configurations.
71 | 
72 |         Returns:
73 |             Dictionary of server configurations
74 |         """
75 |         return self.config.get("mcpServers", {})
76 | 
```
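
`Configuration.save_config` and `load_config` are a plain `json.dump`/`json.load` round-trip; a self-contained sketch of the same file format, using a temporary directory and an example entry borrowed from the README config:

```python
import json
import os
import tempfile

cfg = {"mcpServers": {"logseq": {"command": "uvx", "args": ["mcp-server-logseq"]}}}

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "servers_config.json")
    # save_config: pretty-print with indent=2
    with open(path, "w") as f:
        json.dump(cfg, f, indent=2)
    # load_config: read it back
    with open(path) as f:
        loaded = json.load(f)

print(loaded["mcpServers"]["logseq"]["command"])  # uvx
```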

--------------------------------------------------------------------------------
/client/src/api_server.py:
--------------------------------------------------------------------------------

```python
 1 | import uvicorn
 2 | from fastapi import FastAPI, HTTPException, Depends, Header
 3 | from fastapi.middleware.cors import CORSMiddleware
 4 | from typing import Dict, List, Optional
 5 | import socket
 6 | 
 7 | from .auth import AuthManager
 8 | from .server import ServerManager
 9 | from .config import Configuration
10 | 
11 | API_PORT = None
12 | 
13 | app = FastAPI()
14 | 
15 | app.add_middleware(
16 |     CORSMiddleware,
17 |     allow_origins=["*"],
18 |     allow_credentials=True,
19 |     allow_methods=["*"],
20 |     allow_headers=["*"],
21 | )
22 | 
23 | auth_manager = AuthManager()
24 | config = Configuration()
25 | server_manager = ServerManager()
26 | 
27 | 
28 | async def verify_api_key(authorization: str = Header(None)):
29 |     if not authorization:
30 |         raise HTTPException(status_code=401, detail="API key required")
31 | 
32 |     if authorization.startswith("Bearer "):
33 |         api_key = authorization[7:]
34 |     else:
35 |         api_key = authorization
36 | 
37 |     is_valid, server_names = auth_manager.validate_key(api_key)
38 |     if not is_valid:
39 |         raise HTTPException(status_code=401, detail="Invalid API key")
40 | 
41 |     return server_names
42 | 
43 | 
44 | @app.get("/api/servers")
45 | async def get_servers(server_names: List[str] = Depends(verify_api_key)):
46 |     servers_config = config.get_servers()
47 | 
48 |     authorized_servers = []
49 |     for name in server_names:
50 |         if name in servers_config:
51 |             sse_port = 3000 + hash(name) % 1000
52 | 
53 |             authorized_servers.append({
54 |                 "name": name,
55 |                 "transport": {
56 |                     "type": "sse",
57 |                     "url": f"http://0.0.0.0:{sse_port}/sse"
58 |                 }
59 |             })
60 | 
61 |     return {
62 |         "success": True,
63 |         "servers": authorized_servers
64 |     }
65 | 
66 | 
67 | @app.get("/api/health")
68 | async def health_check():
69 |     return {"status": "ok"}
70 | 
71 | 
72 | def find_free_port(start_port=8000, max_attempts=100):
73 |     for port in range(start_port, start_port + max_attempts):
74 |         try:
75 |             with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
76 |                 s.bind(('127.0.0.1', port))
77 |                 return port
78 |         except OSError:
79 |             continue
80 |     raise IOError("No free ports found")
81 | 
82 | 
83 | def start_api_server(host="0.0.0.0", port=None):
84 |     if port is None:
85 |         port = find_free_port()
86 |     print(f"Starting API server on port {port}")
87 |     global API_PORT
88 |     API_PORT = port
89 |     uvicorn.run(app, host=host, port=port)
90 | 
91 | 
92 | if __name__ == "__main__":
93 |     start_api_server()
94 | 
```
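
The API server binds the first free port at or above 8000, which is why the README tells you to read the actual port from the server logs. The scan can be exercised on its own (logic copied from `find_free_port` above):

```python
import socket

def find_free_port(start_port: int = 8000, max_attempts: int = 100) -> int:
    # Try to bind each candidate port; the first successful bind is free.
    for port in range(start_port, start_port + max_attempts):
        try:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.bind(("127.0.0.1", port))
                return port
        except OSError:
            continue
    raise IOError("No free ports found")

port = find_free_port()
print(port)
```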

--------------------------------------------------------------------------------
/server/src/client.ts:
--------------------------------------------------------------------------------

```typescript
  1 | import { Client } from '@modelcontextprotocol/sdk/client/index.js';
  2 | import { SSEClientTransport } from '@modelcontextprotocol/sdk/client/sse.js';
  3 | import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';
  4 | import { Transport } from '@modelcontextprotocol/sdk/shared/transport.js';
  5 | import { ServerConfig } from './config.js';
  6 | 
  7 | const sleep = (time: number) => new Promise<void>(resolve => setTimeout(() => resolve(), time))
  8 | export interface ConnectedClient {
  9 |   client: Client;
 10 |   cleanup: () => Promise<void>;
 11 |   name: string;
 12 | }
 13 | 
 14 | const createClient = (server: ServerConfig): { client: Client | undefined, transport: Transport | undefined } => {
 15 | 
 16 |   let transport: Transport | null = null
 17 |   try {
 18 |     if (server.transport.type === 'sse') {
 19 |       transport = new SSEClientTransport(new URL(server.transport.url));
 20 |     } else {
 21 |       transport = new StdioClientTransport({
 22 |         command: server.transport.command,
 23 |         args: server.transport.args,
 23 |         args: server.transport.args,
 24 |         env: server.transport.env ? server.transport.env.reduce((o, v) => ({
 25 |           ...o, [v]: process.env[v] || ''
 26 |         }), {}) : undefined
 27 |       });
 28 |     }
 29 |   } catch (error) {
 30 |     console.error(`Failed to create transport ${server.transport.type || 'stdio'} to ${server.name}:`, error);
 31 |   }
 32 | 
 33 |   if (!transport) {
 34 |     console.warn(`Transport ${server.name} not available.`)
 35 |     return { transport: undefined, client: undefined }
 36 |   }
 37 | 
 38 |   const client = new Client({
 39 |     name: 'mcp-proxy-client',
 40 |     version: '1.0.0',
 41 |   }, {
 42 |     capabilities: {
 43 |       prompts: {},
 44 |       resources: { subscribe: true },
 45 |       tools: {}
 46 |     }
 47 |   });
 48 | 
 49 |   return { client, transport }
 50 | }
 51 | 
 52 | export const createClients = async (servers: ServerConfig[]): Promise<ConnectedClient[]> => {
 53 |   const clients: ConnectedClient[] = [];
 54 | 
 55 |   for (const server of servers) {
 56 |     console.log(`Connecting to server: ${server.name}`);
 57 | 
 58 |     const waitFor = 2500
 59 |     const retries = 3
 60 |     let count = 0
 61 |     let retry = true
 62 | 
 63 |     while (retry) {
 64 | 
 65 |       const { client, transport } = createClient(server)
 66 |       if (!client || !transport) {
 67 |         break
 68 |       }
 69 | 
 70 |       try {
 71 |         await client.connect(transport);
 72 |         console.log(`Connected to server: ${server.name}`);
 73 | 
 74 |         clients.push({
 75 |           client,
 76 |           name: server.name,
 77 |           cleanup: async () => {
 78 |             await transport.close();
 79 |           }
 80 |         });
 81 | 
 82 |         break
 83 | 
 84 |       } catch (error) {
 85 |         console.error(`Failed to connect to ${server.name}:`, error);
 86 |         count++
 87 |         retry = (count < retries)
 88 |         if (retry) {
 89 |           try {
 90 |             await client.close()
 91 |           } catch { }
 92 |           console.log(`Retry connection to ${server.name} in ${waitFor}ms (${count}/${retries})`);
 93 |           await sleep(waitFor)
 94 |         }
 95 |       }
 96 | 
 97 |     }
 98 | 
 99 |   }
100 | 
101 |   return clients;
102 | };
```
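
`createClients` retries each connection up to three times with a 2.5 s pause between attempts. The same pattern in Python, with an injectable `sleep` so the backoff is easy to test — all names here are illustrative:

```python
import time

def connect_with_retries(connect, retries=3, wait_s=2.5, sleep=time.sleep):
    """Attempt connect() up to `retries` times, pausing between failures."""
    count = 0
    while True:
        try:
            return connect()
        except Exception:
            count += 1
            if count >= retries:
                raise
            sleep(wait_s)

attempts = {"n": 0}
def flaky():
    # Fails twice, then connects, like a backend server that is still starting.
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("not up yet")
    return "connected"

print(connect_with_retries(flaky, sleep=lambda s: None))  # connected
```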

--------------------------------------------------------------------------------
/client/src/auth.py:
--------------------------------------------------------------------------------

```python
  1 | import json
  2 | import os
  3 | import secrets
  4 | import time
  5 | from typing import Dict, List, Optional, Set, Tuple, Union
  6 | 
  7 | 
  8 | class AuthManager:
  9 |     """Manages API keys and server access."""
 10 | 
 11 |     def __init__(self, keys_file: str = "api_keys.json") -> None:
 12 |         """Initialize the authentication manager.
 13 | 
 14 |         Args:
 15 |             keys_file: Path to the API keys file
 16 |         """
 17 |         self.keys_file = keys_file
 18 |         self.keys = self.load_keys()
 19 | 
 20 |     def load_keys(self) -> Dict[str, Dict[str, Union[List[str], int]]]:
 21 |         """Load API keys from the JSON file.
 22 | 
 23 |         Returns:
 24 |             Dictionary of API keys and their associated servers
 25 |         """
 26 |         try:
 27 |             if os.path.exists(self.keys_file):
 28 |                 with open(self.keys_file, "r") as f:
 29 |                     return json.load(f)
 30 |             return {}
 31 |         except Exception as e:
 32 |             print(f"Error loading API keys: {e}")
 33 |             return {}
 34 | 
 35 |     def save_keys(self) -> None:
 36 |         """Save API keys to the JSON file."""
 37 |         try:
 38 |             with open(self.keys_file, "w") as f:
 39 |                 json.dump(self.keys, f, indent=2)
 40 |         except Exception as e:
 41 |             print(f"Error saving API keys: {e}")
 42 | 
 43 |     def generate_key(self, server_names: List[str]) -> str:
 44 |         """Generate a new API key for a group of servers.
 45 | 
 46 |         Args:
 47 |             server_names: List of server names to associate with the key
 48 | 
 49 |         Returns:
 50 |             Generated API key
 51 |         """
 52 |         # Generate a random token
 53 |         token = secrets.token_urlsafe(32)
 54 | 
 55 |         # Store the key with associated servers and creation time
 56 |         self.keys[token] = {
 57 |             "servers": server_names,
 58 |             "created": int(time.time())
 59 |         }
 60 | 
 61 |         # Save the updated keys
 62 |         self.save_keys()
 63 | 
 64 |         return token
 65 | 
 66 |     def validate_key(self, key: str) -> Tuple[bool, List[str]]:
 67 |         """Validate an API key and return associated servers.
 68 | 
 69 |         Args:
 70 |             key: API key to validate
 71 | 
 72 |         Returns:
 73 |             Tuple of (is_valid, server_names)
 74 |         """
 75 |         # print(f"current keys: {self.keys}")
 76 |         self.keys = self.load_keys()
 77 |         # print(f"current keys after reload: {self.keys}")
 78 |         # print(f"validating key: {key}")
 79 | 
 80 |         if key in self.keys:
 81 |             return True, self.keys[key]["servers"]
 82 |         return False, []
 83 | 
 84 |     def revoke_key(self, key: str) -> bool:
 85 |         """Revoke an API key.
 86 | 
 87 |         Args:
 88 |             key: API key to revoke
 89 | 
 90 |         Returns:
 91 |             True if the key was revoked, False otherwise
 92 |         """
 93 |         if key in self.keys:
 94 |             del self.keys[key]
 95 |             self.save_keys()
 96 |             return True
 97 |         return False
 98 | 
 99 |     def get_all_keys(self) -> Dict[str, Dict[str, Union[List[str], int]]]:
100 |         """Get all API keys.
101 | 
102 |         Returns:
103 |             Dictionary of API keys and their details
104 |         """
105 |         return self.keys
106 | 
```

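The lifecycle above (generate → validate → revoke) can be sketched standalone. This is an in-memory illustration of the same scheme; the JSON-file persistence and the surrounding class are omitted, so it is not the repo's actual module:

```python
import secrets
import time

# In-memory sketch of the key lifecycle implemented above
# (JSON-file persistence omitted for brevity).
keys: dict[str, dict] = {}

def generate_key(server_names: list[str]) -> str:
    # Same scheme as above: a urlsafe token mapped to its server group
    token = secrets.token_urlsafe(32)
    keys[token] = {"servers": server_names, "created": int(time.time())}
    return token

def validate_key(key: str) -> tuple[bool, list[str]]:
    if key in keys:
        return True, keys[key]["servers"]
    return False, []

def revoke_key(key: str) -> bool:
    return keys.pop(key, None) is not None

token = generate_key(["logseq"])
print(validate_key(token))  # (True, ['logseq'])
print(revoke_key(token))    # True
print(validate_key(token))  # (False, [])
```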
--------------------------------------------------------------------------------
/client/src/server.py:
--------------------------------------------------------------------------------

```python
  1 | import asyncio
  2 | import json
  3 | import os
  4 | import shutil
  5 | import subprocess
  6 | from contextlib import AsyncExitStack
  7 | from typing import Any, Dict, List, Optional, Set, Tuple
  8 | 
  9 | from mcp import ClientSession, StdioServerParameters
 10 | from mcp.client.stdio import stdio_client
 11 | 
 12 | 
 13 | class Server:
 14 |     """Manages an MCP server connection and execution."""
 15 | 
 16 |     def __init__(self, name: str, config: Dict[str, Any]) -> None:
 17 |         """Initialize the server.
 18 | 
 19 |         Args:
 20 |             name: Server name
 21 |             config: Server configuration
 22 |         """
 23 |         self.name: str = name
 24 |         self.config: Dict[str, Any] = config
 25 |         self.process: Optional[subprocess.Popen] = None
 26 |         self.session: Optional[ClientSession] = None
 27 |         self._cleanup_lock: asyncio.Lock = asyncio.Lock()
 28 |         self.exit_stack: AsyncExitStack = AsyncExitStack()
 29 | 
 30 |     async def start(self) -> bool:
 31 |         """Start the server process via mcp-proxy in SSE mode."""
 32 |         if self.process and self.is_running():
 33 |             print(f"Server {self.name} is already running")
 34 |             return True
 35 | 
 36 |         try:
 37 |             # Prepare command and environment for mcp-proxy
 38 |             proxy_command = shutil.which("mcp-proxy")
 39 |             if not proxy_command:
 40 |                 print("Command 'mcp-proxy' not found. Please install it.")
 41 |                 return False
 42 | 
 43 |             # Derive a stable fallback SSE port (built-in hash() is randomized per run)
 44 |             sse_port = self.config.get("sse_port", 3000 + sum(map(ord, self.name)) % 1000)
 45 | 
 46 |             start_command = shutil.which(self.config["command"])
 47 |             if not start_command:
 48 |                 print(f"Command {self.config['command']} not found. Please install it.")
 49 |                 return False
 50 | 
 51 |             args = [
 52 |                        "--allow-origin=*",
 53 |                        f"--sse-port={sse_port}",
 54 |                        "--sse-host=0.0.0.0",
 55 |                        "--pass-environment",
 56 |                        "--",
 57 |                        start_command
 58 |                    ] + self.config["args"]
 59 | 
 60 |             # Add server-specific environment variables
 61 |             env = os.environ.copy()
 62 |             if "env" in self.config:
 63 |                 env.update(self.config["env"])
 64 | 
 65 |             # Start the mcp-proxy process. Discard its output so the
 66 |             # unread pipes cannot fill up and block the child.
 67 |             self.process = subprocess.Popen(
 68 |                 [proxy_command] + args,
 69 |                 env=env,
 70 |                 stdin=subprocess.DEVNULL,
 71 |                 stdout=subprocess.DEVNULL,
 72 |                 stderr=subprocess.DEVNULL,
 73 |             )
 74 | 
 75 |             print(f"Started mcp-proxy for {self.name} on port {sse_port}")
 76 |             return True
 77 | 
 78 |         except Exception as e:
 79 |             print(f"Error starting mcp-proxy for {self.name}: {e}")
 80 |             return False
 81 | 
 82 |     async def stop(self) -> bool:
 83 |         """Stop the server.
 84 | 
 85 |         Returns:
 86 |             True if the server was stopped successfully, False otherwise
 87 |         """
 88 |         await self.cleanup()
 89 | 
 90 |         if not self.process:
 91 |             return True
 92 | 
 93 |         try:
 94 |             self.process.terminate()
 95 | 
 96 |             # Wait for the process to terminate
 97 |             try:
 98 |                 self.process.wait(timeout=5)
 99 |             except subprocess.TimeoutExpired:
100 |                 self.process.kill()
101 | 
102 |             self.process = None
103 |             print(f"Stopped server {self.name}")
104 |             return True
105 | 
106 |         except Exception as e:
107 |             print(f"Error stopping server {self.name}: {e}")
108 |             return False
109 | 
110 |     async def cleanup(self) -> None:
111 |         """Clean up resources."""
112 |         async with self._cleanup_lock:
113 |             try:
114 |                 await self.exit_stack.aclose()
115 |                 self.session = None
116 |             except Exception as e:
117 |                 print(f"Error during cleanup of server {self.name}: {e}")
118 | 
119 |     def is_running(self) -> bool:
120 |         """Check if the server is running.
121 | 
122 |         Returns:
123 |             True if the server is running, False otherwise
124 |         """
125 |         if not self.process:
126 |             return False
127 | 
128 |         # Check if the process is still alive
129 |         if self.process.poll() is not None:
130 |             # Process has terminated
131 |             self.process = None
132 |             return False
133 | 
134 |         return True
135 | 
136 | 
137 | class ServerManager:
138 |     """Manages multiple MCP servers."""
139 | 
140 |     def __init__(self) -> None:
141 |         """Initialize the server manager."""
142 |         self.servers: Dict[str, Server] = {}
143 |         self.selected_servers: Set[str] = set()
144 | 
145 |     def select_servers(self, server_names: List[str]) -> None:
146 |         """Select a group of servers.
147 | 
148 |         Args:
149 |             server_names: List of server names to select
150 |         """
151 |         self.selected_servers = set(server_names)
152 | 
153 |     def get_selected_servers(self) -> Set[str]:
154 |         """Get the currently selected servers.
155 | 
156 |         Returns:
157 |             Set of selected server names
158 |         """
159 |         return self.selected_servers
160 | 
161 |     async def start_server(self, name: str, config: Dict[str, Any]) -> bool:
162 |         """Start an MCP server.
163 | 
164 |         Args:
165 |             name: Server name
166 |             config: Server configuration
167 | 
168 |         Returns:
169 |             True if the server was started successfully, False otherwise
170 |         """
171 |         if name not in self.servers:
172 |             self.servers[name] = Server(name, config)
173 | 
174 |         return await self.servers[name].start()
175 | 
176 |     async def start_selected_servers(self, server_configs: Dict[str, Dict[str, Any]]) -> List[str]:
177 |         """Start all selected servers.
178 | 
179 |         Args:
180 |             server_configs: Dictionary of server configurations
181 | 
182 |         Returns:
183 |             List of successfully started server names
184 |         """
185 |         started_servers = []
186 | 
187 |         for name in self.selected_servers:
188 |             if name in server_configs:
189 |                 success = await self.start_server(name, server_configs[name])
190 |                 if success:
191 |                     print(f"Success in starting server {name}")
192 |                     started_servers.append(name)
193 |                 else:
194 |                     print(f"Failed to start server {name}")
195 |             else:
196 |                 print(f"Server {name} not found in configuration")
197 | 
198 |         return started_servers
199 | 
200 |     async def stop_server(self, name: str) -> bool:
201 |         """Stop an MCP server.
202 | 
203 |         Args:
204 |             name: Server name
205 | 
206 |         Returns:
207 |             True if the server was stopped successfully, False otherwise
208 |         """
209 |         if name not in self.servers:
210 |             print(f"Server {name} not found")
211 |             return False
212 | 
213 |         return await self.servers[name].stop()
214 | 
215 |     async def stop_all_servers(self) -> None:
216 |         """Stop all servers."""
217 |         for name in list(self.servers.keys()):
218 |             await self.stop_server(name)
219 | 
220 |     def get_running_servers(self) -> List[str]:
221 |         """Get names of all running servers.
222 | 
223 |         Returns:
224 |             List of running server names
225 |         """
226 |         return [name for name, server in self.servers.items() if server.is_running()]
227 | 
228 |     def is_server_running(self, name: str) -> bool:
229 |         """Check if a server is running.
230 | 
231 |         Args:
232 |             name: Server name
233 | 
234 |         Returns:
235 |             True if the server is running, False otherwise
236 |         """
237 |         if name not in self.servers:
238 |             return False
239 | 
240 |         return self.servers[name].is_running()
241 | 
```

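For reference, this is roughly the mcp-proxy command line that `Server.start()` assembles for a sample config entry (the server name, command, and port below are illustrative). Note the origin flag uses a plain `*`: arguments passed as a `Popen` list are not shell-processed, so quotes as in `'*'` would be forwarded to mcp-proxy literally.

```python
# Hypothetical config entry in the shape servers_config.json uses
config = {
    "command": "uvx",
    "args": ["mcp-server-logseq"],
    "sse_port": 3001,
}

sse_port = config.get("sse_port", 3000)
args = [
    "--allow-origin=*",        # plain '*': no shell is involved to strip quotes
    f"--sse-port={sse_port}",
    "--sse-host=0.0.0.0",
    "--pass-environment",
    "--",                      # everything after this is the wrapped command
    config["command"],         # resolved via shutil.which() in Server.start()
] + config["args"]

print(["mcp-proxy"] + args)
```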
--------------------------------------------------------------------------------
/client/app.py:
--------------------------------------------------------------------------------

```python
  1 | import asyncio
  2 | import datetime
  3 | import json
  4 | import os
  5 | import pyperclip
  6 | from typing import Dict, List, Set
  7 | 
  8 | import threading
  9 | from src.api_server import start_api_server
 10 | import time
 11 | 
 12 | import streamlit as st
 13 | 
 14 | from src.auth import AuthManager
 15 | from src.config import Configuration
 16 | from src.server import ServerManager
 17 | from src.api import ApiClient
 18 | 
 19 | # Set page configuration
 20 | st.set_page_config(
 21 |     page_title="MCP Server Manager",
 22 |     page_icon="🔌",
 23 |     layout="wide",
 24 |     initial_sidebar_state="expanded"
 25 | )
 26 | 
 27 | if "api_server_started" not in st.session_state:
 28 |     api_thread = threading.Thread(target=start_api_server, daemon=True)
 29 |     api_thread.start()
 30 |     time.sleep(1)
 31 |     st.session_state.api_server_started = True
 32 | 
 33 | # Initialize session state for persistent objects
 34 | if "server_manager" not in st.session_state:
 35 |     st.session_state.server_manager = ServerManager()
 36 | if "config" not in st.session_state:
 37 |     st.session_state.config = Configuration()
 38 | if "auth_manager" not in st.session_state:
 39 |     st.session_state.auth_manager = AuthManager()
 40 | if "api_client" not in st.session_state:
 41 |     st.session_state.api_client = ApiClient()
 42 | 
 43 | 
 44 | # Function to copy text to clipboard
 45 | def copy_to_clipboard(text):
 46 |     pyperclip.copy(text)
 47 |     return True
 48 | 
 49 | 
 50 | # Page title and description
 51 | st.title("🔌 MCP Server Manager")
 52 | st.markdown("""
 53 | This application helps you manage your MCP servers, launch them in groups, and generate API keys for use with the MCP proxy server.
 54 | """)
 55 | 
 56 | # Sidebar navigation
 57 | st.sidebar.header("Navigation")
 58 | page = st.sidebar.radio(
 59 |     "Go to",
 60 |     ["Server Dashboard", "Launch Servers", "API Keys Management"]
 61 | )
 62 | 
 63 | # Server Dashboard page
 64 | if page == "Server Dashboard":
 65 |     st.header("📋 Server Dashboard")
 66 | 
 67 |     servers = st.session_state.config.get_servers()
 68 | 
 69 |     if not servers:
 70 |         st.warning("No servers configured. Please add servers to your servers_config.json file.")
 71 | 
 72 |         st.subheader("Sample Configuration")
 73 |         sample_config = {
 74 |             "mcpServers": {
 75 |                 "logseq": {
 76 |                     "command": "uvx",
 77 |                     "args": ["mcp-server-logseq"],
 78 |                     "env": {
 79 |                         "LOGSEQ_API_TOKEN": "your_token",
 80 |                         "LOGSEQ_API_URL": "http://127.0.0.1:8000"
 81 |                     }
 82 |                 }
 83 |             }
 84 |         }
 85 |         st.code(json.dumps(sample_config, indent=2), language="json")
 86 |     else:
 87 |         # Display running servers first
 88 |         running_servers = st.session_state.server_manager.get_running_servers()
 89 |         if running_servers:
 90 |             st.subheader("🟢 Running Servers")
 91 |             for name in running_servers:
 92 |                 with st.expander(f"{name}", expanded=True):
 93 |                     st.json(st.session_state.config.get_server_config(name))
 94 |                     if st.button(f"Stop {name}", key=f"stop_{name}"):
 95 |                         with st.spinner(f"Stopping {name}..."):
 96 |                             asyncio.run(st.session_state.server_manager.stop_server(name))
 97 |                             st.success(f"Server {name} stopped.")
 98 |                             st.rerun()
 99 | 
100 |         # Display all servers
101 |         st.subheader("📃 All Configured Servers")
102 |         for name, config in servers.items():
103 |             status = "🟢 Running" if st.session_state.server_manager.is_server_running(name) else "🔴 Stopped"
104 |             with st.expander(f"{name} - {status}"):
105 |                 st.json(config)
106 | 
107 | # Launch Servers page
108 | elif page == "Launch Servers":
109 |     st.header("🚀 Launch Server Group")
110 | 
111 |     servers = st.session_state.config.get_servers()
112 | 
113 |     if not servers:
114 |         st.warning("No servers configured. Please add servers to your servers_config.json file.")
115 |     else:
116 |         col1, col2 = st.columns([3, 1])
117 | 
118 |         with col1:
119 |             # Server selection
120 |             server_names = list(servers.keys())
121 |             selected_servers = st.multiselect(
122 |                 "Select servers to launch",
123 |                 options=server_names,
124 |                 default=list(st.session_state.server_manager.get_selected_servers()),
125 |                 help="Select the MCP servers you want to launch as a group"
126 |             )
127 | 
128 |             # Update selected servers
129 |             if selected_servers:
130 |                 st.session_state.server_manager.select_servers(selected_servers)
131 | 
132 |             # Launch button
133 |             launch_button = st.button("Launch Selected Servers", type="primary", disabled=len(selected_servers) == 0)
134 | 
135 |             if launch_button:
136 |                 with st.spinner("Starting servers..."):
137 |                     # Use asyncio to start servers
138 |                     started_servers = asyncio.run(
139 |                         st.session_state.server_manager.start_selected_servers(servers)
140 |                     )
141 | 
142 |                     if started_servers:
143 |                         st.success(f"Started servers: {', '.join(started_servers)}")
144 | 
145 |                         # Generate API key for the started servers
146 |                         api_key = st.session_state.auth_manager.generate_key(started_servers)
147 | 
148 |                         # Display the API key
149 |                         st.subheader("🔑 Generated API Key")
150 |                         key_col1, key_col2 = st.columns([5, 1])
151 |                         with key_col1:
152 |                             st.code(api_key, language="bash")
153 |                         with key_col2:
154 |                             if st.button("Copy", key="copy_api_key"):
155 |                                 copy_to_clipboard(api_key)
156 |                                 st.success("Copied to clipboard!")
157 | 
158 |                         # Instructions for using the API key
159 |                         st.subheader("How to Use the API Key")
160 |                         st.markdown("""
161 |                         To use this API key with the MCP proxy server:
162 | 
163 |                         1. **Environment Variable:**
164 |                         ```bash
165 |                         export MCP_API_KEY=your_api_key_here
166 |                         ```
167 | 
168 |                         2. **Start the MCP proxy server:**
169 |                         ```bash
170 |                         mcp-proxy-server
171 |                         ```
172 | 
173 |                         3. **Or for SSE mode:**
174 |                         ```bash
175 |                         node build/sse.js
176 |                         ```
177 |                         """)
178 |                     else:
179 |                         st.error("Failed to start servers. Check the logs for more information.")
180 | 
181 |         with col2:
182 |             # Information box
183 |             st.info(
184 |                 "💡 **Tip:**\n\nStarting servers as a group allows you to generate a single API key for all of them.")
185 | 
186 |     # Display running servers
187 |     running_servers = st.session_state.server_manager.get_running_servers()
188 |     if running_servers:
189 |         st.subheader("🟢 Currently Running Servers")
190 |         st.write(", ".join(running_servers))
191 | 
192 |         # Stop all servers button
193 |         if st.button("Stop All Servers", type="secondary"):
194 |             with st.spinner("Stopping all servers..."):
195 |                 asyncio.run(st.session_state.server_manager.stop_all_servers())
196 |                 st.success("All servers stopped.")
197 |                 st.rerun()
198 | 
199 | # API Keys Management page
200 | elif page == "API Keys Management":
201 |     st.header("🔑 API Keys Management")
202 | 
203 |     keys = st.session_state.auth_manager.get_all_keys()
204 | 
205 |     if not keys:
206 |         st.info("No API keys have been generated yet. Launch a server group to generate an API key.")
207 |     else:
208 |         st.markdown("Below are the API keys you've generated for your MCP server groups.")
209 | 
210 |         # Display all keys
211 |         for key, details in keys.items():
212 |             created_time = datetime.datetime.fromtimestamp(details['created'])
213 | 
214 |             with st.expander(f"API Key: {key[:10]}..."):
215 |                 col1, col2 = st.columns([3, 1])
216 | 
217 |                 with col1:
218 |                     st.code(key, language="bash")
219 |                     st.write(f"🖥️ **Servers:** {', '.join(details['servers'])}")
220 |                     st.write(f"📅 **Created:** {created_time.strftime('%Y-%m-%d %H:%M:%S')}")
221 | 
222 |                 with col2:
223 |                     if st.button("Copy", key=f"copy_{key[:8]}"):
224 |                         copy_to_clipboard(key)
225 |                         st.success("Copied to clipboard!")
226 | 
227 |                     if st.button("Revoke", key=f"revoke_{key[:8]}"):
228 |                         if st.session_state.auth_manager.revoke_key(key):
229 |                             st.success("API key revoked.")
230 |                             st.rerun()
231 |                         else:
232 |                             st.error("Failed to revoke API key.")
233 | 
234 | # Footer
235 | st.markdown("---")
236 | st.markdown("© MCP Server Manager - Manage your MCP servers with ease")
237 | 
```

--------------------------------------------------------------------------------
/server/src/mcp-proxy.ts:
--------------------------------------------------------------------------------

```typescript
  1 | import { Server } from "@modelcontextprotocol/sdk/server/index.js";
  2 | import {
  3 |   CallToolRequestSchema,
  4 |   GetPromptRequestSchema,
  5 |   ListPromptsRequestSchema,
  6 |   ListResourcesRequestSchema,
  7 |   ListToolsRequestSchema,
  8 |   ReadResourceRequestSchema,
  9 |   Tool,
 10 |   ListToolsResultSchema,
 11 |   ListPromptsResultSchema,
 12 |   ListResourcesResultSchema,
 13 |   ReadResourceResultSchema,
 14 |   ListResourceTemplatesRequestSchema,
 15 |   ListResourceTemplatesResultSchema,
 16 |   ResourceTemplate,
 17 |   CompatibilityCallToolResultSchema,
 18 |   GetPromptResultSchema
 19 | } from "@modelcontextprotocol/sdk/types.js";
 20 | import { createClients, ConnectedClient } from './client.js';
 21 | import { Config, loadConfig } from './config.js';
 22 | import { z } from 'zod';
 23 | import * as eventsource from 'eventsource';
 24 | 
 25 | global.EventSource = eventsource.EventSource
 26 | 
 27 | export const createServer = async (apiKey?: string) => {
 28 |   const config = await loadConfig(apiKey);
 29 |   
 30 |   if (config.servers.length === 0) {
 31 |     console.warn('Warning: No servers found.');
 32 |   } else {
 33 |     console.log(`Found servers: ${config.servers.length}`);
 34 |   }
 35 |   
 36 |   const connectedClients = await createClients(config.servers);
 37 |   console.log(`Connected to ${connectedClients.length} server(s)`);
 38 | 
 39 |   // Maps to track which client owns which resource
 40 |   const toolToClientMap = new Map<string, ConnectedClient>();
 41 |   const resourceToClientMap = new Map<string, ConnectedClient>();
 42 |   const promptToClientMap = new Map<string, ConnectedClient>();
 43 | 
 44 |   const server = new Server(
 45 |     {
 46 |       name: "mcp-proxy-server",
 47 |       version: "1.0.0",
 48 |     },
 49 |     {
 50 |       capabilities: {
 51 |         prompts: {},
 52 |         resources: { subscribe: true },
 53 |         tools: {},
 54 |       },
 55 |     },
 56 |   );
 57 | 
 58 |   // List Tools Handler
 59 |   server.setRequestHandler(ListToolsRequestSchema, async (request) => {
 60 |     const allTools: Tool[] = [];
 61 |     toolToClientMap.clear();
 62 | 
 63 |     for (const connectedClient of connectedClients) {
 64 |       try {
 65 |         const result = await connectedClient.client.request(
 66 |           {
 67 |             method: 'tools/list',
 68 |             params: {
 69 |               _meta: request.params?._meta
 70 |             }
 71 |           },
 72 |           ListToolsResultSchema
 73 |         );
 74 | 
 75 |         if (result.tools) {
 76 |           const toolsWithSource = result.tools.map(tool => {
 77 |             toolToClientMap.set(tool.name, connectedClient);
 78 |             return {
 79 |               ...tool,
 80 |               description: `[${connectedClient.name}] ${tool.description || ''}`
 81 |             };
 82 |           });
 83 |           allTools.push(...toolsWithSource);
 84 |         }
 85 |       } catch (error) {
 86 |         console.error(`Error fetching tools from ${connectedClient.name}:`, error);
 87 |       }
 88 |     }
 89 | 
 90 |     return { tools: allTools };
 91 |   });
 92 | 
 93 |   // Call Tool Handler
 94 |   server.setRequestHandler(CallToolRequestSchema, async (request) => {
 95 |     const { name, arguments: args } = request.params;
 96 |     const clientForTool = toolToClientMap.get(name);
 97 | 
 98 |     if (!clientForTool) {
 99 |       throw new Error(`Unknown tool: ${name}`);
100 |     }
101 | 
102 |     try {
103 |       console.log('Forwarding tool call:', name);
104 | 
105 |       // Use the correct schema for tool calls
106 |       return await clientForTool.client.request(
107 |         {
108 |           method: 'tools/call',
109 |           params: {
110 |             name,
111 |             arguments: args || {},
112 |             _meta: {
113 |               progressToken: request.params._meta?.progressToken
114 |             }
115 |           }
116 |         },
117 |         CompatibilityCallToolResultSchema
118 |       );
119 |     } catch (error) {
120 |       console.error(`Error calling tool through ${clientForTool.name}:`, error);
121 |       throw error;
122 |     }
123 |   });
124 | 
125 |   // Get Prompt Handler
126 |   server.setRequestHandler(GetPromptRequestSchema, async (request) => {
127 |     const { name } = request.params;
128 |     const clientForPrompt = promptToClientMap.get(name);
129 | 
130 |     if (!clientForPrompt) {
131 |       throw new Error(`Unknown prompt: ${name}`);
132 |     }
133 | 
134 |     try {
135 |       console.log('Forwarding prompt request:', name);
136 | 
137 |       // Match the exact structure from the example code
138 |       const response = await clientForPrompt.client.request(
139 |         {
140 |           method: 'prompts/get' as const,
141 |           params: {
142 |             name,
143 |             arguments: request.params.arguments || {},
144 |             _meta: request.params._meta || {
145 |               progressToken: undefined
146 |             }
147 |           }
148 |         },
149 |         GetPromptResultSchema
150 |       );
151 | 
152 |       console.log('Prompt result:', response);
153 |       return response;
154 |     } catch (error) {
155 |       console.error(`Error getting prompt from ${clientForPrompt.name}:`, error);
156 |       throw error;
157 |     }
158 |   });
159 | 
160 |   // List Prompts Handler
161 |   server.setRequestHandler(ListPromptsRequestSchema, async (request) => {
162 |     const allPrompts: z.infer<typeof ListPromptsResultSchema>['prompts'] = [];
163 |     promptToClientMap.clear();
164 | 
165 |     for (const connectedClient of connectedClients) {
166 |       try {
167 |         const result = await connectedClient.client.request(
168 |           {
169 |             method: 'prompts/list' as const,
170 |             params: {
171 |               cursor: request.params?.cursor,
172 |               _meta: request.params?._meta || {
173 |                 progressToken: undefined
174 |               }
175 |             }
176 |           },
177 |           ListPromptsResultSchema
178 |         );
179 | 
180 |         if (result.prompts) {
181 |           const promptsWithSource = result.prompts.map(prompt => {
182 |             promptToClientMap.set(prompt.name, connectedClient);
183 |             return {
184 |               ...prompt,
185 |               description: `[${connectedClient.name}] ${prompt.description || ''}`
186 |             };
187 |           });
188 |           allPrompts.push(...promptsWithSource);
189 |         }
190 |       } catch (error) {
191 |         console.error(`Error fetching prompts from ${connectedClient.name}:`, error);
192 |       }
193 |     }
194 | 
195 |     return {
196 |       prompts: allPrompts,
 197 |       nextCursor: undefined
198 |     };
199 |   });
200 | 
201 |   // List Resources Handler
202 |   server.setRequestHandler(ListResourcesRequestSchema, async (request) => {
203 |     const allResources: z.infer<typeof ListResourcesResultSchema>['resources'] = [];
204 |     resourceToClientMap.clear();
205 | 
206 |     for (const connectedClient of connectedClients) {
207 |       try {
208 |         const result = await connectedClient.client.request(
209 |           {
210 |             method: 'resources/list',
211 |             params: {
212 |               cursor: request.params?.cursor,
213 |               _meta: request.params?._meta
214 |             }
215 |           },
216 |           ListResourcesResultSchema
217 |         );
218 | 
219 |         if (result.resources) {
220 |           const resourcesWithSource = result.resources.map(resource => {
221 |             resourceToClientMap.set(resource.uri, connectedClient);
222 |             return {
223 |               ...resource,
224 |               name: `[${connectedClient.name}] ${resource.name || ''}`
225 |             };
226 |           });
227 |           allResources.push(...resourcesWithSource);
228 |         }
229 |       } catch (error) {
230 |         console.error(`Error fetching resources from ${connectedClient.name}:`, error);
231 |       }
232 |     }
233 | 
234 |     return {
235 |       resources: allResources,
236 |       nextCursor: undefined
237 |     };
238 |   });
239 | 
240 |   // Read Resource Handler
241 |   server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
242 |     const { uri } = request.params;
243 |     const clientForResource = resourceToClientMap.get(uri);
244 | 
245 |     if (!clientForResource) {
246 |       throw new Error(`Unknown resource: ${uri}`);
247 |     }
248 | 
249 |     try {
250 |       return await clientForResource.client.request(
251 |         {
252 |           method: 'resources/read',
253 |           params: {
254 |             uri,
255 |             _meta: request.params._meta
256 |           }
257 |         },
258 |         ReadResourceResultSchema
259 |       );
260 |     } catch (error) {
261 |       console.error(`Error reading resource from ${clientForResource.name}:`, error);
262 |       throw error;
263 |     }
264 |   });
265 | 
266 |   // List Resource Templates Handler
267 |   server.setRequestHandler(ListResourceTemplatesRequestSchema, async (request) => {
268 |     const allTemplates: ResourceTemplate[] = [];
269 | 
270 |     for (const connectedClient of connectedClients) {
271 |       try {
272 |         const result = await connectedClient.client.request(
273 |           {
274 |             method: 'resources/templates/list' as const,
275 |             params: {
276 |               cursor: request.params?.cursor,
277 |               _meta: request.params?._meta || {
278 |                 progressToken: undefined
279 |               }
280 |             }
281 |           },
282 |           ListResourceTemplatesResultSchema
283 |         );
284 | 
285 |         if (result.resourceTemplates) {
286 |           const templatesWithSource = result.resourceTemplates.map(template => ({
287 |             ...template,
288 |             name: `[${connectedClient.name}] ${template.name || ''}`,
289 |             description: template.description ? `[${connectedClient.name}] ${template.description}` : undefined
290 |           }));
291 |           allTemplates.push(...templatesWithSource);
292 |         }
293 |       } catch (error) {
294 |         console.error(`Error fetching resource templates from ${connectedClient.name}:`, error);
295 |       }
296 |     }
297 | 
298 |     return {
299 |       resourceTemplates: allTemplates,
 300 |       nextCursor: undefined
301 |     };
302 |   });
303 | 
304 |   const cleanup = async () => {
305 |     await Promise.all(connectedClients.map(({ cleanup }) => cleanup()));
306 |   };
307 | 
308 |   return { server, cleanup };
309 | };
310 | 
```

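The proxy's core pattern — tag each upstream capability with its server name and keep a name-to-client map for routing later calls — is language-independent. Here is a minimal Python sketch of that pattern (the server and tool names are made up):

```python
# Made-up upstream servers and the tools they expose
upstream = {
    "logseq": ["create_page", "search"],
    "files": ["read_file"],
}

tool_to_server: dict[str, str] = {}  # mirrors toolToClientMap
all_tools = []

for server, tools in upstream.items():
    for tool in tools:
        tool_to_server[tool] = server  # remember which server owns the tool
        # Prefix the description so users can see the tool's origin
        all_tools.append({"name": tool, "description": f"[{server}] ..."})

def call_tool(name: str) -> str:
    # Route a call back to the owning server, as the CallTool handler does
    if name not in tool_to_server:
        raise KeyError(f"Unknown tool: {name}")
    return tool_to_server[name]

print(call_tool("read_file"))  # files
```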
--------------------------------------------------------------------------------
/example_llm_mcp/main.py:
--------------------------------------------------------------------------------

```python
  1 | import asyncio
  2 | import json
  3 | import os
  4 | import shutil
  5 | from contextlib import AsyncExitStack
  6 | from typing import Any, Union
  7 | 
  8 | from dotenv import load_dotenv
  9 | from mcp import ClientSession, StdioServerParameters
 10 | from mcp.client.stdio import stdio_client
 11 | 
 12 | from openai import OpenAI
 13 | from openai.types.chat import ChatCompletion  # moved here in openai>=1.0
 14 | 
 15 | class Configuration:
 16 |     """Manages configuration and environment variables for the MCP client."""
 17 | 
 18 |     def __init__(self) -> None:
 19 |         self.load_env()
 20 |         self.api_key = os.getenv("OPENAI_API_KEY")
 21 | 
 22 |     @staticmethod
 23 |     def load_env() -> None:
 24 |         load_dotenv()
 25 | 
 26 |     @staticmethod
 27 |     def load_config(file_path: str = "servers_config.json") -> dict[str, Any]:
 28 |         with open(file_path, "r") as f:
 29 |             return json.load(f)
 30 | 
 31 |     @property
 32 |     def llm_api_key(self) -> str:
 33 |         if not self.api_key:
 34 |             raise ValueError("OPENAI_API_KEY not found in environment variables")
 35 |         return self.api_key
 36 | 
 37 | 
 38 | class Server:
 39 |     """Manages MCP server connections and tool execution."""
 40 | 
 41 |     def __init__(self, name: str, config: dict[str, Any]) -> None:
 42 |         self.name: str = name
 43 |         self.config: dict[str, Any] = config
 44 |         self.stdio_context: Any | None = None
 45 |         self.session: ClientSession | None = None
 46 |         self._cleanup_lock: asyncio.Lock = asyncio.Lock()
 47 |         self.exit_stack: AsyncExitStack = AsyncExitStack()
 48 | 
 49 |     async def initialize(self) -> None:
 50 |         command = (
 51 |             shutil.which("npx")
 52 |             if self.config["command"] == "npx"
 53 |             else self.config["command"]
 54 |         )
 55 |         if command is None:
 56 |             raise ValueError("The command must be a valid string and cannot be None.")
 57 | 
 58 |         server_params = StdioServerParameters(
 59 |             command=command,
 60 |             args=self.config["args"],
 61 |             env={**os.environ, **self.config["env"]}
 62 |             if self.config.get("env")
 63 |             else None,
 64 |         )
 65 |         try:
 66 |             stdio_transport = await self.exit_stack.enter_async_context(
 67 |                 stdio_client(server_params)
 68 |             )
 69 |             read, write = stdio_transport
 70 |             session = await self.exit_stack.enter_async_context(
 71 |                 ClientSession(read, write)
 72 |             )
 73 |             await session.initialize()
 74 |             self.session = session
 75 |         except Exception as e:
 76 |             print(f"Error initializing server {self.name}: {e}")
 77 |             await self.cleanup()
 78 |             raise
 79 | 
 80 |     async def list_tools(self) -> list[Any]:
 81 |         if not self.session:
 82 |             raise RuntimeError(f"Server {self.name} not initialized")
 83 | 
 84 |         tools_response = await self.session.list_tools()
 85 |         tools = []
 86 | 
 87 |         for item in tools_response:
 88 |             if isinstance(item, tuple) and item[0] == "tools":
 89 |                 for tool in item[1]:
 90 |                     tools.append(Tool(tool.name, tool.description, tool.inputSchema))
 91 | 
 92 |         return tools
 93 | 
 94 |     async def execute_tool(
 95 |             self,
 96 |             tool_name: str,
 97 |             arguments: dict[str, Any],
 98 |             retries: int = 2,
 99 |             delay: float = 1.0,
100 |     ) -> Any:
101 |         if not self.session:
102 |             raise RuntimeError(f"Server {self.name} not initialized")
103 | 
104 |         attempt = 0
105 |         while attempt < retries:
106 |             try:
107 |                 print(f"Executing tool {tool_name}")
108 |                 result = await self.session.call_tool(tool_name, arguments)
109 | 
110 |                 return result
111 | 
112 |             except Exception as e:
113 |                 attempt += 1
114 |                 print(f"Error executing tool: {e}. Attempt {attempt} of {retries}.")
115 |                 if attempt < retries:
116 |                     print(f"Retrying in {delay} seconds...")
117 |                     await asyncio.sleep(delay)
118 |                 else:
119 |                     print("Max retries reached. Failing.")
120 |                     raise
121 | 
122 |     async def cleanup(self) -> None:
123 |         async with self._cleanup_lock:
124 |             try:
125 |                 await self.exit_stack.aclose()
126 |                 self.session = None
127 |                 self.stdio_context = None
128 |             except Exception as e:
129 |                 print(f"Error during cleanup of server {self.name}: {e}")
130 | 
131 | 
132 | class Tool:
133 |     """Represents a tool with its properties and formatting."""
134 | 
135 |     def __init__(
136 |             self, name: str, description: str, input_schema: dict[str, Any]
137 |     ) -> None:
138 |         self.name: str = name
139 |         self.description: str = description
140 |         self.input_schema: dict[str, Any] = input_schema
141 | 
142 |     def format_for_llm(self, provider_with_func_call: bool = False) -> str | dict:
143 |         if provider_with_func_call:
144 |             return {
145 |                 "type": "function",
146 |                 "function": {
147 |                     "name": self.name,
148 |                     "description": self.description,
149 |                     "parameters": {
150 |                         "type": "object",
151 |                         "properties": {
152 |                             param_name: param_info
153 |                             for param_name, param_info in self.input_schema["properties"].items()
154 |                         } if "properties" in self.input_schema else {},
155 |                         "required": self.input_schema.get("required", []),
156 |                         "additionalProperties": self.input_schema.get("additionalProperties", False)
157 |                     }
158 |                 }
159 |             }
160 |         else:
161 |             args_desc = []
162 | 
163 |             if "properties" in self.input_schema:
164 |                 for param_name, param_info in self.input_schema["properties"].items():
165 |                     arg_desc = (
166 |                         f"- {param_name}: {param_info.get('description', 'No description')}"
167 |                     )
168 |                     if param_name in self.input_schema.get("required", []):
169 |                         arg_desc += " (required)"
170 |                     args_desc.append(arg_desc)
171 | 
172 |             return f"""
173 | Tool: {self.name}
174 | Description: {self.description}
175 | Arguments:
176 | {chr(10).join(args_desc)}
177 | """
178 | 
179 | 
180 | class LLMClient:
181 |     """Manages communication with the LLM provider."""
182 | 
183 |     def __init__(self, api_key: str | None = None) -> None:
184 |         self.api_key: str | None = api_key or os.getenv("OPENAI_API_KEY")
185 |         self.client = OpenAI(api_key=self.api_key)
186 | 
187 |     def get_response(
188 |             self,
189 |             messages: list[dict[str, str]],
190 |             temperature: float = 0.3,
191 |             model: str = "gpt-4o",
192 |             max_tokens: int = 4096,
193 |             tools: list[dict[str, Any]] | None = None,
194 |     ) -> Union[str, ChatCompletion]:
195 |         if tools:
196 |             response = self.client.chat.completions.create(
197 |                 model=model,
198 |                 messages=messages,
199 |                 temperature=temperature,
200 |                 max_tokens=max_tokens,
201 |                 tools=tools,
202 |             )
203 | 
204 |             if response.choices[0].finish_reason == "tool_calls":
205 |                 return response
206 |             else:
207 |                 return response.choices[0].message.content
208 |         else:
209 |             response = self.client.chat.completions.create(
210 |                 model=model,
211 |                 messages=messages,
212 |                 temperature=temperature,
213 |                 max_tokens=max_tokens,
214 |             )
215 | 
216 |             return response.choices[0].message.content
217 | 
218 | 
219 | class ChatSession:
220 |     """Orchestrates the interaction between user, LLM, and tools."""
221 | 
222 |     def __init__(self, servers: list[Server], llm_client: LLMClient) -> None:
223 |         self.servers: list[Server] = servers
224 |         self.llm_client: LLMClient = llm_client
225 | 
226 |     async def cleanup_servers(self) -> None:
227 |         cleanup_tasks = []
228 |         for server in self.servers:
229 |             cleanup_tasks.append(asyncio.create_task(server.cleanup()))
230 | 
231 |         if cleanup_tasks:
232 |             try:
233 |                 await asyncio.gather(*cleanup_tasks, return_exceptions=True)
234 |             except Exception as e:
235 |                 print(f"Warning during final cleanup: {e}")
236 | 
237 |     async def process_llm_response(self, llm_response: Union[str, ChatCompletion]) -> Union[str, list, ChatCompletion]:
238 |         try:
239 |             if isinstance(llm_response, str):
240 |                 tool_call = json.loads(llm_response)
241 |                 if "tool" in tool_call and "arguments" in tool_call:
242 |                     print(f"Executing tool: {tool_call['tool']}")
243 |                     print(f"With arguments: {tool_call['arguments']}")
244 | 
245 |                     for server in self.servers:
246 |                         tools = await server.list_tools()
247 |                         if any(tool.name == tool_call["tool"] for tool in tools):
248 |                             try:
249 |                                 result = await server.execute_tool(
250 |                                     tool_call["tool"], tool_call["arguments"]
251 |                                 )
252 | 
253 |                                 if isinstance(result, dict) and "progress" in result:
254 |                                     progress = result["progress"]
255 |                                     total = result["total"]
256 |                                     percentage = (progress / total) * 100
257 |                                     print(f"Progress: {progress}/{total} ({percentage:.1f}%)")
258 | 
259 |                                 return f"Tool execution result: {result}"
260 |                             except Exception as e:
261 |                                 error_msg = f"Error executing tool: {str(e)}"
262 |                                 print(error_msg)
263 |                                 return error_msg
264 |                 return llm_response
265 |             else:
266 |                 if llm_response.choices[0].finish_reason == "tool_calls":
267 |                     for tool_call in llm_response.choices[0].message.tool_calls:
268 |                         function_call = json.loads(tool_call.function.to_json())
269 |                         if "arguments" in function_call and "name" in function_call:
270 |                             function_name = function_call["name"]
271 |                             arguments = json.loads(function_call["arguments"])
272 |                             print(f"Executing function: {function_name} with arguments: {arguments}")
273 |                             results = []
274 | 
275 |                             for server in self.servers:
276 |                                 tools = await server.list_tools()
277 |                                 if any(tool.name == function_name for tool in tools):
278 |                                     try:
279 |                                         result = await server.execute_tool(
280 |                                             function_name, arguments
281 |                                         )
282 | 
283 |                                         if isinstance(result, dict) and "progress" in result:
284 |                                             progress = result["progress"]
285 |                                             total = result["total"]
286 |                                             percentage = (progress / total) * 100
287 |                                             print(f"Progress: {progress}/{total} ({percentage:.1f}%)")
288 | 
289 |                                         results.append(f"Tool execution result: {result}")
290 |                                     except Exception as e:
291 |                                         error_msg = f"Error executing tool: {str(e)}"
292 |                                         print(error_msg)
293 |                                         results.append(error_msg)
294 | 
295 |                             return results
296 | 
297 |             return llm_response
298 |         except json.JSONDecodeError:
299 |             return llm_response
300 | 
301 |     async def start(self) -> None:
302 |         try:
303 |             for server in self.servers:
304 |                 try:
305 |                     await server.initialize()
306 |                 except Exception as e:
307 |                     print(f"Failed to initialize server: {e}")
308 |                     await self.cleanup_servers()
309 |                     return
310 | 
311 |             all_tools = []
312 |             for server in self.servers:
313 |                 tools = await server.list_tools()
314 |                 all_tools.extend(tools)
315 | 
316 |             tools_schema = [tool.format_for_llm(provider_with_func_call=True) for tool in all_tools]
317 | 
318 |             # tools_description = "\n".join([tool.format_for_llm() for tool in all_tools])
319 |             #
320 |             # system_message = (
321 |             #     "You are a helpful assistant with access to these tools:\n\n"
322 |             #     f"{tools_description}\n"
323 |             #     "Choose the appropriate tool based on the user's question. "
324 |             #     "If no tool is needed, reply directly.\n\n"
325 |             #     "IMPORTANT: When you need to use a tool, you must ONLY respond with "
326 |             #     "the exact JSON object format below, nothing else:\n"
327 |             #     "{\n"
328 |             #     '    "tool": "tool-name",\n'
329 |             #     '    "arguments": {\n'
330 |             #     '        "argument-name": "value"\n'
331 |             #     "    }\n"
332 |             #     "}\n\n"
333 |             #     "After receiving a tool's response:\n"
334 |             #     "1. Transform the raw data into a natural, conversational response\n"
335 |             #     "2. Keep responses concise but informative\n"
336 |             #     "3. Focus on the most relevant information\n"
337 |             #     "4. Use appropriate context from the user's question\n"
338 |             #     "5. Avoid simply repeating the raw data\n\n"
339 |             #     "Please use only the tools that are explicitly defined above."
340 |             # )
341 | 
342 |             system_message = "You are a helpful assistant with access to some tools. Use them only when necessary."
343 | 
344 |             messages = [{"role": "system", "content": system_message}]
345 | 
346 |             while True:
347 |                 try:
348 |                     user_input = input("You: ").strip()
349 |                     if user_input.lower() in ["quit", "exit"]:
350 |                         print("\nExiting...")
351 |                         break
352 | 
353 |                     messages.append({"role": "user", "content": user_input})
354 | 
355 |                     # llm_response = self.llm_client.get_response(messages)
356 |                     llm_response = self.llm_client.get_response(messages, tools=tools_schema)
357 |                     print(f"\nAssistant: {llm_response}")
358 | 
359 |                     result = await self.process_llm_response(llm_response)
360 | 
361 |                     if isinstance(result, list) or result != llm_response:
362 |                         print(f"\nTool execution result: {result}")
363 | 
364 |                         if isinstance(llm_response, str):
365 |                             # Legacy JSON tool-call path: no tool_call ids are
366 |                             # available, so feed the result back as plain messages.
367 |                             messages.append({"role": "assistant", "content": llm_response})
368 |                             messages.append({"role": "system", "content": result})
369 |                         else:
370 |                             # Echo the assistant tool-call message, then append one
371 |                             # "tool" message per call, matched by tool_call_id.
372 |                             tool_call_output = json.loads(llm_response.choices[0].message.to_json())
373 |                             messages.append(tool_call_output)
374 |                             messages[-1]["content"] = ""
375 | 
376 |                             results = result if isinstance(result, list) else [result]
377 |                             for i, res in enumerate(results):
378 |                                 messages.append({
379 |                                     "tool_call_id": tool_call_output["tool_calls"][i]["id"],
380 |                                     "content": res,
381 |                                     "role": "tool",
382 |                                 })
383 | 
384 | 
385 |                         final_response = self.llm_client.get_response(messages)
386 |                         print(f"\nFinal response: {final_response}")
387 |                         messages.append(
388 |                             {"role": "assistant", "content": final_response}
389 |                         )
390 |                     else:
391 |                         messages.append({"role": "assistant", "content": llm_response})
392 | 
393 |                 except KeyboardInterrupt:
394 |                     print("\nExiting...")
395 |                     break
396 | 
397 |         finally:
398 |             await self.cleanup_servers()
399 | 
400 | 
401 | async def main() -> None:
402 |     """Initialize and run the chat session."""
403 |     config = Configuration()
404 |     server_config = config.load_config("servers_config.json")
405 |     servers = [
406 |         Server(name, srv_config)
407 |         for name, srv_config in server_config["mcpServers"].items()
408 |     ]
409 |     llm_client = LLMClient(config.llm_api_key)
410 |     chat_session = ChatSession(servers, llm_client)
411 |     await chat_session.start()
412 | 
413 | 
414 | if __name__ == "__main__":
415 |     asyncio.run(main())
```
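
For context, the OpenAI function-calling schema that `Tool.format_for_llm(provider_with_func_call=True)` builds can be sketched as a standalone helper. The `read_file` tool below is a hypothetical example for illustration, not one of the bundled servers:

```python
import json


def format_tool_schema(name: str, description: str, input_schema: dict) -> dict:
    """Mirror of Tool.format_for_llm(provider_with_func_call=True)."""
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": {
                "type": "object",
                "properties": input_schema.get("properties", {}),
                "required": input_schema.get("required", []),
                "additionalProperties": input_schema.get("additionalProperties", False),
            },
        },
    }


# Hypothetical MCP tool definition, for illustration only.
schema = format_tool_schema(
    "read_file",
    "Read a file from disk",
    {
        "properties": {"path": {"type": "string", "description": "File path"}},
        "required": ["path"],
    },
)
print(json.dumps(schema, indent=2))
```

A list of such dicts is what `ChatSession.start` passes as `tools=` to `client.chat.completions.create`.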