# Directory Structure
```
├── .gitignore
├── bun.lockb
├── LICENSE
├── package.json
├── README.md
├── src
│   ├── build-unikernel
│   │   ├── build-unikernel.test.ts
│   │   ├── build-unikernel.ts
│   │   └── example-configs
│   │       ├── Dockerfile.generated
│   │       ├── kitchen-sink.json
│   │       ├── node-only.json
│   │       └── python-only.json
│   ├── lib
│   │   └── config.ts
│   └── mcp-server-wrapper
│       ├── example-client
│       │   └── example-client.ts
│       ├── logger.ts
│       ├── mcp-server-wrapper.ts
│       ├── parse.test.ts
│       ├── parse.ts
│       ├── process-pool.test.ts
│       └── process-pool.ts
└── tsconfig.json
```
# Files
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
```
1 | # Based on https://raw.githubusercontent.com/github/gitignore/main/Node.gitignore
2 |
3 | # Logs
4 |
5 | logs
6 | *.log
7 | npm-debug.log*
8 | yarn-debug.log*
9 | yarn-error.log*
10 | lerna-debug.log*
11 | .pnpm-debug.log*
12 |
13 | # Caches
14 |
15 | .cache
16 |
17 | # Diagnostic reports (https://nodejs.org/api/report.html)
18 |
19 | report.[0-9]*.[0-9]*.[0-9]*.[0-9]*.json
20 |
21 | # Runtime data
22 |
23 | pids
24 | *.pid
25 | *.seed
26 | *.pid.lock
27 |
28 | # Directory for instrumented libs generated by jscoverage/JSCover
29 |
30 | lib-cov
31 |
32 | # Coverage directory used by tools like istanbul
33 |
34 | coverage
35 | *.lcov
36 |
37 | # nyc test coverage
38 |
39 | .nyc_output
40 |
41 | # Grunt intermediate storage (https://gruntjs.com/creating-plugins#storing-task-files)
42 |
43 | .grunt
44 |
45 | # Bower dependency directory (https://bower.io/)
46 |
47 | bower_components
48 |
49 | # node-waf configuration
50 |
51 | .lock-wscript
52 |
53 | # Compiled binary addons (https://nodejs.org/api/addons.html)
54 |
55 | build/Release
56 |
57 | # Dependency directories
58 |
59 | node_modules/
60 | jspm_packages/
61 |
62 | # Snowpack dependency directory (https://snowpack.dev/)
63 |
64 | web_modules/
65 |
66 | # TypeScript cache
67 |
68 | *.tsbuildinfo
69 |
70 | # Optional npm cache directory
71 |
72 | .npm
73 |
74 | # Optional eslint cache
75 |
76 | .eslintcache
77 |
78 | # Optional stylelint cache
79 |
80 | .stylelintcache
81 |
82 | # Microbundle cache
83 |
84 | .rpt2_cache/
85 | .rts2_cache_cjs/
86 | .rts2_cache_es/
87 | .rts2_cache_umd/
88 |
89 | # Optional REPL history
90 |
91 | .node_repl_history
92 |
93 | # Output of 'npm pack'
94 |
95 | *.tgz
96 |
97 | # Yarn Integrity file
98 |
99 | .yarn-integrity
100 |
101 | # dotenv environment variable files
102 |
103 | .env
104 | .env.development.local
105 | .env.test.local
106 | .env.production.local
107 | .env.local
108 |
109 | # parcel-bundler cache (https://parceljs.org/)
110 |
111 | .parcel-cache
112 |
113 | # Next.js build output
114 |
115 | .next
116 | out
117 |
118 | # Nuxt.js build / generate output
119 |
120 | .nuxt
121 | dist
122 |
123 | # Gatsby files
124 |
125 | # Comment in the public line in if your project uses Gatsby and not Next.js
126 |
127 | # https://nextjs.org/blog/next-9-1#public-directory-support
128 |
129 | # public
130 |
131 | # vuepress build output
132 |
133 | .vuepress/dist
134 |
135 | # vuepress v2.x temp and cache directory
136 |
137 | .temp
138 |
139 | # Docusaurus cache and generated files
140 |
141 | .docusaurus
142 |
143 | # Serverless directories
144 |
145 | .serverless/
146 |
147 | # FuseBox cache
148 |
149 | .fusebox/
150 |
151 | # DynamoDB Local files
152 |
153 | .dynamodb/
154 |
155 | # TernJS port file
156 |
157 | .tern-port
158 |
159 | # Stores VSCode versions used for testing VSCode extensions
160 |
161 | .vscode-test
162 |
163 | # yarn v2
164 |
165 | .yarn/cache
166 | .yarn/unplugged
167 | .yarn/build-state.yml
168 | .yarn/install-state.gz
169 | .pnp.*
170 |
171 | # IntelliJ based IDEs
172 | .idea
173 |
174 | # Finder (MacOS) folder config
175 | .DS_Store
176 |
177 | .archive/
```
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
```markdown
1 | # mcp-server-server
2 |
3 | This repo is a proof-of-concept MCP server that exposes another stdio MCP server over a WebSocket.
4 |
5 | ## But...why?
6 |
7 | MCP servers are hard to use.
8 |
9 | The primary transport mechanism for MCP servers is stdio, i.e. your MCP client program needs to spawn a new process for each MCP server it wants to use.
10 | This has downsides:
11 |
12 | 1. It's cumbersome--every MCP client needs to be a process manager now. The way you [configure Claude Desktop](https://modelcontextprotocol.io/quickstart#installation) to use MCP servers is a good demonstration of this--it needs a list of processes to run.
13 | 2. It creates an infra problem: if you have many users, all of whom require different MCP server configurations (e.g. they all have different credentials for underlying MCP servers like GitHub, Google Drive, etc.), then you have tons of processes to operate and route client requests to.
14 | 3. It's slow: the default way to spin up an MCP server is `npx ...` or `uvx ...`, which comes with all of the slowness of those tools (2-3s spin-up times are normal).
15 |
16 | ## A better way
17 |
18 | What if MCP servers were actually... servers? I.e. communication with them happened over the network instead of stdio.
19 | Then you could have an easier time using them programmatically.
20 |
21 | ### Step 1: Convert a stdio MCP server to a websocket MCP server
22 |
23 | This repo contains a wrapper program that takes an existing MCP server ([here](https://github.com/modelcontextprotocol/servers/tree/main/src/) is a list of the official ones, but they're published all over the place now) and exposes it over a WebSocket:
24 |
25 | ```zsh
26 | bun run mcp-server-wrapper -p 3001 -- npx -y @modelcontextprotocol/server-puppeteer@latest
27 | ```
28 |
29 | For faster spin-up times, install the server globally and invoke it with `node` directly:
30 |
31 | ```zsh
32 | pnpm install -g @modelcontextprotocol/server-puppeteer@latest
33 | bun run mcp-server-wrapper -p 3001 -- node ~/Library/pnpm/global/5/node_modules/@modelcontextprotocol/server-puppeteer/dist/index.js
34 | ```
35 |
36 | ### Step 2: Interact with the MCP server programmatically without managing processes
37 |
38 | ```typescript
39 | import { Client } from "@modelcontextprotocol/sdk/client/index.js";
40 | import { WebSocketClientTransport } from "@modelcontextprotocol/sdk/client/websocket.js";
41 |
42 | const transport = new WebSocketClientTransport(new URL("ws://localhost:3001"));
43 |
44 | const client = new Client(
45 | {
46 | name: "example-client",
47 | version: "1.0.0",
48 | },
49 | {
50 | capabilities: {},
51 | }
52 | );
53 | await client.connect(transport);
54 | const tools = await client.listTools();
55 | console.log(
56 | "Tools:",
57 | tools.tools.map((t) => t.name)
58 | );
59 | await client.close();
60 | ```
61 |
62 | ```zsh
63 | bun run mcp-server-wrapper-client
64 | $ bun run src/mcp-server-wrapper/example-client/example-client.ts
65 | Tools: [ "puppeteer_navigate", "puppeteer_screenshot", "puppeteer_click", "puppeteer_fill",
66 | "puppeteer_evaluate"
67 | ]
68 | ```
69 |
70 | ### Step 3: Build it into a docker image
71 |
72 | For a given MCP server configuration, e.g.
73 |
74 | ```json
75 | {
76 | "mcpServers": {
77 | "fetch": {
78 | "command": "uvx",
79 | "args": ["mcp-server-fetch"]
80 | }
81 | }
82 | }
83 | ```
84 |
85 | We'd like to build it into a Docker image that exposes a WebSocket and that we can run anywhere.
86 | This repo contains a script that will output a Dockerfile for a given MCP server configuration:
87 |
88 | ```bash
89 | bun run src/build-unikernel/build-unikernel.ts src/build-unikernel/example-configs/python-only.json
90 | # writes build-unikernel-<hash>/Dockerfile.generated, where <hash> is derived from the config contents
91 | ```
92 |
```
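To follow through on Step 3 of the README: the generated Dockerfile embeds the config and launches the wrapper on port 3001 via its `ENTRYPOINT` (see `src/build-unikernel/build-unikernel.ts`), so building and running the image could look like the sketch below. The image tag and the `<hash>` directory suffix are illustrative placeholders, not values produced verbatim by the script.

```zsh
# build from the repo root so the Dockerfile's `COPY . .` picks up the wrapper sources
docker build -f build-unikernel-<hash>/Dockerfile.generated -t mcp-ws-example .
docker run --rm -p 3001:3001 mcp-ws-example

# each configured server is then reachable at ws://localhost:3001/<server-name>,
# e.g. ws://localhost:3001/fetch for the config above
```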
--------------------------------------------------------------------------------
/src/build-unikernel/example-configs/python-only.json:
--------------------------------------------------------------------------------
```json
1 | {
2 | "mcpServers": {
3 | "fetch": {
4 | "command": "uvx",
5 | "args": ["mcp-server-fetch"]
6 | }
7 | }
8 | }
9 |
```
--------------------------------------------------------------------------------
/src/build-unikernel/example-configs/node-only.json:
--------------------------------------------------------------------------------
```json
1 | {
2 | "mcpServers": {
3 | "puppeteer": {
4 | "command": "npx",
5 | "args": ["@modelcontextprotocol/server-puppeteer"]
6 | }
7 | }
8 | }
9 |
```
--------------------------------------------------------------------------------
/src/build-unikernel/example-configs/kitchen-sink.json:
--------------------------------------------------------------------------------
```json
1 | {
2 | "mcpServers": {
3 | "fetch": {
4 | "command": "uvx",
5 | "args": ["mcp-server-fetch"]
6 | },
7 | "puppeteer": {
8 | "command": "npx",
9 | "args": ["@modelcontextprotocol/server-puppeteer"]
10 | }
11 | }
12 | }
13 |
```
--------------------------------------------------------------------------------
/src/lib/config.ts:
--------------------------------------------------------------------------------
```typescript
1 | import { z } from "zod";
2 |
3 | // Define the MCP server configuration schema
4 | export const MCPServerConfigSchema = z.object({
5 | command: z.string(),
6 | args: z.array(z.string()),
7 | });
8 |
9 | export const ConfigSchema = z.object({
10 | mcpServers: z.record(z.string(), MCPServerConfigSchema),
11 | });
12 |
13 | export type Config = z.infer<typeof ConfigSchema>;
14 | export type MCPServerConfig = z.infer<typeof MCPServerConfigSchema>;
15 |
16 | export async function loadConfig(configPath: string): Promise<Config> {
17 | const configContent = await Bun.file(configPath).text();
18 | const configJson = JSON.parse(configContent);
19 | return ConfigSchema.parse(configJson);
20 | }
21 |
```
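A minimal usage sketch of this module, assuming a script at the repo root and a config file named `./config.json` (hypothetical path; any file matching the schema works):

```typescript
import { ConfigSchema, loadConfig } from "./src/lib/config";

// loadConfig reads the file with Bun and validates it against ConfigSchema,
// so a malformed config fails here rather than deep inside the wrapper.
const config = await loadConfig("./config.json");
console.log(Object.keys(config.mcpServers)); // e.g. [ "fetch" ]

// the schema can also be used directly, without touching disk
const result = ConfigSchema.safeParse({
  mcpServers: { fetch: { command: "uvx", args: ["mcp-server-fetch"] } },
});
console.log(result.success); // true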
--------------------------------------------------------------------------------
/tsconfig.json:
--------------------------------------------------------------------------------
```json
1 | {
2 | "compilerOptions": {
3 | // Enable latest features
4 | "lib": ["ESNext", "DOM"],
5 | "target": "ESNext",
6 | "module": "ESNext",
7 | "moduleDetection": "force",
8 | "jsx": "react-jsx",
9 | "allowJs": true,
10 |
11 | // Bundler mode
12 | "moduleResolution": "bundler",
13 | "allowImportingTsExtensions": true,
14 | "verbatimModuleSyntax": true,
15 | "noEmit": true,
16 |
17 | // Best practices
18 | "strict": true,
19 | "skipLibCheck": true,
20 | "noFallthroughCasesInSwitch": true,
21 |
22 | // Some stricter flags (disabled by default)
23 | "noUnusedLocals": false,
24 | "noUnusedParameters": false,
25 | "noPropertyAccessFromIndexSignature": false
26 | }
27 | }
28 |
```
--------------------------------------------------------------------------------
/package.json:
--------------------------------------------------------------------------------
```json
1 | {
2 | "name": "mcp-server-server",
3 | "description": "a model context protocol server of servers",
4 | "module": "src/index.ts",
5 | "type": "module",
6 | "scripts": {
7 | "mcp-server-wrapper-build": "bun build --compile --minify --sourcemap --bytecode ./src/mcp-server-wrapper/mcp-server-wrapper.ts --outfile build/mcp-server-wrapper",
8 | "mcp-server-wrapper": "bun run src/mcp-server-wrapper/mcp-server-wrapper.ts",
9 | "mcp-server-wrapper-client": "bun run src/mcp-server-wrapper/example-client/example-client.ts"
10 | },
11 | "dependencies": {
12 | "@modelcontextprotocol/sdk": "0.6.0",
13 | "winston": "^3.17.0",
14 | "zod": "^3.23.8",
15 | "zod-to-json-schema": "^3.23.5"
16 | },
17 | "devDependencies": {
18 | "@types/bun": "latest"
19 | },
20 | "peerDependencies": {
21 | "typescript": "^5.0.0"
22 | }
23 | }
24 |
```
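The `mcp-server-wrapper-build` script above compiles the wrapper into a single self-contained binary with Bun. A usage sketch (the config path is a placeholder):

```zsh
bun run mcp-server-wrapper-build
# produces build/mcp-server-wrapper, which takes the same arguments as the TypeScript entrypoint
./build/mcp-server-wrapper -p 3001 path/to/mcp-config.json
```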
--------------------------------------------------------------------------------
/src/mcp-server-wrapper/logger.ts:
--------------------------------------------------------------------------------
```typescript
1 | import { createLogger, format, Logger, transports } from "winston";
2 |
3 | const logger = createLogger({
4 | level: process.env.LOG_LEVEL || "info",
5 | format: format.combine(
6 | format.colorize(),
7 | format.timestamp(),
8 | format.printf(({ timestamp, level, message }) => {
9 | return `${timestamp} [${level}]: ${message}`;
10 | })
11 | ),
12 | transports: [new transports.Console()],
13 | });
14 |
15 | function childProcessLogger(pid: number | undefined): Logger {
16 | return createLogger({
17 | level: process.env.LOG_LEVEL || "info",
18 | format: format.combine(
19 | format.colorize(),
20 | format.timestamp(),
21 | format.printf(({ timestamp, level, message }) => {
22 | return `${timestamp} [${level}]: \x1b[34m[child_process[${pid}]]\x1b[0m: ${message}`;
23 | })
24 | ),
25 | transports: [new transports.Console()],
26 | });
27 | }
28 |
29 | export { childProcessLogger, logger };
30 |
```
--------------------------------------------------------------------------------
/src/mcp-server-wrapper/parse.ts:
--------------------------------------------------------------------------------
```typescript
1 | export type Options = {
2 | port: number;
3 | configPath: string;
4 | };
5 |
6 | export function parseCommandLineArgs(args: string[]): Options {
7 | const options: Options = {
8 | port: 3000,
9 | configPath: "",
10 | };
11 |
12 | for (let i = 0; i < args.length; i++) {
13 | const arg = args[i];
14 | switch (arg) {
15 | case "-p":
16 | case "--port":
17 | if (i + 1 >= args.length) {
18 | throw new Error("Missing port number");
19 | }
20 | const port = parseInt(args[++i]);
21 | if (isNaN(port)) {
22 | throw new Error(`Invalid port number: ${args[i]}`);
23 | }
24 | options.port = port;
25 | break;
26 | default:
27 | if (arg.startsWith("-")) {
28 | throw new Error(`Unknown option: ${arg}`);
29 | }
30 | options.configPath = arg;
31 | }
32 | }
33 |
34 | if (!options.configPath) {
35 | throw new Error("No config file path provided");
36 | }
37 |
38 | return options;
39 | }
40 |
```
--------------------------------------------------------------------------------
/src/mcp-server-wrapper/parse.test.ts:
--------------------------------------------------------------------------------
```typescript
1 | import { describe, expect, test } from "bun:test";
2 | import { parseCommandLineArgs } from "./parse";
3 |
4 | describe("parseCommandLineArgs", () => {
5 | test("should parse default port when only config path provided", () => {
6 | const options = parseCommandLineArgs(["config.json"]);
7 | expect(options.port).toBe(3000);
8 | expect(options.configPath).toBe("config.json");
9 | });
10 |
11 | test("should parse port and config path correctly", () => {
12 | const options = parseCommandLineArgs(["-p", "8080", "config.json"]);
13 | expect(options.port).toBe(8080);
14 | expect(options.configPath).toBe("config.json");
15 |
16 | const options2 = parseCommandLineArgs(["--port", "9000", "config.json"]);
17 | expect(options2.port).toBe(9000);
18 | expect(options2.configPath).toBe("config.json");
19 | });
20 |
21 | test("should handle config path before port flag", () => {
22 | const options = parseCommandLineArgs(["config.json", "-p", "8080"]);
23 | expect(options.port).toBe(8080);
24 | expect(options.configPath).toBe("config.json");
25 | });
26 |
27 | test("should error on missing config path", () => {
28 | expect(() => parseCommandLineArgs(["-p", "8080"])).toThrow();
29 | });
30 |
31 | test("should error on invalid port", () => {
32 | expect(() =>
33 | parseCommandLineArgs(["-p", "invalid", "config.json"])
34 | ).toThrow();
35 | });
36 |
37 | test("should error on unknown flag", () => {
38 | expect(() => parseCommandLineArgs(["-x", "config.json"])).toThrow();
39 | });
40 | });
41 |
```
--------------------------------------------------------------------------------
/src/mcp-server-wrapper/example-client/example-client.ts:
--------------------------------------------------------------------------------
```typescript
1 | import { Client } from "@modelcontextprotocol/sdk/client/index.js";
2 | import { WebSocketClientTransport } from "@modelcontextprotocol/sdk/client/websocket.js";
3 | import { loadConfig } from "../../lib/config";
4 |
5 | async function testServer(serverName: string, port: number) {
6 | console.log(`\nTesting server: ${serverName}`);
7 | const transport = new WebSocketClientTransport(
8 | new URL(`ws://localhost:${port}/${serverName}`)
9 | );
10 |
11 | const client = new Client(
12 | {
13 | name: "example-client",
14 | version: "1.0.0",
15 | },
16 | {
17 | capabilities: {},
18 | }
19 | );
20 |
21 | try {
22 | console.time(`${serverName} Connection`);
23 | await client.connect(transport);
24 | console.timeEnd(`${serverName} Connection`);
25 |
26 | console.time(`${serverName} List Tools`);
27 | const tools = await client.listTools();
28 | console.timeEnd(`${serverName} List Tools`);
29 |
30 | console.log(
31 | `${serverName} Tools:`,
32 | tools.tools.map((t) => t.name)
33 | );
34 | } catch (error) {
35 | console.error(`Error testing ${serverName}:`, error);
36 | } finally {
37 | await client.close();
38 | }
39 | }
40 |
41 | async function main() {
42 | const args = process.argv.slice(2);
43 | if (args.length < 1) {
44 | console.error("Usage: example-client <config-file-path> [port]");
45 | process.exit(1);
46 | }
47 |
48 | const configPath = args[0];
49 | const port = args[1] ? parseInt(args[1]) : 3001;
50 |
51 | const config = await loadConfig(configPath);
52 |
53 | // Test each server in sequence
54 | for (const serverName of Object.keys(config.mcpServers)) {
55 | await testServer(serverName, port);
56 | }
57 | }
58 |
59 | main().catch(console.error);
60 |
```
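Unlike the snippet in the README, this client takes the wrapper's config file as an argument and probes every server listed in it, each on its own WebSocket path. A usage sketch with the kitchen-sink example config (start the wrapper first, then point the client at the same config and port):

```zsh
# terminal 1: start the wrapper with the kitchen-sink config
bun run src/mcp-server-wrapper/mcp-server-wrapper.ts -p 3001 src/build-unikernel/example-configs/kitchen-sink.json

# terminal 2: run the client against the same config and port
bun run src/mcp-server-wrapper/example-client/example-client.ts src/build-unikernel/example-configs/kitchen-sink.json 3001
```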
--------------------------------------------------------------------------------
/src/mcp-server-wrapper/process-pool.test.ts:
--------------------------------------------------------------------------------
```typescript
1 | import { describe, expect, test } from "bun:test";
2 | import { ProcessPool } from "./process-pool";
3 |
4 | describe("ProcessPool", () => {
5 | test("initializes with correct number of processes", async () => {
6 | const pool = new ProcessPool(["echo", "hello"], {}, 2);
7 | await pool.initialize();
8 | expect(pool.getPoolSize()).toBe(2);
9 | pool.cleanup();
10 | });
11 |
12 | test("maintains pool size after getting processes", async () => {
13 | const pool = new ProcessPool(["echo", "hello"], {}, 2);
14 | await pool.initialize();
15 |
16 | // Get a process and verify pool size
17 | await pool.getProcess();
18 | const processes = await Promise.all([pool.getProcess(), pool.getProcess()]);
19 |
20 | // Cleanup the processes we got
21 | processes.forEach((p) => p.process.kill());
22 | pool.cleanup();
23 | });
24 |
25 | test("spawns new process when pool is empty", async () => {
26 | const pool = new ProcessPool(["echo", "hello"], {}, 1);
27 | await pool.initialize();
28 |
29 | // Get two processes (pool size is 1)
30 | const process1 = await pool.getProcess();
31 | const process2 = await pool.getProcess();
32 |
33 | expect(process1).toBeDefined();
34 | expect(process2).toBeDefined();
35 | expect(process1).not.toBe(process2);
36 |
37 | process1.process.kill();
38 | process2.process.kill();
39 | pool.cleanup();
40 | });
41 |
42 | test("handles concurrent process requests", async () => {
43 | const pool = new ProcessPool(["echo", "hello"], {}, 1);
44 | await pool.initialize();
45 |
46 | // Request multiple processes concurrently
47 | const processes = await Promise.all([
48 | pool.getProcess(),
49 | pool.getProcess(),
50 | pool.getProcess(),
51 | ]);
52 |
53 | expect(processes.length).toBe(3);
54 | expect(processes.every((p) => p.process && p.stdin)).toBe(true);
55 | // Verify all processes are different
56 | const pids = processes.map((p) => p.process.pid);
57 | expect(new Set(pids).size).toBe(pids.length);
58 |
59 | // Cleanup the processes we got
60 | processes.forEach((p) => p.process.kill());
61 | pool.cleanup();
62 | });
63 |
64 | test("cleans up processes on cleanup", async () => {
65 | const pool = new ProcessPool(["echo", "hello"], {}, 2);
66 | await pool.initialize();
67 |
68 | expect(pool.getPoolSize()).toBe(2);
69 | pool.cleanup();
70 | expect(pool.getPoolSize()).toBe(0);
71 | });
72 |
73 | test("process stdin works correctly", async () => {
74 | const pool = new ProcessPool(["cat"], {}, 1);
75 | await pool.initialize();
76 |
77 | const { process, stdin } = await pool.getProcess();
78 | const testMessage = "hello world\n";
79 |
80 | // Create a promise that resolves with stdout data
81 | const outputPromise = new Promise<string>((resolve) => {
82 | process.stdout?.on("data", (data: Buffer) => {
83 | resolve(data.toString());
84 | });
85 | });
86 |
87 | // Write to stdin
88 | stdin.write(testMessage);
89 |
90 | // Wait for the output and verify it matches
91 | const output = await outputPromise;
92 | expect(output).toBe(testMessage);
93 |
94 | process.kill();
95 | pool.cleanup();
96 | });
97 | });
98 |
```
--------------------------------------------------------------------------------
/src/mcp-server-wrapper/process-pool.ts:
--------------------------------------------------------------------------------
```typescript
1 | import { spawn } from "child_process";
2 | import { PassThrough } from "node:stream";
3 | import { childProcessLogger, logger as l } from "./logger";
4 |
5 | export type SpawnedProcess = {
6 | process: ReturnType<typeof spawn>;
7 | stdin: PassThrough;
8 | };
9 |
10 | export class ProcessPool {
11 | private processes: SpawnedProcess[] = [];
12 | private command: string[];
13 | private env: Record<string, string>;
14 | private minPoolSize: number;
15 | private logger = l;
16 | private spawningCount = 0;
17 |
18 | constructor(command: string[], env: Record<string, string>, minPoolSize = 1) {
19 | this.command = command;
20 | this.env = env;
21 | this.minPoolSize = minPoolSize;
22 | }
23 |
24 | private async spawnProcess(): Promise<SpawnedProcess> {
25 | this.spawningCount++;
26 | try {
27 | const startTime = performance.now();
28 | const childProcess = spawn(this.command[0], this.command.slice(1), {
29 | env: { ...process.env, ...this.env },
30 | stdio: ["pipe", "pipe", "pipe"],
31 | });
32 | const spawnTime = performance.now() - startTime;
33 | const cl = childProcessLogger(childProcess.pid);
34 |
35 | childProcess.stderr?.on("data", (data: Buffer) => {
36 | cl.error(data.toString());
37 | });
38 |
39 | const stdin = new PassThrough();
40 | stdin.pipe(childProcess.stdin!);
41 |
42 | const spawnedProcess: SpawnedProcess = {
43 | process: childProcess,
44 | stdin,
45 | };
46 |
47 | this.logger.info(
48 | `spawned process with PID ${childProcess.pid} in ${spawnTime.toFixed(
49 | 2
50 | )}ms`
51 | );
52 |
53 | return spawnedProcess;
54 | } finally {
55 | this.spawningCount--;
56 | }
57 | }
58 |
59 | private async spawnReplacement() {
60 | // Only spawn if total processes (running + spawning) is less than minPoolSize
61 | if (this.processes.length + this.spawningCount < this.minPoolSize) {
62 | const process = await this.spawnProcess();
63 | // Double check we still need this process
64 | if (this.processes.length + this.spawningCount < this.minPoolSize) {
65 | this.processes.push(process);
66 | } else {
67 | // We don't need this process anymore, kill it
68 | l.info(`killing process ${process.process.pid}`);
69 | process.process.kill();
70 | }
71 | }
72 | }
73 |
74 | async initialize() {
75 | // Start initial processes
76 | const promises = [];
77 | for (let i = 0; i < this.minPoolSize; i++) {
78 | promises.push(
79 | this.spawnProcess().then((process) => {
80 | this.processes.push(process);
81 | })
82 | );
83 | }
84 | await Promise.all(promises);
85 | }
86 |
87 | async getProcess(): Promise<SpawnedProcess> {
88 | // If we have a process available, return it
89 | if (this.processes.length > 0) {
90 | const process = this.processes.pop()!;
91 | // Spawn a replacement asynchronously
92 | this.spawnReplacement();
93 | return process;
94 | }
95 |
96 | // If no process available, spawn one immediately
97 | return await this.spawnProcess();
98 | }
99 |
100 | cleanup() {
101 | for (const process of this.processes) {
102 | l.info(`killing process ${process.process.pid}`);
103 | process.process.kill();
104 | }
105 | this.processes = [];
106 | }
107 |
108 | // For testing purposes
109 | getPoolSize(): number {
110 | return this.processes.length;
111 | }
112 |
113 | getSpawningCount(): number {
114 | return this.spawningCount;
115 | }
116 | }
117 |
```
--------------------------------------------------------------------------------
/src/build-unikernel/build-unikernel.test.ts:
--------------------------------------------------------------------------------
```typescript
1 | import { describe, expect, test } from "bun:test";
2 | import { type Config } from "../lib/config";
3 | import { determineRequiredSetups, generateDockerfile } from "./build-unikernel";
4 |
5 | describe("determineRequiredSetups", () => {
6 | test("correctly identifies Python-only setup", () => {
7 | const config: Config = {
8 | mcpServers: {
9 | test: {
10 | command: "uvx",
11 | args: ["test-package"],
12 | },
13 | },
14 | };
15 | const result = determineRequiredSetups(config);
16 | expect(result.needsPython).toBe(true);
17 | expect(result.needsNode).toBe(false);
18 | });
19 | });
20 |
21 | describe("generateDockerfile", () => {
22 | const testConfig = {
23 | mcpServers: {
24 | test: {
25 | command: "uvx",
26 | args: ["test-package"],
27 | },
28 | },
29 | };
30 |
31 | test("generates correct Dockerfile for Python/UV setup", () => {
32 | const dockerfile = generateDockerfile(
33 | testConfig,
34 | JSON.stringify(testConfig, null, 2)
35 | );
36 | expect(dockerfile).toContain("Install Python and UV");
37 | expect(dockerfile).toContain("uv tool install test-package");
38 | expect(dockerfile).toContain(
39 | "cat > /usr/app/config/mcp-config.json << 'ENDCONFIG'"
40 | );
41 | expect(dockerfile).toContain(JSON.stringify(testConfig, null, 2));
42 | });
43 |
44 | test("generates correct Dockerfile for Node setup with npx command", () => {
45 | const config: Config = {
46 | mcpServers: {
47 | test: {
48 | command: "npx",
49 | args: ["test-package"],
50 | },
51 | },
52 | };
53 | const dockerfile = generateDockerfile(
54 | config,
55 | JSON.stringify(config, null, 2)
56 | );
57 | expect(dockerfile).toContain("Install Node.js and npm");
58 | expect(dockerfile).toContain("npm install test-package");
59 | expect(dockerfile).toContain(
60 | "cat > /usr/app/config/mcp-config.json << 'ENDCONFIG'"
61 | );
62 | });
63 |
64 | test("generates correct Dockerfile for both Python and Node setup with multiple packages", () => {
65 | const config: Config = {
66 | mcpServers: {
67 | test1: {
68 | command: "uvx",
69 | args: ["test-package1"],
70 | },
71 | test2: {
72 | command: "npx",
73 | args: ["test-package2"],
74 | },
75 | },
76 | };
77 | const dockerfile = generateDockerfile(
78 | config,
79 | JSON.stringify(config, null, 2)
80 | );
81 | expect(dockerfile).toContain("Install Python and UV");
82 | expect(dockerfile).toContain("uv tool install test-package1");
83 | expect(dockerfile).toContain("Install Node.js and npm");
84 | expect(dockerfile).toContain("npm install test-package2");
85 | expect(dockerfile).toContain(
86 | "cat > /usr/app/config/mcp-config.json << 'ENDCONFIG'"
87 | );
88 | });
89 |
90 | test("generates correct common parts for all setups", () => {
91 | const dockerfile = generateDockerfile(
92 | testConfig,
93 | JSON.stringify(testConfig, null, 2)
94 | );
95 | expect(dockerfile).toContain("FROM debian:bookworm-slim");
96 | expect(dockerfile).toContain("WORKDIR /usr/app");
97 | expect(dockerfile).toContain("Install Bun");
98 | expect(dockerfile).toContain("COPY package*.json .");
99 | expect(dockerfile).toContain("COPY . .");
100 | expect(dockerfile).toContain(
101 | 'ENTRYPOINT ["bun", "/usr/app/src/mcp-server-wrapper/mcp-server-wrapper.ts"'
102 | );
103 | });
104 | });
105 |
```
--------------------------------------------------------------------------------
/src/mcp-server-wrapper/mcp-server-wrapper.ts:
--------------------------------------------------------------------------------
```typescript
1 | import { type Server } from "bun";
2 | import { spawn } from "child_process";
3 | import { randomUUID } from "crypto";
4 | import { PassThrough } from "node:stream";
5 | import { loadConfig } from "../lib/config";
6 | import { childProcessLogger, logger as l } from "./logger";
7 | import { parseCommandLineArgs } from "./parse";
8 | import { ProcessPool } from "./process-pool";
9 |
10 | // WSContextData is the state associated with each ws connection
11 | type WSContextData = {
12 | childProcess: ReturnType<typeof spawn>;
13 | stdin: PassThrough;
14 | sessionId: string;
15 | serverName: string;
16 | };
17 |
18 | type ServerPools = {
19 | [key: string]: ProcessPool;
20 | };
21 |
22 | async function main() {
23 | l.debug(`argv: ${process.argv.slice(2)}`);
24 |
25 | let options;
26 | try {
27 | options = parseCommandLineArgs(process.argv.slice(2));
28 | } catch (error: any) {
29 | l.error(`Command line error: ${error.message}`);
30 | l.error("Usage: mcp-server-wrapper [-p PORT] <config-file-path>");
31 | process.exit(1);
32 | }
33 |
34 | const config = await loadConfig(options.configPath);
35 |
36 | // Create a process pool for each MCP server
37 | const pools: ServerPools = {};
38 | for (const [name, serverConfig] of Object.entries(config.mcpServers)) {
39 | const pool = new ProcessPool(
40 | [serverConfig.command, ...serverConfig.args],
41 | {}
42 | );
43 | await pool.initialize();
44 | pools[name] = pool;
45 | }
46 |
47 | Bun.serve<WSContextData>({
48 | port: options.port,
49 | fetch(req: Request, server: Server) {
50 | l.debug(`connection attempt: ${req.url}`);
51 |
52 | // Extract the server name from the URL path
53 | const url = new URL(req.url);
54 | const serverName = url.pathname.slice(1); // Remove leading slash
55 |
56 | if (!pools[serverName]) {
57 | return new Response(`No MCP server found at ${serverName}`, {
58 | status: 404,
59 | });
60 | }
61 |
62 | if (server.upgrade(req, { data: { serverName } })) {
63 | return;
64 | }
65 | return new Response("Upgrade failed", { status: 500 });
66 | },
67 |
68 | websocket: {
69 | async open(ws) {
70 | const sessionId = randomUUID();
71 | l.debug(`open[${sessionId}]`);
72 |
73 | try {
74 | const serverName = ws.data.serverName;
75 | const pool = pools[serverName];
76 | const { process: child, stdin } = await pool.getProcess();
77 | const cl = childProcessLogger(child.pid);
78 |
79 | ws.data = {
80 | childProcess: child,
81 | stdin,
82 | sessionId,
83 | serverName,
84 | };
85 | l.info(`assigned process PID ${child.pid} (session: ${sessionId})`);
86 |
87 | // stdout of the MCP server is a message to the client
88 | child.stdout?.on("data", (data: Buffer) => {
89 | const lines = data.toString().trim().split("\n");
90 | for (const line of lines) {
91 | if (line) {
92 | cl.info(`[session: ${sessionId}] ${line}`);
93 | ws.send(line);
94 | }
95 | }
96 | });
97 |
98 | child.on("close", (code) => {
99 | const ll = code !== null && code > 0 ? l.error : l.info;
100 | ll(
101 | `process ${child.pid} exited with code ${code} (session: ${sessionId})`
102 | );
103 | ws.close();
104 | });
105 | } catch (error) {
106 | l.error(`Failed to get process for session ${sessionId}: ${error}`);
107 | ws.close();
108 | }
109 | },
110 |
111 | message(ws, message) {
112 | l.debug(`message: ${message} (session: ${ws.data.sessionId})`);
113 | ws.data.stdin.write(message + "\n");
114 | },
115 |
116 | close(ws) {
117 | l.debug(`close: connection (session: ${ws.data.sessionId})`);
118 | ws.data.childProcess.kill("SIGINT");
119 | },
120 | },
121 | });
122 |
123 | l.info(`WebSocket server listening on port ${options.port}`);
124 |
125 | // Cleanup on exit
126 | const cleanup = () => {
127 | l.info("Shutting down...");
128 | for (const pool of Object.values(pools)) {
129 | pool.cleanup();
130 | }
131 | process.exit(0);
132 | };
133 | process.on("SIGINT", cleanup);
134 | process.on("SIGTERM", cleanup);
135 | }
136 |
137 | main().catch((error) => {
138 | l.error("Fatal error: " + error);
139 | process.exit(1);
140 | });
141 |
```
--------------------------------------------------------------------------------
/src/build-unikernel/build-unikernel.ts:
--------------------------------------------------------------------------------
```typescript
1 | import { createHash } from "crypto";
2 | import * as fs from "fs/promises";
3 | import * as path from "path";
4 | import { type Config, loadConfig } from "../lib/config";
5 |
6 | async function createBuildDir(
7 | configPath: string,
8 | configContent: string
9 | ): Promise<string> {
10 | // Create a hash of the config content
11 | const hash = createHash("sha256")
12 | .update(configContent)
13 | .digest("hex")
14 | .slice(0, 8); // Use first 8 chars of hash
15 |
16 | // Create build directory name
17 | const buildDir = `./build-unikernel-${hash}`;
18 |
19 | // Create directory structure
20 | await fs.mkdir(buildDir, { recursive: true });
21 | await fs.mkdir(path.join(buildDir, "unikernel"), { recursive: true });
22 | await fs.mkdir(path.join(buildDir, "unikernel", "analysis"), {
23 | recursive: true,
24 | });
25 | await fs.mkdir(path.join(buildDir, "unikernel", "analysis", "ldd-output"), {
26 | recursive: true,
27 | });
28 | await fs.mkdir(
29 | path.join(buildDir, "unikernel", "analysis", "strace-output"),
30 | { recursive: true }
31 | );
32 |
33 | return buildDir;
34 | }
35 |
36 | export function determineRequiredSetups(config: Config): {
37 | needsPython: boolean;
38 | needsNode: boolean;
39 | } {
40 | const commands = Object.values(config.mcpServers).map(
41 | (server) => server.command
42 | );
43 | return {
44 | needsPython: commands.some((cmd) => ["uvx", "python"].includes(cmd)),
45 | needsNode: commands.some((cmd) => ["node", "npx"].includes(cmd)),
46 | };
47 | }
48 |
49 | export function generateDockerfile(
50 | config: Config,
51 | configContent: string
52 | ): string {
53 | const { needsPython, needsNode } = determineRequiredSetups(config);
54 |
55 | // Collect all packages that need to be installed
56 | const npmPackages = needsNode
57 | ? Object.values(config.mcpServers)
58 | .filter((server) => server.command === "npx")
59 | .map((server) => server.args[0])
60 | : [];
61 | const uvTools = needsPython
62 | ? Object.values(config.mcpServers)
63 | .filter((server) => server.command === "uvx")
64 | .map((server) => server.args[0])
65 | : [];
66 |
67 | let dockerfile = `FROM debian:bookworm-slim
68 |
69 | WORKDIR /usr/app
70 |
71 | RUN apt-get update && apt-get install -y curl wget unzip\n`;
72 |
73 | // Add Python/UV setup if needed
74 | if (needsPython) {
75 | dockerfile += `
76 | # Install Python and UV
77 | RUN apt-get install -y python3 python3-venv
78 | RUN curl -LsSf https://astral.sh/uv/install.sh | sh
79 | ENV PATH="/root/.local/bin:$PATH"\n`;
80 |
81 | // Add UV tool installations if any
82 | if (uvTools.length > 0) {
83 | dockerfile += `
84 | # Pre-install UV tools
85 | RUN uv tool install ${uvTools.join(" ")}\n`;
86 | }
87 | }
88 |
89 | // Add Node.js setup if needed
90 | if (needsNode) {
91 | dockerfile += `
92 | # Install Node.js and npm
93 | RUN apt-get install -y nodejs npm\n`;
94 |
95 | // Add npm package installations if any
96 | if (npmPackages.length > 0) {
97 | dockerfile += `
98 | # Pre-install npm packages
99 | RUN npm install ${npmPackages.join(" ")}\n`;
100 | }
101 | }
102 |
103 | // Add the common parts with Bun installation and embedded config
104 | dockerfile += `
105 | # Install Bun
106 | RUN curl -fsSL https://bun.sh/install | bash
107 | ENV PATH="/root/.bun/bin:$PATH"
108 |
109 | # Copy package files
110 | COPY package*.json .
111 | COPY bun.lockb .
112 | RUN bun install
113 |
114 | # Copy the application
115 | COPY . .
116 |
117 | # Embed the config file
118 | COPY <<'ENDCONFIG' /usr/app/config/mcp-config.json
119 | ${configContent}
120 | ENDCONFIG
121 |
122 | ENTRYPOINT ["bun", "/usr/app/src/mcp-server-wrapper/mcp-server-wrapper.ts", "-p", "3001", "/usr/app/config/mcp-config.json"]`;
123 |
124 | return dockerfile;
125 | }
126 |
127 | function generateInstrumentedDockerfile(
128 | config: Config,
129 | configContent: string,
130 | analysisType: "ldd" | "strace"
131 | ): string {
132 | const baseDockerfile = generateDockerfile(config, configContent);
133 |
134 | // Split the Dockerfile at the ENTRYPOINT
135 | const [baseContent] = baseDockerfile.split("ENTRYPOINT");
136 |
137 | if (analysisType === "ldd") {
138 | // Add analysis tools for ldd analysis
139 | return `${baseContent}
140 | # Install analysis tools
141 | RUN apt-get update && apt-get install -y libc-bin
142 |
143 | # Create analysis scripts
144 | COPY <<'ENDSCRIPT' /usr/app/analyze-binaries.sh
145 | #!/bin/bash
146 | set -e
147 |
148 | analyze_binary() {
149 | local binary_name=\$1
150 | local output_file="/analysis/ldd-output/\${binary_name}.txt"
151 | if command -v \$binary_name &> /dev/null; then
152 | echo "Analyzing \${binary_name}..." > "\$output_file"
153 | # Run ldd with error handling
154 | if ! ldd \$(which \$binary_name) >> "\$output_file" 2>&1; then
155 | echo "Warning: ldd failed for \${binary_name}, trying with LD_TRACE_LOADED_OBJECTS=1" >> "\$output_file"
156 | # Fallback to using LD_TRACE_LOADED_OBJECTS if ldd fails
157 | LD_TRACE_LOADED_OBJECTS=1 \$(which \$binary_name) >> "\$output_file" 2>&1 || true
158 | fi
159 | fi
160 | }
161 |
162 | # Analyze each binary
163 | analyze_binary "bun"
164 | analyze_binary "node"
165 | analyze_binary "python3"
166 | analyze_binary "uv"
167 |
168 | # Additional system information
169 | echo "System information:" > /analysis/system-info.txt
170 | uname -a >> /analysis/system-info.txt
171 | cat /etc/os-release >> /analysis/system-info.txt
172 | ENDSCRIPT
173 |
174 | RUN chmod +x /usr/app/analyze-*.sh
175 |
176 | VOLUME /analysis
177 | ENTRYPOINT ["/bin/bash", "-c", "/usr/app/analyze-binaries.sh"]`;
178 | } else {
179 | // Add analysis tools for strace analysis
180 | return `${baseContent}
181 | # Install analysis tools
182 | RUN apt-get update && apt-get install -y strace
183 |
184 | # Create analysis scripts
185 | COPY <<'ENDSCRIPT' /usr/app/analyze-runtime.sh
186 | #!/bin/bash
187 | set -e
188 |
189 | # Start the server with strace
190 | strace -f -e trace=open,openat bun /usr/app/src/mcp-server-wrapper/mcp-server-wrapper.ts -p 3001 /usr/app/config/mcp-config.json 2> /analysis/strace-output/server.txt &
191 | SERVER_PID=\$!
192 |
193 | # Wait for server to start
194 | sleep 2
195 |
196 | # Run example client with strace
197 | strace -f -e trace=open,openat bun /usr/app/src/mcp-server-wrapper/example-client/example-client.ts /usr/app/config/mcp-config.json 3001 2> /analysis/strace-output/client.txt
198 |
199 | # Kill server
200 | kill \$SERVER_PID || true
201 | ENDSCRIPT
202 |
203 | RUN chmod +x /usr/app/analyze-*.sh
204 |
205 | VOLUME /analysis
206 | ENTRYPOINT ["/bin/bash", "-c", "/usr/app/analyze-runtime.sh"]`;
207 | }
208 | }
209 |
210 | async function runAnalysis(
211 | buildDir: string,
212 | config: Config,
213 | configContent: string
214 | ) {
215 | // Generate both Dockerfiles
216 | const lddDockerfile = generateInstrumentedDockerfile(
217 | config,
218 | configContent,
219 | "ldd"
220 | );
221 | const straceDockerfile = generateInstrumentedDockerfile(
222 | config,
223 | configContent,
224 | "strace"
225 | );
226 |
227 | const lddPath = path.join(buildDir, "unikernel", "Dockerfile.ldd");
228 | const stracePath = path.join(buildDir, "unikernel", "Dockerfile.strace");
229 |
230 | await fs.writeFile(lddPath, lddDockerfile);
231 | await fs.writeFile(stracePath, straceDockerfile);
232 |
233 | const analysisDir = path.resolve(
234 | path.join(buildDir, "unikernel", "analysis")
235 | );
236 |
237 | // Run ldd analysis on x86_64
238 | const lddImageName = `mcp-analysis-ldd:${path.basename(buildDir)}`;
239 | console.log("Building ldd analysis container (x86_64)...");
240 | const lddBuildResult = Bun.spawnSync(
241 | [
242 | "sh",
243 | "-c",
244 | `docker build --platform linux/amd64 -t ${lddImageName} -f ${lddPath} .`,
245 | ],
246 | {
247 | stdio: ["inherit", "inherit", "inherit"],
248 | }
249 | );
250 | if (lddBuildResult.exitCode !== 0) {
251 | throw new Error("Failed to build ldd analysis container");
252 | }
253 |
254 | console.log("Running ldd analysis...");
255 | const lddRunResult = Bun.spawnSync(
256 | [
257 | "sh",
258 | "-c",
259 | `docker run --platform linux/amd64 --rm -v "${analysisDir}:/analysis" ${lddImageName}`,
260 | ],
261 | {
262 | stdio: ["inherit", "inherit", "inherit"],
263 | }
264 | );
265 | if (lddRunResult.exitCode !== 0) {
266 | throw new Error("ldd analysis failed");
267 | }
268 |
269 | // Run strace analysis on native arm64
270 | const straceImageName = `mcp-analysis-strace:${path.basename(buildDir)}`;
271 | console.log("Building strace analysis container (arm64)...");
272 | const straceBuildResult = Bun.spawnSync(
273 | [
274 | "sh",
275 | "-c",
276 | `docker build --platform linux/arm64 -t ${straceImageName} -f ${stracePath} .`,
277 | ],
278 | {
279 | stdio: ["inherit", "inherit", "inherit"],
280 | }
281 | );
282 | if (straceBuildResult.exitCode !== 0) {
283 | throw new Error("Failed to build strace analysis container");
284 | }
285 |
286 | console.log("Running strace analysis...");
287 | const straceRunResult = Bun.spawnSync(
288 | [
289 | "sh",
290 | "-c",
291 | `docker run --platform linux/arm64 --cap-add=SYS_PTRACE --rm -v "${analysisDir}:/analysis" ${straceImageName}`,
292 | ],
293 | {
294 | stdio: ["inherit", "inherit", "inherit"],
295 | }
296 | );
297 | if (straceRunResult.exitCode !== 0) {
298 | throw new Error("strace analysis failed");
299 | }
300 |
301 | // TODO: Process analysis results
302 | // TODO: Generate unikernel Dockerfile
303 | }
304 |
305 | async function main() {
306 | const args = process.argv.slice(2);
307 | if (args.length !== 1) {
308 | console.error("Usage: build-unikernel <config-file-path>");
309 | process.exit(1);
310 | }
311 |
312 | const configPath = args[0];
313 | try {
314 | const configContent = await Bun.file(configPath).text();
315 | const config = await loadConfig(configPath);
316 |
317 | // Validate that all commands are supported
318 | const unsupportedCommands = Object.values(config.mcpServers)
319 | .map((server) => server.command)
320 | .filter((cmd) => !["uvx", "python", "node", "npx"].includes(cmd));
321 |
322 | if (unsupportedCommands.length > 0) {
323 | console.error(
324 | `Error: Unsupported commands found: ${unsupportedCommands.join(", ")}`
325 | );
326 | process.exit(1);
327 | }
328 |
329 | // Create build directory structure
330 | const buildDir = await createBuildDir(configPath, configContent);
331 | console.log(`Created build directory: ${buildDir}`);
332 |
333 | // Generate and write the regular Dockerfile
334 | const dockerfile = generateDockerfile(config, configContent);
335 | const dockerfilePath = path.join(buildDir, "Dockerfile.generated");
336 | await fs.writeFile(dockerfilePath, dockerfile);
337 | console.log(`Generated Dockerfile at: ${dockerfilePath}`);
338 |
339 | // Run analysis
340 | await runAnalysis(buildDir, config, configContent);
341 | console.log(
342 | "Analysis complete. Results in:",
343 | path.join(buildDir, "unikernel", "analysis")
344 | );
345 | } catch (error) {
346 | console.error("Error:", error);
347 | process.exit(1);
348 | }
349 | }
350 |
351 | if (require.main === module) {
352 | main().catch(console.error);
353 | }
354 |
```
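For reference, running this script against one of the example configs produces a build directory shaped roughly like the tree below. The `<hash>` suffix is the first 8 hex characters of the config content's SHA-256, and the files under `analysis/` only appear once the docker builds and runs in `runAnalysis` have succeeded:

```
build-unikernel-<hash>
├── Dockerfile.generated
└── unikernel
    ├── Dockerfile.ldd
    ├── Dockerfile.strace
    └── analysis
        ├── system-info.txt
        ├── ldd-output
        │   └── <binary>.txt   (one per analyzed binary: bun, node, python3, uv)
        └── strace-output
            ├── server.txt
            └── client.txt
```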