# Directory Structure
```
├── .gitignore
├── bun.lockb
├── LICENSE
├── package.json
├── README.md
├── src
│   ├── build-unikernel
│   │   ├── build-unikernel.test.ts
│   │   ├── build-unikernel.ts
│   │   └── example-configs
│   │       ├── Dockerfile.generated
│   │       ├── kitchen-sink.json
│   │       ├── node-only.json
│   │       └── python-only.json
│   ├── lib
│   │   └── config.ts
│   └── mcp-server-wrapper
│       ├── example-client
│       │   └── example-client.ts
│       ├── logger.ts
│       ├── mcp-server-wrapper.ts
│       ├── parse.test.ts
│       ├── parse.ts
│       ├── process-pool.test.ts
│       └── process-pool.ts
└── tsconfig.json
```
# Files
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
```
# Based on https://raw.githubusercontent.com/github/gitignore/main/Node.gitignore
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
lerna-debug.log*
.pnpm-debug.log*
# Caches
.cache
# Diagnostic reports (https://nodejs.org/api/report.html)
report.[0-9]*.[0-9]*.[0-9]*.[0-9]*.json
# Runtime data
pids
*.pid
*.seed
*.pid.lock
# Directory for instrumented libs generated by jscoverage/JSCover
lib-cov
# Coverage directory used by tools like istanbul
coverage
*.lcov
# nyc test coverage
.nyc_output
# Grunt intermediate storage (https://gruntjs.com/creating-plugins#storing-task-files)
.grunt
# Bower dependency directory (https://bower.io/)
bower_components
# node-waf configuration
.lock-wscript
# Compiled binary addons (https://nodejs.org/api/addons.html)
build/Release
# Dependency directories
node_modules/
jspm_packages/
# Snowpack dependency directory (https://snowpack.dev/)
web_modules/
# TypeScript cache
*.tsbuildinfo
# Optional npm cache directory
.npm
# Optional eslint cache
.eslintcache
# Optional stylelint cache
.stylelintcache
# Microbundle cache
.rpt2_cache/
.rts2_cache_cjs/
.rts2_cache_es/
.rts2_cache_umd/
# Optional REPL history
.node_repl_history
# Output of 'npm pack'
*.tgz
# Yarn Integrity file
.yarn-integrity
# dotenv environment variable files
.env
.env.development.local
.env.test.local
.env.production.local
.env.local
# parcel-bundler cache (https://parceljs.org/)
.parcel-cache
# Next.js build output
.next
out
# Nuxt.js build / generate output
.nuxt
dist
# Gatsby files
# Comment in the public line in if your project uses Gatsby and not Next.js
# https://nextjs.org/blog/next-9-1#public-directory-support
# public
# vuepress build output
.vuepress/dist
# vuepress v2.x temp and cache directory
.temp
# Docusaurus cache and generated files
.docusaurus
# Serverless directories
.serverless/
# FuseBox cache
.fusebox/
# DynamoDB Local files
.dynamodb/
# TernJS port file
.tern-port
# Stores VSCode versions used for testing VSCode extensions
.vscode-test
# yarn v2
.yarn/cache
.yarn/unplugged
.yarn/build-state.yml
.yarn/install-state.gz
.pnp.*
# IntelliJ based IDEs
.idea
# Finder (MacOS) folder config
.DS_Store
.archive/
```
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
```markdown
# mcp-server-server
This repo is a proof-of-concept MCP server that exposes another stdio MCP server over a websocket.
## But...why?
MCP servers are hard to use.
The primary transport mechanism for MCP servers is stdio, i.e. your MCP client program has to spawn a new process for each MCP server it wants to use.
This has downsides:
1. It's cumbersome--every MCP client needs to be a process manager now. The way you [configure Claude Desktop](https://modelcontextprotocol.io/quickstart#installation) to use MCP servers is a good demonstration of this--it needs a list of processes to run.
2. It creates an infra problem: if you have many users, each of whom requires a different MCP server configuration (e.g. they all have different credentials for underlying MCP servers like GitHub, Google Drive, etc.), then you have tons of processes to operate and route client requests to.
3. It's slow: the default way to spin up an MCP server is `npx ...` or `uvx ...`, which comes with all of the slowness of those tools (2-3s spin-up times are normal).
## A better way
What if MCP servers were actually... servers? I.e. communication with them happened over the network instead of stdio.
Then you could have an easier time using them programmatically.
### Step 1: Convert a stdio MCP server to a websocket MCP server
This repo contains a wrapper program that takes an existing MCP server ([here](https://github.com/modelcontextprotocol/servers/tree/main/src/) is a list of the official ones, though they're appearing all over the place now) and exposes it via websocket:
```zsh
bun run mcp-server-wrapper -p 3001 -- npx -y @modelcontextprotocol/server-puppeteer@latest
```
For faster spin-up times, install the server globally and invoke it with `node` directly:
```zsh
pnpm install -g @modelcontextprotocol/server-puppeteer@latest
bun run mcp-server-wrapper -p 3001 -- node ~/Library/pnpm/global/5/node_modules/@modelcontextprotocol/server-puppeteer/dist/index.js
```
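The wrapper can also be driven by a JSON config file (the format defined in `src/lib/config.ts` and used by the example configs in `src/build-unikernel/example-configs/`); a minimal sketch using the bundled `node-only.json`:
```zsh
bun run mcp-server-wrapper -p 3001 src/build-unikernel/example-configs/node-only.json
```
Each configured server is then exposed on its own path, e.g. `ws://localhost:3001/puppeteer`.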
### Step 2: Interact with the MCP server programmatically without managing processes
```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { WebSocketClientTransport } from "@modelcontextprotocol/sdk/client/websocket.js";
const transport = new WebSocketClientTransport(new URL("ws://localhost:3001"));
const client = new Client(
  {
    name: "example-client",
    version: "1.0.0",
  },
  {
    capabilities: {},
  }
);
await client.connect(transport);
const tools = await client.listTools();
console.log(
  "Tools:",
  tools.tools.map((t) => t.name)
);
await client.close();
```
```zsh
bun run mcp-server-wrapper-client
$ bun run src/mcp-server-wrapper/example-client/example-client.ts
Tools: [ "puppeteer_navigate", "puppeteer_screenshot", "puppeteer_click", "puppeteer_fill",
"puppeteer_evaluate"
]
```
### Step 3: Build it into a docker image
For a given MCP server configuration, e.g.
```json
{
  "mcpServers": {
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    }
  }
}
```
we'd like to build it into a Docker image that exposes a websocket and can be run anywhere.
This repo contains a script that will output a Dockerfile for a given MCP server configuration:
```bash
# invocation assumed from the script's usage message: build-unikernel <config-file-path>
bun run src/build-unikernel/build-unikernel.ts src/build-unikernel/example-configs/python-only.json
```
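The script writes `Dockerfile.generated` into a content-hashed `build-unikernel-<hash>/` directory. A sketch of building and running the generated image, with a placeholder tag and the repo root as build context (the Dockerfile copies the wrapper sources into the image):
```zsh
docker build -t my-mcp-server -f build-unikernel-<hash>/Dockerfile.generated .
docker run --rm -p 3001:3001 my-mcp-server
```
The baked-in `ENTRYPOINT` starts the websocket wrapper on port 3001 with the embedded config, so clients can connect exactly as in Step 2.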
```
--------------------------------------------------------------------------------
/src/build-unikernel/example-configs/python-only.json:
--------------------------------------------------------------------------------
```json
{
  "mcpServers": {
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    }
  }
}
```
--------------------------------------------------------------------------------
/src/build-unikernel/example-configs/node-only.json:
--------------------------------------------------------------------------------
```json
{
  "mcpServers": {
    "puppeteer": {
      "command": "npx",
      "args": ["@modelcontextprotocol/server-puppeteer"]
    }
  }
}
```
--------------------------------------------------------------------------------
/src/build-unikernel/example-configs/kitchen-sink.json:
--------------------------------------------------------------------------------
```json
{
  "mcpServers": {
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    },
    "puppeteer": {
      "command": "npx",
      "args": ["@modelcontextprotocol/server-puppeteer"]
    }
  }
}
```
--------------------------------------------------------------------------------
/src/lib/config.ts:
--------------------------------------------------------------------------------
```typescript
import { z } from "zod";
// Define the MCP server configuration schema
export const MCPServerConfigSchema = z.object({
command: z.string(),
args: z.array(z.string()),
});
export const ConfigSchema = z.object({
mcpServers: z.record(z.string(), MCPServerConfigSchema),
});
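// Example config accepted by this schema (see src/build-unikernel/example-configs/python-only.json):
// { "mcpServers": { "fetch": { "command": "uvx", "args": ["mcp-server-fetch"] } } }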
export type Config = z.infer<typeof ConfigSchema>;
export type MCPServerConfig = z.infer<typeof MCPServerConfigSchema>;
export async function loadConfig(configPath: string): Promise<Config> {
const configContent = await Bun.file(configPath).text();
const configJson = JSON.parse(configContent);
return ConfigSchema.parse(configJson);
}
```
--------------------------------------------------------------------------------
/tsconfig.json:
--------------------------------------------------------------------------------
```json
{
  "compilerOptions": {
    // Enable latest features
    "lib": ["ESNext", "DOM"],
    "target": "ESNext",
    "module": "ESNext",
    "moduleDetection": "force",
    "jsx": "react-jsx",
    "allowJs": true,
    // Bundler mode
    "moduleResolution": "bundler",
    "allowImportingTsExtensions": true,
    "verbatimModuleSyntax": true,
    "noEmit": true,
    // Best practices
    "strict": true,
    "skipLibCheck": true,
    "noFallthroughCasesInSwitch": true,
    // Some stricter flags (disabled by default)
    "noUnusedLocals": false,
    "noUnusedParameters": false,
    "noPropertyAccessFromIndexSignature": false
  }
}
```
--------------------------------------------------------------------------------
/package.json:
--------------------------------------------------------------------------------
```json
{
  "name": "mcp-server-server",
  "description": "a model context protocol server of servers",
  "module": "src/index.ts",
  "type": "module",
  "scripts": {
    "mcp-server-wrapper-build": "bun build --compile --minify --sourcemap --bytecode ./src/mcp-server-wrapper/mcp-server-wrapper.ts --outfile build/mcp-server-wrapper",
    "mcp-server-wrapper": "bun run src/mcp-server-wrapper/mcp-server-wrapper.ts",
    "mcp-server-wrapper-client": "bun run src/mcp-server-wrapper/example-client/example-client.ts"
  },
  "dependencies": {
    "@modelcontextprotocol/sdk": "0.6.0",
    "winston": "^3.17.0",
    "zod": "^3.23.8",
    "zod-to-json-schema": "^3.23.5"
  },
  "devDependencies": {
    "@types/bun": "latest"
  },
  "peerDependencies": {
    "typescript": "^5.0.0"
  }
}
```
--------------------------------------------------------------------------------
/src/mcp-server-wrapper/logger.ts:
--------------------------------------------------------------------------------
```typescript
import { createLogger, format, Logger, transports } from "winston";
const logger = createLogger({
level: process.env.LOG_LEVEL || "info",
format: format.combine(
format.colorize(),
format.timestamp(),
format.printf(({ timestamp, level, message }) => {
return `${timestamp} [${level}]: ${message}`;
})
),
transports: [new transports.Console()],
});
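// Creates a per-child-process logger that prefixes each line with the child's PID (colored blue
// via the ANSI escape) so interleaved output from multiple spawned servers stays attributable.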
function childProcessLogger(pid: number | undefined): Logger {
return createLogger({
level: process.env.LOG_LEVEL || "info",
format: format.combine(
format.colorize(),
format.timestamp(),
format.printf(({ timestamp, level, message }) => {
return `${timestamp} [${level}]: \x1b[34m[child_process[${pid}]]\x1b[0m: ${message}`;
})
),
transports: [new transports.Console()],
});
}
export { childProcessLogger, logger };
```
--------------------------------------------------------------------------------
/src/mcp-server-wrapper/parse.ts:
--------------------------------------------------------------------------------
```typescript
export type Options = {
port: number;
configPath: string;
};
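// Parses `[-p|--port <number>] <config-file-path>`: the first non-flag argument becomes the
// config path; unknown flags and a missing config path throw.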
export function parseCommandLineArgs(args: string[]): Options {
const options: Options = {
port: 3000,
configPath: "",
};
for (let i = 0; i < args.length; i++) {
const arg = args[i];
switch (arg) {
case "-p":
case "--port":
if (i + 1 >= args.length) {
throw new Error("Missing port number");
}
const port = parseInt(args[++i]);
if (isNaN(port)) {
throw new Error(`Invalid port number: ${args[i]}`);
}
options.port = port;
break;
default:
if (arg.startsWith("-")) {
throw new Error(`Unknown option: ${arg}`);
}
options.configPath = arg;
}
}
if (!options.configPath) {
throw new Error("No config file path provided");
}
return options;
}
```
--------------------------------------------------------------------------------
/src/mcp-server-wrapper/parse.test.ts:
--------------------------------------------------------------------------------
```typescript
import { describe, expect, test } from "bun:test";
import { parseCommandLineArgs } from "./parse";
describe("parseCommandLineArgs", () => {
test("should parse default port when only config path provided", () => {
const options = parseCommandLineArgs(["config.json"]);
expect(options.port).toBe(3000);
expect(options.configPath).toBe("config.json");
});
test("should parse port and config path correctly", () => {
const options = parseCommandLineArgs(["-p", "8080", "config.json"]);
expect(options.port).toBe(8080);
expect(options.configPath).toBe("config.json");
const options2 = parseCommandLineArgs(["--port", "9000", "config.json"]);
expect(options2.port).toBe(9000);
expect(options2.configPath).toBe("config.json");
});
test("should handle config path before port flag", () => {
const options = parseCommandLineArgs(["config.json", "-p", "8080"]);
expect(options.port).toBe(8080);
expect(options.configPath).toBe("config.json");
});
test("should error on missing config path", () => {
expect(() => parseCommandLineArgs(["-p", "8080"])).toThrow();
});
test("should error on invalid port", () => {
expect(() =>
parseCommandLineArgs(["-p", "invalid", "config.json"])
).toThrow();
});
test("should error on unknown flag", () => {
expect(() => parseCommandLineArgs(["-x", "config.json"])).toThrow();
});
});
```
--------------------------------------------------------------------------------
/src/mcp-server-wrapper/example-client/example-client.ts:
--------------------------------------------------------------------------------
```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { WebSocketClientTransport } from "@modelcontextprotocol/sdk/client/websocket.js";
import { loadConfig } from "../../lib/config";
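// Smoke-test client: connects to a running mcp-server-wrapper and, for each server in the given
// config, times the websocket connection and lists the tools that server exposes.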
async function testServer(serverName: string, port: number) {
console.log(`\nTesting server: ${serverName}`);
const transport = new WebSocketClientTransport(
new URL(`ws://localhost:${port}/${serverName}`)
);
const client = new Client(
{
name: "example-client",
version: "1.0.0",
},
{
capabilities: {},
}
);
try {
console.time(`${serverName} Connection`);
await client.connect(transport);
console.timeEnd(`${serverName} Connection`);
console.time(`${serverName} List Tools`);
const tools = await client.listTools();
console.timeEnd(`${serverName} List Tools`);
console.log(
`${serverName} Tools:`,
tools.tools.map((t) => t.name)
);
} catch (error) {
console.error(`Error testing ${serverName}:`, error);
} finally {
await client.close();
}
}
async function main() {
const args = process.argv.slice(2);
if (args.length < 1) {
console.error("Usage: example-client <config-file-path> [port]");
process.exit(1);
}
const configPath = args[0];
const port = args[1] ? parseInt(args[1]) : 3001;
const config = await loadConfig(configPath);
// Test each server in sequence
for (const serverName of Object.keys(config.mcpServers)) {
await testServer(serverName, port);
}
}
main().catch(console.error);
```
--------------------------------------------------------------------------------
/src/mcp-server-wrapper/process-pool.test.ts:
--------------------------------------------------------------------------------
```typescript
import { describe, expect, test } from "bun:test";
import { ProcessPool } from "./process-pool";
describe("ProcessPool", () => {
test("initializes with correct number of processes", async () => {
const pool = new ProcessPool(["echo", "hello"], {}, 2);
await pool.initialize();
expect(pool.getPoolSize()).toBe(2);
pool.cleanup();
});
test("maintains pool size after getting processes", async () => {
const pool = new ProcessPool(["echo", "hello"], {}, 2);
await pool.initialize();
// Get a process and verify pool size
await pool.getProcess();
const processes = await Promise.all([pool.getProcess(), pool.getProcess()]);
// Cleanup the processes we got
processes.forEach((p) => p.process.kill());
pool.cleanup();
});
test("spawns new process when pool is empty", async () => {
const pool = new ProcessPool(["echo", "hello"], {}, 1);
await pool.initialize();
// Get two processes (pool size is 1)
const process1 = await pool.getProcess();
const process2 = await pool.getProcess();
expect(process1).toBeDefined();
expect(process2).toBeDefined();
expect(process1).not.toBe(process2);
process1.process.kill();
process2.process.kill();
pool.cleanup();
});
test("handles concurrent process requests", async () => {
const pool = new ProcessPool(["echo", "hello"], {}, 1);
await pool.initialize();
// Request multiple processes concurrently
const processes = await Promise.all([
pool.getProcess(),
pool.getProcess(),
pool.getProcess(),
]);
expect(processes.length).toBe(3);
expect(processes.every((p) => p.process && p.stdin)).toBe(true);
// Verify all processes are different
const pids = processes.map((p) => p.process.pid);
expect(new Set(pids).size).toBe(pids.length);
// Cleanup the processes we got
processes.forEach((p) => p.process.kill());
pool.cleanup();
});
test("cleans up processes on cleanup", async () => {
const pool = new ProcessPool(["echo", "hello"], {}, 2);
await pool.initialize();
expect(pool.getPoolSize()).toBe(2);
pool.cleanup();
expect(pool.getPoolSize()).toBe(0);
});
test("process stdin works correctly", async () => {
const pool = new ProcessPool(["cat"], {}, 1);
await pool.initialize();
const { process, stdin } = await pool.getProcess();
const testMessage = "hello world\n";
// Create a promise that resolves with stdout data
const outputPromise = new Promise<string>((resolve) => {
process.stdout?.on("data", (data: Buffer) => {
resolve(data.toString());
});
});
// Write to stdin
stdin.write(testMessage);
// Wait for the output and verify it matches
const output = await outputPromise;
expect(output).toBe(testMessage);
process.kill();
pool.cleanup();
});
});
```
--------------------------------------------------------------------------------
/src/mcp-server-wrapper/process-pool.ts:
--------------------------------------------------------------------------------
```typescript
import { spawn } from "child_process";
import { PassThrough } from "node:stream";
import { childProcessLogger, logger as l } from "./logger";
export type SpawnedProcess = {
process: ReturnType<typeof spawn>;
stdin: PassThrough;
};
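// ProcessPool keeps `minPoolSize` pre-spawned child processes warm so a new websocket session
// gets a ready MCP server without paying spawn latency; handing a process out triggers an
// asynchronous replacement spawn (see getProcess/spawnReplacement).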
export class ProcessPool {
private processes: SpawnedProcess[] = [];
private command: string[];
private env: Record<string, string>;
private minPoolSize: number;
private logger = l;
private spawningCount = 0;
constructor(command: string[], env: Record<string, string>, minPoolSize = 1) {
this.command = command;
this.env = env;
this.minPoolSize = minPoolSize;
}
private async spawnProcess(): Promise<SpawnedProcess> {
this.spawningCount++;
try {
const startTime = performance.now();
const childProcess = spawn(this.command[0], this.command.slice(1), {
env: { ...process.env, ...this.env },
stdio: ["pipe", "pipe", "pipe"],
});
const spawnTime = performance.now() - startTime;
const cl = childProcessLogger(childProcess.pid);
childProcess.stderr?.on("data", (data: Buffer) => {
cl.error(data.toString());
});
const stdin = new PassThrough();
stdin.pipe(childProcess.stdin!);
const spawnedProcess: SpawnedProcess = {
process: childProcess,
stdin,
};
this.logger.info(
`spawned process with PID ${childProcess.pid} in ${spawnTime.toFixed(
2
)}ms`
);
return spawnedProcess;
} finally {
this.spawningCount--;
}
}
private async spawnReplacement() {
// Only spawn if total processes (running + spawning) is less than minPoolSize
if (this.processes.length + this.spawningCount < this.minPoolSize) {
const process = await this.spawnProcess();
// Double check we still need this process
if (this.processes.length + this.spawningCount < this.minPoolSize) {
this.processes.push(process);
} else {
// We don't need this process anymore, kill it
l.info(`killing process ${process.process.pid}`);
process.process.kill();
}
}
}
async initialize() {
// Start initial processes
const promises = [];
for (let i = 0; i < this.minPoolSize; i++) {
promises.push(
this.spawnProcess().then((process) => {
this.processes.push(process);
})
);
}
await Promise.all(promises);
}
async getProcess(): Promise<SpawnedProcess> {
// If we have a process available, return it
if (this.processes.length > 0) {
const process = this.processes.pop()!;
// Spawn a replacement asynchronously
this.spawnReplacement();
return process;
}
// If no process available, spawn one immediately
return await this.spawnProcess();
}
cleanup() {
for (const process of this.processes) {
l.info(`killing process ${process.process.pid}`);
process.process.kill();
}
this.processes = [];
}
// For testing purposes
getPoolSize(): number {
return this.processes.length;
}
getSpawningCount(): number {
return this.spawningCount;
}
}
```
--------------------------------------------------------------------------------
/src/build-unikernel/build-unikernel.test.ts:
--------------------------------------------------------------------------------
```typescript
import { describe, expect, test } from "bun:test";
import { type Config } from "../lib/config";
import { determineRequiredSetups, generateDockerfile } from "./build-unikernel";
describe("determineRequiredSetups", () => {
test("correctly identifies Python-only setup", () => {
const config: Config = {
mcpServers: {
test: {
command: "uvx",
args: ["test-package"],
},
},
};
const result = determineRequiredSetups(config);
expect(result.needsPython).toBe(true);
expect(result.needsNode).toBe(false);
});
});
describe("generateDockerfile", () => {
const testConfig = {
mcpServers: {
test: {
command: "uvx",
args: ["test-package"],
},
},
};
test("generates correct Dockerfile for Python/UV setup", () => {
const dockerfile = generateDockerfile(
testConfig,
JSON.stringify(testConfig, null, 2)
);
expect(dockerfile).toContain("Install Python and UV");
expect(dockerfile).toContain("uv tool install test-package");
expect(dockerfile).toContain(
"cat > /usr/app/config/mcp-config.json << 'ENDCONFIG'"
);
expect(dockerfile).toContain(JSON.stringify(testConfig, null, 2));
});
test("generates correct Dockerfile for Node setup with npx command", () => {
const config: Config = {
mcpServers: {
test: {
command: "npx",
args: ["test-package"],
},
},
};
const dockerfile = generateDockerfile(
config,
JSON.stringify(config, null, 2)
);
expect(dockerfile).toContain("Install Node.js and npm");
expect(dockerfile).toContain("npm install test-package");
expect(dockerfile).toContain(
"cat > /usr/app/config/mcp-config.json << 'ENDCONFIG'"
);
});
test("generates correct Dockerfile for both Python and Node setup with multiple packages", () => {
const config: Config = {
mcpServers: {
test1: {
command: "uvx",
args: ["test-package1"],
},
test2: {
command: "npx",
args: ["test-package2"],
},
},
};
const dockerfile = generateDockerfile(
config,
JSON.stringify(config, null, 2)
);
expect(dockerfile).toContain("Install Python and UV");
expect(dockerfile).toContain("uv tool install test-package1");
expect(dockerfile).toContain("Install Node.js and npm");
expect(dockerfile).toContain("npm install test-package2");
expect(dockerfile).toContain(
"cat > /usr/app/config/mcp-config.json << 'ENDCONFIG'"
);
});
test("generates correct common parts for all setups", () => {
const dockerfile = generateDockerfile(
testConfig,
JSON.stringify(testConfig, null, 2)
);
expect(dockerfile).toContain("FROM debian:bookworm-slim");
expect(dockerfile).toContain("WORKDIR /usr/app");
expect(dockerfile).toContain("Install Bun");
expect(dockerfile).toContain("COPY package*.json .");
expect(dockerfile).toContain("COPY . .");
expect(dockerfile).toContain(
'ENTRYPOINT ["bun", "/usr/app/src/mcp-server-wrapper/mcp-server-wrapper.ts"'
);
});
});
```
--------------------------------------------------------------------------------
/src/mcp-server-wrapper/mcp-server-wrapper.ts:
--------------------------------------------------------------------------------
```typescript
import { type Server } from "bun";
import { spawn } from "child_process";
import { randomUUID } from "crypto";
import { PassThrough } from "node:stream";
import { loadConfig } from "../lib/config";
import { childProcessLogger, logger as l } from "./logger";
import { parseCommandLineArgs } from "./parse";
import { ProcessPool } from "./process-pool";
// WSContextData is the state associated with each ws connection
type WSContextData = {
childProcess: ReturnType<typeof spawn>;
stdin: PassThrough;
sessionId: string;
serverName: string;
};
type ServerPools = {
[key: string]: ProcessPool;
};
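// Flow: parse CLI args, load the config, pre-warm one ProcessPool per MCP server, then serve
// websockets; a connection to /<serverName> is bridged to a pooled child process (incoming ws
// messages go to the child's stdin, each line of the child's stdout is sent back over the ws).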
async function main() {
l.debug(`argv: ${process.argv.slice(2)}`);
let options;
try {
options = parseCommandLineArgs(process.argv.slice(2));
} catch (error: any) {
l.error(`Command line error: ${error.message}`);
l.error("Usage: mcp-server-wrapper [-p PORT] <config-file-path>");
process.exit(1);
}
const config = await loadConfig(options.configPath);
// Create a process pool for each MCP server
const pools: ServerPools = {};
for (const [name, serverConfig] of Object.entries(config.mcpServers)) {
const pool = new ProcessPool(
[serverConfig.command, ...serverConfig.args],
{}
);
await pool.initialize();
pools[name] = pool;
}
Bun.serve<WSContextData>({
port: options.port,
fetch(req: Request, server: Server) {
l.debug(`connection attempt: ${req.url}`);
// Extract the server name from the URL path
const url = new URL(req.url);
const serverName = url.pathname.slice(1); // Remove leading slash
if (!pools[serverName]) {
return new Response(`No MCP server found at ${serverName}`, {
status: 404,
});
}
if (server.upgrade(req, { data: { serverName } })) {
return;
}
return new Response("Upgrade failed", { status: 500 });
},
websocket: {
async open(ws) {
const sessionId = randomUUID();
l.debug(`open[${sessionId}]`);
try {
const serverName = ws.data.serverName;
const pool = pools[serverName];
const { process: child, stdin } = await pool.getProcess();
const cl = childProcessLogger(child.pid);
ws.data = {
childProcess: child,
stdin,
sessionId,
serverName,
};
l.info(`assigned process PID ${child.pid} (session: ${sessionId})`);
// stdout of the MCP server is a message to the client
child.stdout?.on("data", (data: Buffer) => {
const lines = data.toString().trim().split("\n");
for (const line of lines) {
if (line) {
cl.info(`[session: ${sessionId}] ${line}`);
ws.send(line);
}
}
});
child.on("close", (code) => {
const ll = code !== null && code > 0 ? l.error : l.info;
ll(
`process ${child.pid} exited with code ${code} (session: ${sessionId})`
);
ws.close();
});
} catch (error) {
l.error(`Failed to get process for session ${sessionId}: ${error}`);
ws.close();
}
},
message(ws, message) {
l.debug(`message: ${message} (session: ${ws.data.sessionId})`);
ws.data.stdin.write(message + "\n");
},
close(ws) {
l.debug(`close: connection (session: ${ws.data.sessionId})`);
ws.data.childProcess.kill("SIGINT");
},
},
});
l.info(`WebSocket server listening on port ${options.port}`);
// Cleanup on exit
const cleanup = () => {
l.info("Shutting down...");
for (const pool of Object.values(pools)) {
pool.cleanup();
}
process.exit(0);
};
process.on("SIGINT", cleanup);
process.on("SIGTERM", cleanup);
}
main().catch((error) => {
l.error("Fatal error: " + error);
process.exit(1);
});
```
--------------------------------------------------------------------------------
/src/build-unikernel/build-unikernel.ts:
--------------------------------------------------------------------------------
```typescript
import { createHash } from "crypto";
import * as fs from "fs/promises";
import * as path from "path";
import { type Config, loadConfig } from "../lib/config";
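// Given an MCP server config, generates a Dockerfile that bakes the config into an image running
// the websocket wrapper, writes it to a content-hashed build directory, and builds/runs
// instrumented (ldd and strace) variants whose output lands under unikernel/analysis as
// groundwork for a future unikernel image (see the TODOs at the end of runAnalysis).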
async function createBuildDir(
configPath: string,
configContent: string
): Promise<string> {
// Create a hash of the config content
const hash = createHash("sha256")
.update(configContent)
.digest("hex")
.slice(0, 8); // Use first 8 chars of hash
// Create build directory name
const buildDir = `./build-unikernel-${hash}`;
// Create directory structure
await fs.mkdir(buildDir, { recursive: true });
await fs.mkdir(path.join(buildDir, "unikernel"), { recursive: true });
await fs.mkdir(path.join(buildDir, "unikernel", "analysis"), {
recursive: true,
});
await fs.mkdir(path.join(buildDir, "unikernel", "analysis", "ldd-output"), {
recursive: true,
});
await fs.mkdir(
path.join(buildDir, "unikernel", "analysis", "strace-output"),
{ recursive: true }
);
return buildDir;
}
export function determineRequiredSetups(config: Config): {
needsPython: boolean;
needsNode: boolean;
} {
const commands = Object.values(config.mcpServers).map(
(server) => server.command
);
return {
needsPython: commands.some((cmd) => ["uvx", "python"].includes(cmd)),
needsNode: commands.some((cmd) => ["node", "npx"].includes(cmd)),
};
}
export function generateDockerfile(
config: Config,
configContent: string
): string {
const { needsPython, needsNode } = determineRequiredSetups(config);
// Collect all packages that need to be installed
const npmPackages = needsNode
? Object.values(config.mcpServers)
.filter((server) => server.command === "npx")
.map((server) => server.args[0])
: [];
const uvTools = needsPython
? Object.values(config.mcpServers)
.filter((server) => server.command === "uvx")
.map((server) => server.args[0])
: [];
let dockerfile = `FROM debian:bookworm-slim
WORKDIR /usr/app
RUN apt-get update && apt-get install -y curl wget unzip\n`;
// Add Python/UV setup if needed
if (needsPython) {
dockerfile += `
# Install Python and UV
RUN apt-get install -y python3 python3-venv
RUN curl -LsSf https://astral.sh/uv/install.sh | sh
ENV PATH="/root/.local/bin:$PATH"\n`;
// Add UV tool installations if any
if (uvTools.length > 0) {
dockerfile += `
# Pre-install UV tools
RUN uv tool install ${uvTools.join(" ")}\n`;
}
}
// Add Node.js setup if needed
if (needsNode) {
dockerfile += `
# Install Node.js and npm
RUN apt-get install -y nodejs npm\n`;
// Add npm package installations if any
if (npmPackages.length > 0) {
dockerfile += `
# Pre-install npm packages
RUN npm install ${npmPackages.join(" ")}\n`;
}
}
// Add the common parts with Bun installation and embedded config
dockerfile += `
# Install Bun
RUN curl -fsSL https://bun.sh/install | bash
ENV PATH="/root/.bun/bin:$PATH"
# Copy package files
COPY package*.json .
COPY bun.lockb .
RUN bun install
# Copy the application
COPY . .
# Embed the config file
COPY <<'ENDCONFIG' /usr/app/config/mcp-config.json
${configContent}
ENDCONFIG
ENTRYPOINT ["bun", "/usr/app/src/mcp-server-wrapper/mcp-server-wrapper.ts", "-p", "3001", "/usr/app/config/mcp-config.json"]`;
return dockerfile;
}
function generateInstrumentedDockerfile(
config: Config,
configContent: string,
analysisType: "ldd" | "strace"
): string {
const baseDockerfile = generateDockerfile(config, configContent);
// Split the Dockerfile at the ENTRYPOINT
const [baseContent] = baseDockerfile.split("ENTRYPOINT");
if (analysisType === "ldd") {
// Add analysis tools for ldd analysis
return `${baseContent}
# Install analysis tools
RUN apt-get update && apt-get install -y libc-bin
# Create analysis scripts
COPY <<'ENDSCRIPT' /usr/app/analyze-binaries.sh
#!/bin/bash
set -e
analyze_binary() {
local binary_name=\$1
local output_file="/analysis/ldd-output/\${binary_name}.txt"
if command -v \$binary_name &> /dev/null; then
echo "Analyzing \${binary_name}..." > "\$output_file"
# Run ldd with error handling
if ! ldd \$(which \$binary_name) >> "\$output_file" 2>&1; then
echo "Warning: ldd failed for \${binary_name}, trying with LD_TRACE_LOADED_OBJECTS=1" >> "\$output_file"
# Fallback to using LD_TRACE_LOADED_OBJECTS if ldd fails
LD_TRACE_LOADED_OBJECTS=1 \$(which \$binary_name) >> "\$output_file" 2>&1 || true
fi
fi
}
# Analyze each binary
analyze_binary "bun"
analyze_binary "node"
analyze_binary "python3"
analyze_binary "uv"
# Additional system information
echo "System information:" > /analysis/system-info.txt
uname -a >> /analysis/system-info.txt
cat /etc/os-release >> /analysis/system-info.txt
ENDSCRIPT
RUN chmod +x /usr/app/analyze-*.sh
VOLUME /analysis
ENTRYPOINT ["/bin/bash", "-c", "/usr/app/analyze-binaries.sh"]`;
} else {
// Add analysis tools for strace analysis
return `${baseContent}
# Install analysis tools
RUN apt-get update && apt-get install -y strace
# Create analysis scripts
COPY <<'ENDSCRIPT' /usr/app/analyze-runtime.sh
#!/bin/bash
set -e
# Start the server with strace
strace -f -e trace=open,openat bun /usr/app/src/mcp-server-wrapper/mcp-server-wrapper.ts -p 3001 /usr/app/config/mcp-config.json 2> /analysis/strace-output/server.txt &
SERVER_PID=\$!
# Wait for server to start
sleep 2
# Run example client with strace
strace -f -e trace=open,openat bun /usr/app/src/mcp-server-wrapper/example-client/example-client.ts /usr/app/config/mcp-config.json 3001 2> /analysis/strace-output/client.txt
# Kill server
kill \$SERVER_PID || true
ENDSCRIPT
RUN chmod +x /usr/app/analyze-*.sh
VOLUME /analysis
ENTRYPOINT ["/bin/bash", "-c", "/usr/app/analyze-runtime.sh"]`;
}
}
async function runAnalysis(
buildDir: string,
config: Config,
configContent: string
) {
// Generate both Dockerfiles
const lddDockerfile = generateInstrumentedDockerfile(
config,
configContent,
"ldd"
);
const straceDockerfile = generateInstrumentedDockerfile(
config,
configContent,
"strace"
);
const lddPath = path.join(buildDir, "unikernel", "Dockerfile.ldd");
const stracePath = path.join(buildDir, "unikernel", "Dockerfile.strace");
await fs.writeFile(lddPath, lddDockerfile);
await fs.writeFile(stracePath, straceDockerfile);
const analysisDir = path.resolve(
path.join(buildDir, "unikernel", "analysis")
);
// Run ldd analysis on x86_64
const lddImageName = `mcp-analysis-ldd:${path.basename(buildDir)}`;
console.log("Building ldd analysis container (x86_64)...");
const lddBuildResult = Bun.spawnSync(
[
"sh",
"-c",
`docker build --platform linux/amd64 -t ${lddImageName} -f ${lddPath} .`,
],
{
stdio: ["inherit", "inherit", "inherit"],
}
);
if (lddBuildResult.exitCode !== 0) {
throw new Error("Failed to build ldd analysis container");
}
console.log("Running ldd analysis...");
const lddRunResult = Bun.spawnSync(
[
"sh",
"-c",
`docker run --platform linux/amd64 --rm -v "${analysisDir}:/analysis" ${lddImageName}`,
],
{
stdio: ["inherit", "inherit", "inherit"],
}
);
if (lddRunResult.exitCode !== 0) {
throw new Error("ldd analysis failed");
}
// Run strace analysis on native arm64
const straceImageName = `mcp-analysis-strace:${path.basename(buildDir)}`;
console.log("Building strace analysis container (arm64)...");
const straceBuildResult = Bun.spawnSync(
[
"sh",
"-c",
`docker build --platform linux/arm64 -t ${straceImageName} -f ${stracePath} .`,
],
{
stdio: ["inherit", "inherit", "inherit"],
}
);
if (straceBuildResult.exitCode !== 0) {
throw new Error("Failed to build strace analysis container");
}
console.log("Running strace analysis...");
const straceRunResult = Bun.spawnSync(
[
"sh",
"-c",
`docker run --platform linux/arm64 --cap-add=SYS_PTRACE --rm -v "${analysisDir}:/analysis" ${straceImageName}`,
],
{
stdio: ["inherit", "inherit", "inherit"],
}
);
if (straceRunResult.exitCode !== 0) {
throw new Error("strace analysis failed");
}
// TODO: Process analysis results
// TODO: Generate unikernel Dockerfile
}
async function main() {
const args = process.argv.slice(2);
if (args.length !== 1) {
console.error("Usage: build-unikernel <config-file-path>");
process.exit(1);
}
const configPath = args[0];
try {
const configContent = await Bun.file(configPath).text();
const config = await loadConfig(configPath);
// Validate that all commands are supported
const unsupportedCommands = Object.values(config.mcpServers)
.map((server) => server.command)
.filter((cmd) => !["uvx", "python", "node", "npx"].includes(cmd));
if (unsupportedCommands.length > 0) {
console.error(
`Error: Unsupported commands found: ${unsupportedCommands.join(", ")}`
);
process.exit(1);
}
// Create build directory structure
const buildDir = await createBuildDir(configPath, configContent);
console.log(`Created build directory: ${buildDir}`);
// Generate and write the regular Dockerfile
const dockerfile = generateDockerfile(config, configContent);
const dockerfilePath = path.join(buildDir, "Dockerfile.generated");
await fs.writeFile(dockerfilePath, dockerfile);
console.log(`Generated Dockerfile at: ${dockerfilePath}`);
// Run analysis
await runAnalysis(buildDir, config, configContent);
console.log(
"Analysis complete. Results in:",
path.join(buildDir, "unikernel", "analysis")
);
} catch (error) {
console.error("Error:", error);
process.exit(1);
}
}
if (require.main === module) {
main().catch(console.error);
}
```