# Directory Structure
```
├── client.py
├── comfyui_client.py
├── comfyui_workflow_test.py
├── LICENSE
├── README.md
├── server.py
└── workflows
    ├── basic_api_test.json
    └── basic.json
```
# Files
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
```markdown
# ComfyUI MCP Server
A lightweight Python-based MCP (Model Context Protocol) server that interfaces with a local [ComfyUI](https://github.com/comfyanonymous/ComfyUI) instance to generate images programmatically via AI agent requests.
## Overview
This project enables AI agents to send image generation requests to ComfyUI using the MCP protocol over WebSocket. It supports:
- Flexible workflow selection (e.g., `basic_api_test.json`).
- Dynamic parameters: `prompt`, `width`, `height`, and `model`.
- Image URLs for generated outputs, served directly by ComfyUI.
## Prerequisites
- **Python 3.10+**
- **ComfyUI**: Installed and running locally (e.g., on `localhost:8188`).
- **Dependencies**: `requests`, `websockets`, `mcp` (install via pip).
## Setup
1. **Clone the Repository**:
   ```
   git clone <your-repo-url>
   cd comfyui-mcp-server
   ```
2. **Install Dependencies**:
   ```
   pip install requests websockets mcp
   ```
3. **Start ComfyUI**:
   - Install ComfyUI (see the [ComfyUI docs](https://github.com/comfyanonymous/ComfyUI)).
   - Run it on port 8188:
     ```
     cd <ComfyUI_dir>
     python main.py --port 8188
     ```
4. **Prepare Workflows**:
   - Place API-format workflow files (e.g., `basic_api_test.json`) in the `workflows/` directory.
   - Export workflows from ComfyUI’s UI with “Save (API Format)” (enable dev mode in the settings).
## Usage
1. **Run the MCP Server**:
   ```
   python server.py
   ```
   - Listens on `ws://localhost:9000`.
2. **Test with the Client**:
   ```
   python client.py
   ```
   - Sends a sample request: `"an english mastiff dog sitting on a large boulder, bright shiny day"` at `512x512` using `v1-5-pruned-emaonly.ckpt`.
   - Example output:
     ```
     Response from server:
     {
       "image_url": "http://localhost:8188/view?filename=ComfyUI_00001_.png&subfolder=&type=output"
     }
     ```
3. **Custom Requests**:
   - Modify the `payload` in `client.py` to change `prompt`, `width`, `height`, `workflow_id`, or `model`.
   - Example:
     ```
     "params": json.dumps({
         "prompt": "a cat in space",
         "width": 768,
         "height": 768,
         "workflow_id": "basic_api_test",
         "model": "v1-5-pruned-emaonly.ckpt"
     })
     ```
## Project Structure
- `server.py`: MCP server with WebSocket transport and lifecycle support.
- `comfyui_client.py`: Interfaces with ComfyUI’s API, handles workflow queuing.
- `client.py`: Test client for sending MCP requests.
- `workflows/`: Directory for API-format workflow JSON files.
## Notes
- Ensure your chosen `model` (e.g., `v1-5-pruned-emaonly.ckpt`) exists in `<ComfyUI_dir>/models/checkpoints/`.
- The MCP SDK lacks native WebSocket transport; this uses a custom implementation.
- For custom workflows, adjust node IDs in `comfyui_client.py`’s `DEFAULT_MAPPING` if needed.
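For example, if a custom workflow’s exported API JSON carries the positive prompt on node `"10"` and the latent size on node `"12"` (illustrative ids, not taken from this repo’s workflows), the adjusted mapping would look like:

```python
# Illustrative sketch only: read the real node ids out of your own
# "Save (API Format)" export before editing DEFAULT_MAPPING.
DEFAULT_MAPPING = {
    "prompt": ("10", "text"),      # CLIPTextEncode node of the custom workflow
    "width": ("12", "width"),      # EmptyLatentImage node
    "height": ("12", "height"),
    "model": ("4", "ckpt_name"),   # CheckpointLoaderSimple node
}
```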
## Contributing
Feel free to submit issues or PRs to enhance flexibility (e.g., dynamic node mapping, progress streaming).
## License
Apache License
```
--------------------------------------------------------------------------------
/client.py:
--------------------------------------------------------------------------------
```python
import asyncio
import websockets
import json
payload = {
    "tool": "generate_image",
    "params": json.dumps({
        "prompt": "an english mastiff dog sitting on a large boulder, bright shiny day",
        "width": 512,
        "height": 512,
        "workflow_id": "basic_api_test",
        "model": "v1-5-pruned-emaonly.ckpt"  # No stray trailing quote
    })
}

async def test_mcp_server():
    uri = "ws://localhost:9000"
    try:
        async with websockets.connect(uri) as ws:
            print("Connected to MCP server")
            await ws.send(json.dumps(payload))
            response = await ws.recv()
            print("Response from server:")
            print(json.dumps(json.loads(response), indent=2))
    except Exception as e:
        print(f"WebSocket error: {e}")

if __name__ == "__main__":
    print("Testing MCP server with WebSocket...")
    asyncio.run(test_mcp_server())
```
--------------------------------------------------------------------------------
/comfyui_workflow_test.py:
--------------------------------------------------------------------------------
```python
import json
from urllib import request
# This is the ComfyUI API prompt format.
# To get it for a specific workflow, enable "dev mode options" in the UI
# settings (gear icon beside "Queue Size:"); this adds a button that saves
# workflows in API format.
# Keep in mind ComfyUI is pre-alpha software, so this format may change.
# This script uses the one for the default workflow.

def queue_prompt(prompt):
    p = {"prompt": prompt}
    data = json.dumps(p).encode('utf-8')
    req = request.Request("http://localhost:8188/prompt", data=data)
    request.urlopen(req)

with open("workflows/basic_api_test.json", "r") as f:
    prompt = json.load(f)

# Set the text prompt for our positive CLIPTextEncode node
prompt["6"]["inputs"]["text"] = "beautiful scenery nature glass bottle landscape, purple galaxy bottle"
# Set the seed for our KSampler node
prompt["3"]["inputs"]["seed"] = 5

queue_prompt(prompt)
```
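`queue_prompt` above fires and forgets. To learn when the job has finished, you can poll `/history/<prompt_id>` the way `comfyui_client.py` does; a minimal standard-library sketch (the completion check is split out so it can be tested without a running server):

```python
import json
import time
from urllib import request

def extract_outputs(history, prompt_id):
    """Return the outputs dict once the prompt appears in history, else None."""
    entry = history.get(prompt_id)
    return entry["outputs"] if entry else None

def wait_for_outputs(prompt_id, base_url="http://localhost:8188", attempts=30):
    """Poll ComfyUI's history endpoint roughly once per second."""
    for _ in range(attempts):
        with request.urlopen(f"{base_url}/history/{prompt_id}") as resp:
            history = json.loads(resp.read())
        outputs = extract_outputs(history, prompt_id)
        if outputs is not None:
            return outputs
        time.sleep(1)
    raise TimeoutError(f"Prompt {prompt_id} did not finish within {attempts}s")
```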
--------------------------------------------------------------------------------
/workflows/basic_api_test.json:
--------------------------------------------------------------------------------
```json
{
  "3": {
    "inputs": {
      "seed": 156680208700286,
      "steps": 20,
      "cfg": 8,
      "sampler_name": "euler",
      "scheduler": "normal",
      "denoise": 1,
      "model": ["4", 0],
      "positive": ["6", 0],
      "negative": ["7", 0],
      "latent_image": ["5", 0]
    },
    "class_type": "KSampler",
    "_meta": {"title": "KSampler"}
  },
  "4": {
    "inputs": {
      "ckpt_name": "v1-5-pruned-emaonly.ckpt"
    },
    "class_type": "CheckpointLoaderSimple",
    "_meta": {"title": "Load Checkpoint"}
  },
  "5": {
    "inputs": {
      "width": 512,
      "height": 512,
      "batch_size": 1
    },
    "class_type": "EmptyLatentImage",
    "_meta": {"title": "Empty Latent Image"}
  },
  "6": {
    "inputs": {
      "text": "beautiful scenery nature glass bottle landscape, , purple galaxy bottle,",
      "clip": ["4", 1]
    },
    "class_type": "CLIPTextEncode",
    "_meta": {"title": "CLIP Text Encode (Prompt)"}
  },
  "7": {
    "inputs": {
      "text": "text, watermark",
      "clip": ["4", 1]
    },
    "class_type": "CLIPTextEncode",
    "_meta": {"title": "CLIP Text Encode (Prompt)"}
  },
  "8": {
    "inputs": {
      "samples": ["3", 0],
      "vae": ["4", 2]
    },
    "class_type": "VAEDecode",
    "_meta": {"title": "VAE Decode"}
  },
  "9": {
    "inputs": {
      "filename_prefix": "ComfyUI",
      "images": ["8", 0]
    },
    "class_type": "SaveImage",
    "_meta": {"title": "Save Image"}
  }
}
```
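In this API format an input value like `["4", 0]` is a link: it reads output slot 0 of node `"4"`. A small sketch that lists those links (a trimmed-down workflow dict is inlined for illustration):

```python
# Minimal sketch: enumerate the links in an API-format workflow dict.
# An input value like ["4", 0] means "read output slot 0 of node 4".
workflow = {
    "3": {"class_type": "KSampler",
          "inputs": {"seed": 5, "model": ["4", 0], "latent_image": ["5", 0]}},
    "4": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "v1-5-pruned-emaonly.ckpt"}},
    "5": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 512, "height": 512, "batch_size": 1}},
}

def connections(workflow):
    """Yield (dst_node, input_name, src_node, src_slot) for every link."""
    for node_id, node in workflow.items():
        for name, value in node["inputs"].items():
            if isinstance(value, list) and len(value) == 2:
                yield node_id, name, value[0], value[1]

for dst, name, src, slot in connections(workflow):
    print(f"node {dst}.{name} <- node {src} (output slot {slot})")
```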
--------------------------------------------------------------------------------
/server.py:
--------------------------------------------------------------------------------
```python
import asyncio
import json
import logging
from typing import AsyncIterator
from contextlib import asynccontextmanager

import websockets
from mcp.server.fastmcp import FastMCP

from comfyui_client import ComfyUIClient

# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("MCP_Server")

# Global ComfyUI client (fallback since context isn't available)
comfyui_client = ComfyUIClient("http://localhost:8188")

# Define application context (for future use)
class AppContext:
    def __init__(self, comfyui_client: ComfyUIClient):
        self.comfyui_client = comfyui_client

# Lifespan management (placeholder for future context support)
@asynccontextmanager
async def app_lifespan(server: FastMCP) -> AsyncIterator[AppContext]:
    """Manage application lifecycle."""
    logger.info("Starting MCP server lifecycle...")
    try:
        # Startup: a ComfyUI health check could be added here in the future
        logger.info("ComfyUI client initialized globally")
        yield AppContext(comfyui_client=comfyui_client)
    finally:
        # Shutdown: cleanup (if needed)
        logger.info("Shutting down MCP server")

# Initialize FastMCP with lifespan
mcp = FastMCP("ComfyUI_MCP_Server", lifespan=app_lifespan)

# Define the image generation tool
@mcp.tool()
def generate_image(params: str) -> dict:
    """Generate an image using ComfyUI."""
    logger.info(f"Received request with params: {params}")
    try:
        param_dict = json.loads(params)
        prompt = param_dict["prompt"]
        width = param_dict.get("width", 512)
        height = param_dict.get("height", 512)
        workflow_id = param_dict.get("workflow_id", "basic_api_test")
        model = param_dict.get("model", None)
        # Use the global comfyui_client (mcp.context isn't available)
        image_url = comfyui_client.generate_image(
            prompt=prompt,
            width=width,
            height=height,
            workflow_id=workflow_id,
            model=model
        )
        logger.info(f"Returning image URL: {image_url}")
        return {"image_url": image_url}
    except Exception as e:
        logger.error(f"Error: {e}")
        return {"error": str(e)}

# WebSocket server
async def handle_websocket(websocket, path=None):
    # `path` is optional for compatibility: older websockets versions pass it,
    # while newer ones call the handler with the connection only.
    logger.info("WebSocket client connected")
    try:
        async for message in websocket:
            request = json.loads(message)
            logger.info(f"Received message: {request}")
            if request.get("tool") == "generate_image":
                result = generate_image(request.get("params", ""))
                await websocket.send(json.dumps(result))
            else:
                await websocket.send(json.dumps({"error": "Unknown tool"}))
    except websockets.ConnectionClosed:
        logger.info("WebSocket client disconnected")

# Main server loop
async def main():
    logger.info("Starting MCP server on ws://localhost:9000...")
    async with websockets.serve(handle_websocket, "localhost", 9000):
        await asyncio.Future()  # Run forever

if __name__ == "__main__":
    asyncio.run(main())
```
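The lifespan hook above leaves room for a startup health check. A minimal standard-library sketch against ComfyUI's `/system_stats` endpoint (the endpoint exists in current ComfyUI builds, but treat its availability as an assumption):

```python
import urllib.request
import urllib.error

def comfyui_is_up(base_url="http://localhost:8188", timeout=2.0):
    """Cheap liveness probe: True only if ComfyUI answers /system_stats with 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/system_stats", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

`app_lifespan` could call this before yielding and log a warning (or raise) when it returns `False`.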
--------------------------------------------------------------------------------
/workflows/basic.json:
--------------------------------------------------------------------------------
```json
{"last_node_id":9,"last_link_id":9,"nodes":[{"id":7,"type":"CLIPTextEncode","pos":[413,389],"size":[425.27801513671875,180.6060791015625],"flags":{},"order":3,"mode":0,"inputs":[{"name":"clip","localized_name":"clip","type":"CLIP","link":5}],"outputs":[{"name":"CONDITIONING","localized_name":"CONDITIONING","type":"CONDITIONING","links":[6],"slot_index":0}],"properties":{"cnr_id":"comfy-core","ver":"0.3.22","Node name for S&R":"CLIPTextEncode"},"widgets_values":["text, watermark"]},{"id":6,"type":"CLIPTextEncode","pos":[415,186],"size":[422.84503173828125,164.31304931640625],"flags":{},"order":2,"mode":0,"inputs":[{"name":"clip","localized_name":"clip","type":"CLIP","link":3}],"outputs":[{"name":"CONDITIONING","localized_name":"CONDITIONING","type":"CONDITIONING","links":[4],"slot_index":0}],"properties":{"cnr_id":"comfy-core","ver":"0.3.22","Node name for S&R":"CLIPTextEncode"},"widgets_values":["beautiful scenery nature glass bottle landscape, , purple galaxy bottle,"]},{"id":5,"type":"EmptyLatentImage","pos":[473,609],"size":[315,106],"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"LATENT","localized_name":"LATENT","type":"LATENT","links":[2],"slot_index":0}],"properties":{"cnr_id":"comfy-core","ver":"0.3.22","Node name for S&R":"EmptyLatentImage"},"widgets_values":[512,512,1]},{"id":3,"type":"KSampler","pos":[863,186],"size":[315,262],"flags":{},"order":4,"mode":0,"inputs":[{"name":"model","localized_name":"model","type":"MODEL","link":1},{"name":"positive","localized_name":"positive","type":"CONDITIONING","link":4},{"name":"negative","localized_name":"negative","type":"CONDITIONING","link":6},{"name":"latent_image","localized_name":"latent_image","type":"LATENT","link":2}],"outputs":[{"name":"LATENT","localized_name":"LATENT","type":"LATENT","links":[7],"slot_index":0}],"properties":{"cnr_id":"comfy-core","ver":"0.3.22","Node name for S&R":"KSampler"},"widgets_values":[156680208700286,"randomize",20,8,"euler","normal",1]},{"id":8,"type":"VAEDecode","pos":[1209,188],"size":[210,46],"flags":{},"order":5,"mode":0,"inputs":[{"name":"samples","localized_name":"samples","type":"LATENT","link":7},{"name":"vae","localized_name":"vae","type":"VAE","link":8}],"outputs":[{"name":"IMAGE","localized_name":"IMAGE","type":"IMAGE","links":[9],"slot_index":0}],"properties":{"cnr_id":"comfy-core","ver":"0.3.22","Node name for S&R":"VAEDecode"},"widgets_values":[]},{"id":9,"type":"SaveImage","pos":[1451,189],"size":[210,58],"flags":{},"order":6,"mode":0,"inputs":[{"name":"images","localized_name":"images","type":"IMAGE","link":9}],"outputs":[],"properties":{"cnr_id":"comfy-core","ver":"0.3.22"},"widgets_values":["ComfyUI"]},{"id":4,"type":"CheckpointLoaderSimple","pos":[26,474],"size":[315,98],"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"MODEL","localized_name":"MODEL","type":"MODEL","links":[1],"slot_index":0},{"name":"CLIP","localized_name":"CLIP","type":"CLIP","links":[3,5],"slot_index":1},{"name":"VAE","localized_name":"VAE","type":"VAE","links":[8],"slot_index":2}],"properties":{"cnr_id":"comfy-core","ver":"0.3.22","Node name for S&R":"CheckpointLoaderSimple"},"widgets_values":["v1-5-pruned-emaonly.ckpt"]}],"links":[[1,4,0,3,0,"MODEL"],[2,5,0,3,3,"LATENT"],[3,4,1,6,0,"CLIP"],[4,6,0,3,1,"CONDITIONING"],[5,4,1,7,0,"CLIP"],[6,7,0,3,2,"CONDITIONING"],[7,3,0,8,0,"LATENT"],[8,4,2,8,1,"VAE"],[9,8,0,9,0,"IMAGE"]],"groups":[],"config":{},"extra":{"ds":{"scale":1,"offset":[0,0]}},"version":0.4}
```
--------------------------------------------------------------------------------
/comfyui_client.py:
--------------------------------------------------------------------------------
```python
import requests
import json
import time
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ComfyUIClient")

# Maps generation parameters to (node_id, input_key) in the default workflow
DEFAULT_MAPPING = {
    "prompt": ("6", "text"),
    "width": ("5", "width"),
    "height": ("5", "height"),
    "model": ("4", "ckpt_name")
}

class ComfyUIClient:
    def __init__(self, base_url):
        self.base_url = base_url
        self.available_models = self._get_available_models()

    def _get_available_models(self):
        """Fetch the list of available checkpoint models from ComfyUI."""
        try:
            response = requests.get(f"{self.base_url}/object_info/CheckpointLoaderSimple")
            if response.status_code != 200:
                logger.warning("Failed to fetch model list; using default handling")
                return []
            data = response.json()
            models = data["CheckpointLoaderSimple"]["input"]["required"]["ckpt_name"][0]
            logger.info(f"Available models: {models}")
            return models
        except Exception as e:
            logger.warning(f"Error fetching models: {e}")
            return []

    def generate_image(self, prompt, width, height, workflow_id="basic_api_test", model=None):
        workflow_file = f"workflows/{workflow_id}.json"
        try:
            with open(workflow_file, "r") as f:
                workflow = json.load(f)
            params = {"prompt": prompt, "width": width, "height": height}
            if model:
                # Strip an accidental trailing quote from the model name
                if model.endswith("'"):
                    model = model.rstrip("'")
                    logger.info(f"Corrected model name: {model}")
                if self.available_models and model not in self.available_models:
                    raise Exception(f"Model '{model}' not in available models: {self.available_models}")
                params["model"] = model
            for param_key, value in params.items():
                if param_key in DEFAULT_MAPPING:
                    node_id, input_key = DEFAULT_MAPPING[param_key]
                    if node_id not in workflow:
                        raise Exception(f"Node {node_id} not found in workflow {workflow_id}")
                    workflow[node_id]["inputs"][input_key] = value
            logger.info(f"Submitting workflow {workflow_id} to ComfyUI...")
            response = requests.post(f"{self.base_url}/prompt", json={"prompt": workflow})
            if response.status_code != 200:
                raise Exception(f"Failed to queue workflow: {response.status_code} - {response.text}")
            prompt_id = response.json()["prompt_id"]
            logger.info(f"Queued workflow with prompt_id: {prompt_id}")
            # Poll history once per second until the workflow completes
            max_attempts = 30
            for _ in range(max_attempts):
                history = requests.get(f"{self.base_url}/history/{prompt_id}").json()
                if history.get(prompt_id):
                    outputs = history[prompt_id]["outputs"]
                    logger.info("Workflow outputs: %s", json.dumps(outputs, indent=2))
                    image_node = next((nid for nid, out in outputs.items() if "images" in out), None)
                    if not image_node:
                        raise Exception(f"No output node with images found: {outputs}")
                    image_filename = outputs[image_node]["images"][0]["filename"]
                    image_url = f"{self.base_url}/view?filename={image_filename}&subfolder=&type=output"
                    logger.info(f"Generated image URL: {image_url}")
                    return image_url
                time.sleep(1)
            raise Exception(f"Workflow {prompt_id} didn't complete within {max_attempts} seconds")
        except FileNotFoundError:
            raise Exception(f"Workflow file '{workflow_file}' not found")
        except KeyError as e:
            raise Exception(f"Workflow error - invalid node or input: {e}")
        except requests.RequestException as e:
            raise Exception(f"ComfyUI API error: {e}")
```
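The parameter-injection loop in `generate_image` can be exercised without a running ComfyUI instance. A self-contained sketch of the same logic (`apply_params` is a helper name introduced here for illustration, not part of the repo):

```python
# Maps generation parameters to (node_id, input_key), as in comfyui_client.py
DEFAULT_MAPPING = {
    "prompt": ("6", "text"),
    "width": ("5", "width"),
    "height": ("5", "height"),
    "model": ("4", "ckpt_name"),
}

def apply_params(workflow, params, mapping=DEFAULT_MAPPING):
    """Write each known parameter into its (node_id, input_key) slot in place."""
    for key, value in params.items():
        if key in mapping:
            node_id, input_key = mapping[key]
            if node_id not in workflow:
                raise KeyError(f"Node {node_id} not found in workflow")
            workflow[node_id]["inputs"][input_key] = value
    return workflow

# Trimmed-down stand-in for workflows/basic_api_test.json
workflow = {
    "5": {"inputs": {"width": 512, "height": 512}},
    "6": {"inputs": {"text": ""}},
}
apply_params(workflow, {"prompt": "a cat in space", "width": 768, "height": 768})
```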