# Directory Structure

```
├── .gitignore
├── .python-version
├── LICENSE
├── pyproject.toml
├── README.md
├── src
│   └── comfy_ui_mcp_server
│       ├── __init__.py
│       └── server.py
└── uv.lock
```

# Files

--------------------------------------------------------------------------------
/.python-version:
--------------------------------------------------------------------------------

```
1 | 3.10
2 | 
```

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
 1 | # Python-generated files
 2 | __pycache__/
 3 | *.py[oc]
 4 | build/
 5 | dist/
 6 | wheels/
 7 | *.egg-info
 8 | 
 9 | # Virtual environments
10 | .venv
11 | 
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
  1 | # comfy-ui-mcp-server MCP server
  2 | 
  3 | A server for connecting to a local ComfyUI instance
  4 | 
  5 | ## Components
  6 | 
  7 | ### Tools
  8 | 
  9 | The server implements one tool:
 10 | - generate_image: Generates an image using a local ComfyUI instance
 11 |   - Takes a required "prompt" string argument (the positive prompt)
 12 |   - Optional "negative_prompt" string (default: "bad hands, bad quality")
 13 |   - Optional "seed" number for reproducible generation (default: 8566257)
 14 |   - Optional "width" and "height" numbers in pixels (default: 512)
 15 |   - Returns the generated image as base64-encoded PNG content
 16 | 
 17 | The underlying ComfyUI workflow loads the "v1-5-pruned-emaonly.safetensors"
 18 | checkpoint, encodes the positive and negative prompts with CLIP, samples
 19 | with a KSampler (euler, 20 steps, CFG 8), decodes the latents with the
 20 | VAE, and streams the resulting PNG back to the server over a websocket.
 21 | A copy of each image is also saved on the ComfyUI side with the filename
 22 | prefix "mcp".
 23 | 
 24 | The server does not currently implement any MCP resources or prompts.
 25 | 
 26 | 
 27 | ## Configuration
 28 | 
 29 | The server reads the COMFY_SERVER environment variable for the ComfyUI address (default: 127.0.0.1:8188). ComfyUI must be running at that address with the "v1-5-pruned-emaonly.safetensors" checkpoint available.
 30 | 
 31 | ## Quickstart
 32 | 
 33 | ### Install
 34 | 
 35 | #### Claude Desktop
 36 | 
 37 | On macOS: `~/Library/Application\ Support/Claude/claude_desktop_config.json`
 38 | On Windows: `%APPDATA%/Claude/claude_desktop_config.json`
 39 | 
 40 | <details>
 41 |   <summary>Development/Unpublished Servers Configuration</summary>
 42 |   ```
 43 |   "mcpServers": {
 44 |     "comfy-ui-mcp-server": {
 45 |       "command": "uv",
 46 |       "args": [
 47 |         "--directory",
 48 |         "E:\\Claude\\comfy-ui-mcp-server",
 49 |         "run",
 50 |         "comfy-ui-mcp-server"
 51 |       ]
 52 |     }
 53 |   }
 54 |   ```
 55 | </details>
 56 | 
 57 | <details>
 58 |   <summary>Published Servers Configuration</summary>
 59 |   ```
 60 |   "mcpServers": {
 61 |     "comfy-ui-mcp-server": {
 62 |       "command": "uvx",
 63 |       "args": [
 64 |         "comfy-ui-mcp-server"
 65 |       ]
 66 |     }
 67 |   }
 68 |   ```
 69 | </details>
 70 | 
 71 | ## Development
 72 | 
 73 | ### Building and Publishing
 74 | 
 75 | To prepare the package for distribution:
 76 | 
 77 | 1. Sync dependencies and update lockfile:
 78 | ```bash
 79 | uv sync
 80 | ```
 81 | 
 82 | 2. Build package distributions:
 83 | ```bash
 84 | uv build
 85 | ```
 86 | 
 87 | This will create source and wheel distributions in the `dist/` directory.
 88 | 
 89 | 3. Publish to PyPI:
 90 | ```bash
 91 | uv publish
 92 | ```
 93 | 
 94 | Note: You'll need to set PyPI credentials via environment variables or command flags:
 95 | - Token: `--token` or `UV_PUBLISH_TOKEN`
 96 | - Or username/password: `--username`/`UV_PUBLISH_USERNAME` and `--password`/`UV_PUBLISH_PASSWORD`
 97 | 
 98 | ### Debugging
 99 | 
100 | Since MCP servers run over stdio, debugging can be challenging. For the best debugging
101 | experience, we strongly recommend using the [MCP Inspector](https://github.com/modelcontextprotocol/inspector).
102 | 
103 | 
104 | You can launch the MCP Inspector via [`npm`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) with this command:
105 | 
106 | ```bash
107 | npx @modelcontextprotocol/inspector uv --directory E:\Claude\comfy-ui-mcp-server run comfy-ui-mcp-server
108 | ```
109 | 
110 | 
111 | Upon launching, the Inspector will display a URL that you can access in your browser to begin debugging.
```
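
The defaults the generate_image tool applies live in its inputSchema; the merging that `call_tool` performs can be sketched as below. The `resolve_arguments` helper is hypothetical (it does not exist in the repository), but the names and default values mirror `server.py`:

```python
# Defaults mirror the generate_image inputSchema in server.py.
DEFAULTS = {
    "negative_prompt": "bad hands, bad quality",
    "seed": 8566257,
    "width": 512,
    "height": 512,
}

def resolve_arguments(arguments: dict) -> dict:
    """Merge caller-supplied arguments over the schema defaults.

    'prompt' is the only required key; numeric fields are coerced to int
    the same way call_tool does before building the workflow.
    """
    if "prompt" not in arguments:
        raise ValueError("Invalid generation arguments")
    resolved = {**DEFAULTS, **arguments}
    for key in ("seed", "width", "height"):
        resolved[key] = int(resolved[key])
    return resolved

# Caller overrides width; everything else falls back to the defaults.
args = resolve_arguments({"prompt": "a watercolor fox", "width": 768})
```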

--------------------------------------------------------------------------------
/src/comfy_ui_mcp_server/__init__.py:
--------------------------------------------------------------------------------

```python
 1 | # __init__.py
 2 | 
 3 | import asyncio
 4 | import sys
 5 | 
 6 | from .server import main as server_main
 7 | 
 8 | def main():
 9 |     """Entry point for the package."""
10 |     try:
11 |         asyncio.run(server_main())
12 |     except Exception as e:
13 |         # Report to stderr: stdout carries the MCP stdio protocol.
14 |         print(f"Error running server: {e}", file=sys.stderr)
15 |         raise
```

--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------

```toml
 1 | [project]
 2 | name = "comfy-ui-mcp-server"
 3 | version = "0.1.0"
 4 | description = "MCP server for ComfyUI integration"
 5 | authors = [{ name = "Your Name", email = "[email protected]" }]
 6 | dependencies = [
 7 |     "mcp>=0.1.0",
 8 |     "websockets>=12.0",
 9 |     "aiohttp>=3.9.1",
10 |     "pydantic>=2.5.2",
11 |     "websocket-client>=1.8.0"
12 | ]
13 | requires-python = ">=3.10"
14 | 
15 | [build-system]
16 | requires = ["hatchling"]
17 | build-backend = "hatchling.build"
18 | 
19 | [tool.hatch.metadata]
20 | allow-direct-references = true
21 | 
22 | [tool.hatch.build.targets.wheel]
23 | packages = ["src/comfy_ui_mcp_server"]
24 | 
25 | [project.scripts]
26 | comfy-ui-mcp-server = "comfy_ui_mcp_server:main"
27 | 
28 | [tool.rye]
29 | managed = true
30 | dev-dependencies = [
31 |     "pytest>=7.4.3",
32 |     "pytest-asyncio>=0.23.2"
33 | ]
```

--------------------------------------------------------------------------------
/src/comfy_ui_mcp_server/server.py:
--------------------------------------------------------------------------------

```python
  1 | import asyncio
  2 | import json
  3 | import logging
  4 | import os
  5 | import uuid
  6 | import base64
  7 | from dataclasses import dataclass
  8 | from typing import Any, Dict, List
  9 | 
 10 | import aiohttp
 11 | import websockets
 12 | from mcp.server import Server
 13 | from mcp.server.stdio import stdio_server
 14 | from mcp.types import (ImageContent, TextContent, Tool,
 15 |                       EmbeddedResource)
 16 | 
 17 | # Configure logging
 18 | logging.basicConfig(level=logging.INFO)
 19 | logger = logging.getLogger("comfy-mcp-server")
 20 | 
 21 | @dataclass
 22 | class ComfyConfig:
 23 |     server_address: str
 24 |     client_id: str
 25 | 
 26 | class ComfyUIServer:
 27 |     def __init__(self):
 28 |         self.config = ComfyConfig(
 29 |             server_address=os.getenv("COMFY_SERVER", "127.0.0.1:8188"),
 30 |             client_id=str(uuid.uuid4())
 31 |         )
 32 |         self.app = Server("comfy-mcp-server")
 33 |         self.setup_handlers()
 34 | 
 35 |     def setup_handlers(self):
 36 |         @self.app.list_tools()
 37 |         async def list_tools() -> List[Tool]:
 38 |             """List available image generation tools."""
 39 |             return [
 40 |                 Tool(
 41 |                     name="generate_image",
 42 |                     description="Generate an image using ComfyUI",
 43 |                     inputSchema={
 44 |                         "type": "object",
 45 |                         "properties": {
 46 |                             "prompt": {
 47 |                                 "type": "string",
 48 |                                 "description": "Positive prompt describing what you want in the image"
 49 |                             },
 50 |                             "negative_prompt": {
 51 |                                 "type": "string",
 52 |                                 "description": "Negative prompt describing what you don't want",
 53 |                                 "default": "bad hands, bad quality"
 54 |                             },
 55 |                             "seed": {
 56 |                                 "type": "number",
 57 |                                 "description": "Seed for reproducible generation",
 58 |                                 "default": 8566257
 59 |                             },
 60 |                             "width": {
 61 |                                 "type": "number",
 62 |                                 "description": "Image width in pixels",
 63 |                                 "default": 512
 64 |                             },
 65 |                             "height": {
 66 |                                 "type": "number",
 67 |                                 "description": "Image height in pixels",
 68 |                                 "default": 512
 69 |                             }
 70 |                         },
 71 |                         "required": ["prompt"]
 72 |                     }
 73 |                 )
 74 |             ]
 75 | 
 76 |         @self.app.call_tool()
 77 |         async def call_tool(name: str, arguments: Dict[str, Any]) -> List[TextContent | ImageContent | EmbeddedResource]:
 78 |             """Handle tool execution for image generation."""
 79 |             if name != "generate_image":
 80 |                 raise ValueError(f"Unknown tool: {name}")
 81 | 
 82 |             if not isinstance(arguments, dict) or "prompt" not in arguments:
 83 |                 raise ValueError("Invalid generation arguments")
 84 | 
 85 |             try:
 86 |                 logger.info(f"Generating image with arguments: {arguments}")
 87 |                 image_data = await self.generate_image(
 88 |                     prompt=arguments["prompt"],
 89 |                     negative_prompt=arguments.get("negative_prompt", "bad hands, bad quality"),
 90 |                     seed=int(arguments.get("seed", 8566257)),
 91 |                     width=int(arguments.get("width", 512)),
 92 |                     height=int(arguments.get("height", 512))
 93 |                 )
 94 | 
 95 |                 if image_data:
 96 |                     return [
 97 |                         ImageContent(
 98 |                             type="image",
 99 |                             data=base64.b64encode(image_data).decode('utf-8'),
100 |                             mimeType="image/png"
101 |                         )
102 |                     ]
103 |                 else:
104 |                     raise RuntimeError("No image data received")
105 | 
106 |             except Exception as e:
107 |                 logger.error(f"Generation error: {str(e)}")
108 |                 return [
109 |                     TextContent(
110 |                         type="text",
111 |                         text=f"Image generation failed: {str(e)}"
112 |                     )
113 |                 ]
114 | 
115 |     async def generate_image(
116 |         self,
117 |         prompt: str,
118 |         negative_prompt: str,
119 |         seed: int,
120 |         width: int,
121 |         height: int
122 |     ) -> bytes:
123 |         """Generate an image using ComfyUI."""
124 |         # Construct ComfyUI workflow
125 |         workflow = {
126 |             "4": {
127 |                 "class_type": "CheckpointLoaderSimple",
128 |                 "inputs": {
129 |                     "ckpt_name": "v1-5-pruned-emaonly.safetensors"
130 |                 }
131 |             },
132 |             "5": {
133 |                 "class_type": "EmptyLatentImage",
134 |                 "inputs": {
135 |                     "batch_size": 1,
136 |                     "height": height,
137 |                     "width": width
138 |                 }
139 |             },
140 |             "6": {
141 |                 "class_type": "CLIPTextEncode",
142 |                 "inputs": {
143 |                     "clip": ["4", 1],
144 |                     "text": prompt
145 |                 }
146 |             },
147 |             "7": {
148 |                 "class_type": "CLIPTextEncode",
149 |                 "inputs": {
150 |                     "clip": ["4", 1],
151 |                     "text": negative_prompt
152 |                 }
153 |             },
154 |             "3": {
155 |                 "class_type": "KSampler",
156 |                 "inputs": {
157 |                     "cfg": 8,
158 |                     "denoise": 1,
159 |                     "latent_image": ["5", 0],
160 |                     "model": ["4", 0],
161 |                     "negative": ["7", 0],
162 |                     "positive": ["6", 0],
163 |                     "sampler_name": "euler",
164 |                     "scheduler": "normal",
165 |                     "seed": seed,
166 |                     "steps": 20
167 |                 }
168 |             },
169 |             "8": {
170 |                 "class_type": "VAEDecode",
171 |                 "inputs": {
172 |                     "samples": ["3", 0],
173 |                     "vae": ["4", 2]
174 |                 }
175 |             },
176 |             "save_image_websocket": {
177 |                 "class_type": "SaveImageWebsocket",
178 |                 "inputs": {
179 |                     "images": ["8", 0]
180 |                 }
181 |             },
182 |             "save_image": {
183 |                 "class_type": "SaveImage",
184 |                 "inputs": {
185 |                     "images": ["8", 0],
186 |                     "filename_prefix": "mcp"
187 |                 }
188 |             }
189 |         }
190 | 
191 |         try:
192 |             prompt_response = await self.queue_prompt(workflow)
193 |             logger.info(f"Queued prompt, got response: {prompt_response}")
194 |             prompt_id = prompt_response["prompt_id"]
195 |         except Exception as e:
196 |             logger.error(f"Error queuing prompt: {e}")
197 |             raise
198 | 
199 |         uri = f"ws://{self.config.server_address}/ws?clientId={self.config.client_id}"
200 |         logger.info(f"Connecting to websocket at {uri}")
201 |         
202 |         async with websockets.connect(uri) as websocket:
203 |             while True:
204 |                 try:
205 |                     message = await websocket.recv()
206 |                     
207 |                     if isinstance(message, str):
208 |                         try:
209 |                             data = json.loads(message)
210 |                             logger.info(f"Received text message: {data}")
211 |                             
212 |                             if data.get("type") == "executing":
213 |                                 exec_data = data.get("data", {})
214 |                                 if exec_data.get("prompt_id") == prompt_id:
215 |                                     node = exec_data.get("node")
216 |                                     logger.info(f"Processing node: {node}")
217 |                                     if node is None:
218 |                                         logger.info("Generation complete signal received")
219 |                                         break
220 |                         except json.JSONDecodeError:
221 |                             logger.debug("Ignoring non-JSON text message")
222 |                     else:
223 |                         logger.info(f"Received binary message of length: {len(message)}")
224 |                         if len(message) > 8:  # Check if we have actual image data
225 |                             return message[8:]  # Remove binary header
226 |                         else:
227 |                             logger.warning(f"Received short binary message: {message}")
228 |                 
229 |                 except websockets.exceptions.ConnectionClosed as e:
230 |                     logger.error(f"WebSocket connection closed: {e}")
231 |                     break
232 |                 except Exception as e:
233 |                     logger.error(f"Error processing message: {e}")
234 |                     continue
235 | 
236 |         raise RuntimeError("No valid image data received")
237 | 
238 |     async def queue_prompt(self, prompt: Dict[str, Any]) -> Dict[str, Any]:
239 |         """Queue a prompt with ComfyUI."""
240 |         async with aiohttp.ClientSession() as session:
241 |             try:
242 |                 async with session.post(
243 |                     f"http://{self.config.server_address}/prompt",
244 |                     json={
245 |                         "prompt": prompt,
246 |                         "client_id": self.config.client_id
247 |                     }
248 |                 ) as response:
249 |                     if response.status != 200:
250 |                         text = await response.text()
251 |                         raise RuntimeError(f"Failed to queue prompt: {response.status} - {text}")
252 |                     return await response.json()
253 |             except aiohttp.ClientError as e:
254 |                 raise RuntimeError(f"HTTP request failed: {e}") from e
255 | 
256 | async def main():
257 |     """Main entry point for the ComfyUI MCP server."""
258 |     server = ComfyUIServer()
259 |     
260 |     async with stdio_server() as (read_stream, write_stream):
261 |         await server.app.run(
262 |             read_stream,
263 |             write_stream,
264 |             server.app.create_initialization_options()
265 |         )
266 | 
267 | if __name__ == "__main__":
268 |     asyncio.run(main())
```
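
The `message[8:]` slice in `generate_image` drops the header that ComfyUI prepends to binary websocket frames. Assuming that header is 8 bytes of two big-endian uint32s (event type and image format) followed by the raw PNG bytes, the framing can be sketched with a hypothetical `strip_comfy_header` helper:

```python
import struct

def strip_comfy_header(frame: bytes) -> bytes:
    """Return the image payload of a binary websocket frame, assuming an
    8-byte header of two big-endian uint32s (event type, image format)."""
    if len(frame) <= 8:
        raise ValueError("Frame too short to contain image data")
    event_type, image_format = struct.unpack(">II", frame[:8])
    # The header values are informational; the image bytes follow.
    return frame[8:]

# Simulated frame: header (event 1, format 2) plus stand-in image bytes.
fake_frame = struct.pack(">II", 1, 2) + b"\x89PNG-data"
payload = strip_comfy_header(fake_frame)
```

This mirrors the `len(message) > 8` guard in `server.py`: frames at or below 8 bytes carry no image data and are logged and skipped rather than returned.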