# Directory Structure

```
├── .gitignore
├── .python-version
├── LICENSE
├── pyproject.toml
├── README.md
├── src
│   └── comfy_ui_mcp_server
│       ├── __init__.py
│       └── server.py
└── uv.lock
```

# Files

--------------------------------------------------------------------------------
/.python-version:
--------------------------------------------------------------------------------

```
3.10

```

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
# Python-generated files
__pycache__/
*.py[oc]
build/
dist/
wheels/
*.egg-info

# Virtual environments
.venv

```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
# comfy-ui-mcp-server MCP server

An MCP server for connecting to a local ComfyUI instance

## Components

### Tools

The server implements one tool:
- generate_image: Generates an image with a local ComfyUI instance
  - Takes a required "prompt" string argument describing the desired image
  - Accepts optional "negative_prompt", "seed", "width" and "height" arguments
  - Queues a text-to-image workflow with ComfyUI and returns the generated PNG
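For reference, the tool's argument validation and default handling in `server.py` can be sketched as follows (the `resolve_arguments` helper is illustrative, not part of the package; the defaults match the tool's `inputSchema`):

```python
# Defaults declared in the generate_image inputSchema (server.py).
DEFAULTS = {
    "negative_prompt": "bad hands, bad quality",
    "seed": 8566257,
    "width": 512,
    "height": 512,
}

def resolve_arguments(arguments: dict) -> dict:
    """Validate a generate_image call and fill in schema defaults."""
    if not isinstance(arguments, dict) or "prompt" not in arguments:
        raise ValueError("Invalid generation arguments")
    resolved = {"prompt": arguments["prompt"]}
    for key, default in DEFAULTS.items():
        resolved[key] = arguments.get(key, default)
    return resolved
```

Only "prompt" is required; every other field falls back to its schema default when omitted.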

## Configuration

The server reads the `COMFY_SERVER` environment variable to locate the ComfyUI instance, defaulting to `127.0.0.1:8188`. The bundled workflow expects the `v1-5-pruned-emaonly.safetensors` checkpoint to be available in your ComfyUI installation.

## Quickstart

### Install

#### Claude Desktop

On macOS: `~/Library/Application\ Support/Claude/claude_desktop_config.json`
On Windows: `%APPDATA%/Claude/claude_desktop_config.json`

<details>
  <summary>Development/Unpublished Servers Configuration</summary>

  ```json
  "mcpServers": {
    "comfy-ui-mcp-server": {
      "command": "uv",
      "args": [
        "--directory",
        "E:\\Claude\\comfy-ui-mcp-server",
        "run",
        "comfy-ui-mcp-server"
      ]
    }
  }
  ```
</details>

<details>
  <summary>Published Servers Configuration</summary>

  ```json
  "mcpServers": {
    "comfy-ui-mcp-server": {
      "command": "uvx",
      "args": [
        "comfy-ui-mcp-server"
      ]
    }
  }
  ```
</details>

## Development

### Building and Publishing

To prepare the package for distribution:

1. Sync dependencies and update lockfile:
```bash
uv sync
```

2. Build package distributions:
```bash
uv build
```

This will create source and wheel distributions in the `dist/` directory.

3. Publish to PyPI:
```bash
uv publish
```

Note: You'll need to set PyPI credentials via environment variables or command flags:
- Token: `--token` or `UV_PUBLISH_TOKEN`
- Or username/password: `--username`/`UV_PUBLISH_USERNAME` and `--password`/`UV_PUBLISH_PASSWORD`

### Debugging

Since MCP servers run over stdio, debugging can be challenging. For the best debugging
experience, we strongly recommend using the [MCP Inspector](https://github.com/modelcontextprotocol/inspector).


You can launch the MCP Inspector via [`npm`](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) with this command:

```bash
npx @modelcontextprotocol/inspector uv --directory E:\Claude\comfy-ui-mcp-server run comfy-ui-mcp-server
```


Upon launching, the Inspector will display a URL that you can access in your browser to begin debugging.
```

--------------------------------------------------------------------------------
/src/comfy_ui_mcp_server/__init__.py:
--------------------------------------------------------------------------------

```python
# __init__.py

import asyncio
import sys

from .server import main as server_main

def main():
    """Entry point for the package."""
    try:
        asyncio.run(server_main())
    except Exception as e:
        # stdout carries the MCP protocol, so report errors on stderr
        print(f"Error running server: {e}", file=sys.stderr)
        raise
```

--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------

```toml
[project]
name = "comfy-ui-mcp-server"
version = "0.1.0"
description = "MCP server for ComfyUI integration"
authors = [{ name = "Your Name", email = "[email protected]" }]
dependencies = [
    "mcp>=0.1.0",
    "websockets>=12.0",
    "aiohttp>=3.9.1",
    "pydantic>=2.5.2",
    "websocket-client>=1.8.0"
]
requires-python = ">=3.10"

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[tool.hatch.metadata]
allow-direct-references = true

[tool.hatch.build.targets.wheel]
packages = ["src/comfy_ui_mcp_server"]

[project.scripts]
comfy-ui-mcp-server = "comfy_ui_mcp_server:main"

[tool.rye]
managed = true
dev-dependencies = [
    "pytest>=7.4.3",
    "pytest-asyncio>=0.23.2"
]
```

--------------------------------------------------------------------------------
/src/comfy_ui_mcp_server/server.py:
--------------------------------------------------------------------------------

```python
import asyncio
import json
import logging
import os
import uuid
import base64
from dataclasses import dataclass
from typing import Any, Dict, List

import aiohttp
import websockets
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import EmbeddedResource, ImageContent, TextContent, Tool

# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("comfy-mcp-server")

@dataclass
class ComfyConfig:
    server_address: str
    client_id: str

class ComfyUIServer:
    def __init__(self):
        self.config = ComfyConfig(
            server_address=os.getenv("COMFY_SERVER", "127.0.0.1:8188"),
            client_id=str(uuid.uuid4())
        )
        self.app = Server("comfy-mcp-server")
        self.setup_handlers()

    def setup_handlers(self):
        @self.app.list_tools()
        async def list_tools() -> List[Tool]:
            """List available image generation tools."""
            return [
                Tool(
                    name="generate_image",
                    description="Generate an image using ComfyUI",
                    inputSchema={
                        "type": "object",
                        "properties": {
                            "prompt": {
                                "type": "string",
                                "description": "Positive prompt describing what you want in the image"
                            },
                            "negative_prompt": {
                                "type": "string",
                                "description": "Negative prompt describing what you don't want",
                                "default": "bad hands, bad quality"
                            },
                            "seed": {
                                "type": "number",
                                "description": "Seed for reproducible generation",
                                "default": 8566257
                            },
                            "width": {
                                "type": "number",
                                "description": "Image width in pixels",
                                "default": 512
                            },
                            "height": {
                                "type": "number",
                                "description": "Image height in pixels",
                                "default": 512
                            }
                        },
                        "required": ["prompt"]
                    }
                )
            ]

        @self.app.call_tool()
        async def call_tool(name: str, arguments: Dict[str, Any]) -> List[TextContent | ImageContent | EmbeddedResource]:
            """Handle tool execution for image generation."""
            if name != "generate_image":
                raise ValueError(f"Unknown tool: {name}")

            if not isinstance(arguments, dict) or "prompt" not in arguments:
                raise ValueError("Invalid generation arguments")

            try:
                logger.info(f"Generating image with arguments: {arguments}")
                image_data = await self.generate_image(
                    prompt=arguments["prompt"],
                    negative_prompt=arguments.get("negative_prompt", "bad hands, bad quality"),
                    seed=int(arguments.get("seed", 8566257)),
                    width=int(arguments.get("width", 512)),
                    height=int(arguments.get("height", 512))
                )

                if image_data:
                    return [
                        ImageContent(
                            type="image",
                            data=base64.b64encode(image_data).decode('utf-8'),
                            mimeType="image/png"
                        )
                    ]
                else:
                    raise RuntimeError("No image data received")

            except Exception as e:
                logger.error(f"Generation error: {str(e)}")
                return [
                    TextContent(
                        type="text",
                        text=f"Image generation failed: {str(e)}"
                    )
                ]

    async def generate_image(
        self,
        prompt: str,
        negative_prompt: str,
        seed: int,
        width: int,
        height: int
    ) -> bytes:
        """Generate an image using ComfyUI."""
        # Construct ComfyUI workflow
        workflow = {
            "4": {
                "class_type": "CheckpointLoaderSimple",
                "inputs": {
                    "ckpt_name": "v1-5-pruned-emaonly.safetensors"
                }
            },
            "5": {
                "class_type": "EmptyLatentImage",
                "inputs": {
                    "batch_size": 1,
                    "height": height,
                    "width": width
                }
            },
            "6": {
                "class_type": "CLIPTextEncode",
                "inputs": {
                    "clip": ["4", 1],
                    "text": prompt
                }
            },
            "7": {
                "class_type": "CLIPTextEncode",
                "inputs": {
                    "clip": ["4", 1],
                    "text": negative_prompt
                }
            },
            "3": {
                "class_type": "KSampler",
                "inputs": {
                    "cfg": 8,
                    "denoise": 1,
                    "latent_image": ["5", 0],
                    "model": ["4", 0],
                    "negative": ["7", 0],
                    "positive": ["6", 0],
                    "sampler_name": "euler",
                    "scheduler": "normal",
                    "seed": seed,
                    "steps": 20
                }
            },
            "8": {
                "class_type": "VAEDecode",
                "inputs": {
                    "samples": ["3", 0],
                    "vae": ["4", 2]
                }
            },
            "save_image_websocket": {
                "class_type": "SaveImageWebsocket",
                "inputs": {
                    "images": ["8", 0]
                }
            },
            "save_image": {
                "class_type": "SaveImage",
                "inputs": {
                    "images": ["8", 0],
                    "filename_prefix": "mcp"
                }
            }
        }

        try:
            prompt_response = await self.queue_prompt(workflow)
            logger.info(f"Queued prompt, got response: {prompt_response}")
            prompt_id = prompt_response["prompt_id"]
        except Exception as e:
            logger.error(f"Error queuing prompt: {e}")
            raise

        uri = f"ws://{self.config.server_address}/ws?clientId={self.config.client_id}"
        logger.info(f"Connecting to websocket at {uri}")
        
        async with websockets.connect(uri) as websocket:
            while True:
                try:
                    message = await websocket.recv()
                    
                    if isinstance(message, str):
                        try:
                            data = json.loads(message)
                            logger.info(f"Received text message: {data}")
                            
                            if data.get("type") == "executing":
                                exec_data = data.get("data", {})
                                if exec_data.get("prompt_id") == prompt_id:
                                    node = exec_data.get("node")
                                    logger.info(f"Processing node: {node}")
                                    if node is None:
                                        logger.info("Generation complete signal received")
                                        break
                        except json.JSONDecodeError:
                            logger.warning("Received non-JSON text message")
                    else:
                        logger.info(f"Received binary message of length: {len(message)}")
                        if len(message) > 8:  # Check if we have actual image data
                            return message[8:]  # Remove binary header
                        else:
                            logger.warning(f"Received short binary message: {message}")
                
                except websockets.exceptions.ConnectionClosed as e:
                    logger.error(f"WebSocket connection closed: {e}")
                    break
                except Exception as e:
                    logger.error(f"Error processing message: {e}")
                    continue

        raise RuntimeError("No valid image data received")

    async def queue_prompt(self, prompt: Dict[str, Any]) -> Dict[str, Any]:
        """Queue a prompt with ComfyUI."""
        async with aiohttp.ClientSession() as session:
            try:
                async with session.post(
                    f"http://{self.config.server_address}/prompt",
                    json={
                        "prompt": prompt,
                        "client_id": self.config.client_id
                    }
                ) as response:
                    if response.status != 200:
                        text = await response.text()
                        raise RuntimeError(f"Failed to queue prompt: {response.status} - {text}")
                    return await response.json()
            except aiohttp.ClientError as e:
                raise RuntimeError(f"HTTP request failed: {e}")

async def main():
    """Main entry point for the ComfyUI MCP server."""
    server = ComfyUIServer()
    
    async with stdio_server() as (read_stream, write_stream):
        await server.app.run(
            read_stream,
            write_stream,
            server.app.create_initialization_options()
        )

if __name__ == "__main__":
    asyncio.run(main())
```
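The `message[8:]` slice in `generate_image` relies on ComfyUI sending binary image frames with an 8-byte header. A minimal sketch of stripping that header, assuming the layout is two big-endian uint32s (an event type, then an image format code) as in ComfyUI's websocket protocol; the constant names and values here are illustrative assumptions:

```python
import struct

# Assumed 8-byte binary frame header: two big-endian uint32s.
PREVIEW_IMAGE_EVENT = 1  # assumed event code
PNG_FORMAT = 2           # assumed format code

def strip_image_header(message: bytes) -> bytes:
    """Return the raw image bytes from a ComfyUI binary frame."""
    if len(message) <= 8:
        raise ValueError("Binary message too short to contain image data")
    event, image_format = struct.unpack(">II", message[:8])
    # A fuller client could branch on event/image_format here; server.py
    # simply discards the header and treats the rest as PNG data.
    return message[8:]

# Simulate a frame: header followed by a (fake) PNG payload.
frame = struct.pack(">II", PREVIEW_IMAGE_EVENT, PNG_FORMAT) + b"\x89PNG..."
image_bytes = strip_image_header(frame)
```

This mirrors the `len(message) > 8` check and `message[8:]` slice in `server.py`, making the header layout explicit.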