# Directory Structure

```
├── .env.example
├── .gitignore
├── chat_handler.py
├── models.py
├── prompts.py
├── pyproject.toml
├── README.md
├── requirements.txt
├── server.py
├── setup.sh
└── storage.py
```

# Files

--------------------------------------------------------------------------------
/.env.example:
--------------------------------------------------------------------------------

```
 1 | # μ-MCP Configuration
 2 | # Copy this file to .env and configure your settings
 3 | 
 4 | # OpenRouter API Key (required)
 5 | # Get yours at: https://openrouter.ai/keys
 6 | OPENROUTER_API_KEY=your_key_here
 7 | 
 8 | # Allowed Models (optional)
 9 | # Comma-separated list of models to enable
10 | # Leave empty to allow all models
11 | # Examples: gpt-5,gemini-2.5-pro,o3,deepseek-r1
12 | # Full list: gpt-5, gpt-5-mini, gpt-4o, o3, o3-mini, o3-mini-high,
13 | #           o4-mini, o4-mini-high, sonnet, opus, haiku,
14 | #           gemini-2.5-pro, gemini-2.5-flash,
15 | #           deepseek-chat, deepseek-r1, grok-4, grok-code-fast-1,
16 | #           qwen3-max
17 | OPENROUTER_ALLOWED_MODELS=
18 | 
19 | # Logging Level (optional)
20 | # Options: DEBUG, INFO, WARNING, ERROR
21 | # Default: INFO
22 | LOG_LEVEL=INFO
```
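
For reference, a minimal sketch of how these settings are consumed at runtime (the server does the equivalent in `models.py` and `server.py` via `python-dotenv`):

```python
import os

from dotenv import load_dotenv

# Read .env into the process environment, mirroring models.py / server.py.
load_dotenv()

api_key = os.getenv("OPENROUTER_API_KEY")              # required
allowed = os.getenv("OPENROUTER_ALLOWED_MODELS", "")   # optional, comma-separated short names
log_level = os.getenv("LOG_LEVEL", "INFO")             # optional

allowed_names = [name.strip().lower() for name in allowed.split(",") if name.strip()]
print(bool(api_key), allowed_names, log_level)
```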

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
  1 | # Python
  2 | __pycache__/
  3 | *.py[cod]
  4 | *$py.class
  5 | *.so
  6 | .Python
  7 | build/
  8 | develop-eggs/
  9 | dist/
 10 | downloads/
 11 | eggs/
 12 | .eggs/
 13 | lib/
 14 | lib64/
 15 | parts/
 16 | sdist/
 17 | var/
 18 | wheels/
 19 | share/python-wheels/
 20 | *.egg-info/
 21 | .installed.cfg
 22 | *.egg
 23 | MANIFEST
 24 | 
 25 | # Virtual Environments
 26 | venv/
 27 | ENV/
 28 | env/
 29 | .venv/
 30 | .env.local
 31 | .env.*.local
 32 | 
 33 | # IDE
 34 | .vscode/
 35 | .idea/
 36 | *.swp
 37 | *.swo
 38 | *~
 39 | .DS_Store
 40 | 
 41 | # Testing
 42 | .coverage
 43 | .pytest_cache/
 44 | .mypy_cache/
 45 | .dmypy.json
 46 | dmypy.json
 47 | htmlcov/
 48 | .tox/
 49 | .nox/
 50 | coverage.xml
 51 | *.cover
 52 | *.log
 53 | 
 54 | # Environment variables
 55 | .env
 56 | .env.local
 57 | .env.*.local
 58 | 
 59 | # Database
 60 | *.db
 61 | *.sqlite
 62 | *.sqlite3
 63 | 
 64 | # Package managers
 65 | uv.lock
 66 | poetry.lock
 67 | Pipfile.lock
 68 | 
 69 | # Serena cache
 70 | .serena/
 71 | 
 72 | # Jupyter Notebook
 73 | .ipynb_checkpoints
 74 | 
 75 | # pyenv
 76 | .python-version
 77 | 
 78 | # Celery
 79 | celerybeat-schedule
 80 | celerybeat.pid
 81 | 
 82 | # SageMath parsed files
 83 | *.sage.py
 84 | 
 85 | # Environments
 86 | .env
 87 | .venv
 88 | env/
 89 | venv/
 90 | ENV/
 91 | env.bak/
 92 | venv.bak/
 93 | 
 94 | # Spyder project settings
 95 | .spyderproject
 96 | .spyproject
 97 | 
 98 | # Rope project settings
 99 | .ropeproject
100 | 
101 | # mkdocs documentation
102 | /site
103 | 
104 | # mypy
105 | .mypy_cache/
106 | .dmypy.json
107 | dmypy.json
108 | 
109 | # Pyre type checker
110 | .pyre/
111 | 
112 | # pytype static type analyzer
113 | .pytype/
114 | 
115 | # Cython debug symbols
116 | cython_debug/
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
  1 | # μ-MCP Server
  2 | 
  3 | **μ** (mu) = micro, minimal - in sardonic contrast to zen-mcp's 10,000+ lines of orchestration.
  4 | 
  5 | A pure MCP server that does one thing well: enable chat with AI models via OpenRouter.
  6 | 
  7 | ## Philosophy
  8 | 
  9 | Following UNIX principles:
 10 | - **Do one thing well**: Provide access to AI models
 11 | - **No hardcoded control flow**: The AI agents decide everything
 12 | - **Minimal interface**: One tool, clean parameters
 13 | - **Persistent state**: Conversations persist across sessions
 14 | - **Model agnostic**: Support any OpenRouter model
 15 | 
 16 | ## Features
 17 | 
 18 | - ✅ **Multi-model conversations** - Switch models mid-conversation
 19 | - ✅ **Persistent storage** - Conversations saved to disk
 20 | - ✅ **Model registry** - Curated models with capabilities
 21 | - ✅ **LLM-driven model selection** - Calling agent picks the best model
 22 | - ✅ **Reasoning effort control** - Simple pass-through to OpenRouter (low/medium/high)
 23 | - ✅ **MCP prompts** - Slash commands `/mu:chat`, `/mu:continue`, `/mu:challenge`, and `/mu:discuss`
 24 | - ✅ **Token-based budgeting** - Smart file truncation
 25 | - ✅ **Proper MIME types** - Correct image format handling
 26 | 
 27 | ## What's NOT Included
 28 | 
 29 | - ❌ Workflow orchestration (let AI decide)
 30 | - ❌ Step tracking (unnecessary complexity)
 31 | - ❌ Confidence levels (trust the models)
 32 | - ❌ Expert validation (models are the experts)
 33 | - ❌ Hardcoded procedures (pure AI agency)
 34 | - ❌ Web search implementation (just ask Claude)
 35 | - ❌ Multiple providers (OpenRouter handles all)
 36 | 
 37 | ## Setup
 38 | 
 39 | ### Quick Install (with uv)
 40 | 
 41 | 1. **Get OpenRouter API key**: https://openrouter.ai/keys
 42 | 
 43 | 2. **Run setup script**:
 44 |    ```bash
 45 |    ./setup.sh
 46 |    ```
 47 |    
 48 |    This will:
 49 |    - Install `uv` if not present (blazing fast Python package manager)
 50 |    - Install dependencies
 51 |    - Show you the Claude Desktop config
 52 | 
 53 | 3. **Add to Claude Desktop config** (`~/.config/claude/claude_desktop_config.json`):
 54 |    ```json
 55 |    {
 56 |      "mcpServers": {
 57 |        "mu-mcp": {
 58 |          "command": "uv",
 59 |          "args": ["--directory", "/path/to/mu-mcp", "run", "python", "/path/to/mu-mcp/server.py"],
 60 |          "env": {
 61 |            "OPENROUTER_API_KEY": "your-key-here"
 62 |          }
 63 |        }
 64 |      }
 65 |    }
 66 |    ```
 67 | 
 68 | 4. **Restart Claude Desktop**
 69 | 
 70 | ### Manual Install (traditional)
 71 | 
 72 | If you prefer pip/venv:
 73 | ```bash
 74 | python3 -m venv venv
 75 | source venv/bin/activate  # On Windows: venv\Scripts\activate
 76 | pip install -r requirements.txt
 77 | ```
 78 | 
 79 | Then use this Claude Desktop config:
 80 | ```json
 81 | {
 82 |   "mcpServers": {
 83 |     "mu-mcp": {
 84 |       "command": "/path/to/mu-mcp/venv/bin/python",
 85 |       "args": ["/path/to/mu-mcp/server.py"],
 86 |       "env": {
 87 |         "OPENROUTER_API_KEY": "your-key-here"
 88 |       }
 89 |     }
 90 |   }
 91 | }
 92 | ```
 93 | ## Usage
 94 | 
 95 | ### Basic Chat
 96 | ```
 97 | /mu:chat
 98 | Then specify model and prompt: "Use gpt-5 to explain quantum computing"
 99 | ```
100 | 
101 | ### Continue Conversations
102 | ```
103 | /mu:continue
104 | Claude sees your recent conversations and can intelligently continue them
105 | Preserves full context even after Claude's memory is compacted
106 | ```
107 | 
108 | **Key Use Case**: Maintain context between different Claude sessions. When Claude's context gets compacted or you need to switch between tasks, `/mu:continue` allows Claude to see all your recent conversations (with titles, timestamps, and models used) and seamlessly resume where you left off. The agent intelligently selects or asks which conversation to continue based on your needs.
109 | 
110 | ### Challenge Mode
111 | ```
112 | /mu:challenge
113 | Encourages critical thinking and avoids reflexive agreement
114 | ```
115 | 
116 | ### Multi-AI Discussion
117 | ```
118 | /mu:discuss
119 | Orchestrate multi-turn discussions among diverse AI models
120 | ```
121 | 
122 | ### Model Selection
123 | ```
124 | Chat with GPT-5 about code optimization
125 | Chat with O3 Mini High for complex reasoning
126 | Chat with DeepSeek R1 for systematic analysis  
127 | Chat with Claude about API design
128 | ```
129 | 
130 | ### Reasoning Effort Control
131 | ```
132 | Chat with o3-mini using high reasoning effort for complex problems
133 | Chat with gpt-5 using low reasoning effort for quick responses
134 | Chat with o4-mini-high using medium reasoning effort for balanced analysis
135 | ```
136 | 
137 | Note: Reasoning effort is automatically ignored by models that don't support it.
138 | 
139 | ### With Files and Images
140 | ```
141 | Review this code: /path/to/file.py
142 | Analyze this diagram: /path/to/image.png
143 | ```
144 | 
145 | ### Model Selection by LLM
146 | The calling LLM agent (Claude) sees all available models with their descriptions and capabilities, allowing intelligent selection based on:
147 | - Task requirements and complexity
148 | - Performance vs cost trade-offs  
149 | - Specific model strengths
150 | - Context window needs
151 | - Image support requirements
152 | 
153 | ## Architecture
154 | 
155 | ```
156 | server.py         # MCP server with prompt handlers
157 | chat_handler.py   # Chat logic with multi-model support
158 | models.py         # Model registry and capabilities
159 | prompts.py        # System prompts for peer AI collaboration
160 | storage.py        # Persistent conversation storage
161 | .env.example      # Configuration template
162 | ```
163 | 
164 | ## Configuration
165 | 
166 | ### Environment Variables
167 | 
168 | - `OPENROUTER_API_KEY` - Your OpenRouter API key (required)
169 | - `OPENROUTER_ALLOWED_MODELS` - Comma-separated list of allowed models (optional)
170 | - `LOG_LEVEL` - Logging verbosity (DEBUG, INFO, WARNING, ERROR)
171 | 
172 | ## Why μ-MCP?
173 | 
174 | ### The Problem with zen-mcp-server
175 | 
176 | zen-mcp grew to **10,000+ lines** trying to control AI behavior:
177 | - 15+ specialized tools with overlapping functionality
178 | - Complex workflow orchestration that limits AI agency
179 | - Hardcoded decision trees that prescribe solutions
180 | - "Step tracking" and "confidence levels" that add noise
181 | - Redundant schema fields and validation layers
182 | 
183 | ### The μ-MCP Approach
184 | 
185 | **Less code, more capability**:
186 | - Single tool that does one thing perfectly
187 | - AI agents make all decisions
188 | - Clean, persistent conversation state
189 | - Model capabilities, not hardcoded behaviors
190 | - Trust in AI intelligence over procedural control
191 | 
192 | ### Philosophical Difference
193 | 
194 | - **zen-mcp**: "Let me orchestrate 12 steps for you to debug this code"
195 | - **μ-mcp**: "Here's the model catalog. Pick what you need."
196 | 
197 | The best tool is the one that gets out of the way.
198 | 
199 | ## Related Projects
200 | 
201 | - [zen-mcp-server](https://github.com/winnative/zen-mcp-server) - The bloated alternative we're reacting against
202 | 
203 | ## License
204 | 
205 | MIT
```
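
For a programmatic smoke test outside Claude Desktop, here is a minimal sketch using the `mcp` Python SDK's stdio client; the working directory, API key, prompt, and title are placeholders, not part of this repo:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main():
    # Launch server.py as a subprocess and speak MCP over stdio.
    params = StdioServerParameters(
        command="python",
        args=["server.py"],
        env={"OPENROUTER_API_KEY": "your-key-here"},  # placeholder
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "chat",
                arguments={
                    "prompt": "Explain quantum computing in two sentences.",
                    "model": "gpt-5",
                    "title": "Quantum computing intro",
                },
            )
            # The chat tool returns one TextContent with a JSON payload.
            print(result.content[0].text)


asyncio.run(main())
```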

--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------

```
1 | mcp>=1.0.0
2 | aiohttp>=3.9.0
3 | python-dotenv>=1.0.0
```

--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------

```toml
 1 | [project]
 2 | name = "mu-mcp"
 3 | version = "2.0.0"
 4 | description = "Minimal MCP server for AI model interactions via OpenRouter"
 5 | requires-python = ">=3.10"
 6 | dependencies = [
 7 |     "mcp>=1.0.0",
 8 |     "aiohttp>=3.9.0",
 9 |     "python-dotenv>=1.0.0",
10 | ]
11 | 
12 | [tool.uv]
13 | dev-dependencies = []
14 | 
15 | [project.scripts]
16 | mu-mcp = "server:main"
```
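
Note: `server:main` is an `async def`, so the `mu-mcp` console script as declared would create a coroutine without actually running the server. A minimal sketch of the usual fix, assuming a hypothetical synchronous `cli` wrapper in `server.py` and the script entry pointing at `server:cli` instead:

```python
# Sketch of an additional entry point in server.py (the `cli` name is hypothetical).
import asyncio


def cli() -> None:
    """Synchronous wrapper so a console script can run the async main()."""
    asyncio.run(main())
```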

--------------------------------------------------------------------------------
/setup.sh:
--------------------------------------------------------------------------------

```bash
 1 | #!/bin/bash
 2 | 
 3 | echo "🚀 μ-MCP Server Setup (with uv)"
 4 | echo "=============================="
 5 | 
 6 | # Check if uv is installed
 7 | if ! command -v uv &> /dev/null; then
 8 |     echo "📦 Installing uv..."
 9 |     curl -LsSf https://astral.sh/uv/install.sh | sh
10 |     
11 |     # Add to PATH for current session
12 |     export PATH="$HOME/.local/bin:$HOME/.cargo/bin:$PATH"
13 |     
14 |     echo "✅ uv installed successfully"
15 | else
16 |     echo "✅ uv is already installed"
17 | fi
18 | 
19 | # Install dependencies using uv
20 | echo ""
21 | echo "📥 Installing dependencies..."
22 | uv pip sync requirements.txt
23 | 
24 | # Check for API key
25 | if [ -z "$OPENROUTER_API_KEY" ]; then
26 |     echo ""
27 |     echo "⚠️  OPENROUTER_API_KEY not set!"
28 |     echo "Please add to your shell profile:"
29 |     echo "export OPENROUTER_API_KEY='your-api-key'"
30 |     echo ""
31 |     echo "Get your API key at: https://openrouter.ai/keys"
32 | else
33 |     echo "✅ OpenRouter API key found"
34 | fi
35 | 
36 | # Create MCP config
37 | echo ""
38 | echo "📝 Add to Claude Desktop config (~/.config/claude/claude_desktop_config.json):"
39 | echo ""
40 | cat << EOF
41 | {
42 |   "mcpServers": {
43 |     "mu-mcp": {
44 |       "command": "uv",
45 |       "args": ["--directory", "$(pwd)", "run", "python", "$(pwd)/server.py"],
46 |       "env": {
47 |         "OPENROUTER_API_KEY": "\$OPENROUTER_API_KEY"
48 |       }
49 |     }
50 |   }
51 | }
52 | EOF
53 | 
54 | echo ""
55 | echo "✅ Setup complete! Restart Claude Desktop to use μ-MCP server."
56 | echo ""
57 | echo "To test the server manually, run:"
58 | echo "  uv run python server.py"
```

--------------------------------------------------------------------------------
/prompts.py:
--------------------------------------------------------------------------------

```python
 1 | """System prompts for μ-MCP."""
 2 | 
 3 | 
 4 | def get_llm_system_prompt(model_name: str = None) -> str:
 5 |     """
 6 |     System prompt for the LLM being called.
 7 |     Modern, direct, without childish "you are" patterns.
 8 |     """
 9 |     return """Collaborate as a technical peer with Claude, the AI agent requesting assistance.
10 | 
11 | Core principles:
12 | - Provide expert analysis and alternative perspectives
13 | - Challenge assumptions constructively when warranted
14 | - Share implementation details and edge cases
15 | - Acknowledge uncertainty rather than guessing
16 | 
17 | When additional context would strengthen your response:
18 | - Request Claude perform web searches for current documentation
19 | - Ask Claude to provide specific files or code sections
20 | 
21 | Format code with proper syntax highlighting.
22 | Maintain technical precision over conversational comfort.
23 | Skip unnecessary preambles - dive directly into substance."""
24 | 
25 | 
26 | def get_request_wrapper() -> str:
27 |     """
28 |     Wrapper text to inform the peer AI that this request is from Claude.
29 |     """
30 |     return """
31 | 
32 | ---
33 | 
34 | REQUEST FROM CLAUDE: The following query comes from Claude, an AI assistant seeking peer collaboration."""
35 | 
36 | 
37 | def get_response_wrapper(model_name: str) -> str:
38 |     """
39 |     Wrapper text for Claude to understand this is another AI's perspective.
40 |     
41 |     Args:
42 |         model_name: Short model name (e.g., "gpt-5", "sonnet")
43 |     """
44 |     # Format short name for display (e.g., "gpt-5" -> "GPT 5")
45 |     display_name = model_name.upper().replace("-", " ")
46 |     return f"""
47 | 
48 | ---
49 | 
50 | PEER AI RESPONSE ({display_name}): Evaluate this perspective critically and integrate valuable insights."""
51 | 
52 | 
53 | def get_agent_tool_description() -> str:
54 |     """
55 |     Description for the calling agent (Claude) about how to use this tool.
56 |     """
57 |     return """Direct access to state-of-the-art AI models via OpenRouter.
58 | 
59 | Provide EXACTLY ONE:
60 | - title: Start fresh (when switching topics, context too long, or isolating model contexts)
61 | - continuation_id: Continue existing conversation (preserves full context)
62 | 
63 | When starting fresh: Model has no context - include background details or attach files
64 | When continuing: Model has conversation history - don't repeat context
65 | 
66 | FILE ATTACHMENT BEST PRACTICES:
67 | - Proactively attach relevant files when starting new conversations for context
68 | - For long content (git diffs, logs, terminal output), save to a file and attach it rather than pasting verbatim in prompt
69 | - Files are processed more efficiently and precisely than inline text"""
```
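
A quick illustration of how the wrappers compose around a message (outputs shown as comments; not part of the module):

```python
from prompts import get_request_wrapper, get_response_wrapper

# The request wrapper is appended to the outgoing prompt...
outbound = "How should we shard this index?" + get_request_wrapper()

# ...and the response wrapper is appended to the model's reply, tagged with
# a display name derived from the short model key ("gpt-5" -> "GPT 5").
inbound = "Shard by tenant id." + get_response_wrapper("gpt-5")
# -> "...PEER AI RESPONSE (GPT 5): Evaluate this perspective critically..."

print(outbound)
print(inbound)
```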

--------------------------------------------------------------------------------
/models.py:
--------------------------------------------------------------------------------

```python
  1 | """OpenRouter model registry and capabilities."""
  2 | 
  3 | import os
  4 | from dataclasses import dataclass
  5 | from typing import Optional
  6 | 
  7 | # Load environment variables from .env file
  8 | from dotenv import load_dotenv
  9 | load_dotenv()
 10 | 
 11 | 
 12 | @dataclass
 13 | class ModelCapabilities:
 14 |     """Model metadata for routing and selection."""
 15 |     
 16 |     name: str  # Full OpenRouter model path
 17 |     description: str  # What the model is best for
 18 | 
 19 | 
 20 | # OpenRouter model registry - popular models with good support
 21 | OPENROUTER_MODELS = {
 22 |     # OpenAI Models
 23 |     "gpt-5": ModelCapabilities(
 24 |         name="openai/gpt-5",
 25 |         description="Most advanced OpenAI model with extended context. Excels at complex reasoning, coding, and multimodal understanding",
 26 |     ),
 27 |     "gpt-5-mini": ModelCapabilities(
 28 |         name="openai/gpt-5-mini",
 29 |         description="Efficient GPT-5 variant. Balances performance and cost for general-purpose tasks",
 30 |     ),
 31 |     "gpt-4o": ModelCapabilities(
 32 |         name="openai/gpt-4o",
 33 |         description="Multimodal model supporting text, image, audio, and video. Strong at creative writing and following complex instructions",
 34 |     ),
 35 |     "o3": ModelCapabilities(
 36 |         name="openai/o3",
 37 |         description="Advanced reasoning model with tool integration and visual reasoning. Excels at mathematical proofs and complex problem-solving",
 38 |     ),
 39 |     "o3-mini": ModelCapabilities(
 40 |         name="openai/o3-mini",
 41 |         description="Production-ready small reasoning model with function calling and structured outputs. Good for systematic problem-solving",
 42 |     ),
 43 |     "o3-mini-high": ModelCapabilities(
 44 |         name="openai/o3-mini-high",
 45 |         description="Enhanced O3 Mini with deeper reasoning, better accuracy vs standard O3 Mini",
 46 |     ),
 47 |     "o4-mini": ModelCapabilities(
 48 |         name="openai/o4-mini",
 49 |         description="Fast reasoning model optimized for speed. Exceptional at math, coding, and visual tasks with tool support",
 50 |     ),
 51 |     "o4-mini-high": ModelCapabilities(
 52 |         name="openai/o4-mini-high",
 53 |         description="Premium O4 Mini variant with enhanced reasoning depth and accuracy",
 54 |     ),
 55 |     
 56 |     # Anthropic Models
 57 |     "sonnet": ModelCapabilities(
 58 |         name="anthropic/claude-sonnet-4",
 59 |         description="Industry-leading coding model with superior instruction following. Excellent for software development and technical writing",
 60 |     ),
 61 |     "opus": ModelCapabilities(
 62 |         name="anthropic/claude-opus-4.1",
 63 |         description="Most capable Claude model for sustained complex work. Strongest at deep analysis and long-running tasks",
 64 |     ),
 65 |     "haiku": ModelCapabilities(
 66 |         name="anthropic/claude-3.5-haiku",
 67 |         description="Fast, efficient model matching previous flagship performance. Great for high-volume, quick-response scenarios",
 68 |     ),
 69 |     
 70 |     # Google Models
 71 |     "gemini-2.5-pro": ModelCapabilities(
 72 |         name="google/gemini-2.5-pro",
 73 |         description="Massive context window with thinking mode. Best for analyzing huge datasets, codebases, and STEM reasoning",
 74 |     ),
 75 |     "gemini-2.5-flash": ModelCapabilities(
 76 |         name="google/gemini-2.5-flash",
 77 |         description="Best price-performance with thinking capabilities. Ideal for high-volume tasks with multimodal and multilingual support",
 78 |     ),
 79 |     
 80 |     # DeepSeek Models
 81 |     "deepseek-chat": ModelCapabilities(
 82 |         name="deepseek/deepseek-chat-v3.1",
 83 |         description="Hybrid model switching between reasoning and direct modes. Strong multilingual support and code completion",
 84 |     ),
 85 |     "deepseek-r1": ModelCapabilities(
 86 |         name="deepseek/deepseek-r1",
 87 |         description="Open-source reasoning model with exceptional math capabilities. Highly cost-effective for complex reasoning tasks",
 88 |     ),
 89 |     
 90 |     # X.AI Models
 91 |     "grok-4": ModelCapabilities(
 92 |         name="x-ai/grok-4",
 93 |         description="Multimodal model with strong reasoning and analysis capabilities. Excellent at complex problem-solving and scientific tasks",
 94 |     ),
 95 |     "grok-code-fast-1": ModelCapabilities(
 96 |         name="x-ai/grok-code-fast-1",
 97 |         description="Ultra-fast coding specialist optimized for IDE integration. Best for rapid code generation and bug fixes",
 98 |     ),
 99 |     
100 |     # Qwen Models
101 |     "qwen3-max": ModelCapabilities(
102 |         name="qwen/qwen3-max",
103 |         description="Trillion-parameter model with ultra-long context. Excels at complex reasoning, structured data, and creative tasks",
104 |     ),
105 | }
106 | 
107 | 
108 | def get_allowed_models() -> dict[str, ModelCapabilities]:
109 |     """Get models filtered by OPENROUTER_ALLOWED_MODELS env var."""
110 |     allowed = os.getenv("OPENROUTER_ALLOWED_MODELS", "")
111 |     
112 |     if not allowed:
113 |         # No restrictions, return all models
114 |         return OPENROUTER_MODELS
115 |     
116 |     # Parse comma-separated list
117 |     allowed_names = [name.strip().lower() for name in allowed.split(",")]
118 |     filtered = {}
119 |     
120 |     for key, model in OPENROUTER_MODELS.items():
121 |         # Check main key
122 |         if key.lower() in allowed_names:
123 |             filtered[key] = model
124 |             continue
125 |                 
126 |         # Check full model name
127 |         if model.name.split("/")[-1].lower() in allowed_names:
128 |             filtered[key] = model
129 |     
130 |     return filtered
131 | 
132 | 
133 | def resolve_model(name: str) -> Optional[str]:
134 |     """Resolve a model name to the full OpenRouter model path."""
135 |     if not name:
136 |         return None
137 |         
138 |     name_lower = name.lower()
139 |     
140 |     # Check if it's already a full path
141 |     if "/" in name:
142 |         return name
143 |     
144 |     # Check available models
145 |     models = get_allowed_models()
146 |     
147 |     # Direct key match
148 |     if name_lower in models:
149 |         return models[name_lower].name
150 |     
151 |     # Check by model name suffix
152 |     for key, model in models.items():
153 |         if model.name.endswith(f"/{name_lower}"):
154 |             return model.name
155 |     
156 |     return None
157 | 
158 | 
159 | def get_short_name(full_name: str) -> Optional[str]:
160 |     """Get the short name (key) for a full model path.
161 |     
162 |     Args:
163 |         full_name: Full OpenRouter model path (e.g., "openai/gpt-5")
164 |         
165 |     Returns:
166 |         Short name key (e.g., "gpt-5") or None if not found
167 |     """
168 |     if not full_name:
169 |         return None
170 |     
171 |     # Check available models for matching full name
172 |     models = get_allowed_models()
173 |     
174 |     for key, model in models.items():
175 |         if model.name == full_name:
176 |             return key
177 |     
178 |     # If not found in registry, return None
179 |     # This handles cases where a custom full path was used
180 |     return None
181 | 
```
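
A short usage sketch of the registry helpers (the `OPENROUTER_ALLOWED_MODELS` value here is illustrative):

```python
import os

from models import get_allowed_models, get_short_name, resolve_model

# Restrict the registry to a subset, as the env var would.
os.environ["OPENROUTER_ALLOWED_MODELS"] = "gpt-5,sonnet,deepseek-r1"
print(sorted(get_allowed_models()))            # ['deepseek-r1', 'gpt-5', 'sonnet']

# Short names resolve to full OpenRouter paths; full paths pass through unchanged.
print(resolve_model("gpt-5"))                  # openai/gpt-5
print(resolve_model("openai/gpt-4o"))          # openai/gpt-4o

# Full paths map back to short names when they are in the registry.
print(get_short_name("anthropic/claude-sonnet-4"))  # sonnet
```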

--------------------------------------------------------------------------------
/chat_handler.py:
--------------------------------------------------------------------------------

```python
  1 | """Chat handler for μ-MCP."""
  2 | 
  3 | import base64
  4 | import logging
  5 | import mimetypes
  6 | import os
  7 | import uuid
  8 | from pathlib import Path
  9 | from typing import Optional, Union
 10 | 
 11 | import aiohttp
 12 | 
 13 | from models import (
 14 |     get_allowed_models,
 15 |     resolve_model,
 16 |     get_short_name,
 17 | )
 18 | from prompts import (
 19 |     get_llm_system_prompt,
 20 |     get_response_wrapper,
 21 |     get_request_wrapper,
 22 | )
 23 | from storage import ConversationStorage
 24 | 
 25 | logger = logging.getLogger(__name__)
 26 | 
 27 | 
 28 | class ChatHandler:
 29 |     """Handle chat interactions with OpenRouter models."""
 30 | 
 31 |     def __init__(self):
 32 |         self.api_key = os.getenv("OPENROUTER_API_KEY")
 33 |         if not self.api_key:
 34 |             raise ValueError("OPENROUTER_API_KEY environment variable not set")
 35 |         
 36 |         self.base_url = "https://openrouter.ai/api/v1/chat/completions"
 37 |         
 38 |         # Initialize persistent storage with default directory
 39 |         self.storage = ConversationStorage()
 40 | 
 41 |     async def chat(
 42 |         self,
 43 |         prompt: str,
 44 |         model: str,  # Now required
 45 |         title: Optional[str] = None,
 46 |         continuation_id: Optional[str] = None,
 47 |         files: Optional[list[str]] = None,
 48 |         images: Optional[list[str]] = None,
 49 |         reasoning_effort: Optional[str] = "medium",
 50 |     ) -> dict:
 51 |         """
 52 |         Chat with an AI model.
 53 |         
 54 |         Args:
 55 |             prompt: The user's message
 56 |             model: Model name (required)
 57 |             title: Title for a new conversation (provide this OR continuation_id, not both)
 58 |             continuation_id: UUID to continue existing conversation (provide this OR title, not both)
 59 |             files: List of file paths to include
 60 |             images: List of image paths to include
 61 |             reasoning_effort: Reasoning depth - "low", "medium", or "high" (for models that support it)
 62 |         
 63 |         Returns dict with:
 64 |         - content: The model's response with wrapper
 65 |         - continuation_id: UUID for continuing this conversation
 66 |         - model_used: The actual model that was used
 67 |         """
 68 |         # Resolve model name/alias
 69 |         resolved_model = resolve_model(model)
 70 |         if not resolved_model:
 71 |             # If not found in registry, use as-is (might be full path)
 72 |             resolved_model = model
 73 |         
 74 |         # Validate: exactly one of title or continuation_id must be provided
 75 |         if (title and continuation_id):
 76 |             return {
 77 |                 "error": "Cannot provide both 'title' and 'continuation_id'. Use 'title' for new conversations or 'continuation_id' to continue existing ones.",
 78 |                 "continuation_id": None,
 79 |                 "model_used": None,
 80 |             }
 81 |         
 82 |         if (not title and not continuation_id):
 83 |             return {
 84 |                 "error": "Must provide either 'title' for a new conversation or 'continuation_id' to continue an existing one.",
 85 |                 "continuation_id": None,
 86 |                 "model_used": None,
 87 |             }
 88 |         
 89 |         # Get or create conversation
 90 |         messages_with_metadata = []
 91 |         if continuation_id:
 92 |             # Try to load from persistent storage
 93 |             conversation_data = self.storage.load_conversation(continuation_id)
 94 |             if conversation_data:
 95 |                 messages_with_metadata = conversation_data.get("messages", [])
 96 |             else:
 97 |                 # Fail fast - conversation not found
 98 |                 return {
 99 |                     "error": f"Conversation {continuation_id} not found. Please start a new conversation or use a valid continuation_id.",
100 |                     "continuation_id": None,
101 |                     "model_used": None,
102 |                 }
103 |         else:
104 |             # New conversation with title provided
105 |             continuation_id = str(uuid.uuid4())
106 | 
107 |         # Build the user message with metadata and request wrapper
108 |         wrapped_prompt = prompt + get_request_wrapper()
109 |         user_content = self._build_user_content(wrapped_prompt, files, images)
110 |         user_message = self.storage.add_metadata_to_message(
111 |             {"role": "user", "content": user_content},
112 |             {"target_model": resolved_model}
113 |         )
114 |         messages_with_metadata.append(user_message)
115 |         
116 |         # Get clean messages for API (without metadata)
117 |         api_messages = self.storage.get_messages_for_api(messages_with_metadata)
118 |         
119 |         # Add system prompt for the LLM
120 |         system_prompt = get_llm_system_prompt(resolved_model)
121 |         api_messages.insert(0, {"role": "system", "content": system_prompt})
122 | 
123 |         # Make API call
124 |         response_text = await self._call_openrouter(
125 |             api_messages, resolved_model, reasoning_effort
126 |         )
127 | 
128 |         # Add assistant response with metadata
129 |         assistant_message = self.storage.add_metadata_to_message(
130 |             {"role": "assistant", "content": response_text},
131 |             {"model": resolved_model, "model_used": resolved_model}
132 |         )
133 |         messages_with_metadata.append(assistant_message)
134 |         
135 |         # Save conversation to persistent storage
136 |         # Pass title only for new conversations (when title was provided)
137 |         self.storage.save_conversation(
138 |             continuation_id,
139 |             messages_with_metadata,
140 |             {"models_used": [resolved_model]},
141 |             title=title  # Will be None for continuations, actual title for new conversations
142 |         )
143 |         
144 |         # Get short name for agent interface
145 |         short_name = get_short_name(resolved_model)
146 |         # Fall back to resolved model if not in registry (custom path)
147 |         display_name = short_name if short_name else resolved_model
148 |         
149 |         # Add response wrapper for Claude with model identification
150 |         wrapped_response = response_text + get_response_wrapper(display_name)
151 | 
152 |         return {
153 |             "content": wrapped_response,
154 |             "continuation_id": continuation_id,
155 |             "model_used": display_name,
156 |         }
157 | 
158 |     def _build_user_content(
159 |         self, prompt: str, files: Optional[list[str]], images: Optional[list[str]]
160 |     ) -> str | list:
161 |         """Build user message content with files and images."""
162 |         content_parts = []
163 | 
164 |         # Add main prompt
165 |         content_parts.append({"type": "text", "text": prompt})
166 | 
167 |         # Add files as text
168 |         if files:
169 |             file_content = self._read_files(files)
170 |             if file_content:
171 |                 content_parts.append({"type": "text", "text": f"\n\nFiles:\n{file_content}"})
172 | 
173 |         # Add images as base64 with proper MIME type
174 |         if images:
175 |             for image_path in images:
176 |                 result = self._encode_image(image_path)
177 |                 if result:
178 |                     encoded_data, mime_type = result
179 |                     content_parts.append(
180 |                         {
181 |                             "type": "image_url",
182 |                             "image_url": {"url": f"data:{mime_type};base64,{encoded_data}"},
183 |                         }
184 |                     )
185 | 
186 |         # If only text, return string; otherwise return multi-part content
187 |         if len(content_parts) == 1:
188 |             return prompt
189 |         return content_parts
190 | 
191 |     def _read_files(self, file_paths: list[str]) -> str:
192 |         """Read and combine file contents with token-based budgeting."""
193 |         contents = []
194 |         # Simple token estimation: ~4 chars per token
195 |         # Reserve tokens for prompt and response
196 |         max_file_tokens = 50_000  # ~200k chars
197 |         total_tokens = 0
198 | 
199 |         for file_path in file_paths:
200 |             try:
201 |                 path = Path(file_path)
202 |                 if path.exists() and path.is_file():
203 |                     content = path.read_text(errors="ignore")
204 |                     # Estimate tokens
205 |                     file_tokens = len(content) // 4
206 |                     
207 |                     if total_tokens + file_tokens > max_file_tokens:
208 |                         # Truncate if needed
209 |                         remaining_tokens = max_file_tokens - total_tokens
210 |                         if remaining_tokens > 100:  # Worth including partial
211 |                             char_limit = remaining_tokens * 4
212 |                             content = content[:char_limit] + "\n[File truncated]"
213 |                             contents.append(f"\n--- {file_path} ---\n{content}")
214 |                         break
215 |                     
216 |                     contents.append(f"\n--- {file_path} ---\n{content}")
217 |                     total_tokens += file_tokens
218 |             except Exception as e:
219 |                 logger.warning(f"Could not read file {file_path}: {e}")
220 | 
221 |         return "".join(contents)
222 | 
223 |     def _encode_image(self, image_path: str) -> Optional[tuple[str, str]]:
224 |         """Encode image to base64 with proper MIME type."""
225 |         try:
226 |             path = Path(image_path)
227 |             if path.exists() and path.is_file():
228 |                 # Detect MIME type
229 |                 mime_type, _ = mimetypes.guess_type(str(path))
230 |                 if not mime_type or not mime_type.startswith('image/'):
231 |                     # Default to JPEG for unknown types
232 |                     mime_type = 'image/jpeg'
233 |                 
234 |                 with open(path, "rb") as f:
235 |                     encoded = base64.b64encode(f.read()).decode("utf-8")
236 |                     return encoded, mime_type
237 |         except Exception as e:
238 |             logger.warning(f"Could not encode image {image_path}: {e}")
239 |         return None
240 | 
241 |     async def _call_openrouter(
242 |         self,
243 |         messages: list,
244 |         model: str,
245 |         reasoning_effort: Optional[str],
246 |     ) -> str:
247 |         """Make API call to OpenRouter."""
248 |         headers = {
249 |             "Authorization": f"Bearer {self.api_key}",
250 |             "Content-Type": "application/json",
251 |             "HTTP-Referer": "https://github.com/mu-mcp",
252 |             "X-Title": "μ-MCP Server",
253 |         }
254 | 
255 |         data = {
256 |             "model": model,
257 |             "messages": messages,
258 |         }
259 | 
260 |         # Add reasoning effort if specified
261 |         # OpenRouter will automatically ignore this for non-reasoning models
262 |         if reasoning_effort:
263 |             data["reasoning"] = {
264 |                 "effort": reasoning_effort  # "low", "medium", or "high"
265 |             }
266 | 
267 |         async with aiohttp.ClientSession() as session:
268 |             async with session.post(self.base_url, headers=headers, json=data) as response:
269 |                 if response.status != 200:
270 |                     error_text = await response.text()
271 |                     raise Exception(f"OpenRouter API error: {response.status} - {error_text}")
272 | 
273 |                 result = await response.json()
274 |                 return result["choices"][0]["message"]["content"]
275 | 
```
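
A minimal usage sketch (requires `OPENROUTER_API_KEY` in the environment; the prompt, title, and file path are placeholders):

```python
import asyncio

from chat_handler import ChatHandler


async def demo():
    handler = ChatHandler()

    # Start a new conversation: provide exactly one of title / continuation_id.
    first = await handler.chat(
        prompt="Review this module for error handling gaps.",
        model="sonnet",
        title="Error handling review",
        files=["/absolute/path/to/module.py"],  # placeholder path
        reasoning_effort="high",
    )
    print(first["model_used"], first["continuation_id"])

    # Continue the same conversation; history is loaded from ~/.mu-mcp/conversations.
    follow_up = await handler.chat(
        prompt="Which of those issues is most urgent?",
        model="sonnet",
        continuation_id=first["continuation_id"],
    )
    print(follow_up["content"])


asyncio.run(demo())
```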

--------------------------------------------------------------------------------
/storage.py:
--------------------------------------------------------------------------------

```python
  1 | """Persistent conversation storage for μ-MCP."""
  2 | 
  3 | import json
  4 | import os
  5 | from datetime import datetime
  6 | from pathlib import Path
  7 | from typing import Optional
  8 | import logging
  9 | 
 10 | from models import get_short_name
 11 | 
 12 | logger = logging.getLogger(__name__)
 13 | 
 14 | 
 15 | class ConversationStorage:
 16 |     """Handles persistent storage of multi-model conversations."""
 17 |     
 18 |     def __init__(self):
 19 |         """Initialize storage with default directory."""
 20 |         # Always use default ~/.mu-mcp/conversations
 21 |         self.storage_dir = Path.home() / ".mu-mcp" / "conversations"
 22 |         
 23 |         # Create directory if it doesn't exist
 24 |         self.storage_dir.mkdir(parents=True, exist_ok=True)
 25 |         
 26 |         # In-memory cache for conversations (no limit within MCP lifecycle)
 27 |         self._cache = {}
 28 |         
 29 |         # Track last conversation for "continue" command
 30 |         self._last_conversation_id = None
 31 |         self._last_model_used = None
 32 |         
 33 |         logger.info(f"Conversation storage initialized at: {self.storage_dir}")
 34 |     
 35 |     def save_conversation(self, conversation_id: str, messages: list, 
 36 |                          model_metadata: Optional[dict] = None, title: Optional[str] = None) -> bool:
 37 |         """
 38 |         Save a conversation to disk and update cache.
 39 |         
 40 |         Args:
 41 |             conversation_id: Unique conversation identifier
 42 |             messages: List of message dicts with role and content
 43 |             model_metadata: Optional metadata about models used
 44 |             title: Optional conversation title
 45 |         
 46 |         Returns:
 47 |             True if saved successfully
 48 |         """
 49 |         try:
 50 |             file_path = self.storage_dir / f"{conversation_id}.json"
 51 |             
 52 |             # Check if conversation exists to determine created time
 53 |             existing = {}
 54 |             if file_path.exists():
 55 |                 with open(file_path, "r") as f:
 56 |                     existing = json.load(f)
 57 |                     created = existing.get("created")
 58 |             else:
 59 |                 created = datetime.utcnow().isoformat()
 60 |             
 61 |             # Prepare conversation data
 62 |             conversation_data = {
 63 |                 "id": conversation_id,
 64 |                 "created": created,
 65 |                 "updated": datetime.utcnow().isoformat(),
 66 |                 "messages": messages,
 67 |             }
 68 |             
 69 |             # Add title if provided or preserve existing title
 70 |             if title:
 71 |                 conversation_data["title"] = title
 72 |             elif "title" in existing:
 73 |                 conversation_data["title"] = existing["title"]
 74 |             
 75 |             # Add model metadata if provided
 76 |             if model_metadata:
 77 |                 conversation_data["model_metadata"] = model_metadata
 78 |             
 79 |             # Write to file
 80 |             with open(file_path, "w") as f:
 81 |                 json.dump(conversation_data, f, indent=2)
 82 |             
 83 |             # Update cache (write-through)
 84 |             self._cache[conversation_id] = conversation_data
 85 |             
 86 |             # Update last conversation tracking
 87 |             self._last_conversation_id = conversation_id
 88 |             # Extract the last model used from messages or metadata
 89 |             last_full_name = None
 90 |             if model_metadata and "models_used" in model_metadata:
 91 |                 last_full_name = model_metadata["models_used"][-1] if model_metadata["models_used"] else None
 92 |             else:
 93 |                 # Try to extract from the last assistant message
 94 |                 for msg in reversed(messages):
 95 |                     if msg.get("role") == "assistant":
 96 |                         metadata = msg.get("metadata", {})
 97 |                         if "model" in metadata:
 98 |                             last_full_name = metadata["model"]
 99 |                             break
100 |             
101 |             # Convert to short name for agent interface
102 |             if last_full_name:
103 |                 short_name = get_short_name(last_full_name)
104 |                 self._last_model_used = short_name if short_name else last_full_name
105 |             else:
106 |                 self._last_model_used = None
107 |             
108 |             logger.debug(f"Saved conversation {conversation_id} with {len(messages)} messages")
109 |             return True
110 |             
111 |         except Exception as e:
112 |             logger.error(f"Failed to save conversation {conversation_id}: {e}")
113 |             return False
114 |     
115 |     def load_conversation(self, conversation_id: str) -> Optional[dict]:
116 |         """
117 |         Load a conversation from cache or disk.
118 |         
119 |         Args:
120 |             conversation_id: Unique conversation identifier
121 |         
122 |         Returns:
123 |             Conversation data dict or None if not found
124 |         """
125 |         # Check cache first
126 |         if conversation_id in self._cache:
127 |             data = self._cache[conversation_id]
128 |             logger.debug(f"Loaded conversation {conversation_id} from cache with {len(data.get('messages', []))} messages")
129 |             return data
130 |         
131 |         # Not in cache, try loading from disk
132 |         try:
133 |             file_path = self.storage_dir / f"{conversation_id}.json"
134 |             
135 |             if not file_path.exists():
136 |                 logger.debug(f"Conversation {conversation_id} not found")
137 |                 return None
138 |             
139 |             with open(file_path, "r") as f:
140 |                 data = json.load(f)
141 |             
142 |             # Add to cache for future access
143 |             self._cache[conversation_id] = data
144 |             
145 |             logger.debug(f"Loaded conversation {conversation_id} from disk with {len(data.get('messages', []))} messages")
146 |             return data
147 |             
148 |         except Exception as e:
149 |             logger.error(f"Failed to load conversation {conversation_id}: {e}")
150 |             return None
151 |     
152 |     def get_last_conversation_info(self) -> tuple[Optional[str], Optional[str]]:
153 |         """
154 |         Get the last conversation ID and model used.
155 |         
156 |         Returns:
157 |             Tuple of (conversation_id, model_used) or (None, None) if no conversations
158 |         """
159 |         return self._last_conversation_id, self._last_model_used
160 |     
161 |     
162 |     def get_messages_for_api(self, messages: list) -> list:
163 |         """
164 |         Extract just role and content for API calls.
165 |         Strips metadata that OpenRouter doesn't understand.
166 |         
167 |         Args:
168 |             messages: List of message dicts potentially with metadata
169 |         
170 |         Returns:
171 |             Clean list of messages for API
172 |         """
173 |         clean_messages = []
174 |         
175 |         for msg in messages:
176 |             # Only include role and content for API
177 |             clean_msg = {
178 |                 "role": msg.get("role"),
179 |                 "content": msg.get("content")
180 |             }
181 |             clean_messages.append(clean_msg)
182 |         
183 |         return clean_messages
184 |     
185 |     def add_metadata_to_message(self, message: dict, metadata: dict) -> dict:
186 |         """
187 |         Add metadata to a message for storage.
188 |         
189 |         Args:
190 |             message: Basic message dict with role and content
191 |             metadata: Metadata to add (timestamp, model, etc.)
192 |         
193 |         Returns:
194 |             Message with metadata added
195 |         """
196 |         return {
197 |             **message,
198 |             "metadata": {
199 |                 "timestamp": datetime.utcnow().isoformat(),
200 |                 **metadata
201 |             }
202 |         }
203 |     
204 |     def list_recent_conversations(self, limit: int = 20) -> list[dict]:
205 |         """
206 |         List the most recently updated conversations.
207 |         
208 |         Args:
209 |             limit: Maximum number of conversations to return
210 |         
211 |         Returns:
212 |             List of conversation summaries sorted by update time (newest first)
213 |         """
214 |         conversations = []
215 |         
216 |         try:
217 |             # Get all conversation files with their modification times
218 |             files_with_mtime = []
219 |             for file_path in self.storage_dir.glob("*.json"):
220 |                 try:
221 |                     mtime = file_path.stat().st_mtime
222 |                     files_with_mtime.append((mtime, file_path))
223 |                 except Exception as e:
224 |                     logger.warning(f"Failed to stat file {file_path}: {e}")
225 |                     continue
226 |             
227 |             # Sort by modification time (newest first) and take only the limit
228 |             files_with_mtime.sort(key=lambda x: x[0], reverse=True)
229 |             recent_files = files_with_mtime[:limit]
230 |             
231 |             # Now load only the recent files
232 |             for _, file_path in recent_files:
233 |                 try:
234 |                     with open(file_path, "r") as f:
235 |                         data = json.load(f)
236 |                         
237 |                         # Extract key information
238 |                         conv_summary = {
239 |                             "id": data.get("id"),
240 |                             "title": data.get("title"),
241 |                             "created": data.get("created"),
242 |                             "updated": data.get("updated"),
243 |                         }
244 |                         
245 |                         # Extract model used from messages or metadata
246 |                         model_full_name = None
247 |                         if "model_metadata" in data and "models_used" in data["model_metadata"]:
248 |                             models = data["model_metadata"]["models_used"]
249 |                             model_full_name = models[-1] if models else None
250 |                         else:
251 |                             # Try to extract from the last assistant message
252 |                             for msg in reversed(data.get("messages", [])):
253 |                                 if msg.get("role") == "assistant":
254 |                                     metadata = msg.get("metadata", {})
255 |                                     if "model" in metadata:
256 |                                         model_full_name = metadata["model"]
257 |                                         break
258 |                         
259 |                         # Convert to short name for agent interface
260 |                         if model_full_name:
261 |                             short_name = get_short_name(model_full_name)
262 |                             model_used = short_name if short_name else model_full_name
263 |                         else:
264 |                             model_used = None
265 |                         
266 |                         conv_summary["model_used"] = model_used
267 |                         
268 |                         # If no title exists (should not happen with new version)
269 |                         # just use a placeholder
270 |                         if not conv_summary["title"]:
271 |                             conv_summary["title"] = "[Untitled conversation]"
272 |                         
273 |                         conversations.append(conv_summary)
274 |                         
275 |                 except Exception as e:
276 |                     logger.warning(f"Failed to read conversation file {file_path}: {e}")
277 |                     continue
278 |             
279 |             # The files are already in order from the filesystem sorting
280 |             # But we should still sort by the actual "updated" field in case of discrepancies
281 |             conversations.sort(key=lambda x: x.get("updated", ""), reverse=True)
282 |             
283 |             return conversations
284 |             
285 |         except Exception as e:
286 |             logger.error(f"Failed to list recent conversations: {e}")
287 |             return []
```
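
A short sketch of the storage API on its own (the conversation id and messages are illustrative):

```python
import uuid

from storage import ConversationStorage

storage = ConversationStorage()  # writes under ~/.mu-mcp/conversations
conversation_id = str(uuid.uuid4())

# Messages carry metadata on disk but are stripped to role/content for the API.
user_msg = storage.add_metadata_to_message(
    {"role": "user", "content": "Hello"},
    {"target_model": "openai/gpt-5"},
)
assistant_msg = storage.add_metadata_to_message(
    {"role": "assistant", "content": "Hi there"},
    {"model": "openai/gpt-5"},
)

storage.save_conversation(
    conversation_id,
    [user_msg, assistant_msg],
    {"models_used": ["openai/gpt-5"]},
    title="Greeting test",
)

print(storage.get_messages_for_api([user_msg, assistant_msg]))
# The most recently updated conversation should be the one just saved.
print(storage.list_recent_conversations(limit=5)[0]["title"])
```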

--------------------------------------------------------------------------------
/server.py:
--------------------------------------------------------------------------------

```python
  1 | #!/usr/bin/env python3
  2 | """μ-MCP Server - Minimal MCP server for AI model interactions.
  3 | 
  4 | In contrast to zen-mcp's 10,000+ lines of orchestration,
  5 | μ-MCP provides pure model access with no hardcoded workflows.
  6 | """
  7 | 
  8 | import asyncio
  9 | import json
 10 | import logging
 11 | import os
 12 | import sys
 13 | from typing import Any
 14 | 
 15 | # Load environment variables from .env file
 16 | from dotenv import load_dotenv
 17 | load_dotenv()
 18 | 
 19 | from mcp import McpError, types
 20 | from mcp.server import Server
 21 | from mcp.server.models import InitializationOptions
 22 | from mcp.types import (
 23 |     TextContent,
 24 |     Tool,
 25 |     ServerCapabilities,
 26 |     ToolsCapability,
 27 |     Prompt,
 28 |     GetPromptResult,
 29 |     PromptMessage,
 30 |     PromptsCapability,
 31 | )
 32 | 
 33 | from models import get_allowed_models
 34 | from prompts import get_agent_tool_description
 35 | 
 36 | # Configure logging
 37 | log_level = os.getenv("LOG_LEVEL", "INFO")
 38 | logging.basicConfig(
 39 |     level=getattr(logging, log_level),
 40 |     format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
 41 | )
 42 | logger = logging.getLogger(__name__)
 43 | 
 44 | app = Server("μ-mcp")
 45 | 
 46 | 
 47 | @app.list_tools()
 48 | async def list_tools() -> list[Tool]:
 49 |     """List available tools - just one: chat."""
 50 |     # Build model enum for schema
 51 |     models = get_allowed_models()
 52 |     
 53 |     # Collect enum values and human-readable descriptions for the tool schema
 54 |     model_enum = []
 55 |     model_descriptions = []
 56 |     
 57 |     for key, model in models.items():
 58 |         # Use short name (key) in enum
 59 |         model_enum.append(key)
 60 |         # Show only short name in description, not full path
 61 |         model_descriptions.append(f"• {key}: {model.description}")
 62 |     
 63 |     # Build the combined description
 64 |     models_description = "Select the AI model that best fits your task:\n\n" + "\n".join(model_descriptions)
 65 |     
 66 |     return [
 67 |         Tool(
 68 |             name="chat",
 69 |             description=get_agent_tool_description(),
 70 |             inputSchema={
 71 |                 "type": "object",
 72 |                 "properties": {
 73 |                     "prompt": {
 74 |                         "type": "string",
 75 |                         "description": "Your message or question"
 76 |                     },
 77 |                     "model": {
 78 |                         "type": "string",
 79 |                         "enum": model_enum,
 80 |                         "description": models_description,
 81 |                     },
 82 |                     "title": {
 83 |                         "type": "string",
 84 |                         "description": "Title for new conversation (3-10 words). Provide this OR continuation_id, not both",
 85 |                     },
 86 |                     "continuation_id": {
 87 |                         "type": "string",
 88 |                         "description": "UUID to continue existing conversation. Provide this OR title, not both",
 89 |                     },
 90 |                     "files": {
 91 |                         "type": "array",
 92 |                         "items": {"type": "string"},
 93 |                         "description": "Absolute paths to files to include as context",
 94 |                     },
 95 |                     "images": {
 96 |                         "type": "array",
 97 |                         "items": {"type": "string"},
 98 |                         "description": "Absolute paths to images to include",
 99 |                     },
100 |                     "reasoning_effort": {
101 |                         "type": "string",
102 |                         "enum": ["low", "medium", "high"],
103 |                         "description": "Reasoning depth for models that support it (low=20%, medium=50%, high=80% of computation)",
104 |                         "default": "medium",
105 |                     },
106 |                 },
107 |                 "required": ["prompt", "model"],  # Model is now required
108 |             },
109 |         )
110 |     ]
111 | 
112 | 
113 | @app.list_prompts()
114 | async def list_prompts() -> list[Prompt]:
115 |     """List available prompts for slash commands."""
116 |     return [
117 |         Prompt(
118 |             name="chat",
119 |             description="Start a chat with AI models",
120 |             arguments=[],
121 |         ),
122 |         Prompt(
123 |             name="continue",
124 |             description="Continue the previous conversation",
125 |             arguments=[],
126 |         ),
127 |         Prompt(
128 |             name="challenge",
129 |             description="Encourage critical thinking and avoid reflexive agreement",
130 |             arguments=[],
131 |         ),
132 |         Prompt(
133 |             name="discuss",
134 |             description="Orchestrate multi-turn discussion among multiple AIs",
135 |             arguments=[],
136 |         ),
137 |     ]
138 | 
139 | 
140 | @app.get_prompt()
141 | async def get_prompt(name: str, arguments: dict[str, Any] = None) -> GetPromptResult:
142 |     """Generate prompt text for slash commands."""
143 |     if name == "chat":
144 |         return GetPromptResult(
145 |             description="Start a chat with AI models",
146 |             messages=[
147 |                 PromptMessage(
148 |                     role="user",
149 |                     content=TextContent(
150 |                         type="text",
151 |                         text="Use the chat tool to interact with an AI model."
152 |                     )
153 |                 )
154 |             ],
155 |         )
156 |     elif name == "continue":
157 |         # Get the list of recent conversations
158 |         from chat_handler import ChatHandler
159 |         from datetime import datetime
160 |         
161 |         handler = ChatHandler()
162 |         recent_conversations = handler.storage.list_recent_conversations(20)
163 |         
164 |         if recent_conversations:
165 |             # Format the conversation list
166 |             conv_list = []
167 |             for i, conv in enumerate(recent_conversations, 1):
168 |                 # Calculate relative time
169 |                 if conv.get("updated"):
170 |                     try:
171 |                         updated_time = datetime.fromisoformat(conv["updated"])
172 |                         now = datetime.utcnow()
173 |                         time_diff = now - updated_time
174 |                         
175 |                         # Format relative time
176 |                         if time_diff.days > 0:
177 |                             time_str = f"{time_diff.days} day{'s' if time_diff.days > 1 else ''} ago"
178 |                         elif time_diff.seconds >= 3600:
179 |                             hours = time_diff.seconds // 3600
180 |                             time_str = f"{hours} hour{'s' if hours > 1 else ''} ago"
181 |                         elif time_diff.seconds >= 60:
182 |                             minutes = time_diff.seconds // 60
183 |                             time_str = f"{minutes} minute{'s' if minutes > 1 else ''} ago"
184 |                         else:
185 |                             time_str = "just now"
186 |                     except Exception:
187 |                         time_str = "unknown time"
188 |                 else:
189 |                     time_str = "unknown time"
190 |                 
191 |                 # Get display text (title should always exist)
192 |                 display = conv.get("title", "[Untitled]")
193 |                 # model_used is already a short name from list_recent_conversations()
194 |                 model = conv.get("model_used", "unknown model")
195 |                 
196 |                 conv_list.append(
197 |                     f"{i}. [{time_str}] {display}\n"
198 |                     f"   Model: {model} | ID: {conv['id']}"
199 |                 )
200 |             
201 |             instruction_text = f"""Select a conversation to continue using the chat tool.
202 | 
203 | Recent Conversations (newest first):
204 | {chr(10).join(conv_list)}
205 | 
206 | To continue a conversation, use the chat tool with the desired continuation_id.
207 | Example: Use continuation_id: "{recent_conversations[0]['id']}" for the most recent conversation.
208 | 
209 | This allows you to access the full conversation history even if your context was compacted."""
210 |         else:
211 |             instruction_text = "No previous conversations found. Start a new conversation using the chat tool."
212 |         
213 |         return GetPromptResult(
214 |             description="Continue a previous conversation",
215 |             messages=[
216 |                 PromptMessage(
217 |                     role="user",
218 |                     content=TextContent(
219 |                         type="text",
220 |                         text=instruction_text
221 |                     )
222 |                 )
223 |             ],
224 |         )
225 |     elif name == "challenge":
226 |         return GetPromptResult(
227 |             description="Encourage critical thinking and avoid reflexive agreement",
228 |             messages=[
229 |                 PromptMessage(
230 |                     role="user",
231 |                     content=TextContent(
232 |                         type="text",
233 |                         text="""CRITICAL REASSESSMENT MODE:
234 | 
235 | When using the chat tool, wrap your prompt with instructions for the AI to:
236 | - Challenge ideas and think critically before responding
237 | - Evaluate whether they actually agree or disagree
238 | - Provide thoughtful analysis rather than reflexive agreement
239 | 
240 | Example: Instead of accepting a statement, ask the AI to examine it for accuracy, completeness, and reasoning flaws.
241 | This promotes truth-seeking over compliance."""
242 |                     )
243 |                 )
244 |             ],
245 |         )
246 |     elif name == "discuss":
247 |         return GetPromptResult(
248 |             description="Orchestrate multi-turn discussion among multiple AIs",
249 |             messages=[
250 |                 PromptMessage(
251 |                     role="user",
252 |                     content=TextContent(
253 |                         type="text",
254 |                         text="""MULTI-AI DISCUSSION MODE:
255 | 
256 | Use the chat tool to orchestrate a multi-turn discussion among diverse AI models.
257 | 
258 | Requirements:
259 | 1. Select models with complementary strengths based on the topic
260 | 2. Start fresh conversations (no continuation_id) for each model
261 | 3. Provide context about the topic and other participants' perspectives
262 | 4. Exchange key insights between models across multiple turns
263 | 5. Encourage constructive disagreement - not consensus for its own sake
264 | 6. Continue until either consensus emerges naturally OR sufficiently diverse perspectives are gathered
265 | 
266 | Do NOT stop after one round. Keep the discussion going through multiple exchanges until reaching a natural conclusion.
267 | Synthesize findings, highlighting both agreements and valuable disagreements."""
268 |                     )
269 |                 )
270 |             ],
271 |         )
272 |     else:
273 |         raise ValueError(f"Unknown prompt: {name}")
274 | 
275 | 
276 | @app.call_tool()
277 | async def call_tool(name: str, arguments: Any) -> list[TextContent]:
278 |     """Handle tool calls - just chat."""
279 |     if name != "chat":
280 |         raise McpError(f"Unknown tool: {name}")
281 | 
282 |     from chat_handler import ChatHandler
283 | 
284 |     try:
285 |         handler = ChatHandler()
286 |         result = await handler.chat(**arguments)
287 |         return [TextContent(type="text", text=json.dumps(result, indent=2))]
288 |     except Exception as e:
289 |         logger.error(f"Chat tool error: {e}")
290 |         return [TextContent(type="text", text=f"Error: {str(e)}")]
291 | 
292 | 
293 | async def main():
294 |     """Run the MCP server."""
295 |     # Check for API key
296 |     if not os.getenv("OPENROUTER_API_KEY"):
297 |         logger.error("OPENROUTER_API_KEY environment variable not set")
298 |         logger.error("Get your API key at: https://openrouter.ai/keys")
299 |         sys.exit(1)
300 | 
301 |     # Log configuration
302 |     models = get_allowed_models()
303 |     logger.info(f"Starting μ-MCP Server...")
304 |     logger.info(f"Available models: {len(models)}")
305 |     
306 |     # Use stdio transport
307 |     from mcp.server.stdio import stdio_server
308 | 
309 |     async with stdio_server() as (read_stream, write_stream):
310 |         await app.run(
311 |             read_stream,
312 |             write_stream,
313 |             InitializationOptions(
314 |                 server_name="μ-mcp",
315 |                 server_version="2.0.0",
316 |                 capabilities=ServerCapabilities(
317 |                     tools=ToolsCapability(),
318 |                     prompts=PromptsCapability(),
319 |                 ),
320 |             ),
321 |         )
322 | 
323 | 
324 | if __name__ == "__main__":
325 |     asyncio.run(main())
326 | 
```