# arthurcolle/openai-mcp — page 1 of 6 (41/56 files, 49786/50000 tokens)

Use http://codebase.md/arthurcolle/openai-mcp?lines=true&page={x} to view the full context.

# Directory Structure

```
├── .gitignore
├── claude_code
│   ├── __init__.py
│   ├── __pycache__
│   │   ├── __init__.cpython-312.pyc
│   │   └── mcp_server.cpython-312.pyc
│   ├── claude.py
│   ├── commands
│   │   ├── __init__.py
│   │   ├── __pycache__
│   │   │   ├── __init__.cpython-312.pyc
│   │   │   └── serve.cpython-312.pyc
│   │   ├── client.py
│   │   ├── multi_agent_client.py
│   │   └── serve.py
│   ├── config
│   │   └── __init__.py
│   ├── examples
│   │   ├── agents_config.json
│   │   ├── claude_mcp_config.html
│   │   ├── claude_mcp_config.json
│   │   ├── echo_server.py
│   │   └── README.md
│   ├── lib
│   │   ├── __init__.py
│   │   ├── __pycache__
│   │   │   └── __init__.cpython-312.pyc
│   │   ├── context
│   │   │   └── __init__.py
│   │   ├── monitoring
│   │   │   ├── __init__.py
│   │   │   ├── __pycache__
│   │   │   │   ├── __init__.cpython-312.pyc
│   │   │   │   └── server_metrics.cpython-312.pyc
│   │   │   ├── cost_tracker.py
│   │   │   └── server_metrics.py
│   │   ├── providers
│   │   │   ├── __init__.py
│   │   │   ├── base.py
│   │   │   └── openai.py
│   │   ├── rl
│   │   │   ├── __init__.py
│   │   │   ├── grpo.py
│   │   │   ├── mcts.py
│   │   │   └── tool_optimizer.py
│   │   ├── tools
│   │   │   ├── __init__.py
│   │   │   ├── __pycache__
│   │   │   │   ├── __init__.cpython-312.pyc
│   │   │   │   ├── base.cpython-312.pyc
│   │   │   │   ├── file_tools.cpython-312.pyc
│   │   │   │   └── manager.cpython-312.pyc
│   │   │   ├── ai_tools.py
│   │   │   ├── base.py
│   │   │   ├── code_tools.py
│   │   │   ├── file_tools.py
│   │   │   ├── manager.py
│   │   │   └── search_tools.py
│   │   └── ui
│   │       ├── __init__.py
│   │       └── tool_visualizer.py
│   ├── mcp_server.py
│   ├── README_MCP_CLIENT.md
│   ├── README_MULTI_AGENT.md
│   └── util
│       └── __init__.py
├── claude.py
├── cli.py
├── data
│   └── prompt_templates.json
├── deploy_modal_mcp.py
├── deploy.sh
├── examples
│   ├── agents_config.json
│   └── echo_server.py
├── install.sh
├── mcp_modal_adapter.py
├── mcp_server.py
├── modal_mcp_server.py
├── README_modal_mcp.md
├── README.md
├── requirements.txt
├── setup.py
├── static
│   └── style.css
├── templates
│   └── index.html
└── web-client.html
```

# Files

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
1 | venv
2 | .aider*
3 | 
```

--------------------------------------------------------------------------------
/claude_code/examples/README.md:
--------------------------------------------------------------------------------

```markdown
 1 | # Claude Code MCP Examples
 2 | 
 3 | This directory contains examples for using the Claude Code MCP client with different MCP servers.
 4 | 
 5 | ## Echo Server
 6 | 
 7 | A simple server that provides two tools:
 8 | - `echo`: Echoes back any message sent to it
 9 | - `reverse`: Reverses any message sent to it
10 | 
11 | To run the echo server example:
12 | 
13 | 1. Start the server:
14 | ```bash
15 | python examples/echo_server.py
16 | ```
17 | 
18 | 2. In a separate terminal, connect to it with the MCP client:
19 | ```bash
20 | claude mcp-client examples/echo_server.py
21 | ```
22 | 
23 | 3. Try these example queries:
24 |    - "Echo the phrase 'hello world'"
25 |    - "Can you reverse the text 'Claude is awesome'?"
26 | 
27 | ## Multi-Agent Example
28 | 
29 | The `agents_config.json` file contains a configuration for a multi-agent setup with three specialized roles:
30 | - **Researcher**: Focuses on finding and analyzing information
31 | - **Coder**: Specializes in writing and debugging code
32 | - **Critic**: Evaluates solutions and suggests improvements
33 | 
34 | To run the multi-agent example:
35 | 
36 | 1. Start the echo server:
37 | ```bash
38 | python examples/echo_server.py
39 | ```
40 | 
41 | 2. In a separate terminal, launch the multi-agent client:
42 | ```bash
43 | claude mcp-multi-agent examples/echo_server.py --config examples/agents_config.json
44 | ```
45 | 
46 | 3. Try these example interactions:
47 |    - "I need to write a function that calculates the Fibonacci sequence"
48 |    - "/talk Researcher What are the applications of Fibonacci sequences?"
49 |    - "/talk Critic What are the efficiency concerns with recursive Fibonacci implementations?"
50 |    - "/agents" (to see all available agents)
51 |    - "/history" (to view the conversation history)
52 | 
53 | ## Adding Your Own Examples
54 | 
55 | Feel free to create your own MCP servers by following these steps:
56 | 
57 | 1. Create a new Python file in this directory
58 | 2. Import FastMCP: `from fastmcp import FastMCP`
59 | 3. Create a server instance: `my_server = FastMCP("Server Name", description="...")`
60 | 4. Define tools using the `@my_server.tool` decorator
61 | 5. Define resources using the `@my_server.resource` decorator
62 | 6. Run your server with `my_server.run()`
63 | 
64 | ### Creating Custom Agent Configurations
65 | 
66 | To create your own agent configurations:
67 | 
68 | 1. Create a JSON file with an array of agent definitions:
69 | ```json
70 | [
71 |   {
72 |     "name": "AgentName",
73 |     "role": "agent specialization",
74 |     "model": "claude model to use",
75 |     "system_prompt": "Detailed instructions for the agent's behavior and role"
76 |   },
77 |   ...
78 | ]
79 | ```
80 | 
81 | 2. Launch the multi-agent client with your configuration:
82 | ```bash
83 | claude mcp-multi-agent path/to/server.py --config path/to/your_config.json
84 | ```
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
  1 | [![MseeP.ai Security Assessment Badge](https://mseep.net/pr/arthurcolle-openai-mcp-badge.png)](https://mseep.ai/app/arthurcolle-openai-mcp)
  2 | 
  3 | # MCP Coding Assistant with support for OpenAI + other LLM Providers
  4 | 
  5 | A powerful Python recreation of Claude Code with enhanced real-time visualization, cost management, and Model Context Protocol (MCP) server capabilities. This tool provides a natural language interface for software development tasks with support for multiple LLM providers.
  6 | 
  7 | ![Version](https://img.shields.io/badge/version-0.1.0-blue)
  8 | ![Python](https://img.shields.io/badge/python-3.10+-green)
  9 | 
 10 | ## Key Features
 11 | 
 12 | - **Multi-Provider Support:** Works with OpenAI, Anthropic, and other LLM providers
 13 | - **Model Context Protocol Integration:** 
 14 |   - Run as an MCP server for use with Claude Desktop and other clients
 15 |   - Connect to any MCP server with the built-in MCP client
 16 |   - Multi-agent synchronization for complex problem solving
 17 | - **Real-Time Tool Visualization:** See tool execution progress and results in real-time
 18 | - **Cost Management:** Track token usage and expenses with budget controls
 19 | - **Comprehensive Tool Suite:** File operations, search, command execution, and more
 20 | - **Enhanced UI:** Rich terminal interface with progress indicators and syntax highlighting
 21 | - **Context Optimization:** Smart conversation compaction and memory management
 22 | - **Agent Coordination:** Specialized agents with different roles can collaborate on tasks
 23 | 
 24 | ## Installation
 25 | 
 26 | 1. Clone this repository
 27 | 2. Install dependencies:
 28 | 
 29 | ```bash
 30 | pip install -r requirements.txt
 31 | ```
 32 | 
 33 | 3. Create a `.env` file with your API keys:
 34 | 
 35 | ```
 36 | # Choose one or more providers
 37 | OPENAI_API_KEY=your_openai_api_key_here
 38 | ANTHROPIC_API_KEY=your_anthropic_api_key_here
 39 | 
 40 | # Optional model selection
 41 | OPENAI_MODEL=gpt-4o
 42 | ANTHROPIC_MODEL=claude-3-opus-20240229
 43 | ```
 44 | 
 45 | ## Usage
 46 | 
 47 | ### CLI Mode
 48 | 
 49 | Run the CLI with the default provider (determined from available API keys):
 50 | 
 51 | ```bash
 52 | python claude.py chat
 53 | ```
 54 | 
 55 | Specify a provider and model:
 56 | 
 57 | ```bash
 58 | python claude.py chat --provider openai --model gpt-4o
 59 | ```
 60 | 
 61 | Set a budget limit to manage costs:
 62 | 
 63 | ```bash
 64 | python claude.py chat --budget 5.00
 65 | ```
 66 | 
 67 | ### MCP Server Mode
 68 | 
 69 | Run as a Model Context Protocol server:
 70 | 
 71 | ```bash
 72 | python claude.py serve
 73 | ```
 74 | 
 75 | Start in development mode with the MCP Inspector:
 76 | 
 77 | ```bash
 78 | python claude.py serve --dev
 79 | ```
 80 | 
 81 | Configure host and port:
 82 | 
 83 | ```bash
 84 | python claude.py serve --host 0.0.0.0 --port 8000
 85 | ```
 86 | 
 87 | Specify additional dependencies:
 88 | 
 89 | ```bash
 90 | python claude.py serve --dependencies pandas numpy
 91 | ```
 92 | 
 93 | Load environment variables from file:
 94 | 
 95 | ```bash
 96 | python claude.py serve --env-file .env
 97 | ```
 98 | 
 99 | ### MCP Client Mode
100 | 
101 | Connect to an MCP server using Claude as the reasoning engine:
102 | 
103 | ```bash
104 | python claude.py mcp-client path/to/server.py
105 | ```
106 | 
107 | Specify a Claude model:
108 | 
109 | ```bash
110 | python claude.py mcp-client path/to/server.py --model claude-3-5-sonnet-20241022
111 | ```
112 | 
113 | Try the included example server:
114 | 
115 | ```bash
116 | # In terminal 1 - start the server
117 | python examples/echo_server.py
118 | 
119 | # In terminal 2 - connect with the client
120 | python claude.py mcp-client examples/echo_server.py
121 | ```
122 | 
123 | ### Multi-Agent MCP Mode
124 | 
125 | Launch a multi-agent client with synchronized agents:
126 | 
127 | ```bash
128 | python claude.py mcp-multi-agent path/to/server.py
129 | ```
130 | 
131 | Use a custom agent configuration file:
132 | 
133 | ```bash
134 | python claude.py mcp-multi-agent path/to/server.py --config examples/agents_config.json
135 | ```
136 | 
137 | Example with the echo server:
138 | 
139 | ```bash
140 | # In terminal 1 - start the server
141 | python examples/echo_server.py
142 | 
143 | # In terminal 2 - launch the multi-agent client
144 | python claude.py mcp-multi-agent examples/echo_server.py --config examples/agents_config.json
145 | ```
146 | 
147 | ## Available Tools
148 | 
149 | - **View:** Read files with optional line limits
150 | - **Edit:** Modify files with precise text replacement
151 | - **Replace:** Create or overwrite files
152 | - **GlobTool:** Find files by pattern matching
153 | - **GrepTool:** Search file contents using regex
154 | - **LS:** List directory contents
155 | - **Bash:** Execute shell commands
156 | 
157 | ## Chat Commands
158 | 
159 | - **/help:** Show available commands
160 | - **/compact:** Compress conversation history to save tokens
161 | - **/version:** Show version information
162 | - **/providers:** List available LLM providers
163 | - **/cost:** Show cost and usage information
164 | - **/budget [amount]:** Set a budget limit
165 | - **/quit, /exit:** Exit the application
166 | 
167 | ## Architecture
168 | 
169 | Claude Code Python Edition is built with a modular architecture:
170 | 
171 | ```
172 | /claude_code/
173 |   /lib/
174 |     /providers/      # LLM provider implementations
175 |     /tools/          # Tool implementations
176 |     /context/        # Context management
177 |     /ui/             # UI components
178 |     /monitoring/     # Cost tracking & metrics
179 |   /commands/         # CLI commands
180 |   /config/           # Configuration management
181 |   /util/             # Utility functions
182 |   claude.py          # Main CLI entry point
183 |   mcp_server.py      # Model Context Protocol server
184 | ```
185 | 
186 | ## Using with Model Context Protocol
187 | 
188 | ### Using Claude Code as an MCP Server
189 | 
190 | Once the MCP server is running, you can connect to it from Claude Desktop or other MCP-compatible clients:
191 | 
192 | 1. Install and run the MCP server:
193 |    ```bash
194 |    python claude.py serve
195 |    ```
196 | 
197 | 2. Open the configuration page in your browser:
198 |    ```
199 |    http://localhost:8000
200 |    ```
201 | 
202 | 3. Follow the instructions to configure Claude Desktop, including:
203 |    - Copy the JSON configuration
204 |    - Download the auto-configured JSON file
205 |    - Step-by-step setup instructions
206 | 
207 | ### Using Claude Code as an MCP Client
208 | 
209 | To connect to any MCP server using Claude Code:
210 | 
211 | 1. Ensure you have your Anthropic API key in the environment or .env file
212 | 2. Start the MCP server you want to connect to
213 | 3. Connect using the MCP client:
214 |    ```bash
215 |    python claude.py mcp-client path/to/server.py
216 |    ```
217 | 4. Type queries in the interactive chat interface
218 | 
219 | ### Using Multi-Agent Mode
220 | 
221 | For complex tasks, the multi-agent mode allows multiple specialized agents to collaborate:
222 | 
223 | 1. Create an agent configuration file or use the provided example
224 | 2. Start your MCP server
225 | 3. Launch the multi-agent client:
226 |    ```bash
227 |    python claude.py mcp-multi-agent path/to/server.py --config examples/agents_config.json
228 |    ```
229 | 4. Use the command interface to interact with multiple agents:
230 |    - Type a message to broadcast to all agents
231 |    - Use `/talk Agent_Name message` for direct communication
232 |    - Use `/agents` to see all available agents
233 |    - Use `/history` to view the conversation history
234 | 
235 | ## Contributing
236 | 
237 | 1. Fork the repository
238 | 2. Create a feature branch
239 | 3. Implement your changes with tests
240 | 4. Submit a pull request
241 | 
242 | ## License
243 | 
244 | MIT
245 | 
246 | ## Acknowledgments
247 | 
248 | This project is inspired by Anthropic's Claude Code CLI tool, reimplemented in Python with additional features for enhanced visibility, cost management, and MCP server capabilities.
    | 
    | # OpenAI Code Assistant
249 | 
250 | A powerful command-line and API-based coding assistant that uses OpenAI APIs with function calling and streaming.
251 | 
252 | ## Features
253 | 
254 | - Interactive CLI for coding assistance
255 | - Web API for integration with other applications
256 | - Model Context Protocol (MCP) server implementation
257 | - Replication support for high availability
258 | - Tool-based architecture for extensibility
259 | - Reinforcement learning for tool optimization
260 | - Web client for browser-based interaction
261 | 
262 | ## Installation
263 | 
264 | 1. Clone the repository
265 | 2. Install dependencies:
266 |    ```bash
267 |    pip install -r requirements.txt
268 |    ```
269 | 3. Set your OpenAI API key:
270 |    ```bash
271 |    export OPENAI_API_KEY=your_api_key
272 |    ```
273 | 
274 | ## Usage
275 | 
276 | ### CLI Mode
277 | 
278 | Run the assistant in interactive CLI mode:
279 | 
280 | ```bash
281 | python cli.py
282 | ```
283 | 
284 | Options:
285 | - `--model`, `-m`: Specify the model to use (default: gpt-4o)
286 | - `--temperature`, `-t`: Set temperature for response generation (default: 0)
287 | - `--verbose`, `-v`: Enable verbose output with additional information
288 | - `--enable-rl/--disable-rl`: Enable/disable reinforcement learning for tool optimization
289 | - `--rl-update`: Manually trigger an update of the RL model
290 | 
291 | ### API Server Mode
292 | 
293 | Run the assistant as an API server:
294 | 
295 | ```bash
296 | python cli.py serve
297 | ```
298 | 
299 | Options:
300 | - `--host`: Host address to bind to (default: 127.0.0.1)
301 | - `--port`, `-p`: Port to listen on (default: 8000)
302 | - `--workers`, `-w`: Number of worker processes (default: 1)
303 | - `--enable-replication`: Enable replication across instances
304 | - `--primary/--secondary`: Whether this is a primary or secondary instance
305 | - `--peer`: Peer instances to replicate with (host:port), can be specified multiple times
306 | 
307 | ### MCP Server Mode
308 | 
309 | Run the assistant as a Model Context Protocol (MCP) server:
310 | 
311 | ```bash
312 | python cli.py mcp-serve
313 | ```
314 | 
315 | Options:
316 | - `--host`: Host address to bind to (default: 127.0.0.1)
317 | - `--port`, `-p`: Port to listen on (default: 8000)
318 | - `--dev`: Enable development mode with additional logging
319 | - `--dependencies`: Additional Python dependencies to install
320 | - `--env-file`: Path to .env file with environment variables
321 | 
322 | ### MCP Client Mode
323 | 
324 | Connect to an MCP server using the assistant as the reasoning engine:
325 | 
326 | ```bash
327 | python cli.py mcp-client path/to/server.py
328 | ```
329 | 
330 | Options:
331 | - `--model`, `-m`: Model to use for reasoning (default: gpt-4o)
332 | - `--host`: Host address for the MCP server (default: 127.0.0.1)
333 | - `--port`, `-p`: Port for the MCP server (default: 8000)
334 | 
335 | ### Deployment Script
336 | 
337 | For easier deployment, use the provided script:
338 | 
339 | ```bash
340 | ./deploy.sh --host 0.0.0.0 --port 8000 --workers 4
341 | ```
342 | 
343 | To enable replication:
344 | 
345 | ```bash
346 | # Primary instance
347 | ./deploy.sh --enable-replication --port 8000
348 | 
349 | # Secondary instance
350 | ./deploy.sh --enable-replication --secondary --port 8001 --peer 127.0.0.1:8000
351 | ```
352 | 
353 | ### Web Client
354 | 
355 | To use the web client, open `web-client.html` in your browser. Make sure the API server is running.
356 | 
357 | ## API Endpoints
358 | 
359 | ### Standard API Endpoints
360 | 
361 | - `POST /conversation`: Create a new conversation
362 | - `POST /conversation/{conversation_id}/message`: Send a message to a conversation
363 | - `POST /conversation/{conversation_id}/message/stream`: Stream a message response
364 | - `GET /conversation/{conversation_id}`: Get conversation details
365 | - `DELETE /conversation/{conversation_id}`: Delete a conversation
366 | - `GET /health`: Health check endpoint
367 | 
368 | ### MCP Protocol Endpoints
369 | 
370 | - `GET /`: Health check (MCP protocol)
371 | - `POST /context`: Get context for a prompt template
372 | - `GET /prompts`: List available prompt templates
373 | - `GET /prompts/{prompt_id}`: Get a specific prompt template
374 | - `POST /prompts`: Create a new prompt template
375 | - `PUT /prompts/{prompt_id}`: Update an existing prompt template
376 | - `DELETE /prompts/{prompt_id}`: Delete a prompt template
377 | 
378 | ## Replication
379 | 
380 | The replication system allows running multiple instances of the assistant with synchronized state. This provides:
381 | 
382 | - High availability
383 | - Load balancing
384 | - Fault tolerance
385 | 
386 | To set up replication:
387 | 1. Start a primary instance with `--enable-replication`
388 | 2. Start secondary instances with `--enable-replication --secondary --peer [primary-host:port]`
389 | 
390 | ## Tools
391 | 
392 | The assistant includes various tools:
393 | - Weather: Get current weather for a location
394 | - View: Read files from the filesystem
395 | - Edit: Edit files
396 | - Replace: Write files
397 | - Bash: Execute bash commands
398 | - GlobTool: File pattern matching
399 | - GrepTool: Content search
400 | - LS: List directory contents
401 | - JinaSearch: Web search using Jina.ai
402 | - JinaFactCheck: Fact checking using Jina.ai
403 | - JinaReadURL: Read and summarize webpages
404 | 
405 | ## CLI Commands
406 | 
407 | - `/help`: Show help message
408 | - `/compact`: Compact the conversation to reduce token usage
409 | - `/status`: Show token usage and session information
410 | - `/config`: Show current configuration settings
411 | - `/rl-status`: Show RL tool optimizer status (if enabled)
412 | - `/rl-update`: Update the RL model manually (if enabled)
413 | - `/rl-stats`: Show tool usage statistics (if enabled)
414 | 
```
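The `/compact` command listed in the README above compresses conversation history to reduce token usage. As a rough illustration of the idea, here is a toy sketch (not this repo's implementation, which would summarize older turns with the LLM rather than drop them; the ~4-characters-per-token heuristic is an assumption):

```python
def compact_history(messages, max_tokens, count_tokens=lambda m: len(m["content"]) // 4):
    """Keep the system message plus the most recent turns that fit the budget.

    Toy heuristic: ~4 characters per token. A real /compact command would
    summarize older turns instead of discarding them outright.
    """
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    kept, used = [], sum(count_tokens(m) for m in system)
    for msg in reversed(rest):  # walk newest -> oldest
        cost = count_tokens(msg)
        if used + cost > max_tokens:
            break
        kept.append(msg)
        used += cost
    return system + list(reversed(kept))
```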

--------------------------------------------------------------------------------
/claude_code/config/__init__.py:
--------------------------------------------------------------------------------

```python
1 | 
```

--------------------------------------------------------------------------------
/claude_code/lib/__init__.py:
--------------------------------------------------------------------------------

```python
1 | 
```

--------------------------------------------------------------------------------
/claude_code/lib/context/__init__.py:
--------------------------------------------------------------------------------

```python
1 | 
```

--------------------------------------------------------------------------------
/claude_code/lib/monitoring/__init__.py:
--------------------------------------------------------------------------------

```python
1 | 
```

--------------------------------------------------------------------------------
/claude_code/lib/ui/__init__.py:
--------------------------------------------------------------------------------

```python
1 | 
```

--------------------------------------------------------------------------------
/claude_code/util/__init__.py:
--------------------------------------------------------------------------------

```python
1 | 
```

--------------------------------------------------------------------------------
/claude_code/commands/__init__.py:
--------------------------------------------------------------------------------

```python
1 | """Commands package for Claude Code."""
2 | 
3 | from claude_code.commands import serve
4 | from claude_code.commands import client
5 | 
```

--------------------------------------------------------------------------------
/claude_code/examples/claude_mcp_config.json:
--------------------------------------------------------------------------------

```json
1 | {
2 |   "name": "Claude Code Tools",
3 |   "type": "local_process",
4 |   "command": "python",
5 |   "args": ["claude.py", "serve"],
6 |   "workingDirectory": "/path/to/claude-code-directory",
7 |   "environment": {},
8 |   "description": "A Model Context Protocol server for Claude Code tools"
9 | }
```

--------------------------------------------------------------------------------
/claude_code/__init__.py:
--------------------------------------------------------------------------------

```python
1 | """
2 | Claude Code Python Edition - A powerful LLM-powered CLI for software development.
3 | 
4 | This package provides a Python reimplementation of Claude Code with enhanced 
5 | real-time tool visualization and cost management features.
6 | """
7 | 
8 | __version__ = "0.1.0"
9 | __author__ = "Claude Code Team"
```

--------------------------------------------------------------------------------
/claude_code/lib/rl/__init__.py:
--------------------------------------------------------------------------------

```python
 1 | """
 2 | Reinforcement Learning module for Claude Code.
 3 | This package contains implementations of MCTS and GRPO for decision making.
 4 | """
 5 | 
 6 | from .mcts import AdvancedMCTS, MCTSToolSelector
 7 | from .grpo import GRPO, MultiAgentGroupRL, ToolSelectionGRPO
 8 | 
 9 | __all__ = [
10 |     "AdvancedMCTS",
11 |     "MCTSToolSelector",
12 |     "GRPO",
13 |     "MultiAgentGroupRL",
14 |     "ToolSelectionGRPO",
15 | ]
```

--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------

```
 1 | # Core dependencies
 2 | openai>=1.0.0
 3 | anthropic>=0.8.0
 4 | python-dotenv>=1.0.0
 5 | pydantic>=2.0.0
 6 | requests>=2.0.0
 7 | fastmcp>=0.4.1
 8 | 
 9 | # CLI and UI
10 | typer>=0.9.0
11 | rich>=10.0.0
12 | prompt_toolkit>=3.0.0
13 | 
14 | # Tools and utilities
15 | tiktoken>=0.3.0
16 | tokenizers>=0.13.0
17 | regex>=2022.0.0
18 | GitPython>=3.1.0
19 | pygments>=2.15.0
20 | 
21 | # Performance
22 | tqdm>=4.65.0
23 | concurrent-log-handler>=0.9.0
24 | 
25 | # Machine Learning and Optimization
26 | torch>=2.0.0
27 | numpy>=1.20.0
28 | sentence-transformers>=2.2.0
29 | 
30 | # Testing
31 | pytest>=7.0.0
32 | pytest-cov>=4.0.0
33 | 
34 | # Web API
35 | fastapi>=0.100.0
36 | uvicorn>=0.23.0
37 | 
```

--------------------------------------------------------------------------------
/claude_code/lib/tools/__init__.py:
--------------------------------------------------------------------------------

```python
 1 | """Tools module for Claude Code Python Edition."""
 2 | 
 3 | from .base import Tool, ToolParameter, ToolResult, ToolRegistry, tool
 4 | from .manager import ToolExecutionManager
 5 | from .file_tools import register_file_tools
 6 | from .search_tools import register_search_tools
 7 | from .code_tools import register_code_tools
 8 | from .ai_tools import register_ai_tools
 9 | 
10 | __all__ = [
11 |     "Tool", 
12 |     "ToolParameter", 
13 |     "ToolResult", 
14 |     "ToolRegistry", 
15 |     "ToolExecutionManager", 
16 |     "tool",
17 |     "register_file_tools",
18 |     "register_search_tools",
19 |     "register_code_tools",
20 |     "register_ai_tools"
21 | ]
22 | 
23 | def register_all_tools(registry: ToolRegistry = None) -> ToolRegistry:
24 |     """Register all available tools with the registry.
25 |     
26 |     Args:
27 |         registry: Existing registry or None to create a new one
28 |         
29 |     Returns:
30 |         Tool registry with all tools registered
31 |     """
32 |     if registry is None:
33 |         registry = ToolRegistry()
34 |     
35 |     # Register tool categories
36 |     register_file_tools(registry)
37 |     register_search_tools(registry)
38 |     register_code_tools(registry)
39 |     register_ai_tools(registry)
40 |     
41 |     # Load saved routines
42 |     registry.load_routines()
43 |     
44 |     return registry
45 | 
```
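The `register_all_tools` helper above follows a registry pattern: each category module registers its tools into a shared `ToolRegistry`, which is created on demand if the caller did not pass one. A generic, self-contained sketch of that pattern (hypothetical names; this is not the repo's `ToolRegistry`):

```python
class Registry:
    """Minimal tool registry mapping names to callables."""

    def __init__(self):
        self._tools = {}

    def register(self, name, fn):
        self._tools[name] = fn

    def get(self, name):
        return self._tools[name]

    def names(self):
        return sorted(self._tools)


def register_file_tools(registry):
    registry.register("View", lambda path: f"viewing {path}")


def register_search_tools(registry):
    registry.register("GrepTool", lambda pattern: f"searching for {pattern}")


def register_all(registry=None):
    # Mirror register_all_tools: create a registry if absent, then delegate
    # to per-category registration functions.
    if registry is None:
        registry = Registry()
    register_file_tools(registry)
    register_search_tools(registry)
    return registry
```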

--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------

```python
 1 | from setuptools import setup, find_packages
 2 | 
 3 | with open("README.md", "r", encoding="utf-8") as fh:
 4 |     long_description = fh.read()
 5 | 
 6 | with open("requirements.txt", "r", encoding="utf-8") as f:
 7 |     requirements = [line.strip() for line in f.readlines() if line.strip()]
 8 | 
 9 | setup(
10 |     name="claude_code",
11 |     version="0.1.0",
12 |     author="Claude Code Team",
13 |     author_email="[email protected]",
14 |     description="Python recreation of Claude Code with enhanced features",
15 |     long_description=long_description,
16 |     long_description_content_type="text/markdown",
17 |     url="https://github.com/yourusername/claude-code-python",
18 |     packages=find_packages(),
19 |     install_requires=requirements,
20 |     classifiers=[
21 |         "Programming Language :: Python :: 3",
22 |         "Programming Language :: Python :: 3.10",
23 |         "Programming Language :: Python :: 3.11",
24 |         "License :: OSI Approved :: MIT License",
25 |         "Operating System :: OS Independent",
26 |         "Development Status :: 3 - Alpha",
27 |         "Intended Audience :: Developers",
28 |         "Topic :: Software Development :: User Interfaces",
29 |     ],
30 |     python_requires=">=3.10",
31 |     entry_points={
32 |         "console_scripts": [
33 |             "claude-code=claude_code.claude:app",
34 |         ],
35 |     },
36 | )
```

--------------------------------------------------------------------------------
/claude_code/examples/agents_config.json:
--------------------------------------------------------------------------------

```json
 1 | [
 2 |   {
 3 |     "name": "Researcher",
 4 |     "role": "research specialist",
 5 |     "model": "claude-3-5-sonnet-20241022",
 6 |     "system_prompt": "You are a research specialist participating in a multi-agent conversation. Your primary role is to find information, analyze data, and provide well-researched answers. You should use tools to gather information and verify facts. Always cite your sources when possible."
 7 |   },
 8 |   {
 9 |     "name": "Coder",
10 |     "role": "programming expert",
11 |     "model": "claude-3-5-sonnet-20241022",
12 |     "system_prompt": "You are a coding expert participating in a multi-agent conversation. Your primary role is to write, debug, and explain code. You should use tools to test your code and provide working solutions. Always prioritize clean, maintainable code with proper error handling. You can collaborate with other agents to solve complex problems."
13 |   },
14 |   {
15 |     "name": "Critic",
16 |     "role": "critical thinker",
17 |     "model": "claude-3-5-sonnet-20241022",
18 |     "system_prompt": "You are a critical thinker participating in a multi-agent conversation. Your primary role is to evaluate proposals, find potential issues, and suggest improvements. You should question assumptions, point out flaws, and help refine ideas. Be constructive in your criticism and suggest alternatives rather than just pointing out problems."
19 |   }
20 | ]
```
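Each agent entry in the config above carries `name`, `role`, `model`, and `system_prompt`. A small sketch (an assumption for illustration, not code from this repo) of loading such a file and failing fast on missing keys:

```python
import json

REQUIRED_KEYS = {"name", "role", "model", "system_prompt"}


def load_agents(text):
    """Parse an agents config (a JSON array) and validate required keys."""
    agents = json.loads(text)
    for agent in agents:
        missing = REQUIRED_KEYS - agent.keys()
        if missing:
            raise ValueError(f"agent {agent.get('name', '?')} missing {sorted(missing)}")
    return agents


config = '[{"name": "Researcher", "role": "research specialist", "model": "claude-3-5-sonnet-20241022", "system_prompt": "..."}]'
agents = load_agents(config)
```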

--------------------------------------------------------------------------------
/claude_code/examples/echo_server.py:
--------------------------------------------------------------------------------

```python
 1 | #!/usr/bin/env python3
 2 | """
 3 | Example Echo MCP Server for testing the Claude Code MCP client.
 4 | This server provides a simple 'echo' tool that returns whatever is sent to it.
 5 | """
 6 | 
 7 | from fastmcp import FastMCP
 8 | import logging
 9 | 
10 | # Set up logging
11 | logging.basicConfig(level=logging.INFO)
12 | logger = logging.getLogger(__name__)
13 | 
14 | # Create the MCP server
15 | echo_server = FastMCP(
16 |     "Echo Server",
17 |     description="A simple echo server for testing MCP clients",
18 |     dependencies=[]
19 | )
20 | 
21 | @echo_server.tool(name="echo", description="Echoes back the input message")
22 | async def echo(message: str) -> str:
23 |     """Echo back the input message.
24 |     
25 |     Args:
26 |         message: The message to echo back
27 |         
28 |     Returns:
29 |         The same message
30 |     """
31 |     logger.info(f"Received message: {message}")
32 |     return f"Echo: {message}"
33 | 
34 | @echo_server.tool(name="reverse", description="Reverses the input message")
35 | async def reverse(message: str) -> str:
36 |     """Reverse the input message.
37 |     
38 |     Args:
39 |         message: The message to reverse
40 |         
41 |     Returns:
42 |         The reversed message
43 |     """
44 |     logger.info(f"Reversing message: {message}")
45 |     return f"Reversed: {message[::-1]}"
46 | 
47 | @echo_server.resource("echo://{message}")
48 | def echo_resource(message: str) -> str:
49 |     """Echo resource.
50 |     
51 |     Args:
52 |         message: The message to echo
53 |         
54 |     Returns:
55 |         The echoed message
56 |     """
57 |     return f"Resource Echo: {message}"
58 | 
59 | if __name__ == "__main__":
60 |     # Run the server
61 |     echo_server.run()
```

--------------------------------------------------------------------------------
/examples/agents_config.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "agents": [
 3 |     {
 4 |       "name": "CodeExpert",
 5 |       "role": "primary",
 6 |       "system_prompt": "You are a code expert specializing in software development. Focus on providing high-quality code solutions, explaining code concepts, and helping with debugging issues. You should prioritize code quality, readability, and best practices.",
 7 |       "model": "gpt-4o",
 8 |       "temperature": 0.0
 9 |     },
10 |     {
11 |       "name": "Architect",
12 |       "role": "specialist",
13 |       "system_prompt": "You are a software architect specializing in system design. Focus on providing high-level architectural guidance, design patterns, and system organization advice. You should think about scalability, maintainability, and overall system structure.",
14 |       "model": "gpt-4o",
15 |       "temperature": 0.1
16 |     },
17 |     {
18 |       "name": "SecurityExpert",
19 |       "role": "specialist",
20 |       "system_prompt": "You are a security expert specializing in identifying and fixing security vulnerabilities in code. Focus on security best practices, potential vulnerabilities, and secure coding patterns. Always prioritize security considerations in your advice.",
21 |       "model": "gpt-4o",
22 |       "temperature": 0.0
23 |     }
24 |   ],
25 |   "coordination": {
26 |     "strategy": "round_robin",
27 |     "primary_agent": "CodeExpert",
28 |     "auto_delegation": true,
29 |     "voting_threshold": 0.6
30 |   },
31 |   "settings": {
32 |     "max_turns_per_agent": 3,
33 |     "enable_agent_reflection": true,
34 |     "enable_cross_agent_communication": true,
35 |     "enable_user_selection": true
36 |   }
37 | }
38 | 
```
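The `coordination` block above declares a `round_robin` strategy anchored on a `primary_agent`. One plausible reading of that contract is a cyclic scheduler that starts with the primary; the sketch below is illustrative only — `make_scheduler` is a hypothetical helper, not part of this repo.

```python
import itertools
import json

# Trimmed copy of agents_config.json, inlined so the sketch is self-contained
CONFIG = json.loads("""
{"agents": [{"name": "CodeExpert", "role": "primary"},
            {"name": "Architect", "role": "specialist"},
            {"name": "SecurityExpert", "role": "specialist"}],
 "coordination": {"strategy": "round_robin", "primary_agent": "CodeExpert"}}
""")

def make_scheduler(config):
    """Yield agent names in round-robin order, starting with the primary agent."""
    names = [agent["name"] for agent in config["agents"]]
    start = names.index(config["coordination"]["primary_agent"])
    return itertools.cycle(names[start:] + names[:start])

scheduler = make_scheduler(CONFIG)
first_three = [next(scheduler) for _ in range(3)]
```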

--------------------------------------------------------------------------------
/deploy.sh:
--------------------------------------------------------------------------------

```bash
 1 | #!/bin/bash
 2 | 
 3 | # OpenAI Code Assistant Deployment Script
 4 | 
 5 | # Default values
 6 | HOST="127.0.0.1"
 7 | PORT=8000
 8 | WORKERS=1
 9 | ENABLE_REPLICATION=false
10 | PRIMARY=true
11 | PEERS=""
12 | 
13 | # Parse command line arguments
14 | while [[ $# -gt 0 ]]; do
15 |   case $1 in
16 |     --host)
17 |       HOST="$2"
18 |       shift 2
19 |       ;;
20 |     --port)
21 |       PORT="$2"
22 |       shift 2
23 |       ;;
24 |     --workers)
25 |       WORKERS="$2"
26 |       shift 2
27 |       ;;
28 |     --enable-replication)
29 |       ENABLE_REPLICATION=true
30 |       shift
31 |       ;;
32 |     --secondary)
33 |       PRIMARY=false
34 |       shift
35 |       ;;
36 |     --peer)
37 |       if [ -z "$PEERS" ]; then
38 |         PEERS="--peer $2"
39 |       else
40 |         PEERS="$PEERS --peer $2"
41 |       fi
42 |       shift 2
43 |       ;;
44 |     *)
45 |       echo "Unknown option: $1"
46 |       exit 1
47 |       ;;
48 |   esac
49 | done
50 | 
51 | # Check if OpenAI API key is set
52 | if [ -z "$OPENAI_API_KEY" ]; then
53 |   echo "Error: OPENAI_API_KEY environment variable is not set"
54 |   echo "Please set it with: export OPENAI_API_KEY=your_api_key"
55 |   exit 1
56 | fi
57 | 
58 | # Create log directory if it doesn't exist
59 | mkdir -p logs
60 | 
61 | # Start the server
62 | echo "Starting OpenAI Code Assistant API Server..."
63 | echo "Host: $HOST"
64 | echo "Port: $PORT"
65 | echo "Workers: $WORKERS"
66 | echo "Replication: $ENABLE_REPLICATION"
67 | echo "Role: $([ "$PRIMARY" = true ] && echo "Primary" || echo "Secondary")"
68 | echo "Peers: $PEERS"
69 | 
70 | # Build the command
71 | CMD="python cli.py serve --host $HOST --port $PORT --workers $WORKERS"
72 | 
73 | if [ "$ENABLE_REPLICATION" = true ]; then
74 |   CMD="$CMD --enable-replication"
75 | fi
76 | 
77 | if [ "$PRIMARY" = false ]; then
78 |   CMD="$CMD --secondary"
79 | fi
80 | 
81 | if [ -n "$PEERS" ]; then
82 |   CMD="$CMD $PEERS"
83 | fi
84 | 
85 | # Run the command
86 | echo "Running: $CMD"
87 | $CMD > logs/server_$(date +%Y%m%d_%H%M%S).log 2>&1 &
88 | 
89 | # Save the PID
90 | echo $! > server.pid
91 | echo "Server started with PID $(cat server.pid)"
92 | echo "Logs are being written to logs/server_*.log"
93 | 
```

--------------------------------------------------------------------------------
/claude.py:
--------------------------------------------------------------------------------

```python
 1 | #!/usr/bin/env python3
 2 | """Main entry point for Claude Code."""
 3 | 
 4 | import os
 5 | import sys
 6 | import argparse
 7 | import logging
 8 | from typing import Optional, List, Dict, Any
 9 | 
10 | # Configure logging
11 | logging.basicConfig(
12 |     level=logging.INFO,
13 |     format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
14 | )
15 | logger = logging.getLogger(__name__)
16 | 
17 | 
18 | def main() -> int:
19 |     """Main entry point for Claude Code.
20 |     
21 |     Returns:
22 |         Exit code
23 |     """
24 |     # Create the main parser
25 |     parser = argparse.ArgumentParser(
26 |         description="Claude Code - A powerful LLM-powered CLI for software development"
27 |     )
28 |     
29 |     # Add version information
30 |     from claude_code import __version__
31 |     parser.add_argument(
32 |         "--version", 
33 |         action="version", 
34 |         version=f"Claude Code v{__version__}"
35 |     )
36 |     
37 |     # Create subparsers for commands
38 |     subparsers = parser.add_subparsers(
39 |         title="commands",
40 |         dest="command",
41 |         help="Command to execute"
42 |     )
43 |     
44 |     # Add the chat command (default)
45 |     chat_parser = subparsers.add_parser(
46 |         "chat", 
47 |         help="Start an interactive chat session with Claude Code"
48 |     )
49 |     # Add chat-specific arguments here
50 |     
51 |     # Add the serve command for MCP server
52 |     serve_parser = subparsers.add_parser(
53 |         "serve", 
54 |         help="Start the Claude Code MCP server"
55 |     )
56 |     
57 |     # Add serve-specific arguments
58 |     from claude_code.commands.serve import add_arguments
59 |     add_arguments(serve_parser)
60 |     
61 |     # Parse arguments
62 |     args = parser.parse_args()
63 |     
64 |     # If no command specified, default to chat
65 |     if not args.command:
66 |         args.command = "chat"
67 |         
68 |     # Execute the appropriate command
69 |     if args.command == "chat":
70 |         # Import and run the chat command
71 |         from claude_code.claude import main as chat_main
72 |         return chat_main()
73 |     elif args.command == "serve":
74 |         # Import and run the serve command
75 |         from claude_code.commands.serve import execute
76 |         return execute(args)
77 |     else:
78 |         parser.print_help()
79 |         return 1
80 | 
81 | 
82 | if __name__ == "__main__":
83 |     sys.exit(main())
```

--------------------------------------------------------------------------------
/claude_code/README_MCP_CLIENT.md:
--------------------------------------------------------------------------------

```markdown
 1 | # Claude Code MCP Client
 2 | 
 3 | This is an implementation of a Model Context Protocol (MCP) client for Claude Code. It allows you to connect to any MCP-compatible server and interact with it using Claude as the reasoning engine.
 4 | 
 5 | ## Prerequisites
 6 | 
 7 | - Python 3.8 or later
 8 | - Anthropic API key (set in your environment or `.env` file)
 9 | - Required packages: `mcp`, `anthropic`, `python-dotenv`
10 | 
11 | ## Installation
12 | 
13 | The MCP client is included as part of Claude Code. If you have Claude Code installed, you already have access to the MCP client.
14 | 
15 | If you need to install the dependencies separately:
16 | 
17 | ```bash
18 | pip install mcp anthropic python-dotenv
19 | ```
20 | 
21 | ## Usage
22 | 
23 | ### Command Line Interface
24 | 
25 | The MCP client can be run directly from the command line:
26 | 
27 | ```bash
28 | # Using the claude command (recommended)
29 | claude mcp-client path/to/server.py [--model MODEL]
30 | 
31 | # Or by running the client module directly
32 | python -m claude_code.commands.client path/to/server.py [--model MODEL]
33 | ```
34 | 
35 | ### Arguments
36 | 
37 | - `server_script`: Path to the MCP server script (required, must be a `.py` or `.js` file)
38 | - `--model`: Claude model to use (optional, defaults to `claude-3-5-sonnet-20241022`)
39 | 
40 | ### Environment Variables
41 | 
42 | Create a `.env` file in your project directory with your Anthropic API key:
43 | 
44 | ```
45 | ANTHROPIC_API_KEY=your_api_key_here
46 | ```
47 | 
48 | ## Features
49 | 
50 | - Connect to any MCP-compatible server (Python or JavaScript)
51 | - Interactive chat interface
52 | - Automatically handles tool calls between Claude and the MCP server
53 | - Maintains conversation context
54 | - Clean resource management with proper error handling
55 | 
56 | ## Example
57 | 
58 | 1. Start your MCP server (e.g., a weather server)
59 | 2. Run the MCP client targeting that server:
60 | 
61 | ```bash
62 | claude mcp-client path/to/weather_server.py
63 | ```
64 | 
65 | 3. Interact with the server through the client:
66 | ```
67 | Query: What's the weather in San Francisco?
68 | [Claude will use the tools provided by the server to answer your query]
69 | ```
70 | 
71 | ## Troubleshooting
72 | 
73 | - If the client can't find the server, double-check the path to your server script
74 | - Ensure your environment variables are correctly set (ANTHROPIC_API_KEY)
75 | - For Node.js servers, make sure Node.js is installed on your system
76 | - The first response might take up to 30 seconds while the server initializes
77 | 
78 | ## Extending the Client
79 | 
80 | The MCP client is designed to be modular. You can extend its functionality by:
81 | 
82 | 1. Adding custom response processing
83 | 2. Implementing specific tool handling
84 | 3. Enhancing the user interface
85 | 4. Adding support for additional authentication methods
86 | 
87 | ## License
88 | 
89 | Same as Claude Code
```
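The "Extending the Client" list above names custom response processing as the first extension point. That idea can be sketched as a small hook chain run over each raw model response; `ResponsePipeline` and the hook names below are hypothetical illustrations, not the actual client API.

```python
from typing import Callable, List

ResponseHook = Callable[[str], str]

class ResponsePipeline:
    """Run each registered hook over the raw model response, in order."""

    def __init__(self) -> None:
        self._hooks: List[ResponseHook] = []

    def register(self, hook: ResponseHook) -> None:
        self._hooks.append(hook)

    def process(self, response: str) -> str:
        for hook in self._hooks:
            response = hook(response)
        return response

pipeline = ResponsePipeline()
pipeline.register(str.strip)                           # drop surrounding whitespace
pipeline.register(lambda r: r.replace("\r\n", "\n"))   # normalize line endings

cleaned = pipeline.process("  Query result\r\nline two  ")
```

Hooks registered this way compose in order, so later stages can assume earlier normalization has already happened.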

--------------------------------------------------------------------------------
/static/style.css:
--------------------------------------------------------------------------------

```css
  1 | /* OpenAI Code Assistant MCP Server Dashboard Styles */
  2 | body {
  3 |     font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, Cantarell, 'Open Sans', 'Helvetica Neue', sans-serif;
  4 |     line-height: 1.6;
  5 |     color: #333;
  6 |     background-color: #f8f9fa;
  7 |     margin: 0;
  8 |     padding: 20px;
  9 | }
 10 | 
 11 | .container {
 12 |     max-width: 1200px;
 13 |     margin: 0 auto;
 14 | }
 15 | 
 16 | h1 {
 17 |     color: #2c3e50;
 18 |     border-bottom: 2px solid #eee;
 19 |     padding-bottom: 10px;
 20 |     margin-bottom: 20px;
 21 | }
 22 | 
 23 | h2 {
 24 |     color: #3498db;
 25 |     margin-top: 30px;
 26 |     margin-bottom: 15px;
 27 | }
 28 | 
 29 | .card {
 30 |     background-color: #fff;
 31 |     border-radius: 8px;
 32 |     box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);
 33 |     margin-bottom: 20px;
 34 |     overflow: hidden;
 35 | }
 36 | 
 37 | .card-header {
 38 |     background-color: #f1f1f1;
 39 |     padding: 12px 15px;
 40 |     font-weight: bold;
 41 |     border-bottom: 1px solid #ddd;
 42 | }
 43 | 
 44 | .card-body {
 45 |     padding: 15px;
 46 | }
 47 | 
 48 | .stats-grid {
 49 |     display: grid;
 50 |     grid-template-columns: repeat(auto-fill, minmax(250px, 1fr));
 51 |     gap: 20px;
 52 |     margin-bottom: 30px;
 53 | }
 54 | 
 55 | .stat-card {
 56 |     background-color: #fff;
 57 |     border-radius: 8px;
 58 |     box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);
 59 |     padding: 15px;
 60 |     text-align: center;
 61 | }
 62 | 
 63 | .stat-value {
 64 |     font-size: 24px;
 65 |     font-weight: bold;
 66 |     color: #2980b9;
 67 |     margin: 10px 0;
 68 | }
 69 | 
 70 | .stat-label {
 71 |     color: #7f8c8d;
 72 |     font-size: 14px;
 73 | }
 74 | 
 75 | .btn {
 76 |     display: inline-block;
 77 |     padding: 8px 16px;
 78 |     margin-right: 10px;
 79 |     border-radius: 4px;
 80 |     text-decoration: none;
 81 |     font-weight: 500;
 82 |     cursor: pointer;
 83 |     border: none;
 84 | }
 85 | 
 86 | .btn-primary {
 87 |     background-color: #3498db;
 88 |     color: white;
 89 | }
 90 | 
 91 | .btn-secondary {
 92 |     background-color: #95a5a6;
 93 |     color: white;
 94 | }
 95 | 
 96 | .btn-info {
 97 |     background-color: #2ecc71;
 98 |     color: white;
 99 | }
100 | 
101 | .template-grid {
102 |     display: grid;
103 |     grid-template-columns: repeat(auto-fill, minmax(300px, 1fr));
104 |     gap: 20px;
105 | }
106 | 
107 | .parameter-list {
108 |     list-style-type: none;
109 |     padding-left: 0;
110 | }
111 | 
112 | .parameter-list li {
113 |     padding: 5px 0;
114 |     border-bottom: 1px solid #eee;
115 | }
116 | 
117 | .parameter-list li:last-child {
118 |     border-bottom: none;
119 | }
120 | 
121 | .tag {
122 |     display: inline-block;
123 |     background-color: #e0f7fa;
124 |     color: #0097a7;
125 |     padding: 3px 8px;
126 |     border-radius: 4px;
127 |     font-size: 12px;
128 |     margin-right: 5px;
129 | }
130 | 
131 | .footer {
132 |     margin-top: 40px;
133 |     padding-top: 20px;
134 |     border-top: 1px solid #eee;
135 |     text-align: center;
136 |     color: #7f8c8d;
137 |     font-size: 14px;
138 | }
139 | 
140 | /* Responsive adjustments */
141 | @media (max-width: 768px) {
142 |     .stats-grid, .template-grid {
143 |         grid-template-columns: 1fr;
144 |     }
145 | }
146 | 
```

--------------------------------------------------------------------------------
/claude_code/lib/providers/__init__.py:
--------------------------------------------------------------------------------

```python
 1 | #!/usr/bin/env python3
 2 | # claude_code/lib/providers/__init__.py
 3 | """LLM provider module."""
 4 | 
 5 | import logging
 6 | import os
 7 | from typing import Dict, Type, Optional
 8 | 
 9 | from .base import BaseProvider
10 | from .openai import OpenAIProvider
11 | 
12 | logger = logging.getLogger(__name__)
13 | 
14 | # Registry of provider classes
15 | PROVIDER_REGISTRY: Dict[str, Type[BaseProvider]] = {
16 |     "openai": OpenAIProvider,
17 | }
18 | 
19 | # Try to import other providers if available
20 | try:
21 |     from .anthropic import AnthropicProvider
22 |     PROVIDER_REGISTRY["anthropic"] = AnthropicProvider
23 | except ImportError:
24 |     logger.debug("Anthropic provider not available")
25 | 
26 | try:
27 |     from .local import LocalProvider
28 |     PROVIDER_REGISTRY["local"] = LocalProvider
29 | except ImportError:
30 |     logger.debug("Local provider not available")
31 | 
32 | 
33 | def get_provider(name: Optional[str] = None, **kwargs) -> BaseProvider:
34 |     """Get a provider instance by name.
35 |     
36 |     Args:
37 |         name: Provider name, or None to use default provider
38 |         **kwargs: Additional arguments to pass to the provider constructor
39 |         
40 |     Returns:
41 |         Provider instance
42 |         
43 |     Raises:
44 |         ValueError: If provider is not found
45 |     """
46 |     # If name is not specified, try to infer from environment
47 |     if name is None:
48 |         if os.environ.get("OPENAI_API_KEY"):
49 |             name = "openai"
50 |         elif os.environ.get("ANTHROPIC_API_KEY"):
51 |             name = "anthropic"
52 |         else:
53 |             # Default to OpenAI if nothing else is available
54 |             name = "openai"
55 |     
56 |     if name.lower() not in PROVIDER_REGISTRY:
57 |         raise ValueError(f"Provider {name} not found. Available providers: {', '.join(PROVIDER_REGISTRY.keys())}")
58 |     
59 |     provider_class = PROVIDER_REGISTRY[name.lower()]
60 |     return provider_class(**kwargs)
61 | 
62 | 
63 | def list_available_providers() -> Dict[str, Dict]:
64 |     """List all available providers and their models.
65 |     
66 |     Returns:
67 |         Dictionary mapping provider names to information about them
68 |     """
69 |     result = {}
70 |     
71 |     for name, provider_class in PROVIDER_REGISTRY.items():
72 |         try:
73 |             # Create a temporary instance to get model information
74 |             # This might fail if API keys are not available
75 |             instance = provider_class()
76 |             result[name] = {
77 |                 "name": instance.name,
78 |                 "available": True,
79 |                 "models": instance.available_models,
80 |                 "current_model": instance.current_model
81 |             }
82 |         except Exception as e:
83 |             # Provider is available but not configured correctly
84 |             result[name] = {
85 |                 "name": name.capitalize(),
86 |                 "available": False,
87 |                 "error": str(e),
88 |                 "models": [],
89 |                 "current_model": None
90 |             }
91 |     
92 |     return result
```
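Because `PROVIDER_REGISTRY` maps lowercase names to classes and `get_provider` lowercases its lookup key, a third-party provider can be added by registering a class before calling `get_provider`. The sketch below is self-contained with a stand-in `BaseProvider` and a hypothetical `DummyProvider`; it mirrors the registry pattern above rather than importing the real module.

```python
from typing import Dict, Type

class BaseProvider:
    """Stand-in for claude_code.lib.providers.base.BaseProvider."""
    name = "base"

PROVIDER_REGISTRY: Dict[str, Type[BaseProvider]] = {}

def get_provider(name: str, **kwargs) -> BaseProvider:
    """Lowercase the lookup key, matching the real get_provider above."""
    if name.lower() not in PROVIDER_REGISTRY:
        raise ValueError(f"Provider {name} not found")
    return PROVIDER_REGISTRY[name.lower()](**kwargs)

class DummyProvider(BaseProvider):
    name = "dummy"

    def __init__(self, model: str = "dummy-1") -> None:
        self.current_model = model

# Register under a lowercase key so case-insensitive lookup succeeds
PROVIDER_REGISTRY["dummy"] = DummyProvider

provider = get_provider("Dummy", model="dummy-2")
```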

--------------------------------------------------------------------------------
/data/prompt_templates.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "greeting": {
 3 |     "template": "Hello! The current time is {time}. How can I help you today?",
 4 |     "description": "A simple greeting template",
 5 |     "parameters": {
 6 |       "time": {
 7 |         "type": "string",
 8 |         "description": "The current time"
 9 |       }
10 |     },
11 |     "default_model": "gpt-4o",
12 |     "metadata": {
13 |       "category": "general"
14 |     }
15 |   },
16 |   "code_review": {
17 |     "template": "Please review the following code:\n\n```{language}\n{code}\n```\n\nFocus on: {focus_areas}",
18 |     "description": "Template for code review requests",
19 |     "parameters": {
20 |       "language": {
21 |         "type": "string",
22 |         "description": "Programming language of the code"
23 |       },
24 |       "code": {
25 |         "type": "string",
26 |         "description": "The code to review"
27 |       },
28 |       "focus_areas": {
29 |         "type": "string",
30 |         "description": "Areas to focus on during review (e.g., 'performance, security')"
31 |       }
32 |     },
33 |     "default_model": "gpt-4o",
34 |     "metadata": {
35 |       "category": "development"
36 |     }
37 |   },
38 |   "system_prompt": {
39 |     "template": "You are OpenAI Code Assistant, a CLI tool that helps users with software engineering tasks and general information.\nUse the available tools to assist the user with their requests.\n\n# Tone and style\nYou should be concise, direct, and to the point. When you run a non-trivial bash command, \nyou should explain what the command does and why you are running it.\nOutput text to communicate with the user; all text you output outside of tool use is displayed to the user.\nRemember that your output will be displayed on a command line interface.\n\n# Tool usage policy\n- When doing file search, remember to search effectively with the available tools.\n- Always use the appropriate tool for the task.\n- Use parallel tool calls when appropriate to improve performance.\n- NEVER commit changes unless the user explicitly asks you to.\n- For weather queries, use the Weather tool to provide real-time information.\n\n# Tasks\nThe user will primarily request you perform software engineering tasks:\n1. Solving bugs\n2. Adding new functionality \n3. Refactoring code\n4. Explaining code\n5. Writing tests\n\nFor these tasks:\n1. Use search tools to understand the codebase\n2. Implement solutions using the available tools\n3. Verify solutions with tests if possible\n4. Run lint and typecheck commands when appropriate\n\nThe user may also ask for general information:\n1. Weather conditions\n2. Simple calculations\n3. General knowledge questions\n\n# Code style\n- Follow the existing code style of the project\n- Maintain consistent naming conventions\n- Use appropriate libraries that are already in the project\n- Add comments when code is complex or non-obvious\n\nIMPORTANT: You should minimize output tokens as much as possible while maintaining helpfulness, \nquality, and accuracy. Answer concisely with short lines of text unless the user asks for detail.",
40 |     "description": "System prompt for the assistant",
41 |     "parameters": {},
42 |     "default_model": "gpt-4o",
43 |     "metadata": {
44 |       "category": "system"
45 |     }
46 |   }
47 | }
48 | 
```
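Each template above pairs a `{placeholder}`-style format string with a `parameters` schema. A natural way to consume that structure is to validate supplied arguments against the schema before substituting with `str.format`; `render` below is a hypothetical helper, and the real server may substitute parameters differently.

```python
import json

# Trimmed copy of the "greeting" entry, inlined so the sketch is self-contained
TEMPLATES = json.loads("""
{"greeting": {
   "template": "Hello! The current time is {time}. How can I help you today?",
   "parameters": {"time": {"type": "string"}}}}
""")

def render(templates, name, **params):
    """Fill a template with str.format, rejecting missing parameters up front."""
    entry = templates[name]
    missing = set(entry.get("parameters", {})) - set(params)
    if missing:
        raise ValueError(f"missing parameters: {sorted(missing)}")
    return entry["template"].format(**params)

message = render(TEMPLATES, "greeting", time="09:00")
```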

--------------------------------------------------------------------------------
/install.sh:
--------------------------------------------------------------------------------

```bash
  1 | #!/bin/bash
  2 | # Installation script for Claude Code Python Edition
  3 | 
  4 | # Set up colors
  5 | GREEN='\033[0;32m'
  6 | YELLOW='\033[1;33m'
  7 | RED='\033[0;31m'
  8 | NC='\033[0m' # No Color
  9 | 
 10 | echo -e "${GREEN}Installing Claude Code Python Edition...${NC}"
 11 | 
 12 | # Check Python version
 13 | python_version=$(python3 --version 2>&1 | awk '{print $2}')
 14 | echo -e "${YELLOW}Detected Python version: ${python_version}${NC}"
 15 | 
 16 | # Check if Python version is at least 3.10
 17 | if ! python3 -c 'import sys; sys.exit(0 if sys.version_info >= (3, 10) else 1)'; then
 18 |     echo -e "${RED}Error: Python 3.10 or higher is required.${NC}"
 19 |     exit 1
 20 | fi
 21 | 
 22 | # Create virtual environment if it doesn't exist
 23 | if [ ! -d "venv" ]; then
 24 |     echo -e "${YELLOW}Creating virtual environment...${NC}"
 25 |     python3 -m venv venv
 26 |     if [ $? -ne 0 ]; then
 27 |         echo -e "${RED}Error creating virtual environment.${NC}"
 28 |         exit 1
 29 |     fi
 30 | else
 31 |     echo -e "${YELLOW}Using existing virtual environment.${NC}"
 32 | fi
 33 | 
 34 | # Activate virtual environment
 35 | echo -e "${YELLOW}Activating virtual environment...${NC}"
 36 | source venv/bin/activate
 37 | if [ $? -ne 0 ]; then
 38 |     echo -e "${RED}Error activating virtual environment.${NC}"
 39 |     exit 1
 40 | fi
 41 | 
 42 | # Install dependencies
 43 | echo -e "${YELLOW}Installing dependencies...${NC}"
 44 | pip install -r requirements.txt
 45 | if [ $? -ne 0 ]; then
 46 |     echo -e "${RED}Error installing dependencies.${NC}"
 47 |     exit 1
 48 | fi
 49 | 
 50 | # Install in development mode
 51 | echo -e "${YELLOW}Installing Claude Code in development mode...${NC}"
 52 | pip install -e .
 53 | if [ $? -ne 0 ]; then
 54 |     echo -e "${RED}Error installing package.${NC}"
 55 |     exit 1
 56 | fi
 57 | 
 58 | # Create .env file if it doesn't exist
 59 | if [ ! -f ".env" ]; then
 60 |     echo -e "${YELLOW}Creating .env file...${NC}"
 61 |     cat > .env << EOF
 62 | # API Keys (uncomment and add your keys)
 63 | # OPENAI_API_KEY=your_openai_api_key
 64 | # ANTHROPIC_API_KEY=your_anthropic_api_key
 65 | 
 66 | # Models (optional)
 67 | # OPENAI_MODEL=gpt-4o
 68 | # ANTHROPIC_MODEL=claude-3-opus-20240229
 69 | 
 70 | # Budget limit in dollars (optional)
 71 | # BUDGET_LIMIT=5.0
 72 | EOF
 73 |     echo -e "${YELLOW}Created .env file. Please edit it to add your API keys.${NC}"
 74 | else
 75 |     echo -e "${YELLOW}.env file already exists. Skipping creation.${NC}"
 76 | fi
 77 | 
 78 | # Create setup.py if it doesn't exist
 79 | if [ ! -f "setup.py" ]; then
 80 |     echo -e "${YELLOW}Creating setup.py...${NC}"
 81 |     cat > setup.py << EOF
 82 | from setuptools import setup, find_packages
 83 | 
 84 | setup(
 85 |     name="claude_code",
 86 |     version="0.1.0",
 87 |     packages=find_packages(),
 88 |     install_requires=[
 89 |         line.strip() for line in open("requirements.txt", "r").readlines()
 90 |     ],
 91 |     entry_points={
 92 |         "console_scripts": [
 93 |             "claude-code=claude_code.claude:app",
 94 |         ],
 95 |     },
 96 | )
 97 | EOF
 98 |     echo -e "${YELLOW}Created setup.py file.${NC}"
 99 | else
100 |     echo -e "${YELLOW}setup.py file already exists. Skipping creation.${NC}"
101 | fi
102 | 
103 | echo -e "${GREEN}Installation complete!${NC}"
104 | echo -e "${YELLOW}To activate the virtual environment, run:${NC}"
105 | echo -e "    source venv/bin/activate"
106 | echo -e "${YELLOW}To run Claude Code, use:${NC}"
107 | echo -e "    claude-code"
108 | echo -e "${YELLOW}Or:${NC}"
109 | echo -e "    python -m claude_code.claude"
110 | echo -e "${GREEN}Enjoy using Claude Code Python Edition!${NC}"
```

--------------------------------------------------------------------------------
/claude_code/commands/serve.py:
--------------------------------------------------------------------------------

```python
  1 | #!/usr/bin/env python3
  2 | # claude_code/commands/serve.py
  3 | """Command to start the MCP server."""
  4 | 
  5 | import os
  6 | import sys
  7 | import logging
  8 | import argparse
  9 | from typing import Dict, Any, Optional, List
 10 | 
 11 | from claude_code.mcp_server import initialize_server
 12 | 
 13 | # Setup logging
 14 | logging.basicConfig(level=logging.INFO)
 15 | logger = logging.getLogger(__name__)
 16 | 
 17 | 
 18 | def add_arguments(parser: argparse.ArgumentParser) -> None:
 19 |     """Add command-specific arguments to the parser.
 20 |     
 21 |     Args:
 22 |         parser: Argument parser
 23 |     """
 24 |     parser.add_argument(
 25 |         "--dev", 
 26 |         action="store_true", 
 27 |         help="Run in development mode with the MCP Inspector"
 28 |     )
 29 |     
 30 |     parser.add_argument(
 31 |         "--host", 
 32 |         type=str, 
 33 |         default="localhost", 
 34 |         help="Host to bind the server to"
 35 |     )
 36 |     
 37 |     parser.add_argument(
 38 |         "--port", 
 39 |         type=int, 
 40 |         default=8000, 
 41 |         help="Port to bind the server to"
 42 |     )
 43 |     
 44 |     parser.add_argument(
 45 |         "--dependencies", 
 46 |         type=str, 
 47 |         nargs="*", 
 48 |         help="Additional dependencies to install"
 49 |     )
 50 |     
 51 |     parser.add_argument(
 52 |         "--env-file", 
 53 |         type=str, 
 54 |         help="Path to environment file (.env)"
 55 |     )
 56 | 
 57 | 
 58 | def execute(args: argparse.Namespace) -> int:
 59 |     """Execute the serve command.
 60 |     
 61 |     Args:
 62 |         args: Command arguments
 63 |         
 64 |     Returns:
 65 |         Exit code
 66 |     """
 67 |     try:
 68 |         # Initialize the MCP server
 69 |         mcp_server = initialize_server()
 70 |         
 71 |         # Add any additional dependencies
 72 |         if args.dependencies:
 73 |             for dep in args.dependencies:
 74 |                 mcp_server.dependencies.append(dep)
 75 |         
 76 |         # Load environment variables from file
 77 |         if args.env_file:
 78 |             if not os.path.exists(args.env_file):
 79 |                 logger.error(f"Environment file not found: {args.env_file}")
 80 |                 return 1
 81 |                 
 82 |             import dotenv
 83 |             dotenv.load_dotenv(args.env_file)
 84 |         
 85 |         # Run the server
 86 |         if args.dev:
 87 |             logger.info(f"Starting MCP server in development mode on {args.host}:{args.port}")
 88 |             # Use the fastmcp dev mode
 89 |             import subprocess
 90 |             cmd = [
 91 |                 "fastmcp", "dev", 
 92 |                 "--module", "claude_code.mcp_server:mcp",
 93 |                 "--host", args.host,
 94 |                 "--port", str(args.port)
 95 |             ]
 96 |             return subprocess.call(cmd)
 97 |         else:
 98 |             # Run directly
 99 |             logger.info(f"Starting MCP server on {args.host}:{args.port}")
100 |             logger.info(f"Visit http://{args.host}:{args.port} for Claude Desktop configuration instructions")
101 |             
102 |             # FastMCP.run() method signature changed to accept host/port
103 |             try:
104 |                 mcp_server.run(host=args.host, port=args.port)
105 |             except TypeError:
106 |                 # Fallback for older versions of FastMCP
107 |                 logger.info("Using older FastMCP version without host/port parameters")
108 |                 mcp_server.run()
109 |                 
110 |             return 0
111 |             
112 |     except Exception as e:
113 |         logger.exception(f"Error running MCP server: {e}")
114 |         return 1
115 | 
116 | 
117 | def main() -> int:
118 |     """Run the serve command as a standalone script."""
119 |     parser = argparse.ArgumentParser(description="Run the Claude Code MCP server")
120 |     add_arguments(parser)
121 |     args = parser.parse_args()
122 |     return execute(args)
123 | 
124 | 
125 | if __name__ == "__main__":
126 |     sys.exit(main())
```

--------------------------------------------------------------------------------
/claude_code/lib/providers/base.py:
--------------------------------------------------------------------------------

```python
  1 | #!/usr/bin/env python3
  2 | # claude_code/lib/providers/base.py
  3 | """Base provider interface for LLM integration."""
  4 | 
  5 | import abc
  6 | from typing import Dict, List, Generator, Optional, Any, Union
  7 | 
  8 | 
  9 | class BaseProvider(abc.ABC):
 10 |     """Abstract base class for LLM providers.
 11 |     
 12 |     This class defines the interface that all LLM providers must implement.
 13 |     Providers are responsible for:
 14 |     - Generating completions from LLMs
 15 |     - Counting tokens
 16 |     - Managing rate limits
 17 |     - Tracking costs
 18 |     """
 19 |     
 20 |     @property
 21 |     @abc.abstractmethod
 22 |     def name(self) -> str:
 23 |         """Get the name of the provider."""
 24 |         pass
 25 |     
 26 |     @property
 27 |     @abc.abstractmethod
 28 |     def available_models(self) -> List[str]:
 29 |         """Get a list of available models from this provider."""
 30 |         pass
 31 |     
 32 |     @property
 33 |     @abc.abstractmethod
 34 |     def current_model(self) -> str:
 35 |         """Get the currently selected model."""
 36 |         pass
 37 |     
 38 |     @abc.abstractmethod
 39 |     def set_model(self, model_name: str) -> None:
 40 |         """Set the current model.
 41 |         
 42 |         Args:
 43 |             model_name: The name of the model to use
 44 |             
 45 |         Raises:
 46 |             ValueError: If the model is not available
 47 |         """
 48 |         pass
 49 |     
 50 |     @abc.abstractmethod
 51 |     def generate_completion(self, 
 52 |                            messages: List[Dict[str, Any]], 
 53 |                            tools: Optional[List[Dict[str, Any]]] = None,
 54 |                            temperature: float = 0.0,
 55 |                            stream: bool = True) -> Union[Dict[str, Any], Generator[Dict[str, Any], None, None]]:
 56 |         """Generate a completion from the provider.
 57 |         
 58 |         Args:
 59 |             messages: List of message dictionaries
 60 |             tools: Optional list of tool dictionaries
 61 |             temperature: Model temperature (0-1)
 62 |             stream: Whether to stream the response
 63 |             
 64 |         Returns:
 65 |             If stream=True, returns a generator of response chunks
 66 |             If stream=False, returns the complete response
 67 |         """
 68 |         pass
 69 |     
 70 |     @abc.abstractmethod
 71 |     def count_tokens(self, text: str) -> int:
 72 |         """Count tokens in text.
 73 |         
 74 |         Args:
 75 |             text: The text to count tokens for
 76 |             
 77 |         Returns:
 78 |             The number of tokens in the text
 79 |         """
 80 |         pass
 81 |     
 82 |     @abc.abstractmethod
 83 |     def count_message_tokens(self, messages: List[Dict[str, Any]]) -> Dict[str, int]:
 84 |         """Count tokens in a message list.
 85 |         
 86 |         Args:
 87 |             messages: List of message dictionaries
 88 |             
 89 |         Returns:
 90 |             Dictionary with 'input' and 'output' token counts
 91 |         """
 92 |         pass
 93 |     
 94 |     @abc.abstractmethod
 95 |     def get_model_info(self) -> Dict[str, Any]:
 96 |         """Get information about the current model.
 97 |         
 98 |         Returns:
 99 |             Dictionary with model information including:
100 |             - context_window: Maximum context window size
101 |             - input_cost_per_1k: Cost per 1K input tokens
102 |             - output_cost_per_1k: Cost per 1K output tokens
103 |             - capabilities: List of model capabilities
104 |         """
105 |         pass
106 |     
107 |     @property
108 |     @abc.abstractmethod
109 |     def cost_per_1k_tokens(self) -> Dict[str, float]:
110 |         """Get cost per 1K tokens for input and output.
111 |         
112 |         Returns:
113 |             Dictionary with 'input' and 'output' costs
114 |         """
115 |         pass
116 |     
117 |     @abc.abstractmethod
118 |     def validate_api_key(self) -> bool:
119 |         """Validate the API key.
120 |         
121 |         Returns:
122 |             True if the API key is valid, False otherwise
123 |         """
124 |         pass
125 |     
126 |     @abc.abstractmethod
127 |     def get_rate_limit_info(self) -> Dict[str, Any]:
128 |         """Get rate limit information.
129 |         
130 |         Returns:
131 |             Dictionary with rate limit information
132 |         """
133 |         pass
```
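
A concrete provider fills in these abstract methods. The standalone sketch below mirrors the interface with a toy implementation; the `EchoProvider` name and the four-characters-per-token heuristic are illustrative assumptions, not the repo's actual OpenAI provider.

```python
from typing import Any, Dict, List


class EchoProvider:
    """Toy provider: echoes the last user message instead of calling an API."""

    def generate_completion(self, messages: List[Dict[str, Any]],
                            tools=None, temperature: float = 0.0,
                            stream: bool = False) -> Dict[str, Any]:
        # Non-streaming path only; a real provider would branch on `stream`.
        last = messages[-1]["content"] if messages else ""
        return {"role": "assistant", "content": last}

    def count_tokens(self, text: str) -> int:
        # Rough heuristic: ~4 characters per token.
        return max(1, len(text) // 4)

    def count_message_tokens(self, messages: List[Dict[str, Any]]) -> Dict[str, int]:
        total = sum(self.count_tokens(str(m.get("content", ""))) for m in messages)
        return {"input": total, "output": 0}

    @property
    def cost_per_1k_tokens(self) -> Dict[str, float]:
        return {"input": 0.0, "output": 0.0}
```

A provider like this is handy in tests, since it exercises the same call sites as a real backend without network access.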

--------------------------------------------------------------------------------
/claude_code/README_MULTI_AGENT.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Claude Code Multi-Agent MCP Client
  2 | 
  3 | This is an implementation of a multi-agent Model Context Protocol (MCP) client for Claude Code. It allows you to run multiple Claude-powered agents that can communicate with each other while connected to the same MCP server.
  4 | 
  5 | ## Key Features
  6 | 
  7 | - **Multiple Specialized Agents**: Run agents with different roles and prompts simultaneously
  8 | - **Agent Synchronization**: Agents automatically share messages and respond to each other
  9 | - **Direct & Broadcast Messaging**: Send messages to specific agents or broadcast to all
 10 | - **Rich Interface**: Colorful terminal interface with command-based controls
 11 | - **Message History**: Track all conversations between agents
 12 | - **Customizable Roles**: Define agent specializations through configuration files
 13 | 
 14 | ## Prerequisites
 15 | 
 16 | - Python 3.8 or later
 17 | - Anthropic API key (set in your environment or `.env` file)
 18 | - Required packages: `mcp`, `anthropic`, `python-dotenv` (imported as `dotenv`), `rich`
 19 | 
 20 | ## Usage
 21 | 
 22 | ### Command Line Interface
 23 | 
 24 | The multi-agent client can be run directly from the command line:
 25 | 
 26 | ```bash
 27 | # Using the claude command (recommended)
 28 | claude mcp-multi-agent path/to/server.py [--config CONFIG_FILE]
 29 | 
 30 | # Or by running the client module directly
 31 | python -m claude_code.commands.multi_agent_client path/to/server.py [--config CONFIG_FILE]
 32 | ```
 33 | 
 34 | ### Arguments
 35 | 
 36 | - `server_script`: Path to the MCP server script (required, must be a `.py` or `.js` file)
 37 | - `--config`: Path to agent configuration JSON file (optional, default uses a single assistant agent)
 38 | 
 39 | ### Environment Variables
 40 | 
 41 | Create a `.env` file in your project directory with your Anthropic API key:
 42 | 
 43 | ```
 44 | ANTHROPIC_API_KEY=your_api_key_here
 45 | ```
 46 | 
 47 | ## Agent Configuration
 48 | 
 49 | Create a JSON file to define your agents:
 50 | 
 51 | ```json
 52 | [
 53 |   {
 54 |     "name": "Researcher",
 55 |     "role": "research specialist",
 56 |     "model": "claude-3-5-sonnet-20241022",
 57 |     "system_prompt": "You are a research specialist participating in a multi-agent conversation. Your primary role is to find information, analyze data, and provide well-researched answers."
 58 |   },
 59 |   {
 60 |     "name": "Coder",
 61 |     "role": "programming expert",
 62 |     "model": "claude-3-5-sonnet-20241022",
 63 |     "system_prompt": "You are a coding expert participating in a multi-agent conversation. Your primary role is to write, debug, and explain code."
 64 |   }
 65 | ]
 66 | ```
 67 | 
 68 | ## Interactive Commands
 69 | 
 70 | When running the multi-agent client, you can use these commands:
 71 | 
 72 | - `/help`: Show available commands
 73 | - `/agents`: List all active agents
 74 | - `/talk <agent> <message>`: Send a direct message to a specific agent
 75 | - `/history`: Show message history
 76 | - `/quit`, `/exit`: Exit the application
 77 | 
 78 | To broadcast a message to all agents, simply type your message without any command.
 79 | 
 80 | ## Example Session
 81 | 
 82 | This is a sample session with the multi-agent client:
 83 | 
 84 | 1. Start a server:
 85 |    ```bash
 86 |    python examples/echo_server.py
 87 |    ```
 88 | 
 89 | 2. Start the multi-agent client:
 90 |    ```bash
 91 |    claude mcp-multi-agent examples/echo_server.py --config examples/agents_config.json
 92 |    ```
 93 | 
 94 | 3. Broadcast a message to all agents:
 95 |    ```
 96 |    > I need to analyze some data and then create a visualization
 97 |    ```
 98 | 
 99 | 4. Send a direct message to the researcher agent:
100 |    ```
101 |    > /talk Researcher What statistical methods would be best for this analysis?
102 |    ```
103 | 
104 | 5. View the message history:
105 |    ```
106 |    > /history
107 |    ```
108 | 
109 | ## Use Cases
110 | 
111 | The multi-agent client is particularly useful for:
112 | 
113 | 1. **Complex Problem Solving**: Break down problems into parts handled by specialized agents
114 | 2. **Collaborative Development**: Use a researcher, coder, and critic to develop better solutions
115 | 3. **Debate and Refinement**: Have agents with different perspectives refine ideas
116 | 4. **Automated Workflows**: Set up agents that collaborate on tasks without human intervention
117 | 5. **Education**: Create teaching scenarios where agents play different roles
118 | 
119 | ## Troubleshooting
120 | 
121 | - If agents aren't responding to each other, check for errors in your configuration file
122 | - For better performance, use smaller models for simple agents
123 | - Make sure your Anthropic API key has sufficient quota for multiple simultaneous requests
124 | - Use the `/history` command to debug message flow between agents
125 | 
126 | ## License
127 | 
128 | Same license as Claude Code.
```
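
The agent configuration format above can be loaded and sanity-checked in a few lines. The field names match the README's JSON example, but the validation rules and default model here are illustrative assumptions, not the client's actual schema.

```python
import json
from typing import Any, Dict, List

# Fields every agent entry is assumed to need (mirrors the README example).
REQUIRED_FIELDS = ("name", "role", "system_prompt")


def load_agent_config(text: str) -> List[Dict[str, Any]]:
    """Parse an agents JSON document and check each entry has the
    fields the multi-agent client expects (illustrative check only)."""
    agents = json.loads(text)
    if not isinstance(agents, list):
        raise ValueError("Config must be a JSON list of agent objects")
    for i, agent in enumerate(agents):
        missing = [f for f in REQUIRED_FIELDS if f not in agent]
        if missing:
            raise ValueError(f"Agent #{i} missing fields: {missing}")
        # Fall back to a default model when none is given.
        agent.setdefault("model", "claude-3-5-sonnet-20241022")
    return agents
```

Failing fast on a malformed config is useful here because, as the troubleshooting section notes, configuration errors are the usual reason agents stop responding to each other.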

--------------------------------------------------------------------------------
/README_modal_mcp.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Modal MCP Server
  2 | 
  3 | This project provides an OpenAI-compatible API server running on Modal.com with a Model Context Protocol (MCP) adapter.
  4 | 
  5 | ## Components
  6 | 
  7 | 1. **Modal OpenAI-compatible Server** (`modal_mcp_server.py`): A full-featured OpenAI-compatible API server that runs on Modal.com's infrastructure.
  8 | 
  9 | 2. **MCP Adapter** (`mcp_modal_adapter.py`): A FastAPI server that adapts the OpenAI API to the Model Context Protocol (MCP).
 10 | 
 11 | 3. **Deployment Script** (`deploy_modal_mcp.py`): A helper script to deploy both components.
 12 | 
 13 | ## Features
 14 | 
 15 | - **OpenAI-compatible API**: Full compatibility with OpenAI's chat completions API
 16 | - **Multiple Models**: Support for various models including Llama 3, Phi-4, DeepSeek-R1, and more
 17 | - **Streaming Support**: Real-time streaming of model outputs
 18 | - **Advanced Caching**: Efficient caching of responses for improved performance
 19 | - **Rate Limiting**: Token bucket algorithm for fair API usage
 20 | - **MCP Compatibility**: Adapter for Model Context Protocol support
 21 | 
 22 | ## Prerequisites
 23 | 
 24 | - Python 3.10+
 25 | - Modal.com account and CLI set up (`pip install modal`)
 26 | - FastAPI and Uvicorn (`pip install fastapi uvicorn`)
 27 | - HTTPX for async HTTP requests (`pip install httpx`)
 28 | 
 29 | ## Installation
 30 | 
 31 | 1. Install dependencies:
 32 | 
 33 | ```bash
 34 | pip install modal fastapi uvicorn httpx
 35 | ```
 36 | 
 37 | 2. Set up Modal CLI:
 38 | 
 39 | ```bash
 40 | modal token new
 41 | ```
 42 | 
 43 | ## Deployment
 44 | 
 45 | ### Option 1: Using the deployment script
 46 | 
 47 | The easiest way to deploy is using the provided script:
 48 | 
 49 | ```bash
 50 | python deploy_modal_mcp.py
 51 | ```
 52 | 
 53 | This will:
 54 | 1. Deploy the OpenAI-compatible server to Modal
 55 | 2. Start the MCP adapter locally
 56 | 3. Open a browser to verify the deployment
 57 | 
 58 | ### Option 2: Manual deployment
 59 | 
 60 | 1. Deploy the Modal server:
 61 | 
 62 | ```bash
 63 | modal deploy modal_mcp_server.py
 64 | ```
 65 | 
 66 | 2. Note the URL of your deployed Modal app.
 67 | 
 68 | 3. Set environment variables for the MCP adapter:
 69 | 
 70 | ```bash
 71 | export MODAL_API_URL="https://your-modal-app-url.modal.run"
 72 | export MODAL_API_KEY="sk-modal-llm-api-key"  # Default key
 73 | export DEFAULT_MODEL="phi-4"  # Or any other supported model
 74 | ```
 75 | 
 76 | 4. Start the MCP adapter:
 77 | 
 78 | ```bash
 79 | uvicorn mcp_modal_adapter:app --host 0.0.0.0 --port 8000
 80 | ```
 81 | 
 82 | ## Usage
 83 | 
 84 | ### MCP API Endpoints
 85 | 
 86 | - `GET /health`: Health check endpoint
 87 | - `GET /prompts`: List available prompt templates
 88 | - `GET /prompts/{prompt_id}`: Get a specific prompt template
 89 | - `POST /context/{prompt_id}`: Generate context from a prompt template
 90 | - `POST /prompts`: Add a new prompt template
 91 | - `DELETE /prompts/{prompt_id}`: Delete a prompt template
 92 | 
 93 | ### Example: Generate context
 94 | 
 95 | ```bash
 96 | curl -X POST "http://localhost:8000/context/default" \
 97 |   -H "Content-Type: application/json" \
 98 |   -d '{
 99 |     "parameters": {
100 |       "prompt": "Explain quantum computing in simple terms"
101 |     },
102 |     "model": "phi-4",
103 |     "stream": false
104 |   }'
105 | ```
106 | 
107 | ### Example: Streaming response
108 | 
109 | ```bash
110 | curl -X POST "http://localhost:8000/context/default" \
111 |   -H "Content-Type: application/json" \
112 |   -d '{
113 |     "parameters": {
114 |       "prompt": "Write a short story about AI"
115 |     },
116 |     "model": "phi-4",
117 |     "stream": true
118 |   }'
119 | ```
120 | 
121 | ## Advanced Configuration
122 | 
123 | ### Adding Custom Prompt Templates
124 | 
125 | ```bash
126 | curl -X POST "http://localhost:8000/prompts" \
127 |   -H "Content-Type: application/json" \
128 |   -d '{
129 |     "id": "code-generator",
130 |     "name": "Code Generator",
131 |     "description": "Generates code based on a description",
132 |     "template": "Write code in {language} that accomplishes the following: {task}",
133 |     "parameters": {
134 |       "language": {
135 |         "type": "string",
136 |         "description": "Programming language"
137 |       },
138 |       "task": {
139 |         "type": "string",
140 |         "description": "Task description"
141 |       }
142 |     }
143 |   }'
144 | ```
145 | 
146 | ### Using Custom Prompt Templates
147 | 
148 | ```bash
149 | curl -X POST "http://localhost:8000/context/code-generator" \
150 |   -H "Content-Type: application/json" \
151 |   -d '{
152 |     "parameters": {
153 |       "language": "Python",
154 |       "task": "Create a function that calculates the Fibonacci sequence"
155 |     },
156 |     "model": "phi-4"
157 |   }'
158 | ```
159 | 
160 | ## Supported Models
161 | 
162 | - **vLLM Models**:
163 |   - `llama3-8b`: Meta Llama 3.1 8B Instruct (quantized)
164 |   - `mistral-7b`: Mistral 7B Instruct v0.2
165 |   - `tiny-llama-1.1b`: TinyLlama 1.1B Chat
166 | 
167 | - **Llama.cpp Models**:
168 |   - `deepseek-r1`: DeepSeek R1 (quantized)
169 |   - `phi-4`: Microsoft Phi-4 (quantized)
170 |   - `phi-2`: Microsoft Phi-2 (quantized)
171 | 
172 | ## License
173 | 
174 | MIT
175 | 
```
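
The curl examples above map one-to-one onto Python. This sketch only assembles the request body and shows the shape of the POST; the payload fields are taken from the examples, and nothing here is imported from or validated against the adapter's code.

```python
def build_context_request(prompt: str, model: str = "phi-4",
                          stream: bool = False) -> dict:
    """Assemble the JSON body the /context/{prompt_id} endpoint expects,
    mirroring the curl examples above."""
    return {"parameters": {"prompt": prompt}, "model": model, "stream": stream}


def post_context(client, base_url: str, prompt_id: str, body: dict) -> dict:
    """POST the body with an httpx-style client and return parsed JSON."""
    resp = client.post(f"{base_url}/context/{prompt_id}", json=body)
    resp.raise_for_status()
    return resp.json()
```

With httpx installed and the adapter running, `post_context(httpx.Client(), "http://localhost:8000", "default", build_context_request("Explain quantum computing in simple terms"))` would mirror the first curl call.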

--------------------------------------------------------------------------------
/deploy_modal_mcp.py:
--------------------------------------------------------------------------------

```python
  1 | #!/usr/bin/env python3
  2 | """
  3 | Deployment script for Modal MCP Server
  4 | """
  5 | import os
  6 | import sys
  7 | import argparse
  8 | import subprocess
  9 | import webbrowser
 10 | import time
 11 | from pathlib import Path
 12 | 
 13 | def check_dependencies():
 14 |     """Check if required dependencies are installed"""
 15 |     try:
 16 |         import modal
 17 |         import httpx
 18 |         import fastapi
 19 |         import uvicorn
 20 |         print("✅ All required dependencies are installed")
 21 |         return True
 22 |     except ImportError as e:
 23 |         print(f"❌ Missing dependency: {e}")
 24 |         print("Please install required dependencies:")
 25 |         print("pip install modal httpx fastapi uvicorn")
 26 |         return False
 27 | 
 28 | def deploy_modal_server(args):
 29 |     """Deploy the Modal OpenAI-compatible server"""
 30 |     print("Deploying Modal OpenAI-compatible server...")
 31 |     
 32 |     # Run the Modal deployment command
 33 |     cmd = ["modal", "deploy", "modal_mcp_server.py"]
 34 |     
 35 |     try:
 36 |         result = subprocess.run(cmd, capture_output=True, text=True)
 37 |         
 38 |         if result.returncode != 0:
 39 |             print(f"❌ Error deploying Modal server: {result.stderr}")
 40 |             return None
 41 |         
 42 |         # Extract the deployment URL from the output
 43 |         for line in result.stdout.splitlines():
 44 |             if "https://" in line and "modal.run" in line:
 45 |                 url = next(t for t in line.split() if t.startswith("https://"))
 46 |                 print(f"✅ Modal server deployed at: {url}")
 47 |                 return url
 48 |         
 49 |         print("❌ Could not find deployment URL in output")
 50 |         print(result.stdout)
 51 |         return None
 52 |         
 53 |     except Exception as e:
 54 |         print(f"❌ Error deploying Modal server: {e}")
 55 |         return None
 56 | 
 57 | def deploy_mcp_adapter(modal_url, args):
 58 |     """Deploy the MCP adapter server"""
 59 |     print("Deploying MCP adapter server...")
 60 |     
 61 |     # Set environment variables for the adapter
 62 |     os.environ["MODAL_API_URL"] = modal_url
 63 |     os.environ["MODAL_API_KEY"] = args.api_key
 64 |     os.environ["DEFAULT_MODEL"] = args.model
 65 |     
 66 |     # Start the adapter server
 67 |     try:
 68 |         import uvicorn
 69 |         from mcp_modal_adapter import app
 70 |         
 71 |         # Start in a separate process if not in foreground mode
 72 |         if not args.foreground:
 73 |             print(f"Starting MCP adapter server on port {args.port}...")
 74 |             cmd = [
 75 |                 sys.executable, "-m", "uvicorn", "mcp_modal_adapter:app", 
 76 |                 "--host", "0.0.0.0", "--port", str(args.port)
 77 |             ]
 78 |             
 79 |             # Use subprocess.Popen to run in background
 80 |             process = subprocess.Popen(
 81 |                 cmd,
 82 |                 stdout=subprocess.PIPE if not args.verbose else None,
 83 |                 stderr=subprocess.PIPE if not args.verbose else None
 84 |             )
 85 |             
 86 |             # Wait a bit to make sure it starts
 87 |             time.sleep(2)
 88 |             
 89 |             # Check if process is still running
 90 |             if process.poll() is None:
 91 |                 print(f"✅ MCP adapter server running on http://localhost:{args.port}")
 92 |                 return f"http://localhost:{args.port}"
 93 |             else:
 94 |                 stdout, stderr = process.communicate()
 95 |                 print(f"❌ Error starting MCP adapter server: {stderr.decode() if stderr else 'Unknown error'}")
 96 |                 return None
 97 |         else:
 98 |             # Run in foreground
 99 |             print(f"Starting MCP adapter server on port {args.port} in foreground mode...")
100 |             uvicorn.run(app, host="0.0.0.0", port=args.port)
101 |             return None  # Will never reach here in foreground mode
102 |             
103 |     except Exception as e:
104 |         print(f"❌ Error starting MCP adapter server: {e}")
105 |         return None
106 | 
107 | def main():
108 |     """Main entry point"""
109 |     parser = argparse.ArgumentParser(description="Deploy Modal MCP Server")
110 |     parser.add_argument("--port", type=int, default=8000, help="Port for MCP adapter server")
111 |     parser.add_argument("--api-key", type=str, default="sk-modal-llm-api-key", help="API key for Modal server")
112 |     parser.add_argument("--model", type=str, default="phi-4", help="Default model to use")
113 |     parser.add_argument("--foreground", action="store_true", help="Run MCP adapter in foreground")
114 |     parser.add_argument("--verbose", action="store_true", help="Show verbose output")
115 |     parser.add_argument("--skip-modal-deploy", action="store_true", help="Skip Modal server deployment")
116 |     parser.add_argument("--modal-url", type=str, help="Use existing Modal server URL")
117 |     
118 |     args = parser.parse_args()
119 |     
120 |     # Check dependencies
121 |     if not check_dependencies():
122 |         return 1
123 |     
124 |     # Deploy Modal server if not skipped
125 |     modal_url = args.modal_url
126 |     if not args.skip_modal_deploy and not modal_url:
127 |         modal_url = deploy_modal_server(args)
128 |     if not modal_url:  # also catches --skip-modal-deploy without --modal-url
129 |         return 1
130 |     
131 |     # Deploy MCP adapter
132 |     mcp_url = deploy_mcp_adapter(modal_url, args)
133 |     if not mcp_url and not args.foreground:
134 |         return 1
135 |     
136 |     # Open browser if not in foreground mode
137 |     if mcp_url and not args.foreground:
138 |         print(f"Opening browser to MCP server health check...")
139 |         webbrowser.open(f"{mcp_url}/health")
140 |         
141 |         print("\nMCP Server is now running!")
142 |         print(f"- Health check: {mcp_url}/health")
143 |         print(f"- List prompts: {mcp_url}/prompts")
144 |         print(f"- Modal API: {modal_url}")
145 |         
146 |         print("\nPress Ctrl+C to stop the server")
147 |         try:
148 |             while True:
149 |                 time.sleep(1)
150 |         except KeyboardInterrupt:
151 |             print("\nStopping server...")
152 |     
153 |     return 0
154 | 
155 | if __name__ == "__main__":
156 |     sys.exit(main())
157 | 
```
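
The deployment script above sleeps a fixed two seconds before assuming the adapter is up; polling the `/health` endpoint is more robust. This standalone sketch uses only the standard library and is an illustration, not part of the script.

```python
import time
import urllib.error
import urllib.request


def wait_for_health(url: str, timeout: float = 30.0, interval: float = 0.5) -> bool:
    """Poll a health-check URL until it answers 200 or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=interval) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # Server not up yet; retry after a short pause.
        time.sleep(interval)
    return False
```

Calling `wait_for_health(f"http://localhost:{args.port}/health")` in place of the fixed `time.sleep(2)` would avoid reporting success before the adapter is actually listening.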

--------------------------------------------------------------------------------
/claude_code/commands/client.py:
--------------------------------------------------------------------------------

```python
  1 | #!/usr/bin/env python3
  2 | # claude_code/commands/client.py
  3 | """MCP client implementation for testing MCP servers."""
  4 | 
  5 | import asyncio
  6 | import sys
  7 | import os
  8 | import logging
  9 | import argparse
 10 | from typing import Optional, Dict, Any
 11 | from contextlib import AsyncExitStack
 12 | 
 13 | from mcp import ClientSession, StdioServerParameters
 14 | from mcp.client.stdio import stdio_client
 15 | 
 16 | from anthropic import Anthropic
 17 | from dotenv import load_dotenv
 18 | 
 19 | # Setup logging
 20 | logging.basicConfig(level=logging.INFO)
 21 | logger = logging.getLogger(__name__)
 22 | 
 23 | # Load environment variables
 24 | load_dotenv()
 25 | 
 26 | 
 27 | class MCPClient:
 28 |     """Model Context Protocol client for testing MCP servers."""
 29 |     
 30 |     def __init__(self, model: str = "claude-3-5-sonnet-20241022"):
 31 |         """Initialize the MCP client.
 32 |         
 33 |         Args:
 34 |             model: The Claude model to use
 35 |         """
 36 |         # Initialize session and client objects
 37 |         self.session: Optional[ClientSession] = None
 38 |         self.exit_stack = AsyncExitStack()
 39 |         self.anthropic = Anthropic()
 40 |         self.model = model
 41 | 
 42 |     async def connect_to_server(self, server_script_path: str):
 43 |         """Connect to an MCP server.
 44 | 
 45 |         Args:
 46 |             server_script_path: Path to the server script (.py or .js)
 47 |         """
 48 |         is_python = server_script_path.endswith('.py')
 49 |         is_js = server_script_path.endswith('.js')
 50 |         if not (is_python or is_js):
 51 |             raise ValueError("Server script must be a .py or .js file")
 52 | 
 53 |         command = "python" if is_python else "node"
 54 |         server_params = StdioServerParameters(
 55 |             command=command,
 56 |             args=[server_script_path],
 57 |             env=None
 58 |         )
 59 | 
 60 |         stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params))
 61 |         self.stdio, self.write = stdio_transport
 62 |         self.session = await self.exit_stack.enter_async_context(ClientSession(self.stdio, self.write))
 63 | 
 64 |         await self.session.initialize()
 65 | 
 66 |         # List available tools
 67 |         response = await self.session.list_tools()
 68 |         tools = response.tools
 69 |         logger.info(f"Connected to server with tools: {[tool.name for tool in tools]}")
 70 |         print("\nConnected to server with tools:", [tool.name for tool in tools])
 71 | 
 72 |     async def process_query(self, query: str) -> str:
 73 |         """Process a query using Claude and available tools.
 74 |         
 75 |         Args:
 76 |             query: The user query
 77 |             
 78 |         Returns:
 79 |             The response text
 80 |         """
 81 |         messages = [
 82 |             {
 83 |                 "role": "user",
 84 |                 "content": query
 85 |             }
 86 |         ]
 87 | 
 88 |         response = await self.session.list_tools()
 89 |         available_tools = [{
 90 |             "name": tool.name,
 91 |             "description": tool.description,
 92 |             "input_schema": tool.inputSchema
 93 |         } for tool in response.tools]
 94 | 
 95 |         # Initial Claude API call
 96 |         response = self.anthropic.messages.create(
 97 |             model=self.model,
 98 |             max_tokens=1000,
 99 |             messages=messages,
100 |             tools=available_tools
101 |         )
102 | 
103 |         # Process response and handle tool calls
104 |         tool_results = []
105 |         final_text = []
106 | 
107 |         assistant_message_content = []
108 |         for content in response.content:
109 |             if content.type == 'text':
110 |                 final_text.append(content.text)
111 |                 assistant_message_content.append(content)
112 |             elif content.type == 'tool_use':
113 |                 tool_name = content.name
114 |                 tool_args = content.input
115 | 
116 |                 # Execute tool call
117 |                 result = await self.session.call_tool(tool_name, tool_args)
118 |                 tool_results.append({"call": tool_name, "result": result})
119 |                 final_text.append(f"[Calling tool {tool_name} with args {tool_args}]")
120 | 
121 |                 assistant_message_content.append(content)
122 |                 messages.append({
123 |                     "role": "assistant",
124 |                     "content": assistant_message_content
125 |                 })
126 |                 messages.append({
127 |                     "role": "user",
128 |                     "content": [
129 |                         {
130 |                             "type": "tool_result",
131 |                             "tool_use_id": content.id,
132 |                             "content": result.content
133 |                         }
134 |                     ]
135 |                 })
136 | 
137 |                 # Get next response from Claude
138 |                 response = self.anthropic.messages.create(
139 |                     model=self.model,
140 |                     max_tokens=1000,
141 |                     messages=messages,
142 |                     tools=available_tools
143 |                 )
144 | 
145 |                 final_text.append(response.content[0].text)
146 | 
147 |         return "\n".join(final_text)
148 | 
149 |     async def chat_loop(self):
150 |         """Run an interactive chat loop."""
151 |         print("\nMCP Client Started!")
152 |         print("Type your queries or 'quit' to exit.")
153 | 
154 |         while True:
155 |             try:
156 |                 query = input("\nQuery: ").strip()
157 | 
158 |                 if query.lower() == 'quit':
159 |                     break
160 | 
161 |                 response = await self.process_query(query)
162 |                 print("\n" + response)
163 | 
164 |             except Exception as e:
165 |                 print(f"\nError: {str(e)}")
166 |                 logger.exception("Error processing query")
167 | 
168 |     async def cleanup(self):
169 |         """Clean up resources."""
170 |         await self.exit_stack.aclose()
171 | 
172 | 
173 | def add_arguments(parser: argparse.ArgumentParser) -> None:
174 |     """Add command-specific arguments to the parser.
175 |     
176 |     Args:
177 |         parser: Argument parser
178 |     """
179 |     parser.add_argument(
180 |         "server_script",
181 |         type=str,
182 |         help="Path to the server script (.py or .js)"
183 |     )
184 |     
185 |     parser.add_argument(
186 |         "--model",
187 |         type=str,
188 |         default="claude-3-5-sonnet-20241022",
189 |         help="Claude model to use"
190 |     )
191 | 
192 | 
193 | def execute(args: argparse.Namespace) -> int:
194 |     """Execute the client command.
195 |     
196 |     Args:
197 |         args: Command arguments
198 |         
199 |     Returns:
200 |         Exit code
201 |     """
202 |     try:
203 |         client = MCPClient(model=args.model)
204 |         
205 |         async def run_client():
206 |             try:
207 |                 await client.connect_to_server(args.server_script)
208 |                 await client.chat_loop()
209 |             finally:
210 |                 await client.cleanup()
211 |                 
212 |         asyncio.run(run_client())
213 |         return 0
214 |         
215 |     except Exception as e:
216 |         logger.exception(f"Error running MCP client: {e}")
217 |         print(f"\nError: {str(e)}")
218 |         return 1
219 | 
220 | 
221 | def main() -> int:
222 |     """Run the client command as a standalone script."""
223 |     parser = argparse.ArgumentParser(description="Run the Claude Code MCP client")
224 |     add_arguments(parser)
225 |     args = parser.parse_args()
226 |     return execute(args)
227 | 
228 | 
229 | if __name__ == "__main__":
230 |     sys.exit(main())
```
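
`connect_to_server` infers the launch command from the script's extension. That dispatch can be factored into a small pure function, sketched here as a standalone helper (not imported from the client).

```python
from pathlib import Path
from typing import List


def server_command(server_script_path: str) -> List[str]:
    """Return the argv used to spawn an MCP stdio server, mirroring the
    extension check in MCPClient.connect_to_server."""
    suffix = Path(server_script_path).suffix
    if suffix == ".py":
        return ["python", server_script_path]
    if suffix == ".js":
        return ["node", server_script_path]
    raise ValueError("Server script must be a .py or .js file")
```

Keeping the check in a pure function makes the unsupported-extension error easy to unit-test without spawning a subprocess.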

--------------------------------------------------------------------------------
/examples/echo_server.py:
--------------------------------------------------------------------------------

```python
  1 | #!/usr/bin/env python3
  2 | """
  3 | Simple Echo MCP Server Example
  4 | 
  5 | This is a basic implementation of a Model Context Protocol (MCP) server
  6 | that simply echoes back the parameters it receives.
  7 | """
  8 | 
  9 | import os
 10 | import json
 11 | import time
 12 | import uuid
 13 | from typing import Dict, List, Any, Optional
 14 | from fastapi import FastAPI, HTTPException, Request
 15 | from fastapi.responses import JSONResponse
 16 | from fastapi.middleware.cors import CORSMiddleware
 17 | from pydantic import BaseModel, Field
 18 | import uvicorn
 19 | 
 20 | # MCP Protocol Models
 21 | class MCPHealthResponse(BaseModel):
 22 |     status: str = "healthy"
 23 |     version: str = "1.0.0"
 24 |     protocol_version: str = "0.1.0"
 25 |     provider: str = "Echo MCP Server"
 26 |     models: List[str] = ["echo-model"]
 27 | 
 28 | class MCPContextRequest(BaseModel):
 29 |     prompt_id: str
 30 |     parameters: Dict[str, Any] = Field(default_factory=dict)
 31 |     model: Optional[str] = None
 32 |     stream: bool = False
 33 |     user: Optional[str] = None
 34 |     conversation_id: Optional[str] = None
 35 |     message_id: Optional[str] = None
 36 | 
 37 | class MCPContextResponse(BaseModel):
 38 |     context: str
 39 |     context_id: str
 40 |     model: str
 41 |     usage: Dict[str, int] = Field(default_factory=dict)
 42 |     metadata: Dict[str, Any] = Field(default_factory=dict)
 43 | 
 44 | class MCPPromptTemplate(BaseModel):
 45 |     id: str
 46 |     template: str
 47 |     description: Optional[str] = None
 48 |     parameters: Dict[str, Dict[str, Any]] = Field(default_factory=dict)
 49 |     default_model: Optional[str] = None
 50 |     metadata: Dict[str, Any] = Field(default_factory=dict)
 51 | 
 52 | class MCPPromptLibraryResponse(BaseModel):
 53 |     prompts: List[MCPPromptTemplate]
 54 |     count: int
 55 | 
 56 | # Create FastAPI app
 57 | app = FastAPI(
 58 |     title="Echo MCP Server",
 59 |     description="A simple MCP server that echoes back parameters",
 60 |     version="1.0.0",
 61 | )
 62 | 
 63 | # Add CORS middleware
 64 | app.add_middleware(
 65 |     CORSMiddleware,
 66 |     allow_origins=["*"],
 67 |     allow_credentials=True,
 68 |     allow_methods=["*"],
 69 |     allow_headers=["*"],
 70 | )
 71 | 
 72 | # Define prompt templates
 73 | prompt_templates = {
 74 |     "echo": {
 75 |         "template": "You said: {message}",
 76 |         "description": "Echoes back the message",
 77 |         "parameters": {
 78 |             "message": {
 79 |                 "type": "string",
 80 |                 "description": "The message to echo"
 81 |             }
 82 |         },
 83 |         "default_model": "echo-model",
 84 |         "metadata": {
 85 |             "category": "utility"
 86 |         }
 87 |     },
 88 |     "reverse": {
 89 |         "template": "Reversed: {message}",
 90 |         "description": "Reverses the message",
 91 |         "parameters": {
 92 |             "message": {
 93 |                 "type": "string",
 94 |                 "description": "The message to reverse"
 95 |             }
 96 |         },
 97 |         "default_model": "echo-model",
 98 |         "metadata": {
 99 |             "category": "utility"
100 |         }
101 |     }
102 | }
103 | 
104 | # MCP Protocol Routes
105 | @app.get("/", response_model=MCPHealthResponse)
106 | async def health_check():
107 |     """Health check endpoint required by MCP protocol"""
108 |     return MCPHealthResponse()
109 | 
110 | @app.post("/context", response_model=MCPContextResponse)
111 | async def get_context(request: MCPContextRequest):
112 |     """Get context for a prompt template with parameters"""
113 |     try:
114 |         # Check if prompt template exists
115 |         if request.prompt_id not in prompt_templates:
116 |             raise HTTPException(
117 |                 status_code=404,
118 |                 detail=f"Prompt template '{request.prompt_id}' not found"
119 |             )
120 |         
121 |         # Get prompt template
122 |         template = prompt_templates[request.prompt_id]
123 |         
124 |         # Use default model if not specified
125 |         model = request.model or template.get("default_model", "echo-model")
126 |         
127 |         # Generate context ID
128 |         context_id = str(uuid.uuid4())
129 |         
130 |         # Process template with parameters
131 |         try:
132 |             if request.prompt_id == "echo":
133 |                 context = f"Echo: {request.parameters.get('message', '')}"
134 |             elif request.prompt_id == "reverse":
135 |                 message = request.parameters.get('message', '')
136 |                 context = f"Reversed: {message[::-1]}"
137 |             else:
138 |                 context = template["template"].format(**request.parameters)
139 |         except KeyError as e:
140 |             raise HTTPException(
141 |                 status_code=400,
142 |                 detail=f"Missing required parameter: {e}"
143 |             )
144 |         
145 |         # Calculate token usage (simplified)
146 |         token_estimate = len(context.split())
147 |         usage = {
148 |             "prompt_tokens": token_estimate,
149 |             "completion_tokens": 0,
150 |             "total_tokens": token_estimate
151 |         }
152 |         
153 |         return MCPContextResponse(
154 |             context=context,
155 |             context_id=context_id,
156 |             model=model,
157 |             usage=usage,
158 |             metadata={
159 |                 "prompt_id": request.prompt_id,
160 |                 "timestamp": time.time()
161 |             }
162 |         )
163 |         
164 |     except HTTPException:
165 |         raise
166 |     except Exception as e:
167 |         raise HTTPException(
168 |             status_code=500,
169 |             detail=f"Error processing context: {str(e)}"
170 |         )
171 | 
172 | @app.get("/prompts", response_model=MCPPromptLibraryResponse)
173 | async def get_prompts():
174 |     """Get available prompt templates"""
175 |     prompts = [
176 |         MCPPromptTemplate(
177 |             id=prompt_id,
178 |             template=template["template"],
179 |             description=template.get("description", ""),
180 |             parameters=template.get("parameters", {}),
181 |             default_model=template.get("default_model", "echo-model"),
182 |             metadata=template.get("metadata", {})
183 |         )
184 |         for prompt_id, template in prompt_templates.items()
185 |     ]
186 |     
187 |     return MCPPromptLibraryResponse(
188 |         prompts=prompts,
189 |         count=len(prompts)
190 |     )
191 | 
192 | @app.get("/prompts/{prompt_id}", response_model=MCPPromptTemplate)
193 | async def get_prompt(prompt_id: str):
194 |     """Get a specific prompt template"""
195 |     if prompt_id not in prompt_templates:
196 |         raise HTTPException(
197 |             status_code=404,
198 |             detail=f"Prompt template '{prompt_id}' not found"
199 |         )
200 |     
201 |     template = prompt_templates[prompt_id]
202 |     return MCPPromptTemplate(
203 |         id=prompt_id,
204 |         template=template["template"],
205 |         description=template.get("description", ""),
206 |         parameters=template.get("parameters", {}),
207 |         default_model=template.get("default_model", "echo-model"),
208 |         metadata=template.get("metadata", {})
209 |     )
210 | 
211 | # Error handlers
212 | @app.exception_handler(HTTPException)
213 | async def http_exception_handler(request: Request, exc: HTTPException):
214 |     """Handle HTTP exceptions in MCP format"""
215 |     return JSONResponse(
216 |         status_code=exc.status_code,
217 |         content={
218 |             "error": exc.detail,
219 |             "error_type": "http_error",
220 |             "status_code": exc.status_code,
221 |             "details": exc.detail if isinstance(exc.detail, dict) else None
222 |         }
223 |     )
224 | 
225 | @app.exception_handler(Exception)
226 | async def general_exception_handler(request: Request, exc: Exception):
227 |     """Handle general exceptions in MCP format"""
228 |     return JSONResponse(
229 |         status_code=500,
230 |         content={
231 |             "error": str(exc),
232 |             "error_type": "server_error",
233 |             "status_code": 500,
234 |             "details": None
235 |         }
236 |     )
237 | 
238 | if __name__ == "__main__":
239 |     uvicorn.run(app, host="127.0.0.1", port=8000)
240 | 
```
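The `/context` handler above special-cases the `echo` and `reverse` templates and falls back to `str.format` for everything else. That dispatch can be exercised without starting the server; the sketch below reproduces it in isolation (`render_context` and the `greet` template are illustrative names, not part of the module):

```python
# Stand-alone reproduction of the /context dispatch logic, useful for
# unit-testing template processing without running FastAPI or uvicorn.

def render_context(prompt_id: str, parameters: dict, templates: dict) -> str:
    if prompt_id not in templates:
        raise KeyError(f"Prompt template '{prompt_id}' not found")
    if prompt_id == "echo":
        return f"Echo: {parameters.get('message', '')}"
    if prompt_id == "reverse":
        message = parameters.get('message', '')
        return f"Reversed: {message[::-1]}"
    # Generic templates are rendered with str.format; a missing key
    # raises KeyError, which the server maps to a 400 response.
    return templates[prompt_id]["template"].format(**parameters)

templates = {
    "echo": {"template": "{message}"},
    "reverse": {"template": "{message}"},
    "greet": {"template": "Hello, {name}!"},  # hypothetical extra template
}

print(render_context("reverse", {"message": "abc"}, templates))  # Reversed: cba
print(render_context("greet", {"name": "MCP"}, templates))       # Hello, MCP!
```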

--------------------------------------------------------------------------------
/claude_code/lib/tools/ai_tools.py:
--------------------------------------------------------------------------------

```python
  1 | #!/usr/bin/env python3
  2 | # claude_code/lib/tools/ai_tools.py
  3 | """AI-powered tools for generation and analysis."""
  4 | 
  5 | import os
  6 | import logging
  7 | import json
  8 | import base64
  9 | import requests
 10 | import tempfile
 11 | from typing import Dict, List, Optional, Any, Union
 12 | import time
 13 | 
 14 | from .base import tool, ToolRegistry
 15 | 
 16 | logger = logging.getLogger(__name__)
 17 | 
 18 | 
 19 | @tool(
 20 |     name="GenerateImage",
 21 |     description="Generate an image using AI based on a text prompt",
 22 |     parameters={
 23 |         "type": "object",
 24 |         "properties": {
 25 |             "prompt": {
 26 |                 "type": "string",
 27 |                 "description": "Text description of the image to generate"
 28 |             },
 29 |             "style": {
 30 |                 "type": "string",
 31 |                 "description": "Style of the image (realistic, cartoon, sketch, etc.)",
 32 |                 "enum": ["realistic", "cartoon", "sketch", "painting", "3d", "pixel-art", "abstract"],
 33 |                 "default": "realistic"
 34 |             },
 35 |             "size": {
 36 |                 "type": "string",
 37 |                 "description": "Size of the image",
 38 |                 "enum": ["small", "medium", "large"],
 39 |                 "default": "medium"
 40 |             },
 41 |             "save_path": {
 42 |                 "type": "string",
 43 |                 "description": "Absolute path where the image should be saved (optional)"
 44 |             }
 45 |         },
 46 |         "required": ["prompt"]
 47 |     },
 48 |     needs_permission=True,
 49 |     category="ai"
 50 | )
 51 | def generate_image(prompt: str, style: str = "realistic", size: str = "medium", save_path: Optional[str] = None) -> str:
 52 |     """Generate an image using AI based on a text prompt.
 53 |     
 54 |     Args:
 55 |         prompt: Text description of the image to generate
 56 |         style: Style of the image
 57 |         size: Size of the image
 58 |         save_path: Optional absolute path to save the image to
 59 |         
 60 |     Returns:
 61 |         Path to the generated image or error message
 62 |     """
 63 |     logger.info(f"Generating image with prompt: {prompt} (style: {style}, size: {size})")
 64 |     
 65 |     # Map size to actual dimensions
 66 |     size_map = {
 67 |         "small": "512x512",
 68 |         "medium": "1024x1024",
 69 |         "large": "1792x1024"
 70 |     }
 71 |     
 72 |     # Get API key
 73 |     api_key = os.getenv("OPENAI_API_KEY")
 74 |     if not api_key:
 75 |         return "Error: OpenAI API key not found. Please set the OPENAI_API_KEY environment variable."
 76 |     
 77 |     # Prepare the prompt based on style
 78 |     full_prompt = prompt
 79 |     if style != "realistic":
 80 |         style_prompts = {
 81 |             "cartoon": f"A cartoon-style image of {prompt}",
 82 |             "sketch": f"A pencil sketch of {prompt}",
 83 |             "painting": f"An oil painting of {prompt}",
 84 |             "3d": f"A 3D rendered image of {prompt}",
 85 |             "pixel-art": f"A pixel art image of {prompt}",
 86 |             "abstract": f"An abstract representation of {prompt}"
 87 |         }
 88 |         full_prompt = style_prompts.get(style, prompt)
 89 |     
 90 |     try:
 91 |         # Call OpenAI API to generate image
 92 |         headers = {
 93 |             "Content-Type": "application/json",
 94 |             "Authorization": f"Bearer {api_key}"
 95 |         }
 96 |         
 97 |         payload = {
 98 |             "model": "dall-e-3",
 99 |             "prompt": full_prompt,
100 |             "size": size_map.get(size, "1024x1024"),
101 |             "quality": "standard",
102 |             "n": 1
103 |         }
104 |         
105 |         response = requests.post(
106 |             "https://api.openai.com/v1/images/generations",
107 |             headers=headers,
108 |             json=payload
109 |         )
110 |         
111 |         if response.status_code != 200:
112 |             return f"Error: API request failed with status code {response.status_code}: {response.text}"
113 |         
114 |         data = response.json()
115 |         
116 |         if "data" not in data or not data["data"]:
117 |             return "Error: No image data in response"
118 |         
119 |         image_url = data["data"][0]["url"]
120 |         
121 |         # Download the image
122 |         image_response = requests.get(image_url)
123 |         if image_response.status_code != 200:
124 |             return f"Error: Failed to download image: {image_response.status_code}"
125 |         
126 |         # Save the image
127 |         if save_path:
128 |             # Ensure the path is absolute
129 |             if not os.path.isabs(save_path):
130 |                 return f"Error: Save path must be absolute: {save_path}"
131 |             
132 |             # Create directory if it doesn't exist
133 |             os.makedirs(os.path.dirname(save_path), exist_ok=True)
134 |             
135 |             # Save the image
136 |             with open(save_path, "wb") as f:
137 |                 f.write(image_response.content)
138 |             
139 |             return f"Image generated and saved to: {save_path}"
140 |         else:
141 |             # Save to a temporary file
142 |             with tempfile.NamedTemporaryFile(delete=False, suffix=".png") as tmp:
143 |                 tmp.write(image_response.content)
144 |                 return f"Image generated and saved to temporary file: {tmp.name}"
145 |     
146 |     except Exception as e:
147 |         logger.exception(f"Error generating image: {str(e)}")
148 |         return f"Error generating image: {str(e)}"
149 | 
150 | 
151 | @tool(
152 |     name="TextToSpeech",
153 |     description="Convert text to speech using AI",
154 |     parameters={
155 |         "type": "object",
156 |         "properties": {
157 |             "text": {
158 |                 "type": "string",
159 |                 "description": "Text to convert to speech"
160 |             },
161 |             "voice": {
162 |                 "type": "string",
163 |                 "description": "Voice to use",
164 |                 "enum": ["alloy", "echo", "fable", "onyx", "nova", "shimmer"],
165 |                 "default": "nova"
166 |             },
167 |             "save_path": {
168 |                 "type": "string",
169 |                 "description": "Absolute path where the audio file should be saved (optional)"
170 |             }
171 |         },
172 |         "required": ["text"]
173 |     },
174 |     needs_permission=True,
175 |     category="ai"
176 | )
177 | def text_to_speech(text: str, voice: str = "nova", save_path: Optional[str] = None) -> str:
178 |     """Convert text to speech using AI.
179 |     
180 |     Args:
181 |         text: Text to convert to speech
182 |         voice: Voice to use
 183 |         save_path: Optional absolute path to save the audio file to
184 |         
185 |     Returns:
186 |         Path to the generated audio file or error message
187 |     """
188 |     logger.info(f"Converting text to speech: {text[:50]}... (voice: {voice})")
189 |     
190 |     # Get API key
191 |     api_key = os.getenv("OPENAI_API_KEY")
192 |     if not api_key:
193 |         return "Error: OpenAI API key not found. Please set the OPENAI_API_KEY environment variable."
194 |     
195 |     try:
196 |         # Call OpenAI API to generate speech
197 |         headers = {
198 |             "Authorization": f"Bearer {api_key}"
199 |         }
200 |         
201 |         payload = {
202 |             "model": "tts-1",
203 |             "input": text,
204 |             "voice": voice
205 |         }
206 |         
207 |         response = requests.post(
208 |             "https://api.openai.com/v1/audio/speech",
209 |             headers=headers,
210 |             json=payload
211 |         )
212 |         
213 |         if response.status_code != 200:
214 |             return f"Error: API request failed with status code {response.status_code}: {response.text}"
215 |         
216 |         # Save the audio
217 |         if save_path:
218 |             # Ensure the path is absolute
219 |             if not os.path.isabs(save_path):
220 |                 return f"Error: Save path must be absolute: {save_path}"
221 |             
222 |             # Create directory if it doesn't exist
223 |             os.makedirs(os.path.dirname(save_path), exist_ok=True)
224 |             
225 |             # Save the audio
226 |             with open(save_path, "wb") as f:
227 |                 f.write(response.content)
228 |             
229 |             return f"Speech generated and saved to: {save_path}"
230 |         else:
231 |             # Save to a temporary file
232 |             with tempfile.NamedTemporaryFile(delete=False, suffix=".mp3") as tmp:
233 |                 tmp.write(response.content)
234 |                 return f"Speech generated and saved to temporary file: {tmp.name}"
235 |     
236 |     except Exception as e:
237 |         logger.exception(f"Error generating speech: {str(e)}")
238 |         return f"Error generating speech: {str(e)}"
239 | 
240 | 
241 | def register_ai_tools(registry: ToolRegistry) -> None:
242 |     """Register all AI tools with the registry.
243 |     
244 |     Args:
245 |         registry: Tool registry to register with
246 |     """
247 |     from .base import create_tools_from_functions
248 |     
249 |     ai_tools = [
250 |         generate_image,
251 |         text_to_speech
252 |     ]
253 |     
254 |     create_tools_from_functions(registry, ai_tools)
255 | 
```
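`generate_image` rewrites the user's prompt when a non-realistic style is requested. That mapping is easy to test on its own, separate from the API call; the sketch below factors it into a helper (`build_prompt` is an illustrative name, not part of the module):

```python
# Stand-alone sketch of the style-prompt construction used by generate_image.

def build_prompt(prompt: str, style: str = "realistic") -> str:
    if style == "realistic":
        return prompt  # realistic style sends the prompt unchanged
    style_prompts = {
        "cartoon": f"A cartoon-style image of {prompt}",
        "sketch": f"A pencil sketch of {prompt}",
        "painting": f"An oil painting of {prompt}",
        "3d": f"A 3D rendered image of {prompt}",
        "pixel-art": f"A pixel art image of {prompt}",
        "abstract": f"An abstract representation of {prompt}",
    }
    # Unknown styles fall back to the raw prompt, matching the .get() above.
    return style_prompts.get(style, prompt)

print(build_prompt("a red fox", "sketch"))  # A pencil sketch of a red fox
print(build_prompt("a red fox"))            # a red fox
```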

--------------------------------------------------------------------------------
/claude_code/lib/tools/search_tools.py:
--------------------------------------------------------------------------------

```python
  1 | #!/usr/bin/env python3
  2 | # claude_code/lib/tools/search_tools.py
  3 | """Web search and information retrieval tools."""
  4 | 
  5 | import os
  6 | import logging
  7 | import json
  8 | import urllib.parse
  9 | import requests
 10 | from typing import Dict, List, Optional, Any
 11 | 
 12 | from .base import tool, ToolRegistry
 13 | 
 14 | logger = logging.getLogger(__name__)
 15 | 
 16 | 
 17 | @tool(
 18 |     name="WebSearch",
 19 |     description="Search the web for information using various search engines",
 20 |     parameters={
 21 |         "type": "object",
 22 |         "properties": {
 23 |             "query": {
 24 |                 "type": "string",
 25 |                 "description": "The search query"
 26 |             },
 27 |             "engine": {
 28 |                 "type": "string",
 29 |                 "description": "Search engine to use (google, bing, duckduckgo)",
 30 |                 "enum": ["google", "bing", "duckduckgo"], "default": "google"
 31 |             },
 32 |             "num_results": {
 33 |                 "type": "integer",
 34 |                 "description": "Number of results to return (max 10)", "default": 5
 35 |             }
 36 |         },
 37 |         "required": ["query"]
 38 |     },
 39 |     category="search"
 40 | )
 41 | def web_search(query: str, engine: str = "google", num_results: int = 5) -> str:
 42 |     """Search the web for information.
 43 |     
 44 |     Args:
 45 |         query: Search query
 46 |         engine: Search engine to use
 47 |         num_results: Number of results to return
 48 |         
 49 |     Returns:
 50 |         Search results as formatted text
 51 |     """
 52 |     logger.info(f"Searching web for: {query} using {engine}")
 53 |     
 54 |     # Validate inputs
 55 |     if num_results > 10:
 56 |         num_results = 10  # Cap at 10 results
 57 |     
 58 |     # Get API key based on engine
 59 |     api_key = None
 60 |     if engine == "google":
 61 |         api_key = os.getenv("GOOGLE_SEARCH_API_KEY")
 62 |         cx = os.getenv("GOOGLE_SEARCH_CX")
 63 |         if not api_key or not cx:
 64 |             return "Error: Google Search API key or CX not configured. Please set GOOGLE_SEARCH_API_KEY and GOOGLE_SEARCH_CX environment variables."
 65 |     elif engine == "bing":
 66 |         api_key = os.getenv("BING_SEARCH_API_KEY")
 67 |         if not api_key:
 68 |             return "Error: Bing Search API key not configured. Please set BING_SEARCH_API_KEY environment variable."
 69 |     
 70 |     # Perform search based on engine
 71 |     try:
 72 |         if engine == "google":
 73 |             return _google_search(query, api_key, cx, num_results)
 74 |         elif engine == "bing":
 75 |             return _bing_search(query, api_key, num_results)
 76 |         elif engine == "duckduckgo":
 77 |             return _duckduckgo_search(query, num_results)
 78 |         else:
 79 |             return f"Error: Unsupported search engine: {engine}"
 80 |     except Exception as e:
 81 |         logger.exception(f"Error during web search: {str(e)}")
 82 |         return f"Error performing search: {str(e)}"
 83 | 
 84 | 
 85 | def _google_search(query: str, api_key: str, cx: str, num_results: int) -> str:
 86 |     """Perform Google search using Custom Search API."""
 87 |     url = "https://www.googleapis.com/customsearch/v1"
 88 |     params = {
 89 |         "key": api_key,
 90 |         "cx": cx,
 91 |         "q": query,
 92 |         "num": min(num_results, 10)
 93 |     }
 94 |     
 95 |     response = requests.get(url, params=params)
 96 |     if response.status_code != 200:
 97 |         return f"Error: Google search failed with status code {response.status_code}: {response.text}"
 98 |     
 99 |     data = response.json()
100 |     if "items" not in data:
101 |         return f"No results found for '{query}'"
102 |     
103 |     results = []
104 |     for i, item in enumerate(data["items"], 1):
105 |         title = item.get("title", "No title")
106 |         link = item.get("link", "No link")
107 |         snippet = item.get("snippet", "No description").replace("\n", " ")
108 |         results.append(f"{i}. {title}\n   URL: {link}\n   {snippet}\n")
109 |     
110 |     return f"Google Search Results for '{query}':\n\n" + "\n".join(results)
111 | 
112 | 
113 | def _bing_search(query: str, api_key: str, num_results: int) -> str:
114 |     """Perform Bing search using Bing Web Search API."""
115 |     url = "https://api.bing.microsoft.com/v7.0/search"
116 |     headers = {"Ocp-Apim-Subscription-Key": api_key}
117 |     params = {
118 |         "q": query,
119 |         "count": min(num_results, 10),
120 |         "responseFilter": "Webpages"
121 |     }
122 |     
123 |     response = requests.get(url, headers=headers, params=params)
124 |     if response.status_code != 200:
125 |         return f"Error: Bing search failed with status code {response.status_code}: {response.text}"
126 |     
127 |     data = response.json()
128 |     if "webPages" not in data or "value" not in data["webPages"]:
129 |         return f"No results found for '{query}'"
130 |     
131 |     results = []
132 |     for i, item in enumerate(data["webPages"]["value"], 1):
133 |         title = item.get("name", "No title")
134 |         link = item.get("url", "No link")
135 |         snippet = item.get("snippet", "No description").replace("\n", " ")
136 |         results.append(f"{i}. {title}\n   URL: {link}\n   {snippet}\n")
137 |     
138 |     return f"Bing Search Results for '{query}':\n\n" + "\n".join(results)
139 | 
140 | 
141 | def _duckduckgo_search(query: str, num_results: int) -> str:
142 |     """Perform DuckDuckGo search using the Instant Answer API."""
143 |     # DuckDuckGo doesn't have an official API, but we can use their instant answer API
144 |     url = "https://api.duckduckgo.com/"
145 |     params = {
146 |         "q": query,
147 |         "format": "json",
148 |         "no_html": 1,
149 |         "skip_disambig": 1
150 |     }
151 |     
152 |     response = requests.get(url, params=params)
153 |     if response.status_code != 200:
154 |         return f"Error: DuckDuckGo search failed with status code {response.status_code}: {response.text}"
155 |     
156 |     data = response.json()
157 |     
158 |     results = []
159 |     
160 |     # Add the abstract if available
161 |     if data.get("Abstract"):
162 |         results.append(f"Summary: {data['Abstract']}\n")
163 |     
164 |     # Add related topics
165 |     if data.get("RelatedTopics"):
166 |         topics = data["RelatedTopics"][:num_results]
167 |         for i, topic in enumerate(topics, 1):
168 |             if "Text" in topic:
169 |                 text = topic.get("Text", "No description")
170 |                 url = topic.get("FirstURL", "No URL")
171 |                 results.append(f"{i}. {text}\n   URL: {url}\n")
172 |     
173 |     if not results:
174 |         return f"No results found for '{query}'"
175 |     
176 |     return f"DuckDuckGo Search Results for '{query}':\n\n" + "\n".join(results)
177 | 
178 | 
179 | @tool(
180 |     name="WikipediaSearch",
181 |     description="Search Wikipedia for information on a topic",
182 |     parameters={
183 |         "type": "object",
184 |         "properties": {
185 |             "query": {
186 |                 "type": "string",
187 |                 "description": "The topic to search for"
188 |             },
189 |             "language": {
190 |                 "type": "string",
191 |                 "description": "Language code (e.g., 'en', 'es', 'fr')",
192 |                 "default": "en"
193 |             }
194 |         },
195 |         "required": ["query"]
196 |     },
197 |     category="search"
198 | )
199 | def wikipedia_search(query: str, language: str = "en") -> str:
200 |     """Search Wikipedia for information on a topic.
201 |     
202 |     Args:
203 |         query: Topic to search for
204 |         language: Language code
205 |         
206 |     Returns:
207 |         Wikipedia article summary
208 |     """
209 |     logger.info(f"Searching Wikipedia for: {query} in {language}")
210 |     
211 |     try:
212 |         # Wikipedia API endpoint
213 |         url = f"https://{language}.wikipedia.org/api/rest_v1/page/summary/{urllib.parse.quote(query)}"
214 |         
215 |         response = requests.get(url)
216 |         if response.status_code != 200:
217 |             # Try search API if direct lookup fails
218 |             search_url = f"https://{language}.wikipedia.org/w/api.php"
219 |             search_params = {
220 |                 "action": "query",
221 |                 "list": "search",
222 |                 "srsearch": query,
223 |                 "format": "json"
224 |             }
225 |             
226 |             search_response = requests.get(search_url, params=search_params)
227 |             if search_response.status_code != 200:
228 |                 return f"Error: Wikipedia search failed with status code {search_response.status_code}"
229 |             
230 |             search_data = search_response.json()
231 |             if "query" not in search_data or "search" not in search_data["query"] or not search_data["query"]["search"]:
232 |                 return f"No Wikipedia articles found for '{query}'"
233 |             
234 |             # Get the first search result
235 |             first_result = search_data["query"]["search"][0]
236 |             title = first_result["title"]
237 |             
238 |             # Get the summary for the first result
239 |             url = f"https://{language}.wikipedia.org/api/rest_v1/page/summary/{urllib.parse.quote(title)}"
240 |             response = requests.get(url)
241 |             if response.status_code != 200:
242 |                 return f"Error: Wikipedia article lookup failed with status code {response.status_code}"
243 |         
244 |         data = response.json()
245 |         
246 |         # Format the response
247 |         title = data.get("title", "Unknown")
248 |         extract = data.get("extract", "No information available")
249 |         url = data.get("content_urls", {}).get("desktop", {}).get("page", "")
250 |         
251 |         result = f"Wikipedia: {title}\n\n{extract}\n"
252 |         if url:
253 |             result += f"\nSource: {url}"
254 |         
255 |         return result
256 |     
257 |     except Exception as e:
258 |         logger.exception(f"Error during Wikipedia search: {str(e)}")
259 |         return f"Error searching Wikipedia: {str(e)}"
260 | 
261 | 
262 | def register_search_tools(registry: ToolRegistry) -> None:
263 |     """Register all search tools with the registry.
264 |     
265 |     Args:
266 |         registry: Tool registry to register with
267 |     """
268 |     from .base import create_tools_from_functions
269 |     
270 |     search_tools = [
271 |         web_search,
272 |         wikipedia_search
273 |     ]
274 |     
275 |     create_tools_from_functions(registry, search_tools)
276 | 
```
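`_google_search` and `_bing_search` share the same numbered-result formatting. The sketch below runs that formatting against a canned payload shaped like a Google Custom Search response, so it can be checked without credentials or network access (the payload contents are made up for illustration):

```python
# Offline check of the shared search-result formatting: enumerate items,
# pull title/link/snippet with fallbacks, and flatten newlines in snippets.

sample = {
    "items": [
        {"title": "Python", "link": "https://python.org", "snippet": "Official\nsite"},
        {"title": "Docs", "link": "https://docs.python.org"},  # snippet missing
    ]
}

results = []
for i, item in enumerate(sample["items"], 1):
    title = item.get("title", "No title")
    link = item.get("link", "No link")
    snippet = item.get("snippet", "No description").replace("\n", " ")
    results.append(f"{i}. {title}\n   URL: {link}\n   {snippet}\n")

output = "Google Search Results for 'python':\n\n" + "\n".join(results)
print(output)
```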

--------------------------------------------------------------------------------
/templates/index.html:
--------------------------------------------------------------------------------

```html
  1 | <!DOCTYPE html>
  2 | <html lang="en">
  3 | <head>
  4 |     <meta charset="UTF-8">
  5 |     <meta name="viewport" content="width=device-width, initial-scale=1.0">
  6 |     <title>OpenAI Code Assistant MCP Server</title>
  7 |     <link rel="stylesheet" href="/static/style.css">
  8 |     <script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
  9 | </head>
 10 | <body>
 11 |     <div class="container">
 12 |         <h1>OpenAI Code Assistant MCP Server</h1>
 13 |         
 14 |         <div class="stats-grid">
 15 |             <div class="stat-card">
 16 |                 <div class="stat-label">Status</div>
 17 |                 <div class="stat-value" style="color: #27ae60;">{{ status }}</div>
 18 |             </div>
 19 |             <div class="stat-card">
 20 |                 <div class="stat-label">Uptime</div>
 21 |                 <div class="stat-value">{{ uptime }}</div>
 22 |             </div>
 23 |             <div class="stat-card">
 24 |                 <div class="stat-label">Requests Served</div>
 25 |                 <div class="stat-value">{{ request_count }}</div>
 26 |             </div>
 27 |             <div class="stat-card">
 28 |                 <div class="stat-label">Cache Hit Ratio</div>
 29 |                 <div class="stat-value">{{ cache_hit_ratio }}%</div>
 30 |             </div>
 31 |         </div>
 32 |         
 33 |         <div class="card">
 34 |             <div class="card-header">System Status</div>
 35 |             <div class="card-body">
 36 |                 <canvas id="requestsChart" height="100"></canvas>
 37 |             </div>
 38 |         </div>
 39 |         
 40 |         <h2>Available Models</h2>
 41 |         <div class="card">
 42 |             <div class="card-body">
 43 |                 <div class="template-grid">
 44 |                     {% for model in models %}
 45 |                     <div class="stat-card">
 46 |                         <div class="stat-label">Model</div>
 47 |                         <div class="stat-value" style="font-size: 20px;">{{ model }}</div>
 48 |                     </div>
 49 |                     {% endfor %}
 50 |                 </div>
 51 |             </div>
 52 |         </div>
 53 |         
 54 |         <h2>Available Prompt Templates</h2>
 55 |         <div class="template-grid">
 56 |             {% for template in templates %}
 57 |             <div class="card">
 58 |                 <div class="card-header">{{ template.id }}</div>
 59 |                 <div class="card-body">
 60 |                     <p><strong>Description:</strong> {{ template.description }}</p>
 61 |                     
 62 |                     {% if template.parameters %}
 63 |                     <p><strong>Parameters:</strong></p>
 64 |                     <ul class="parameter-list">
 65 |                         {% for param in template.parameters %}
 66 |                         <li>{{ param }}</li>
 67 |                         {% endfor %}
 68 |                     </ul>
 69 |                     {% else %}
 70 |                     <p><em>No parameters required</em></p>
 71 |                     {% endif %}
 72 |                     
 73 |                     <p><strong>Default Model:</strong> <span class="tag">{{ template.default_model }}</span></p>
 74 |                     
 75 |                     <div style="margin-top: 15px;">
 76 |                         <button class="btn btn-primary" onclick="testTemplate('{{ template.id }}')">Test Template</button>
 77 |                     </div>
 78 |                 </div>
 79 |             </div>
 80 |             {% endfor %}
 81 |         </div>
 82 |         
 83 |         <h2>API Documentation</h2>
 84 |         <div class="card">
 85 |             <div class="card-body">
 86 |                 <p>Explore the API using the interactive documentation:</p>
 87 |                 <a href="/docs" class="btn btn-primary">Swagger UI</a>
 88 |                 <a href="/redoc" class="btn btn-secondary">ReDoc</a>
 89 |                 <a href="/metrics" class="btn btn-info">Prometheus Metrics</a>
 90 |             </div>
 91 |         </div>
 92 |         
 93 |         <div class="footer">
 94 |             <p>OpenAI Code Assistant MCP Server &copy; 2025</p>
 95 |         </div>
 96 |     </div>
 97 |     
 98 |     <!-- Template Test Modal -->
 99 |     <div id="templateModal" style="display: none; position: fixed; top: 0; left: 0; width: 100%; height: 100%; background-color: rgba(0,0,0,0.5); z-index: 1000;">
100 |         <div style="background-color: white; margin: 10% auto; padding: 20px; width: 80%; max-width: 600px; border-radius: 8px;">
101 |             <h3 id="modalTitle">Test Template</h3>
102 |             <div id="modalContent">
103 |                 <div id="parameterInputs"></div>
104 |                 <div style="margin-top: 20px;">
105 |                     <button class="btn btn-primary" onclick="submitTemplateTest()">Generate Context</button>
106 |                     <button class="btn btn-secondary" onclick="closeModal()">Cancel</button>
107 |                 </div>
108 |             </div>
109 |             <div id="resultContent" style="display: none; margin-top: 20px;">
110 |                 <h4>Generated Context:</h4>
111 |                 <pre id="contextResult" style="background-color: #f5f5f5; padding: 10px; border-radius: 4px; overflow-x: auto;"></pre>
112 |                 <button class="btn btn-secondary" onclick="closeResults()">Close</button>
113 |             </div>
114 |         </div>
115 |     </div>
116 |     
117 |     <script>
118 |         // Sample data for the chart - in a real implementation, this would come from the server
119 |         const ctx = document.getElementById('requestsChart').getContext('2d');
120 |         const requestsChart = new Chart(ctx, {
121 |             type: 'line',
122 |             data: {
123 |                 labels: Array.from({length: 12}, (_, i) => `${i*5} min ago`).reverse(),
124 |                 datasets: [{
125 |                     label: 'Requests',
126 |                     data: [12, 19, 3, 5, 2, 3, 20, 33, 23, 12, 5, 3],
127 |                     borderColor: '#3498db',
128 |                     tension: 0.1,
129 |                     fill: false
130 |                 }]
131 |             },
132 |             options: {
133 |                 responsive: true,
134 |                 scales: {
135 |                     y: {
136 |                         beginAtZero: true
137 |                     }
138 |                 }
139 |             }
140 |         });
141 |         
142 |         // Template testing functionality
143 |         let currentTemplate = '';
144 |         
145 |         function testTemplate(templateId) {
146 |             currentTemplate = templateId;
147 |             document.getElementById('modalTitle').textContent = `Test Template: ${templateId}`;
148 |             document.getElementById('parameterInputs').innerHTML = '';
149 |             document.getElementById('resultContent').style.display = 'none';
150 |             
151 |             // Fetch template details
152 |             fetch(`/prompts/${templateId}`)
153 |                 .then(response => { if (!response.ok) throw new Error(`HTTP ${response.status}`); return response.json(); })
154 |                 .then(template => {
155 |                     const parametersDiv = document.getElementById('parameterInputs');
156 |                     
157 |                     // Create input fields for each parameter
158 |                     for (const [paramName, paramInfo] of Object.entries(template.parameters)) {
159 |                         const paramDiv = document.createElement('div');
160 |                         paramDiv.style.marginBottom = '15px';
161 |                         
162 |                         const label = document.createElement('label');
163 |                         label.textContent = `${paramName}: ${paramInfo.description || ''}`;
164 |                         label.style.display = 'block';
165 |                         label.style.marginBottom = '5px';
166 |                         
167 |                         const input = document.createElement('input');
168 |                         input.type = 'text';
169 |                         input.id = `param-${paramName}`;
170 |                         input.style.width = '100%';
171 |                         input.style.padding = '8px';
172 |                         input.style.borderRadius = '4px';
173 |                         input.style.border = '1px solid #ddd';
174 |                         
175 |                         paramDiv.appendChild(label);
176 |                         paramDiv.appendChild(input);
177 |                         parametersDiv.appendChild(paramDiv);
178 |                     }
179 |                     
180 |                     // Show the modal
181 |                     document.getElementById('templateModal').style.display = 'block';
182 |                 })
183 |                 .catch(error => {
184 |                     console.error('Error fetching template:', error);
185 |                     alert('Error fetching template details');
186 |                 });
187 |         }
188 |         
189 |         function submitTemplateTest() {
190 |             // Collect parameter values
191 |             const parameters = {};
192 |             const inputs = document.querySelectorAll('[id^="param-"]');
193 |             
194 |             inputs.forEach(input => {
195 |                 const paramName = input.id.replace('param-', '');
196 |                 parameters[paramName] = input.value;
197 |             });
198 |             
199 |             // Call the context API
200 |             fetch('/context', {
201 |                 method: 'POST',
202 |                 headers: {
203 |                     'Content-Type': 'application/json'
204 |                 },
205 |                 body: JSON.stringify({
206 |                     prompt_id: currentTemplate,
207 |                     parameters: parameters
208 |                 })
209 |             })
210 |             .then(response => { if (!response.ok) throw new Error(`HTTP ${response.status}`); return response.json(); })
211 |             .then(data => {
212 |                 // Display the result
213 |                 document.getElementById('contextResult').textContent = data.context;
214 |                 document.getElementById('modalContent').style.display = 'none';
215 |                 document.getElementById('resultContent').style.display = 'block';
216 |             })
217 |             .catch(error => {
218 |                 console.error('Error generating context:', error);
219 |                 alert('Error generating context');
220 |             });
221 |         }
222 |         
223 |         function closeModal() {
224 |             document.getElementById('templateModal').style.display = 'none';
225 |         }
226 |         
227 |         function closeResults() {
228 |             document.getElementById('resultContent').style.display = 'none';
229 |             document.getElementById('modalContent').style.display = 'block';
230 |             closeModal();
231 |         }
232 |         
233 |         // Close modal when clicking outside
234 |         window.onclick = function(event) {
235 |             const modal = document.getElementById('templateModal');
236 |             if (event.target === modal) {
237 |                 closeModal();
238 |             }
239 |         }
240 |     </script>
241 | </body>
242 | </html>
243 | 
```
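For reference, `submitTemplateTest()` above serializes a JSON body with `prompt_id` and `parameters` keys before POSTing it to `/context`. A minimal sketch of building that same payload in Python, so the shape can be checked outside the browser (the `build_context_request` helper is illustrative, not part of the codebase):

```python
import json

def build_context_request(prompt_id, parameters):
    """Mirror the JSON body the dashboard's fetch('/context', ...) call sends."""
    return json.dumps({"prompt_id": prompt_id, "parameters": parameters})

# Same shape the dashboard produces from its param-* input fields
body = build_context_request("default", {"prompt": "Summarize this repo"})
```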

--------------------------------------------------------------------------------
/claude_code/lib/tools/base.py:
--------------------------------------------------------------------------------

```python
  1 | #!/usr/bin/env python3
  2 | # claude_code/lib/tools/base.py
  3 | """Base classes for tools."""
  4 | 
  5 | import abc
  6 | import inspect
  7 | import time
  8 | import logging
  9 | import os
 10 | import json
 11 | from typing import Dict, List, Any, Callable, Optional, Type, Union, Sequence
 12 | from dataclasses import dataclass
 13 | 
 14 | from pydantic import BaseModel, Field
 15 | 
 16 | logger = logging.getLogger(__name__)
 17 | 
 18 | 
 19 | class ToolParameter(BaseModel):
 20 |     """Definition of a tool parameter."""
 21 |     
 22 |     name: str
 23 |     description: str
 24 |     type: str
 25 |     required: bool = False
 26 |     
 27 |     class Config:
 28 |         """Pydantic config."""
 29 |         extra = "forbid"
 30 | 
 31 | 
 32 | class ToolResult(BaseModel):
 33 |     """Result of a tool execution."""
 34 |     
 35 |     tool_call_id: str
 36 |     name: str
 37 |     result: str
 38 |     execution_time: float
 39 |     token_usage: int = 0
 40 |     status: str = "success"
 41 |     error: Optional[str] = None
 42 |     
 43 |     class Config:
 44 |         """Pydantic config."""
 45 |         extra = "forbid"
 46 | 
 47 | 
 48 | class Routine(BaseModel):
 49 |     """Definition of a tool routine."""
 50 |     
 51 |     name: str
 52 |     description: str
 53 |     steps: List[Dict[str, Any]]
 54 |     usage_count: int = 0
 55 |     created_at: float = Field(default_factory=time.time)
 56 |     last_used_at: Optional[float] = None
 57 |     
 58 |     class Config:
 59 |         """Pydantic config."""
 60 |         extra = "allow"
 61 | 
 62 | 
 63 | class Tool(BaseModel):
 64 |     """Base class for all tools."""
 65 |     
 66 |     name: str
 67 |     description: str
 68 |     parameters: Dict[str, Any]
 69 |     function: Callable
 70 |     needs_permission: bool = False
 71 |     category: str = "general"
 72 |     
 73 |     class Config:
 74 |         """Pydantic config."""
 75 |         arbitrary_types_allowed = True
 76 |         extra = "forbid"
 77 |     
 78 |     def execute(self, tool_call: Dict[str, Any]) -> ToolResult:
 79 |         """Execute the tool with the given parameters.
 80 |         
 81 |         Args:
 82 |             tool_call: Dictionary containing tool call information
 83 |             
 84 |         Returns:
 85 |             ToolResult with execution result
 86 |         """
 87 |         # Extract parameters
 88 |         function_name = tool_call.get("function", {}).get("name", "")
 89 |         arguments_str = tool_call.get("function", {}).get("arguments", "{}")
 90 |         tool_call_id = tool_call.get("id", "unknown")
 91 |         
 92 |         # Parse arguments
 93 |         try:
 94 |             arguments = json.loads(arguments_str)
 95 |         except json.JSONDecodeError as e:
 96 |             logger.error(f"Failed to parse arguments: {e}")
 97 |             return ToolResult(
 98 |                 tool_call_id=tool_call_id,
 99 |                 name=self.name,
100 |                 result=f"Error: Failed to parse arguments: {e}",
101 |                 execution_time=0,
102 |                 status="error",
103 |                 error=str(e)
104 |             )
105 |         
106 |         # Execute function
107 |         start_time = time.time()
108 |         try:
109 |             result = self.function(**arguments)
110 |             execution_time = time.time() - start_time
111 |             
112 |             # Convert result to string if it's not already
113 |             if not isinstance(result, str):
114 |                 result = str(result)
115 |             
116 |             return ToolResult(
117 |                 tool_call_id=tool_call_id,
118 |                 name=self.name,
119 |                 result=result,
120 |                 execution_time=execution_time,
121 |                 status="success"
122 |             )
123 |         except Exception as e:
124 |             execution_time = time.time() - start_time
125 |             logger.exception(f"Error executing tool {self.name}: {e}")
126 |             return ToolResult(
127 |                 tool_call_id=tool_call_id,
128 |                 name=self.name,
129 |                 result=f"Error: {str(e)}",
130 |                 execution_time=execution_time,
131 |                 status="error",
132 |                 error=str(e)
133 |             )
134 | 
135 | 
136 | class ToolRegistry:
137 |     """Registry for tools."""
138 |     
139 |     def __init__(self):
140 |         """Initialize the tool registry."""
141 |         self.tools: Dict[str, Tool] = {}
142 |         self.routines: Dict[str, Routine] = {}
143 |         self._routine_file = os.path.join(os.path.expanduser("~"), ".claude_code", "routines.json")
144 |     
145 |     def register_tool(self, tool: Tool) -> None:
146 |         """Register a tool.
147 |         
148 |         Args:
149 |             tool: Tool instance to register
150 |             
151 |         Raises:
152 |             ValueError: If a tool with the same name is already registered
153 |         """
154 |         if tool.name in self.tools:
155 |             raise ValueError(f"Tool {tool.name} is already registered")
156 |         
157 |         self.tools[tool.name] = tool
158 |         logger.debug(f"Registered tool: {tool.name}")
159 |     
160 |     def register_routine(self, routine: Routine) -> None:
161 |         """Register a routine.
162 |         
163 |         Args:
164 |             routine: Routine to register
165 |             
166 |         Raises:
167 |             ValueError: If a routine with the same name is already registered
168 |         """
169 |         if routine.name in self.routines:
170 |             raise ValueError(f"Routine {routine.name} is already registered")
171 |         
172 |         self.routines[routine.name] = routine
173 |         logger.debug(f"Registered routine: {routine.name}")
174 |         self._save_routines()
175 |     
176 |     def register_routine_from_dict(self, routine_dict: Dict[str, Any]) -> None:
177 |         """Register a routine from a dictionary.
178 |         
179 |         Args:
180 |             routine_dict: Dictionary with routine data
181 |             
182 |         Raises:
183 |             ValueError: If a routine with the same name is already registered
184 |         """
185 |         routine = Routine(**routine_dict)
186 |         self.register_routine(routine)
187 |     
188 |     def get_tool(self, name: str) -> Optional[Tool]:
189 |         """Get a tool by name.
190 |         
191 |         Args:
192 |             name: Name of the tool
193 |             
194 |         Returns:
195 |             Tool instance or None if not found
196 |         """
197 |         return self.tools.get(name)
198 |     
199 |     def get_routine(self, name: str) -> Optional[Routine]:
200 |         """Get a routine by name.
201 |         
202 |         Args:
203 |             name: Name of the routine
204 |             
205 |         Returns:
206 |             Routine or None if not found
207 |         """
208 |         return self.routines.get(name)
209 |     
210 |     def get_all_tools(self) -> List[Tool]:
211 |         """Get all registered tools.
212 |         
213 |         Returns:
214 |             List of all registered tools
215 |         """
216 |         return list(self.tools.values())
217 |     
218 |     def get_all_routines(self) -> List[Routine]:
219 |         """Get all registered routines.
220 |         
221 |         Returns:
222 |             List of all registered routines
223 |         """
224 |         return list(self.routines.values())
225 |     
226 |     def get_tool_schemas(self) -> List[Dict[str, Any]]:
227 |         """Get OpenAI-compatible schemas for all tools.
228 |         
229 |         Returns:
230 |             List of tool schemas for OpenAI function calling
231 |         """
232 |         schemas = []
233 |         for tool in self.tools.values():
234 |             schemas.append({
235 |                 "type": "function",
236 |                 "function": {
237 |                     "name": tool.name,
238 |                     "description": tool.description,
239 |                     "parameters": tool.parameters
240 |                 }
241 |             })
242 |         return schemas
243 |     
244 |     def record_routine_usage(self, name: str) -> None:
245 |         """Record usage of a routine.
246 |         
247 |         Args:
248 |             name: Name of the routine
249 |         """
250 |         if name in self.routines:
251 |             routine = self.routines[name]
252 |             routine.usage_count += 1
253 |             routine.last_used_at = time.time()
254 |             self._save_routines()
255 |     
256 |     def _save_routines(self) -> None:
257 |         """Save routines to file."""
258 |         try:
259 |             # Create directory if it doesn't exist
260 |             os.makedirs(os.path.dirname(self._routine_file), exist_ok=True)
261 |             
262 |             # Convert routines to dict for serialization
263 |             routines_dict = {name: routine.dict() for name, routine in self.routines.items()}
264 |             
265 |             # Save to file
266 |             with open(self._routine_file, 'w') as f:
267 |                 json.dump(routines_dict, f, indent=2)
268 |             
269 |             logger.debug(f"Saved {len(self.routines)} routines to {self._routine_file}")
270 |         except Exception as e:
271 |             logger.error(f"Error saving routines: {e}")
272 |     
273 |     def load_routines(self) -> None:
274 |         """Load routines from file."""
275 |         if not os.path.exists(self._routine_file):
276 |             logger.debug(f"Routines file not found: {self._routine_file}")
277 |             return
278 |         
279 |         try:
280 |             with open(self._routine_file, 'r') as f:
281 |                 routines_dict = json.load(f)
282 |             
283 |             # Clear existing routines
284 |             self.routines.clear()
285 |             
286 |             # Register each routine
287 |             for name, routine_data in routines_dict.items():
288 |                 self.routines[name] = Routine(**routine_data)
289 |             
290 |             logger.debug(f"Loaded {len(self.routines)} routines from {self._routine_file}")
291 |         except Exception as e:
292 |             logger.error(f"Error loading routines: {e}")
293 | 
294 | 
295 | @dataclass
296 | class RoutineStep:
297 |     """A step in a routine."""
298 |     tool_name: str
299 |     args: Dict[str, Any]
300 |     condition: Optional[Dict[str, Any]] = None
301 |     store_result: bool = False
302 |     result_var: Optional[str] = None
303 | 
304 | 
305 | @dataclass
306 | class RoutineDefinition:
307 |     """Definition of a routine."""
308 |     name: str
309 |     description: str
310 |     steps: List[RoutineStep]
311 | 
312 | 
313 | def tool(name: str, description: str, parameters: Dict[str, Any], 
314 |          needs_permission: bool = False, category: str = "general"):
315 |     """Decorator to register a function as a tool.
316 |     
317 |     Args:
318 |         name: Name of the tool
319 |         description: Description of the tool
320 |         parameters: Parameter schema for the tool
321 |         needs_permission: Whether the tool needs user permission
322 |         category: Category of the tool
323 |         
324 |     Returns:
325 |         Decorator function
326 |     """
327 |     def decorator(func: Callable) -> Callable:
328 |         # Set tool metadata on the function
329 |         func._tool_info = {
330 |             "name": name,
331 |             "description": description,
332 |             "parameters": parameters,
333 |             "needs_permission": needs_permission,
334 |             "category": category
335 |         }
336 |         return func
337 |     return decorator
338 | 
339 | 
340 | def create_tools_from_functions(registry: ToolRegistry, functions: List[Callable]) -> None:
341 |     """Create and register tools from functions with _tool_info.
342 |     
343 |     Args:
344 |         registry: Tool registry to register tools with
345 |         functions: List of functions to create tools from
346 |     """
347 |     for func in functions:
348 |         if hasattr(func, "_tool_info"):
349 |             info = func._tool_info
350 |             tool = Tool(
351 |                 name=info["name"],
352 |                 description=info["description"],
353 |                 parameters=info["parameters"],
354 |                 function=func,
355 |                 needs_permission=info["needs_permission"],
356 |                 category=info["category"]
357 |             )
358 |             registry.register_tool(tool)
```
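The decorator contract above can be exercised end-to-end. This sketch inlines a copy of the `tool` decorator so it runs without the package on the path; the `add` tool and its JSON schema are illustrative, and the `tool_call` dict follows the OpenAI function-calling shape that `Tool.execute()` parses:

```python
import json

# Stand-in duplicating the decorator from claude_code/lib/tools/base.py,
# so this sketch is self-contained.
def tool(name, description, parameters, needs_permission=False, category="general"):
    def decorator(func):
        func._tool_info = {
            "name": name,
            "description": description,
            "parameters": parameters,
            "needs_permission": needs_permission,
            "category": category,
        }
        return func
    return decorator

@tool(
    name="add",
    description="Add two integers.",
    parameters={
        "type": "object",
        "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
        "required": ["a", "b"],
    },
)
def add(a, b):
    return a + b

# The OpenAI-style tool_call dict that Tool.execute() expects:
# arguments arrive as a JSON string, not a dict.
tool_call = {"id": "call_1", "function": {"name": "add",
                                          "arguments": json.dumps({"a": 2, "b": 3})}}
args = json.loads(tool_call["function"]["arguments"])
result = add(**args)  # → 5
```

In the real registry, `create_tools_from_functions(registry, [add])` would pick up `_tool_info` and wrap the function in a `Tool`, and `registry.get_tool_schemas()` would emit the `{"type": "function", ...}` entries for the OpenAI API.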

--------------------------------------------------------------------------------
/mcp_modal_adapter.py:
--------------------------------------------------------------------------------

```python
  1 | import os
  2 | import json
  3 | import logging
  4 | import asyncio
  5 | import httpx
  6 | from typing import Dict, List, Optional, Any, AsyncIterator
  7 | from fastapi import FastAPI, Request, HTTPException, status
  8 | from fastapi.responses import JSONResponse, StreamingResponse
  9 | from fastapi.middleware.cors import CORSMiddleware
 10 | from pydantic import BaseModel, Field
 11 | 
 12 | # Configure logging
 13 | logging.basicConfig(
 14 |     level=logging.INFO,
 15 |     format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
 16 | )
 17 | 
 18 | # Create FastAPI app
 19 | app = FastAPI(
 20 |     title="MCP Server Modal Adapter",
 21 |     description="Model Context Protocol server adapter for Modal OpenAI API",
 22 |     version="1.0.0"
 23 | )
 24 | 
 25 | # Add CORS middleware
 26 | app.add_middleware(
 27 |     CORSMiddleware,
 28 |     allow_origins=["*"],
 29 |     allow_credentials=True,
 30 |     allow_methods=["*"],
 31 |     allow_headers=["*"],
 32 | )
 33 | 
 34 | # Configuration
 35 | MODAL_API_URL = os.environ.get("MODAL_API_URL", "https://your-modal-app-url.modal.run")
 36 | MODAL_API_KEY = os.environ.get("MODAL_API_KEY", "sk-modal-llm-api-key")  # Default key from modal_mcp_server.py
 37 | DEFAULT_MODEL = os.environ.get("DEFAULT_MODEL", "phi-4")
 38 | 
 39 | # MCP Protocol Models
 40 | class MCPHealthResponse(BaseModel):
 41 |     status: str = "healthy"
 42 |     version: str = "1.0.0"
 43 | 
 44 | class MCPPromptTemplate(BaseModel):
 45 |     id: str
 46 |     name: str
 47 |     description: str
 48 |     template: str
 49 |     parameters: Dict[str, Any] = Field(default_factory=dict)
 50 | 
 51 | class MCPPromptLibraryResponse(BaseModel):
 52 |     prompts: List[MCPPromptTemplate]
 53 | 
 54 | class MCPContextResponse(BaseModel):
 55 |     context_id: str
 56 |     content: str
 57 |     model: str
 58 |     prompt_id: Optional[str] = None
 59 |     parameters: Optional[Dict[str, Any]] = None
 60 | 
 61 | # Default prompt template
 62 | DEFAULT_TEMPLATE = MCPPromptTemplate(
 63 |     id="default",
 64 |     name="Default Template",
 65 |     description="Default prompt template for general use",
 66 |     template="{prompt}",
 67 |     parameters={"prompt": {"type": "string", "description": "The prompt to send to the model"}}
 68 | )
 69 | 
 70 | # In-memory prompt library
 71 | prompt_library = {
 72 |     "default": DEFAULT_TEMPLATE.dict()
 73 | }
 74 | 
 75 | # Health check endpoint
 76 | @app.get("/health", response_model=MCPHealthResponse)
 77 | async def health_check():
 78 |     """Health check endpoint"""
 79 |     return MCPHealthResponse()
 80 | 
 81 | # List prompts endpoint
 82 | @app.get("/prompts", response_model=MCPPromptLibraryResponse)
 83 | async def list_prompts():
 84 |     """List available prompt templates"""
 85 |     return MCPPromptLibraryResponse(prompts=[MCPPromptTemplate(**prompt) for prompt in prompt_library.values()])
 86 | 
 87 | # Get prompt endpoint
 88 | @app.get("/prompts/{prompt_id}", response_model=MCPPromptTemplate)
 89 | async def get_prompt(prompt_id: str):
 90 |     """Get a specific prompt template"""
 91 |     if prompt_id not in prompt_library:
 92 |         raise HTTPException(
 93 |             status_code=status.HTTP_404_NOT_FOUND,
 94 |             detail=f"Prompt template with ID {prompt_id} not found"
 95 |         )
 96 |     return MCPPromptTemplate(**prompt_library[prompt_id])
 97 | 
 98 | # Get context endpoint
 99 | @app.post("/context/{prompt_id}")
100 | async def get_context(prompt_id: str, request: Request):
101 |     """Get context from a prompt template"""
102 |     try:
103 |         # Get request data
104 |         data = await request.json()
105 |         parameters = data.get("parameters", {})
106 |         model = data.get("model", DEFAULT_MODEL)
107 |         stream = data.get("stream", False)
108 |         
109 |         # Get prompt template
110 |         if prompt_id not in prompt_library:
111 |             raise HTTPException(
112 |                 status_code=status.HTTP_404_NOT_FOUND,
113 |                 detail=f"Prompt template with ID {prompt_id} not found"
114 |             )
115 |         
116 |         prompt_template = prompt_library[prompt_id]
117 |         
118 |         # Process template
119 |         template = prompt_template["template"]
120 |         prompt_text = template.format(**parameters)
121 |         
122 |         # Create OpenAI-compatible request
123 |         openai_request = {
124 |             "model": model,
125 |             "messages": [{"role": "user", "content": prompt_text}],
126 |             "temperature": parameters.get("temperature", 0.7),
127 |             "max_tokens": parameters.get("max_tokens", 1024),
128 |             "stream": stream
129 |         }
130 |         
131 |         # If streaming is requested, return a streaming response
132 |         if stream:
133 |             return StreamingResponse(
134 |                 stream_from_modal(openai_request),
135 |                 media_type="text/event-stream"
136 |             )
137 |         
138 |         # Otherwise, make a regular request to Modal API
139 |         async with httpx.AsyncClient(timeout=60.0) as client:
140 |             headers = {
141 |                 "Authorization": f"Bearer {MODAL_API_KEY}",
142 |                 "Content-Type": "application/json"
143 |             }
144 |             
145 |             response = await client.post(
146 |                 f"{MODAL_API_URL}/v1/chat/completions",
147 |                 json=openai_request,
148 |                 headers=headers
149 |             )
150 |             
151 |             if response.status_code != 200:
152 |                 raise HTTPException(
153 |                     status_code=response.status_code,
154 |                     detail=f"Error from Modal API: {response.text}"
155 |                 )
156 |             
157 |             result = response.json()
158 |             
159 |             # Extract content from OpenAI response
160 |             content = ""
161 |             if "choices" in result and len(result["choices"]) > 0:
162 |                 if "message" in result["choices"][0] and "content" in result["choices"][0]["message"]:
163 |                     content = result["choices"][0]["message"]["content"]
164 |             
165 |             # Create MCP response
166 |             mcp_response = MCPContextResponse(
167 |                 context_id=result.get("id", ""),
168 |                 content=content,
169 |                 model=model,
170 |                 prompt_id=prompt_id,
171 |                 parameters=parameters
172 |             )
173 |             
174 |             return mcp_response.dict()
175 |             
176 |     except HTTPException:
177 |         raise  # preserve the 404 raised above instead of masking it as a 500
178 |     except Exception as e:
179 |         logging.error(f"Error in get_context: {str(e)}")
180 |         raise HTTPException(status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
181 |                             detail=f"Error generating context: {str(e)}")
182 | 
183 | async def stream_from_modal(openai_request: Dict[str, Any]) -> AsyncIterator[str]:
184 |     """Stream response from Modal API"""
185 |     try:
186 |         async with httpx.AsyncClient(timeout=300.0) as client:
187 |             headers = {
188 |                 "Authorization": f"Bearer {MODAL_API_KEY}",
189 |                 "Content-Type": "application/json",
190 |                 "Accept": "text/event-stream"
191 |             }
192 |             
193 |             async with client.stream(
194 |                 "POST",
195 |                 f"{MODAL_API_URL}/v1/chat/completions",
196 |                 json=openai_request,
197 |                 headers=headers
198 |             ) as response:
199 |                 if response.status_code != 200:
200 |                     error_detail = await response.aread()
201 |                     yield f"data: {json.dumps({'error': f'Error from Modal API: {error_detail.decode()}'})}\n\n"
202 |                     yield "data: [DONE]\n\n"
203 |                     return
204 |                 
205 |                 # Process streaming response
206 |                 buffer = ""
207 |                 content_buffer = ""
208 |                 
209 |                 async for chunk in response.aiter_text():
210 |                     buffer += chunk
211 |                     
212 |                     # Process complete SSE messages
213 |                     while "\n\n" in buffer:
214 |                         message, buffer = buffer.split("\n\n", 1)
215 |                         
216 |                         if message.startswith("data: "):
217 |                             data = message[6:]  # Remove "data: " prefix
218 |                             
219 |                             if data == "[DONE]":
220 |                                 # End of stream, send final MCP response
221 |                                 final_response = MCPContextResponse(
222 |                                     context_id="stream-" + str(hash(content_buffer))[:8],
223 |                                     content=content_buffer,
224 |                                     model=openai_request.get("model", DEFAULT_MODEL),
225 |                                     prompt_id="default",
226 |                                     parameters={}
227 |                                 )
228 |                                 
229 |                                 yield f"data: {json.dumps(final_response.dict())}\n\n"
230 |                                 yield "data: [DONE]\n\n"
231 |                                 return
232 |                             
233 |                             try:
234 |                                 # Parse JSON data
235 |                                 chunk_data = json.loads(data)
236 |                                 
237 |                                 # Extract content from chunk
238 |                                 if 'choices' in chunk_data and len(chunk_data['choices']) > 0:
239 |                                     if 'delta' in chunk_data['choices'][0] and 'content' in chunk_data['choices'][0]['delta']:
240 |                                         content = chunk_data['choices'][0]['delta']['content']
241 |                                         content_buffer += content
242 |                                         
243 |                                         # Create partial MCP response
244 |                                         partial_response = {
245 |                                             "context_id": "stream-" + str(hash(content_buffer))[:8],
246 |                                             "content": content,
247 |                                             "model": openai_request.get("model", DEFAULT_MODEL),
248 |                                             "is_partial": True
249 |                                         }
250 |                                         
251 |                                         yield f"data: {json.dumps(partial_response)}\n\n"
252 |                                         
253 |                             except json.JSONDecodeError:
254 |                                 logging.error(f"Invalid JSON in stream: {data}")
255 |                 
256 |     except Exception as e:
257 |         logging.error(f"Error in stream_from_modal: {str(e)}")
258 |         yield f"data: {json.dumps({'error': str(e)})}\n\n"
259 |         yield "data: [DONE]\n\n"
260 | 
261 | # Add a custom prompt template
262 | @app.post("/prompts")
263 | async def add_prompt(prompt: MCPPromptTemplate):
264 |     """Add a new prompt template"""
265 |     prompt_library[prompt.id] = prompt.dict()
266 |     return {"status": "success", "message": f"Added prompt template with ID {prompt.id}"}
267 | 
268 | # Delete a prompt template
269 | @app.delete("/prompts/{prompt_id}")
270 | async def delete_prompt(prompt_id: str):
271 |     """Delete a prompt template"""
272 |     if prompt_id == "default":
273 |         raise HTTPException(
274 |             status_code=status.HTTP_400_BAD_REQUEST,
275 |             detail="Cannot delete the default prompt template"
276 |         )
277 |         
278 |     if prompt_id not in prompt_library:
279 |         raise HTTPException(
280 |             status_code=status.HTTP_404_NOT_FOUND,
281 |             detail=f"Prompt template with ID {prompt_id} not found"
282 |         )
283 |         
284 |     del prompt_library[prompt_id]
285 |     return {"status": "success", "message": f"Deleted prompt template with ID {prompt_id}"}
286 | 
287 | # Main entry point
288 | if __name__ == "__main__":
289 |     import uvicorn
290 |     uvicorn.run(app, host="0.0.0.0", port=8000)
291 | 
```
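`stream_from_modal()` above recovers complete SSE messages by splitting its buffer on blank lines and stripping the `data: ` prefix. The same framing logic, isolated as a standalone sketch (the `parse_sse` helper is illustrative, not part of the adapter):

```python
import json

def parse_sse(buffer):
    """Split a buffer of SSE text into decoded payloads, mirroring the
    while-loop in stream_from_modal(). Returns (events, leftover_buffer);
    the [DONE] sentinel is represented as None."""
    events = []
    while "\n\n" in buffer:
        message, buffer = buffer.split("\n\n", 1)
        if message.startswith("data: "):
            data = message[len("data: "):]
            if data == "[DONE]":
                events.append(None)  # end-of-stream marker
            else:
                events.append(json.loads(data))
    return events, buffer

raw = 'data: {"choices": [{"delta": {"content": "Hi"}}]}\n\ndata: [DONE]\n\n'
events, rest = parse_sse(raw)
```

Keeping the unparsed remainder in `rest` matters because `aiter_text()` chunks are not aligned to message boundaries; a partial message stays buffered until its terminating blank line arrives.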

--------------------------------------------------------------------------------
/claude_code/lib/providers/openai.py:
--------------------------------------------------------------------------------

```python
  1 | #!/usr/bin/env python3
  2 | # claude_code/lib/providers/openai.py
  3 | """OpenAI provider implementation."""
  4 | 
  5 | import os
  6 | from typing import Dict, List, Generator, Optional, Any, Union
  7 | import time
  8 | import logging
  9 | import json
 10 | 
 11 | import tiktoken
 12 | from openai import OpenAI, RateLimitError, APIError
 13 | 
 14 | from .base import BaseProvider
 15 | 
 16 | logger = logging.getLogger(__name__)
 17 | 
 18 | # Model information including context window and pricing
 19 | MODEL_INFO = {
 20 |     "gpt-3.5-turbo": {
 21 |         "context_window": 16385,
 22 |         "input_cost_per_1k": 0.0015,
 23 |         "output_cost_per_1k": 0.002,
 24 |         "capabilities": ["function_calling", "json_mode"],
 25 |     },
 26 |     "gpt-4o": {
 27 |         "context_window": 128000,
 28 |         "input_cost_per_1k": 0.005,
 29 |         "output_cost_per_1k": 0.015,
 30 |         "capabilities": ["function_calling", "json_mode", "vision"],
 31 |     },
 32 |     "gpt-4-turbo": {
 33 |         "context_window": 128000, 
 34 |         "input_cost_per_1k": 0.01,
 35 |         "output_cost_per_1k": 0.03,
 36 |         "capabilities": ["function_calling", "json_mode", "vision"],
 37 |     },
 38 |     "gpt-4": {
 39 |         "context_window": 8192,
 40 |         "input_cost_per_1k": 0.03,
 41 |         "output_cost_per_1k": 0.06,
 42 |         "capabilities": ["function_calling", "json_mode"],
 43 |     },
 44 | }
 45 | 
 46 | DEFAULT_MODEL = "gpt-4o"
 47 | 
 48 | 
 49 | class OpenAIProvider(BaseProvider):
 50 |     """OpenAI API provider implementation."""
 51 |     
 52 |     def __init__(self, api_key: Optional[str] = None, model: Optional[str] = None):
 53 |         """Initialize the OpenAI provider.
 54 |         
 55 |         Args:
 56 |             api_key: OpenAI API key. If None, will use OPENAI_API_KEY environment variable
 57 |             model: Model to use. If None, will use DEFAULT_MODEL
 58 |         """
 59 |         self._api_key = api_key or os.environ.get("OPENAI_API_KEY")
 60 |         if not self._api_key:
 61 |             raise ValueError("OpenAI API key is required. Set OPENAI_API_KEY environment variable or pass api_key.")
 62 |         
 63 |         self._client = OpenAI(api_key=self._api_key)
 64 |         self._model = model or os.environ.get("OPENAI_MODEL", DEFAULT_MODEL)
 65 |         
 66 |         if self._model not in MODEL_INFO:
 67 |             logger.warning(f"Unknown model: {self._model}. Using {DEFAULT_MODEL} instead.")
 68 |             self._model = DEFAULT_MODEL
 69 |             
 70 |         # Cache for tokenizers
 71 |         self._tokenizers = {}
 72 |     
 73 |     @property
 74 |     def name(self) -> str:
 75 |         return "OpenAI"
 76 |     
 77 |     @property
 78 |     def available_models(self) -> List[str]:
 79 |         return list(MODEL_INFO.keys())
 80 |     
 81 |     @property
 82 |     def current_model(self) -> str:
 83 |         return self._model
 84 |     
 85 |     def set_model(self, model_name: str) -> None:
 86 |         if model_name not in MODEL_INFO:
 87 |             raise ValueError(f"Unknown model: {model_name}. Available models: {', '.join(self.available_models)}")
 88 |         self._model = model_name
 89 |     
 90 |     def generate_completion(self, 
 91 |                            messages: List[Dict[str, Any]], 
 92 |                            tools: Optional[List[Dict[str, Any]]] = None,
 93 |                            temperature: float = 0.0,
 94 |                            stream: bool = True) -> Union[Dict[str, Any], Generator[Dict[str, Any], None, None]]:
 95 |         """Generate a completion from OpenAI.
 96 |         
 97 |         Args:
 98 |             messages: List of message dictionaries with 'role' and 'content' keys
 99 |             tools: Optional list of tool dictionaries
100 |             temperature: Model temperature (0-1)
101 |             stream: Whether to stream the response
102 |             
103 |         Returns:
104 |             If stream=True, returns a generator of response chunks
105 |             If stream=False, returns the complete response
106 |         """
107 |         try:
108 |             # Convert tools to OpenAI format if provided
109 |             api_tools = None
110 |             if tools:
111 |                 api_tools = []
112 |                 for tool in tools:
113 |                     api_tools.append({
114 |                         "type": "function",
115 |                         "function": {
116 |                             "name": tool["name"],
117 |                             "description": tool["description"],
118 |                             "parameters": tool["parameters"]
119 |                         }
120 |                     })
121 |             
122 |             # Make the API call
123 |             response = self._client.chat.completions.create(
124 |                 model=self._model,
125 |                 messages=messages,
126 |                 tools=api_tools,
127 |                 temperature=temperature,
128 |                 stream=stream
129 |             )
130 |             
131 |             # Handle streaming and non-streaming responses
132 |             if stream:
133 |                 return self._process_streaming_response(response)
134 |             else:
135 |                 return {
136 |                     "content": response.choices[0].message.content,
137 |                     "tool_calls": response.choices[0].message.tool_calls,
138 |                     "finish_reason": response.choices[0].finish_reason,
139 |                     "usage": {
140 |                         "prompt_tokens": response.usage.prompt_tokens,
141 |                         "completion_tokens": response.usage.completion_tokens,
142 |                         "total_tokens": response.usage.total_tokens
143 |                     }
144 |                 }
145 |                 
146 |         except RateLimitError as e:
147 |             logger.error(f"Rate limit exceeded: {str(e)}")
148 |             raise
149 |         except APIError as e:
150 |             logger.error(f"API error: {str(e)}")
151 |             raise
152 |         except Exception as e:
153 |             logger.error(f"Error generating completion: {str(e)}")
154 |             raise
155 |     
156 |     def _process_streaming_response(self, response):
157 |         """Process a streaming response from OpenAI."""
158 |         current_tool_calls = []
159 |         tool_call_chunks = {}
160 |         
161 |         for chunk in response:
162 |             # Create a result chunk to yield
163 |             result_chunk = {
164 |                 "content": None,
165 |                 "tool_calls": None,
166 |                 "delta": True
167 |             }
168 |             
169 |             # Process content
170 |             delta = chunk.choices[0].delta
171 |             if delta.content:
172 |                 result_chunk["content"] = delta.content
173 |             
174 |             # Process tool calls
175 |             if delta.tool_calls:
176 |                 result_chunk["tool_calls"] = []
177 |                 
178 |                 for tool_call_delta in delta.tool_calls:
179 |                     # Initialize tool call in chunks dictionary if new
180 |                     idx = tool_call_delta.index
181 |                     if idx not in tool_call_chunks:
182 |                         tool_call_chunks[idx] = {
183 |                             "id": "",
184 |                             "function": {"name": "", "arguments": ""}
185 |                         }
186 |                     
187 |                     # Update tool call data
188 |                     if tool_call_delta.id:
189 |                         tool_call_chunks[idx]["id"] = tool_call_delta.id
190 |                     
191 |                     if tool_call_delta.function:
192 |                         if tool_call_delta.function.name:
193 |                             tool_call_chunks[idx]["function"]["name"] = tool_call_delta.function.name
194 |                         
195 |                         if tool_call_delta.function.arguments:
196 |                             tool_call_chunks[idx]["function"]["arguments"] += tool_call_delta.function.arguments
197 |                     
198 |                     # Add current state to result
199 |                     result_chunk["tool_calls"].append(tool_call_chunks[idx])
200 |             
201 |             # Yield the chunk
202 |             yield result_chunk
203 |         
204 |         # Final yield with complete tool calls
205 |         if tool_call_chunks:
206 |             complete_calls = list(tool_call_chunks.values())
207 |             yield {
208 |                 "content": None,
209 |                 "tool_calls": complete_calls,
210 |                 "delta": False,
211 |                 "finish_reason": "tool_calls"
212 |             }
213 |     
214 |     def _get_tokenizer(self, model: str = None) -> Any:
215 |         """Get a tokenizer for the specified model."""
216 |         model = model or self._model
217 |         
218 |         if model not in self._tokenizers:
219 |             try:
220 |                 # encoding_for_model picks the right encoding (e.g., o200k_base for gpt-4o)
221 |                 self._tokenizers[model] = tiktoken.encoding_for_model(model)
222 |             except KeyError:
223 |                 logger.warning(f"No tokenizer mapping for {model}; falling back to cl100k_base")
224 |                 self._tokenizers[model] = tiktoken.get_encoding("cl100k_base")
225 |         
226 |         return self._tokenizers[model]
227 |     
228 |     def count_tokens(self, text: str) -> int:
229 |         """Count tokens in text."""
230 |         tokenizer = self._get_tokenizer()
231 |         return len(tokenizer.encode(text))
232 |     
233 |     def count_message_tokens(self, messages: List[Dict[str, Any]]) -> Dict[str, int]:
234 |         """Count tokens in a message list."""
235 |         # Simple approximation - in production, would need to match OpenAI's tokenization exactly
236 |         prompt_tokens = 0
237 |         
238 |         for message in messages:
239 |             # Add tokens for message role
240 |             prompt_tokens += 4  # ~4 tokens for role
241 |             
242 |             # Count content tokens
243 |             if "content" in message and message["content"]:
244 |                 prompt_tokens += self.count_tokens(message["content"])
245 |             
246 |             # Count tokens from any tool calls or tool results
247 |             if "tool_calls" in message and message["tool_calls"]:
248 |                 for tool_call in message["tool_calls"]:
249 |                     prompt_tokens += 4  # ~4 tokens for tool call overhead
250 |                     prompt_tokens += self.count_tokens(tool_call.get("function", {}).get("name", ""))
251 |                     prompt_tokens += self.count_tokens(tool_call.get("function", {}).get("arguments", ""))
252 |             
253 |             if "name" in message and message["name"]:
254 |                 prompt_tokens += self.count_tokens(message["name"])
255 |                 
256 |             if "tool_call_id" in message and message["tool_call_id"]:
257 |                 prompt_tokens += 10  # ~10 tokens for tool_call_id and overhead
258 |         
259 |         # Add ~3 tokens for message formatting
260 |         prompt_tokens += 3
261 |         
262 |         return {
263 |             "input": prompt_tokens,
264 |             "output": 0  # We don't know output tokens yet
265 |         }
266 |     
267 |     def get_model_info(self) -> Dict[str, Any]:
268 |         """Get information about the current model."""
269 |         return MODEL_INFO[self._model]
270 |     
271 |     @property
272 |     def cost_per_1k_tokens(self) -> Dict[str, float]:
273 |         """Get cost per 1K tokens for input and output."""
274 |         info = self.get_model_info()
275 |         return {
276 |             "input": info["input_cost_per_1k"],
277 |             "output": info["output_cost_per_1k"]
278 |         }
279 |     
280 |     def validate_api_key(self) -> bool:
281 |         """Validate the API key."""
282 |         try:
283 |             # Make a minimal API call to test the key
284 |             self._client.models.list()
285 |             return True
286 |         except Exception as e:
287 |             logger.error(f"API key validation failed: {str(e)}")
288 |             return False
289 |     
290 |     def get_rate_limit_info(self) -> Dict[str, Any]:
291 |         """Get rate limit information."""
292 |         # OpenAI doesn't provide direct rate limit info via API
293 |         # This is a placeholder implementation
294 |         return {
295 |             "requests_per_minute": 3500,
296 |             "tokens_per_minute": 90000,
297 |             "reset_time": None
298 |         }
```
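The streaming accumulator in `_process_streaming_response` merges tool-call deltas by index, concatenating argument fragments as they arrive so that parallel tool calls interleaved in one stream each reassemble correctly. A minimal stand-alone sketch of that merge, using `SimpleNamespace` stand-ins for the SDK's delta objects (the delta sequence below is invented for illustration, not real API output):

```python
from types import SimpleNamespace

def accumulate_tool_calls(deltas):
    """Merge streamed tool-call deltas by index, as the provider does."""
    chunks = {}
    for delta in deltas:
        entry = chunks.setdefault(
            delta.index, {"id": "", "function": {"name": "", "arguments": ""}}
        )
        if delta.id:
            entry["id"] = delta.id
        if delta.function:
            if delta.function.name:
                entry["function"]["name"] = delta.function.name
            if delta.function.arguments:
                entry["function"]["arguments"] += delta.function.arguments
    return [chunks[i] for i in sorted(chunks)]

# Simulated deltas: the id and name arrive first, arguments stream in pieces.
deltas = [
    SimpleNamespace(index=0, id="call_1",
                    function=SimpleNamespace(name="read_file", arguments="")),
    SimpleNamespace(index=0, id=None,
                    function=SimpleNamespace(name=None, arguments='{"path": ')),
    SimpleNamespace(index=0, id=None,
                    function=SimpleNamespace(name=None, arguments='"a.txt"}')),
]
calls = accumulate_tool_calls(deltas)
print(calls[0]["function"]["arguments"])  # {"path": "a.txt"}
```

Keying on `delta.index` rather than `delta.id` matters because only the first chunk of each call carries the id; later argument fragments identify themselves by index alone.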

--------------------------------------------------------------------------------
/claude_code/lib/monitoring/server_metrics.py:
--------------------------------------------------------------------------------

```python
  1 | #!/usr/bin/env python3
  2 | """Module for tracking MCP server metrics."""
  3 | 
  4 | import os
  5 | import time
  6 | import json
  7 | import logging
  8 | import threading
  9 | from typing import Dict, List, Any
 10 | from datetime import datetime, timedelta
 11 | from collections import deque, Counter
 12 | 
 13 | # Setup logging
 14 | logging.basicConfig(level=logging.INFO)
 15 | logger = logging.getLogger(__name__)
 16 | 
 17 | class ServerMetrics:
 18 |     """Tracks MCP server metrics for visualization."""
 19 |     
 20 |     def __init__(self, history_size: int = 100, save_interval: int = 60):
 21 |         """Initialize the server metrics tracker.
 22 |         
 23 |         Args:
 24 |             history_size: Number of data points to keep in history
 25 |             save_interval: How often to save metrics to disk (in seconds)
 26 |         """
 27 |         self._start_time = time.time()
 28 |         self._lock = threading.RLock()
 29 |         self._history_size = history_size
 30 |         self._save_interval = save_interval
 31 |         self._save_path = os.path.expanduser("~/.config/claude_code/metrics.json")
 32 |         
 33 |         # Ensure directory exists
 34 |         os.makedirs(os.path.dirname(self._save_path), exist_ok=True)
 35 |         
 36 |         # Metrics
 37 |         self._request_history = deque(maxlen=history_size)
 38 |         self._tool_calls = Counter()
 39 |         self._resource_calls = Counter()
 40 |         self._connections = 0
 41 |         self._active_connections = set()
 42 |         self._errors = Counter()
 43 |         
 44 |         # Time series of ten one-minute buckets, staggered to match reset_stats
 45 |         self._time_series = {
 46 |             "tool_calls": deque([(time.time() - (600 - i * 60), 0) for i in range(10)], maxlen=10),
 47 |             "resource_calls": deque([(time.time() - (600 - i * 60), 0) for i in range(10)], maxlen=10)
 48 |         }
 49 |         
 50 |         # Start auto-save thread
 51 |         self._running = True
 52 |         self._save_thread = threading.Thread(target=self._auto_save, daemon=True)
 53 |         self._save_thread.start()
 54 |         
 55 |         # Load previous metrics if available
 56 |         self._load_metrics()
 57 |     
 58 |     def _auto_save(self):
 59 |         """Periodically save metrics to disk."""
 60 |         while self._running:
 61 |             time.sleep(self._save_interval)
 62 |             try:
 63 |                 self.save_metrics()
 64 |             except Exception as e:
 65 |                 logger.error(f"Error saving metrics: {e}")
 66 |     
 67 |     def _load_metrics(self):
 68 |         """Load metrics from disk if available."""
 69 |         try:
 70 |             if os.path.exists(self._save_path):
 71 |                 with open(self._save_path, 'r', encoding='utf-8') as f:
 72 |                     data = json.load(f)
 73 |                 
 74 |                 with self._lock:
 75 |                     # Load previous tool and resource calls
 76 |                     self._tool_calls = Counter(data.get("tool_calls", {}))
 77 |                     self._resource_calls = Counter(data.get("resource_calls", {}))
 78 |                     
 79 |                     # Don't load time-sensitive data like connections and history
 80 |                     
 81 |                     logger.info(f"Loaded metrics from {self._save_path}")
 82 |         except Exception as e:
 83 |             logger.error(f"Error loading metrics: {e}")
 84 |     
 85 |     def save_metrics(self):
 86 |         """Save metrics to disk."""
 87 |         try:
 88 |             with self._lock:
 89 |                 data = {
 90 |                     "tool_calls": dict(self._tool_calls),
 91 |                     "resource_calls": dict(self._resource_calls),
 92 |                     "total_connections": self._connections,
 93 |                     "last_saved": time.time()
 94 |                 }
 95 |             
 96 |             with open(self._save_path, 'w', encoding='utf-8') as f:
 97 |                 json.dump(data, f, indent=2)
 98 |             
 99 |             logger.debug(f"Metrics saved to {self._save_path}")
100 |         except Exception as e:
101 |             logger.error(f"Error saving metrics: {e}")
102 |     
103 |     def log_tool_call(self, tool_name: str, success: bool = True):
104 |         """Log a tool call.
105 |         
106 |         Args:
107 |             tool_name: The name of the tool that was called
108 |             success: Whether the call was successful
109 |         """
110 |         with self._lock:
111 |             self._tool_calls[tool_name] += 1
112 |             
113 |             # Add to request history
114 |             timestamp = time.time()
115 |             self._request_history.append({
116 |                 "type": "tool",
117 |                 "name": tool_name,
118 |                 "success": success,
119 |                 "timestamp": timestamp
120 |             })
121 |             
122 |             # Update time series
123 |             current_time = time.time()
124 |             last_time, count = self._time_series["tool_calls"][-1]
125 |             if current_time - last_time < 60:  # Less than a minute
126 |                 self._time_series["tool_calls"][-1] = (last_time, count + 1)
127 |             else:
128 |                 self._time_series["tool_calls"].append((current_time, 1))
129 |     
130 |     def log_resource_request(self, resource_uri: str, success: bool = True):
131 |         """Log a resource request.
132 |         
133 |         Args:
134 |             resource_uri: The URI of the requested resource
135 |             success: Whether the request was successful
136 |         """
137 |         with self._lock:
138 |             self._resource_calls[resource_uri] += 1
139 |             
140 |             # Add to request history
141 |             timestamp = time.time()
142 |             self._request_history.append({
143 |                 "type": "resource",
144 |                 "uri": resource_uri,
145 |                 "success": success,
146 |                 "timestamp": timestamp
147 |             })
148 |             
149 |             # Update time series
150 |             current_time = time.time()
151 |             last_time, count = self._time_series["resource_calls"][-1]
152 |             if current_time - last_time < 60:  # Less than a minute
153 |                 self._time_series["resource_calls"][-1] = (last_time, count + 1)
154 |             else:
155 |                 self._time_series["resource_calls"].append((current_time, 1))
156 |     
157 |     def log_connection(self, client_id: str, connected: bool = True):
158 |         """Log a client connection or disconnection.
159 |         
160 |         Args:
161 |             client_id: Client identifier
162 |             connected: True for connection, False for disconnection
163 |         """
164 |         with self._lock:
165 |             if connected:
166 |                 self._connections += 1
167 |                 self._active_connections.add(client_id)
168 |             else:
169 |                 self._active_connections.discard(client_id)
170 |             
171 |             # Add to request history
172 |             timestamp = time.time()
173 |             self._request_history.append({
174 |                 "type": "connection",
175 |                 "client_id": client_id,
176 |                 "action": "connect" if connected else "disconnect",
177 |                 "timestamp": timestamp
178 |             })
179 |     
180 |     def log_error(self, error_type: str, message: str):
181 |         """Log an error.
182 |         
183 |         Args:
184 |             error_type: Type of error
185 |             message: Error message
186 |         """
187 |         with self._lock:
188 |             self._errors[error_type] += 1
189 |             
190 |             # Add to request history
191 |             timestamp = time.time()
192 |             self._request_history.append({
193 |                 "type": "error",
194 |                 "error_type": error_type,
195 |                 "message": message,
196 |                 "timestamp": timestamp
197 |             })
198 |     
199 |     def get_uptime(self) -> str:
200 |         """Get the server uptime as a human-readable string.
201 |         
202 |         Returns:
203 |             Uptime string (e.g., "2 hours 15 minutes")
204 |         """
205 |         uptime_seconds = time.time() - self._start_time
206 |         uptime = timedelta(seconds=int(uptime_seconds))
207 |         
208 |         days = uptime.days
209 |         hours, remainder = divmod(uptime.seconds, 3600)
210 |         minutes, seconds = divmod(remainder, 60)
211 |         
212 |         parts = []
213 |         if days > 0:
214 |             parts.append(f"{days} {'day' if days == 1 else 'days'}")
215 |         if hours > 0 or days > 0:
216 |             parts.append(f"{hours} {'hour' if hours == 1 else 'hours'}")
217 |         if minutes > 0 or hours > 0 or days > 0:
218 |             parts.append(f"{minutes} {'minute' if minutes == 1 else 'minutes'}")
219 |         
220 |         if not parts:
221 |             return f"{seconds} seconds"
222 |         
223 |         return " ".join(parts)
224 |     
225 |     def get_active_connections_count(self) -> int:
226 |         """Get the number of active connections.
227 |         
228 |         Returns:
229 |             Number of active connections
230 |         """
231 |         with self._lock:
232 |             return len(self._active_connections)
233 |     
234 |     def get_total_connections(self) -> int:
235 |         """Get the total number of connections since startup.
236 |         
237 |         Returns:
238 |             Total connection count
239 |         """
240 |         with self._lock:
241 |             return self._connections
242 |     
243 |     def get_recent_activity(self, count: int = 10) -> List[Dict[str, Any]]:
244 |         """Get recent activity.
245 |         
246 |         Args:
247 |             count: Number of recent events to return
248 |             
249 |         Returns:
250 |             List of recent activity events
251 |         """
252 |         with self._lock:
253 |             # Copy events so formatting does not mutate the stored history
254 |             recent = [dict(event) for event in list(self._request_history)[-count:]]
255 | 
256 |             for event in recent:
257 |                 ts = event["timestamp"]
258 |                 event["formatted_time"] = datetime.fromtimestamp(ts).strftime("%Y-%m-%d %H:%M:%S")
259 | 
260 |             return recent
261 |     
262 |     def get_tool_usage_stats(self) -> Dict[str, int]:
263 |         """Get statistics on tool usage.
264 |         
265 |         Returns:
266 |             Dictionary mapping tool names to call counts
267 |         """
268 |         with self._lock:
269 |             return dict(self._tool_calls)
270 |     
271 |     def get_resource_usage_stats(self) -> Dict[str, int]:
272 |         """Get statistics on resource usage.
273 |         
274 |         Returns:
275 |             Dictionary mapping resource URIs to request counts
276 |         """
277 |         with self._lock:
278 |             return dict(self._resource_calls)
279 |     
280 |     def get_error_stats(self) -> Dict[str, int]:
281 |         """Get statistics on errors.
282 |         
283 |         Returns:
284 |             Dictionary mapping error types to counts
285 |         """
286 |         with self._lock:
287 |             return dict(self._errors)
288 |     
289 |     def get_time_series_data(self) -> Dict[str, List[Dict[str, Any]]]:
290 |         """Get time series data for charts.
291 |         
292 |         Returns:
293 |             Dictionary with time series data
294 |         """
295 |         with self._lock:
296 |             result = {}
297 |             
298 |             # Convert deques to lists of dictionaries
299 |             for series_name, series_data in self._time_series.items():
300 |                 result[series_name] = [
301 |                     {"timestamp": ts, "value": val, "formatted_time": datetime.fromtimestamp(ts).strftime("%H:%M:%S")}
302 |                     for ts, val in series_data
303 |                 ]
304 |             
305 |             return result
306 |     
307 |     def get_all_metrics(self) -> Dict[str, Any]:
308 |         """Get all metrics data.
309 |         
310 |         Returns:
311 |             Dictionary with all metrics
312 |         """
313 |         return {
314 |             "uptime": self.get_uptime(),
315 |             "active_connections": self.get_active_connections_count(),
316 |             "total_connections": self.get_total_connections(),
317 |             "recent_activity": self.get_recent_activity(20),
318 |             "tool_usage": self.get_tool_usage_stats(),
319 |             "resource_usage": self.get_resource_usage_stats(),
320 |             "errors": self.get_error_stats(),
321 |             "time_series": self.get_time_series_data()
322 |         }
323 |     
324 |     def reset_stats(self):
325 |         """Reset all statistics but keep the start time."""
326 |         with self._lock:
327 |             self._request_history.clear()
328 |             self._tool_calls.clear()
329 |             self._resource_calls.clear()
330 |             self._connections = 0
331 |             self._active_connections.clear()
332 |             self._errors.clear()
333 |             
334 |             # Reset time series
335 |             current_time = time.time()
336 |             self._time_series = {
337 |                 "tool_calls": deque([(current_time - (600 - i * 60), 0) for i in range(10)], maxlen=10),
338 |                 "resource_calls": deque([(current_time - (600 - i * 60), 0) for i in range(10)], maxlen=10)
339 |             }
340 |     
341 |     def shutdown(self):
342 |         """Shutdown the metrics tracker and save data."""
343 |         self._running = False
344 |         self.save_metrics()
345 | 
346 | 
347 | # Singleton instance
348 | _metrics_instance = None
349 | 
350 | def get_metrics() -> ServerMetrics:
351 |     """Get or create the singleton metrics instance.
352 |     
353 |     Returns:
354 |         ServerMetrics instance
355 |     """
356 |     global _metrics_instance
357 |     if _metrics_instance is None:
358 |         _metrics_instance = ServerMetrics()
359 |     return _metrics_instance
```
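The per-minute bucketing shared by `log_tool_call` and `log_resource_request` can be exercised in isolation: an event within 60 seconds of the newest bucket increments that bucket, otherwise a fresh bucket is appended and the bounded deque silently drops the oldest. The `record` helper and the fixed timestamps below are illustrative assumptions, not part of the module:

```python
from collections import deque

def record(series: deque, now: float) -> None:
    """Increment the newest one-minute bucket, or start a new one."""
    last_time, count = series[-1]
    if now - last_time < 60:      # still inside the current minute
        series[-1] = (last_time, count + 1)
    else:                         # new minute: append; maxlen evicts the oldest
        series.append((now, 1))

series = deque([(0.0, 0)], maxlen=10)
record(series, 10.0)   # same bucket -> (0.0, 1)
record(series, 30.0)   # same bucket -> (0.0, 2)
record(series, 75.0)   # new bucket  -> (75.0, 1)
print(list(series))    # [(0.0, 2), (75.0, 1)]
```

Because buckets are keyed off the *last* bucket's timestamp rather than wall-clock minute boundaries, a quiet period longer than a minute simply starts the next bucket late; the deque's `maxlen` keeps memory constant regardless of uptime.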

--------------------------------------------------------------------------------
/claude_code/lib/monitoring/cost_tracker.py:
--------------------------------------------------------------------------------

```python
  1 | #!/usr/bin/env python3
  2 | # claude_code/lib/monitoring/cost_tracker.py
  3 | """Cost tracking and management."""
  4 | 
  5 | import logging
  6 | import json
  7 | import os
  8 | import time
  9 | from datetime import datetime
 10 | from typing import Dict, List, Optional, Any, Tuple
 11 | 
 12 | from rich.panel import Panel
 13 | from rich.table import Table
 14 | from rich.text import Text
 15 | from rich.box import ROUNDED
 16 | 
 17 | logger = logging.getLogger(__name__)
 18 | 
 19 | 
 20 | class CostTracker:
 21 |     """Tracks token usage and calculates costs for LLM interactions."""
 22 |     
 23 |     def __init__(self, budget_limit: Optional[float] = None, history_file: Optional[str] = None):
 24 |         """Initialize the cost tracker.
 25 |         
 26 |         Args:
 27 |             budget_limit: Optional budget limit in dollars
 28 |             history_file: Optional path to a file to store history
 29 |         """
 30 |         self.budget_limit = budget_limit
 31 |         self.history_file = history_file
 32 |         
 33 |         # Initialize session counters
 34 |         self.session_start = datetime.now()
 35 |         self.session_tokens_input = 0
 36 |         self.session_tokens_output = 0
 37 |         self.session_cost = 0.0
 38 |         
 39 |         # Request history
 40 |         self.requests: List[Dict[str, Any]] = []
 41 |         
 42 |         # Load history from file if provided
 43 |         self._load_history()
 44 |     
 45 |     def add_request(self, 
 46 |                    provider: str, 
 47 |                    model: str, 
 48 |                    tokens_input: int, 
 49 |                    tokens_output: int,
 50 |                    input_cost_per_1k: float,
 51 |                    output_cost_per_1k: float,
 52 |                    request_id: Optional[str] = None) -> Dict[str, Any]:
 53 |         """Add a request to the tracker.
 54 |         
 55 |         Args:
 56 |             provider: Provider name (e.g., "openai", "anthropic")
 57 |             model: Model name (e.g., "gpt-4o", "claude-3-opus")
 58 |             tokens_input: Number of input tokens
 59 |             tokens_output: Number of output tokens
 60 |             input_cost_per_1k: Cost per 1,000 input tokens
 61 |             output_cost_per_1k: Cost per 1,000 output tokens
 62 |             request_id: Optional request ID
 63 |             
 64 |         Returns:
 65 |             Dictionary with request information including costs
 66 |         """
 67 |         # Calculate costs
 68 |         input_cost = (tokens_input / 1000) * input_cost_per_1k
 69 |         output_cost = (tokens_output / 1000) * output_cost_per_1k
 70 |         total_cost = input_cost + output_cost
 71 |         
 72 |         # Update session counters
 73 |         self.session_tokens_input += tokens_input
 74 |         self.session_tokens_output += tokens_output
 75 |         self.session_cost += total_cost
 76 |         
 77 |         # Create request record
 78 |         request = {
            "id": request_id or f"{int(time.time())}-{len(self.requests)}",
            "timestamp": datetime.now().isoformat(),
            "provider": provider,
            "model": model,
            "tokens_input": tokens_input,
            "tokens_output": tokens_output,
            "input_cost": input_cost,
            "output_cost": output_cost,
            "total_cost": total_cost
        }

        # Add to history
        self.requests.append(request)

        # Save history
        self._save_history()

        # Log the request
        logger.info(
            f"Request: {provider}/{model}, " +
            f"Tokens: {tokens_input} in / {tokens_output} out, " +
            f"Cost: ${total_cost:.4f}"
        )

        return request

    def get_session_stats(self) -> Dict[str, Any]:
        """Get statistics for the current session.

        Returns:
            Dictionary with session statistics
        """
        return {
            "start_time": self.session_start.isoformat(),
            "duration_seconds": (datetime.now() - self.session_start).total_seconds(),
            "tokens_input": self.session_tokens_input,
            "tokens_output": self.session_tokens_output,
            "total_tokens": self.session_tokens_input + self.session_tokens_output,
            "total_cost": self.session_cost,
            "request_count": len(self.requests),
            "budget_limit": self.budget_limit,
            "budget_remaining": None if self.budget_limit is None else self.budget_limit - self.session_cost
        }

    def check_budget(self) -> Dict[str, Any]:
        """Check if budget limit is approached or exceeded.

        Returns:
            Dictionary with budget status information
        """
        if self.budget_limit is None:
            return {
                "has_budget": False,
                "status": "no_limit",
                "message": "No budget limit set"
            }

        remaining = self.budget_limit - self.session_cost
        # Guard against division by zero when the budget limit is set to 0
        percentage_used = (self.session_cost / self.budget_limit) * 100 if self.budget_limit else 100.0

        if remaining <= 0:
            status = "exceeded"
            message = f"Budget exceeded by ${abs(remaining):.2f}"
        elif percentage_used > 90:
            status = "critical"
            message = f"Budget critical: ${remaining:.2f} remaining ({percentage_used:.1f}% used)"
        elif percentage_used > 75:
            status = "warning"
            message = f"Budget warning: ${remaining:.2f} remaining ({percentage_used:.1f}% used)"
        else:
            status = "ok"
            message = f"Budget OK: ${remaining:.2f} remaining ({percentage_used:.1f}% used)"

        return {
            "has_budget": True,
            "status": status,
            "message": message,
            "limit": self.budget_limit,
            "used": self.session_cost,
            "remaining": remaining,
            "percentage_used": percentage_used
        }

    def get_usage_by_model(self) -> Dict[str, Dict[str, Any]]:
        """Get usage statistics grouped by model.

        Returns:
            Dictionary mapping "provider/model" to usage statistics
        """
        usage: Dict[str, Dict[str, Any]] = {}

        for request in self.requests:
            key = f"{request['provider']}/{request['model']}"

            if key not in usage:
                usage[key] = {
                    "provider": request["provider"],
                    "model": request["model"],
                    "request_count": 0,
                    "tokens_input": 0,
                    "tokens_output": 0,
                    "total_cost": 0.0
                }

            usage[key]["request_count"] += 1
            usage[key]["tokens_input"] += request["tokens_input"]
            usage[key]["tokens_output"] += request["tokens_output"]
            usage[key]["total_cost"] += request["total_cost"]

        return usage

    def get_cost_summary_panel(self) -> Panel:
        """Create a Rich panel with cost summary information.

        Returns:
            Rich Panel object
        """
        # Get stats and budget info
        stats = self.get_session_stats()
        budget = self.check_budget()

        # Create a table for the summary
        table = Table(show_header=False, box=ROUNDED, expand=True)
        table.add_column("Item", style="bold")
        table.add_column("Value")

        # Add rows with token usage
        table.add_row(
            "Tokens (Input)",
            f"{stats['tokens_input']:,}"
        )
        table.add_row(
            "Tokens (Output)",
            f"{stats['tokens_output']:,}"
        )
        table.add_row(
            "Total Cost",
            f"${stats['total_cost']:.4f}"
        )

        # Add budget information if available
        if budget["has_budget"]:
            # Create styled text for budget status
            status_text = Text(budget["message"])
            if budget["status"] == "exceeded":
                status_text.stylize("bold red")
            elif budget["status"] == "critical":
                status_text.stylize("bold yellow")
            elif budget["status"] == "warning":
                status_text.stylize("yellow")
            else:
                status_text.stylize("green")

            table.add_row("Budget", status_text)

        # Create the panel
        title = "[bold]Cost & Usage Summary[/bold]"
        return Panel(table, title=title, border_style="yellow")

    def reset_session(self) -> None:
        """Reset the session counters but keep request history."""
        self.session_start = datetime.now()
        self.session_tokens_input = 0
        self.session_tokens_output = 0
        self.session_cost = 0.0

        logger.info("Cost tracking session reset")

    def _save_history(self) -> None:
        """Save request history to file if configured."""
        if not self.history_file:
            return

        try:
            # Ensure directory exists
            directory = os.path.dirname(self.history_file)
            if directory and not os.path.exists(directory):
                os.makedirs(directory, exist_ok=True)

            # Save history
            with open(self.history_file, 'w', encoding='utf-8') as f:
                json.dump({
                    "session_start": self.session_start.isoformat(),
                    "budget_limit": self.budget_limit,
                    "requests": self.requests,
                    "updated_at": datetime.now().isoformat()
                }, f, indent=2)
        except Exception as e:
            logger.error(f"Failed to save cost history: {e}")

    def _load_history(self) -> None:
        """Load request history from file if available."""
        if not self.history_file or not os.path.exists(self.history_file):
            return

        try:
            with open(self.history_file, 'r', encoding='utf-8') as f:
                data = json.load(f)

                # Load session data
                self.session_start = datetime.fromisoformat(data.get('session_start', self.session_start.isoformat()))
                self.budget_limit = data.get('budget_limit', self.budget_limit)

                # Load requests
                self.requests = data.get('requests', [])

                # Recalculate session totals
                self.session_tokens_input = sum(r.get('tokens_input', 0) for r in self.requests)
                self.session_tokens_output = sum(r.get('tokens_output', 0) for r in self.requests)
                self.session_cost = sum(r.get('total_cost', 0) for r in self.requests)

                logger.info(f"Loaded cost history with {len(self.requests)} requests")
        except Exception as e:
            logger.error(f"Failed to load cost history: {e}")

    def generate_usage_report(self, format: str = "text") -> str:
        """Generate a usage report.

        Args:
            format: Output format ("text", "json", "markdown")

        Returns:
            Formatted usage report
        """
        stats = self.get_session_stats()
        model_usage = self.get_usage_by_model()

        if format == "json":
            return json.dumps({
                "session": stats,
                "models": model_usage
            }, indent=2)

        # Text or markdown format
        lines = []
        lines.append("# Usage Report" if format == "markdown" else "USAGE REPORT")
        lines.append("")

        # Session summary
        lines.append("## Session Summary" if format == "markdown" else "SESSION SUMMARY")
        lines.append(f"- Start time: {stats['start_time']}")
        lines.append(f"- Duration: {stats['duration_seconds'] / 60:.1f} minutes")
        lines.append(f"- Requests: {stats['request_count']}")
        lines.append(f"- Total tokens: {stats['total_tokens']:,} ({stats['tokens_input']:,} in / {stats['tokens_output']:,} out)")
        lines.append(f"- Total cost: ${stats['total_cost']:.4f}")
        if stats['budget_limit'] is not None:
            lines.append(f"- Budget: ${stats['budget_limit']:.2f} (${stats['budget_remaining']:.2f} remaining)")
        lines.append("")

        # Usage by model
        lines.append("## Usage by Model" if format == "markdown" else "USAGE BY MODEL")
        for key, usage in sorted(model_usage.items(), key=lambda x: x[1]['total_cost'], reverse=True):
            lines.append(f"### {key}" if format == "markdown" else key.upper())
            lines.append(f"- Requests: {usage['request_count']}")
            lines.append(f"- Tokens: {usage['tokens_input'] + usage['tokens_output']:,} ({usage['tokens_input']:,} in / {usage['tokens_output']:,} out)")
            lines.append(f"- Cost: ${usage['total_cost']:.4f}")
            lines.append("")

        return "\n".join(lines)
```
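The `check_budget` method above maps the session's spend onto four statuses: `exceeded`, `critical` (over 90% used), `warning` (over 75% used), and `ok`. A minimal standalone sketch of the same threshold logic (the `budget_status` helper is illustrative, not part of the module):

```python
def budget_status(used: float, limit: float) -> str:
    """Mirror check_budget's thresholds: exceeded, then
    critical (>90% used), warning (>75% used), otherwise ok."""
    remaining = limit - used
    percentage_used = (used / limit) * 100
    if remaining <= 0:
        return "exceeded"
    if percentage_used > 90:
        return "critical"
    if percentage_used > 75:
        return "warning"
    return "ok"

print(budget_status(2.00, 10.0))   # → ok       (20% used)
print(budget_status(8.00, 10.0))   # → warning  (80% used)
print(budget_status(9.50, 10.0))   # → critical (95% used)
print(budget_status(11.0, 10.0))   # → exceeded (over the limit)
```

Note that `exceeded` is checked first, so a spend at exactly the limit reports `exceeded` rather than `critical`.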
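`_save_history` and `_load_history` persist requests as a single JSON document and recompute session totals on load rather than storing them. A sketch of that round-trip, assuming the same JSON shape (the file path and request values here are made up for illustration):

```python
import json
import os
import tempfile
from datetime import datetime

# Fabricated example requests matching the record shape built above
requests = [
    {"provider": "openai", "model": "gpt-4o", "tokens_input": 1200,
     "tokens_output": 300, "total_cost": 0.0105},
    {"provider": "openai", "model": "gpt-4o-mini", "tokens_input": 800,
     "tokens_output": 200, "total_cost": 0.0002},
]

# Write the same top-level structure _save_history produces
path = os.path.join(tempfile.mkdtemp(), "cost_history.json")
with open(path, "w", encoding="utf-8") as f:
    json.dump({
        "session_start": datetime.now().isoformat(),
        "budget_limit": 5.0,
        "requests": requests,
        "updated_at": datetime.now().isoformat(),
    }, f, indent=2)

# Reload and recompute totals the way _load_history does
with open(path, "r", encoding="utf-8") as f:
    data = json.load(f)

tokens_input = sum(r.get("tokens_input", 0) for r in data["requests"])
session_cost = sum(r.get("total_cost", 0) for r in data["requests"])
print(tokens_input, round(session_cost, 4))  # → 2000 0.0107
```

Recomputing on load keeps the file free of derived fields, so the totals can never drift out of sync with the request list.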