# Directory Structure

```
├── .gitignore
├── chatgpt_server.py
├── Dockerfile
├── README.md
├── requirements.txt
└── smithery.yaml
```

# Files

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
 1 | # Byte-compiled / optimized / DLL files
 2 | __pycache__/
 3 | *.py[cod]
 4 | *$py.class
 5 | 
 6 | # Distribution / packaging
 7 | dist/
 8 | build/
 9 | *.egg-info/
10 | 
11 | # Virtual environments
12 | venv/
13 | env/
14 | ENV/
15 | 
16 | # Environment variables
17 | .env
18 | 
19 | # IDE files
20 | .vscode/
21 | .idea/
22 | *.swp
23 | *.swo
24 | 
25 | # Log files
26 | *.log
27 | 
28 | # Conversation cache
29 | conversations/
30 | 
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
  1 | [![MseeP.ai Security Assessment Badge](https://mseep.net/pr/billster45-mcp-chatgpt-responses-badge.png)](https://mseep.ai/app/billster45-mcp-chatgpt-responses)
  2 | 
  3 | # MCP ChatGPT Server
  4 | [![smithery badge](https://smithery.ai/badge/@billster45/mcp-chatgpt-responses)](https://smithery.ai/server/@billster45/mcp-chatgpt-responses)
  5 | 
  6 | This MCP server allows you to access OpenAI's ChatGPT API directly from Claude Desktop.
  7 | 
  8 | 📝 **Read about why I built this project**: [I Built an AI That Talks to Other AIs: Demystifying the MCP Hype](https://medium.com/@billcockerill/i-built-an-ai-that-talks-to-other-ais-demystifying-the-mcp-hype-88dc03520552)
  9 | 
 10 | ## Features
 11 | 
 12 | - Call the ChatGPT API with customisable parameters
 13 | - Ask Claude and ChatGPT to talk to each other in a long-running discussion!
 14 | - Configure model versions, temperature, and other parameters
 15 | - Use web search to get up-to-date information from the internet
 16 | - Uses OpenAI's Responses API for automatic conversation state management
 17 | - Use your own OpenAI API key
 18 | 
 19 | ## Setup Instructions
 20 | 
 21 | ### Installing via Smithery
 22 | 
 23 | To install ChatGPT Server for Claude Desktop automatically via [Smithery](https://smithery.ai/server/@billster45/mcp-chatgpt-responses):
 24 | 
 25 | ```bash
 26 | npx -y @smithery/cli install @billster45/mcp-chatgpt-responses --client claude
 27 | ```
 28 | 
 29 | ### Prerequisites
 30 | 
 31 | - Python 3.10 or higher
 32 | - [Claude Desktop](https://claude.ai/download) application
 33 | - [OpenAI API key](https://platform.openai.com/settings/organization/api-keys)
 34 | - [uv](https://github.com/astral-sh/uv) for Python package management
 35 | 
 36 | ### Installation
 37 | 
 38 | 1. Clone this repository:
 39 |    ```bash
 40 |    git clone https://github.com/billster45/mcp-chatgpt-responses.git
 41 |    cd mcp-chatgpt-responses
 42 |    ```
 43 | 
 44 | 2. Set up a virtual environment and install dependencies using uv:
 45 |    ```bash
 46 |    uv venv
 47 |    ```
 48 | 
 49 |    ```bash
 50 |    .venv\Scripts\activate   # Windows; on macOS/Linux use: source .venv/bin/activate
 51 |    ```
 52 |    
 53 |    ```bash
 54 |    uv pip install -r requirements.txt
 55 |    ```
 56 | 
 57 | ### Using with Claude Desktop
 58 | 
 59 | 1. Configure Claude Desktop to use this MCP server by following the instructions at:
 60 |    [MCP Quickstart Guide](https://modelcontextprotocol.io/quickstart/user#2-add-the-filesystem-mcp-server)
 61 | 
 62 | 2. Add the following configuration to your Claude Desktop config file (adjust paths as needed):
 63 |    ```json
 64 |    {
 65 |      "mcpServers": {
 66 |        "chatgpt": {
 67 |          "command": "uv",
 68 |          "args": [
 69 |            "--directory",
 70 |            "\\path\\to\\mcp-chatgpt-responses",
 71 |            "run",
 72 |            "chatgpt_server.py"
 73 |          ],
 74 |          "env": {
 75 |            "OPENAI_API_KEY": "your-api-key-here",
 76 |            "DEFAULT_MODEL": "gpt-4o",
 77 |            "DEFAULT_TEMPERATURE": "0.7",
 78 |            "MAX_TOKENS": "1000"
 79 |          }
 80 |        }
 81 |      }
 82 |    }
 83 |    ```
 84 | 
 85 | 3. Restart Claude Desktop.
 86 | 
 87 | 4. You can now use the ChatGPT API through Claude by asking questions that mention ChatGPT or that Claude might not be able to answer on its own.
 88 | 
 89 | ## Available Tools
 90 | 
 91 | The MCP server provides the following tools:
 92 | 
 93 | 1. `ask_chatgpt(prompt, model, temperature, max_output_tokens, response_id)` - Send a prompt to ChatGPT and get a response
 94 | 
 95 | 2. `ask_chatgpt_with_web_search(prompt, model, temperature, max_output_tokens, response_id)` - Send a prompt to ChatGPT with web search enabled to get up-to-date information
 96 | 
 97 | ## Example Usage
 98 | 
 99 | ### Basic ChatGPT usage:
100 | 
101 | Tell Claude to ask ChatGPT a question!
102 | ```
103 | Use the ask_chatgpt tool to answer: What is the best way to learn Python?
104 | ```
105 | 
106 | Tell Claude to have a conversation with ChatGPT:
107 | ```
108 | Use the ask_chatgpt tool to have a two way conversation between you and ChatGPT about the topic that is most important to you.
109 | ```
110 | Note how, in a turn-taking conversation, the response ID lets ChatGPT store the conversation history, so it's a genuine conversation and not just a series of independent API calls. This is called [conversation state](https://platform.openai.com/docs/guides/conversation-state?api-mode=responses#openai-apis-for-conversation-state).
111 | 
112 | ### With web search:
113 | 
114 | For questions that may benefit from up-to-date information:
115 | ```
116 | Use the ask_chatgpt_with_web_search tool to answer: What are the latest developments in quantum computing?
117 | ```
118 | 
119 | Now try web search in an agentic way to plan your perfect day out based on the weather!
120 | ```
121 | Use the ask_chatgpt_with_web_search tool to find tomorrow's weather in New York. Then, based on what it returns, keep using the tool to build up a great day out for someone who loves food and parks.
122 | ```
123 | 
124 | ## How It Works
125 | 
126 | This tool utilizes OpenAI's Responses API, which automatically maintains conversation state on OpenAI's servers. This approach:
127 | 
128 | 1. Simplifies code by letting OpenAI handle the conversation history
129 | 2. Provides more reliable context tracking
130 | 3. Improves the user experience by maintaining context across messages
131 | 4. Allows access to the latest information from the web with the web search tool
132 | 
133 | ## License
134 | 
135 | MIT License
136 | 
```
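The README's "How It Works" section describes how the Responses API keeps conversation state for you via `previous_response_id`. As a minimal sketch of the branching that `chatgpt_server.py` performs at each call site (the helper name `build_responses_kwargs` is illustrative, not part of the repo):

```python
# Illustrative sketch, not part of the repository: mirrors how chatgpt_server.py
# decides what to pass to client.responses.create() depending on whether a
# conversation is being started or continued.

from typing import Any, Dict, Optional


def build_responses_kwargs(
    prompt: str,
    response_id: Optional[str] = None,
    model: str = "gpt-4o",
    temperature: float = 0.7,
    max_output_tokens: int = 1000,
    web_search: bool = False,
) -> Dict[str, Any]:
    """Assemble keyword arguments for a Responses API call."""
    kwargs: Dict[str, Any] = {
        "model": model,
        "temperature": temperature,
        "max_output_tokens": max_output_tokens,
    }
    if response_id:
        # Continuing a conversation: reference the previous response and send
        # only the new user turn; OpenAI supplies the earlier history.
        kwargs["previous_response_id"] = response_id
        kwargs["input"] = [{"role": "user", "content": prompt}]
    else:
        # Starting a new conversation: a bare string is accepted as input.
        kwargs["input"] = prompt
    if web_search:
        # The server enables search by attaching a web search tool.
        kwargs["tools"] = [{"type": "web_search"}]
    return kwargs
```

Unpacking the result into `client.responses.create(**kwargs)` reproduces the two call sites in `ask_chatgpt` and `ask_chatgpt_with_web_search`.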

--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------

```
1 | mcp>=1.2.0
2 | openai>=1.0.0
3 | python-dotenv>=1.0.0
4 | httpx>=0.25.0
5 | pydantic>=2.0.0
6 | 
```

--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------

```dockerfile
 1 | # Generated by https://smithery.ai. See: https://smithery.ai/docs/config#dockerfile
 2 | FROM python:3.10-slim
 3 | 
 4 | WORKDIR /app
 5 | 
 6 | # Copy requirements file and install dependencies
 7 | COPY requirements.txt ./
 8 | RUN pip install --upgrade pip && pip install -r requirements.txt
 9 | 
10 | # Copy the rest of the application code
11 | COPY . .
12 | 
13 | # Expose port if necessary (not strictly required for stdio servers)
14 | 
15 | # Command to run the MCP server
16 | CMD ["python", "chatgpt_server.py"]
17 | 
```

--------------------------------------------------------------------------------
/smithery.yaml:
--------------------------------------------------------------------------------

```yaml
 1 | # Smithery configuration file: https://smithery.ai/docs/config#smitheryyaml
 2 | 
 3 | startCommand:
 4 |   type: stdio
 5 |   configSchema:
 6 |     # JSON Schema defining the configuration options for the MCP.
 7 |     type: object
 8 |     required:
 9 |       - openaiApiKey
10 |     properties:
11 |       openaiApiKey:
12 |         type: string
13 |         default: ""
14 |         description: Your OpenAI API key
15 |       defaultModel:
16 |         type: string
17 |         default: gpt-4o
18 |         description: Default GPT model to use
19 |       defaultTemperature:
20 |         type: number
21 |         default: 0.7
22 |         description: Temperature value for generation
23 |       maxTokens:
24 |         type: number
25 |         default: 1000
26 |         description: Maximum number of tokens to generate
27 |   commandFunction:
28 |     # A JS function that produces the CLI command based on the given config to start the MCP on stdio.
29 |     |-
30 |     (config) => ({
31 |       command: 'python',
32 |       args: ['chatgpt_server.py'],
33 |       env: {
34 |         OPENAI_API_KEY: config.openaiApiKey,
35 |         DEFAULT_MODEL: config.defaultModel,
36 |         DEFAULT_TEMPERATURE: config.defaultTemperature.toString(),
37 |         MAX_TOKENS: config.maxTokens.toString()
38 |       }
39 |     })
40 |   exampleConfig:
41 |     openaiApiKey: sk-abc123example
42 |     defaultModel: gpt-4o
43 |     defaultTemperature: 0.7
44 |     maxTokens: 1000
45 | 
```

--------------------------------------------------------------------------------
/chatgpt_server.py:
--------------------------------------------------------------------------------

```python
  1 | #!/usr/bin/env python3
  2 | """
  3 | ChatGPT MCP Server
  4 | 
  5 | This MCP server provides tools to interact with OpenAI's ChatGPT API from Claude Desktop.
  6 | Uses the OpenAI Responses API for simplified conversation state management.
  7 | """
  8 | 
  9 | import os
 10 | import json
 11 | import logging
 12 | from typing import Optional
 13 | from contextlib import asynccontextmanager
 14 | 
 15 | from dotenv import load_dotenv
 16 | from openai import OpenAI, AsyncOpenAI
 17 | from pydantic import BaseModel, Field
 18 | 
 19 | from mcp.server.fastmcp import FastMCP, Context
 20 | 
 21 | # Load environment variables
 22 | load_dotenv()
 23 | 
 24 | # Configure logging
 25 | logging.basicConfig(
 26 |     level=logging.INFO,
 27 |     format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
 28 |     handlers=[logging.StreamHandler()]
 29 | )
 30 | logger = logging.getLogger("chatgpt_server")
 31 | 
 32 | # Check for API key
 33 | api_key = os.getenv("OPENAI_API_KEY")
 34 | if not api_key:
 35 |     raise ValueError("OPENAI_API_KEY environment variable is required")
 36 | 
 37 | # Default settings
 38 | DEFAULT_MODEL = os.getenv("DEFAULT_MODEL", "gpt-4o")
 39 | DEFAULT_TEMPERATURE = float(os.getenv("DEFAULT_TEMPERATURE", "0.7"))
 40 | MAX_OUTPUT_TOKENS = int(os.getenv("MAX_TOKENS", "1000"))  # MAX_TOKENS matches the env var set in the README config and smithery.yaml
 41 | 
 42 | # Initialize OpenAI client
 43 | client = OpenAI(api_key=api_key)
 44 | async_client = AsyncOpenAI(api_key=api_key)
 45 | 
 46 | # Model list for resource
 47 | AVAILABLE_MODELS = [
 48 |     "gpt-4o",
 49 |     "gpt-4-turbo",
 50 |     "gpt-4",
 51 |     "gpt-3.5-turbo",
 52 | ]
 53 | 
 54 | # Initialize FastMCP server
 55 | mcp = FastMCP(
 56 |     "ChatGPT API",
 57 |     dependencies=["openai", "python-dotenv", "httpx", "pydantic"],
 58 | )
 59 | 
 60 | 
 61 | class OpenAIRequest(BaseModel):
 62 |     """Model for OpenAI API request parameters"""
 63 |     model: str = Field(default=DEFAULT_MODEL, description="OpenAI model name")
 64 |     temperature: float = Field(default=DEFAULT_TEMPERATURE, description="Temperature (0-2)", ge=0, le=2)
 65 |     max_output_tokens: int = Field(default=MAX_OUTPUT_TOKENS, description="Maximum tokens in response", ge=1)
 66 |     response_id: Optional[str] = Field(default=None, description="Optional response ID for continuing a chat")
 67 | 
 68 | 
 69 | @asynccontextmanager
 70 | async def app_lifespan(server: FastMCP):
 71 |     """Initialize and clean up application resources"""
 72 |     logger.info("ChatGPT MCP Server starting up")
 73 |     try:
 74 |         yield {}
 75 |     finally:
 76 |         logger.info("ChatGPT MCP Server shutting down")
 77 | 
 78 | 
 79 | # Resources
 80 | 
 81 | @mcp.resource("chatgpt://models")
 82 | def available_models() -> str:
 83 |     """List available ChatGPT models"""
 84 |     return json.dumps(AVAILABLE_MODELS, indent=2)
 85 | 
 86 | 
 87 | # Helper function to extract text from response
 88 | def extract_text_from_response(response) -> str:
 89 |     """Extract text from various response structures"""
 90 |     try:
 91 |         # Log response structure for debugging
 92 |         logger.info(f"Response type: {type(response)}")
 93 |         
 94 |         # If response has output_text attribute, use it directly
 95 |         if hasattr(response, 'output_text'):
 96 |             return response.output_text
 97 |         
 98 |         # If response has output attribute (structured response)
 99 |         if hasattr(response, 'output') and response.output:
100 |             # Iterate through output items to find text content
101 |             for output_item in response.output:
102 |                 if hasattr(output_item, 'content'):
103 |                     for content_item in output_item.content:
104 |                         if hasattr(content_item, 'text'):
105 |                             return content_item.text
106 |         
107 |         # Handle case where output might be different structure
108 |         # Return a default message if we can't extract text
109 |         return "Response received but text content could not be extracted. You can view the response in the API logs."
110 |     
111 |     except Exception as e:
112 |         logger.error(f"Error extracting text from response: {str(e)}")
113 |         return "Error extracting response text. Please check the logs for details."
114 | 
115 | 
116 | # Tools
117 | 
118 | @mcp.tool()
119 | async def ask_chatgpt(
120 |     prompt: str,
121 |     model: str = DEFAULT_MODEL,
122 |     temperature: float = DEFAULT_TEMPERATURE,
123 |     max_output_tokens: int = MAX_OUTPUT_TOKENS,
124 |     response_id: Optional[str] = None,
125 |     ctx: Optional[Context] = None,
126 | ) -> str:
127 |     """
128 |     Send a prompt to ChatGPT and get a response
129 |     
130 |     Args:
131 |         prompt: The message to send to ChatGPT
132 |         model: The OpenAI model to use (default: gpt-4o)
133 |         temperature: Sampling temperature (0-2, default: 0.7)
134 |         max_output_tokens: Maximum tokens in response (default: 1000)
135 |         response_id: Optional response ID for continuing a chat
136 |     
137 |     Returns:
138 |         ChatGPT's response
139 |     """
140 |     if ctx:  # ctx may be None when called outside an MCP request
141 |         ctx.info(f"Calling ChatGPT with model: {model}")
141 |     
142 |     try:
143 |         # Format input based on whether this is a new conversation or continuing one
144 |         if response_id:
145 |             # For continuing a conversation
146 |             response = await async_client.responses.create(
147 |                 model=model,
148 |                 previous_response_id=response_id,
149 |                 input=[{"role": "user", "content": prompt}],
150 |                 temperature=temperature,
151 |                 max_output_tokens=max_output_tokens,
152 |             )
153 |         else:
154 |             # For starting a new conversation
155 |             response = await async_client.responses.create(
156 |                 model=model,
157 |                 input=prompt,
158 |                 temperature=temperature,
159 |                 max_output_tokens=max_output_tokens,
160 |             )
161 |         
162 |         # Extract the text content using the helper function
163 |         output_text = extract_text_from_response(response)
164 |         
165 |         # Return response with ID for reference
166 |         return f"{output_text}\n\n(Response ID: {response.id})"
167 |     
168 |     except Exception as e:
169 |         error_message = f"Error calling ChatGPT API: {str(e)}"
170 |         logger.error(error_message)
171 |         return error_message
172 | 
173 | 
174 | @mcp.tool()
175 | async def ask_chatgpt_with_web_search(
176 |     prompt: str,
177 |     model: str = DEFAULT_MODEL,
178 |     temperature: float = DEFAULT_TEMPERATURE,
179 |     max_output_tokens: int = MAX_OUTPUT_TOKENS,
180 |     response_id: Optional[str] = None,
181 |     ctx: Optional[Context] = None,
182 | ) -> str:
183 |     """
184 |     Send a prompt to ChatGPT with web search capability enabled
185 |     
186 |     Args:
187 |         prompt: The message to send to ChatGPT
188 |         model: The OpenAI model to use (default: gpt-4o)
189 |         temperature: Sampling temperature (0-2, default: 0.7)
190 |         max_output_tokens: Maximum tokens in response (default: 1000)
191 |         response_id: Optional response ID for continuing a chat
192 |     
193 |     Returns:
194 |         ChatGPT's response with information from web search
195 |     """
196 |     if ctx:  # ctx may be None when called outside an MCP request
197 |         ctx.info(f"Calling ChatGPT with web search using model: {model}")
197 |     
198 |     try:
199 |         # Define web search tool
200 |         web_search_tool = {"type": "web_search"}
201 |         
202 |         # Format input based on whether this is a new conversation or continuing one
203 |         if response_id:
204 |             # For continuing a conversation
205 |             response = await async_client.responses.create(
206 |                 model=model,
207 |                 previous_response_id=response_id,
208 |                 input=[{"role": "user", "content": prompt}],
209 |                 temperature=temperature,
210 |                 max_output_tokens=max_output_tokens,
211 |                 tools=[web_search_tool],
212 |             )
213 |         else:
214 |             # For starting a new conversation
215 |             response = await async_client.responses.create(
216 |                 model=model,
217 |                 input=prompt,
218 |                 temperature=temperature,
219 |                 max_output_tokens=max_output_tokens,
220 |                 tools=[web_search_tool],
221 |             )
222 |         
223 |         # Log response for debugging
224 |         logger.info(f"Web search response ID: {response.id}")
225 |         logger.info(f"Web search response structure: {dir(response)}")
226 |         
227 |         # Extract the text content using the helper function
228 |         output_text = extract_text_from_response(response)
229 |         
230 |         # Return response with ID for reference
231 |         return f"{output_text}\n\n(Response ID: {response.id})"
232 |     
233 |     except Exception as e:
234 |         error_message = f"Error calling ChatGPT with web search: {str(e)}"
235 |         logger.error(error_message)
236 |         return error_message
237 | 
238 | 
239 | if __name__ == "__main__":
240 |     # Run the server
241 |     mcp.run(transport='stdio')
```
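Both tools return the model's text with `(Response ID: ...)` appended, and the README's conversation examples rely on feeding that ID back on the next call. A hedged sketch of recovering it on the caller's side (`parse_tool_reply` is hypothetical, not part of the server):

```python
# Illustrative sketch, not part of the repository: splits a tool reply of the
# form "<text>\n\n(Response ID: <id>)" back into its parts so the ID can be
# passed as response_id on the next ask_chatgpt call.

import re
from typing import Optional, Tuple

_ID_PATTERN = re.compile(r"\(Response ID: (\S+)\)\s*$")


def parse_tool_reply(reply: str) -> Tuple[str, Optional[str]]:
    """Return (text, response_id); response_id is None if no ID was appended."""
    match = _ID_PATTERN.search(reply)
    if not match:
        return reply, None
    text = reply[: match.start()].rstrip()
    return text, match.group(1)
```

This is the glue a turn-taking loop needs: pass the extracted ID as `response_id` on the following `ask_chatgpt` call to continue the same conversation.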