# Directory Structure

```
├── Dockerfile
├── LICENSE
├── preview.png
├── pyproject.toml
├── README.md
├── smithery.yaml
└── src
    └── mcp_server_openai
        ├── __init__.py
        ├── llm.py
        ├── server.py
        └── test_openai.py
```

# Files

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
# OpenAI MCP Server
[![smithery badge](https://smithery.ai/badge/@thadius83/mcp-server-openai)](https://smithery.ai/server/@thadius83/mcp-server-openai)

Query OpenAI models directly from Claude using MCP protocol. This fork adds support for o3-mini and gpt-4o-mini models with improved message handling.

## Cline Auto Install

```
Set up the MCP server from https://github.com/thadius83/mcp-server-openai. 
Use "github.com/thadius83/mcp-server-openai" as the server name in cline_mcp_settings.json.
Analyse the readme and instructions below. Do not create new files in the repo; utilise the findings from pyproject.toml, src/mcp_server_openai/server.py, and src/mcp_server_openai/llm.py.
Once installed, demonstrate the server's capabilities by using one of its tools.

Installation Steps:
# Clone the repository
git clone https://github.com/thadius83/mcp-server-openai
cd mcp-server-openai

# Install the package
pip install .

MCP Settings Configuration: The cline_mcp_settings.json should be configured with:
Correct server name format: "github.com/thadius83/mcp-server-openai"
Python module path structure for the server
PYTHONPATH environment variable pointing to the project directory
OpenAI API key passed as a command line argument

Example configuration:
{
  "mcpServers": {
    "github.com/thadius83/mcp-server-openai": {
      "command": "python",
      "args": [
        "-m",
        "src.mcp_server_openai.server",
        "--openai-api-key",
        "your-openai-api-key"
      ],
      "env": {
        "PYTHONPATH": "/path/to/mcp-server-openai"
      },
      "disabled": false,
      "autoApprove": []
    }
  }
}

Requirements:
Python >= 3.10
OpenAI API key
Dependencies installed via pip (mcp>=0.9.1, openai>=1.0.0, click>=8.0.0, pytest-asyncio)

Available Tools:
Tool Name: ask-openai
Description: Ask OpenAI assistant models a direct question
Models Available:
o3-mini (default)
gpt-4o-mini
Input Schema:
{
  "query": "Your question here",
  "model": "o3-mini" // optional, defaults to o3-mini
}
```

## Features

- Direct integration with OpenAI's API
- Support for multiple models:
  - o3-mini (default): Optimized for concise responses
  - gpt-4o-mini: Enhanced model for more detailed responses
- Configurable message formatting
- Error handling and logging
- Simple interface through MCP protocol

## Installation

### Installing via Smithery

To install OpenAI MCP Server for Claude Desktop automatically via [Smithery](https://smithery.ai/server/@thadius83/mcp-server-openai):

```bash
npx -y @smithery/cli install @thadius83/mcp-server-openai --client claude
```

### Manual Installation

1. **Clone the Repository**:
```bash
git clone https://github.com/thadius83/mcp-server-openai.git
cd mcp-server-openai

# Install dependencies
pip install -e .
```

2. **Configure Claude Desktop**:
   
Add this server to your existing MCP settings configuration. Note: Keep any existing MCP servers in the configuration - just add this one alongside them.

Location:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%/Claude/claude_desktop_config.json`
- Linux: Check your home directory (`~/`) for the default MCP settings location

```json
{
  "mcpServers": {
    // ... keep your existing MCP servers here ...
    
    "github.com/thadius83/mcp-server-openai": {
      "command": "python",
      "args": ["-m", "src.mcp_server_openai.server", "--openai-api-key", "your-key-here"],
      "env": {
        "PYTHONPATH": "/path/to/your/mcp-server-openai"
      }
    }
  }
}
```

3. **Get an OpenAI API Key**:
   - Visit [OpenAI's website](https://openai.com)
   - Create an account or log in
   - Navigate to API settings
   - Generate a new API key
   - Add the key to your configuration file as shown above

4. **Restart Claude**:
   - After updating the configuration, restart Claude for the changes to take effect

## Usage

The server provides a single tool `ask-openai` that can be used to query OpenAI models. You can use it directly in Claude with the use_mcp_tool command:

```xml
<use_mcp_tool>
<server_name>github.com/thadius83/mcp-server-openai</server_name>
<tool_name>ask-openai</tool_name>
<arguments>
{
  "query": "What are the key features of Python's asyncio library?",
  "model": "o3-mini"  // Optional, defaults to o3-mini
}
</arguments>
</use_mcp_tool>
```
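
The same tool can also be exercised outside Claude. A minimal sketch, assuming the `mcp` Python SDK's client API (`StdioServerParameters`, `stdio_client`, `ClientSession`) and the same placeholder key and path as the configuration above:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch the server as a stdio subprocess, mirroring the JSON config above.
    params = StdioServerParameters(
        command="python",
        args=["-m", "src.mcp_server_openai.server", "--openai-api-key", "your-key-here"],
        env={"PYTHONPATH": "/path/to/your/mcp-server-openai"},
    )
    async with stdio_client(params) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            result = await session.call_tool(
                "ask-openai",
                {"query": "What are the key features of Python's asyncio library?"},
            )
            print(result.content)

asyncio.run(main())
```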

### Model Comparison

1. o3-mini (default)
   - Best for: Quick, concise answers
   - Style: Direct and efficient
   - Example response:
     ```
     Python's asyncio provides non-blocking, collaborative multitasking. Key features:
     1. Event Loop – Schedules and runs asynchronous tasks
     2. Coroutines – Functions you can pause and resume
     3. Tasks – Run coroutines concurrently
     4. Futures – Represent future results
     5. Non-blocking I/O – Efficient handling of I/O operations
     ```

2. gpt-4o-mini
   - Best for: More comprehensive explanations
   - Style: Detailed and thorough
   - Example response:
     ```
     Python's asyncio library provides a comprehensive framework for asynchronous programming.
     It includes an event loop for managing tasks, coroutines for writing non-blocking code,
     tasks for concurrent execution, futures for handling future results, and efficient I/O
     operations. The library also provides synchronization primitives and high-level APIs
     for network programming.
     ```

### Response Format

The tool returns responses in a standardized format:
```json
{
  "content": [
    {
      "type": "text",
      "text": "Response from the model..."
    }
  ]
}
```
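
This payload is just the serialized form of the `types.TextContent` objects the server builds (see `server.py` below). A minimal sketch, assuming the `mcp` SDK's pydantic models:

```python
import mcp.types as types

# The server returns a list of TextContent items; the MCP layer serializes
# each one into the {"type": "text", "text": ...} shape shown above.
content = types.TextContent(type="text", text="Response from the model...")
print(content.model_dump())  # includes 'type' and 'text', plus any optional fields
```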

## Troubleshooting

1. **Server Not Found**:
   - Verify the PYTHONPATH in your configuration points to the correct directory
   - Ensure Python and pip are properly installed
   - Try running `python -m src.mcp_server_openai.server --openai-api-key your-key-here` directly to check for errors

2. **Authentication Errors**:
   - Check that your OpenAI API key is valid
   - Ensure the key is correctly passed in the args array
   - Verify there are no extra spaces or characters in the key

3. **Model Errors**:
   - Confirm you're using a supported model (o3-mini or gpt-4o-mini)
   - Check that your query isn't empty
   - Ensure you're not exceeding token limits
   - If errors persist, try the direct API check sketched below
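
If authentication or model errors persist, it can help to bypass MCP entirely and query the API directly. A minimal sketch, assuming the `openai` Python package and the same placeholder key as above:

```python
from openai import OpenAI

# Direct sanity check of the key and default model, outside MCP.
client = OpenAI(api_key="your-key-here")
response = client.chat.completions.create(
    model="o3-mini",
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```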

## Development

```bash
# Install the package (pyproject.toml defines no separate [dev] extra;
# pytest-asyncio, and with it pytest, is a regular dependency)
pip install -e .

# Run tests (the test module lives under src/)
pytest -v src/mcp_server_openai/test_openai.py -s
```

## Changes from Original

- Added support for o3-mini and gpt-4o-mini models
- Improved message formatting
- Removed temperature parameter for better compatibility
- Updated documentation with detailed usage examples
- Added model comparison and response examples
- Enhanced installation instructions
- Added troubleshooting guide

## License

MIT License

```

--------------------------------------------------------------------------------
/src/mcp_server_openai/__init__.py:
--------------------------------------------------------------------------------

```python
from .server import main, serve
from .llm import LLMConnector

__version__ = "0.1.0"
```

--------------------------------------------------------------------------------
/src/mcp_server_openai/test_openai.py:
--------------------------------------------------------------------------------

```python
import os

import pytest

from .llm import LLMConnector

@pytest.mark.asyncio
async def test_ask_openai():
    # This test makes a real API call; skip rather than fail when no key is set.
    api_key = os.environ.get("OPENAI_API_KEY")
    if not api_key:
        pytest.skip("OPENAI_API_KEY not set")
    print("\nTesting OpenAI API call...")
    connector = LLMConnector(api_key)
    response = await connector.ask_openai("Hello, how are you?")
    print(f"OpenAI Response: {response}")
    assert isinstance(response, str)
    assert len(response) > 0
```

--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------

```toml
[project]
name = "mcp-server-openai"
version = "0.1.0"
description = "MCP server for OpenAI API integration"
requires-python = ">=3.10"
dependencies = [
    "mcp>=0.9.1",
    "openai>=1.0.0",
    "click>=8.0.0",
    "pytest-asyncio"
]

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[project.scripts]
mcp-server-openai = "mcp_server_openai.server:main"
```
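
Note: the `[project.scripts]` entry above means that after `pip install .` the server should also be launchable as a plain `mcp-server-openai` command, equivalent to the `python -m src.mcp_server_openai.server` invocation used in the configuration examples (the latter additionally requires `PYTHONPATH` to point at the repository root).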

--------------------------------------------------------------------------------
/smithery.yaml:
--------------------------------------------------------------------------------

```yaml
# Smithery configuration file: https://smithery.ai/docs/config#smitheryyaml

startCommand:
  type: stdio
  configSchema:
    # JSON Schema defining the configuration options for the MCP.
    type: object
    required:
      - openaiApiKey
    properties:
      openaiApiKey:
        type: string
        description: The API key for accessing OpenAI models.
  # A function that produces the CLI command to start the MCP on stdio.
  commandFunction: |-
    (config) => ({command: 'python', args: ['-m', 'src.mcp_server_openai.server', '--openai-api-key', config.openaiApiKey]})

```

--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------

```dockerfile
# Generated by https://smithery.ai. See: https://smithery.ai/docs/config#dockerfile
# Use the official Python image with the specified version
FROM python:3.10-slim

# Set the working directory
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . .

# Install the package and its dependencies
RUN pip install -e .

# Set environment variables
# Note: Supply your actual OpenAI API key at runtime (e.g. docker run -e OPENAI_API_KEY=...)
# or via a build argument or secret manager rather than baking it into the image.
ENV OPENAI_API_KEY=your-openai-api-key
ENV PYTHONPATH=/app

# Command to run the MCP server. The --openai-api-key flag is omitted here so the
# server reads the OPENAI_API_KEY environment variable instead.
ENTRYPOINT ["python", "-m", "src.mcp_server_openai.server"]

```

--------------------------------------------------------------------------------
/src/mcp_server_openai/llm.py:
--------------------------------------------------------------------------------

```python
import logging
from openai import AsyncOpenAI

logger = logging.getLogger(__name__)

class LLMConnector:
    def __init__(self, openai_api_key: str):
        self.client = AsyncOpenAI(api_key=openai_api_key)

    async def ask_openai(self, query: str, model: str = "o3-mini") -> str:
        try:
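            # The "developer" role targets o-series reasoning models such as
            # o3-mini, while the "system" role targets gpt-4o-style models;
            # both are included so each model family receives its instructions.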
            messages = [
                {
                    "role": "developer",
                    "content": "You are a helpful assistant that provides clear and accurate technical responses."
                },
                {
                    "role": "system",
                    "content": "Ensure responses are well-structured and technically precise."
                },
                {
                    "role": "user",
                    "content": query
                }
            ]
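            # temperature is intentionally omitted for compatibility across
            # both models (see "Changes from Original" in the README).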
            response = await self.client.chat.completions.create(
                messages=messages,
                model=model
            )
            return response.choices[0].message.content
        except Exception as e:
            logger.error(f"Failed to query OpenAI: {str(e)}")
            raise

```
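
`LLMConnector` can also be used on its own, without the MCP server. A minimal sketch, assuming the package is installed (`pip install -e .`) and a valid key is set in `OPENAI_API_KEY`:

```python
import asyncio
import os

from mcp_server_openai.llm import LLMConnector

async def main():
    # Build the connector directly and ask a one-off question.
    connector = LLMConnector(os.environ["OPENAI_API_KEY"])
    answer = await connector.ask_openai("Summarize PEP 8 in one sentence.", model="o3-mini")
    print(answer)

asyncio.run(main())
```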

--------------------------------------------------------------------------------
/src/mcp_server_openai/server.py:
--------------------------------------------------------------------------------

```python
import asyncio
import logging
import sys

import click
import mcp
import mcp.types as types
from mcp.server import Server, NotificationOptions
from mcp.server.models import InitializationOptions

from .llm import LLMConnector

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)

def serve(openai_api_key: str) -> Server:
    server = Server("openai-server")
    connector = LLMConnector(openai_api_key)

    @server.list_tools()
    async def handle_list_tools() -> list[types.Tool]:
        return [
            types.Tool(
                name="ask-openai",
                description="Ask my assistant models a direct question",
                inputSchema={
                    "type": "object",
                    "properties": {
                        "query": {"type": "string", "description": "Ask assistant"},
                        "model": {"type": "string", "default": "o3-mini", "enum": ["o3-mini", "gpt-4o-mini"]}
                    },
                    "required": ["query"]
                }
            )
        ]

    @server.call_tool()
    async def handle_tool_call(name: str, arguments: dict | None) -> list[types.TextContent]:
        try:
            if not arguments:
                raise ValueError("No arguments provided")

            if name == "ask-openai":
                response = await connector.ask_openai(
                    query=arguments["query"],
                    model=arguments.get("model", "o3-mini")
                )
                return [types.TextContent(type="text", text=response)]

            raise ValueError(f"Unknown tool: {name}")
        except Exception as e:
            logger.error(f"Tool call failed: {str(e)}")
            return [types.TextContent(type="text", text=f"Error: {str(e)}")]

    return server

@click.command()
@click.option("--openai-api-key", envvar="OPENAI_API_KEY", required=True)
def main(openai_api_key: str):
    try:
        async def _run():
            async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
                server = serve(openai_api_key)
                await server.run(
                    read_stream, write_stream,
                    InitializationOptions(
                        server_name="openai-server",
                        server_version="0.1.0",
                        capabilities=server.get_capabilities(
                            notification_options=NotificationOptions(),
                            experimental_capabilities={}
                        )
                    )
                )
        asyncio.run(_run())
    except KeyboardInterrupt:
        logger.info("Server stopped by user")
    except Exception as e:
        logger.exception("Server failed")
        sys.exit(1)

if __name__ == "__main__":
    main()

```