# Directory Structure

```
├── .gitignore
├── .python-version
├── docs
│   └── openai-websearch-tool.md
├── LICENSE
├── pyproject.toml
├── README.md
├── src
│   └── openai_websearch_mcp
│       ├── __init__.py
│       ├── __main__.py
│       ├── cli.py
│       └── server.py
└── uv.lock
```

# Files

--------------------------------------------------------------------------------
/.python-version:
--------------------------------------------------------------------------------

```
3.10

```

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
__pycache__
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
# OpenAI WebSearch MCP Server 🔍

[![PyPI version](https://badge.fury.io/py/openai-websearch-mcp.svg)](https://badge.fury.io/py/openai-websearch-mcp)
[![Python 3.10+](https://img.shields.io/badge/python-3.10+-blue.svg)](https://www.python.org/downloads/)
[![MCP Compatible](https://img.shields.io/badge/MCP-Compatible-green.svg)](https://modelcontextprotocol.io/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

An advanced MCP server that provides intelligent web search capabilities using OpenAI's reasoning models. Perfect for AI assistants that need up-to-date information with smart reasoning capabilities.

## ✨ Features

- **🧠 Reasoning Model Support**: Full compatibility with OpenAI's latest reasoning models (gpt-5, gpt-5-mini, gpt-5-nano, o3, o4-mini)
- **⚡ Smart Effort Control**: Intelligent `reasoning_effort` defaults based on use case
- **🔄 Multi-Mode Search**: Fast iterations with gpt-5-mini or deep research with gpt-5
- **🌍 Localized Results**: Support for location-based search customization
- **📝 Rich Descriptions**: Complete parameter documentation for easy integration
- **🔧 Flexible Configuration**: Environment variable support for easy deployment

## 🚀 Quick Start

### One-Click Installation for Claude Desktop

```bash
OPENAI_API_KEY=sk-xxxx uvx --with openai-websearch-mcp openai-websearch-mcp-install
```

Replace `sk-xxxx` with your OpenAI API key from the [OpenAI Platform](https://platform.openai.com/).

## ⚙️ Configuration

### Claude Desktop

Add to your `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "openai-websearch-mcp": {
      "command": "uvx",
      "args": ["openai-websearch-mcp"],
      "env": {
        "OPENAI_API_KEY": "your-api-key-here",
        "OPENAI_DEFAULT_MODEL": "gpt-5-mini"
      }
    }
  }
}
```

### Cursor

Add to your MCP settings in Cursor:

1. Open Cursor Settings (`Cmd/Ctrl + ,`)
2. Search for "MCP" or go to Extensions → MCP
3. Add server configuration:

```json
{
  "mcpServers": {
    "openai-websearch-mcp": {
      "command": "uvx",
      "args": ["openai-websearch-mcp"],
      "env": {
        "OPENAI_API_KEY": "your-api-key-here",
        "OPENAI_DEFAULT_MODEL": "gpt-5-mini"
      }
    }
  }
}
```

### Claude Code

Claude Code can register MCP servers with the `claude mcp add` command; reuse the same `command`, `args`, and `env` values shown in the Claude Desktop configuration above.

### Local Development

For local testing, use the absolute path to your virtual environment:

```json
{
  "mcpServers": {
    "openai-websearch-mcp": {
      "command": "/path/to/your/project/.venv/bin/python",
      "args": ["-m", "openai_websearch_mcp"],
      "env": {
        "OPENAI_API_KEY": "your-api-key-here",
        "OPENAI_DEFAULT_MODEL": "gpt-5-mini",
        "PYTHONPATH": "/path/to/your/project/src"
      }
    }
  }
}
```

## 🛠️ Available Tools

### `openai_web_search`

Intelligent web search with reasoning model support.

#### Parameters

| Parameter | Type | Description | Default |
|-----------|------|-------------|---------|
| `input` | `string` | The search query or question to search for | *Required* |
| `model` | `string` | AI model to use. Supports gpt-4o, gpt-4o-mini, gpt-5, gpt-5-mini, gpt-5-nano, o3, o4-mini | `gpt-5-mini` |
| `reasoning_effort` | `string` | Reasoning effort level: minimal, low, medium, high | Smart default |
| `type` | `string` | Web search API version | `web_search_preview` |
| `search_context_size` | `string` | Context amount: low, medium, high | `medium` |
| `user_location` | `object` | Optional location for localized results | `null` |
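
As a sketch, a tool call built from this table might carry arguments like the following (the values are illustrative, not defaults):

```python
# Hypothetical arguments for an openai_web_search tool call,
# following the parameter table above (values are illustrative).
arguments = {
    "input": "latest developments in AI reasoning models",
    "model": "gpt-5",
    "reasoning_effort": "high",
    "type": "web_search_preview",
    "search_context_size": "high",
    "user_location": {
        "type": "approximate",
        "city": "San Francisco",
        "timezone": "America/Los_Angeles",
    },
}
```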

## 💬 Usage Examples

Once configured, simply ask your AI assistant to search for information using natural language:

### Quick Search
> "Search for the latest developments in AI reasoning models using openai_web_search"

### Deep Research  
> "Use openai_web_search with gpt-5 and high reasoning effort to provide a comprehensive analysis of quantum computing breakthroughs"

### Localized Search
> "Search for local tech meetups in San Francisco this week using openai_web_search"

The AI assistant will automatically use the `openai_web_search` tool with appropriate parameters based on your request.

## 🤖 Model Selection Guide

### Quick Multi-Round Searches 🚀
- **Recommended**: `gpt-5-mini` with `reasoning_effort: "low"`
- **Use Case**: Fast iterations, real-time information, multiple quick queries
- **Benefits**: Lower latency, cost-effective for frequent searches

### Deep Research 🔬
- **Recommended**: `gpt-5` with `reasoning_effort: "medium"` or `"high"`
- **Use Case**: Comprehensive analysis, complex topics, detailed investigation
- **Benefits**: Multi-round reasoned results, no need for agent iterations

### Model Comparison

| Model | Reasoning | Default Effort | Best For |
|-------|-----------|----------------|----------|
| `gpt-4o` | ❌ | N/A | Standard search |
| `gpt-4o-mini` | ❌ | N/A | Basic queries |
| `gpt-5-mini` | ✅ | `low` | Fast iterations |
| `gpt-5` | ✅ | `medium` | Deep research |
| `gpt-5-nano` | ✅ | `medium` | Balanced approach |
| `o3` | ✅ | `medium` | Advanced reasoning |
| `o4-mini` | ✅ | `medium` | Efficient reasoning |
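
The smart-default behavior in the table can be sketched as a small helper that mirrors the logic in `server.py`:

```python
from typing import Optional

# Models that accept the reasoning parameter (mirrors server.py).
REASONING_MODELS = {"gpt-5", "gpt-5-mini", "gpt-5-nano", "o3", "o4-mini"}


def default_effort(model: str) -> Optional[str]:
    """Return the default reasoning_effort for a model, or None if unsupported."""
    if model not in REASONING_MODELS:
        return None  # gpt-4o / gpt-4o-mini take no reasoning parameter
    return "low" if model == "gpt-5-mini" else "medium"
```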

## 📦 Installation

### Using uvx (Recommended)

```bash
# Install and run directly
uvx openai-websearch-mcp

# Or install as a persistent tool
uv tool install openai-websearch-mcp
```

### Using pip

```bash
# Install from PyPI
pip install openai-websearch-mcp

# Run the server
python -m openai_websearch_mcp
```

### From Source

```bash
# Clone the repository
git clone https://github.com/yourusername/openai-websearch-mcp.git
cd openai-websearch-mcp

# Install dependencies
uv sync

# Run in development mode
uv run python -m openai_websearch_mcp
```

## 👩‍💻 Development

### Setup Development Environment

```bash
# Clone and setup
git clone https://github.com/yourusername/openai-websearch-mcp.git
cd openai-websearch-mcp

# Create virtual environment and install dependencies
uv sync

# Run tests
uv run python -m pytest

# Install in development mode
uv pip install -e .
```

### Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| `OPENAI_API_KEY` | Your OpenAI API key | *Required* |
| `OPENAI_DEFAULT_MODEL` | Default model to use | `gpt-5-mini` |
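
The server resolves these at call time with a standard `os.getenv` fallback, so an unset variable simply yields the default. A minimal sketch of that resolution (mirroring `server.py`):

```python
import os
from typing import Optional


def resolve_model(requested: Optional[str]) -> str:
    """Use the requested model if given, else the env default, else gpt-5-mini."""
    return requested or os.getenv("OPENAI_DEFAULT_MODEL", "gpt-5-mini")
```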

## 🐛 Debugging

### Using MCP Inspector

```bash
# For uvx installations
npx @modelcontextprotocol/inspector uvx openai-websearch-mcp

# For pip installations
npx @modelcontextprotocol/inspector python -m openai_websearch_mcp
```

### Common Issues

**Issue**: "Unsupported parameter: 'reasoning.effort'"
**Solution**: This occurs when the `reasoning_effort` parameter is sent to a non-reasoning model (gpt-4o, gpt-4o-mini). The server handles this automatically by applying reasoning parameters only to compatible models.

**Issue**: "No module named 'openai_websearch_mcp'"
**Solution**: Ensure you've installed the package correctly and your Python path includes the package location.

## 📄 License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## 🙏 Acknowledgments

- 🤖 Generated with [Claude Code](https://claude.ai/code)
- 🔥 Powered by [OpenAI's Web Search API](https://openai.com)
- 🛠️ Built on the [Model Context Protocol](https://modelcontextprotocol.io/)

---

**Co-Authored-By**: Claude <[email protected]>
```

--------------------------------------------------------------------------------
/src/openai_websearch_mcp/__main__.py:
--------------------------------------------------------------------------------

```python
from openai_websearch_mcp import main

if __name__ == "__main__":
    main()

```

--------------------------------------------------------------------------------
/src/openai_websearch_mcp/__init__.py:
--------------------------------------------------------------------------------

```python
from .server import mcp


def main():
    mcp.run()


if __name__ == "__main__":
    main()

```

--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------

```toml
[project]
name = "openai-websearch-mcp"
version = "0.4.2"
description = "OpenAI web search as an MCP server"
readme = "README.md"
requires-python = ">=3.10"
dependencies = [
    "pydantic_extra_types==2.10.3",
    "pydantic>=2.11.0,<3.0.0",
    "mcp==1.13.1",
    "tzdata==2025.1",
    "openai==1.66.2",
    "typer==0.15.2"
]


[project.scripts]
openai-websearch-mcp = "openai_websearch_mcp:main"
openai-websearch-mcp-install = "openai_websearch_mcp.cli:app"


[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[tool.uv]
dev-dependencies = [
    "pydantic_extra_types==2.10.3",
    "pydantic>=2.11.0,<3.0.0",
    "mcp==1.13.1",
    "tzdata==2025.1",
    "openai==1.66.2",
    "typer==0.15.2"
]

```

--------------------------------------------------------------------------------
/docs/openai-websearch-tool.md:
--------------------------------------------------------------------------------

```markdown

# Web search
This tool searches the web for relevant results to use in a response. Learn more about the web search tool.


## properties
`type` string

> Required
> The type of the web search tool. One of:
> web_search_preview
> web_search_preview_2025_03_11


`search_context_size` string

> Optional
> Defaults to medium
> High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.

`user_location` object or null

> Optional
> Approximate location parameters for the search.


Properties of `user_location`:
`type` string

> Required
> The type of location approximation. Always approximate.

`city` string

> Optional
> Free text input for the city of the user, e.g. San Francisco.

`country` string

> Optional
> The two-letter ISO country code of the user, e.g. US.

`region` string

> Optional
> Free text input for the region of the user, e.g. California.

`timezone` string

> Optional
> The IANA timezone of the user, e.g. America/Los_Angeles.
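
Putting the properties above together, a complete tool definition might look like the following sketch (field values are illustrative):

```python
# Illustrative web search tool definition using the properties documented above.
tool = {
    "type": "web_search_preview",
    "search_context_size": "medium",
    "user_location": {
        "type": "approximate",              # always "approximate"
        "city": "San Francisco",
        "country": "US",                    # two-letter ISO country code
        "region": "California",
        "timezone": "America/Los_Angeles",  # IANA timezone name
    },
}
```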
```

--------------------------------------------------------------------------------
/src/openai_websearch_mcp/server.py:
--------------------------------------------------------------------------------

```python
import os
from typing import Literal, Optional, Annotated

from mcp.server.fastmcp import FastMCP
from openai import OpenAI
from pydantic import BaseModel, Field
from pydantic_extra_types.timezone_name import TimeZoneName

mcp = FastMCP(
    name="OpenAI Web Search",
    instructions="This MCP server provides access to OpenAI's websearch functionality through the Model Context Protocol."
)

class UserLocation(BaseModel):
    type: Literal["approximate"] = "approximate"
    city: str
    country: Optional[str] = None
    region: Optional[str] = None
    timezone: TimeZoneName


@mcp.tool(
    name="openai_web_search",
    description="""OpenAI Web Search with reasoning models. 

For quick multi-round searches: Use 'gpt-5-mini' with reasoning_effort='low' for fast iterations.

For deep research: Use 'gpt-5' with reasoning_effort='medium' or 'high'. 
The result is already multi-round reasoned, so agents don't need continuous iterations.

Supports: gpt-4o (no reasoning), gpt-5/gpt-5-mini/gpt-5-nano, o3/o4-mini (with reasoning).""",
)
def openai_web_search(
    input: Annotated[str, Field(description="The search query or question to search for")],
    model: Annotated[Optional[Literal["gpt-4o", "gpt-4o-mini", "gpt-5", "gpt-5-mini", "gpt-5-nano", "o3", "o4-mini"]], 
                     Field(description="AI model to use. Defaults to OPENAI_DEFAULT_MODEL env var or gpt-5-mini")] = None,
    reasoning_effort: Annotated[Optional[Literal["low", "medium", "high", "minimal"]], 
                                Field(description="Reasoning effort level for supported models (gpt-5, o3, o4-mini). Default: low for gpt-5-mini, medium for others")] = None,
    type: Annotated[Literal["web_search_preview", "web_search_preview_2025_03_11"], 
                    Field(description="Web search API version to use")] = "web_search_preview",
    search_context_size: Annotated[Literal["low", "medium", "high"], 
                                   Field(description="Amount of context to include in search results")] = "medium",
    user_location: Annotated[Optional[UserLocation], 
                            Field(description="Optional user location for localized search results")] = None,
) -> str:
    # Read the default model from the environment; fall back to gpt-5-mini
    if model is None:
        model = os.getenv("OPENAI_DEFAULT_MODEL", "gpt-5-mini")
    
    client = OpenAI()
    
    # Models that support the reasoning parameter
    reasoning_models = ["gpt-5", "gpt-5-mini", "gpt-5-nano", "o3", "o4-mini"]
    
    # Build the request parameters
    request_params = {
        "model": model,
        "tools": [
            {
                "type": type,
                "search_context_size": search_context_size,
                "user_location": user_location.model_dump() if user_location else None,
            }
        ],
        "input": input,
    }
    
    # Apply smart defaults for reasoning models
    if model in reasoning_models:
        if reasoning_effort is None:
            # gpt-5-mini defaults to low; other reasoning models default to medium
            if model == "gpt-5-mini":
                reasoning_effort = "low"  # fast searches
            else:
                reasoning_effort = "medium"  # deep research
        request_params["reasoning"] = {"effort": reasoning_effort}
    
    response = client.responses.create(**request_params)
    return response.output_text


```

--------------------------------------------------------------------------------
/src/openai_websearch_mcp/cli.py:
--------------------------------------------------------------------------------

```python
import getpass
import json
import logging
import os
import platform
import sys
from pathlib import Path
from shutil import which
from typing import Dict, Optional

import typer
from openai import OpenAI


logger = logging.getLogger(__name__)

app = typer.Typer(
    name="openai-websearch-mcp",
    help="openai-websearch-mcp install tools",
    add_completion=False,
    no_args_is_help=True,  # Show help if no args provided
)



def get_claude_config_path() -> Path | None:
    """Get the Claude config directory based on platform."""
    if sys.platform == "win32":
        path = Path(Path.home(), "AppData", "Roaming", "Claude")
    elif sys.platform == "darwin":
        path = Path(Path.home(), "Library", "Application Support", "Claude")
    else:
        return None

    if path.exists():
        return path
    return None


def update_claude_config(
    server_name: str,
    command: str,
    args: list[str],
    *,
    env_vars: Optional[Dict[str, str]] = None,
) -> bool:
    """Add or update a FastMCP server in Claude's configuration.
    """
    config_dir = get_claude_config_path()
    if not config_dir:
        raise RuntimeError(
            "Claude Desktop config directory not found. Please ensure Claude Desktop "
            "is installed and has been run at least once to initialize its configuration."
        )

    config_file = config_dir / "claude_desktop_config.json"
    if not config_file.exists():
        try:
            config_file.write_text("{}")
        except Exception as e:
            logger.error(
                "Failed to create Claude config file",
                extra={
                    "error": str(e),
                    "config_file": str(config_file),
                },
            )
            return False

    try:
        config = json.loads(config_file.read_text())
        if "mcpServers" not in config:
            config["mcpServers"] = {}

        # Always preserve existing env vars and merge with new ones
        if (
            server_name in config["mcpServers"]
            and "env" in config["mcpServers"][server_name]
        ):
            existing_env = config["mcpServers"][server_name]["env"]
            if env_vars:
                # New vars take precedence over existing ones
                env_vars = {**existing_env, **env_vars}
            else:
                env_vars = existing_env

        server_config = {
            "command": command,
            "args": args,
        }

        # Add environment variables if specified
        if env_vars:
            server_config["env"] = env_vars

        config["mcpServers"][server_name] = server_config

        config_file.write_text(json.dumps(config, indent=2))
        logger.info(
            f"Added server '{server_name}' to Claude config",
            extra={"config_file": str(config_file)},
        )
        return True
    except Exception as e:
        logger.error(
            "Failed to update Claude config",
            extra={
                "error": str(e),
                "config_file": str(config_file),
            },
        )
        return False


@app.command()
def install() -> None:
    """Install this server in the Claude Desktop app."""

    name = "openai-websearch-mcp"

    env_dict = {}
    local_bin = Path(Path.home(), ".local", "bin")
    pyenv_shims = Path(Path.home(), ".pyenv", "shims")
    path = os.environ['PATH']
    python_version = platform.python_version()
    python_bin = Path(Path.home(), "Library", "Python", python_version, "bin")
    if sys.platform == "win32":
        env_dict["PATH"] = f"{local_bin};{pyenv_shims};{python_bin};{path}"
    else:
        env_dict["PATH"] = f"{local_bin}:{pyenv_shims}:{python_bin}:{path}"

    api_key = os.environ.get("OPENAI_API_KEY", "")
    while api_key == "":
        api_key = getpass.getpass("Enter your OpenAI API key: ")
        if api_key != "":
            client = OpenAI(api_key=api_key)
            try:
                client.models.list()
            except Exception as e:
                logger.error(f"Failed to authenticate with OpenAI API: {str(e)}")
                api_key = ""

    env_dict["OPENAI_API_KEY"] = api_key
    
    # Add the default model configuration (optional)
    default_model = os.environ.get("OPENAI_DEFAULT_MODEL", "gpt-5-mini")
    env_dict["OPENAI_DEFAULT_MODEL"] = default_model

    uvx_path = which("uvx", path=env_dict["PATH"])
    command = uvx_path if uvx_path else "uvx"
    args = [name]

    if update_claude_config(
        name,
        command,
        args,
        env_vars=env_dict,
    ):
        logger.info(f"Successfully installed {name} in Claude app")
    else:
        logger.error(f"Failed to install {name} in Claude app")
        sys.exit(1)

```