# Directory Structure

```
├── .gitignore
├── .python-version
├── docs
│   └── openai-websearch-tool.md
├── LICENSE
├── pyproject.toml
├── README.md
├── src
│   └── openai_websearch_mcp
│       ├── __init__.py
│       ├── __main__.py
│       ├── cli.py
│       └── server.py
└── uv.lock
```

# Files

--------------------------------------------------------------------------------
/.python-version:
--------------------------------------------------------------------------------

```
1 | 3.10
2 | 
```

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
1 | __pycache__
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
  1 | # OpenAI WebSearch MCP Server 🔍
  2 | 
  3 | [![PyPI version](https://badge.fury.io/py/openai-websearch-mcp.svg)](https://badge.fury.io/py/openai-websearch-mcp)
  4 | [![Python 3.10+](https://img.shields.io/badge/python-3.10+-blue.svg)](https://www.python.org/downloads/)
  5 | [![MCP Compatible](https://img.shields.io/badge/MCP-Compatible-green.svg)](https://modelcontextprotocol.io/)
  6 | [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
  7 | 
  8 | An advanced MCP server that provides intelligent web search capabilities using OpenAI's reasoning models. Perfect for AI assistants that need up-to-date information with smart reasoning capabilities.
  9 | 
 10 | ## ✨ Features
 11 | 
 12 | - **🧠 Reasoning Model Support**: Full compatibility with OpenAI's latest reasoning models (gpt-5, gpt-5-mini, gpt-5-nano, o3, o4-mini)
 13 | - **⚡ Smart Effort Control**: Intelligent `reasoning_effort` defaults based on use case
 14 | - **🔄 Multi-Mode Search**: Fast iterations with gpt-5-mini or deep research with gpt-5
 15 | - **🌍 Localized Results**: Support for location-based search customization
 16 | - **📝 Rich Descriptions**: Complete parameter documentation for easy integration
 17 | - **🔧 Flexible Configuration**: Environment variable support for easy deployment
 18 | 
 19 | ## 🚀 Quick Start
 20 | 
 21 | ### One-Click Installation for Claude Desktop
 22 | 
 23 | ```bash
 24 | OPENAI_API_KEY=sk-xxxx uvx --with openai-websearch-mcp openai-websearch-mcp-install
 25 | ```
 26 | 
 27 | Replace `sk-xxxx` with your OpenAI API key from the [OpenAI Platform](https://platform.openai.com/).
 28 | 
 29 | ## ⚙️ Configuration
 30 | 
 31 | ### Claude Desktop
 32 | 
 33 | Add to your `claude_desktop_config.json`:
 34 | 
 35 | ```json
 36 | {
 37 |   "mcpServers": {
 38 |     "openai-websearch-mcp": {
 39 |       "command": "uvx",
 40 |       "args": ["openai-websearch-mcp"],
 41 |       "env": {
 42 |         "OPENAI_API_KEY": "your-api-key-here",
 43 |         "OPENAI_DEFAULT_MODEL": "gpt-5-mini"
 44 |       }
 45 |     }
 46 |   }
 47 | }
 48 | ```
 49 | 
 50 | ### Cursor
 51 | 
 52 | Add to your MCP settings in Cursor:
 53 | 
 54 | 1. Open Cursor Settings (`Cmd/Ctrl + ,`)
 55 | 2. Search for "MCP" or go to Extensions → MCP
 56 | 3. Add server configuration:
 57 | 
 58 | ```json
 59 | {
 60 |   "mcpServers": {
 61 |     "openai-websearch-mcp": {
 62 |       "command": "uvx",
 63 |       "args": ["openai-websearch-mcp"],
 64 |       "env": {
 65 |         "OPENAI_API_KEY": "your-api-key-here",
 66 |         "OPENAI_DEFAULT_MODEL": "gpt-5-mini"
 67 |       }
 68 |     }
 69 |   }
 70 | }
 71 | ```
 72 | 
 73 | ### Claude Code
 74 | 
 75 | Claude Code automatically detects MCP servers configured for Claude Desktop. Use the same configuration as above for Claude Desktop.
 76 | 
 77 | ### Local Development
 78 | 
 79 | For local testing, use the absolute path to your virtual environment:
 80 | 
 81 | ```json
 82 | {
 83 |   "mcpServers": {
 84 |     "openai-websearch-mcp": {
 85 |       "command": "/path/to/your/project/.venv/bin/python",
 86 |       "args": ["-m", "openai_websearch_mcp"],
 87 |       "env": {
 88 |         "OPENAI_API_KEY": "your-api-key-here",
 89 |         "OPENAI_DEFAULT_MODEL": "gpt-5-mini",
 90 |         "PYTHONPATH": "/path/to/your/project/src"
 91 |       }
 92 |     }
 93 |   }
 94 | }
 95 | ```
 96 | 
 97 | ## 🛠️ Available Tools
 98 | 
 99 | ### `openai_web_search`
100 | 
101 | Intelligent web search with reasoning model support.
102 | 
103 | #### Parameters
104 | 
105 | | Parameter | Type | Description | Default |
106 | |-----------|------|-------------|---------|
107 | | `input` | `string` | The search query or question to search for | *Required* |
108 | | `model` | `string` | AI model to use. Supports gpt-4o, gpt-4o-mini, gpt-5, gpt-5-mini, gpt-5-nano, o3, o4-mini | `gpt-5-mini` |
109 | | `reasoning_effort` | `string` | Reasoning effort level: low, medium, high, minimal | Smart default |
110 | | `type` | `string` | Web search API version | `web_search_preview` |
111 | | `search_context_size` | `string` | Context amount: low, medium, high | `medium` |
112 | | `user_location` | `object` | Optional location for localized results | `null` |
113 | 
114 | ## 💬 Usage Examples
115 | 
116 | Once configured, simply ask your AI assistant to search for information using natural language:
117 | 
118 | ### Quick Search
119 | > "Search for the latest developments in AI reasoning models using openai_web_search"
120 | 
121 | ### Deep Research  
122 | > "Use openai_web_search with gpt-5 and high reasoning effort to provide a comprehensive analysis of quantum computing breakthroughs"
123 | 
124 | ### Localized Search
125 | > "Search for local tech meetups in San Francisco this week using openai_web_search"
126 | 
127 | The AI assistant will automatically use the `openai_web_search` tool with appropriate parameters based on your request.
128 | 
129 | ## 🤖 Model Selection Guide
130 | 
131 | ### Quick Multi-Round Searches 🚀
132 | - **Recommended**: `gpt-5-mini` with `reasoning_effort: "low"`
133 | - **Use Case**: Fast iterations, real-time information, multiple quick queries
134 | - **Benefits**: Lower latency, cost-effective for frequent searches
135 | 
136 | ### Deep Research 🔬
137 | - **Recommended**: `gpt-5` with `reasoning_effort: "medium"` or `"high"`
138 | - **Use Case**: Comprehensive analysis, complex topics, detailed investigation
139 | - **Benefits**: Multi-round reasoned results, no need for agent iterations
140 | 
141 | ### Model Comparison
142 | 
143 | | Model | Reasoning | Default Effort | Best For |
144 | |-------|-----------|----------------|----------|
145 | | `gpt-4o` | ❌ | N/A | Standard search |
146 | | `gpt-4o-mini` | ❌ | N/A | Basic queries |
147 | | `gpt-5-mini` | ✅ | `low` | Fast iterations |
148 | | `gpt-5` | ✅ | `medium` | Deep research |
149 | | `gpt-5-nano` | ✅ | `medium` | Balanced approach |
150 | | `o3` | ✅ | `medium` | Advanced reasoning |
151 | | `o4-mini` | ✅ | `medium` | Efficient reasoning |
152 | 
153 | ## 📦 Installation
154 | 
155 | ### Using uvx (Recommended)
156 | 
157 | ```bash
158 | # Install and run directly
159 | uvx openai-websearch-mcp
160 | 
 161 | # Or install globally as a tool
 162 | uv tool install openai-websearch-mcp
163 | ```
164 | 
165 | ### Using pip
166 | 
167 | ```bash
168 | # Install from PyPI
169 | pip install openai-websearch-mcp
170 | 
171 | # Run the server
172 | python -m openai_websearch_mcp
173 | ```
174 | 
175 | ### From Source
176 | 
177 | ```bash
178 | # Clone the repository
179 | git clone https://github.com/yourusername/openai-websearch-mcp.git
180 | cd openai-websearch-mcp
181 | 
182 | # Install dependencies
183 | uv sync
184 | 
185 | # Run in development mode
186 | uv run python -m openai_websearch_mcp
187 | ```
188 | 
189 | ## 👩‍💻 Development
190 | 
191 | ### Setup Development Environment
192 | 
193 | ```bash
194 | # Clone and setup
195 | git clone https://github.com/yourusername/openai-websearch-mcp.git
196 | cd openai-websearch-mcp
197 | 
198 | # Create virtual environment and install dependencies
199 | uv sync
200 | 
201 | # Run tests
202 | uv run python -m pytest
203 | 
204 | # Install in development mode
205 | uv pip install -e .
206 | ```
207 | 
208 | ### Environment Variables
209 | 
210 | | Variable | Description | Default |
211 | |----------|-------------|---------|
212 | | `OPENAI_API_KEY` | Your OpenAI API key | *Required* |
213 | | `OPENAI_DEFAULT_MODEL` | Default model to use | `gpt-5-mini` |
214 | 
215 | ## 🐛 Debugging
216 | 
217 | ### Using MCP Inspector
218 | 
219 | ```bash
220 | # For uvx installations
221 | npx @modelcontextprotocol/inspector uvx openai-websearch-mcp
222 | 
223 | # For pip installations
224 | npx @modelcontextprotocol/inspector python -m openai_websearch_mcp
225 | ```
226 | 
227 | ### Common Issues
228 | 
229 | **Issue**: "Unsupported parameter: 'reasoning.effort'"
230 | **Solution**: This occurs when using non-reasoning models (gpt-4o, gpt-4o-mini) with reasoning_effort parameter. The server automatically handles this by only applying reasoning parameters to compatible models.
231 | 
232 | **Issue**: "No module named 'openai_websearch_mcp'"
233 | **Solution**: Ensure you've installed the package correctly and your Python path includes the package location.
234 | 
235 | ## 📄 License
236 | 
237 | This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
238 | 
239 | ## 🙏 Acknowledgments
240 | 
241 | - 🤖 Generated with [Claude Code](https://claude.ai/code)
242 | - 🔥 Powered by [OpenAI's Web Search API](https://openai.com)
243 | - 🛠️ Built on the [Model Context Protocol](https://modelcontextprotocol.io/)
244 | 
245 | ---
246 | 
247 | **Co-Authored-By**: Claude <[email protected]>
```
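
For reference, the search described in the README reduces to a single OpenAI Responses API payload. The helper below is a hypothetical sketch (not part of the package) that mirrors the documented defaults: `gpt-5-mini` gets low effort for quick searches, other reasoning models get medium, and gpt-4o models get no `reasoning` block at all.

```python
def build_search_request(query: str, model: str = "gpt-5-mini",
                         search_context_size: str = "medium") -> dict:
    """Build a Responses API payload matching the README defaults.

    Hypothetical helper for illustration; the real server assembles an
    equivalent dict and passes it to client.responses.create(**params).
    """
    params = {
        "model": model,
        "input": query,
        "tools": [{
            "type": "web_search_preview",
            "search_context_size": search_context_size,
        }],
    }
    # Only reasoning models accept the reasoning-effort hint; sending it
    # to gpt-4o models triggers the "Unsupported parameter" error noted
    # under Common Issues.
    if model in ("gpt-5", "gpt-5-mini", "gpt-5-nano", "o3", "o4-mini"):
        params["reasoning"] = {"effort": "low" if model == "gpt-5-mini" else "medium"}
    return params
```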

--------------------------------------------------------------------------------
/src/openai_websearch_mcp/__main__.py:
--------------------------------------------------------------------------------

```python
1 | from openai_websearch_mcp import main
2 | 
3 | main()
4 | 
```

--------------------------------------------------------------------------------
/src/openai_websearch_mcp/__init__.py:
--------------------------------------------------------------------------------

```python
 1 | from .server import mcp
 2 | 
 3 | 
 4 | def main():
 5 |     mcp.run()
 6 | 
 7 | 
 8 | if __name__ == "__main__":
 9 |     main()
10 | 
```

--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------

```toml
 1 | [project]
 2 | name = "openai-websearch-mcp"
 3 | version = "0.4.2"
 4 | description = "OpenAI web search as an MCP server"
 5 | readme = "README.md"
 6 | requires-python = ">=3.10"
 7 | dependencies = [
 8 |     "pydantic_extra_types==2.10.3",
 9 |     "pydantic>=2.11.0,<3.0.0",
10 |     "mcp==1.13.1",
11 |     "tzdata==2025.1",
12 |     "openai==1.66.2",
13 |     "typer==0.15.2"
14 | ]
15 | 
16 | 
17 | [project.scripts]
18 | openai-websearch-mcp = "openai_websearch_mcp:main"
19 | openai-websearch-mcp-install = "openai_websearch_mcp.cli:app"
20 | 
21 | 
22 | [build-system]
23 | requires = ["hatchling"]
24 | build-backend = "hatchling.build"
25 | 
26 | [tool.uv]
27 | dev-dependencies = [
28 |     "pydantic_extra_types==2.10.3",
29 |     "pydantic>=2.11.0,<3.0.0",
30 |     "mcp==1.13.1",
31 |     "tzdata==2025.1",
32 |     "openai==1.66.2",
33 |     "typer==0.15.2"
34 | ]
35 | 
```

--------------------------------------------------------------------------------
/docs/openai-websearch-tool.md:
--------------------------------------------------------------------------------

```markdown
 1 | 
 2 | # Web search
 3 | This tool searches the web for relevant results to use in a response. Learn more about the web search tool.
 4 | 
 5 | 
 6 | ## properties
 7 | `type` string
 8 | 
 9 | > Required
10 | > The type of the web search tool. One of:
11 | > web_search_preview
12 | > web_search_preview_2025_03_11
13 | 
14 | 
15 | `search_context_size` string
16 | 
17 | > Optional
18 | > Defaults to medium
19 | > High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
20 | 
21 | `user_location` object or null
22 | 
23 | > Optional
24 | > Approximate location parameters for the search.
25 | 
26 | 
27 | properties of `user_location`   
28 | `type` string
29 | 
30 | > Required
31 | > The type of location approximation. Always approximate.
32 | 
33 | `city` string
34 | 
35 | > Optional
36 | > Free text input for the city of the user, e.g. San Francisco.
37 | 
38 | `country` string
39 | 
40 | > Optional
41 | > The two-letter ISO country code of the user, e.g. US.
42 | 
43 | `region` string
44 | 
45 | > Optional
46 | > Free text input for the region of the user, e.g. California.
47 | 
48 | `timezone` string
49 | 
50 | > Optional
51 | > The IANA timezone of the user, e.g. America/Los_Angeles.
```
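
The `user_location` object above can be assembled with a small helper. This is a stdlib-only sketch using dataclasses (the server itself models this with pydantic); the class name and `to_payload` method are illustrative, not part of any API. Unset optional fields are dropped before sending:

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class ApproximateLocation:
    """Mirrors the user_location schema above; illustration only."""
    city: Optional[str] = None      # free text, e.g. "San Francisco"
    country: Optional[str] = None   # two-letter ISO code, e.g. "US"
    region: Optional[str] = None    # free text, e.g. "California"
    timezone: Optional[str] = None  # IANA name, e.g. "America/Los_Angeles"
    type: str = "approximate"       # required; always "approximate"

    def to_payload(self) -> dict:
        # Omit optional fields the caller never set.
        return {k: v for k, v in asdict(self).items() if v is not None}

loc = ApproximateLocation(city="San Francisco", country="US")
```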

--------------------------------------------------------------------------------
/src/openai_websearch_mcp/server.py:
--------------------------------------------------------------------------------

```python
 1 | import os
 2 | from typing import Annotated, Literal, Optional
 3 | 
 4 | from mcp.server.fastmcp import FastMCP
 5 | from openai import OpenAI
 6 | from pydantic import BaseModel, Field
 7 | from pydantic_extra_types.timezone_name import TimeZoneName
 8 | 
 9 | mcp = FastMCP(
10 |     name="OpenAI Web Search",
11 |     instructions="This MCP server provides access to OpenAI's web search functionality through the Model Context Protocol."
12 | )
13 | 
14 | class UserLocation(BaseModel):
15 |     type: Literal["approximate"] = "approximate"
16 |     city: str
17 |     country: Optional[str] = None
18 |     region: Optional[str] = None
19 |     timezone: TimeZoneName
20 | 
21 | 
22 | @mcp.tool(
23 |     name="openai_web_search",
24 |     description="""OpenAI Web Search with reasoning models.
25 | 
26 | For quick multi-round searches: Use 'gpt-5-mini' with reasoning_effort='low' for fast iterations.
27 | 
28 | For deep research: Use 'gpt-5' with reasoning_effort='medium' or 'high'.
29 | The result is already multi-round reasoned, so agents don't need continuous iterations.
30 | 
31 | Supports: gpt-4o (no reasoning), gpt-5/gpt-5-mini/gpt-5-nano, o3/o4-mini (with reasoning).""",
32 | )
33 | def openai_web_search(
34 |     input: Annotated[str, Field(description="The search query or question to search for")],
35 |     model: Annotated[Optional[Literal["gpt-4o", "gpt-4o-mini", "gpt-5", "gpt-5-mini", "gpt-5-nano", "o3", "o4-mini"]],
36 |                      Field(description="AI model to use. Defaults to OPENAI_DEFAULT_MODEL env var or gpt-5-mini")] = None,
37 |     reasoning_effort: Annotated[Optional[Literal["low", "medium", "high", "minimal"]],
38 |                                 Field(description="Reasoning effort level for supported models (gpt-5, o3, o4-mini). Default: low for gpt-5-mini, medium for others")] = None,
39 |     type: Annotated[Literal["web_search_preview", "web_search_preview_2025_03_11"],
40 |                     Field(description="Web search API version to use")] = "web_search_preview",
41 |     search_context_size: Annotated[Literal["low", "medium", "high"],
42 |                                    Field(description="Amount of context to include in search results")] = "medium",
43 |     user_location: Annotated[Optional[UserLocation],
44 |                             Field(description="Optional user location for localized search results")] = None,
45 | ) -> str:
46 |     # Read the default model from the environment, falling back to gpt-5-mini
47 |     if model is None:
48 |         model = os.getenv("OPENAI_DEFAULT_MODEL", "gpt-5-mini")
49 | 
50 |     client = OpenAI()
51 | 
52 |     # Models that accept the reasoning parameter
53 |     reasoning_models = ["gpt-5", "gpt-5-mini", "gpt-5-nano", "o3", "o4-mini"]
54 | 
55 |     # Build the request parameters
56 |     request_params = {
57 |         "model": model,
58 |         "tools": [
59 |             {
60 |                 "type": type,
61 |                 "search_context_size": search_context_size,
62 |                 "user_location": user_location.model_dump() if user_location else None,
63 |             }
64 |         ],
65 |         "input": input,
66 |     }
67 | 
68 |     # Apply smart reasoning-effort defaults, but only for reasoning models
69 |     if model in reasoning_models:
70 |         if reasoning_effort is None:
71 |             # gpt-5-mini defaults to low (fast searches);
72 |             # other reasoning models default to medium (deep research)
73 |             reasoning_effort = "low" if model == "gpt-5-mini" else "medium"
74 |         request_params["reasoning"] = {"effort": reasoning_effort}
75 | 
76 |     response = client.responses.create(**request_params)
77 |     return response.output_text
78 | 
```

--------------------------------------------------------------------------------
/src/openai_websearch_mcp/cli.py:
--------------------------------------------------------------------------------

```python
  1 | import json
  2 | import sys
  3 | import getpass
  4 | from pathlib import Path
  5 | from typing import Optional, Dict
  6 | import logging
  8 | import os
  9 | import platform
 10 | import typer
 11 | from shutil import which
 12 | from openai import OpenAI
 13 | 
 14 | 
 15 | logger = logging.getLogger(__name__)
 16 | 
 17 | app = typer.Typer(
 18 |     name="openai-websearch-mcp",
 19 |     help="openai-websearch-mcp install tools",
 20 |     add_completion=False,
 21 |     no_args_is_help=True,  # Show help if no args provided
 22 | )
 23 | 
 24 | 
 25 | 
 26 | def get_claude_config_path() -> Path | None:
 27 |     """Get the Claude config directory based on platform."""
 28 |     if sys.platform == "win32":
 29 |         path = Path(Path.home(), "AppData", "Roaming", "Claude")
 30 |     elif sys.platform == "darwin":
 31 |         path = Path(Path.home(), "Library", "Application Support", "Claude")
 32 |     else:
 33 |         return None
 34 | 
 35 |     if path.exists():
 36 |         return path
 37 |     return None
 38 | 
 39 | 
 40 | def update_claude_config(
 41 |     server_name: str,
 42 |     command: str,
 43 |     args: list[str],
 44 |     *,
 45 |     env_vars: Optional[Dict[str, str]] = None,
 46 | ) -> bool:
 47 |     """Add or update a FastMCP server in Claude's configuration.
 48 |     """
 49 |     config_dir = get_claude_config_path()
 50 |     if not config_dir:
 51 |         raise RuntimeError(
 52 |             "Claude Desktop config directory not found. Please ensure Claude Desktop "
 53 |             "is installed and has been run at least once to initialize its configuration."
 54 |         )
 55 | 
 56 |     config_file = config_dir / "claude_desktop_config.json"
 57 |     if not config_file.exists():
 58 |         try:
 59 |             config_file.write_text("{}")
 60 |         except Exception as e:
 61 |             logger.error(
 62 |                 "Failed to create Claude config file",
 63 |                 extra={
 64 |                     "error": str(e),
 65 |                     "config_file": str(config_file),
 66 |                 },
 67 |             )
 68 |             return False
 69 | 
 70 |     try:
 71 |         config = json.loads(config_file.read_text())
 72 |         if "mcpServers" not in config:
 73 |             config["mcpServers"] = {}
 74 | 
 75 |         # Always preserve existing env vars and merge with new ones
 76 |         if (
 77 |             server_name in config["mcpServers"]
 78 |             and "env" in config["mcpServers"][server_name]
 79 |         ):
 80 |             existing_env = config["mcpServers"][server_name]["env"]
 81 |             if env_vars:
 82 |                 # New vars take precedence over existing ones
 83 |                 env_vars = {**existing_env, **env_vars}
 84 |             else:
 85 |                 env_vars = existing_env
 86 | 
 87 |         server_config = {
 88 |             "command": command,
 89 |             "args": args,
 90 |         }
 91 | 
 92 |         # Add environment variables if specified
 93 |         if env_vars:
 94 |             server_config["env"] = env_vars
 95 | 
 96 |         config["mcpServers"][server_name] = server_config
 97 | 
 98 |         config_file.write_text(json.dumps(config, indent=2))
 99 |         logger.info(
100 |             f"Added server '{server_name}' to Claude config",
101 |             extra={"config_file": str(config_file)},
102 |         )
103 |         return True
104 |     except Exception as e:
105 |         logger.error(
106 |             "Failed to update Claude config",
107 |             extra={
108 |                 "error": str(e),
109 |                 "config_file": str(config_file),
110 |             },
111 |         )
112 |         return False
113 | 
114 | 
115 | @app.command()
116 | def install() -> None:
117 |     """Install a current server in the Claude desktop app.
118 |     """
119 | 
120 |     name = "openai-websearch-mcp"
121 | 
122 |     env_dict = {}
123 |     local_bin = Path(Path.home(), ".local", "bin")
124 |     pyenv_shims = Path(Path.home(), ".pyenv", "shims")
125 |     path = os.environ['PATH']
126 |     python_version = platform.python_version()
127 |     python_bin = Path(Path.home(), "Library", "Python", python_version, "bin")
128 |     if sys.platform == "win32":
129 |         env_dict["PATH"] = f"{local_bin};{pyenv_shims};{python_bin};{path}"
130 |     else:
131 |         env_dict["PATH"] = f"{local_bin}:{pyenv_shims}:{python_bin}:{path}"
132 | 
133 |     api_key = os.environ.get("OPENAI_API_KEY", "")
134 |     while api_key == "":
135 |         api_key = getpass.getpass("Enter your OpenAI API key: ")
136 |         if api_key != "":
137 |             client = OpenAI(api_key=api_key)
138 |             try:
139 |                 client.models.list()
140 |             except Exception as e:
141 |                 logger.error(f"Failed to authenticate with OpenAI API: {str(e)}")
142 |                 api_key = ""
143 | 
144 |     env_dict["OPENAI_API_KEY"] = api_key
145 |     
146 |     # Add the default model configuration (optional)
147 |     default_model = os.environ.get("OPENAI_DEFAULT_MODEL", "gpt-5-mini")
148 |     env_dict["OPENAI_DEFAULT_MODEL"] = default_model
149 | 
150 |     uv = which('uvx', path=env_dict['PATH'])
151 |     command = uv if uv else "uvx"
152 |     args = [name]
153 | 
154 |     # print("------------update_claude_config", command, args, env_dict)
155 | 
156 |     if update_claude_config(
157 |         name,
158 |         command,
159 |         args,
160 |         env_vars=env_dict,
161 |     ):
162 |         logger.info(f"Successfully installed {name} in Claude app")
163 |     else:
164 |         logger.error(f"Failed to install {name} in Claude app")
165 |         sys.exit(1)
166 | 
```
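
The env-merge rule inside `update_claude_config` is easy to get backwards, so here it is isolated as a quick reference. `merge_env` is a hypothetical helper (not exported by the package) that reproduces the behavior: existing variables survive, and newly supplied ones win on conflict.

```python
from typing import Optional

def merge_env(existing: dict, new: Optional[dict]) -> dict:
    """Merge env vars the way update_claude_config does:
    keep existing entries, let new entries take precedence."""
    if not new:
        return dict(existing)
    # Later keys in a dict merge override earlier ones.
    return {**existing, **new}
```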