# Directory Structure
```
├── .env
├── main.py
├── pyproject.toml
├── README.md
└── uv.lock
```
# Files
--------------------------------------------------------------------------------
/.env:
--------------------------------------------------------------------------------
```
1 | SERPER_API_KEY=your_serper_api_key_here
2 |
```
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
```markdown
1 | # 🤖 Claude AI Documentation Assistant 📚
2 |
3 | <div align="center">
4 |
5 | 
6 |
7 | *A powerful MCP server that supercharges Claude with documentation search capabilities*
8 |
9 | [![Python 3.13+](https://img.shields.io/badge/python-3.13+-blue.svg)](https://www.python.org/downloads/)
10 | [![License: MIT](https://img.shields.io/badge/license-MIT-green.svg)](https://opensource.org/licenses/MIT)
11 | [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](http://makeapullrequest.com)
12 |
13 | </div>
14 |
15 | ## ✨ Features
16 |
17 | - 🔍 **Smart Documentation Search** - Search across multiple AI/ML library documentation
18 | - 🧠 **Claude Integration** - Seamless connection with Claude's advanced reasoning capabilities
19 | - 🌐 **Intelligent Web Search** - Leverages Serper API for targeted documentation lookup
20 | - 💨 **Fast Response Times** - Optimized for quick retrieval and processing
21 | - 🧩 **Extendable Architecture** - Easily add more documentation sources
22 |
23 | ## 📋 Prerequisites
24 |
25 | - 🐍 Python 3.13 or higher (see `pyproject.toml`)
26 | - 🔑 Claude Pro subscription
27 | - 🔐 Serper API key ([Get one here](https://serper.dev))
28 | - 💻 Claude Desktop application
29 |
30 | ## 🚀 Quick Start
31 |
32 | ### 1️⃣ Installation
33 |
34 | ```bash
35 | # Clone the repository
36 | git clone https://github.com/your-username/claude-docs-assistant.git
37 | cd claude-docs-assistant
38 |
39 | # Install dependencies (this project uses uv; see https://docs.astral.sh/uv/)
40 | uv sync
41 |
42 | # Or, with pip in a virtual environment of your own:
43 | python -m venv venv && source venv/bin/activate  # On Windows: venv\Scripts\activate
44 | pip install "mcp[cli]" httpx python-dotenv beautifulsoup4
45 | ```
46 |
47 | ### 2️⃣ Configuration
48 |
49 | Create a `.env` file in the project root with your API keys:
50 |
51 | ```
52 | SERPER_API_KEY=your_serper_api_key_here
53 | ```
54 |
55 | ### 3️⃣ Start the MCP Server
56 |
57 | ```bash
58 | python main.py
59 | ```
60 |
61 | The server communicates over stdio, so it prints nothing and simply waits for a client. Running it manually like this only confirms that it starts without errors; in normal use, the Claude Desktop App launches it for you.
62 |
63 | ### 4️⃣ Connect Claude Desktop App
64 |
65 | 1. 📱 Open the Claude Desktop App
66 | 2. ⚙️ Go to Settings → Developer → Edit Config to open `claude_desktop_config.json`
67 | 3. 🖥️ Under `mcpServers`, add an entry whose `command` runs your Python interpreter and whose `args` contain the absolute path to `main.py`
68 | 4. 💾 Save the config and restart the Claude Desktop App
69 | 5. 🔨 Click the tools (hammer) icon in the chat input and verify that `get_docs` is listed
70 | 6. ✅ Confirm the connection is successful
71 | 7. 🔍 If the tool doesn't appear, check the app's MCP log files for startup errors
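For reference, a minimal `claude_desktop_config.json` entry might look like the following. The server name `docs` matches `FastMCP("docs")` in `main.py`; the interpreter and path are placeholders you must adjust to your machine:

```json
{
  "mcpServers": {
    "docs": {
      "command": "python",
      "args": ["/absolute/path/to/claude-docs-assistant/main.py"]
    }
  }
}
```

Use the interpreter that has the project's dependencies installed (e.g. the one inside your virtual environment), or Claude Desktop will fail to start the server.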
72 |
73 | ## 🎮 Using Your Claude Documentation Assistant
74 |
75 | Once connected, you can start asking Claude questions that will trigger the documentation search. For example:
76 |
77 | ```
78 | Could you explain how to use FAISS with LangChain? Please search the langchain documentation to help me.
79 | ```
80 |
81 | Claude will automatically use your MCP server to:
82 | 1. 🔍 Search for relevant documentation
83 | 2. 📥 Retrieve the content
84 | 3. 🧠 Process and explain the information
85 |
86 | ## 🔧 Under the Hood
87 |
88 | ### 📄 Code Structure
89 |
90 | ```
91 | claude-docs-assistant/
92 | ├── main.py           # MCP server implementation
93 | ├── pyproject.toml    # Project metadata and dependencies
94 | ├── .env              # Environment variables (API keys)
95 | └── README.md         # This documentation
96 | ```
97 |
98 | ### 🔌 Supported Libraries
99 |
100 | The assistant currently supports searching documentation for:
101 |
102 | - 🦜 **LangChain**: `python.langchain.com/docs`
103 | - 🦙 **LlamaIndex**: `docs.llamaindex.ai/en/stable`
104 | - 🧠 **OpenAI**: `platform.openai.com/docs`
105 |
106 | ### 🧩 How It Works
107 |
108 | 1. 📡 The MCP server exposes a `get_docs` tool to Claude
109 | 2. 🔍 When invoked, the tool searches for documentation using Serper API
110 | 3. 📚 Results are scraped for their content
111 | 4. 🔄 Content is returned to Claude for analysis and explanation
112 |
113 | ## 🛠️ Advanced Configuration
114 |
115 | ### Adding New Documentation Sources
116 |
117 | Extend the `docs_urls` dictionary in `main.py`:
118 |
119 | ```python
120 | docs_urls = {
121 |     "langchain": "python.langchain.com/docs",
122 |     "llama-index": "docs.llamaindex.ai/en/stable",
123 |     "openai": "platform.openai.com/docs",
124 |     "huggingface": "huggingface.co/docs",  # Add new documentation sources
125 |     "tensorflow": "www.tensorflow.org/api_docs",
126 | }
127 | ```
128 |
129 | ### Customizing Search Behavior
130 |
131 | Modify the `search_web` function to adjust the number of results:
132 |
133 | ```python
134 | payload = json.dumps({"q": query, "num": 5}) # Increase from default 2
135 | ```
136 |
137 | ## 🔍 Troubleshooting
138 |
139 | ### Common Issues
140 |
141 | - **🚫 "Connection refused" error**: Ensure the MCP server is running before connecting Claude
142 | - **⏱️ Timeout errors**: Check your internet connection or increase the timeout value
143 | - **🔒 API key issues**: Verify your Serper API key is correct in the `.env` file
144 |
145 | ### Debugging Tips
146 |
147 | Add more detailed logging by modifying the main.py file:
148 |
149 | ```python
150 | import logging
151 | logging.basicConfig(level=logging.DEBUG)
152 | ```
153 |
154 | ## 📈 Performance Optimization
155 |
156 | - ⚡ For faster response times, consider caching frequently accessed documentation
157 | - 🧠 Limit the amount of text returned to Claude to avoid token limitations
158 | - 🌐 Use more specific queries to get more relevant documentation
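The first bullet can be sketched with a small memoization decorator. `fetch_page` below is a hypothetical stand-in for `fetch_url` in `main.py`; a production version would also want a TTL and a size cap so the cache doesn't grow unbounded or serve stale docs forever:

```python
import asyncio


def async_memoize(fn):
    """Cache results of an async function, keyed by its positional arguments."""
    cache = {}

    async def wrapper(*args):
        if args not in cache:
            cache[args] = await fn(*args)
        return cache[args]

    return wrapper


call_count = 0


@async_memoize
async def fetch_page(url: str) -> str:
    # Stand-in for the real fetch_url; counts how often it actually runs.
    global call_count
    call_count += 1
    return f"content of {url}"


async def demo():
    first = await fetch_page("python.langchain.com/docs/faiss")
    second = await fetch_page("python.langchain.com/docs/faiss")  # served from cache
    return first, second


pages = asyncio.run(demo())
```

Wrapping `fetch_url` this way means repeated queries that hit the same documentation pages skip the HTTP round-trip entirely; only the first request per URL does real work.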
159 |
160 | ## 🤝 Contributing
161 |
162 | Contributions are welcome! Here's how you can help:
163 |
164 | 1. 🍴 Fork the repository
165 | 2. 🌿 Create a feature branch (`git checkout -b feature/amazing-feature`)
166 | 3. 💾 Commit your changes (`git commit -m 'Add some amazing feature'`)
167 | 4. 📤 Push to the branch (`git push origin feature/amazing-feature`)
168 | 5. 🔍 Open a Pull Request
169 |
170 | ## 📜 License
171 |
172 | This project is licensed under the MIT License.
173 |
174 | ## 🙏 Acknowledgements
175 |
176 | - [Anthropic](https://www.anthropic.com/) for creating Claude
177 | - [Serper.dev](https://serper.dev) for their search API
178 | - All the open-source libraries that make this project possible
179 |
180 | ---
181 |
182 | <div align="center">
183 | Made with ❤️ for Claude enthusiasts
184 | </div>
185 |
```
--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------
```toml
1 | [project]
2 | name = "documentation"
3 | version = "0.1.0"
4 | description = "MCP server that lets Claude search AI/ML library documentation"
5 | readme = "README.md"
6 | requires-python = ">=3.13"
7 | dependencies = [
8 |     "beautifulsoup4>=4.12.0",
9 |     "httpx>=0.28.1",
10 |     "mcp[cli]>=1.5.0",
11 |     "python-dotenv>=1.0.0",
12 | ]
13 |
```
--------------------------------------------------------------------------------
/main.py:
--------------------------------------------------------------------------------
```python
1 | from mcp.server.fastmcp import FastMCP
2 | from dotenv import load_dotenv
3 | import httpx
4 | import json
5 | import os
6 | from bs4 import BeautifulSoup
7 |
8 | load_dotenv()
9 |
10 | mcp = FastMCP("docs")
11 | USER_AGENT = "docs-app/1.0"
12 | SERPER_URL = "https://google.serper.dev/search"
13 |
14 | docs_urls = {
15 |     "langchain": "python.langchain.com/docs",
16 |     "llama-index": "docs.llamaindex.ai/en/stable",
17 |     "openai": "platform.openai.com/docs",
18 | }
19 |
20 |
21 | async def search_web(query: str) -> dict:
22 |     payload = json.dumps({"q": query, "num": 2})
23 |     headers = {
24 |         "X-API-KEY": os.getenv("SERPER_API_KEY"),
25 |         "Content-Type": "application/json",
26 |     }
27 |     async with httpx.AsyncClient() as client:
28 |         try:
29 |             response = await client.post(
30 |                 SERPER_URL, headers=headers, content=payload, timeout=30.0
31 |             )
32 |             response.raise_for_status()
33 |             return response.json()
34 |         except httpx.TimeoutException:
35 |             return {"organic": []}
36 |
37 |
38 | async def fetch_url(url: str) -> str:
39 |     async with httpx.AsyncClient() as client:
40 |         try:
41 |             response = await client.get(
42 |                 url, headers={"User-Agent": USER_AGENT}, timeout=30.0
43 |             )
44 |             soup = BeautifulSoup(response.text, "html.parser")
45 |             return soup.get_text()
46 |         except httpx.TimeoutException:
47 |             return "Timeout error"
48 |
49 |
50 | @mcp.tool()
51 | async def get_docs(query: str, library: str) -> str:
52 |     """
53 |     Search the latest docs for a given query and library.
54 |     Supports langchain, openai, and llama-index.
55 |
56 |     Args:
57 |         query: The query to search for (e.g. "Chroma DB")
58 |         library: The library to search in (e.g. "langchain")
59 |
60 |     Returns:
61 |         Text from the docs
62 |     """
63 |     if library not in docs_urls:
64 |         raise ValueError(f"Library {library} not supported by this tool")
65 |
66 |     query = f"site:{docs_urls[library]} {query}"
67 |     results = await search_web(query)
68 |     if not results["organic"]:
69 |         return "No results found"
70 |
71 |     text = ""
72 |     for result in results["organic"]:
73 |         text += await fetch_url(result["link"])
74 |     return text
75 |
76 |
77 | if __name__ == "__main__":
78 |     mcp.run(transport="stdio")
```