# Directory Structure

```
├── .gitignore
├── .python-version
├── LICENSE
├── main.py
├── pyproject.toml
├── README.md
├── requirements.txt
└── uv.lock
```

# Files

--------------------------------------------------------------------------------
/.python-version:
--------------------------------------------------------------------------------

```
1 | 3.12
2 | 
```

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
 1 | # Python-generated files
 2 | __pycache__/
 3 | *.py[oc]
 4 | build/
 5 | dist/
 6 | wheels/
 7 | *.egg-info
 8 | 
 9 | # Virtual environments
10 | .venv
11 | 
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Documentation Retrieval MCP Server (DOCRET)
  2 | 
  3 | This project implements a Model Context Protocol (MCP) server that enables AI assistants to access up-to-date documentation for various Python libraries, including LangChain, LlamaIndex, and OpenAI. By leveraging this server, AI assistants can dynamically fetch and provide relevant information from official documentation sources. The goal is to ensure that AI applications always have access to the latest official documentation.
  4 | 
  5 | ## What is an MCP Server?
  6 | 
  7 | The Model Context Protocol is an open standard that enables developers to build secure, two-way connections between their data sources and AI-powered tools. The architecture is straightforward: developers can either expose their data through MCP servers or build AI applications (MCP clients) that connect to these servers.
  8 | 
  9 | ## Features
 10 | 
 11 | - **Dynamic Documentation Retrieval**: Fetches the latest documentation content for specified Python libraries.
 12 | - **Asynchronous Web Searches**: Utilizes the **SERPER API** to perform efficient web searches within targeted documentation sites.
 13 | - **HTML Parsing**: Employs BeautifulSoup to extract readable text from HTML content.
 14 | - **Extensible Design**: Easily add support for additional libraries by updating the configuration.
 15 | 
 16 | ## Prerequisites
 17 | 
 18 | - Python 3.12 or higher (the project pins 3.12 via `.python-version` and `requires-python`)
 19 | - uv for Python package management (or pip)
 20 | - A Serper API key (for programmatic Google search results)
 21 | - Claude Desktop or Claude Code (for testing)
 22 | 
 23 | ## Installation
 24 | 
 25 | ### 1. Clone the Repository
 26 | 
 27 | ```bash
 28 | git clone https://github.com/Sreedeep-SS/docret-mcp-server.git
 29 | cd docret-mcp-server
 30 | ```
 31 | 
 32 | ### 2. Create and Activate a Virtual Environment
 33 | 
 34 | - **On macOS/Linux**:
 35 | 
 36 |   ```bash
 37 |   python3 -m venv env
 38 |   source env/bin/activate
 39 |   ```
 40 | 
 41 | - **On Windows**:
 42 | 
 43 |   ```bash
 44 |   python -m venv env
 45 |   .\env\Scripts\activate
 46 |   ```
 47 | 
 48 | ### 3. Install Dependencies
 49 | 
 50 | With the virtual environment activated, install the required dependencies:
 51 | 
 52 | ```bash
 53 | pip install -r requirements.txt
 54 | ```
 55 | 
 56 | or if you are using uv:
 57 | 
 58 | ```bash
 59 | uv sync
 60 | ```
 61 | 
 62 | ## Set Up Environment Variables
 63 | 
 64 | Before running the application, configure the required environment variables. This project uses the SERPER API for searching documentation and requires an API key.
 65 | 
 66 | 1. Create a `.env` file in the root directory of the project.
 67 | 2. Add the following environment variable:
 68 | 
 69 |    ```env
 70 |    SERPER_API_KEY=your_serper_api_key_here
 71 |    ```
 72 | 
 73 | Replace `your_serper_api_key_here` with your actual API key.
 74 | 
 75 | ## Running the MCP Server
 76 | 
 77 | Once the dependencies are installed and environment variables are set up, you can start the MCP server.
 78 | 
 79 | ```bash
 80 | python main.py
 81 | ```
 82 | 
 83 | This will launch the server and make it ready to handle requests.
 84 | 
 85 | ## Usage
 86 | 
 87 | The MCP server provides an API to fetch documentation content from supported libraries. It works by querying the SERPER API for relevant documentation links and scraping the page content.
 88 | 
 89 | ### Searching Documentation
 90 | 
 91 | To search for documentation on a specific topic within a library, use the `get_docs` function. This function takes two parameters:
 92 | 
 93 | - `query`: The topic to search for (e.g., "Chroma DB")
 94 | - `library`: The name of the library (e.g., "langchain")
 95 | 
 96 | Example usage:
 97 | 
 98 | ```python
 99 | import asyncio
100 | from main import get_docs
101 | 
102 | print(asyncio.run(get_docs("memory management", "openai")))
103 | ```
104 | 
105 | This will return the extracted text from the relevant OpenAI documentation pages.
106 | 
107 | ## Integrating with AI Assistants
108 | 
109 | You can integrate this MCP server with MCP clients such as Claude Desktop or custom-built AI applications. For Claude Desktop, register the server in your `claude_desktop_config.json`:
110 | 
111 | ```json
112 | {
113 |   "mcpServers": {
114 |     "documentation": {
115 |       "command": "python",
116 |       "args": ["/path/to/main.py"]
117 |     }
118 |   }
119 | }
120 | ```
121 | 
122 | Ensure that the correct path to `main.py` is specified.
123 | 
124 | ## Extending the MCP Server
125 | 
126 | The server currently supports the following libraries:
127 | 
128 | - **LangChain**
129 | - **LlamaIndex**
130 | - **OpenAI**
131 | 
132 | To add support for additional libraries, update the `docs_urls` dictionary in `main.py` with the library name and its documentation URL:
133 | 
134 | ```python
135 | docs_urls = {
136 |     "langchain": "python.langchain.com/docs",
137 |     "llama-index": "docs.llamaindex.ai/en/stable",
138 |     "openai": "platform.openai.com/docs",
139 |     "new-library": "new-library-docs-url.com",
140 | }
141 | ```
142 | 
143 | ## 📌 Roadmap
144 | 
145 | Building this out has been exciting, and I'm looking forward to extending it as new ideas come up.
146 | 
147 | Here's what I have in mind:
148 | 
149 | 1. #### **Add support for more libraries (e.g., Hugging Face, PyTorch)**
150 |    
151 |    - Expand the `docs_urls` dictionary with additional libraries.
152 |    - Modify the get_docs function to handle different formats of documentation pages.
153 |    - Use regex-based or AI-powered parsing to better extract meaningful content.
154 |    - Provide an API endpoint to dynamically add new libraries.
155 |  
156 | 
157 | 2. #### **Implement caching to reduce redundant API calls**
158 | 
159 |     - Use Redis or an in-memory cache (plain `functools.lru_cache` won't work on coroutines, since it would cache the coroutine object rather than its result).
160 |     - Implement time-based cache invalidation.
161 |     - Cache results per library and per search term.
162 | 
163 |     
164 | 3. #### **Optimize web scraping with AI-powered summarization**
165 | 
166 |     - Use `GPT-4`, `BART`, or `T5` for summarizing scraped documentation.
167 |     - Other candidates include `Claude 3 Haiku`, `Gemini 1.5 Pro`, `GPT-4-mini`, `Open-mistral-nemo`, and Hugging Face models; the choice is open to debate.
168 |     - Let users choose between raw documentation text and a summarized version.
169 |  
170 | 
171 | 4. #### **Introduce a REST API for external integrations**
172 | 
173 |     - Use FastAPI to expose API endpoints.
174 |     - Build a simple frontend dashboard for API interaction.
175 | 
176 |  
177 | 
178 | 5. #### **Add unit tests for better reliability**
179 | 
180 |     - Use pytest and unittest for API and scraping reliability tests.
181 |     - Implement CI/CD workflows to automatically run tests on every push.
182 | 
183 | 
184 | 6. #### **More MCP tools that can be useful during development**
185 | 
186 |     - Database Integrations
187 |     - Google Docs/Sheets/Drive Integration
188 |     - File System Operations
189 |     - Git Integration
190 |     - Integrating Communication Platforms to convert ideas into product
191 |     - Docker and Kubernetes management
192 | 
193 | 
194 | 
195 | ## References
196 | 
197 | For more details on MCP servers and their implementation, refer to the guide:
198 | 
199 | - [Introducing the Model Context Protocol](https://www.anthropic.com/news/model-context-protocol)
200 | - [https://modelcontextprotocol.io/](https://modelcontextprotocol.io/)
201 | - [Adding MCP to your python project](https://github.com/modelcontextprotocol/python-sdk?tab=readme-ov-file#adding-mcp-to-your-python-project)
202 | 
203 | 
204 | ## License
205 | 
206 | This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for more details.
207 | 
208 | 
```
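Roadmap item 2 in the README (caching) needs an async-aware cache: `functools.lru_cache` applied to a coroutine function caches the coroutine object, not its result. A minimal sketch of a TTL cache decorator that could wrap the search pipeline — the `async_ttl_cache` name, the TTL value, and `cached_search` are illustrative assumptions, not part of the project:

```python
import asyncio
import time
from typing import Any


def async_ttl_cache(ttl_seconds: float = 300.0):
    """Cache an async function's results per positional arguments,
    expiring entries after ttl_seconds."""
    def decorator(func):
        cache: dict[tuple, tuple[float, Any]] = {}

        async def wrapper(*args):
            now = time.monotonic()
            hit = cache.get(args)
            if hit is not None and now - hit[0] < ttl_seconds:
                return hit[1]          # fresh entry: skip the API call
            value = await func(*args)  # miss or expired: call through
            cache[args] = (now, value)
            return value

        return wrapper
    return decorator


@async_ttl_cache(ttl_seconds=600)
async def cached_search(library: str, query: str) -> str:
    # Stand-in for the real search_web + fetch_url pipeline.
    return f"docs for {query!r} in {library!r}"
```

Results are keyed per `(library, query)` pair, matching the roadmap's "cache results per library and per search term"; a Redis backend would replace the in-process `cache` dict for multi-process deployments.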

--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------

```toml
 1 | [project]
 2 | name = "documentation"
 3 | version = "0.1.0"
 4 | description = "MCP server that retrieves up-to-date documentation for Python libraries"
 5 | readme = "README.md"
 6 | requires-python = ">=3.12"
 7 | dependencies = [
 8 |     "beautifulsoup4>=4.13.3",
 9 |     "httpx>=0.28.1",
10 |     "mcp[cli]>=1.5.0",
11 |     "python-dotenv>=1.0.1",
12 | ]
13 | 
```

--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------

```
 1 | # This file was autogenerated by uv via the following command:
 2 | #    uv pip compile pyproject.toml -o requirements.txt
 3 | annotated-types==0.7.0
 4 |     # via pydantic
 5 | anyio==4.9.0
 6 |     # via
 7 |     #   httpx
 8 |     #   mcp
 9 |     #   sse-starlette
10 |     #   starlette
11 | beautifulsoup4==4.13.3
12 |     # via documentation (pyproject.toml)
13 | certifi==2025.1.31
14 |     # via
15 |     #   httpcore
16 |     #   httpx
17 | click==8.1.8
18 |     # via
19 |     #   typer
20 |     #   uvicorn
21 | colorama==0.4.6
22 |     # via click
23 | h11==0.14.0
24 |     # via
25 |     #   httpcore
26 |     #   uvicorn
27 | httpcore==1.0.7
28 |     # via httpx
29 | httpx==0.28.1
30 |     # via
31 |     #   documentation (pyproject.toml)
32 |     #   mcp
33 | httpx-sse==0.4.0
34 |     # via mcp
35 | idna==3.10
36 |     # via
37 |     #   anyio
38 |     #   httpx
39 | markdown-it-py==3.0.0
40 |     # via rich
41 | mcp==1.5.0
42 |     # via documentation (pyproject.toml)
43 | mdurl==0.1.2
44 |     # via markdown-it-py
45 | pydantic==2.10.6
46 |     # via
47 |     #   mcp
48 |     #   pydantic-settings
49 | pydantic-core==2.27.2
50 |     # via pydantic
51 | pydantic-settings==2.8.1
52 |     # via mcp
53 | pygments==2.19.1
54 |     # via rich
55 | python-dotenv==1.0.1
56 |     # via
57 |     #   mcp
58 |     #   pydantic-settings
59 | rich==13.9.4
60 |     # via typer
61 | shellingham==1.5.4
62 |     # via typer
63 | sniffio==1.3.1
64 |     # via anyio
65 | soupsieve==2.6
66 |     # via beautifulsoup4
67 | sse-starlette==2.2.1
68 |     # via mcp
69 | starlette==0.46.1
70 |     # via
71 |     #   mcp
72 |     #   sse-starlette
73 | typer==0.15.2
74 |     # via mcp
75 | typing-extensions==4.12.2
76 |     # via
77 |     #   anyio
78 |     #   beautifulsoup4
79 |     #   pydantic
80 |     #   pydantic-core
81 |     #   typer
82 | uvicorn==0.34.0
83 |     # via mcp
84 | 
```
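`python-dotenv` (pinned above via `mcp`) is what lets `main.py` read `SERPER_API_KEY` from `.env` after `load_dotenv()`. A fail-fast helper turns a missing key into an immediate, clear error instead of a silent `None` API header — the `require_api_key` name and its injectable `env` parameter are illustrative assumptions, not part of the project:

```python
import os


def require_api_key(env=None) -> str:
    """Return SERPER_API_KEY or fail loudly. main.py calls load_dotenv()
    first, so values from .env are already in os.environ by this point."""
    env = os.environ if env is None else env
    key = env.get("SERPER_API_KEY")
    if not key:
        raise RuntimeError("SERPER_API_KEY is not set; add it to .env")
    return key
```

Calling this once at startup (rather than `os.getenv` per request) surfaces configuration mistakes before the first search is attempted.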

--------------------------------------------------------------------------------
/main.py:
--------------------------------------------------------------------------------

```python
 1 | import json
 2 | import os
 3 | 
 4 | import httpx
 5 | from bs4 import BeautifulSoup
 6 | from dotenv import load_dotenv
 7 | from mcp.server.fastmcp import FastMCP
 8 | 
 9 | 
10 | load_dotenv()
11 | 
12 | mcp = FastMCP("docs")
13 | 
14 | USER_AGENT = "docs-app/1.0"
15 | SERPER_URL = "https://google.serper.dev/search"
16 | 
17 | docs_urls = {
18 |     "langchain": "python.langchain.com/docs",
19 |     "llama-index": "docs.llamaindex.ai/en/stable",
20 |     "openai": "platform.openai.com/docs",
21 | }
22 | 
23 | 
24 | async def search_web(query: str) -> dict:
25 |     payload = json.dumps({"q": query, "num": 2})
26 | 
27 |     headers = {
28 |         "X-API-KEY": os.getenv("SERPER_API_KEY"),
29 |         "Content-Type": "application/json",
30 |     }
31 | 
32 |     async with httpx.AsyncClient() as client:
33 |         try:
34 |             response = await client.post(
35 |                 SERPER_URL, headers=headers, data=payload, timeout=30
36 |             )
37 | 
38 |             response.raise_for_status()
39 |             return response.json()
40 |         except httpx.TimeoutException:
41 |             return {"organic": []}
42 | 
43 | 
44 | async def fetch_url(url: str):
45 |     async with httpx.AsyncClient() as client:
46 |         try:
47 |             response = await client.get(url, timeout=30, headers={"User-Agent": USER_AGENT})
48 |             soup = BeautifulSoup(response.text, "html.parser")
49 |             text = soup.get_text()
50 |             return text
51 |         except httpx.TimeoutException:
52 |             return "Timeout Error"
53 | 
54 | 
55 | @mcp.tool()
56 | async def get_docs(query: str, library: str):
57 |     """
58 |     Search the docs for a given query and library.
59 |     Supports langchain, openai, and llama-index.
60 | 
61 |     Args:
62 |     :param query: The query to search for (e.g. "Chroma DB")
63 |     :param library: The library to search in (e.g. "langchain")
64 | 
65 |     :return:
66 |         The text from the docs
67 |     """
68 |     if library not in docs_urls:
69 |         raise ValueError(f"Library {library} not supported by this tool")
70 | 
71 |     query = f"site:{docs_urls[library]} {query}"
72 |     results = await search_web(query)
73 |     if len(results["organic"]) == 0:
74 |         return "No results found"
75 | 
76 |     text = ""
77 |     for result in results["organic"]:
78 |         text += await fetch_url(result["link"])
79 |     return text
80 | 
81 | if __name__ == "__main__":
82 |     mcp.run(transport="stdio")
83 | 
```
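The core of `get_docs` above is a site-restricted Serper query followed by page fetching. The query-construction step can be isolated as a small pure function for testing — the `build_query` name is an illustrative assumption (`main.py` inlines this logic), while `docs_urls` mirrors the dictionary in `main.py`:

```python
docs_urls = {
    "langchain": "python.langchain.com/docs",
    "llama-index": "docs.llamaindex.ai/en/stable",
    "openai": "platform.openai.com/docs",
}


def build_query(query: str, library: str) -> str:
    """Build the site-restricted search string that get_docs sends to Serper."""
    if library not in docs_urls:
        raise ValueError(f"Library {library} not supported by this tool")
    return f"site:{docs_urls[library]} {query}"
```

For example, `build_query("Chroma DB", "langchain")` produces `site:python.langchain.com/docs Chroma DB`, scoping Google results to the official documentation site before any page is scraped.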