This is page 1 of 3. Use http://codebase.md/surya-madhav/mcp?lines=true&page={x} to view the full context.

# Directory Structure

```
├── .DS_Store
├── .gitignore
├── docs
│   ├── 00-important-official-mcp-documentation.md
│   ├── 00-important-python-mcp-sdk.md
│   ├── 01-introduction-to-mcp.md
│   ├── 02-mcp-core-concepts.md
│   ├── 03-building-mcp-servers-python.md
│   ├── 04-connecting-to-mcp-servers.md
│   ├── 05-communication-protocols.md
│   ├── 06-troubleshooting-guide.md
│   ├── 07-extending-the-repo.md
│   └── 08-advanced-mcp-features.md
├── frontend
│   ├── app.py
│   ├── pages
│   │   ├── 01_My_Active_Servers.py
│   │   ├── 02_Settings.py
│   │   └── 03_Documentation.py
│   └── utils.py
├── LICENSE
├── README.md
├── requirements.txt
├── run.bat
├── run.sh
├── server.py
└── tools
    ├── __init__.py
    ├── crawl4ai_scraper.py
    ├── ddg_search.py
    └── web_scrape.py
```

# Files

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
1 | tools/__pycache__/__init__.cpython-312.pyc
2 | tools/__pycache__/web_scrape.cpython-312.pyc
3 | .idea/
4 | **/__pycache__/**
5 | 
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
  1 | # MCP Web Tools Server
  2 | 
  3 | A Model Context Protocol (MCP) server that provides tools for web-related operations. This server allows LLMs to interact with web content through standardized tools.
  4 | 
  5 | ## Current Tools
  6 | 
  7 | - **web_scrape**: Converts a URL to use r.jina.ai as a prefix and returns the markdown content (the server also registers **ddg_search** and **advanced_scrape**; see `server.py`)
  8 | 
  9 | ## Installation
 10 | 
 11 | 1. Clone this repository:
 12 |    ```bash
 13 |    git clone <repository-url>
 14 |    cd MCP
 15 |    ```
 16 | 
 17 | 2. Install the required dependencies:
 18 |    ```bash
 19 |    pip install -r requirements.txt
 20 |    ```
 21 | 
 22 |    Alternatively, you can use [uv](https://github.com/astral-sh/uv) for faster installation:
 23 |    ```bash
 24 |    uv pip install -r requirements.txt
 25 |    ```
 26 | 
 27 | ## Running the Server and UI
 28 | 
 29 | This repository includes convenient scripts to run either the MCP server or the Streamlit UI.
 30 | 
 31 | ### Using the Run Scripts
 32 | 
 33 | On macOS/Linux:
 34 | ```bash
 35 | # Run the server with stdio transport (default)
 36 | ./run.sh server
 37 | 
 38 | # Run the server with SSE transport
 39 | ./run.sh server --transport sse --host localhost --port 5000
 40 | 
 41 | # Run the Streamlit UI
 42 | ./run.sh ui
 43 | ```
 44 | 
 45 | On Windows:
 46 | ```cmd
 47 | # Run the server with stdio transport (default)
 48 | run.bat server
 49 | 
 50 | # Run the server with SSE transport
 51 | run.bat server --transport sse --host localhost --port 5000
 52 | 
 53 | # Run the Streamlit UI
 54 | run.bat ui
 55 | ```
 56 | 
 57 | ### Running Manually
 58 | 
 59 | Alternatively, you can run the server directly:
 60 | 
 61 | #### Using stdio (default)
 62 | 
 63 | ```bash
 64 | python server.py
 65 | ```
 66 | 
 67 | #### Using SSE
 68 | 
 69 | ```bash
 70 | python server.py --transport sse --host localhost --port 5000
 71 | ```
 72 | 
 73 | This will start an HTTP server on `localhost:5000` that accepts MCP connections.
 74 | 
 75 | And to run the Streamlit UI manually:
 76 | 
 77 | ```bash
 78 | streamlit run frontend/app.py
 79 | ```
 80 | 
 81 | ## Testing with MCP Inspector
 82 | 
 83 | The MCP Inspector is a tool for testing and debugging MCP servers. You can use it to interact with your server:
 84 | 
 85 | 1. Install the MCP Inspector:
 86 |    ```bash
 87 |    npm install -g @modelcontextprotocol/inspector
 88 |    ```
 89 | 
 90 | 2. Run the Inspector with your server:
 91 |    ```bash
 92 |    npx @modelcontextprotocol/inspector python server.py
 93 |    ```
 94 | 
 95 | 3. Use the Inspector interface to test the `web_scrape` tool by providing a URL like `example.com` and viewing the returned markdown content.
 96 | 
 97 | ## Integrating with Claude for Desktop
 98 | 
 99 | To use this server with Claude for Desktop:
100 | 
101 | 1. Make sure you have [Claude for Desktop](https://claude.ai/download) installed.
102 | 
103 | 2. Open the Claude for Desktop configuration file:
104 |    - Mac: `~/Library/Application Support/Claude/claude_desktop_config.json`
105 |    - Windows: `%APPDATA%\Claude\claude_desktop_config.json`
106 | 
107 | 3. Add the following configuration (adjust the path as needed):
108 | 
109 | ```json
110 | {
111 |   "mcpServers": {
112 |     "web-tools": {
113 |       "command": "python",
114 |       "args": [
115 |         "/absolute/path/to/MCP/server.py"
116 |       ]
117 |     }
118 |   }
119 | }
120 | ```
121 | 
122 | 4. Restart Claude for Desktop.
123 | 
124 | 5. You should now see the web_scrape tool available in Claude's interface. You can ask Claude to fetch content from a website, and it will use the tool.
125 | 
126 | ## Example Usage
127 | 
128 | Once integrated with Claude, you can ask questions like:
129 | 
130 | - "What's on the homepage of example.com?"
131 | - "Can you fetch and summarize the content from mozilla.org?"
132 | - "Get the content from wikipedia.org/wiki/Model_Context_Protocol and explain it to me."
133 | 
134 | Claude will use the web_scrape tool to fetch the content and provide it in its response.
135 | 
136 | ## Adding More Tools
137 | 
138 | To add more tools to this server:
139 | 
140 | 1. Create a new Python file in the `tools/` directory, e.g., `tools/new_tool.py`.
141 | 
142 | 2. Implement your tool function, following a similar pattern to the existing tools.
143 | 
144 | 3. Import your tool in `server.py` and register it with the MCP server:
145 | 
146 | ```python
147 | # Import your new tool
148 | from tools.new_tool import new_tool_function
149 | 
150 | # Register the tool with the MCP server
151 | @mcp.tool()
152 | async def new_tool(param1: str, param2: int) -> str:
153 |     """
154 |     Description of what your tool does.
155 |     
156 |     Args:
157 |         param1: Description of param1
158 |         param2: Description of param2
159 |         
160 |     Returns:
161 |         Description of return value
162 |     """
163 |     return await new_tool_function(param1, param2)
164 | ```
165 | 
166 | 4. Restart the server to apply the changes.
167 | 
168 | ## Streamlit UI
169 | 
170 | This repository includes a Streamlit application that allows you to connect to and test all your MCP servers configured in Claude for Desktop.
171 | 
172 | ### Running the Streamlit UI
173 | 
174 | ```bash
175 | streamlit run frontend/app.py
176 | ```
177 | 
178 | This will start the Streamlit server and open a web browser with the UI.
179 | 
180 | ### Features
181 | 
182 | - Load and parse your Claude for Desktop configuration file
183 | - View all configured MCP servers
184 | - Connect to any server and view its available tools
185 | - Test tools by providing input parameters and viewing results
186 | - See available resources and prompts
187 | 
188 | ### Usage
189 | 
190 | 1. Start the Streamlit app
191 | 2. Enter the path to your Claude for Desktop configuration file (default path is pre-filled)
192 | 3. Click "Load Servers" to see all available MCP servers
193 | 4. Select a server tab and click "Connect" to load its tools
194 | 5. Select a tool and provide the required parameters
195 | 6. Click "Execute" to run the tool and see the results
196 | 
197 | ## Troubleshooting
198 | 
199 | - **Missing dependencies**: Make sure all dependencies in `requirements.txt` are installed.
200 | - **Connection issues**: Check that the server is running and the configuration in Claude for Desktop points to the correct path.
201 | - **Tool execution errors**: Look for error messages in the server output.
202 | - **Streamlit UI issues**: Make sure Streamlit is properly installed and the configuration file path is correct.
203 | 
204 | ## License
205 | 
206 | This project is available under the MIT License. See the LICENSE file for more details.
207 | 
```

--------------------------------------------------------------------------------
/tools/__init__.py:
--------------------------------------------------------------------------------

```python
1 | # This file allows the tools directory to be imported as a package
2 | 
```

--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------

```
1 | mcp>=1.2.0
2 | httpx>=0.24.0
3 | streamlit>=1.26.0
4 | json5>=0.9.14
5 | subprocess-tee>=0.4.1
6 | rich>=13.7.0
7 | duckduckgo_search
8 | crawl4ai>=0.4.3
```

--------------------------------------------------------------------------------
/run.bat:
--------------------------------------------------------------------------------

```
 1 | @echo off
 2 | REM Script to run either the MCP server or the Streamlit UI
 3 | 
 4 | REM Check if Python is installed
 5 | where python >nul 2>nul
 6 | if %ERRORLEVEL% neq 0 (
 7 |     echo Python is not installed or not in your PATH. Please install Python first.
 8 |     exit /b 1
 9 | )
10 | 
11 | REM Check if pip is installed
12 | where pip >nul 2>nul
13 | if %ERRORLEVEL% neq 0 (
14 |     echo pip is not installed or not in your PATH. Please install pip first.
15 |     exit /b 1
16 | )
 17 | goto :main
 18 | REM Function to check and install dependencies
 19 | :check_dependencies
 20 |     echo Checking dependencies...
 21 |     
 22 |     REM Check if requirements.txt exists
 23 |     if not exist "requirements.txt" (
 24 |         echo requirements.txt not found. Please run this script from the repository root.
 25 |         exit /b 1
 26 |     )
 27 |     
 28 |     REM Install dependencies
 29 |     echo Installing dependencies from requirements.txt...
 30 |     pip install -r requirements.txt
 31 |     
 32 |     if %ERRORLEVEL% neq 0 (
 33 |         echo Failed to install dependencies. Please check the errors above.
 34 |         exit /b 1
 35 |     )
 36 |     
 37 |     echo Dependencies installed successfully.
 38 |     exit /b 0
 39 | 
 40 | REM Function to run the MCP server
 41 | :run_server
 42 |     echo Starting MCP server...
 43 |     echo Press Ctrl+C to stop the server.
 44 |     python server.py %*
 45 |     exit /b 0
 46 | 
 47 | REM Function to run the Streamlit UI
 48 | :run_ui
 49 |     echo Starting Streamlit UI...
 50 |     echo Press Ctrl+C to stop the UI.
 51 |     streamlit run frontend/app.py
 52 |     exit /b 0
 53 | :main
54 | REM Main script
55 | if "%1"=="server" (
56 |     call :check_dependencies
 57 |     if errorlevel 1 exit /b 1
 58 |     REM Note: shift does not affect %* in cmd, so pass the arguments explicitly
 59 |     call :run_server %2 %3 %4 %5 %6 %7 %8 %9
60 | ) else if "%1"=="ui" (
61 |     call :check_dependencies
 62 |     if errorlevel 1 exit /b 1
63 |     call :run_ui
64 | ) else (
65 |     echo MCP Tools Runner
66 |     echo Usage:
67 |     echo   run.bat server [args]  - Run the MCP server with optional arguments
68 |     echo   run.bat ui             - Run the Streamlit UI
69 |     echo.
70 |     echo Examples:
71 |     echo   run.bat server                        - Run the server with stdio transport
72 |     echo   run.bat server --transport sse        - Run the server with SSE transport
73 |     echo   run.bat ui                            - Start the Streamlit UI
74 | )
75 | 
```

--------------------------------------------------------------------------------
/run.sh:
--------------------------------------------------------------------------------

```bash
 1 | #!/bin/bash
 2 | # Script to run either the MCP server or the Streamlit UI
 3 | 
 4 | # Colors for output
 5 | RED='\033[0;31m'
 6 | GREEN='\033[0;32m'
 7 | BLUE='\033[0;34m'
 8 | YELLOW='\033[1;33m'
 9 | NC='\033[0m' # No Color
10 | 
11 | # Check if Python is installed
12 | if ! command -v python &> /dev/null
13 | then
14 |     echo -e "${RED}Python is not installed or not in your PATH. Please install Python first.${NC}"
15 |     exit 1
16 | fi
17 | 
18 | # Check if pip is installed
19 | if ! command -v pip &> /dev/null
20 | then
21 |     echo -e "${RED}pip is not installed or not in your PATH. Please install pip first.${NC}"
22 |     exit 1
23 | fi
24 | 
25 | # Function to check and install dependencies
26 | check_dependencies() {
27 |     echo -e "${BLUE}Checking dependencies...${NC}"
28 |     
29 |     # Check if requirements.txt exists
30 |     if [ ! -f "requirements.txt" ]; then
31 |         echo -e "${RED}requirements.txt not found. Please run this script from the repository root.${NC}"
32 |         exit 1
33 |     fi
34 |     
35 |     # Install dependencies
36 |     echo -e "${YELLOW}Installing dependencies from requirements.txt...${NC}"
37 |     pip install -r requirements.txt
38 |     
39 |     if [ $? -ne 0 ]; then
40 |         echo -e "${RED}Failed to install dependencies. Please check the errors above.${NC}"
41 |         exit 1
42 |     fi
43 |     
44 |     echo -e "${GREEN}Dependencies installed successfully.${NC}"
45 | }
46 | 
47 | # Function to run the MCP server
48 | run_server() {
49 |     echo -e "${BLUE}Starting MCP server...${NC}"
50 |     echo -e "${YELLOW}Press Ctrl+C to stop the server.${NC}"
51 |     python server.py "$@"
52 | }
53 | 
54 | # Function to run the Streamlit UI
55 | run_ui() {
56 |     echo -e "${BLUE}Starting MCP Dev Tools UI...${NC}"
57 |     echo -e "${YELLOW}Press Ctrl+C to stop the UI.${NC}"
58 |     # Use the new frontend/app.py file instead of app.py
59 |     streamlit run frontend/app.py
60 | }
61 | 
62 | # Main script
63 | case "$1" in
64 |     server)
65 |         shift # Remove the first argument
66 |         check_dependencies
67 |         run_server "$@"
68 |         ;;
69 |     ui)
70 |         check_dependencies
71 |         run_ui
72 |         ;;
73 |     *)
74 |         echo -e "${BLUE}MCP Dev Tools Runner${NC}"
75 |         echo -e "${YELLOW}Usage:${NC}"
76 |         echo -e "  ./run.sh server [args]  - Run the MCP server with optional arguments"
77 |         echo -e "  ./run.sh ui             - Run the MCP Dev Tools UI"
78 |         echo
79 |         echo -e "${YELLOW}Examples:${NC}"
80 |         echo -e "  ./run.sh server                        - Run the server with stdio transport"
81 |         echo -e "  ./run.sh server --transport sse        - Run the server with SSE transport"
82 |         echo -e "  ./run.sh ui                            - Start the MCP Dev Tools UI"
83 |         ;;
84 | esac
85 | 
```

--------------------------------------------------------------------------------
/tools/web_scrape.py:
--------------------------------------------------------------------------------

```python
 1 | """
 2 | Web scraping tool for MCP server.
 3 | 
 4 | This module provides functionality to convert regular URLs into r.jina.ai prefixed URLs
 5 | and fetch their content as markdown. The r.jina.ai service acts as a URL-to-markdown
 6 | converter, making web content more accessible for text processing and analysis.
 7 | 
 8 | Features:
 9 | - Automatic HTTP/HTTPS scheme addition if missing
10 | - URL conversion to r.jina.ai format
11 | - Asynchronous HTTP requests using httpx
12 | - Comprehensive error handling for various failure scenarios
13 | """
14 | 
15 | import httpx
16 | 
17 | async def fetch_url_as_markdown(url: str) -> str:
18 |     """
19 |     Convert a URL to use r.jina.ai as a prefix and fetch the markdown content.
20 |     
21 |     This function performs the following steps:
22 |     1. Ensures the URL has a proper HTTP/HTTPS scheme
23 |     2. Converts the URL to use r.jina.ai as a prefix
24 |     3. Fetches the content using an async HTTP client
25 |     4. Returns the markdown content or an error message
26 |     
27 |     Args:
28 |         url (str): The URL to convert and fetch. If the URL doesn't start with
29 |                   'http://' or 'https://', 'https://' will be automatically added.
30 |     
31 |     Returns:
32 |         str: The markdown content if successful, or a descriptive error message if:
33 |              - The HTTP request fails (e.g., 404, 500)
34 |              - The connection times out
35 |              - Any other unexpected error occurs
36 |     """
37 |     # Ensure URL has a scheme - default to https:// if none provided
38 |     if not url.startswith(('http://', 'https://')):
39 |         url = 'https://' + url
40 |     
41 |     # Convert the URL to use r.jina.ai as a markdown conversion service
42 |     converted_url = f"https://r.jina.ai/{url}"
43 |     
44 |     try:
45 |         # Use httpx for modern async HTTP requests with timeout and redirect handling
46 |         async with httpx.AsyncClient() as client:
47 |             response = await client.get(converted_url, follow_redirects=True, timeout=30.0)
48 |             response.raise_for_status()
49 |             return response.text
50 |     except httpx.HTTPStatusError as e:
51 |         # Handle HTTP errors (4xx, 5xx) with specific status code information
52 |         return f"Error: HTTP status error - {e.response.status_code}"
53 |     except httpx.RequestError as e:
54 |         # Handle network-related errors (timeouts, connection issues, etc.)
55 |         return f"Error: Request failed - {str(e)}"
56 |     except Exception as e:
57 |         # Handle any unexpected errors that weren't caught by the above
58 |         return f"Error: Unexpected error occurred - {str(e)}"
59 | 
60 | # Standalone test functionality
61 | if __name__ == "__main__":
62 |     import asyncio
63 |     
64 |     async def test():
65 |         # Example usage with a test URL
66 |         url = "example.com"
67 |         result = await fetch_url_as_markdown(url)
68 |         print(f"Fetched content from {url}:")
69 |         # Show preview of content (first 200 characters)
70 |         print(result[:200] + "..." if len(result) > 200 else result)
71 |     
72 |     # Run the test function in an async event loop
73 |     asyncio.run(test())
74 | 
```
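The scheme-normalization and prefixing logic above can be checked in isolation without any network access; a minimal sketch (the `to_jina_url` name is hypothetical, not part of `tools/web_scrape.py`):

```python
def to_jina_url(url: str) -> str:
    """Mirror the normalization in fetch_url_as_markdown: add a scheme
    if one is missing, then prefix with the r.jina.ai converter."""
    if not url.startswith(("http://", "https://")):
        url = "https://" + url  # default to https, as the tool does
    return f"https://r.jina.ai/{url}"
```

Note that the original scheme is preserved inside the converted URL, so `http://` targets stay `http://` after the prefix is applied.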

--------------------------------------------------------------------------------
/frontend/pages/02_Settings.py:
--------------------------------------------------------------------------------

```python
 1 | import streamlit as st
 2 | import os
 3 | import json
 4 | import json5
 5 | import sys
 6 | 
 7 | # Add the parent directory to the Python path to import utils
 8 | sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))
 9 | from frontend.utils import default_config_path, load_config
10 | 
11 | st.title("Settings")
12 | 
13 | # Settings container
14 | with st.container():
15 |     st.subheader("Configuration Settings")
16 |     
17 |     # Get current config path from session state
18 |     current_config_path = st.session_state.get('config_path', default_config_path)
19 |     
20 |     # Config file path selector (with unique key)
21 |     config_path = st.text_input(
22 |         "Path to Claude Desktop config file", 
23 |         value=current_config_path,
24 |         key="settings_config_path"
25 |     )
26 |     
27 |     # Update the session state if path changed
28 |     if config_path != current_config_path:
29 |         st.session_state.config_path = config_path
30 |         if 'debug_messages' in st.session_state:
31 |             st.session_state.debug_messages.append(f"Config path updated to: {config_path}")
32 |     
33 |     # Add a button to view the current config
34 |     if st.button("View Current Config", key="view_config_button"):
35 |         if os.path.exists(config_path):
36 |             with st.spinner("Loading config file..."):
37 |                 config_data = load_config(config_path)
38 |                 if config_data:
39 |                     with st.expander("Config File Content", expanded=True):
40 |                         st.json(config_data)
41 |                     
42 |                     # Update session state
43 |                     st.session_state.config_data = config_data
44 |                     if 'mcpServers' in config_data:
45 |                         st.session_state.servers = config_data.get('mcpServers', {})
46 |                         
47 |                         # Add debug message
48 |                         success_msg = f"Found {len(st.session_state.servers)} MCP servers in the config file"
49 |                         if 'debug_messages' in st.session_state:
50 |                             st.session_state.debug_messages.append(success_msg)
51 |                 else:
52 |                     st.error("Failed to load config file")
53 |         else:
54 |             st.error(f"Config file not found: {config_path}")
55 | 
56 | # Help section for adding new servers
57 | with st.expander("Adding New MCP Servers"):
58 |     st.markdown("""
59 |     ## How to Add New MCP Servers
60 |     
61 |     To add a new MCP server to your configuration:
62 |     
63 |     1. Edit the Claude Desktop config file (usually at `~/Library/Application Support/Claude/claude_desktop_config.json`)
64 |     
65 |     2. Add or modify the `mcpServers` section with your new server configuration:
66 |     
67 |     ```json
68 |     "mcpServers": {
69 |         "my-server-name": {
70 |             "command": "python",
71 |             "args": ["/path/to/your/server.py"],
72 |             "env": {
73 |                 "OPTIONAL_ENV_VAR": "value"
74 |             }
75 |         },
76 |         "another-server": {
77 |             "command": "npx",
78 |             "args": ["some-mcp-package"]
79 |         }
80 |     }
81 |     ```
82 |     
83 |     3. Save the file and reload it in the MCP Dev Tools
84 |     
85 |     The `command` is the executable to run (e.g., `python`, `node`, `npx`), and `args` is an array of arguments to pass to the command.
86 |     """)
87 | 
```
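`load_config` and `default_config_path` are imported from `frontend/utils.py`, which does not appear on this page. A minimal sketch of a compatible `load_config`, assuming json5 (listed in `requirements.txt`) is used for lenient parsing, with a strict-JSON fallback that is this sketch's own assumption:

```python
import json

def load_config(path: str) -> dict | None:
    """Parse a Claude Desktop config file; return None on any failure,
    matching how the Settings page treats a falsy result."""
    try:
        with open(path, "r", encoding="utf-8") as f:
            text = f.read()
        try:
            import json5  # lenient parser pulled in via requirements.txt
            return json5.loads(text)
        except ImportError:
            return json.loads(text)  # sketch assumption: strict JSON fallback
    except (OSError, ValueError):
        return None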

--------------------------------------------------------------------------------
/frontend/app.py:
--------------------------------------------------------------------------------

```python
  1 | import streamlit as st
  2 | import os
  3 | import sys
  4 | 
  5 | # Add the parent directory to the Python path to import utils
  6 | sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
  7 | from frontend.utils import default_config_path, check_node_installations
  8 | 
  9 | # Set page config
 10 | st.set_page_config(
 11 |     page_title="MCP Dev Tools",
 12 |     page_icon="🔌",
 13 |     layout="wide"
 14 | )
 15 | 
 16 | # Initialize session state
 17 | if 'debug_messages' not in st.session_state:
 18 |     st.session_state.debug_messages = []
 19 |     
 20 | if 'config_path' not in st.session_state:
 21 |     st.session_state.config_path = default_config_path
 22 | 
 23 | if 'servers' not in st.session_state:
 24 |     st.session_state.servers = {}
 25 | 
 26 | if 'active_server' not in st.session_state:
 27 |     st.session_state.active_server = None
 28 | 
 29 | def add_debug_message(message):
 30 |     """Add a debug message to the session state"""
 31 |     st.session_state.debug_messages.append(message)
 32 |     # Keep only the last 10 messages
 33 |     if len(st.session_state.debug_messages) > 10:
 34 |         st.session_state.debug_messages = st.session_state.debug_messages[-10:]
 35 | 
 36 | # Main app container
 37 | st.title("🔌 MCP Dev Tools")
 38 | st.write("Explore and interact with Model Context Protocol (MCP) servers")
 39 | 
 40 | # Sidebar for configuration and debug
 41 | with st.sidebar:
 42 |     st.title("MCP Dev Tools")
 43 |     
 44 |     # Node.js status
 45 |     st.subheader("Environment Status")
 46 |     node_info = check_node_installations()
 47 |     
 48 |     # Display Node.js status
 49 |     if node_info['node']['installed']:
 50 |         st.success(f"✅ Node.js {node_info['node']['version']}")
 51 |     else:
 52 |         st.error("❌ Node.js not found")
 53 |         st.markdown("[Install Node.js](https://nodejs.org/)")
 54 |     
 55 |     # Display npm status
 56 |     if node_info['npm']['installed']:
 57 |         st.success(f"✅ npm {node_info['npm']['version']}")
 58 |     else:
 59 |         st.error("❌ npm not found")
 60 |     
 61 |     # Display npx status
 62 |     if node_info['npx']['installed']:
 63 |         st.success(f"✅ npx {node_info['npx']['version']}")
 64 |     else:
 65 |         st.error("❌ npx not found")
 66 |         
 67 |     # Warning if Node.js components are missing
 68 |     if not all(info['installed'] for info in node_info.values()):
 69 |         st.warning("⚠️ Some Node.js components are missing. MCP servers that depend on Node.js (using npx) will not work.")
 70 |     
 71 |     # Debug information section at the bottom of sidebar
 72 |     st.divider()
 73 |     st.subheader("Debug Information")
 74 |     
 75 |     # Display debug messages
 76 |     if st.session_state.debug_messages:
 77 |         for msg in st.session_state.debug_messages:
 78 |             st.text(msg)
 79 |     else:
 80 |         st.text("No debug messages")
 81 |         
 82 |     # Clear debug messages button
 83 |     if st.button("Clear Debug Messages"):
 84 |         st.session_state.debug_messages = []
 85 |         st.rerun()
 86 | 
 87 | # Add a message for pages selection
 88 | st.info("Select a page from the sidebar to get started")
 89 | 
 90 | # Add welcome info
 91 | st.markdown("""
 92 | ## Welcome to MCP Dev Tools
 93 | 
 94 | This tool helps you explore and interact with Model Context Protocol (MCP) servers. You can:
 95 | 
 96 | 1. View and connect to available MCP servers
 97 | 2. Explore tools, resources, and prompts provided by each server 
 98 | 3. Configure and manage server connections
 99 | 
100 | Select an option from the sidebar to get started.
101 | """)
102 | 
103 | # Footer
104 | st.divider()
105 | st.write("MCP Dev Tools | Built with Streamlit")
106 | 
```
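`check_node_installations` also comes from `frontend/utils.py` (not shown on this page). A sketch that matches the shape the sidebar code reads (`installed` and `version` keys per tool), assuming a `shutil.which` lookup plus a `--version` probe:

```python
import shutil
import subprocess

def check_node_installations() -> dict:
    """Probe for node/npm/npx; each entry carries 'installed' (bool)
    and 'version' (str or None when the tool is absent or fails)."""
    info = {}
    for tool in ("node", "npm", "npx"):
        path = shutil.which(tool)
        version = None
        if path:
            try:
                out = subprocess.run(
                    [path, "--version"], capture_output=True, text=True, timeout=10
                )
                version = out.stdout.strip()
            except (OSError, subprocess.TimeoutExpired):
                path = None
        info[tool] = {"installed": path is not None, "version": version}
    return info
```

The sidebar's `all(info['installed'] for info in node_info.values())` check works unchanged against this shape.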

--------------------------------------------------------------------------------
/tools/ddg_search.py:
--------------------------------------------------------------------------------

```python
  1 | """
  2 | DuckDuckGo search tool for MCP server.
  3 | 
  4 | This module provides functionality to search the web using DuckDuckGo's search engine.
  5 | It leverages the duckduckgo_search package to perform text-based web searches and
  6 | returns formatted results.
  7 | 
  8 | Features:
  9 | - Web search with customizable parameters
 10 | - Region-specific search support
 11 | - SafeSearch filtering options
 12 | - Time-limited search results
 13 | - Maximum results configuration
 14 | - Error handling for rate limits and timeouts
 15 | """
 16 | 
 17 | from duckduckgo_search import DDGS
 18 | from duckduckgo_search.exceptions import (
 19 |     DuckDuckGoSearchException,
 20 |     RatelimitException,
 21 |     TimeoutException
 22 | )
 23 | 
 24 | async def search_duckduckgo(
 25 |     keywords: str,
 26 |     region: str = "wt-wt",
 27 |     safesearch: str = "moderate",
 28 |     timelimit: str | None = None,
 29 |     max_results: int = 10
 30 | ) -> str:
 31 |     """
 32 |     Perform a web search using DuckDuckGo and return formatted results.
 33 |     
 34 |     Args:
 35 |         keywords (str): The search query/keywords to search for.
 36 |         region (str, optional): Region code for search results. Defaults to "wt-wt" (worldwide).
 37 |         safesearch (str, optional): SafeSearch level: "on", "moderate", or "off". Defaults to "moderate".
 38 |         timelimit (str, optional): Time limit for results: "d" (day), "w" (week), "m" (month), "y" (year).
 39 |             Defaults to None (no time limit).
 40 |         max_results (int, optional): Maximum number of results to return. Defaults to 10.
 41 |     
 42 |     Returns:
 43 |         str: Formatted search results as text, or an error message if the search fails.
 44 |     """
 45 |     try:
 46 |         # Create a DuckDuckGo search instance
 47 |         ddgs = DDGS()
 48 |         
 49 |         # Perform the search with the given parameters
 50 |         results = ddgs.text(
 51 |             keywords=keywords,
 52 |             region=region,
 53 |             safesearch=safesearch,
 54 |             timelimit=timelimit,
 55 |             max_results=max_results
 56 |         )
 57 |         
 58 |         # Format the results into a readable string
 59 |         formatted_results = []
 60 |         
 61 |         # Check if results is empty
 62 |         if not results:
 63 |             return "No results found for your search query."
 64 |         
 65 |         # Process and format each result
 66 |         for i, result in enumerate(results, 1):
 67 |             formatted_result = (
 68 |                 f"{i}. {result.get('title', 'No title')}\n"
 69 |                 f"   URL: {result.get('href', 'No URL')}\n"
 70 |                 f"   {result.get('body', 'No description')}\n"
 71 |             )
 72 |             formatted_results.append(formatted_result)
 73 |         
 74 |         # Join all formatted results with a separator
 75 |         return "\n".join(formatted_results)
 76 |     
 77 |     except RatelimitException:
 78 |         return "Error: DuckDuckGo search rate limit exceeded. Please try again later."
 79 |     
 80 |     except TimeoutException:
 81 |         return "Error: The search request timed out. Please try again."
 82 |     
 83 |     except DuckDuckGoSearchException as e:
 84 |         return f"Error: DuckDuckGo search failed - {str(e)}"
 85 |     
 86 |     except Exception as e:
 87 |         return f"Error: An unexpected error occurred - {str(e)}"
 88 | 
 89 | # Standalone test functionality
 90 | if __name__ == "__main__":
 91 |     import asyncio
 92 |     
 93 |     async def test():
 94 |         # Example usage with a test query
 95 |         query = "Python programming language"
 96 |         result = await search_duckduckgo(query, max_results=3)
 97 |         print(f"Search results for '{query}':")
 98 |         print(result)
 99 |     
100 |     # Run the test function in an async event loop
101 |     asyncio.run(test())
102 | 
```
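The result-formatting loop in `search_duckduckgo` can likewise be exercised without hitting the network; a sketch over the dict shape `ddgs.text` returns (`format_results` is a hypothetical helper, not part of the module):

```python
def format_results(results: list[dict]) -> str:
    """Render DDG result dicts (title/href/body) as the numbered
    text format the tool returns, with the same fallbacks."""
    if not results:
        return "No results found for your search query."
    lines = []
    for i, r in enumerate(results, 1):
        lines.append(
            f"{i}. {r.get('title', 'No title')}\n"
            f"   URL: {r.get('href', 'No URL')}\n"
            f"   {r.get('body', 'No description')}\n"
        )
    return "\n".join(lines)
```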

--------------------------------------------------------------------------------
/server.py:
--------------------------------------------------------------------------------

```python
  1 | #!/usr/bin/env python3
  2 | import sys
  3 | 
  4 | """
  5 | MCP Server with web scraping tool.
  6 | 
  7 | This server implements a Model Context Protocol (MCP) server that provides web scraping
  8 | functionality. It offers a tool to convert regular URLs into r.jina.ai prefixed URLs
  9 | and fetch their content as markdown. This allows for easy conversion of web content
 10 | into a markdown format suitable for various applications.
 11 | 
 12 | Key Features:
 13 | - URL conversion and fetching
 14 | - Support for both stdio and SSE transport mechanisms
 15 | - Command-line configuration options
 16 | - Asynchronous web scraping functionality
 17 | """
 18 | 
 19 | import argparse
 20 | from mcp.server.fastmcp import FastMCP
 21 | 
 22 | # Import our custom tools
 23 | from tools.web_scrape import fetch_url_as_markdown
 24 | from tools.ddg_search import search_duckduckgo
 25 | from tools.crawl4ai_scraper import crawl_and_extract_markdown
 26 | 
 27 | # Initialize the MCP server with a descriptive name that reflects its purpose
 28 | mcp = FastMCP("Web Tools")
 29 | 
 30 | @mcp.tool()
 31 | async def web_scrape(url: str) -> str:
 32 |     """
 33 |     Convert a URL to use r.jina.ai as a prefix and fetch the markdown content.
 34 |     This tool wraps the fetch_url_as_markdown function to expose it as an MCP tool.
 35 |     
 36 |     Args:
 37 |         url (str): The URL to convert and fetch. Can be with or without http(s):// prefix.
 38 |         
 39 |     Returns:
 40 |         str: The markdown content if successful, or an error message if not.
 41 |     """
 42 |     return await fetch_url_as_markdown(url)
 43 | 
 44 | @mcp.tool()
 45 | async def ddg_search(query: str, region: str = "wt-wt", safesearch: str = "moderate", timelimit: str = None, max_results: int = 10) -> str:
 46 |     """
 47 |     Search the web using DuckDuckGo and return formatted results.
 48 |     
 49 |     Args:
 50 |         query (str): The search query to look for.
 51 |         region (str, optional): Region code for search results, e.g., "wt-wt" (worldwide), "us-en" (US English). Defaults to "wt-wt".
 52 |         safesearch (str, optional): SafeSearch level: "on", "moderate", or "off". Defaults to "moderate".
 53 |         timelimit (str, optional): Time limit for results: "d" (day), "w" (week), "m" (month), "y" (year). Defaults to None.
 54 |         max_results (int, optional): Maximum number of results to return. Defaults to 10.
 55 |         
 56 |     Returns:
 57 |         str: Formatted search results as text, or an error message if the search fails.
 58 |     """
 59 |     return await search_duckduckgo(keywords=query, region=region, safesearch=safesearch, timelimit=timelimit, max_results=max_results)
 60 | 
 61 | @mcp.tool()
 62 | async def advanced_scrape(url: str) -> str:
 63 |     """
 64 |     Scrape a webpage using advanced techniques and return clean, well-formatted markdown.
 65 |     
 66 |     This tool uses Crawl4AI to extract the main content from a webpage while removing
 67 |     navigation bars, sidebars, footers, ads, and other non-essential elements. The result
 68 |     is clean, well-formatted markdown focused on the actual content of the page.
 69 |     
 70 |     Args:
 71 |         url (str): The URL to scrape. Can be with or without http(s):// prefix.
 72 |         
 73 |     Returns:
 74 |         str: Well-formatted markdown content if successful, or an error message if not.
 75 |     """
 76 |     return await crawl_and_extract_markdown(url)
 77 | 
 78 | if __name__ == "__main__":
 79 |     # Log Python version for debugging purposes
 80 |     print(f"Using Python {sys.version}", file=sys.stderr)
 81 |     
 82 |     # Set up command-line argument parsing with descriptive help messages
 83 |     parser = argparse.ArgumentParser(description="MCP Server with web tools")
 84 |     parser.add_argument(
 85 |         "--transport", 
 86 |         choices=["stdio", "sse"], 
 87 |         default="stdio",
 88 |         help="Transport mechanism to use (default: stdio)"
 89 |     )
 90 |     parser.add_argument(
 91 |         "--host", 
 92 |         default="localhost",
 93 |         help="Host to bind to when using SSE transport (default: localhost)"
 94 |     )
 95 |     parser.add_argument(
 96 |         "--port", 
 97 |         type=int, 
 98 |         default=5000,
 99 |         help="Port to bind to when using SSE transport (default: 5000)"
100 |     )    
101 |     args = parser.parse_args()
102 |     
103 |     # Start the server with the specified transport mechanism
104 |     if args.transport == "stdio":
105 |         print("Starting MCP server with stdio transport...", file=sys.stderr)
106 |         mcp.run(transport="stdio")
107 |     else:
108 |         print(f"Starting MCP server with SSE transport on {args.host}:{args.port}...", file=sys.stderr)
109 |         # FastMCP.run() accepts only the transport; host/port are configured via settings
110 |         mcp.settings.host = args.host
111 |         mcp.settings.port = args.port
112 |         mcp.run(transport="sse")
113 | 
```
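To use this server from an MCP host such as Claude Desktop, it can be registered in the host's config file (the same file `frontend/utils.py` reads). A minimal entry might look like the following; the command and the absolute path are assumptions to adapt to your checkout:

```json
{
  "mcpServers": {
    "web-tools": {
      "command": "python",
      "args": ["/path/to/MCP/server.py", "--transport", "stdio"]
    }
  }
}
```

The `--transport stdio` flag is optional here, since stdio is the server's default.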

--------------------------------------------------------------------------------
/docs/01-introduction-to-mcp.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Introduction to Model Context Protocol (MCP)
  2 | 
  3 | ## What is MCP?
  4 | 
  5 | The Model Context Protocol (MCP) is an open standard that defines how Large Language Models (LLMs) like Claude, GPT, and others can interact with external systems, data sources, and tools. MCP establishes a standardized way for applications to provide context to LLMs, enabling them to access real-time data and perform actions beyond their training data.
  6 | 
  7 | Think of MCP as a "USB-C for AI" - a standard interface that allows different LLMs to connect to various data sources and tools without requiring custom integrations for each combination.
  8 | 
  9 | ## Why MCP Exists
 10 | 
 11 | Before MCP, integrating LLMs with external tools and data sources required:
 12 | 
 13 | 1. Custom integrations for each LLM and tool combination
 14 | 2. Proprietary protocols specific to each LLM provider
 15 | 3. Directly exposing APIs and data to the LLM, raising security concerns
 16 | 4. Duplicating integration efforts across different projects
 17 | 
 18 | MCP solves these problems by:
 19 | 
 20 | 1. **Standardization**: Defining a common protocol for all LLMs and tools
 21 | 2. **Separation of concerns**: Keeping LLM interactions separate from tool functionality
 22 | 3. **Security**: Providing controlled access to external systems
 23 | 4. **Reusability**: Allowing tools to be shared across different LLMs and applications
 24 | 
 25 | ## Key Benefits of MCP
 26 | 
 27 | - **Consistency**: Common interface across different LLMs and tools
 28 | - **Modularity**: Tools can be developed independently of LLMs
 29 | - **Security**: Fine-grained control over LLM access to systems
 30 | - **Ecosystem**: Growing library of pre-built tools and integrations
 31 | - **Flexibility**: Support for different transport mechanisms and deployment models
 32 | - **Vendor Agnosticism**: Not tied to any specific LLM provider
 33 | 
 34 | ## Core Architecture
 35 | 
 36 | MCP follows a client-server architecture:
 37 | 
 38 | ```mermaid
 39 | flowchart LR
 40 |     subgraph "Host Application"
 41 |         LLM[LLM Interface]
 42 |         Client[MCP Client]
 43 |     end
 44 |     subgraph "External Systems"
 45 |         Server1[MCP Server 1]
 46 |         Server2[MCP Server 2]
 47 |         Server3[MCP Server 3]
 48 |     end
 49 |     LLM <--> Client
 50 |     Client <--> Server1
 51 |     Client <--> Server2
 52 |     Client <--> Server3
 53 |     Server1 <--> DB[(Database)]
 54 |     Server2 <--> API[API Service]
 55 |     Server3 <--> Files[(File System)]
 56 | ```
 57 | 
 58 | - **MCP Host**: An application that hosts an LLM (like Claude desktop app)
 59 | - **MCP Client**: The component in the host that communicates with MCP servers
 60 | - **MCP Server**: A service that exposes tools, resources, and prompts to clients
 61 | - **Transport Layer**: The communication mechanism between clients and servers (stdio, SSE, etc.)
 62 | 
 63 | ## Core Components
 64 | 
 65 | MCP is built around three core primitives:
 66 | 
 67 | ### 1. Tools
 68 | 
 69 | Tools are functions that LLMs can call to perform actions or retrieve information. They follow a request-response pattern, where the LLM provides input parameters and receives a result.
 70 | 
 71 | Examples:
 72 | - Searching a database
 73 | - Calculating values
 74 | - Making API calls
 75 | - Manipulating files
 76 | 
 77 | ```mermaid
 78 | sequenceDiagram
 79 |     LLM->>MCP Client: Request tool execution
 80 |     MCP Client->>User: Request permission
 81 |     User->>MCP Client: Grant permission
 82 |     MCP Client->>MCP Server: Execute tool
 83 |     MCP Server->>MCP Client: Return result
 84 |     MCP Client->>LLM: Provide result
 85 | ```
 86 | 
 87 | ### 2. Resources
 88 | 
 89 | Resources are data sources that LLMs can read. They are identified by URIs and can be static or dynamic.
 90 | 
 91 | Examples:
 92 | - File contents
 93 | - Database records
 94 | - API responses
 95 | - System information
 96 | 
 97 | ```mermaid
 98 | sequenceDiagram
 99 |     LLM->>MCP Client: Request resource
100 |     MCP Client->>MCP Server: Get resource
101 |     MCP Server->>MCP Client: Return resource content
102 |     MCP Client->>LLM: Provide content
103 | ```
104 | 
105 | ### 3. Prompts
106 | 
107 | Prompts are templates that help LLMs interact with servers effectively. They provide structured ways to formulate requests.
108 | 
109 | Examples:
110 | - Query templates
111 | - Analysis frameworks
112 | - Structured response formats
113 | 
114 | ```mermaid
115 | sequenceDiagram
116 |     User->>MCP Client: Select prompt
117 |     MCP Client->>MCP Server: Get prompt template
118 |     MCP Server->>MCP Client: Return template
119 |     MCP Client->>LLM: Apply template to interaction
120 | ```
121 | 
122 | ## Control Flow
123 | 
124 | An important aspect of MCP is how control flows between components:
125 | 
126 | | Component | Control | Description |
127 | |-----------|---------|-------------|
128 | | Tools | Model-controlled | LLM decides when to use tools (with user permission) |
129 | | Resources | Application-controlled | The client app determines when to provide resources |
130 | | Prompts | User-controlled | Explicitly selected by users for specific interactions |
131 | 
132 | This separation of control ensures that each component is used appropriately and securely.
133 | 
134 | ## Transport Mechanisms
135 | 
136 | MCP supports multiple transport mechanisms for communication between clients and servers:
137 | 
138 | ### 1. Standard Input/Output (stdio)
139 | 
140 | Uses standard input and output streams for communication. Ideal for:
141 | - Local processes
142 | - Command-line tools
143 | - Simple integrations
144 | 
145 | ### 2. Server-Sent Events (SSE)
146 | 
147 | Uses HTTP with Server-Sent Events for server-to-client messages and HTTP POST for client-to-server messages. Suitable for:
148 | - Web applications
149 | - Remote services
150 | - Distributed systems
151 | 
152 | Both transports use [JSON-RPC](https://www.jsonrpc.org/) 2.0 as the messaging format.
153 | 
154 | ## The MCP Ecosystem
155 | 
156 | The MCP ecosystem consists of:
157 | 
158 | - **MCP Specification**: The formal protocol definition
159 | - **SDKs**: Libraries for building clients and servers in different languages
160 | - **Pre-built Servers**: Ready-to-use servers for common services
161 | - **Hosts**: Applications that support MCP for LLM interactions
162 | - **Tools**: Community-developed tools and integrations
163 | 
164 | ## Getting Started
165 | 
166 | To start working with MCP, you'll need:
167 | 
168 | 1. An MCP host (like Claude Desktop or a custom client)
169 | 2. Access to MCP servers (pre-built or custom)
170 | 3. Basic understanding of the MCP concepts
171 | 
172 | The following documents in this series will guide you through:
173 | - Building your own MCP servers
174 | - Using existing MCP servers
175 | - Troubleshooting common issues
176 | - Extending the ecosystem with new tools
177 | 
178 | ## Resources
179 | 
180 | - [Official MCP Documentation](https://modelcontextprotocol.io/)
181 | - [MCP GitHub Organization](https://github.com/modelcontextprotocol)
182 | - [MCP Specification](https://spec.modelcontextprotocol.io/)
183 | - [Example Servers](https://github.com/modelcontextprotocol/servers)
184 | 
```
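As the introduction above notes, both transports frame their messages as JSON-RPC 2.0. A sketch of a `tools/call` round trip (the tool name and arguments are illustrative; the field shapes follow the MCP specification):

```json
{"jsonrpc": "2.0", "id": 1, "method": "tools/call",
 "params": {"name": "web_scrape", "arguments": {"url": "example.com"}}}
```

and the corresponding response carries the tool output as content items:

```json
{"jsonrpc": "2.0", "id": 1,
 "result": {"content": [{"type": "text", "text": "...markdown content..."}]}}
```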

--------------------------------------------------------------------------------
/frontend/pages/03_Documentation.py:
--------------------------------------------------------------------------------

```python
  1 | import streamlit as st
  2 | import os
  3 | import sys
  4 | from pathlib import Path
  5 | import re
  6 | 
  7 | # Add the parent directory to the Python path to import utils
  8 | sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))
  9 | from frontend.utils import get_markdown_files
 10 | 
 11 | st.title("Documentation")
 12 | 
 13 | # Define the docs directory path
 14 | docs_dir = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))), "docs")
 15 | 
 16 | # Helper function to calculate the height for a mermaid diagram
 17 | def calculate_diagram_height(mermaid_code):
 18 |     # Count the number of lines in the diagram
 19 |     line_count = len(mermaid_code.strip().split('\n'))
 20 |     
 21 |     # Estimate the height based on complexity and type
 22 |     base_height = 100  # Minimum height
 23 |     
 24 |     # Add height based on the number of lines
 25 |     line_height = 30 if line_count <= 5 else 25  # Adjust per-line height based on total lines for density
 26 |     height = base_height + (line_count * line_height)
 27 |     
 28 |     # Extra height for different diagram types
 29 |     if "flowchart" in mermaid_code.lower() or "graph" in mermaid_code.lower():
 30 |         height += 50
 31 |     elif "sequenceDiagram" in mermaid_code:
 32 |         height += 100  # Sequence diagrams typically need more height
 33 |     elif "classDiagram" in mermaid_code:
 34 |         height += 75
 35 |     
 36 |     # Extra height for diagrams with many connections
 37 |     if mermaid_code.count("-->") + mermaid_code.count("<--") + mermaid_code.count("-.-") > 5:
 38 |         height += 100
 39 |     
 40 |     # Extra height if many items in diagram
 41 |     node_count = len(re.findall(r'\[[^\]]+\]', mermaid_code))
 42 |     if node_count > 5:
 43 |         height += node_count * 20
 44 |     
 45 |     return height
 46 | 
 47 | # Helper function to extract and render mermaid diagrams
 48 | def render_markdown_with_mermaid(content):
 49 |     # Regular expression to find mermaid code blocks
 50 |     mermaid_pattern = r"```mermaid\s*([\s\S]*?)\s*```"
 51 |     
 52 |     # Find all mermaid diagrams
 53 |     mermaid_blocks = re.findall(mermaid_pattern, content)
 54 |     
 55 |     # Replace mermaid blocks with placeholders
 56 |     content_with_placeholders = re.sub(mermaid_pattern, "MERMAID_DIAGRAM_PLACEHOLDER", content)
 57 |     
 58 |     # Split content by placeholders
 59 |     parts = content_with_placeholders.split("MERMAID_DIAGRAM_PLACEHOLDER")
 60 |     
 61 |     # Render each part with mermaid diagrams in between
 62 |     for i, part in enumerate(parts):
 63 |         if part.strip():
 64 |             st.markdown(part)
 65 |         
 66 |         # Add mermaid diagram after this part (if there is one)
 67 |         if i < len(mermaid_blocks):
 68 |             mermaid_code = mermaid_blocks[i]
 69 |             
 70 |             # Calculate appropriate height for this diagram
 71 |             diagram_height = calculate_diagram_height(mermaid_code)
 72 |             
 73 |             # Render mermaid diagram using streamlit components
 74 |             st.components.v1.html(
 75 |                 f"""
 76 |                 <div class="mermaid" style="margin: 20px 0;">
 77 |                 {mermaid_code}
 78 |                 </div>
 79 |                 <script src="https://cdn.jsdelivr.net/npm/mermaid@9/dist/mermaid.min.js"></script>
 80 |                 <script>
 81 |                     mermaid.initialize({{ 
 82 |                         startOnLoad: true,
 83 |                         theme: 'default',
 84 |                         flowchart: {{ 
 85 |                             useMaxWidth: false,
 86 |                             htmlLabels: true,
 87 |                             curve: 'cardinal'
 88 |                         }}
 89 |                     }});
 90 |                 </script>
 91 |                 """,
 92 |                 height=diagram_height,
 93 |                 scrolling=True
 94 |             )
 95 | 
 96 | # Check if docs directory exists
 97 | if not os.path.exists(docs_dir):
 98 |     st.error(f"Documentation directory not found: {docs_dir}")
 99 | else:
100 |     # Get list of markdown files
101 |     markdown_files = get_markdown_files(docs_dir)
102 |     
103 |     # Sidebar for document selection
104 |     with st.sidebar:
105 |         st.subheader("Select Document")
106 |         
107 |         if not markdown_files:
108 |             st.info("No documentation files found")
109 |         else:
110 |             # Create options for the selectbox - use filenames without path and extension
111 |             file_options = [f.stem for f in markdown_files]
112 |             
113 |             # Select document
114 |             selected_doc = st.selectbox(
115 |                 "Choose a document", 
116 |                 file_options,
117 |                 format_func=lambda x: x.replace("-", " ").title(),
118 |                 key="doc_selection"
119 |             )
120 |             
121 |             # Find the selected file path
122 |             selected_file_path = next((f for f in markdown_files if f.stem == selected_doc), None)
123 |             
124 |             # Store selection in session state
125 |             if selected_file_path:
126 |                 st.session_state["selected_doc_path"] = str(selected_file_path)
127 | 
128 |     # Display the selected markdown file
129 |     if "selected_doc_path" in st.session_state:
130 |         selected_path = st.session_state["selected_doc_path"]
131 |         
132 |         try:
133 |             with open(selected_path, 'r') as f:
134 |                 content = f.read()
135 |             
136 |             # Set style for better code rendering
137 |             st.markdown(
138 |                 """
139 |                 <style>
140 |                 code {
141 |                     white-space: pre-wrap !important;
142 |                 }
143 |                 .mermaid {
144 |                     text-align: center !important;
145 |                 }
146 |                 </style>
147 |                 """, 
148 |                 unsafe_allow_html=True
149 |             )
150 |             
151 |             # Use the custom function to render markdown with mermaid
152 |             render_markdown_with_mermaid(content)
153 |             
154 |         except Exception as e:
155 |             st.error(f"Error loading document: {str(e)}")
156 |     else:
157 |         if markdown_files:
158 |             # Display the first document by default
159 |             try:
160 |                 with open(str(markdown_files[0]), 'r') as f:
161 |                     content = f.read()
162 |                 
163 |                 # Use the custom function to render markdown with mermaid
164 |                 render_markdown_with_mermaid(content)
165 |                 
166 |                 # Store the selected doc in session state
167 |                 st.session_state["selected_doc_path"] = str(markdown_files[0])
168 |             except Exception as e:
169 |                 st.error(f"Error loading default document: {str(e)}")
170 |         else:
171 |             st.info("Select a document from the sidebar to view documentation")
172 | 
```
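The extract-and-split approach in `render_markdown_with_mermaid` above can be exercised in isolation. A minimal sketch with a hand-made sample document (the sample text is an assumption; the regex is the one the page uses, built indirectly so the backticks stay fence-safe):

```python
import re

FENCE = chr(96) * 3  # three backticks, built indirectly to keep this snippet fence-safe
MERMAID_PATTERN = FENCE + r"mermaid\s*([\s\S]*?)\s*" + FENCE

sample = (
    "Intro text.\n\n"
    + FENCE + "mermaid\nflowchart LR\n    A --> B\n" + FENCE + "\n\n"
    + "Closing text.\n"
)

# Pull out the diagram bodies, then split the surrounding prose on placeholders,
# mirroring render_markdown_with_mermaid above.
blocks = re.findall(MERMAID_PATTERN, sample)
parts = re.sub(MERMAID_PATTERN, "PLACEHOLDER", sample).split("PLACEHOLDER")

print(len(blocks))                # 1
print(blocks[0].splitlines()[0])  # flowchart LR
print(len(parts))                 # 2
```

Each prose part is rendered with `st.markdown`, with a mermaid component injected between consecutive parts.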

--------------------------------------------------------------------------------
/frontend/utils.py:
--------------------------------------------------------------------------------

```python
  1 | import os
  2 | import json
  3 | import json5
  4 | import streamlit as st
  5 | import subprocess
  6 | import asyncio
  7 | import sys
  8 | import shutil
  9 | from pathlib import Path
 10 | from mcp import ClientSession, StdioServerParameters
 11 | from mcp.client.stdio import stdio_client
 12 | 
 13 | # Default Claude Desktop config path (macOS only; Windows/Linux use different locations)
 14 | default_config_path = os.path.expanduser("~/Library/Application Support/Claude/claude_desktop_config.json")
 15 | 
 16 | def load_config(config_path):
 17 |     """Load the Claude Desktop config file"""
 18 |     try:
 19 |         with open(config_path, 'r') as f:
 20 |             # Use json5 to handle potential JSON5 format (comments, trailing commas)
 21 |             return json5.load(f)
 22 |     except Exception as e:
 23 |         st.error(f"Error loading config file: {str(e)}")
 24 |         return None
 25 | 
 26 | def find_executable(name):
 27 |     """Find the full path to an executable"""
 28 |     path = shutil.which(name)
 29 |     if path:
 30 |         return path
 31 |     
 32 |     # Try common locations for Node.js executables
 33 |     if name in ['node', 'npm', 'npx']:
 34 |         # Check user's home directory for nvm or other Node.js installations
 35 |         home = Path.home()
 36 |         possible_paths = [
 37 |             home / '.nvm' / 'versions' / 'node' / '*' / 'bin' / name,
 38 |             home / 'node_modules' / '.bin' / name,
 39 |             home / '.npm-global' / 'bin' / name,
 40 |             # Add Mac Homebrew path
 41 |             Path('/usr/local/bin') / name,
 42 |             Path('/opt/homebrew/bin') / name,
 43 |         ]
 44 |         
 45 |         for p in possible_paths:
 46 |             if isinstance(p, Path) and '*' in str(p):
 47 |                 # Handle wildcard paths
 48 |                 parent = p.parent.parent
 49 |                 if parent.exists():
 50 |                     for version_dir in parent.glob('*'):
 51 |                         full_path = version_dir / 'bin' / name
 52 |                         if full_path.exists():
 53 |                             return str(full_path)
 54 |             elif Path(str(p)).exists():
 55 |                 return str(p)
 56 |     
 57 |     return None
 58 | 
 59 | def check_node_installations():
 60 |     """Check if Node.js, npm, and npx are installed and return their versions"""
 61 |     node_installed = bool(find_executable('node'))
 62 |     node_version = None
 63 |     npm_installed = bool(find_executable('npm'))
 64 |     npm_version = None
 65 |     npx_installed = bool(find_executable('npx'))
 66 |     npx_version = None
 67 | 
 68 |     if node_installed:
 69 |         try:
 70 |             node_version = subprocess.check_output([find_executable('node'), '--version']).decode().strip()
 71 |         except Exception:
 72 |             pass
 73 | 
 74 |     if npm_installed:
 75 |         try:
 76 |             npm_version = subprocess.check_output([find_executable('npm'), '--version']).decode().strip()
 77 |         except Exception:
 78 |             pass
 79 |             
 80 |     if npx_installed:
 81 |         try:
 82 |             npx_version = subprocess.check_output([find_executable('npx'), '--version']).decode().strip()
 83 |         except Exception:
 84 |             pass
 85 |     
 86 |     return {
 87 |         'node': {'installed': node_installed, 'version': node_version},
 88 |         'npm': {'installed': npm_installed, 'version': npm_version},
 89 |         'npx': {'installed': npx_installed, 'version': npx_version}
 90 |     }
 91 | 
 92 | async def connect_to_server(command, args=None, env=None):
 93 |     """Connect to an MCP server and list its tools"""
 94 |     try:
 95 |         # Find the full path to the command
 96 |         print(f"Finding executable for command: {command}")
 97 |         full_command = find_executable(command)
 98 |         if not full_command:
 99 |             st.error(f"Command '{command}' not found. Make sure it's installed and in your PATH.")
100 |             if command == 'npx':
101 |                 st.error("Node.js may not be installed or properly configured. Install Node.js from https://nodejs.org")
102 |             return {"tools": [], "resources": [], "prompts": []}
103 |         
104 |         # Use the full path to the command
105 |         command = full_command
106 |         
107 |         server_params = StdioServerParameters(
108 |             command=command,
109 |             args=args or [],
110 |             env=env  # None lets the MCP SDK fall back to a default environment
111 |         )
112 |         print(f"Connecting to server with command: {command} and args: {args}")
113 |         
114 |         async with stdio_client(server_params) as (read, write):
115 |             async with ClientSession(read, write) as session:
116 |                 await session.initialize()
117 |                 
118 |                 # List tools
119 |                 tools_result = await session.list_tools()
120 |                 
121 |                 # Try to list resources and prompts
122 |                 try:
123 |                     resources_result = await session.list_resources()
124 |                     resources = resources_result.resources if hasattr(resources_result, 'resources') else []
125 |                 except Exception:
126 |                     resources = []
127 |                 
128 |                 try:
129 |                     prompts_result = await session.list_prompts()
130 |                     prompts = prompts_result.prompts if hasattr(prompts_result, 'prompts') else []
131 |                 except Exception:
132 |                     prompts = []
133 |                 
134 |                 return {
135 |                     "tools": tools_result.tools if hasattr(tools_result, 'tools') else [],
136 |                     "resources": resources,
137 |                     "prompts": prompts
138 |                 }
139 |     except Exception as e:
140 |         st.error(f"Error connecting to server: {str(e)}")
141 |         return {"tools": [], "resources": [], "prompts": []}
142 | 
143 | async def call_tool(command, args, tool_name, tool_args):
144 |     """Call a specific tool and return the result"""
145 |     try:
146 |         # Find the full path to the command
147 |         full_command = find_executable(command)
148 |         if not full_command:
149 |             return f"Error: Command '{command}' not found. Make sure it's installed and in your PATH."
150 |         
151 |         # Use the full path to the command
152 |         command = full_command
153 |         
154 |         server_params = StdioServerParameters(
155 |             command=command,
156 |             args=args or [],
157 |             env=None  # let the MCP SDK supply the default environment
158 |         )
159 |         
160 |         async with stdio_client(server_params) as (read, write):
161 |             async with ClientSession(read, write) as session:
162 |                 await session.initialize()
163 |                 
164 |                 # Call the tool
165 |                 result = await session.call_tool(tool_name, arguments=tool_args)
166 |                 
167 |                 # Format the result
168 |                 if hasattr(result, 'content') and result.content:
169 |                     content_text = []
170 |                     for item in result.content:
171 |                         if hasattr(item, 'text'):
172 |                             content_text.append(item.text)
173 |                     return "\n".join(content_text)
174 |                 return "Tool executed, but no text content was returned."
175 |     except Exception as e:
176 |         return f"Error calling tool: {str(e)}"
177 | 
178 | def get_markdown_files(docs_folder):
179 |     """Get list of markdown files in the docs folder"""
180 |     docs_path = Path(docs_folder)
181 |     if not docs_path.exists() or not docs_path.is_dir():
182 |         return []
183 |     
184 |     return sorted([f for f in docs_path.glob('*.md')])
185 | 
```
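The version-probe pattern in `check_node_installations` above can be sketched on its own. Here the probe targets `sys.executable` (an assumption made purely so the example runs on any machine; in `utils.py` the same pattern targets `node`, `npm`, and `npx`), and the exception handling is narrowed to the failures that actually occur:

```python
import subprocess
import sys
from typing import Optional

def probe_version(executable: str) -> Optional[str]:
    """Return `<executable> --version` output, or None if the probe fails."""
    try:
        return subprocess.check_output([executable, "--version"]).decode().strip()
    except (OSError, subprocess.CalledProcessError):
        return None

# Probe the current interpreter so the sketch works everywhere.
print(probe_version(sys.executable))  # e.g. "Python 3.11.4"
```

Returning `None` instead of raising keeps the caller's dict-building code simple, mirroring how `check_node_installations` reports missing tools.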

--------------------------------------------------------------------------------
/tools/crawl4ai_scraper.py:
--------------------------------------------------------------------------------

```python
  1 | """
  2 | Crawl4AI web scraping tool for MCP server.
  3 | 
  4 | This module provides advanced web scraping functionality using Crawl4AI.
  5 | It extracts content from web pages, removes non-essential elements like
  6 | navigation bars, footers, and sidebars, and returns well-formatted markdown
  7 | that preserves document structure including headings, code blocks, tables,
  8 | and image references.
  9 | 
 10 | Features:
 11 | - Clean content extraction with navigation, sidebar, and footer removal
 12 | - Preserves document structure (headings, lists, tables, code blocks)
 13 | - Automatic conversion to well-formatted markdown
 14 | - Support for JavaScript-rendered content
 15 | - Content filtering to focus on the main article/content
 16 | - Comprehensive error handling
 17 | """
 18 | 
 19 | import asyncio
 20 | import os
 21 | import re
 22 | import logging
 23 | from typing import Optional
 24 | 
 25 | from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, BrowserConfig, CacheMode
 26 | from crawl4ai.content_filter_strategy import PruningContentFilter
 27 | from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator
 28 | 
 29 | # Set up logging
 30 | logging.basicConfig(
 31 |     level=logging.INFO,
 32 |     format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
 33 | )
 34 | logger = logging.getLogger("crawl4ai_scraper")
 35 | 
 36 | async def crawl_and_extract_markdown(url: str, query: Optional[str] = None) -> str:
 37 |     """
 38 |     Crawl a webpage and extract well-formatted markdown content.
 39 |     
 40 |     Args:
 41 |         url: The URL to crawl
 42 |         query: Optional search query to focus content on (if None, extracts main content)
 43 |     
 44 |     Returns:
 45 |         str: Well-formatted markdown content from the webpage
 46 |     
 47 |     Raises:
 48 |         Exception: If crawling fails or content extraction encounters errors
 49 |     """
 50 |     try:
 51 |         # Configure the browser for optimal rendering
 52 |         browser_config = BrowserConfig(
 53 |             headless=True,
 54 |             viewport_width=1920,  # Wider viewport to capture more content
 55 |             viewport_height=1080,  # Taller viewport for the same reason
 56 |             java_script_enabled=True,
 57 |             text_mode=False,  # Set to False to ensure all content is loaded
 58 |             user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"
 59 |         )
 60 |         
 61 |         # Create a content filter for removing unwanted elements
 62 |         content_filter = PruningContentFilter(
 63 |             threshold=0.1,  # Very low threshold to keep more content
 64 |             threshold_type="dynamic",  # Dynamic threshold based on page content
 65 |             min_word_threshold=2  # Include very short text blocks for headings/code
 66 |         )
 67 |         
 68 |         # Configure markdown generator with options for structure preservation
 69 |         markdown_generator = DefaultMarkdownGenerator(
 70 |             content_filter=content_filter,
 71 |             options={
 72 |                 "body_width": 0,         # No wrapping
 73 |                 "ignore_images": False,   # Keep image references
 74 |                 "citations": True,        # Include link citations
 75 |                 "escape_html": False,     # Don't escape HTML in code blocks
 76 |                 "include_sup_sub": True,  # Preserve superscript/subscript
 77 |                 "pad_tables": True,       # Better table formatting
 78 |                 "mark_code": True,        # Better code block preservation
 79 |                 "code_language": "",      # Default code language
 80 |                 "wrap_links": False       # Preserve link formatting
 81 |             }
 82 |         )
 83 |         
 84 |         # Configure the crawler run for optimal structure extraction
 85 |         run_config = CrawlerRunConfig(
 86 |             verbose=False,
 87 |             # Content filtering
 88 |             markdown_generator=markdown_generator,
 89 |             word_count_threshold=2,  # Extremely low to include very short text blocks
 90 |             
 91 |             # Tag exclusions - remove unwanted elements
 92 |             excluded_tags=["nav", "footer", "aside"],
 93 |             excluded_selector=".nav, .navbar, .sidebar, .footer, #footer, #sidebar, " +
 94 |                              ".ads, .advertisement, .navigation, #navigation, " +
 95 |                              ".menu, #menu, .toc, .table-of-contents",
 96 |             
 97 |             # Wait conditions for JS content
 98 |             wait_until="networkidle",
 99 |             wait_for="css:pre, code, h1, h2, h3, table",  # Wait for important structural elements 
100 |             page_timeout=60000,
101 |             
102 |             # Don't limit to specific selectors to get full content
103 |             css_selector=None,
104 |             
105 |             # Other options
106 |             remove_overlay_elements=True,    # Remove modal popups
107 |             remove_forms=True,               # Remove forms
108 |             scan_full_page=True,             # Scan the full page
109 |             scroll_delay=0.5,                # Slower scroll for better content loading
110 |             cache_mode=CacheMode.BYPASS      # Bypass cache for fresh content
111 |         )
112 |         
113 |         # Create crawler and run it
114 |         async with AsyncWebCrawler(config=browser_config) as crawler:
115 |             result = await crawler.arun(url=url, config=run_config)
116 |             
117 |             if not result.success:
118 |                 raise Exception(f"Crawl failed: {result.error_message}")
119 |             
120 |             # Extract the title from metadata if available
121 |             title = "Untitled Document"
122 |             if result.metadata and "title" in result.metadata:
123 |                 title = result.metadata["title"]
124 |             
125 |             # Choose the best markdown content
126 |             markdown_content = ""
127 |             
128 |             # Try to get the best version of the markdown
129 |             if hasattr(result, "markdown_v2") and result.markdown_v2:
130 |                 if hasattr(result.markdown_v2, 'raw_markdown') and result.markdown_v2.raw_markdown:
131 |                     markdown_content = result.markdown_v2.raw_markdown
132 |                 elif hasattr(result.markdown_v2, 'markdown_with_citations') and result.markdown_v2.markdown_with_citations:
133 |                     markdown_content = result.markdown_v2.markdown_with_citations
134 |             elif hasattr(result, "markdown") and result.markdown:
135 |                 if isinstance(result.markdown, str):
136 |                     markdown_content = result.markdown
137 |                 elif hasattr(result.markdown, 'raw_markdown'):
138 |                     markdown_content = result.markdown.raw_markdown
139 |             elif result.cleaned_html:
140 |                 from html2text import html2text
141 |                 markdown_content = html2text(result.cleaned_html)
142 |             
143 |             # Post-process the markdown to fix common issues
144 |             
 145 |             # 1. Fix code blocks - add a default language to opening fences only
 146 |             markdown_content = re.sub(r'```[ \t]*\n(?=\S)', '```python\n', markdown_content)
147 |             
148 |             # 2. Fix broken headings - ensure space after # characters
149 |             markdown_content = re.sub(r'^(#{1,6})([^#\s])', r'\1 \2', markdown_content, flags=re.MULTILINE)
150 |             
151 |             # 3. Add spacing between sections for readability
152 |             markdown_content = re.sub(r'(\n#{1,6} .+?\n)(?=[^\n])', r'\1\n', markdown_content)
153 |             
154 |             # 4. Fix bullet points - ensure proper spacing
155 |             markdown_content = re.sub(r'^\*([^\s])', r'* \1', markdown_content, flags=re.MULTILINE)
156 |             
157 |             # 5. Format the final content with title and URL
158 |             final_content = f"Title: {title}\n\nURL Source: {result.url}\n\nMarkdown Content:\n{markdown_content}"
159 |             
160 |             return final_content
161 |                 
162 |     except Exception as e:
163 |         logger.error(f"Error crawling {url}: {str(e)}")
 164 |         raise Exception(f"Error crawling {url}: {str(e)}") from e
165 | 
166 | # Standalone test functionality
167 | if __name__ == "__main__":
168 |     import argparse
169 |     
170 |     parser = argparse.ArgumentParser(description="Extract structured markdown content from a webpage")
171 |     parser.add_argument("url", nargs="?", default="https://docs.llamaindex.ai/en/stable/understanding/agent/", 
172 |                         help="URL to crawl (default: https://docs.llamaindex.ai/en/stable/understanding/agent/)")
173 |     parser.add_argument("--output", help="Output file to save the markdown (default: scraped_content.md)")
174 |     parser.add_argument("--query", help="Optional search query to focus content")
175 |     
176 |     args = parser.parse_args()
177 |     
178 |     async def test():
179 |         url = args.url
180 |         print(f"Scraping {url}...")
181 |         
182 |         try:
183 |             if args.query:
184 |                 result = await crawl_and_extract_markdown(url, args.query)
185 |             else:
186 |                 result = await crawl_and_extract_markdown(url)
187 |             
188 |             # Show preview of content
189 |             preview_length = min(1000, len(result))
190 |             print("\nResult Preview (first 1000 chars):")
191 |             print(result[:preview_length] + "...\n" if len(result) > preview_length else result)
192 |             
193 |             # Print statistics
194 |             print(f"\nMarkdown length: {len(result)} characters")
195 |             
196 |             # Save to file
197 |             output_file = args.output if args.output else "scraped_content.md"
198 |             with open(output_file, "w", encoding="utf-8") as f:
199 |                 f.write(result)
200 |             print(f"Full content saved to '{output_file}'")
201 |             
202 |             return 0
203 |         except Exception as e:
204 |             print(f"Error: {str(e)}")
205 |             return 1
206 |     
 207 |     # Run the test function in an async event loop
 208 |     import sys
 209 |     exit_code = asyncio.run(test())
 210 |     sys.exit(exit_code)
211 | 
```

--------------------------------------------------------------------------------
/frontend/pages/01_My_Active_Servers.py:
--------------------------------------------------------------------------------

```python
  1 | import os
  2 | import json
  3 | import streamlit as st
  4 | import asyncio
  5 | import sys
  6 | 
  7 | # Add the parent directory to the Python path to import utils
  8 | sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))
  9 | from frontend.utils import load_config, connect_to_server, call_tool, default_config_path
 10 | 
 11 | st.title("My Active MCP Servers")
 12 | 
 13 | # Configuration and server selection in the sidebar
 14 | with st.sidebar:
 15 |     st.subheader("Configuration")
 16 |     
 17 |     # Config file path input with unique key
 18 |     config_path = st.text_input(
 19 |         "Path to config file", 
 20 |         value=st.session_state.get('config_path', default_config_path),
 21 |         key="config_path_input_sidebar"
 22 |     )
 23 |     
 24 |     # Update the session state with the new path
 25 |     st.session_state.config_path = config_path
 26 |     
 27 |     if st.button("Load Servers", key="load_servers_sidebar"):
 28 |         if os.path.exists(config_path):
 29 |             config_data = load_config(config_path)
 30 |             if config_data and 'mcpServers' in config_data:
 31 |                 st.session_state.config_data = config_data
 32 |                 st.session_state.servers = config_data.get('mcpServers', {})
 33 |                 
 34 |                 # Add debug message
 35 |                 message = f"Found {len(st.session_state.servers)} MCP servers in config"
 36 |                 if 'debug_messages' in st.session_state:
 37 |                     st.session_state.debug_messages.append(message)
 38 |                 
 39 |                 st.success(message)
 40 |             else:
 41 |                 error_msg = "No MCP servers found in config"
 42 |                 if 'debug_messages' in st.session_state:
 43 |                     st.session_state.debug_messages.append(error_msg)
 44 |                 st.error(error_msg)
 45 |         else:
 46 |             error_msg = f"Config file not found: {config_path}"
 47 |             if 'debug_messages' in st.session_state:
 48 |                 st.session_state.debug_messages.append(error_msg)
 49 |             st.error(error_msg)
 50 |     
 51 |     # Server selection dropdown
 52 |     st.divider()
 53 |     st.subheader("Server Selection")
 54 |     
 55 |     if 'servers' in st.session_state and st.session_state.servers:
 56 |         server_names = list(st.session_state.servers.keys())
 57 |         selected_server = st.selectbox(
 58 |             "Select an MCP server", 
 59 |             server_names,
 60 |             key="server_selection_sidebar"
 61 |         )
 62 |         
 63 |         if st.button("Connect", key="connect_button_sidebar"):
 64 |             server_config = st.session_state.servers.get(selected_server, {})
 65 |             command = server_config.get('command')
 66 |             args = server_config.get('args', [])
 67 |             env = server_config.get('env', {})
 68 |             
 69 |             with st.spinner(f"Connecting to {selected_server}..."):
 70 |                 # Add debug message
 71 |                 debug_msg = f"Connecting to {selected_server}..."
 72 |                 if 'debug_messages' in st.session_state:
 73 |                     st.session_state.debug_messages.append(debug_msg)
 74 |                 
 75 |                 # Connect to the server
 76 |                 server_info = asyncio.run(connect_to_server(command, args, env))
 77 |                 st.session_state[f'server_info_{selected_server}'] = server_info
 78 |                 st.session_state.active_server = selected_server
 79 |                 
 80 |                 # Add debug message about connection success/failure
 81 |                 if server_info.get('tools'):
 82 |                     success_msg = f"Connected to {selected_server}: {len(server_info['tools'])} tools"
 83 |                     if 'debug_messages' in st.session_state:
 84 |                         st.session_state.debug_messages.append(success_msg)
 85 |                 else:
 86 |                     error_msg = "Connected but no tools found"
 87 |                     if 'debug_messages' in st.session_state:
 88 |                         st.session_state.debug_messages.append(error_msg)
 89 |                 
 90 |                 # Force the page to refresh to show connected server details
 91 |                 st.rerun()
 92 |     else:
 93 |         st.info("Load config to see servers")
 94 | 
 95 | # Main area: Only display content when a server is connected
 96 | if 'active_server' in st.session_state and st.session_state.active_server:
 97 |     active_server = st.session_state.active_server
 98 |     server_info_key = f'server_info_{active_server}'
 99 |     
100 |     if server_info_key in st.session_state:
101 |         st.subheader(f"Connected to: {active_server}")
102 |         
103 |         server_info = st.session_state[server_info_key]
104 |         server_config = st.session_state.servers.get(active_server, {})
105 |         
106 |         # Display server configuration
107 |         with st.expander("Server Configuration"):
108 |             st.json(server_config)
109 |         
110 |         # Display tools
111 |         if server_info.get('tools'):
112 |             st.subheader("Available Tools")
113 |             
114 |             # Create tabs for each tool
115 |             tool_tabs = st.tabs([tool.name for tool in server_info['tools']])
116 |             
117 |             for i, tool in enumerate(server_info['tools']):
118 |                 with tool_tabs[i]:
119 |                     st.markdown(f"**Description:** {tool.description or 'No description provided'}")
120 |                     
121 |                     # Tool schema
122 |                     if hasattr(tool, 'inputSchema') and tool.inputSchema:
123 |                         with st.expander("Input Schema"):
124 |                             st.json(tool.inputSchema)
125 |                         
126 |                         # Generate form for tool inputs
127 |                         st.subheader("Call Tool")
128 |                         
129 |                         # Create a form
130 |                         with st.form(key=f"tool_form_{active_server}_{tool.name}"):
131 |                             # Fix duplicate ID error by adding unique keys for form fields
132 |                             tool_inputs = {}
133 |                             
134 |                             # Check if input schema has properties
135 |                             if 'properties' in tool.inputSchema:
136 |                                 # Create form inputs based on schema properties
137 |                                 for param_name, param_schema in tool.inputSchema['properties'].items():
138 |                                     param_type = param_schema.get('type', 'string')
139 |                                     
140 |                                     # Create unique key for each form field
141 |                                     field_key = f"{active_server}_{tool.name}_{param_name}"
142 |                                     
143 |                                     if param_type == 'string':
144 |                                         tool_inputs[param_name] = st.text_input(
145 |                                             f"{param_name}", 
146 |                                             help=param_schema.get('description', ''),
147 |                                             key=field_key
148 |                                         )
149 |                                     elif param_type == 'number' or param_type == 'integer':
150 |                                         tool_inputs[param_name] = st.number_input(
151 |                                             f"{param_name}", 
152 |                                             help=param_schema.get('description', ''),
153 |                                             key=field_key
154 |                                         )
155 |                                     elif param_type == 'boolean':
156 |                                         tool_inputs[param_name] = st.checkbox(
157 |                                             f"{param_name}", 
158 |                                             help=param_schema.get('description', ''),
159 |                                             key=field_key
160 |                                         )
161 |                                     # Add more types as needed
162 |                             
163 |                             # Submit button
164 |                             submit_button = st.form_submit_button(f"Execute {tool.name}")
165 |                             
166 |                             if submit_button:
167 |                                 # Get server config
168 |                                 command = server_config.get('command')
169 |                                 args = server_config.get('args', [])
170 |                                 
171 |                                 with st.spinner(f"Executing {tool.name}..."):
172 |                                     # Add debug message
173 |                                     if 'debug_messages' in st.session_state:
174 |                                         st.session_state.debug_messages.append(f"Executing {tool.name}")
175 |                                     
176 |                                     # Call the tool
177 |                                     result = asyncio.run(call_tool(command, args, tool.name, tool_inputs))
178 |                                     
179 |                                     # Display result
180 |                                     st.subheader("Result")
181 |                                     st.write(result)
182 |                     else:
183 |                         st.warning("No input schema available for this tool")
184 |         
185 |         # Display resources if any
186 |         if server_info.get('resources'):
187 |             with st.expander("Resources"):
188 |                 for resource in server_info['resources']:
189 |                     st.write(f"**{resource.name}:** {resource.uri}")
190 |                     if hasattr(resource, 'description') and resource.description:
191 |                         st.write(resource.description)
192 |                     st.divider()
193 |         
194 |         # Display prompts if any
195 |         if server_info.get('prompts'):
196 |             with st.expander("Prompts"):
197 |                 for prompt in server_info['prompts']:
198 |                     st.write(f"**{prompt.name}**")
199 |                     if hasattr(prompt, 'description') and prompt.description:
200 |                         st.write(prompt.description)
201 |                     st.divider()
202 |     else:
203 |         st.info(f"Server {active_server} is selected but not connected. Click 'Connect' in the sidebar.")
204 | else:
205 |     # Initial state when no server is connected
206 |     st.info("Select a server from the sidebar and click 'Connect' to start interacting with it.")
207 | 
```

--------------------------------------------------------------------------------
/docs/08-advanced-mcp-features.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Advanced MCP Features
  2 | 
  3 | This document explores advanced features and configurations for Model Context Protocol (MCP) servers. These techniques can help you build more powerful, secure, and maintainable MCP implementations.
  4 | 
  5 | ## Advanced Configuration
  6 | 
  7 | ### Server Lifecycle Management
  8 | 
  9 | The MCP server lifecycle can be managed with the `lifespan` parameter to set up resources on startup and clean them up on shutdown:
 10 | 
 11 | ```python
 12 | from contextlib import asynccontextmanager
 13 | from typing import AsyncIterator, Dict, Any
 14 | from mcp.server.fastmcp import FastMCP, Context
 15 | 
 16 | @asynccontextmanager
 17 | async def server_lifespan(server: FastMCP) -> AsyncIterator[Dict[str, Any]]:
 18 |     """Manage server lifecycle."""
 19 |     print("Server starting up...")
 20 |     
 21 |     # Initialize resources
 22 |     db_connection = await initialize_database()
 23 |     cache = initialize_cache()
 24 |     
 25 |     try:
 26 |         # Yield context to server
 27 |         yield {
 28 |             "db": db_connection,
 29 |             "cache": cache
 30 |         }
 31 |     finally:
 32 |         # Clean up resources
 33 |         print("Server shutting down...")
 34 |         await db_connection.close()
 35 |         cache.clear()
 36 | 
 37 | # Create server with lifespan
 38 | mcp = FastMCP("AdvancedServer", lifespan=server_lifespan)
 39 | 
 40 | # Access lifespan context in tools
 41 | @mcp.tool()
 42 | async def query_database(sql: str, ctx: Context) -> str:
 43 |     """Run a database query."""
 44 |     db = ctx.request_context.lifespan_context["db"]
 45 |     results = await db.execute(sql)
 46 |     return results
 47 | ```
 48 | 
 49 | ### Dependency Specification
 50 | 
 51 | You can specify dependencies for your server to ensure it has everything it needs:
 52 | 
 53 | ```python
 54 | # Specify dependencies for the server
 55 | mcp = FastMCP(
 56 |     "DependentServer",
 57 |     dependencies=[
 58 |         "pandas>=1.5.0",
 59 |         "numpy>=1.23.0",
 60 |         "scikit-learn>=1.1.0"
 61 |     ]
 62 | )
 63 | ```
 64 | 
 65 | This helps with:
 66 | - Documentation for users
 67 | - Verification during installation
 68 | - Clarity about requirements
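
The pins above are primarily informational. If you also want the server to fail fast when a pinned package is absent, a small startup check can be built on the standard library — `check_dependencies` below is a hypothetical helper, not part of the MCP SDK:

```python
import re
from importlib.metadata import version, PackageNotFoundError

def check_dependencies(pins):
    """Return the pins whose packages are not installed."""
    missing = []
    for pin in pins:
        # Strip any version specifier (>=, ==, <=, ~=, etc.) to get the bare name
        name = re.split(r"[><=~!]", pin, maxsplit=1)[0].strip()
        try:
            version(name)
        except PackageNotFoundError:
            missing.append(pin)
    return missing

missing = check_dependencies(["pandas>=1.5.0", "definitely-not-installed>=9.9"])
if missing:
    print(f"Missing dependencies: {missing}")
```

Note that this only verifies a package is installed, not that the installed version satisfies the pin.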
 69 | 
 70 | ### Environment Variables
 71 | 
 72 | Use environment variables for configuration:
 73 | 
 74 | ```python
 75 | import os
 76 | from dotenv import load_dotenv
 77 | 
 78 | # Load environment variables from .env file
 79 | load_dotenv()
 80 | 
 81 | # Access environment variables
 82 | API_KEY = os.environ.get("MY_API_KEY")
 83 | BASE_URL = os.environ.get("MY_BASE_URL", "https://api.default.com")
 84 | DEBUG = os.environ.get("DEBUG", "false").lower() == "true"
 85 | 
 86 | # Create server with configuration
 87 | mcp = FastMCP(
 88 |     "ConfigurableServer",
 89 |     config={
 90 |         "api_key": API_KEY,
 91 |         "base_url": BASE_URL,
 92 |         "debug": DEBUG
 93 |     }
 94 | )
 95 | 
 96 | # Access configuration in tools
 97 | @mcp.tool()
 98 | async def call_api(endpoint: str, ctx: Context) -> str:
 99 |     """Call an API endpoint."""
100 |     config = ctx.server.config
101 |     base_url = config["base_url"]
102 |     api_key = config["api_key"]
103 |     
104 |     # Use configuration
105 |     async with httpx.AsyncClient() as client:
106 |         response = await client.get(
107 |             f"{base_url}/{endpoint}",
108 |             headers={"Authorization": f"Bearer {api_key}"}
109 |         )
110 |         return response.text
111 | ```
112 | 
113 | ## Advanced Logging
114 | 
115 | ### Structured Logging
116 | 
117 | Implement structured logging for better analysis:
118 | 
119 | ```python
120 | import logging
121 | import json
122 | from datetime import datetime
123 | 
124 | class StructuredFormatter(logging.Formatter):
125 |     """Format logs as JSON for structured logging."""
126 |     
127 |     def format(self, record):
128 |         log_data = {
129 |             "timestamp": datetime.utcnow().isoformat(),
130 |             "level": record.levelname,
131 |             "name": record.name,
132 |             "message": record.getMessage(),
133 |             "module": record.module,
134 |             "function": record.funcName,
135 |             "line": record.lineno
136 |         }
137 |         
138 |         # Add exception info if present
139 |         if record.exc_info:
140 |             log_data["exception"] = self.formatException(record.exc_info)
141 |         
142 |         # Add custom fields if present
143 |         if hasattr(record, "data"):
144 |             log_data.update(record.data)
145 |         
146 |         return json.dumps(log_data)
147 | 
148 | # Set up structured logging
149 | logger = logging.getLogger("mcp")
150 | handler = logging.FileHandler("mcp_server.log")
151 | handler.setFormatter(StructuredFormatter())
152 | logger.addHandler(handler)
153 | logger.setLevel(logging.DEBUG)
154 | 
155 | # Log with extra data
156 | def log_with_data(level, message, **kwargs):
157 |     record = logging.LogRecord(
158 |         name="mcp",
159 |         level=level,
160 |         pathname="",
161 |         lineno=0,
162 |         msg=message,
163 |         args=(),
164 |         exc_info=None
165 |     )
166 |     record.data = kwargs
167 |     logger.handle(record)
168 | 
169 | # Usage
170 | log_with_data(
171 |     logging.INFO,
172 |     "Tool execution completed",
173 |     tool="web_scrape",
174 |     url="example.com",
175 |     execution_time=1.25,
176 |     result_size=1024
177 | )
178 | ```
179 | 
180 | ### Client Notifications
181 | 
182 | Send logging messages to clients:
183 | 
184 | ```python
185 | @mcp.tool()
186 | async def process_data(data: str, ctx: Context) -> str:
187 |     """Process data with client notifications."""
188 |     try:
189 |         # Send info message to client
190 |         ctx.info("Starting data processing")
191 |         
192 |         # Process data in steps
193 |         ctx.info("Step 1: Parsing data")
194 |         parsed_data = parse_data(data)
195 |         
196 |         ctx.info("Step 2: Analyzing data")
197 |         analysis = analyze_data(parsed_data)
198 |         
199 |         ctx.info("Step 3: Generating report")
200 |         report = generate_report(analysis)
201 |         
202 |         ctx.info("Processing complete")
203 |         return report
204 |         
205 |     except Exception as e:
206 |         # Send error message to client
207 |         ctx.error(f"Processing failed: {str(e)}")
208 |         raise
209 | ```
210 | 
211 | ### Progress Reporting
212 | 
213 | Report progress for long-running operations:
214 | 
215 | ```python
216 | @mcp.tool()
217 | async def process_large_file(file_path: str, ctx: Context) -> str:
218 |     """Process a large file with progress reporting."""
219 |     try:
220 |         # Get file size
221 |         file_size = os.path.getsize(file_path)
222 |         bytes_processed = 0
223 |         
224 |         # Open file
225 |         async with aiofiles.open(file_path, "rb") as f:
226 |             # Process in chunks
227 |             chunk_size = 1024 * 1024  # 1 MB
228 |             while True:
229 |                 chunk = await f.read(chunk_size)
230 |                 if not chunk:
231 |                     break
232 |                     
233 |                 # Process chunk
234 |                 process_chunk(chunk)
235 |                 
236 |                 # Update progress
237 |                 bytes_processed += len(chunk)
238 |                 progress = min(100, int(bytes_processed * 100 / file_size))
239 |                 await ctx.report_progress(progress)
240 |                 
241 |                 # Log milestone
242 |                 if progress % 10 == 0:
243 |                     ctx.info(f"Processed {progress}% of file")
244 |         
245 |         return f"File processing complete. Processed {file_size} bytes."
246 |         
247 |     except Exception as e:
248 |         ctx.error(f"File processing failed: {str(e)}")
249 |         return f"Error: {str(e)}"
250 | ```
251 | 
252 | ## Security Features
253 | 
254 | ### Input Validation
255 | 
256 | Implement thorough input validation:
257 | 
258 | ```python
259 | from pydantic import BaseModel, Field, validator
260 | 
261 | class SearchParams(BaseModel):
262 |     """Validated search parameters."""
263 |     query: str = Field(..., min_length=1, max_length=100)
264 |     days: int = Field(7, ge=1, le=30)
265 |     limit: int = Field(5, ge=1, le=100)
266 |     
267 |     @validator('query')
268 |     def query_must_be_valid(cls, v):
269 |         import re
270 |         if not re.match(r'^[a-zA-Z0-9\s\-.,?!]+$', v):
271 |             raise ValueError('Query contains invalid characters')
272 |         return v
273 | 
274 | @mcp.tool()
275 | async def search_with_validation(params: dict) -> str:
276 |     """Search with validated parameters."""
277 |     try:
278 |         # Validate parameters
279 |         validated = SearchParams(**params)
280 |         
281 |         # Proceed with validated parameters
282 |         results = await perform_search(
283 |             validated.query,
284 |             validated.days,
285 |             validated.limit
286 |         )
287 |         
288 |         return format_results(results)
289 |         
290 |     except Exception as e:
291 |         return f"Validation error: {str(e)}"
292 | ```
293 | 
294 | ### Rate Limiting
295 | 
296 | Implement rate limiting to prevent abuse:
297 | 
298 | ```python
299 | import time
300 | from functools import wraps
301 | 
302 | # Simple rate limiter
303 | class RateLimiter:
304 |     def __init__(self, calls_per_minute=60):
305 |         self.calls_per_minute = calls_per_minute
306 |         self.interval = 60 / calls_per_minute  # seconds per call
307 |         self.last_call_times = {}
308 |     
309 |     async def limit(self, key):
310 |         """Limit calls for a specific key."""
311 |         now = time.time()
312 |         
313 |         # Initialize if first call
314 |         if key not in self.last_call_times:
315 |             self.last_call_times[key] = [now]
316 |             return
317 |         
318 |         # Get calls within the last minute
319 |         minute_ago = now - 60
320 |         recent_calls = [t for t in self.last_call_times[key] if t > minute_ago]
321 |         
322 |         # Check if rate limit exceeded
323 |         if len(recent_calls) >= self.calls_per_minute:
324 |             oldest_call = min(recent_calls)
325 |             wait_time = 60 - (now - oldest_call)
326 |             raise ValueError(f"Rate limit exceeded. Try again in {wait_time:.1f} seconds.")
327 |         
328 |         # Update call times
329 |         self.last_call_times[key] = recent_calls + [now]
330 | 
331 | # Create rate limiter
332 | rate_limiter = RateLimiter(calls_per_minute=10)
333 | 
334 | # Apply rate limiting to a tool
335 | @mcp.tool()
336 | async def rate_limited_api_call(endpoint: str) -> str:
337 |     """Call API with rate limiting."""
338 |     try:
339 |         # Apply rate limit
340 |         await rate_limiter.limit("api_call")
341 |         
342 |         # Proceed with API call
343 |         async with httpx.AsyncClient() as client:
344 |             response = await client.get(f"https://api.example.com/{endpoint}")
345 |             return response.text
346 |             
347 |     except ValueError as e:
348 |         return f"Error: {str(e)}"
349 | ```
350 | 
351 | ### Access Control
352 | 
353 | Implement access controls for sensitive operations:
354 | 
355 | ```python
356 | # Define access levels
357 | class AccessLevel:
358 |     READ = 1
359 |     WRITE = 2
360 |     ADMIN = 3
361 | 
362 | # Access control decorator
363 | def require_access(level):
364 |     def decorator(func):
365 |         @wraps(func)
366 |         async def wrapper(*args, **kwargs):
367 |             # Get context from args
368 |             ctx = None
369 |             for arg in args:
370 |                 if isinstance(arg, Context):
371 |                     ctx = arg
372 |                     break
373 |             
374 |             if ctx is None:
375 |                 for arg_name, arg_value in kwargs.items():
376 |                     if isinstance(arg_value, Context):
377 |                         ctx = arg_value
378 |                         break
379 |             
380 |             if ctx is None:
381 |                 return "Error: Context not provided"
382 |             
383 |             # Check access level
384 |             user_level = get_user_access_level(ctx)
385 |             if user_level < level:
386 |                 return "Error: Insufficient permissions"
387 |             
388 |             # Proceed with function
389 |             return await func(*args, **kwargs)
390 |         return wrapper
391 |     return decorator
392 | 
393 | # Get user access level from context
394 | def get_user_access_level(ctx):
395 |     # In practice, this would use authentication information
396 |     # For demonstration, return READ
397 | 
```

--------------------------------------------------------------------------------
/docs/02-mcp-core-concepts.md:
--------------------------------------------------------------------------------

```markdown
  1 | # MCP Core Concepts: Tools, Resources, and Prompts
  2 | 
  3 | ## Understanding the Core Primitives
  4 | 
  5 | The Model Context Protocol (MCP) is built around three foundational primitives that determine how LLMs interact with external systems:
  6 | 
  7 | 1. **Tools**: Functions that LLMs can call to perform actions
  8 | 2. **Resources**: Data sources that LLMs can access
  9 | 3. **Prompts**: Templates that guide LLM interactions
 10 | 
 11 | Each primitive serves a distinct purpose in the MCP ecosystem and comes with its own control flow, usage patterns, and implementation considerations. Understanding when and how to use each is essential for effective MCP development.
 12 | 
 13 | ## The Control Matrix
 14 | 
 15 | A key concept in MCP is who controls each primitive:
 16 | 
 17 | | Primitive | Control          | Access Pattern  | Typical Use Cases                     | Security Model                     |
 18 | |-----------|------------------|-----------------|--------------------------------------|-----------------------------------|
 19 | | Tools     | Model-controlled | Execute         | API calls, calculations, processing   | User permission before execution   |
 20 | | Resources | App-controlled   | Read            | Files, database records, context      | App decides which resources to use |
 21 | | Prompts   | User-controlled  | Apply template  | Structured queries, common workflows  | Explicitly user-selected          |
 22 | 
 23 | This control matrix ensures that each component operates within appropriate boundaries and security constraints.
 24 | 
 25 | ## Tools in Depth
 26 | 
 27 | ### What Are Tools?
 28 | 
 29 | Tools are executable functions that allow LLMs to perform actions and retrieve information. They are analogous to API endpoints but specifically designed for LLM consumption.
 30 | 
 31 | ```mermaid
 32 | flowchart LR
 33 |     LLM[LLM] -->|Request + Parameters| Tool[Tool]
 34 |     Tool -->|Result| LLM
 35 |     Tool -->|Execute| Action[Action]
 36 |     Action -->|Result| Tool
 37 | ```
 38 | 
 39 | ### Key Characteristics of Tools
 40 | 
 41 | - **Model-controlled**: The LLM decides when to call a tool
 42 | - **Request-response pattern**: Tools accept parameters and return results
 43 | - **Side effects**: Tools may have side effects (e.g., modifying data)
 44 | - **Permission-based**: Tool execution typically requires user permission
 45 | - **Formal schema**: Tools have well-defined input and output schemas
 46 | 
 47 | ### When to Use Tools
 48 | 
 49 | Use tools when:
 50 | 
 51 | - The LLM needs to perform an action (not just read data)
 52 | - The operation has potential side effects
 53 | - The operation requires specific parameters
 54 | - You want the LLM to decide when to use the functionality
 55 | - The operation produces results that affect further LLM reasoning
 56 | 
 57 | ### Tool Example: Weather Service
 58 | 
 59 | ```python
 60 | @mcp.tool()
 61 | async def get_forecast(latitude: float, longitude: float) -> str:
 62 |     """
 63 |     Get weather forecast for a location.
 64 |     
 65 |     Args:
 66 |         latitude: Latitude of the location
 67 |         longitude: Longitude of the location
 68 |     
 69 |     Returns:
 70 |         Formatted forecast text
 71 |     """
 72 |     # Implementation details...
 73 |     return forecast_text
 74 | ```
 75 | 
 76 | ### Tool Schema
 77 | 
 78 | Each tool provides a JSON Schema that defines its input parameters:
 79 | 
 80 | ```json
 81 | {
 82 |   "name": "get_forecast",
 83 |   "description": "Get weather forecast for a location",
 84 |   "inputSchema": {
 85 |     "type": "object",
 86 |     "properties": {
 87 |       "latitude": {
 88 |         "type": "number",
 89 |         "description": "Latitude of the location"
 90 |       },
 91 |       "longitude": {
 92 |         "type": "number",
 93 |         "description": "Longitude of the location"
 94 |       }
 95 |     },
 96 |     "required": ["latitude", "longitude"]
 97 |   }
 98 | }
 99 | ```
100 | 
101 | ### Tool Execution Flow
102 | 
103 | ```mermaid
104 | sequenceDiagram
105 |     participant LLM
106 |     participant Client
107 |     participant User
108 |     participant Server
109 |     participant External as External System
110 |     
111 |     LLM->>Client: Request tool execution
112 |     Client->>User: Request permission
113 |     User->>Client: Grant permission
114 |     Client->>Server: Call tool with parameters
115 |     Server->>External: Execute operation
116 |     External->>Server: Return operation result
117 |     Server->>Client: Return formatted result
118 |     Client->>LLM: Provide result for reasoning
119 | ```
120 | 
121 | ## Resources in Depth
122 | 
123 | ### What Are Resources?
124 | 
125 | Resources are data sources that provide context to LLMs. They represent content that an LLM can read but not modify directly.
126 | 
127 | ```mermaid
128 | flowchart LR
129 |     Resource[Resource] -->|Content| Client[MCP Client]
130 |     Client -->|Context| LLM[LLM]
131 |     DB[(Database)] -->|Data| Resource
132 |     Files[(Files)] -->|Data| Resource
133 |     API[APIs] -->|Data| Resource
134 | ```
135 | 
136 | ### Key Characteristics of Resources
137 | 
138 | - **Application-controlled**: The client app decides which resources to provide
139 | - **Read-only**: Resources are for reading, not modification
140 | - **URI-based**: Resources are identified by URI schemes
141 | - **Content-focused**: Resources provide data, not functionality
142 | - **Context-providing**: Resources enhance the LLM's understanding
143 | 
144 | ### When to Use Resources
145 | 
146 | Use resources when:
147 | 
148 | - The LLM needs to read data but not modify it
149 | - The data provides context for reasoning
150 | - The content is static or infrequently changing
151 | - You want control over what data the LLM can access
152 | - The data is too large or complex to include in prompts
153 | 
154 | ### Resource Example: File Reader
155 | 
156 | ```python
157 | @mcp.resource("file://{path}")
158 | async def get_file_content(path: str) -> str:
159 |     """
160 |     Get the content of a file.
161 |     
162 |     Args:
163 |         path: Path to the file
164 |     
165 |     Returns:
166 |         File content as text
167 |     """
168 |     # Implementation details...
169 |     return file_content
170 | ```
171 | 
172 | ### Resource URI Templates
173 | 
174 | Resources often use URI templates to create dynamic resources:
175 | 
176 | ```
177 | file://{path}
178 | database://{table}/{id}
179 | api://{endpoint}/{parameter}
180 | ```
181 | 
182 | This allows for flexible resource addressing while maintaining structure.
183 | 
184 | ### Resource Access Flow
185 | 
186 | ```mermaid
187 | sequenceDiagram
188 |     participant LLM
189 |     participant Client
190 |     participant Server
191 |     participant DataSource as Data Source
192 |     
193 |     Client->>Server: List available resources
194 |     Server->>Client: Return resource list
195 |     Client->>Client: Select relevant resources
196 |     Client->>Server: Request resource content
197 |     Server->>DataSource: Fetch data
198 |     DataSource->>Server: Return data
199 |     Server->>Client: Return formatted content
200 |     Client->>LLM: Provide as context
201 | ```
202 | 
203 | ## Prompts in Depth
204 | 
205 | ### What Are Prompts?
206 | 
207 | Prompts are templates that guide LLM interactions with servers. They provide structured patterns for common operations and workflows.
208 | 
209 | ```mermaid
210 | flowchart LR
211 |     User[User] -->|Select| Prompt[Prompt Template]
212 |     Prompt -->|Apply| Interaction[LLM Interaction]
213 |     Interaction -->|Result| User
214 | ```
215 | 
216 | ### Key Characteristics of Prompts
217 | 
218 | - **User-controlled**: Explicitly selected by users for specific tasks
219 | - **Template-based**: Provide structured formats for interactions
220 | - **Parameterized**: Accept arguments to customize behavior
221 | - **Workflow-oriented**: Often encapsulate multi-step processes
222 | - **Reusable**: Designed for repeated use across similar tasks
223 | 
224 | ### When to Use Prompts
225 | 
226 | Use prompts when:
227 | 
228 | - Users perform similar tasks repeatedly
229 | - Complex interactions can be standardized
230 | - You want to ensure consistent LLM behavior
231 | - The interaction follows a predictable pattern
232 | - Users need guidance on how to interact with a tool
233 | 
234 | ### Prompt Example: Code Review
235 | 
236 | ```python
237 | @mcp.prompt()
238 | def code_review(code: str) -> str:
239 |     """
240 |     Create a prompt for code review.
241 |     
242 |     Args:
243 |         code: The code to review
244 |     
245 |     Returns:
246 |         Formatted prompt for LLM
247 |     """
248 |     return f"""
249 |     Please review this code:
250 |     
251 |     ```
252 |     {code}
253 |     ```
254 |     
255 |     Focus on:
256 |     1. Potential bugs
257 |     2. Performance issues
258 |     3. Security concerns
259 |     4. Code style and readability
260 |     """
261 | ```
262 | 
263 | ### Prompt Schema
264 | 
265 | Prompts define their parameters and description:
266 | 
267 | ```json
268 | {
269 |   "name": "code_review",
270 |   "description": "Generate a code review for the provided code",
271 |   "arguments": [
272 |     {
273 |       "name": "code",
274 |       "description": "The code to review",
275 |       "required": true
276 |     }
277 |   ]
278 | }
279 | ```
280 | 
281 | ### Prompt Usage Flow
282 | 
283 | ```mermaid
284 | sequenceDiagram
285 |     participant User
286 |     participant Client
287 |     participant Server
288 |     participant LLM
289 |     
290 |     User->>Client: Browse available prompts
291 |     Client->>Server: List prompts
292 |     Server->>Client: Return prompt list
293 |     Client->>User: Display prompt options
294 |     User->>Client: Select prompt and provide args
295 |     Client->>Server: Get prompt template
296 |     Server->>Client: Return filled template
297 |     Client->>LLM: Use template for interaction
298 |     LLM->>Client: Generate response
299 |     Client->>User: Show response
300 | ```
301 | 
302 | ## Comparing the Primitives
303 | 
304 | ### Tools vs. Resources
305 | 
306 | | Aspect           | Tools                          | Resources                      |
307 | |------------------|--------------------------------|--------------------------------|
308 | | **Purpose**      | Perform actions                | Provide data                   |
309 | | **Control**      | Model-controlled (with permission) | Application-controlled         |
310 | | **Operations**   | Execute functions              | Read content                   |
311 | | **Side Effects** | May have side effects          | No side effects (read-only)    |
312 | | **Schema**       | Input parameters, return value | URI template, content type     |
313 | | **Use Case**     | API calls, calculations        | Files, database records        |
314 | | **Security**     | Permission required            | Pre-selected by application    |
315 | 
316 | ### Tools vs. Prompts
317 | 
318 | | Aspect           | Tools                          | Prompts                        |
319 | |------------------|--------------------------------|--------------------------------|
320 | | **Purpose**      | Perform actions                | Guide interactions             |
321 | | **Control**      | Model-controlled               | User-controlled                |
322 | | **Operations**   | Execute functions              | Apply templates                |
323 | | **Customization**| Input parameters               | Template arguments             |
324 | | **Use Case**     | Specific operations            | Standardized workflows         |
325 | | **User Interface**| Usually invisible             | Typically visible in UI        |
326 | 
327 | ### Resources vs. Prompts
328 | 
329 | | Aspect           | Resources                     | Prompts                        |
330 | |------------------|-------------------------------|--------------------------------|
331 | | **Purpose**      | Provide data                  | Guide interactions             |
332 | | **Control**      | Application-controlled        | User-controlled                |
333 | | **Content**      | Dynamic data                  | Structured templates           |
334 | | **Use Case**     | Context enhancement           | Standardized workflows         |
335 | | **Persistence**  | May be cached or real-time    | Generally static               |
336 | 
337 | ## Deciding Which Primitive to Use
338 | 
339 | When designing MCP servers, choosing the right primitive is critical. Use this decision tree:
340 | 
341 | ```mermaid
342 | flowchart TD
343 |     A[Start] --> B{Does it perform\nan action?}
344 |     B -->|Yes| C{Should the LLM\ndecide when\nto use it?}
345 |     B -->|No| D{Is it providing\ndata only?}
346 |     
347 |     C -->|Yes| E[Use a Tool]
348 |     C -->|No| F{Is it a common\nworkflow pattern?}
349 |     
350 |     D -->|Yes| G[Use a Resource]
351 |     D -->|No| F
352 |     
353 |     F -->|Yes| H[Use a Prompt]
354 |     F -->|No| I{Does it modify\ndata?}
355 |     
356 |     I -->|Yes| E
357 |     I -->|No| G
358 | ```
359 | 
360 | ### Practical Guidelines
361 | 
362 | 1. **Use Tools when**:
363 |    - The operation performs actions or has side effects
364 |    - The LLM should decide when to use the functionality
365 |    - The operation requires specific input parameters
366 |    - You need to run calculations or process data
367 | 
368 | 2. **Use Resources when**:
369 |    - You need to provide read-only data to the LLM
370 |    - The content is large or structured
371 |    - The data needs to be selected by the application
372 |    - The data provides context for reasoning
373 | 
374 | 3. **Use Prompts when**:
375 |    - Users perform similar tasks repeatedly
376 |    - The interaction follows a predictable pattern
377 |    - You want to ensure consistent behavior
378 |    - Users need guidance on complex interactions
379 | 
380 | ## Combining Primitives
381 | 
382 | For complex systems, you'll often combine multiple primitives:
383 | 
384 | ```mermaid
385 | flowchart LR
386 |     User[User] -->|Selects| Prompt[Prompt]
387 |     Prompt -->|Guides| LLM[LLM]
388 |     LLM -->|Reads| Resource[Resource]
389 |     LLM -->|Calls| Tool[Tool]
390 |     Resource -->|Informs| LLM
391 |     Tool -->|Returns to| LLM
392 |     LLM -->|Responds to| User
393 | ```
394 | 
395 | Example combinations:
396 | 
397 | 1. **Resource + Tool**: Read a file (resource) then analyze its content (tool)
398 | 2. **Prompt + Tool**: Use a standard query format (prompt) to execute a search (tool)
399 | 3. **Resource + Prompt**: Load context (resource) then apply a structured analysis template (prompt)
400 | 4. **All Three**: Load context (resource), apply analysis template (prompt), and execute operations (tool)
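As a concrete illustration, the combinations above can live on a single server. The sketch below (requires the `mcp` package; the `notes://` scheme, the in-memory `NOTES` store, and all names are hypothetical) registers one primitive of each kind:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("NotesServer")

NOTES = {"welcome": "Hello from MCP."}  # hypothetical in-memory store

# Resource: the app exposes note content as read-only context
@mcp.resource("notes://{note_id}")
async def get_note(note_id: str) -> str:
    """Return the content of a note."""
    return NOTES[note_id]

# Tool: the LLM may call this to act on what it has read
@mcp.tool()
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

# Prompt: the user selects this template to drive the workflow
@mcp.prompt()
def summarize_note(note_id: str) -> str:
    """Ask the LLM to summarize a note loaded as a resource."""
    return f"Read the resource notes://{note_id} and summarize it in three bullet points."
```

Here the prompt points the LLM at a resource, and the tool gives it something to do with the result — the three primitives cooperating in one interaction.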
401 | 
402 | ## Best Practices
403 | 
404 | ### Tools
405 | - Keep tools focused on single responsibilities
406 | - Provide clear descriptions and parameter documentation
407 | - Handle errors gracefully and return informative messages
408 | - Implement timeouts for long-running operations
409 | - Log tool usage for debugging and monitoring
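For instance, the timeout recommendation above can be enforced with `asyncio.wait_for`. A minimal sketch of the pattern — `slow_operation` is a hypothetical stand-in for the real external call inside a tool:

```python
import asyncio

async def slow_operation(delay: float) -> str:
    # Stand-in for a long-running external call (network, subprocess, ...)
    await asyncio.sleep(delay)
    return "done"

async def call_with_timeout(delay: float, limit: float = 1.0) -> str:
    # Inside an MCP tool, wrap the slow call the same way so a hung
    # dependency surfaces as a clear error instead of a stalled request.
    try:
        return await asyncio.wait_for(slow_operation(delay), timeout=limit)
    except asyncio.TimeoutError:
        raise ValueError(f"operation timed out after {limit}s")
```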
410 | 
411 | ### Resources
412 | - Use clear URI schemes that indicate content type
413 | - Implement caching for frequently used resources
414 | - Handle large resources efficiently (pagination, streaming)
415 | - Provide metadata about resources (size, type, etc.)
416 | - Secure access to sensitive resources
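Caching frequently used resources can be as simple as a small time-to-live wrapper around the handler. A standard-library sketch (the `load_document` fetch and `CALLS` counter are illustrative only):

```python
import time
from functools import wraps

def ttl_cache(seconds: float):
    """Cache a function's results for a fixed time window."""
    def decorator(func):
        store = {}
        @wraps(func)
        def wrapper(*args):
            now = time.monotonic()
            if args in store:
                value, stamp = store[args]
                if now - stamp < seconds:
                    return value  # still fresh: skip the expensive fetch
            value = func(*args)
            store[args] = (value, now)
            return value
        return wrapper
    return decorator

CALLS = {"count": 0}  # instrumentation to show the cache working

@ttl_cache(seconds=60.0)
def load_document(path: str) -> str:
    # Stand-in for an expensive resource fetch (disk, database, API)
    CALLS["count"] += 1
    return f"contents of {path}"
```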
417 | 
418 | ### Prompts
419 | - Design for reusability across similar tasks
420 | - Keep prompt templates simple and focused
421 | - Document expected arguments clearly
422 | - Provide examples of how to use prompts
423 | - Test prompts with different inputs
424 | 
425 | ## Conclusion
426 | 
427 | Understanding the differences between tools, resources, and prompts is fundamental to effective MCP development. By choosing the right primitives for each use case and following best practices, you can create powerful, flexible, and secure MCP servers that enhance LLM capabilities.
428 | 
429 | The next document in this series will guide you through building MCP servers using Python, where you'll implement these concepts in practice.
430 | 
```

--------------------------------------------------------------------------------
/docs/03-building-mcp-servers-python.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Building MCP Servers with Python
  2 | 
  3 | This guide provides a comprehensive walkthrough for building Model Context Protocol (MCP) servers using Python. We'll cover everything from basic setup to advanced techniques, with practical examples and best practices.
  4 | 
  5 | ## Prerequisites
  6 | 
  7 | Before starting, ensure you have:
  8 | 
  9 | - Python 3.10 or higher installed
 10 | - Basic knowledge of Python and async programming
 11 | - Understanding of MCP core concepts (tools, resources, prompts)
 12 | - A development environment with your preferred code editor
 13 | 
 14 | ## Setting Up Your Environment
 15 | 
 16 | ### Installation
 17 | 
 18 | Start by creating a virtual environment and installing the MCP package:
 19 | 
 20 | ```bash
 21 | # Create a virtual environment
 22 | python -m venv venv
 23 | source venv/bin/activate  # On Windows: venv\Scripts\activate
 24 | 
 25 | # Install MCP
 26 | pip install mcp
 27 | ```
 28 | 
 29 | Alternatively, if you're using [uv](https://github.com/astral-sh/uv) for package management:
 30 | 
 31 | ```bash
 32 | # Create a virtual environment
 33 | uv venv
 34 | source .venv/bin/activate  # On Windows: .venv\Scripts\activate
 35 | 
 36 | # Install MCP
 37 | uv pip install mcp
 38 | ```
 39 | 
 40 | ### Project Structure
 41 | 
 42 | A well-organized MCP server project typically follows this structure:
 43 | 
 44 | ```
 45 | my-mcp-server/
 46 | ├── requirements.txt
 47 | ├── server.py
 48 | ├── tools/
 49 | │   ├── __init__.py
 50 | │   ├── tool_module1.py
 51 | │   └── tool_module2.py
 52 | ├── resources/
 53 | │   ├── __init__.py
 54 | │   └── resource_modules.py
 55 | └── prompts/
 56 |     ├── __init__.py
 57 |     └── prompt_modules.py
 58 | ```
 59 | 
 60 | This modular structure keeps your code organized and makes it easier to add new functionality over time.
 61 | 
 62 | ## Creating Your First MCP Server
 63 | 
 64 | ### Basic Server Structure
 65 | 
 66 | Let's create a simple MCP server with a "hello world" tool:
 67 | 
 68 | ```python
 69 | # server.py
 70 | from mcp.server.fastmcp import FastMCP
 71 | 
 72 | # Create a server
 73 | mcp = FastMCP("HelloWorld")
 74 | 
 75 | @mcp.tool()
 76 | def hello(name: str = "World") -> str:
 77 |     """
 78 |     Say hello to a name.
 79 |     
 80 |     Args:
 81 |         name: The name to greet (default: "World")
 82 |     
 83 |     Returns:
 84 |         A greeting message
 85 |     """
 86 |     return f"Hello, {name}!"
 87 | 
 88 | if __name__ == "__main__":
 89 |     # Run the server
 90 |     mcp.run()
 91 | ```
 92 | 
 93 | This basic server:
 94 | 1. Creates a FastMCP server named "HelloWorld"
 95 | 2. Defines a simple tool called "hello" that takes a name parameter
 96 | 3. Runs the server using the default stdio transport
 97 | 
 98 | ### Running Your Server
 99 | 
100 | To run your server:
101 | 
102 | ```bash
103 | python server.py
104 | ```
105 | 
106 | The server will start and wait for connections on the standard input/output streams.
107 | 
108 | ### FastMCP vs. Low-Level API
109 | 
110 | The MCP Python SDK provides two ways to create servers:
111 | 
112 | 1. **FastMCP**: A high-level API that simplifies server creation through decorators
113 | 2. **Low-Level API**: Provides more control but requires more boilerplate code
114 | 
115 | Most developers should start with FastMCP, as it handles many details automatically.
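For comparison, here is roughly what the same "hello" tool looks like with the low-level API. Treat this as a sketch: the module paths and handler signatures are an approximation and may differ between SDK versions, so check the SDK reference before relying on them.

```python
import asyncio
import mcp.types as types
from mcp.server.lowlevel import Server
from mcp.server.stdio import stdio_server

app = Server("HelloWorld")

@app.list_tools()
async def list_tools() -> list[types.Tool]:
    # Schemas are written by hand instead of derived from type hints
    return [types.Tool(
        name="hello",
        description="Say hello to a name.",
        inputSchema={
            "type": "object",
            "properties": {"name": {"type": "string"}},
        },
    )]

@app.call_tool()
async def call_tool(name: str, arguments: dict) -> list[types.TextContent]:
    if name == "hello":
        greeting = f"Hello, {arguments.get('name', 'World')}!"
        return [types.TextContent(type="text", text=greeting)]
    raise ValueError(f"Unknown tool: {name}")

async def main():
    async with stdio_server() as (read, write):
        await app.run(read, write, app.create_initialization_options())

if __name__ == "__main__":
    asyncio.run(main())
```

The extra boilerplate — hand-written schemas, explicit dispatch, manual transport wiring — is exactly what FastMCP's decorators generate for you.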
116 | 
117 | ## Implementing Tools
118 | 
119 | Tools are the most common primitive in MCP servers. They allow LLMs to perform actions and retrieve information.
120 | 
121 | ### Basic Tool Example
122 | 
123 | Here's how to implement a simple calculator tool:
124 | 
125 | ```python
126 | @mcp.tool()
127 | def calculate(operation: str, a: float, b: float) -> float:
128 |     """
129 |     Perform basic arithmetic operations.
130 |     
131 |     Args:
132 |         operation: The operation to perform (add, subtract, multiply, divide)
133 |         a: First number
134 |         b: Second number
135 |     
136 |     Returns:
137 |         The result of the operation
138 |     """
139 |     if operation == "add":
140 |         return a + b
141 |     elif operation == "subtract":
142 |         return a - b
143 |     elif operation == "multiply":
144 |         return a * b
145 |     elif operation == "divide":
146 |         if b == 0:
147 |             raise ValueError("Cannot divide by zero")
148 |         return a / b
149 |     else:
150 |         raise ValueError(f"Unknown operation: {operation}")
151 | ```
152 | 
153 | ### Asynchronous Tools
154 | 
155 | For operations that involve I/O or might take time, use async tools:
156 | 
157 | ```python
158 | @mcp.tool()
159 | async def fetch_weather(city: str) -> str:
160 |     """
161 |     Fetch weather information for a city.
162 |     
163 |     Args:
164 |         city: The city name
165 |     
166 |     Returns:
167 |         Weather information
168 |     """
169 |     async with httpx.AsyncClient() as client:
170 |         response = await client.get(f"https://weather-api.example.com/{city}")
171 |         data = response.json()
172 |         return f"Temperature: {data['temp']}°C, Conditions: {data['conditions']}"
173 | ```
174 | 
175 | ### Tool Parameters
176 | 
177 | Tools can have:
178 | 
179 | - Required parameters
180 | - Optional parameters with defaults
181 | - Type hints that are used to generate schema
182 | - Docstrings that provide descriptions
183 | 
184 | ```python
185 | @mcp.tool()
186 | def search_database(
187 |     query: str,
188 |     limit: int = 10,
189 |     offset: int = 0,
190 |     sort_by: str = "relevance"
191 | ) -> list:
192 |     """
193 |     Search the database for records matching the query.
194 |     
195 |     Args:
196 |         query: The search query string
197 |         limit: Maximum number of results to return (default: 10)
198 |         offset: Number of results to skip (default: 0)
199 |         sort_by: Field to sort results by (default: "relevance")
200 |     
201 |     Returns:
202 |         List of matching records
203 |     """
204 |     # Implementation details...
205 |     return results
206 | ```
207 | 
208 | ### Error Handling in Tools
209 | 
210 | Proper error handling is essential for robust tools:
211 | 
212 | ```python
213 | @mcp.tool()
214 | def divide(a: float, b: float) -> float:
215 |     """
216 |     Divide two numbers.
217 |     
218 |     Args:
219 |         a: Numerator
220 |         b: Denominator
221 |     
222 |     Returns:
223 |         The division result
224 |     
225 |     Raises:
226 |         ValueError: If attempting to divide by zero
227 |     """
228 |     try:
229 |         if b == 0:
230 |             raise ValueError("Cannot divide by zero")
231 |         return a / b
232 |     except Exception as e:
233 |         # Log the error for debugging
234 |         logging.error(f"Error in divide tool: {str(e)}")
235 |         # Re-raise with a user-friendly message
236 |         raise ValueError(f"Division failed: {str(e)}")
237 | ```
238 | 
239 | ### Grouping Related Tools
240 | 
241 | For complex servers, organize related tools into modules:
242 | 
243 | ```python
244 | # tools/math_tools.py
245 | def register_math_tools(mcp):
246 |     @mcp.tool()
247 |     def add(a: float, b: float) -> float:
248 |         """Add two numbers."""
249 |         return a + b
250 |     
251 |     @mcp.tool()
252 |     def subtract(a: float, b: float) -> float:
253 |         """Subtract b from a."""
254 |         return a - b
255 |     
256 |     # More math tools...
257 | 
258 | # server.py
259 | from tools.math_tools import register_math_tools
260 | 
261 | mcp = FastMCP("MathServer")
262 | register_math_tools(mcp)
263 | ```
264 | 
265 | ## Implementing Resources
266 | 
267 | Resources provide data to LLMs through URI-based access patterns.
268 | 
269 | ### Basic Resource Example
270 | 
271 | Here's a simple file resource:
272 | 
273 | ```python
274 | @mcp.resource("file://{path}")
275 | async def get_file(path: str) -> str:
276 |     """
277 |     Get the content of a file.
278 |     
279 |     Args:
280 |         path: Path to the file
281 |     
282 |     Returns:
283 |         The file content
284 |     """
285 |     try:
286 |         async with aiofiles.open(path, "r") as f:
287 |             return await f.read()
288 |     except Exception as e:
289 |         raise ValueError(f"Failed to read file: {str(e)}")
290 | ```
291 | 
292 | ### Dynamic Resources
293 | 
294 | Resources can be dynamic and parameterized:
295 | 
296 | ```python
297 | @mcp.resource("database://{table}/{id}")
298 | async def get_database_record(table: str, id: str) -> str:
299 |     """
300 |     Get a record from the database.
301 |     
302 |     Args:
303 |         table: The table name
304 |         id: The record ID
305 |     
306 |     Returns:
307 |         The record data
308 |     """
309 |     # Implementation details...
310 |     return json.dumps(record)
311 | ```
312 | 
313 | ### Resource Metadata
314 | 
315 | Resources can include metadata:
316 | 
317 | ```python
318 | @mcp.resource("api://{endpoint}")
319 | async def get_api_data(endpoint: str) -> tuple:
320 |     """
321 |     Get data from an API endpoint.
322 |     
323 |     Args:
324 |         endpoint: The API endpoint path
325 |     
326 |     Returns:
327 |         A tuple of (content, mime_type)
328 |     """
329 |     async with httpx.AsyncClient() as client:
330 |         response = await client.get(f"https://api.example.com/{endpoint}")
331 |         return response.text, response.headers.get("content-type", "text/plain")
332 | ```
333 | 
334 | ### Binary Resources
335 | 
336 | Resources can return binary data:
337 | 
338 | ```python
339 | from mcp.server.fastmcp import Image
340 | 
341 | @mcp.resource("image://{path}")
342 | async def get_image(path: str) -> Image:
343 |     """
344 |     Get an image file.
345 |     
346 |     Args:
347 |         path: Path to the image
348 |     
349 |     Returns:
350 |         The image data
351 |     """
352 |     with open(path, "rb") as f:
353 |         data = f.read()
354 |     return Image(data=data, format=path.split(".")[-1])
355 | ```
356 | 
357 | ## Implementing Prompts
358 | 
359 | Prompts are templates that help LLMs interact with your server effectively.
360 | 
361 | ### Basic Prompt Example
362 | 
363 | Here's a simple query prompt:
364 | 
365 | ```python
366 | @mcp.prompt()
367 | def search_query(query: str) -> str:
368 |     """
369 |     Create a search query prompt.
370 |     
371 |     Args:
372 |         query: The search query
373 |     
374 |     Returns:
375 |         Formatted search query prompt
376 |     """
377 |     return f"""
378 |     Please search for information about:
379 |     
380 |     {query}
381 |     
382 |     Focus on the most relevant and up-to-date information.
383 |     """
384 | ```
385 | 
386 | ### Multi-Message Prompts
387 | 
388 | Prompts can include multiple messages:
389 | 
390 | ```python
391 | from mcp.server.fastmcp.prompts.base import UserMessage, AssistantMessage
392 | 
393 | @mcp.prompt()
394 | def debug_error(error: str) -> list:
395 |     """
396 |     Create a debugging conversation.
397 |     
398 |     Args:
399 |         error: The error message
400 |     
401 |     Returns:
402 |         A list of messages
403 |     """
404 |     return [
405 |         UserMessage(f"I'm getting this error: {error}"),
406 |         AssistantMessage("Let me help debug that. What have you tried so far?")
407 |     ]
408 | ```
409 | 
410 | ## Transport Options
411 | 
412 | MCP supports different transport mechanisms for communication between clients and servers.
413 | 
414 | ### STDIO Transport (Default)
415 | 
416 | The default transport uses standard input/output streams:
417 | 
418 | ```python
419 | if __name__ == "__main__":
420 |     mcp.run(transport="stdio")
421 | ```
422 | 
423 | This is ideal for local processes and command-line tools.
424 | 
425 | ### SSE Transport
426 | 
427 | Server-Sent Events (SSE) transport is used for web applications:
428 | 
429 | ```python
430 | if __name__ == "__main__":
431 |     mcp.run(transport="sse")  # host/port are set via FastMCP("MyServer", host="localhost", port=5000)
432 | ```
433 | 
434 | This starts an HTTP server that accepts MCP connections through SSE.
435 | 
436 | ## Context and Lifespan
437 | 
438 | ### Using Context
439 | 
440 | The `Context` object provides access to the current request context:
441 | 
442 | ```python
443 | from mcp.server.fastmcp import Context
444 | 
445 | @mcp.tool()
446 | async def log_message(message: str, ctx: Context) -> str:
447 |     """
448 |     Log a message and return a confirmation.
449 |     
450 |     Args:
451 |         message: The message to log
452 |         ctx: The request context
453 |     
454 |     Returns:
455 |         Confirmation message
456 |     """
457 |     ctx.info(f"User logged: {message}")
458 |     return f"Message logged: {message}"
459 | ```
460 | 
461 | ### Progress Reporting
462 | 
463 | For long-running tools, report progress:
464 | 
465 | ```python
466 | @mcp.tool()
467 | async def process_files(files: list[str], ctx: Context) -> str:
468 |     """
469 |     Process multiple files with progress tracking.
470 |     
471 |     Args:
472 |         files: List of file paths
473 |         ctx: The request context
474 |     
475 |     Returns:
476 |         Processing summary
477 |     """
478 |     total = len(files)
479 |     for i, file in enumerate(files):
480 |         # Report progress (0-100%)
481 |         await ctx.report_progress(i * 100 // total)
482 |         # Process the file...
483 |         ctx.info(f"Processing {file}")
484 |     
485 |     return f"Processed {total} files"
486 | ```
487 | 
488 | ### Lifespan Management
489 | 
490 | For servers that need initialization and cleanup:
491 | 
492 | ```python
493 | from contextlib import asynccontextmanager
494 | from typing import AsyncIterator
495 | 
496 | @asynccontextmanager
497 | async def lifespan(server: FastMCP) -> AsyncIterator[dict]:
498 |     """Manage server lifecycle."""
499 |     # Setup (runs on startup)
500 |     db = await Database.connect()
501 |     try:
502 |         yield {"db": db}  # Pass context to handlers
503 |     finally:
504 |         # Cleanup (runs on shutdown)
505 |         await db.disconnect()
506 | 
507 | # Create server with lifespan
508 | mcp = FastMCP("DatabaseServer", lifespan=lifespan)
509 | 
510 | @mcp.tool()
511 | async def query_db(sql: str, ctx: Context) -> list:
512 |     """Run a database query."""
513 |     db = ctx.request_context.lifespan_context["db"]
514 |     return await db.execute(sql)
515 | ```
516 | 
517 | ## Testing MCP Servers
518 | 
519 | ### Using the MCP Inspector
520 | 
521 | The MCP Inspector is a tool for testing MCP servers:
522 | 
523 | ```bash
524 | # Install the inspector
525 | npm install -g @modelcontextprotocol/inspector
526 | 
527 | # Run your server with the inspector
528 | npx @modelcontextprotocol/inspector python server.py
529 | ```
530 | 
531 | This opens a web interface where you can:
532 | - See available tools, resources, and prompts
533 | - Test tools with different parameters
534 | - View tool execution results
535 | - Explore resource content
536 | 
537 | ### Manual Testing
538 | 
539 | You can also test your server programmatically:
540 | 
541 | ```python
542 | import asyncio
543 | from mcp import ClientSession, StdioServerParameters
544 | from mcp.client.stdio import stdio_client
545 | 
546 | async def test_server():
547 |     # Connect to the server
548 |     server_params = StdioServerParameters(
549 |         command="python",
550 |         args=["server.py"]
551 |     )
552 |     
553 |     async with stdio_client(server_params) as (read, write):
554 |         async with ClientSession(read, write) as session:
555 |             # Initialize the connection
556 |             await session.initialize()
557 |             
558 |             # List tools
559 |             tools = await session.list_tools()
560 |             print(f"Available tools: {[tool.name for tool in tools.tools]}")
561 |             
562 |             # Call a tool
563 |             result = await session.call_tool("hello", {"name": "MCP"})
564 |             print(f"Tool result: {result.content[0].text}")
565 | 
566 | if __name__ == "__main__":
567 |     asyncio.run(test_server())
568 | ```
569 | 
570 | ## Debugging MCP Servers
571 | 
572 | ### Logging
573 | 
574 | Use logging to debug your server:
575 | 
576 | ```python
577 | import logging
578 | 
579 | # Configure logging
580 | logging.basicConfig(
581 |     level=logging.DEBUG,
582 |     format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
583 | )
584 | 
585 | # Access the MCP logger
586 | logger = logging.getLogger("mcp")
587 | ```
588 | 
589 | ### Common Issues
590 | 
591 | 1. **Schema Generation**:
592 |    - Ensure type hints are accurate
593 |    - Provide docstrings for tools
594 |    - Check parameter names and types
595 | 
596 | 2. **Async/Sync Mismatch**:
597 |    - Use `async def` for tools that use async operations
598 |    - Don't mix async and sync code without proper handling
599 | 
600 | 3. **Transport Issues**:
601 |    - Don't write to stdout (e.g., `print`) when using STDIO transport; stray output corrupts the protocol stream, so log to stderr instead
602 |    - Ensure ports are available for SSE transport
603 |    - Verify network settings for remote connections
604 | 
605 | ## Deployment Options
606 | 
607 | ### Local Deployment
608 | 
609 | For local use with Claude Desktop:
610 | 
611 | 1. Edit the Claude Desktop config file:
612 |    ```json
613 |    {
614 |      "mcpServers": {
615 |        "my-server": {
616 |          "command": "python",
617 |          "args": ["/path/to/server.py"]
618 |        }
619 |      }
620 |    }
621 |    ```
622 | 
623 | 2. Restart Claude Desktop
624 | 
625 | ### Web Deployment
626 | 
627 | For web deployment with SSE transport:
628 | 
629 | 1. Set up a web server (e.g., nginx) to proxy requests
630 | 2. Use a process manager (e.g., systemd, supervisor) to keep the server running
631 | 3. Configure the server to use SSE transport with appropriate host/port
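
For step 1, a minimal nginx proxy block might look like this (hostname, port, and TLS details are placeholders; the SSE-specific parts are the disabled buffering and the long read timeout):

```nginx
server {
    listen 80;
    server_name mcp.example.com;  # placeholder hostname

    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_http_version 1.1;
        proxy_set_header Connection '';
        proxy_buffering off;       # do not buffer the SSE stream
        proxy_cache off;
        proxy_read_timeout 3600s;  # keep long-lived SSE connections open
    }
}
```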
632 | 
633 | Example systemd service:
634 | 
635 | ```ini
636 | [Unit]
637 | Description=MCP Server
638 | After=network.target
639 | 
640 | [Service]
641 | User=mcp
642 | WorkingDirectory=/path/to/server
643 | ExecStart=/path/to/venv/bin/python server.py --transport sse --host 127.0.0.1 --port 5000
644 | Restart=on-failure
645 | 
646 | [Install]
647 | WantedBy=multi-user.target
648 | ```
649 | 
650 | ## Security Considerations
651 | 
652 | When building MCP servers, consider these security aspects:
653 | 
654 | 1. **Input Validation**:
655 |    - Validate all parameters
656 |    - Sanitize file paths and system commands
657 |    - Use allowlists for sensitive operations
658 | 
659 | 2. **Resource Access**:
660 |    - Limit access to specific directories
661 |    - Avoid exposing sensitive information
662 |    - Use proper permissions for files
663 | 
664 | 3. **Error Handling**:
665 |    - Don't expose internal errors to clients
666 |    - Log security-relevant errors
667 |    - Implement proper error recovery
668 | 
669 | 4. **Authentication**:
670 |    - Implement authentication for sensitive operations
671 |    - Use secure tokens or credentials
672 |    - Verify client identity when needed
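
As a sketch of the first point, a file-serving tool could confine access to one directory with an allowlist-style check (the root path is a placeholder; `is_relative_to` requires Python 3.9+):

```python
from pathlib import Path

ALLOWED_ROOT = Path("/srv/mcp-data").resolve()  # placeholder data directory

def safe_resolve(user_path: str) -> Path:
    """Resolve a user-supplied path, rejecting anything outside ALLOWED_ROOT."""
    candidate = (ALLOWED_ROOT / user_path).resolve()
    if not candidate.is_relative_to(ALLOWED_ROOT):
        raise ValueError(f"access denied: {user_path!r} escapes the allowed directory")
    return candidate
```

Resolving the path *before* the containment check is what defeats `..` traversal and absolute-path tricks.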
673 | 
674 | ## Example: Web Scraping Server
675 | 
676 | Let's build a complete web scraping server that fetches and returns content from URLs:
677 | 
678 | ```python
679 | # server.py
680 | import httpx
681 | from mcp.server.fastmcp import FastMCP
682 | 
683 | # Create the server
684 | mcp = FastMCP("WebScraper")
685 | 
686 | @mcp.tool()
687 | async def web_scrape(url: str) -> str:
688 |     """
689 |     Fetch content from a URL and return it.
690 |     
691 |     Args:
692 |         url: The URL to scrape
693 |     
694 |     Returns:
695 |         The page content
696 |     """
697 |     # Ensure URL has a scheme
698 |     if not url.startswith(('http://', 'https://')):
699 |         url = 'https://' + url
700 |     
701 |     # Fetch the content
702 |     try:
703 |         async with httpx.AsyncClient() as client:
704 |             response = await client.get(url, follow_redirects=True)
705 |             response.raise_for_status()
706 |             return response.text
707 |     except httpx.HTTPStatusError as e:
708 |         return f"Error: HTTP status error - {e.response.status_code}"
709 |     except httpx.RequestError as e:
710 |         return f"Error: Request failed - {str(e)}"
711 |     except Exception as e:
712 |         return f"Error: Unexpected error occurred - {str(e)}"
713 | 
714 | if __name__ == "__main__":
715 |     mcp.run()
716 | ```
717 | 
718 | ## Conclusion
719 | 
720 | Building MCP servers with Python is a powerful way to extend LLM capabilities. By following the patterns and practices in this guide, you can create robust, maintainable MCP servers that integrate with Claude and other LLMs.
721 | 
722 | In the next document, we'll explore how to connect to MCP servers from different clients.
723 | 
```

--------------------------------------------------------------------------------
/docs/00-important-python-mcp-sdk.md:
--------------------------------------------------------------------------------

```markdown
  1 | # MCP Python SDK
  2 | 
  3 | <div align="center">
  4 | 
  5 | <strong>Python implementation of the Model Context Protocol (MCP)</strong>
  6 | 
  7 | [![PyPI][pypi-badge]][pypi-url]
  8 | [![MIT licensed][mit-badge]][mit-url]
  9 | [![Python Version][python-badge]][python-url]
 10 | [![Documentation][docs-badge]][docs-url]
 11 | [![Specification][spec-badge]][spec-url]
 12 | [![GitHub Discussions][discussions-badge]][discussions-url]
 13 | 
 14 | </div>
 15 | 
 16 | <!-- omit in toc -->
 17 | ## Table of Contents
 18 | 
 19 | - [Overview](#overview)
 20 | - [Installation](#installation)
 21 | - [Quickstart](#quickstart)
 22 | - [What is MCP?](#what-is-mcp)
 23 | - [Core Concepts](#core-concepts)
 24 |   - [Server](#server)
 25 |   - [Resources](#resources)
 26 |   - [Tools](#tools)
 27 |   - [Prompts](#prompts)
 28 |   - [Images](#images)
 29 |   - [Context](#context)
 30 | - [Running Your Server](#running-your-server)
 31 |   - [Development Mode](#development-mode)
 32 |   - [Claude Desktop Integration](#claude-desktop-integration)
 33 |   - [Direct Execution](#direct-execution)
 34 | - [Examples](#examples)
 35 |   - [Echo Server](#echo-server)
 36 |   - [SQLite Explorer](#sqlite-explorer)
 37 | - [Advanced Usage](#advanced-usage)
 38 |   - [Low-Level Server](#low-level-server)
 39 |   - [Writing MCP Clients](#writing-mcp-clients)
 40 |   - [MCP Primitives](#mcp-primitives)
 41 |   - [Server Capabilities](#server-capabilities)
 42 | - [Documentation](#documentation)
 43 | - [Contributing](#contributing)
 44 | - [License](#license)
 45 | 
 46 | [pypi-badge]: https://img.shields.io/pypi/v/mcp.svg
 47 | [pypi-url]: https://pypi.org/project/mcp/
 48 | [mit-badge]: https://img.shields.io/pypi/l/mcp.svg
 49 | [mit-url]: https://github.com/modelcontextprotocol/python-sdk/blob/main/LICENSE
 50 | [python-badge]: https://img.shields.io/pypi/pyversions/mcp.svg
 51 | [python-url]: https://www.python.org/downloads/
 52 | [docs-badge]: https://img.shields.io/badge/docs-modelcontextprotocol.io-blue.svg
 53 | [docs-url]: https://modelcontextprotocol.io
 54 | [spec-badge]: https://img.shields.io/badge/spec-spec.modelcontextprotocol.io-blue.svg
 55 | [spec-url]: https://spec.modelcontextprotocol.io
 56 | [discussions-badge]: https://img.shields.io/github/discussions/modelcontextprotocol/python-sdk
 57 | [discussions-url]: https://github.com/modelcontextprotocol/python-sdk/discussions
 58 | 
 59 | ## Overview
 60 | 
 61 | The Model Context Protocol allows applications to provide context for LLMs in a standardized way, separating the concerns of providing context from the actual LLM interaction. This Python SDK implements the full MCP specification, making it easy to:
 62 | 
 63 | - Build MCP clients that can connect to any MCP server
 64 | - Create MCP servers that expose resources, prompts and tools
 65 | - Use standard transports like stdio and SSE
 66 | - Handle all MCP protocol messages and lifecycle events
 67 | 
 68 | ## Installation
 69 | 
 70 | We recommend using [uv](https://docs.astral.sh/uv/) to manage your Python projects:
 71 | 
 72 | ```bash
 73 | uv add "mcp[cli]"
 74 | ```
 75 | 
 76 | Alternatively:
 77 | ```bash
 78 | pip install mcp
 79 | ```
 80 | 
 81 | ## Quickstart
 82 | 
 83 | Let's create a simple MCP server that exposes a calculator tool and some data:
 84 | 
 85 | ```python
 86 | # server.py
 87 | from mcp.server.fastmcp import FastMCP
 88 | 
 89 | # Create an MCP server
 90 | mcp = FastMCP("Demo")
 91 | 
 92 | # Add an addition tool
 93 | @mcp.tool()
 94 | def add(a: int, b: int) -> int:
 95 |     """Add two numbers"""
 96 |     return a + b
 97 | 
 98 | # Add a dynamic greeting resource
 99 | @mcp.resource("greeting://{name}")
100 | def get_greeting(name: str) -> str:
101 |     """Get a personalized greeting"""
102 |     return f"Hello, {name}!"
103 | ```
104 | 
105 | You can install this server in [Claude Desktop](https://claude.ai/download) and interact with it right away by running:
106 | ```bash
107 | mcp install server.py
108 | ```
109 | 
110 | Alternatively, you can test it with the MCP Inspector:
111 | ```bash
112 | mcp dev server.py
113 | ```
114 | 
115 | ## What is MCP?
116 | 
117 | The [Model Context Protocol (MCP)](https://modelcontextprotocol.io) lets you build servers that expose data and functionality to LLM applications in a secure, standardized way. Think of it like a web API, but specifically designed for LLM interactions. MCP servers can:
118 | 
119 | - Expose data through **Resources** (think of these sort of like GET endpoints; they are used to load information into the LLM's context)
120 | - Provide functionality through **Tools** (sort of like POST endpoints; they are used to execute code or otherwise produce a side effect)
121 | - Define interaction patterns through **Prompts** (reusable templates for LLM interactions)
122 | - And more!
123 | 
124 | ## Core Concepts
125 | 
126 | ### Server
127 | 
128 | The FastMCP server is your core interface to the MCP protocol. It handles connection management, protocol compliance, and message routing:
129 | 
130 | ```python
131 | # Add lifespan support for startup/shutdown with strong typing
132 | from contextlib import asynccontextmanager
133 | from dataclasses import dataclass
134 | from typing import AsyncIterator
135 | from mcp.server.fastmcp import FastMCP, Context
136 | # Create a named server
137 | mcp = FastMCP("My App")
138 | 
139 | # Specify dependencies for deployment and development
140 | mcp = FastMCP("My App", dependencies=["pandas", "numpy"])
141 | 
142 | @dataclass
143 | class AppContext:
144 |     db: Database  # Replace with your actual DB type
145 | 
146 | @asynccontextmanager
147 | async def app_lifespan(server: FastMCP) -> AsyncIterator[AppContext]:
148 |     """Manage application lifecycle with type-safe context"""
149 |     try:
150 |         # Initialize on startup
151 |         await db.connect()
152 |         yield AppContext(db=db)
153 |     finally:
154 |         # Cleanup on shutdown
155 |         await db.disconnect()
156 | 
157 | # Pass lifespan to server
158 | mcp = FastMCP("My App", lifespan=app_lifespan)
159 | 
160 | # Access type-safe lifespan context in tools
161 | @mcp.tool()
162 | def query_db(ctx: Context) -> str:
163 |     """Tool that uses initialized resources"""
164 |     db = ctx.request_context.lifespan_context.db
165 |     return db.query()
166 | ```
167 | 
168 | ### Resources
169 | 
170 | Resources are how you expose data to LLMs. They're similar to GET endpoints in a REST API - they provide data but shouldn't perform significant computation or have side effects:
171 | 
172 | ```python
173 | @mcp.resource("config://app")
174 | def get_config() -> str:
175 |     """Static configuration data"""
176 |     return "App configuration here"
177 | 
178 | @mcp.resource("users://{user_id}/profile")
179 | def get_user_profile(user_id: str) -> str:
180 |     """Dynamic user data"""
181 |     return f"Profile data for user {user_id}"
182 | ```
183 | 
184 | ### Tools
185 | 
186 | Tools let LLMs take actions through your server. Unlike resources, tools are expected to perform computation and have side effects:
187 | 
188 | ```python
189 | @mcp.tool()
190 | def calculate_bmi(weight_kg: float, height_m: float) -> float:
191 |     """Calculate BMI given weight in kg and height in meters"""
192 |     return weight_kg / (height_m ** 2)
193 | 
194 | @mcp.tool()
195 | async def fetch_weather(city: str) -> str:
196 |     """Fetch current weather for a city"""
197 |     async with httpx.AsyncClient() as client:  # requires: import httpx
198 |         response = await client.get(f"https://api.weather.com/{city}")
199 |         return response.text
200 | ```
201 | 
202 | ### Prompts
203 | 
204 | Prompts are reusable templates that help LLMs interact with your server effectively:
205 | 
206 | ```python
207 | @mcp.prompt()
208 | def review_code(code: str) -> str:
209 |     return f"Please review this code:\n\n{code}"
210 | 
211 | @mcp.prompt()
212 | def debug_error(error: str) -> list[Message]:  # Message types live in mcp.server.fastmcp.prompts.base
213 |     return [
214 |         UserMessage("I'm seeing this error:"),
215 |         UserMessage(error),
216 |         AssistantMessage("I'll help debug that. What have you tried so far?")
217 |     ]
218 | ```
219 | 
220 | ### Images
221 | 
222 | FastMCP provides an `Image` class that automatically handles image data:
223 | 
224 | ```python
225 | from mcp.server.fastmcp import FastMCP, Image
226 | from PIL import Image as PILImage
227 | 
228 | @mcp.tool()
229 | def create_thumbnail(image_path: str) -> Image:
230 |     """Create a thumbnail from an image"""
231 |     img = PILImage.open(image_path)
232 |     img.thumbnail((100, 100))
233 |     return Image(data=img.tobytes(), format="png")
234 | ```
235 | 
236 | ### Context
237 | 
238 | The Context object gives your tools and resources access to MCP capabilities:
239 | 
240 | ```python
241 | from mcp.server.fastmcp import FastMCP, Context
242 | 
243 | @mcp.tool()
244 | async def long_task(files: list[str], ctx: Context) -> str:
245 |     """Process multiple files with progress tracking"""
246 |     for i, file in enumerate(files):
247 |         ctx.info(f"Processing {file}")
248 |         await ctx.report_progress(i, len(files))
249 |         data, mime_type = await ctx.read_resource(f"file://{file}")
250 |     return "Processing complete"
251 | ```
252 | 
253 | ## Running Your Server
254 | 
255 | ### Development Mode
256 | 
257 | The fastest way to test and debug your server is with the MCP Inspector:
258 | 
259 | ```bash
260 | mcp dev server.py
261 | 
262 | # Add dependencies
263 | mcp dev server.py --with pandas --with numpy
264 | 
265 | # Mount local code
266 | mcp dev server.py --with-editable .
267 | ```
268 | 
269 | ### Claude Desktop Integration
270 | 
271 | Once your server is ready, install it in Claude Desktop:
272 | 
273 | ```bash
274 | mcp install server.py
275 | 
276 | # Custom name
277 | mcp install server.py --name "My Analytics Server"
278 | 
279 | # Environment variables
280 | mcp install server.py -v API_KEY=abc123 -v DB_URL=postgres://...
281 | mcp install server.py -f .env
282 | ```
283 | 
284 | ### Direct Execution
285 | 
286 | For advanced scenarios like custom deployments:
287 | 
288 | ```python
289 | from mcp.server.fastmcp import FastMCP
290 | 
291 | mcp = FastMCP("My App")
292 | 
293 | if __name__ == "__main__":
294 |     mcp.run()
295 | ```
296 | 
297 | Run it with:
298 | ```bash
299 | python server.py
300 | # or
301 | mcp run server.py
302 | ```
303 | 
304 | ## Examples
305 | 
306 | ### Echo Server
307 | 
308 | A simple server demonstrating resources, tools, and prompts:
309 | 
310 | ```python
311 | from mcp.server.fastmcp import FastMCP
312 | 
313 | mcp = FastMCP("Echo")
314 | 
315 | @mcp.resource("echo://{message}")
316 | def echo_resource(message: str) -> str:
317 |     """Echo a message as a resource"""
318 |     return f"Resource echo: {message}"
319 | 
320 | @mcp.tool()
321 | def echo_tool(message: str) -> str:
322 |     """Echo a message as a tool"""
323 |     return f"Tool echo: {message}"
324 | 
325 | @mcp.prompt()
326 | def echo_prompt(message: str) -> str:
327 |     """Create an echo prompt"""
328 |     return f"Please process this message: {message}"
329 | ```
330 | 
331 | ### SQLite Explorer
332 | 
333 | A more complex example showing database integration:
334 | 
335 | ```python
336 | from mcp.server.fastmcp import FastMCP
337 | import sqlite3
338 | 
339 | mcp = FastMCP("SQLite Explorer")
340 | 
341 | @mcp.resource("schema://main")
342 | def get_schema() -> str:
343 |     """Provide the database schema as a resource"""
344 |     conn = sqlite3.connect("database.db")
345 |     schema = conn.execute(
346 |         "SELECT sql FROM sqlite_master WHERE type='table'"
347 |     ).fetchall()
348 |     return "\n".join(sql[0] for sql in schema if sql[0])
349 | 
350 | @mcp.tool()
351 | def query_data(sql: str) -> str:
352 |     """Execute SQL queries safely"""
353 |     conn = sqlite3.connect("database.db")
354 |     try:
355 |         result = conn.execute(sql).fetchall()
356 |         return "\n".join(str(row) for row in result)
357 |     except Exception as e:
358 |         return f"Error: {str(e)}"
359 | ```
360 | 
361 | ## Advanced Usage
362 | 
363 | ### Low-Level Server
364 | 
365 | For more control, you can use the low-level server implementation directly. This gives you full access to the protocol and allows you to customize every aspect of your server, including lifecycle management through the lifespan API:
366 | 
367 | ```python
368 | from contextlib import asynccontextmanager
369 | from typing import AsyncIterator
370 | 
371 | @asynccontextmanager
372 | async def server_lifespan(server: Server) -> AsyncIterator[dict]:
373 |     """Manage server startup and shutdown lifecycle."""
374 |     try:
375 |         # Initialize resources on startup
376 |         await db.connect()
377 |         yield {"db": db}
378 |     finally:
379 |         # Clean up on shutdown
380 |         await db.disconnect()
381 | 
382 | # Pass lifespan to server
383 | server = Server("example-server", lifespan=server_lifespan)
384 | 
385 | # Access lifespan context in handlers
386 | @server.call_tool()
387 | async def query_db(name: str, arguments: dict) -> list:
388 |     ctx = server.request_context
389 |     db = ctx.lifespan_context["db"]
390 |     return await db.query(arguments["query"])
391 | ```
392 | 
393 | The lifespan API provides:
394 | - A way to initialize resources when the server starts and clean them up when it stops
395 | - Access to initialized resources through the request context in handlers
396 | - Type-safe context passing between lifespan and request handlers
397 | 
398 | ```python
399 | from mcp.server.lowlevel import Server, NotificationOptions
400 | from mcp.server.models import InitializationOptions
401 | import mcp.server.stdio
402 | import mcp.types as types
403 | 
404 | # Create a server instance
405 | server = Server("example-server")
406 | 
407 | @server.list_prompts()
408 | async def handle_list_prompts() -> list[types.Prompt]:
409 |     return [
410 |         types.Prompt(
411 |             name="example-prompt",
412 |             description="An example prompt template",
413 |             arguments=[
414 |                 types.PromptArgument(
415 |                     name="arg1",
416 |                     description="Example argument",
417 |                     required=True
418 |                 )
419 |             ]
420 |         )
421 |     ]
422 | 
423 | @server.get_prompt()
424 | async def handle_get_prompt(
425 |     name: str,
426 |     arguments: dict[str, str] | None
427 | ) -> types.GetPromptResult:
428 |     if name != "example-prompt":
429 |         raise ValueError(f"Unknown prompt: {name}")
430 | 
431 |     return types.GetPromptResult(
432 |         description="Example prompt",
433 |         messages=[
434 |             types.PromptMessage(
435 |                 role="user",
436 |                 content=types.TextContent(
437 |                     type="text",
438 |                     text="Example prompt text"
439 |                 )
440 |             )
441 |         ]
442 |     )
443 | 
444 | async def run():
445 |     async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
446 |         await server.run(
447 |             read_stream,
448 |             write_stream,
449 |             InitializationOptions(
450 |                 server_name="example",
451 |                 server_version="0.1.0",
452 |                 capabilities=server.get_capabilities(
453 |                     notification_options=NotificationOptions(),
454 |                     experimental_capabilities={},
455 |                 )
456 |             )
457 |         )
458 | 
459 | if __name__ == "__main__":
460 |     import asyncio
461 |     asyncio.run(run())
462 | ```
463 | 
464 | ### Writing MCP Clients
465 | 
466 | The SDK provides a high-level client interface for connecting to MCP servers:
467 | 
468 | ```python
469 | from mcp import ClientSession, StdioServerParameters
470 | from mcp.client.stdio import stdio_client
471 | import mcp.types as types
472 | # Create server parameters for stdio connection
473 | server_params = StdioServerParameters(
474 |     command="python", # Executable
475 |     args=["example_server.py"], # Optional command line arguments
476 |     env=None # Optional environment variables
477 | )
478 | 
479 | # Optional: create a sampling callback
480 | async def handle_sampling_message(message: types.CreateMessageRequestParams) -> types.CreateMessageResult:
481 |     return types.CreateMessageResult(
482 |         role="assistant",
483 |         content=types.TextContent(
484 |             type="text",
485 |             text="Hello, world! from model",
486 |         ),
487 |         model="gpt-3.5-turbo",
488 |         stopReason="endTurn",
489 |     )
490 | 
491 | async def run():
492 |     async with stdio_client(server_params) as (read, write):
493 |         async with ClientSession(read, write, sampling_callback=handle_sampling_message) as session:
494 |             # Initialize the connection
495 |             await session.initialize()
496 | 
497 |             # List available prompts
498 |             prompts = await session.list_prompts()
499 | 
500 |             # Get a prompt
501 |             prompt = await session.get_prompt("example-prompt", arguments={"arg1": "value"})
502 | 
503 |             # List available resources
504 |             resources = await session.list_resources()
505 | 
506 |             # List available tools
507 |             tools = await session.list_tools()
508 | 
509 |             # Read a resource
510 |             content, mime_type = await session.read_resource("file://some/path")
511 | 
512 |             # Call a tool
513 |             result = await session.call_tool("tool-name", arguments={"arg1": "value"})
514 | 
515 | if __name__ == "__main__":
516 |     import asyncio
517 |     asyncio.run(run())
518 | ```
519 | 
520 | ### MCP Primitives
521 | 
522 | The MCP protocol defines three core primitives that servers can implement:
523 | 
524 | | Primitive | Control               | Description                                         | Example Use                  |
525 | |-----------|-----------------------|-----------------------------------------------------|------------------------------|
526 | | Prompts   | User-controlled       | Interactive templates invoked by user choice        | Slash commands, menu options |
527 | | Resources | Application-controlled| Contextual data managed by the client application   | File contents, API responses |
528 | | Tools     | Model-controlled      | Functions exposed to the LLM to take actions        | API calls, data updates      |
529 | 
530 | ### Server Capabilities
531 | 
532 | MCP servers declare capabilities during initialization:
533 | 
534 | | Capability  | Feature Flag                 | Description                        |
535 | |-------------|------------------------------|------------------------------------|
536 | | `prompts`   | `listChanged`                | Prompt template management         |
537 | | `resources` | `subscribe`<br/>`listChanged`| Resource exposure and updates      |
538 | | `tools`     | `listChanged`                | Tool discovery and execution       |
539 | | `logging`   | -                            | Server logging configuration       |
540 | | `completion`| -                            | Argument completion suggestions    |
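
For example, the `capabilities` object a server returns during initialization might look like this (illustrative subset):

```json
{
  "capabilities": {
    "prompts": { "listChanged": true },
    "resources": { "subscribe": true, "listChanged": true },
    "tools": { "listChanged": true },
    "logging": {}
  }
}
```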
541 | 
542 | ## Documentation
543 | 
544 | - [Model Context Protocol documentation](https://modelcontextprotocol.io)
545 | - [Model Context Protocol specification](https://spec.modelcontextprotocol.io)
546 | - [Officially supported servers](https://github.com/modelcontextprotocol/servers)
547 | 
548 | ## Contributing
549 | 
550 | We are passionate about supporting contributors of all levels of experience and would love to see you get involved in the project. See the [contributing guide](CONTRIBUTING.md) to get started.
551 | 
552 | ## License
553 | 
554 | This project is licensed under the MIT License - see the LICENSE file for details.
```

--------------------------------------------------------------------------------
/docs/05-communication-protocols.md:
--------------------------------------------------------------------------------

```markdown
  1 | # MCP Communication Protocols
  2 | 
  3 | This document provides a detailed exploration of the communication protocols used in the Model Context Protocol (MCP). Understanding these protocols is essential for developing robust MCP servers and clients, and for troubleshooting connection issues.
  4 | 
  5 | ## Protocol Overview
  6 | 
  7 | MCP uses a layered protocol architecture:
  8 | 
  9 | ```mermaid
 10 | flowchart TB
 11 |     subgraph Application
 12 |         Tools["Tools, Resources, Prompts"]
 13 |     end
 14 |     subgraph Protocol
 15 |         Messages["MCP Message Format"]
 16 |         JSONRPC["JSON-RPC 2.0"]
 17 |     end
 18 |     subgraph Transport
 19 |         STDIO["STDIO Transport"]
 20 |         SSE["SSE Transport"]
 21 |     end
 22 |     
 23 |     Tools <--> Messages
 24 |     Messages <--> JSONRPC
 25 |     JSONRPC <--> STDIO
 26 |     JSONRPC <--> SSE
 27 | ```
 28 | 
 29 | The layers are:
 30 | 
 31 | 1. **Application Layer**: Defines tools, resources, and prompts
 32 | 2. **Protocol Layer**: Specifies message formats and semantics
 33 | 3. **Transport Layer**: Handles the physical transmission of messages
 34 | 
 35 | ## Message Format
 36 | 
 37 | MCP uses [JSON-RPC 2.0](https://www.jsonrpc.org/specification) as its message format. This provides a standardized way to structure requests, responses, and notifications.
 38 | 
 39 | ### JSON-RPC Structure
 40 | 
 41 | There are three types of messages in JSON-RPC:
 42 | 
 43 | 1. **Requests**: Messages that require a response
 44 | 2. **Responses**: Replies to requests (success or error)
 45 | 3. **Notifications**: One-way messages that don't expect a response
 46 | 
 47 | ### Request Format
 48 | 
 49 | ```json
 50 | {
 51 |   "jsonrpc": "2.0",
 52 |   "id": 1,
 53 |   "method": "tools/call",
 54 |   "params": {
 55 |     "name": "tool_name",
 56 |     "arguments": {
 57 |       "param1": "value1",
 58 |       "param2": 42
 59 |     }
 60 |   }
 61 | }
 62 | ```
 63 | 
 64 | Key components:
 65 | - `jsonrpc`: Always "2.0" to indicate JSON-RPC 2.0
 66 | - `id`: A unique identifier for matching responses to requests
 67 | - `method`: The operation to perform (e.g., "tools/call")
 68 | - `params`: Parameters for the method
 69 | 
 70 | ### Response Format (Success)
 71 | 
 72 | ```json
 73 | {
 74 |   "jsonrpc": "2.0",
 75 |   "id": 1,
 76 |   "result": {
 77 |     "content": [
 78 |       {
 79 |         "type": "text",
 80 |         "text": "Operation result"
 81 |       }
 82 |     ]
 83 |   }
 84 | }
 85 | ```
 86 | 
 87 | Key components:
 88 | - `jsonrpc`: Always "2.0"
 89 | - `id`: Matches the id from the request
 90 | - `result`: The operation result (structure depends on the method)
 91 | 
 92 | ### Response Format (Error)
 93 | 
 94 | ```json
 95 | {
 96 |   "jsonrpc": "2.0",
 97 |   "id": 1,
 98 |   "error": {
 99 |     "code": -32602,
100 |     "message": "Invalid parameters",
101 |     "data": {
102 |       "details": "Parameter 'param1' is required"
103 |     }
104 |   }
105 | }
106 | ```
107 | 
108 | Key components:
109 | - `jsonrpc`: Always "2.0"
110 | - `id`: Matches the id from the request
111 | - `error`: Error information with code, message, and optional data
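
The `code` values follow the predefined errors in the JSON-RPC 2.0 specification:

| Code | Meaning |
|------|---------|
| -32700 | Parse error (invalid JSON) |
| -32600 | Invalid Request |
| -32601 | Method not found |
| -32602 | Invalid params |
| -32603 | Internal error |
| -32000 to -32099 | Reserved for implementation-defined server errors |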
112 | 
113 | ### Notification Format
114 | 
115 | ```json
116 | {
117 |   "jsonrpc": "2.0",
118 |   "method": "notifications/resources/list_changed",
119 |   "params": {}
120 | }
121 | ```
122 | 
123 | Key components:
124 | - `jsonrpc`: Always "2.0"
125 | - `method`: The notification type
126 | - `params`: Parameters for the notification (if any)
127 | - No `id` field (distinguishes notifications from requests)
128 | 
129 | ## Transport Methods
130 | 
131 | MCP supports two main transport methods:
132 | 
133 | ### STDIO Transport
134 | 
135 | Standard Input/Output (STDIO) transport uses standard input and output streams for communication. This is particularly useful for local processes.
136 | 
137 | ```mermaid
138 | flowchart LR
139 |     Client["MCP Client"]
140 |     Server["MCP Server Process"]
141 |     
142 |     Client -->|stdin| Server
143 |     Server -->|stdout| Client
144 | ```
145 | 
146 | #### Message Framing
147 | 
148 | STDIO transport uses a simple message framing format:
149 | 
150 | ```
151 | Content-Length: <length>\r\n
152 | \r\n
153 | <message>
154 | ```
155 | 
156 | Where:
157 | - `<length>` is the length of the message in bytes
158 | - `<message>` is the JSON-RPC message
159 | 
160 | Example:
161 | 
162 | ```
163 | Content-Length: 75
164 | 
165 | {"jsonrpc":"2.0","method":"initialize","id":0,"params":{"version":"1.0.0"}}
166 | ```
167 | 
168 | #### Implementation Details
169 | 
170 | STDIO transport is implemented by:
171 | 
172 | 1. Starting a child process
173 | 2. Writing to the process's standard input
174 | 3. Reading from the process's standard output
175 | 4. Parsing messages according to the framing format
176 | 
177 | Python implementation example:
178 | 
179 | ```python
180 | async def read_message(reader):
181 |     # Read headers
182 |     headers = {}
183 |     while True:
184 |         line = await reader.readline()
185 |         line = line.decode('utf-8').strip()
186 |         if not line:
187 |             break
188 |         key, value = line.split(': ', 1)
189 |         headers[key] = value
190 |     
191 |     # Get content length
192 |     content_length = int(headers.get('Content-Length', 0))
193 |     
194 |     # Read content
195 |     content = await reader.read(content_length)
196 |     return json.loads(content)
197 | 
198 | async def write_message(writer, message):
199 |     # Serialize message
200 |     content = json.dumps(message).encode('utf-8')
201 |     
202 |     # Write headers
203 |     header = f'Content-Length: {len(content)}\r\n\r\n'
204 |     writer.write(header.encode('utf-8'))
205 |     
206 |     # Write content
207 |     writer.write(content)
208 |     await writer.drain()
209 | ```
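
To check the framing end to end, you can feed a framed message through an `asyncio.StreamReader` (a standalone sketch; the parser mirrors `read_message` above):

```python
import asyncio
import json

async def read_message(reader):
    # Headers, a blank line, then exactly Content-Length bytes of JSON.
    headers = {}
    while True:
        line = (await reader.readline()).decode("utf-8").strip()
        if not line:
            break
        key, value = line.split(": ", 1)
        headers[key] = value
    content = await reader.read(int(headers["Content-Length"]))
    return json.loads(content)

async def round_trip(message: dict) -> dict:
    body = json.dumps(message).encode("utf-8")
    framed = f"Content-Length: {len(body)}\r\n\r\n".encode("utf-8") + body

    # Simulate a server's stdout by feeding the framed bytes into a reader.
    reader = asyncio.StreamReader()
    reader.feed_data(framed)
    reader.feed_eof()
    return await read_message(reader)

parsed = asyncio.run(round_trip({"jsonrpc": "2.0", "id": 1, "method": "ping"}))
```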
210 | 
211 | #### Advantages and Limitations
212 | 
213 | Advantages:
214 | - Simple to implement
215 | - Works well for local processes
216 | - No network configuration required
217 | - Natural process lifecycle management
218 | 
219 | Limitations:
220 | - Only works for local processes
221 | - Limited to one client per server
222 | - No built-in authentication
223 | - Potential blocking issues
224 | 
225 | ### SSE Transport
226 | 
227 | Server-Sent Events (SSE) transport uses HTTP for client-to-server requests and SSE for server-to-client messages. This is suitable for web applications and remote servers.
228 | 
229 | ```mermaid
230 | flowchart LR
231 |     Client["MCP Client"]
232 |     Server["MCP Server (HTTP)"]
233 |     
234 |     Client -->|HTTP POST| Server
235 |     Server -->|SSE Events| Client
236 | ```
237 | 
238 | #### Client-to-Server Messages
239 | 
240 | Client-to-server messages are sent using HTTP POST requests:
241 | 
242 | ```
243 | POST /message HTTP/1.1
244 | Content-Type: application/json
245 | 
246 | {"jsonrpc":"2.0","method":"tools/call","id":1,"params":{...}}
247 | ```
248 | 
249 | #### Server-to-Client Messages
250 | 
251 | Server-to-client messages are sent using SSE events:
252 | 
253 | ```
254 | event: message
255 | data: {"jsonrpc":"2.0","id":1,"result":{...}}
256 | 
257 | ```
258 | 
259 | #### Implementation Details
260 | 
261 | SSE transport implementation requires:
262 | 
263 | 1. An HTTP server endpoint for accepting client POST requests
264 | 2. An SSE endpoint for sending server messages to clients
265 | 3. Proper HTTP and SSE headers and formatting
266 | 
267 | Python implementation example (using aiohttp):
268 | 
269 | ```python
270 | from aiohttp import web
271 | import asyncio
272 | import json
273 | 
274 | # Connected SSE clients, keyed by client ID
275 | clients = {}
276 | 
277 | # For server-to-client messages (SSE)
278 | async def sse_handler(request):
279 |     response = web.StreamResponse(
280 |         headers={
281 |             'Content-Type': 'text/event-stream',
282 |             'Cache-Control': 'no-cache',
283 |             'Connection': 'keep-alive',
284 |             'Access-Control-Allow-Origin': '*'
285 |         }
286 |     )
287 |     await response.prepare(request)
288 | 
289 |     # Register the client connection
290 |     client_id = request.query.get('id', 'unknown')
291 |     clients[client_id] = response
292 | 
293 |     # Keep the connection open until the client disconnects
294 |     try:
295 |         while True:
296 |             await asyncio.sleep(1)
297 |     finally:
298 |         clients.pop(client_id, None)
299 | 
300 |     return response
301 | 
302 | # For client-to-server messages (HTTP POST)
303 | async def message_handler(request):
304 |     # Parse the message
305 |     data = await request.json()
306 | 
307 |     # Process the message
308 |     result = await process_message(data)
309 | 
310 |     # If it's a request (has an ID), send the response via SSE
311 |     if 'id' in data:
312 |         client_id = request.query.get('id', 'unknown')
313 |         await send_sse_message(client_id, result)
314 | 
315 |     # Return an acknowledgment
316 |     return web.Response(text='OK')
317 | 
318 | # Send an SSE message to a connected client
319 | async def send_sse_message(client_id, message):
320 |     if client_id in clients:
321 |         response = clients[client_id]
322 |         data = json.dumps(message)
323 |         await response.write(f'event: message\ndata: {data}\n\n'.encode('utf-8'))
324 | ```
325 | 
326 | #### Advantages and Limitations
327 | 
328 | Advantages:
329 | - Works over standard HTTP
330 | - Supports remote clients
331 | - Can serve multiple clients
332 | - Integrates with web infrastructure
333 | 
334 | Limitations:
335 | - More complex to implement
336 | - Requires HTTP server
337 | - Connection management is more challenging
338 | - Potential firewall issues
339 | 
340 | ## Protocol Lifecycle
341 | 
342 | The MCP protocol follows a defined lifecycle:
343 | 
344 | ```mermaid
345 | sequenceDiagram
346 |     participant Client
347 |     participant Server
348 |     
349 |     Note over Client,Server: Initialization Phase
350 |     
351 |     Client->>Server: initialize request
352 |     Server->>Client: initialize response
353 |     Client->>Server: initialized notification
354 |     
355 |     Note over Client,Server: Operation Phase
356 |     
357 |     Client->>Server: tools/list request
358 |     Server->>Client: tools/list response
359 |     Client->>Server: tools/call request
360 |     Server->>Client: tools/call response
361 |     
362 |     Note over Client,Server: Termination Phase
363 |     
364 |     Client->>Server: exit notification
365 |     Note over Client,Server: Connection Closed
366 | ```
367 | 
368 | ### Initialization Phase
369 | 
370 | The initialization phase establishes the connection and negotiates capabilities:
371 | 
372 | 1. **initialize request**: Client sends protocol version and supported capabilities
373 | 2. **initialize response**: Server responds with its version and capabilities
374 | 3. **initialized notification**: Client acknowledges initialization
375 | 
376 | Initialize request example:
377 | 
378 | ```json
379 | {
380 |   "jsonrpc": "2.0",
381 |   "id": 0,
382 |   "method": "initialize",
383 |   "params": {
384 |     "clientInfo": {
385 |       "name": "example-client",
386 |       "version": "1.0.0"
387 |     },
388 |     "capabilities": {
389 |       "tools": {
390 |         "listChanged": true
391 |       },
392 |       "resources": {
393 |         "listChanged": true,
394 |         "subscribe": true
395 |       },
396 |       "prompts": {
397 |         "listChanged": true
398 |       }
399 |     }
400 |   }
401 | }
402 | ```
403 | 
404 | Initialize response example:
405 | 
406 | ```json
407 | {
408 |   "jsonrpc": "2.0",
409 |   "id": 0,
410 |   "result": {
411 |     "serverInfo": {
412 |       "name": "example-server",
413 |       "version": "1.0.0"
414 |     },
415 |     "capabilities": {
416 |       "tools": {
417 |         "listChanged": true
418 |       },
419 |       "resources": {
420 |         "listChanged": true,
421 |         "subscribe": true
422 |       },
423 |       "prompts": {
424 |         "listChanged": true
425 |       },
426 |       "experimental": {}
427 |     }
428 |   }
429 | }
430 | ```
431 | 
432 | Initialized notification example:
433 | 
434 | ```json
435 | {
436 |   "jsonrpc": "2.0",
437 |   "method": "notifications/initialized",
438 |   "params": {}
439 | }
440 | ```
441 | 
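Capability negotiation amounts to intersecting what both sides declare. A minimal sketch of that idea (the helper name and the both-sides-must-agree rule are our simplification; real implementations may treat individual options more granularly):

```python
def negotiate(client_caps: dict, server_caps: dict) -> dict:
    """Return the capability options declared by both client and server."""
    shared = {}
    for feature, client_opts in client_caps.items():
        server_opts = server_caps.get(feature)
        if isinstance(client_opts, dict) and isinstance(server_opts, dict):
            # An option is active only when both sides set it to true
            shared[feature] = {
                opt: bool(client_opts.get(opt)) and bool(server_opts.get(opt))
                for opt in client_opts
            }
    return shared

client = {"tools": {"listChanged": True}, "resources": {"subscribe": True}}
server = {"tools": {"listChanged": True}}
assert negotiate(client, server) == {"tools": {"listChanged": True}}
```
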
442 | ### Operation Phase
443 | 
444 | During the operation phase, clients and servers exchange various requests and notifications:
445 | 
446 | 1. **Feature Discovery**: Listing tools, resources, and prompts
447 | 2. **Tool Execution**: Calling tools and receiving results
448 | 3. **Resource Access**: Reading resources and subscribing to changes
449 | 4. **Prompt Usage**: Getting prompt templates
450 | 5. **Notifications**: Receiving updates about changes
451 | 
452 | ### Termination Phase
453 | 
454 | The termination phase cleanly closes the connection:
455 | 
456 | 1. **exit notification**: Client indicates it's closing the connection
457 | 2. **Connection closure**: Transport connection is closed
458 | 
459 | Exit notification example:
460 | 
461 | ```json
462 | {
463 |   "jsonrpc": "2.0",
464 |   "method": "exit",
465 |   "params": {}
466 | }
467 | ```
468 | 
469 | ## Message Types and Methods
470 | 
471 | MCP defines several standard message types for different operations:
472 | 
473 | ### Tools Methods
474 | 
475 | | Method | Type | Description |
476 | |--------|------|-------------|
477 | | `tools/list` | Request/Response | List available tools |
478 | | `tools/call` | Request/Response | Execute a tool with parameters |
479 | | `notifications/tools/list_changed` | Notification | Notify that the tool list has changed |
480 | 
481 | Example tools/list request:
482 | ```json
483 | {
484 |   "jsonrpc": "2.0",
485 |   "id": 1,
486 |   "method": "tools/list"
487 | }
488 | ```
489 | 
490 | Example tools/list response:
491 | ```json
492 | {
493 |   "jsonrpc": "2.0",
494 |   "id": 1,
495 |   "result": {
496 |     "tools": [
497 |       {
498 |         "name": "web_scrape",
499 |         "description": "Scrape content from a URL",
500 |         "inputSchema": {
501 |           "type": "object",
502 |           "properties": {
503 |             "url": {
504 |               "type": "string",
505 |               "description": "The URL to scrape"
506 |             }
507 |           },
508 |           "required": ["url"]
509 |         }
510 |       }
511 |     ]
512 |   }
513 | }
514 | ```
515 | 
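Before dispatching a `tools/call`, a server typically checks the arguments against the tool's `inputSchema`. A hand-rolled sketch of that check (a production server would use a proper JSON Schema validator):

```python
def validate_args(schema: dict, args: dict) -> list:
    """Return a list of validation errors (empty when args satisfy the schema)."""
    errors = []
    for field in schema.get("required", []):
        if field not in args:
            errors.append(f"missing required field: {field}")
    type_map = {"string": str, "number": (int, float), "integer": int,
                "boolean": bool, "object": dict, "array": list}
    for name, spec in schema.get("properties", {}).items():
        if name in args and spec.get("type") in type_map:
            if not isinstance(args[name], type_map[spec["type"]]):
                errors.append(f"field {name} should be {spec['type']}")
    return errors

schema = {
    "type": "object",
    "properties": {"url": {"type": "string"}},
    "required": ["url"],
}
assert validate_args(schema, {"url": "https://example.com"}) == []
assert validate_args(schema, {}) == ["missing required field: url"]
```
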
516 | ### Resources Methods
517 | 
518 | | Method | Type | Description |
519 | |--------|------|-------------|
520 | | `resources/list` | Request/Response | List available resources |
521 | | `resources/read` | Request/Response | Read a resource by URI |
522 | | `resources/subscribe` | Request/Response | Subscribe to resource updates |
523 | | `resources/unsubscribe` | Request/Response | Unsubscribe from resource updates |
524 | | `notifications/resources/list_changed` | Notification | Notify that the resource list has changed |
525 | | `notifications/resources/updated` | Notification | Notify that a resource has been updated |
526 | 
527 | Example resources/read request:
528 | ```json
529 | {
530 |   "jsonrpc": "2.0",
531 |   "id": 2,
532 |   "method": "resources/read",
533 |   "params": {
534 |     "uri": "file:///path/to/file.txt"
535 |   }
536 | }
537 | ```
538 | 
539 | Example resources/read response:
540 | ```json
541 | {
542 |   "jsonrpc": "2.0",
543 |   "id": 2,
544 |   "result": {
545 |     "contents": [
546 |       {
547 |         "uri": "file:///path/to/file.txt",
548 |         "text": "File content goes here",
549 |         "mimeType": "text/plain"
550 |       }
551 |     ]
552 |   }
553 | }
554 | ```
555 | 
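Servers that expose templated resources need to match incoming URIs against patterns like `file:///{path}`. A rough sketch of that matching (the helper name is ours):

```python
import re

def template_to_regex(template: str) -> re.Pattern:
    """Convert a URI template like 'file:///{path}' into a compiled regex."""
    parts = re.split(r"\{(\w+)\}", template)
    pattern = ""
    for i, part in enumerate(parts):
        if i % 2 == 0:
            # Literal text between placeholders
            pattern += re.escape(part)
        else:
            # Placeholder becomes a named capture group
            pattern += f"(?P<{part}>.+)"
    return re.compile(pattern + "$")

match = template_to_regex("file:///{path}").match("file:///path/to/file.txt")
assert match.group("path") == "path/to/file.txt"
```
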
556 | ### Prompts Methods
557 | 
558 | | Method | Type | Description |
559 | |--------|------|-------------|
560 | | `prompts/list` | Request/Response | List available prompts |
561 | | `prompts/get` | Request/Response | Get a prompt by name |
562 | | `notifications/prompts/list_changed` | Notification | Notify that the prompt list has changed |
563 | 
564 | Example prompts/get request:
565 | ```json
566 | {
567 |   "jsonrpc": "2.0",
568 |   "id": 3,
569 |   "method": "prompts/get",
570 |   "params": {
571 |     "name": "code_review",
572 |     "arguments": {
573 |       "language": "python",
574 |       "code": "def hello(): print('Hello, world!')"
575 |     }
576 |   }
577 | }
578 | ```
579 | 
580 | Example prompts/get response:
581 | ```json
582 | {
583 |   "jsonrpc": "2.0",
584 |   "id": 3,
585 |   "result": {
586 |     "messages": [
587 |       {
588 |         "role": "user",
589 |         "content": {
590 |           "type": "text",
591 |           "text": "Please review this Python code:\n\ndef hello(): print('Hello, world!')"
592 |         }
593 |       }
594 |     ]
595 |   }
596 | }
597 | ```
598 | 
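On the server side, answering a `prompts/get` request is essentially template substitution followed by wrapping the text in a message list. A minimal sketch (the helper name is ours):

```python
def render_prompt(template: str, arguments: dict) -> list:
    """Fill a prompt template and wrap the result as an MCP-style message list."""
    text = template.format(**arguments)
    return [{"role": "user", "content": {"type": "text", "text": text}}]

messages = render_prompt(
    "Please review this {language} code:\n\n{code}",
    {"language": "Python", "code": "def hello(): print('Hello, world!')"},
)
assert messages[0]["content"]["text"].startswith("Please review this Python code:")
```
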
599 | ### Logging and Progress
600 | 
601 | | Method | Type | Description |
602 | |--------|------|-------------|
603 | | `notifications/message` | Notification | Log a message |
604 | | `notifications/progress` | Notification | Report progress of a long-running operation |
605 | 
606 | Example logging notification:
607 | ```json
608 | {
609 |   "jsonrpc": "2.0",
610 |   "method": "notifications/message",
611 |   "params": {
612 |     "level": "info",
613 |     "logger": "file_processing",
614 |     "data": {
615 |       "message": "Operation started"
616 |     }
617 |   }
618 | }
619 | ```
620 | 
621 | Example progress notification:
622 | ```json
623 | {
624 |   "jsonrpc": "2.0",
625 |   "method": "notifications/progress",
626 |   "params": {
627 |     "token": "operation-123",
628 |     "value": 50
629 |   }
630 | }
631 | ```
632 | 
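A long-running operation can emit a stream of such progress notifications as it works through its steps. A sketch using the params shown above (the generator name is ours):

```python
def progress_notifications(token: str, total_steps: int):
    """Yield a JSON-RPC progress notification after each step of an operation."""
    for step in range(1, total_steps + 1):
        yield {
            "jsonrpc": "2.0",
            "method": "notifications/progress",
            "params": {"token": token, "value": step * 100 // total_steps},
        }

values = [n["params"]["value"] for n in progress_notifications("operation-123", 4)]
assert values == [25, 50, 75, 100]
```
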
633 | ## Error Codes
634 | 
635 | MCP uses standard JSON-RPC error codes plus additional codes for specific errors:
636 | 
637 | | Code | Name | Description |
638 | |------|------|-------------|
639 | | -32700 | Parse Error | Invalid JSON |
640 | | -32600 | Invalid Request | Request not conforming to JSON-RPC |
641 | | -32601 | Method Not Found | Method not supported |
642 | | -32602 | Invalid Params | Invalid parameters |
643 | | -32603 | Internal Error | Internal server error |
644 | | -32000 | Server Error | Server-specific error |
645 | | -32001 | Resource Not Found | Resource URI not found |
646 | | -32002 | Tool Not Found | Tool name not found |
647 | | -32003 | Prompt Not Found | Prompt name not found |
648 | | -32004 | Execution Failed | Tool execution failed |
649 | | -32005 | Permission Denied | Operation not permitted |
650 | 
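Server implementations typically wrap these codes in a small helper when building failure responses. A sketch (the helper name and constants are ours):

```python
# A few of the codes from the table above
PARSE_ERROR = -32700
METHOD_NOT_FOUND = -32601
TOOL_NOT_FOUND = -32002

def error_response(request_id, code: int, message: str, data=None) -> dict:
    """Build a JSON-RPC 2.0 error response for a failed request."""
    error = {"code": code, "message": message}
    if data is not None:
        error["data"] = data
    return {"jsonrpc": "2.0", "id": request_id, "error": error}

resp = error_response(1, TOOL_NOT_FOUND, "Tool 'missing_tool' not found")
assert resp["error"]["code"] == -32002
```
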
651 | ## Protocol Extensions
652 | 
653 | The MCP protocol supports extensions through the "experimental" capability field:
654 | 
655 | ```json
656 | {
657 |   "capabilities": {
658 |     "experimental": {
659 |       "customFeature": {
660 |         "enabled": true,
661 |         "options": { ... }
662 |       }
663 |     }
664 |   }
665 | }
666 | ```
667 | 
668 | Extensions should follow these guidelines:
669 | 
670 | 1. Use namespaced method names (e.g., "customFeature/operation")
671 | 2. Document the extension clearly
672 | 3. Provide fallback behavior when the extension is not supported
673 | 4. Consider standardization for widely used extensions
674 | 
675 | ## Troubleshooting Protocol Issues
676 | 
677 | Common protocol issues include:
678 | 
679 | ### Initialization Problems
680 | 
681 | 1. **Version Mismatch**: Client and server using incompatible protocol versions
682 |    - Check version in initialize request/response
683 |    - Update client or server to compatible versions
684 | 
685 | 2. **Capability Negotiation Failure**: Client and server capabilities don't match
686 |    - Verify capabilities in initialize request/response
687 |    - Update client or server to support required capabilities
688 | 
689 | ### Message Format Issues
690 | 
691 | 1. **Invalid JSON**: Message contains malformed JSON
692 |    - Check message format before sending
693 |    - Validate JSON with a schema
694 | 
695 | 2. **Missing Fields**: Required fields are missing
696 |    - Ensure all required fields are present
697 |    - Use a protocol validation library
698 | 
699 | 3. **Incorrect Types**: Fields have incorrect types
700 |    - Validate field types before sending
701 |    - Use typed interfaces for messages
702 | 
703 | ### Transport Issues
704 | 
705 | 1. **Connection Lost**: Transport connection unexpectedly closed
706 |    - Implement reconnection logic
707 |    - Handle connection failures gracefully
708 | 
709 | 2. **Message Framing**: Incorrect message framing (STDIO)
710 |    - Ensure Content-Length is correct
711 |    - Validate message framing format
712 | 
713 | 3. **SSE Connection**: SSE connection issues
714 |    - Check network connectivity
715 |    - Verify SSE endpoint is accessible
716 | 
717 | ### Tool Call Issues
718 | 
719 | 1. **Invalid Parameters**: Tool parameters don't match schema
720 |    - Validate parameters against schema
721 |    - Provide descriptive error messages
722 | 
723 | 2. **Execution Failure**: Tool execution fails
724 |    - Handle exceptions in tool implementation
725 |    - Return appropriate error responses
726 | 
727 | ### Debugging Techniques
728 | 
729 | 1. **Message Logging**: Log all protocol messages
730 |    - Set up logging before and after sending/receiving
731 |    - Log both raw and parsed messages
732 | 
733 | 2. **Protocol Tracing**: Enable protocol tracing
734 |    - Set environment variables for trace logging
735 |    - Use MCP Inspector for visual tracing
736 | 
737 | 3. **Transport Monitoring**: Monitor transport state
738 |    - Check connection status
739 |    - Log transport events
740 | 
741 | ## Conclusion
742 | 
743 | Understanding the MCP communication protocols is essential for building robust MCP servers and clients. By following the standard message formats and transport mechanisms, you can ensure reliable communication between LLMs and external tools and data sources.
744 | 
745 | In the next document, we'll explore common troubleshooting techniques and solutions for MCP servers.
746 | 
```

--------------------------------------------------------------------------------
/docs/07-extending-the-repo.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Extending the Repository with New Tools
  2 | 
  3 | This guide explains how to add new tools to the MCP repository. You'll learn best practices for tool design, implementation strategies, and integration techniques that maintain the repository's modular structure.
  4 | 
  5 | ## Understanding the Repository Structure
  6 | 
  7 | Before adding new tools, it's important to understand the existing structure:
  8 | 
  9 | ```
 10 | /MCP/
 11 | ├── LICENSE
 12 | ├── README.md
 13 | ├── requirements.txt
 14 | ├── server.py
 15 | ├── streamlit_app.py
 16 | ├── run.sh
 17 | ├── run.bat
 18 | ├── tools/
 19 | │   ├── __init__.py
 20 | │   └── web_scrape.py
 21 | └── docs/
 22 |     └── *.md
 23 | ```
 24 | 
 25 | Key components:
 26 | 
 27 | 1. **server.py**: The main MCP server that registers and exposes tools
 28 | 2. **tools/**: Directory containing individual tool implementations
 29 | 3. **streamlit_app.py**: UI for interacting with MCP servers
 30 | 4. **requirements.txt**: Python dependencies
 31 | 5. **run.sh/run.bat**: Convenience scripts for running the server or UI
 32 | 
 33 | ## Planning Your New Tool
 34 | 
 35 | Before implementation, plan your tool carefully:
 36 | 
 37 | ### 1. Define the Purpose
 38 | 
 39 | Clearly define what your tool will do:
 40 | 
 41 | - What problem does it solve?
 42 | - How does it extend the capabilities of an LLM?
 43 | - Does it retrieve information, process data, or perform actions?
 44 | 
 45 | ### 2. Choose a Tool Type
 46 | 
 47 | MCP supports different types of tools:
 48 | 
 49 | - **Information retrieval tools**: Fetch information from external sources
 50 | - **Processing tools**: Transform or analyze data
 51 | - **Action tools**: Perform operations with side effects
 52 | - **Integration tools**: Connect to external services or APIs
 53 | 
 54 | ### 3. Design the Interface
 55 | 
 56 | Consider the tool's interface:
 57 | 
 58 | - What parameters does it need?
 59 | - What will it return?
 60 | - How will it handle errors?
 61 | - What schema will describe it?
 62 | 
 63 | Example interface design:
 64 | 
 65 | ```
 66 | Tool: search_news
 67 | Purpose: Search for recent news articles by keyword
 68 | Parameters:
 69 |   - query (string): Search query
 70 |   - days (int, optional): How recent the news should be (default: 7)
 71 |   - limit (int, optional): Maximum number of results (default: 5)
 72 | Returns:
 73 |   - List of articles with titles, sources, and summaries
 74 | Errors:
 75 |   - Handle API timeouts
 76 |   - Handle rate limiting
 77 |   - Handle empty results
 78 | ```
 79 | 
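The interface sketch above maps directly onto the JSON `inputSchema` an MCP server would advertise for the tool (a sketch; the variable name is ours):

```python
# The search_news interface, expressed as an MCP-style inputSchema
search_news_schema = {
    "type": "object",
    "properties": {
        "query": {"type": "string", "description": "Search query"},
        "days": {"type": "integer", "description": "How recent the news should be (days)", "default": 7},
        "limit": {"type": "integer", "description": "Maximum number of results", "default": 5},
    },
    "required": ["query"],
}

assert search_news_schema["required"] == ["query"]
```
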
 80 | ## Implementing Your Tool
 81 | 
 82 | Now that you've planned your tool, it's time to implement it.
 83 | 
 84 | ### 1. Create a New Tool Module
 85 | 
 86 | Create a new Python file in the `tools` directory:
 87 | 
 88 | ```bash
 89 | touch tools/my_new_tool.py
 90 | ```
 91 | 
 92 | ### 2. Implement the Tool Function
 93 | 
 94 | Write the core functionality in your new tool file:
 95 | 
 96 | ```python
 97 | # tools/my_new_tool.py
 98 | """
 99 | MCP tool for [description of your tool].
100 | """
101 | 
102 | import httpx
103 | import asyncio
104 | import json
105 | from typing import List, Dict, Any, Optional
106 | 
107 | 
108 | async def search_news(query: str, days: int = 7, limit: int = 5) -> List[Dict[str, Any]]:
109 |     """
110 |     Search for recent news articles based on a query.
111 |     
112 |     Args:
113 |         query: Search terms
114 |         days: How recent the news should be (in days)
115 |         limit: Maximum number of results to return
116 |         
117 |     Returns:
118 |         List of news articles with title, source, and summary
119 |     """
120 |     # Implementation details
121 |     try:
122 |         # API call
123 |         async with httpx.AsyncClient() as client:
124 |             response = await client.get(
125 |                 "https://newsapi.example.com/v2/everything",
126 |                 params={
127 |                     "q": query,
128 |                     "from": f"-{days}d",
129 |                     "pageSize": limit,
130 |                     "apiKey": "YOUR_API_KEY"  # In production, use environment variables
131 |                 }
132 |             )
133 |             response.raise_for_status()
134 |             data = response.json()
135 |             
136 |             # Process and return results
137 |             articles = data.get("articles", [])
138 |             results = []
139 |             
140 |             for article in articles[:limit]:
141 |                 results.append({
142 |                     "title": article.get("title", "No title"),
143 |                     "source": article.get("source", {}).get("name", "Unknown source"),
144 |                     "url": article.get("url", ""),
145 |                     "summary": article.get("description", "No description")
146 |                 })
147 |                 
148 |             return results
149 |             
150 |     except httpx.HTTPStatusError as e:
151 |         # Handle API errors
152 |         return [{"error": f"API error: {e.response.status_code}"}]
153 |     except httpx.RequestError as e:
154 |         # Handle connection errors
155 |         return [{"error": f"Connection error: {str(e)}"}]
156 |     except Exception as e:
157 |         # Handle unexpected errors
158 |         return [{"error": f"Unexpected error: {str(e)}"}]
159 | 
160 | 
161 | # For testing outside of MCP
162 | if __name__ == "__main__":
163 |     async def test():
164 |         results = await search_news("python programming")
165 |         print(json.dumps(results, indent=2))
166 |     
167 |     asyncio.run(test())
168 | ```
169 | 
170 | ### 3. Add Required Dependencies
171 | 
172 | If your tool needs additional dependencies, add them to the requirements.txt file:
173 | 
174 | ```bash
175 | # Add to requirements.txt
176 | httpx>=0.24.0
177 | python-dateutil>=2.8.2
178 | ```
179 | 
180 | ### 4. Register the Tool in the Server
181 | 
182 | Update the main server.py file to import and register your new tool:
183 | 
184 | ```python
185 | # server.py
186 | from mcp.server.fastmcp import FastMCP
187 | 
188 | # Import existing tools
189 | from tools.web_scrape import fetch_url_as_markdown
190 | 
191 | # Import your new tool
192 | from tools.my_new_tool import search_news
193 | 
194 | # Create an MCP server
195 | mcp = FastMCP("Web Tools")
196 | 
197 | # Register existing tools
198 | @mcp.tool()
199 | async def web_scrape(url: str) -> str:
200 |     """
201 |     Convert a URL to use r.jina.ai as a prefix and fetch the markdown content.
202 |     
203 |     Args:
204 |         url (str): The URL to convert and fetch.
205 |         
206 |     Returns:
207 |         str: The markdown content if successful, or an error message if not.
208 |     """
209 |     return await fetch_url_as_markdown(url)
210 | 
211 | # Register your new tool
212 | @mcp.tool()
213 | async def news_search(query: str, days: int = 7, limit: int = 5) -> str:
214 |     """
215 |     Search for recent news articles based on a query.
216 |     
217 |     Args:
218 |         query: Search terms
219 |         days: How recent the news should be (in days, default: 7)
220 |         limit: Maximum number of results to return (default: 5)
221 |         
222 |     Returns:
223 |         Formatted text with news article information
224 |     """
225 |     articles = await search_news(query, days, limit)
226 |     
227 |     # Format the results as text
228 |     if articles and "error" in articles[0]:
229 |         return articles[0]["error"]
230 |     
231 |     if not articles:
232 |         return "No news articles found for the given query."
233 |     
234 |     results = []
235 |     for i, article in enumerate(articles, 1):
236 |         results.append(f"## {i}. {article['title']}")
237 |         results.append(f"Source: {article['source']}")
238 |         results.append(f"URL: {article['url']}")
239 |         results.append(f"\n{article['summary']}\n")
240 |     
241 |     return "\n".join(results)
242 | 
243 | if __name__ == "__main__":
244 |     mcp.run()
245 | ```
246 | 
247 | ## Best Practices for Tool Implementation
248 | 
249 | ### Error Handling
250 | 
251 | Robust error handling is essential for reliable tools:
252 | 
253 | ```python
254 | try:
255 |     # Operation that might fail
256 |     result = await perform_operation()
257 |     return result
258 | except SpecificError as e:
259 |     # Handle specific error cases
260 |     return f"Operation failed: {str(e)}"
261 | except Exception as e:
262 |     # Catch-all for unexpected errors
263 |     logging.error(f"Unexpected error: {str(e)}")
264 |     return "An unexpected error occurred. Please try again later."
265 | ```
266 | 
267 | ### Input Validation
268 | 
269 | Validate inputs before processing:
270 | 
271 | ```python
272 | def validate_search_params(query: str, days: int, limit: int) -> Optional[str]:
273 |     """Validate search parameters and return error message if invalid."""
274 |     if not query or len(query.strip()) == 0:
275 |         return "Search query cannot be empty"
276 |     
277 |     if days < 1 or days > 30:
278 |         return "Days must be between 1 and 30"
279 |     
280 |     if limit < 1 or limit > 100:
281 |         return "Limit must be between 1 and 100"
282 |     
283 |     return None
284 | 
285 | # In the tool function
286 | error = validate_search_params(query, days, limit)
287 | if error:
288 |     return error
289 | ```
290 | 
291 | ### Security Considerations
292 | 
293 | Implement security best practices:
294 | 
295 | ```python
296 | # Sanitize inputs
297 | def sanitize_query(query: str) -> str:
298 |     """Remove potentially dangerous characters from query."""
299 |     import re
300 |     return re.sub(r'[^\w\s\-.,?!]', '', query)
301 | 
302 | # Use environment variables for secrets (inside the tool function)
303 | import os
304 | api_key = os.environ.get("NEWS_API_KEY")
305 | if not api_key:
306 |     return "API key not configured. Please set the NEWS_API_KEY environment variable."
307 | 
308 | # Implement simple rate limiting between API calls
309 | import time
310 | 
311 | # Timestamp of the most recent API call
312 | _last_call_time = 0.0
313 | 
314 | def respect_rate_limit(min_interval=1.0):
315 |     """Sleep as needed to enforce a minimum interval between API calls."""
316 |     global _last_call_time
317 |     now = time.time()
318 |     elapsed = now - _last_call_time
319 |     if elapsed < min_interval:
320 |         time.sleep(min_interval - elapsed)
321 |     # Record the time of this call
322 |     _last_call_time = time.time()
324 | ```
325 | 
326 | ### Docstrings and Comments
327 | 
328 | Write clear documentation:
329 | 
330 | ```python
331 | async def translate_text(text: str, target_language: str) -> str:
332 |     """
333 |     Translate text to another language.
334 |     
335 |     This tool uses an external API to translate text from one language to another.
336 |     It automatically detects the source language and translates to the specified
337 |     target language.
338 |     
339 |     Args:
340 |         text: The text to translate
341 |         target_language: ISO 639-1 language code (e.g., 'es' for Spanish)
342 |         
343 |     Returns:
344 |         Translated text in the target language
345 |         
346 |     Raises:
347 |         ValueError: If the target language is not supported
348 |     """
349 |     # Implementation
350 | ```
351 | 
352 | ### Testing
353 | 
354 | Include tests for your tools:
355 | 
356 | ```python
357 | # tools/tests/test_my_new_tool.py
358 | import pytest
359 | import asyncio
360 | from tools.my_new_tool import search_news
361 | 
362 | @pytest.mark.asyncio
363 | async def test_search_news_valid_query():
364 |     """Test search_news with a valid query."""
365 |     results = await search_news("test query")
366 |     assert isinstance(results, list)
367 |     assert len(results) > 0
368 | 
369 | @pytest.mark.asyncio
370 | async def test_search_news_empty_query():
371 |     """Test search_news with an empty query."""
372 |     results = await search_news("")
373 |     assert isinstance(results, list)
374 |     assert "error" in results[0]
375 | 
376 | # Run tests (requires pytest and pytest-asyncio)
377 | if __name__ == "__main__":
378 |     pytest.main(["-xvs", "test_my_new_tool.py"])
379 | ```
380 | 
381 | ## Managing Tool Configurations
382 | 
383 | For tools that require configuration, follow these practices:
384 | 
385 | ### Environment Variables
386 | 
387 | Use environment variables for configuration:
388 | 
389 | ```python
390 | # tools/my_new_tool.py
391 | import os
392 | 
393 | API_KEY = os.environ.get("MY_TOOL_API_KEY")
394 | BASE_URL = os.environ.get("MY_TOOL_BASE_URL", "https://api.default.com")
395 | ```
396 | 
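A small helper (ours, not part of the repo) makes a missing required variable fail fast with an actionable message instead of surfacing as a confusing error later:

```python
import os

def require_env(name: str, default: str = None) -> str:
    """Read an environment variable, failing fast with a clear message if unset."""
    value = os.environ.get(name, default)
    if value is None:
        raise RuntimeError(
            f"{name} is not set. Export it before starting the server, "
            f"e.g. `export {name}=...`"
        )
    return value

# Hypothetical usage mirroring the snippet above
API_KEY = require_env("MY_TOOL_API_KEY", default="")
BASE_URL = require_env("MY_TOOL_BASE_URL", default="https://api.default.com")
```
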
397 | ### Configuration Files
398 | 
399 | For more complex configurations, use configuration files:
400 | 
401 | ```python
402 | # tools/config.py
403 | import json
404 | import os
405 | from pathlib import Path
406 | 
407 | def load_config(tool_name):
408 |     """Load tool-specific configuration."""
409 |     config_dir = Path(os.environ.get("MCP_CONFIG_DIR", "~/.mcp")).expanduser()
410 |     config_path = config_dir / f"{tool_name}.json"
411 |     
412 |     if not config_path.exists():
413 |         return {}
414 |     
415 |     try:
416 |         with open(config_path, "r") as f:
417 |             return json.load(f)
418 |     except Exception as e:
419 |         print(f"Error loading config: {str(e)}")
420 |         return {}
421 | 
422 | # In your tool file
423 | from tools.config import load_config
424 | 
425 | config = load_config("my_new_tool")
426 | api_key = config.get("api_key", os.environ.get("MY_TOOL_API_KEY", ""))
427 | ```
428 | 
429 | ## Advanced Tool Patterns
430 | 
431 | ### Composition
432 | 
433 | Compose multiple tools for complex functionality:
434 | 
435 | ```python
436 | async def search_and_summarize(query: str) -> str:
437 |     """Search for news and summarize the results."""
438 |     # First search for news
439 |     articles = await search_news(query, days=3, limit=3)
440 |     
441 |     if not articles or "error" in articles[0]:
442 |         return "Failed to find news articles."
443 |     
444 |     # Then summarize each article
445 |     summaries = []
446 |     for article in articles:
447 |         summary = await summarize_text(article["summary"])
448 |         summaries.append(f"Title: {article['title']}\nSummary: {summary}")
449 |     
450 |     return "\n\n".join(summaries)
451 | ```
452 | 
453 | ### Stateful Tools
454 | 
455 | For tools that need to maintain state:
456 | 
457 | ```python
458 | # tools/stateful_tool.py
459 | from typing import Dict, Any
460 | import json
461 | import os
462 | from pathlib import Path
463 | 
464 | class SessionStore:
465 |     """Simple file-based session store."""
466 |     
467 |     def __init__(self, tool_name):
        self.storage_dir = Path(os.environ.get("MCP_STORAGE_DIR", "~/.mcp/storage")).expanduser()
        self.storage_dir.mkdir(parents=True, exist_ok=True)
        self.tool_name = tool_name
        self.sessions: Dict[str, Dict[str, Any]] = {}
        self._load()

    def _get_storage_path(self) -> Path:
        return self.storage_dir / f"{self.tool_name}_sessions.json"

    def _load(self) -> None:
        path = self._get_storage_path()
        if path.exists():
            try:
                with open(path, "r") as f:
                    self.sessions = json.load(f)
            except (OSError, json.JSONDecodeError):
                self.sessions = {}

    def _save(self) -> None:
        with open(self._get_storage_path(), "w") as f:
            json.dump(self.sessions, f, indent=2)

    def get(self, session_id: str, key: str, default: Any = None) -> Any:
        session = self.sessions.get(session_id, {})
        return session.get(key, default)

    def set(self, session_id: str, key: str, value: Any) -> None:
        if session_id not in self.sessions:
            self.sessions[session_id] = {}
        self.sessions[session_id][key] = value
        self._save()

    def clear(self, session_id: str) -> None:
        if session_id in self.sessions:
            del self.sessions[session_id]
            self._save()

# Usage in a tool
from tools.stateful_tool import SessionStore

# Initialize store
session_store = SessionStore("conversation")

async def remember_fact(session_id: str, fact: str) -> str:
    """Remember a fact for later recall."""
    facts = session_store.get(session_id, "facts", [])
    facts.append(fact)
    session_store.set(session_id, "facts", facts)
    return f"I'll remember that: {fact}"

async def recall_facts(session_id: str) -> str:
    """Recall previously stored facts."""
    facts = session_store.get(session_id, "facts", [])
    if not facts:
        return "I don't have any facts stored for this session."

    return "Here are the facts I remember:\n- " + "\n- ".join(facts)
```
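To see why the JSON-backed store survives restarts, here is a condensed, dependency-free sketch of the same load/save round-trip. `MiniStore` is a hypothetical stand-in for `SessionStore`, trimmed to the persistence mechanics:

```python
import json
import tempfile
from pathlib import Path

class MiniStore:
    """Minimal stand-in for SessionStore: one JSON file holds all sessions."""
    def __init__(self, storage_dir: Path):
        self.path = storage_dir / "demo_sessions.json"
        # Reload any state a previous instance persisted to disk.
        self.sessions = json.loads(self.path.read_text()) if self.path.exists() else {}

    def set(self, session_id: str, key: str, value) -> None:
        self.sessions.setdefault(session_id, {})[key] = value
        self.path.write_text(json.dumps(self.sessions, indent=2))  # persist on every write

    def get(self, session_id: str, key: str, default=None):
        return self.sessions.get(session_id, {}).get(key, default)

with tempfile.TemporaryDirectory() as tmp:
    store = MiniStore(Path(tmp))
    store.set("abc", "facts", ["MCP servers can be stateful"])

    # A brand-new instance (e.g. after a server restart) reloads the same file.
    reloaded = MiniStore(Path(tmp))
    print(reloaded.get("abc", "facts"))  # ['MCP servers can be stateful']
```

Because every `set` rewrites the file, a freshly constructed store sees everything earlier instances wrote, which is exactly what makes `remember_fact`/`recall_facts` work across server restarts.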

### Long-Running Operations

For tools that take time to complete, use the `Context` object to report progress and log status:

```python
import asyncio

from mcp.server.fastmcp import FastMCP, Context

@mcp.tool()
async def process_large_dataset(dataset_url: str, ctx: Context) -> str:
    """Process a large dataset with progress reporting."""
    try:
        # Download dataset
        await ctx.info(f"Downloading dataset from {dataset_url}")
        await ctx.report_progress(10, 100)

        # Process in chunks
        total_chunks = 10
        for i in range(total_chunks):
            await ctx.info(f"Processing chunk {i+1}/{total_chunks}")
            # Process chunk
            await asyncio.sleep(1)  # Simulate work
            await ctx.report_progress(10 + (i + 1) * 80 // total_chunks, 100)

        # Finalize
        await ctx.info("Finalizing results")
        await ctx.report_progress(90, 100)
        await asyncio.sleep(1)  # Simulate work

        # Complete
        await ctx.report_progress(100, 100)
        return "Dataset processing complete. Found 42 insights."

    except Exception as e:
        await ctx.error(f"Error: {str(e)}")
        return f"Processing failed: {str(e)}"
```
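The arithmetic above maps the chunk loop onto the 10–90% band, leaving room for the download and finalize phases. A quick dependency-free check of that schedule, using `RecordingContext` as a hypothetical stand-in for the SDK's `Context` that records values instead of sending notifications:

```python
import asyncio

class RecordingContext:
    """Stand-in for Context: collects progress values instead of notifying a client."""
    def __init__(self):
        self.progress = []

    async def report_progress(self, value, total=None):
        self.progress.append(value)

async def run_chunks(ctx, total_chunks=10):
    await ctx.report_progress(10)   # after download
    for i in range(total_chunks):   # same formula as the tool above
        await ctx.report_progress(10 + (i + 1) * 80 // total_chunks)
    await ctx.report_progress(100)  # done

ctx = RecordingContext()
asyncio.run(run_chunks(ctx))
print(ctx.progress)  # [10, 18, 26, 34, 42, 50, 58, 66, 74, 82, 90, 100]
```

The sequence is monotonically increasing and ends at 100, so clients that render a progress bar never see it move backwards.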

## Adding a Resource

In addition to tools, you might want to add a resource to your MCP server:

```python
# server.py
import os

import httpx

@mcp.resource("weather://{location}")
async def get_weather(location: str) -> str:
    """
    Get weather information for a location.

    Args:
        location: City name or coordinates

    Returns:
        Weather information as text
    """
    try:
        # Fetch weather data
        async with httpx.AsyncClient() as client:
            response = await client.get(
                "https://api.weatherapi.com/v1/current.json",
                params={
                    "q": location,
                    "key": os.environ.get("WEATHER_API_KEY", "")
                }
            )
            response.raise_for_status()
            data = response.json()

        # Format weather data
        location_data = data.get("location", {})
        current_data = data.get("current", {})

        weather_info = f"""
Weather for {location_data.get('name', location)}, {location_data.get('country', '')}

Temperature: {current_data.get('temp_c', 'N/A')}°C / {current_data.get('temp_f', 'N/A')}°F
Condition: {current_data.get('condition', {}).get('text', 'N/A')}
Wind: {current_data.get('wind_kph', 'N/A')} kph, {current_data.get('wind_dir', 'N/A')}
Humidity: {current_data.get('humidity', 'N/A')}%
Updated: {current_data.get('last_updated', 'N/A')}
"""

        return weather_info

    except Exception as e:
        return f"Error fetching weather: {str(e)}"
```
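The `.get(...)`-with-default chain is what keeps the resource from crashing on partial API responses. Here is how it behaves on a mock payload, with no network call or API key; the field names mirror the WeatherAPI shape used above, and the payload values are made up for illustration:

```python
# Mock of the JSON shape the weather API returns (wind/humidity deliberately omitted).
data = {
    "location": {"name": "London", "country": "UK"},
    "current": {"temp_c": 18.5, "condition": {"text": "Partly cloudy"}},
}

location_data = data.get("location", {})
current_data = data.get("current", {})

# Missing fields fall back to 'N/A' instead of raising KeyError.
summary = (
    f"Weather for {location_data.get('name', '?')}, {location_data.get('country', '')}\n"
    f"Temperature: {current_data.get('temp_c', 'N/A')}°C\n"
    f"Condition: {current_data.get('condition', {}).get('text', 'N/A')}\n"
    f"Wind: {current_data.get('wind_kph', 'N/A')} kph"
)
print(summary)
```

Every nested lookup goes through `.get` with a default, so a field the API omits degrades to `N/A` in the output rather than raising an exception.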

## Adding a Prompt

You can also add a prompt to your MCP server:

```python
# server.py
@mcp.prompt()
def analyze_sentiment(text: str) -> str:
    """
    Create a prompt for sentiment analysis.

    Args:
        text: The text to analyze

    Returns:
        A prompt for sentiment analysis
    """
    return f"""
Please analyze the sentiment of the following text and categorize it as positive, negative, or neutral.
Provide a brief explanation for your categorization and highlight key phrases that indicate the sentiment.

Text to analyze:

{text}

Your analysis:
"""
```
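Because prompt functions are plain callables that return text, you can sanity-check the rendered template without running a server. A standalone copy of the function body (minus the `@mcp.prompt()` decorator) with a made-up input string:

```python
def analyze_sentiment(text: str) -> str:
    # Same body as the decorated prompt above, runnable on its own.
    return f"""
Please analyze the sentiment of the following text and categorize it as positive, negative, or neutral.
Provide a brief explanation for your categorization and highlight key phrases that indicate the sentiment.

Text to analyze:

{text}

Your analysis:
"""

# Render the prompt with sample text and inspect the result.
prompt = analyze_sentiment("The onboarding flow was painless and fast.")
print(prompt)
```

This kind of direct call is also a convenient unit-test target: assert that the rendered prompt contains the user's text and the instruction headings you expect.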

## Conclusion

Extending the MCP repository with new tools is a powerful way to enhance the capabilities of LLMs. By following the patterns and practices outlined in this guide, you can create robust, reusable tools that integrate seamlessly with the existing repository structure.

Remember these key principles:

1. **Plan before coding**: Define the purpose and interface of your tool
2. **Follow best practices**: Implement proper error handling, input validation, and security
3. **Document thoroughly**: Write clear docstrings and comments
4. **Test rigorously**: Create tests for your tools
5. **Consider configuration**: Use environment variables or configuration files for settings
6. **Explore advanced patterns**: Implement composition, state, and long-running operations as needed

In the next document, we'll explore example use cases for your MCP server and tools.
```