This is page 1 of 2. Use http://codebase.md/surya-madhav/mcp?page={x} to view the full context.

# Directory Structure

```
├── .DS_Store
├── .gitignore
├── docs
│   ├── 00-important-official-mcp-documentation.md
│   ├── 00-important-python-mcp-sdk.md
│   ├── 01-introduction-to-mcp.md
│   ├── 02-mcp-core-concepts.md
│   ├── 03-building-mcp-servers-python.md
│   ├── 04-connecting-to-mcp-servers.md
│   ├── 05-communication-protocols.md
│   ├── 06-troubleshooting-guide.md
│   ├── 07-extending-the-repo.md
│   └── 08-advanced-mcp-features.md
├── frontend
│   ├── app.py
│   ├── pages
│   │   ├── 01_My_Active_Servers.py
│   │   ├── 02_Settings.py
│   │   └── 03_Documentation.py
│   └── utils.py
├── LICENSE
├── README.md
├── requirements.txt
├── run.bat
├── run.sh
├── server.py
└── tools
    ├── __init__.py
    ├── crawl4ai_scraper.py
    ├── ddg_search.py
    └── web_scrape.py
```

# Files

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
tools/__pycache__/__init__.cpython-312.pyc
tools/__pycache__/web_scrape.cpython-312.pyc
.idea/
**/__pycache__/**

```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
# MCP Web Tools Server

A Model Context Protocol (MCP) server that provides tools for web-related operations. This server allows LLMs to interact with web content through standardized tools.

## Current Tools

- **web_scrape**: Converts a URL to use r.jina.ai as a prefix and returns the markdown content
- **ddg_search**: Searches the web with DuckDuckGo and returns formatted results
- **advanced_scrape**: Uses Crawl4AI to extract the main page content as clean, well-formatted markdown

## Installation

1. Clone this repository:
   ```bash
   git clone <repository-url>
   cd MCP
   ```

2. Install the required dependencies:
   ```bash
   pip install -r requirements.txt
   ```

   Alternatively, you can use [uv](https://github.com/astral-sh/uv) for faster installation:
   ```bash
   uv pip install -r requirements.txt
   ```

## Running the Server and UI

This repository includes convenient scripts to run either the MCP server or the Streamlit UI.

### Using the Run Scripts

On macOS/Linux:
```bash
# Run the server with stdio transport (default)
./run.sh server

# Run the server with SSE transport
./run.sh server --transport sse --host localhost --port 5000

# Run the Streamlit UI
./run.sh ui
```

On Windows:
```cmd
# Run the server with stdio transport (default)
run.bat server

# Run the server with SSE transport
run.bat server --transport sse --host localhost --port 5000

# Run the Streamlit UI
run.bat ui
```

### Running Manually

Alternatively, you can run the server directly:

#### Using stdio (default)

```bash
python server.py
```

#### Using SSE

```bash
python server.py --transport sse --host localhost --port 5000
```

This will start an HTTP server on `localhost:5000` that accepts MCP connections.

And to run the Streamlit UI manually:

```bash
streamlit run frontend/app.py
```

## Testing with MCP Inspector

The MCP Inspector is a tool for testing and debugging MCP servers. You can use it to interact with your server:

1. Install the MCP Inspector (optional; `npx` can also fetch it on demand):
   ```bash
   npm install -g @modelcontextprotocol/inspector
   ```

2. Run the Inspector with your server:
   ```bash
   npx @modelcontextprotocol/inspector python server.py
   ```

3. Use the Inspector interface to test the `web_scrape` tool by providing a URL like `example.com` and viewing the returned markdown content.

## Integrating with Claude for Desktop

To use this server with Claude for Desktop:

1. Make sure you have [Claude for Desktop](https://claude.ai/download) installed.

2. Open the Claude for Desktop configuration file:
   - Mac: `~/Library/Application Support/Claude/claude_desktop_config.json`
   - Windows: `%APPDATA%\Claude\claude_desktop_config.json`

3. Add the following configuration (adjust the path as needed):

```json
{
  "mcpServers": {
    "web-tools": {
      "command": "python",
      "args": [
        "/absolute/path/to/MCP/server.py"
      ]
    }
  }
}
```

4. Restart Claude for Desktop.

5. You should now see the server's tools (`web_scrape`, `ddg_search`, and `advanced_scrape`) available in Claude's interface. You can ask Claude to fetch content from a website, and it will use the appropriate tool.

## Example Usage

Once integrated with Claude, you can ask questions like:

- "What's on the homepage of example.com?"
- "Can you fetch and summarize the content from mozilla.org?"
- "Get the content from wikipedia.org/wiki/Model_Context_Protocol and explain it to me."

Claude will use the web_scrape tool to fetch the content and provide it in its response.

## Adding More Tools

To add more tools to this server:

1. Create a new Python file in the `tools/` directory, e.g., `tools/new_tool.py`.

2. Implement your tool function, following a similar pattern to the existing tools.

3. Import your tool in `server.py` and register it with the MCP server:

```python
# Import your new tool
from tools.new_tool import new_tool_function

# Register the tool with the MCP server
@mcp.tool()
async def new_tool(param1: str, param2: int) -> str:
    """
    Description of what your tool does.
    
    Args:
        param1: Description of param1
        param2: Description of param2
        
    Returns:
        Description of return value
    """
    return await new_tool_function(param1, param2)
```

4. Restart the server to apply the changes.

## Streamlit UI

This repository includes a Streamlit application that allows you to connect to and test all your MCP servers configured in Claude for Desktop.

### Running the Streamlit UI

```bash
streamlit run frontend/app.py
```

This will start the Streamlit server and open a web browser with the UI.

### Features

- Load and parse your Claude for Desktop configuration file
- View all configured MCP servers
- Connect to any server and view its available tools
- Test tools by providing input parameters and viewing results
- See available resources and prompts

### Usage

1. Start the Streamlit app
2. Enter the path to your Claude for Desktop configuration file (default path is pre-filled)
3. Click "Load Servers" to see all available MCP servers
4. Select a server tab and click "Connect" to load its tools
5. Select a tool and provide the required parameters
6. Click "Execute" to run the tool and see the results

## Troubleshooting

- **Missing dependencies**: Make sure all dependencies in `requirements.txt` are installed.
- **Connection issues**: Check that the server is running and the configuration in Claude for Desktop points to the correct path.
- **Tool execution errors**: Look for error messages in the server output.
- **Streamlit UI issues**: Make sure Streamlit is properly installed and the configuration file path is correct.

## License

This project is available under the MIT License. See the LICENSE file for more details.

```

--------------------------------------------------------------------------------
/tools/__init__.py:
--------------------------------------------------------------------------------

```python
# This file allows the tools directory to be imported as a package

```

--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------

```
mcp>=1.2.0
httpx>=0.24.0
streamlit>=1.26.0
json5>=0.9.14
subprocess-tee>=0.4.1
rich>=13.7.0
duckduckgo_search
crawl4ai>=0.4.3
```

--------------------------------------------------------------------------------
/run.bat:
--------------------------------------------------------------------------------

```
@echo off
REM Script to run either the MCP server or the Streamlit UI

REM Check if Python is installed
where python >nul 2>nul
if %ERRORLEVEL% neq 0 (
    echo Python is not installed or not in your PATH. Please install Python first.
    exit /b 1
)

REM Check if pip is installed
where pip >nul 2>nul
if %ERRORLEVEL% neq 0 (
    echo pip is not installed or not in your PATH. Please install pip first.
    exit /b 1
)

REM Main script (the subroutines below are invoked with CALL; keeping the
REM dispatch logic first prevents execution from falling through into them)
if "%1"=="server" (
    call :check_dependencies
    if errorlevel 1 exit /b 1
    REM %* is not affected by SHIFT, so pass the remaining arguments explicitly
    call :run_server %2 %3 %4 %5 %6 %7 %8 %9
) else if "%1"=="ui" (
    call :check_dependencies
    if errorlevel 1 exit /b 1
    call :run_ui
) else (
    echo MCP Tools Runner
    echo Usage:
    echo   run.bat server [args]  - Run the MCP server with optional arguments
    echo   run.bat ui             - Run the Streamlit UI
    echo.
    echo Examples:
    echo   run.bat server                        - Run the server with stdio transport
    echo   run.bat server --transport sse        - Run the server with SSE transport
    echo   run.bat ui                            - Start the Streamlit UI
)
exit /b %ERRORLEVEL%

REM Subroutine: check and install dependencies
:check_dependencies
    echo Checking dependencies...

    REM Check if requirements.txt exists
    if not exist "requirements.txt" (
        echo requirements.txt not found. Please run this script from the repository root.
        exit /b 1
    )

    REM Install dependencies
    echo Installing dependencies from requirements.txt...
    pip install -r requirements.txt

    if errorlevel 1 (
        echo Failed to install dependencies. Please check the errors above.
        exit /b 1
    )

    echo Dependencies installed successfully.
    exit /b 0

REM Subroutine: run the MCP server
:run_server
    echo Starting MCP server...
    echo Press Ctrl+C to stop the server.
    python server.py %*
    exit /b %ERRORLEVEL%

REM Subroutine: run the Streamlit UI
:run_ui
    echo Starting Streamlit UI...
    echo Press Ctrl+C to stop the UI.
    streamlit run frontend/app.py
    exit /b %ERRORLEVEL%

```

--------------------------------------------------------------------------------
/run.sh:
--------------------------------------------------------------------------------

```bash
#!/bin/bash
# Script to run either the MCP server or the Streamlit UI

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
BLUE='\033[0;34m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

# Check if Python is installed
if ! command -v python &> /dev/null
then
    echo -e "${RED}Python is not installed or not in your PATH. Please install Python first.${NC}"
    exit 1
fi

# Check if pip is installed
if ! command -v pip &> /dev/null
then
    echo -e "${RED}pip is not installed or not in your PATH. Please install pip first.${NC}"
    exit 1
fi

# Function to check and install dependencies
check_dependencies() {
    echo -e "${BLUE}Checking dependencies...${NC}"
    
    # Check if requirements.txt exists
    if [ ! -f "requirements.txt" ]; then
        echo -e "${RED}requirements.txt not found. Please run this script from the repository root.${NC}"
        exit 1
    fi
    
    # Install dependencies
    echo -e "${YELLOW}Installing dependencies from requirements.txt...${NC}"
    pip install -r requirements.txt
    
    if [ $? -ne 0 ]; then
        echo -e "${RED}Failed to install dependencies. Please check the errors above.${NC}"
        exit 1
    fi
    
    echo -e "${GREEN}Dependencies installed successfully.${NC}"
}

# Function to run the MCP server
run_server() {
    echo -e "${BLUE}Starting MCP server...${NC}"
    echo -e "${YELLOW}Press Ctrl+C to stop the server.${NC}"
    python server.py "$@"
}

# Function to run the Streamlit UI
run_ui() {
    echo -e "${BLUE}Starting MCP Dev Tools UI...${NC}"
    echo -e "${YELLOW}Press Ctrl+C to stop the UI.${NC}"
    # Use the new frontend/app.py file instead of app.py
    streamlit run frontend/app.py
}

# Main script
case "$1" in
    server)
        shift # Remove the first argument
        check_dependencies
        run_server "$@"
        ;;
    ui)
        check_dependencies
        run_ui
        ;;
    *)
        echo -e "${BLUE}MCP Dev Tools Runner${NC}"
        echo -e "${YELLOW}Usage:${NC}"
        echo -e "  ./run.sh server [args]  - Run the MCP server with optional arguments"
        echo -e "  ./run.sh ui             - Run the MCP Dev Tools UI"
        echo
        echo -e "${YELLOW}Examples:${NC}"
        echo -e "  ./run.sh server                        - Run the server with stdio transport"
        echo -e "  ./run.sh server --transport sse        - Run the server with SSE transport"
        echo -e "  ./run.sh ui                            - Start the MCP Dev Tools UI"
        ;;
esac

```

--------------------------------------------------------------------------------
/tools/web_scrape.py:
--------------------------------------------------------------------------------

```python
"""
Web scraping tool for MCP server.

This module provides functionality to convert regular URLs into r.jina.ai prefixed URLs
and fetch their content as markdown. The r.jina.ai service acts as a URL-to-markdown
converter, making web content more accessible for text processing and analysis.

Features:
- Automatic HTTP/HTTPS scheme addition if missing
- URL conversion to r.jina.ai format
- Asynchronous HTTP requests using httpx
- Comprehensive error handling for various failure scenarios
"""

import httpx

async def fetch_url_as_markdown(url: str) -> str:
    """
    Convert a URL to use r.jina.ai as a prefix and fetch the markdown content.
    
    This function performs the following steps:
    1. Ensures the URL has a proper HTTP/HTTPS scheme
    2. Converts the URL to use r.jina.ai as a prefix
    3. Fetches the content using an async HTTP client
    4. Returns the markdown content or an error message
    
    Args:
        url (str): The URL to convert and fetch. If the URL doesn't start with
                  'http://' or 'https://', 'https://' will be automatically added.
    
    Returns:
        str: The markdown content if successful, or a descriptive error message if:
             - The HTTP request fails (e.g., 404, 500)
             - The connection times out
             - Any other unexpected error occurs
    """
    # Ensure URL has a scheme - default to https:// if none provided
    if not url.startswith(('http://', 'https://')):
        url = 'https://' + url
    
    # Convert the URL to use r.jina.ai as a markdown conversion service
    converted_url = f"https://r.jina.ai/{url}"
    
    try:
        # Use httpx for modern async HTTP requests with timeout and redirect handling
        async with httpx.AsyncClient() as client:
            response = await client.get(converted_url, follow_redirects=True, timeout=30.0)
            response.raise_for_status()
            return response.text
    except httpx.HTTPStatusError as e:
        # Handle HTTP errors (4xx, 5xx) with specific status code information
        return f"Error: HTTP status error - {e.response.status_code}"
    except httpx.RequestError as e:
        # Handle network-related errors (timeouts, connection issues, etc.)
        return f"Error: Request failed - {str(e)}"
    except Exception as e:
        # Handle any unexpected errors that weren't caught by the above
        return f"Error: Unexpected error occurred - {str(e)}"

# Standalone test functionality
if __name__ == "__main__":
    import asyncio
    
    async def test():
        # Example usage with a test URL
        url = "example.com"
        result = await fetch_url_as_markdown(url)
        print(f"Fetched content from {url}:")
        # Show preview of content (first 200 characters)
        print(result[:200] + "..." if len(result) > 200 else result)
    
    # Run the test function in an async event loop
    asyncio.run(test())

```

--------------------------------------------------------------------------------
/frontend/pages/02_Settings.py:
--------------------------------------------------------------------------------

```python
import streamlit as st
import os
import json
import json5
import sys

# Add the parent directory to the Python path to import utils
sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))
from frontend.utils import default_config_path, load_config

st.title("Settings")

# Settings container
with st.container():
    st.subheader("Configuration Settings")
    
    # Get current config path from session state
    current_config_path = st.session_state.get('config_path', default_config_path)
    
    # Config file path selector (with unique key)
    config_path = st.text_input(
        "Path to Claude Desktop config file", 
        value=current_config_path,
        key="settings_config_path"
    )
    
    # Update the session state if path changed
    if config_path != current_config_path:
        st.session_state.config_path = config_path
        if 'debug_messages' in st.session_state:
            st.session_state.debug_messages.append(f"Config path updated to: {config_path}")
    
    # Add a button to view the current config
    if st.button("View Current Config", key="view_config_button"):
        if os.path.exists(config_path):
            with st.spinner("Loading config file..."):
                config_data = load_config(config_path)
                if config_data:
                    with st.expander("Config File Content", expanded=True):
                        st.json(config_data)
                    
                    # Update session state
                    st.session_state.config_data = config_data
                    if 'mcpServers' in config_data:
                        st.session_state.servers = config_data.get('mcpServers', {})
                        
                        # Add debug message
                        success_msg = f"Found {len(st.session_state.servers)} MCP servers in the config file"
                        if 'debug_messages' in st.session_state:
                            st.session_state.debug_messages.append(success_msg)
                else:
                    st.error("Failed to load config file")
        else:
            st.error(f"Config file not found: {config_path}")

# Help section for adding new servers
with st.expander("Adding New MCP Servers"):
    st.markdown("""
    ## How to Add New MCP Servers
    
    To add a new MCP server to your configuration:
    
    1. Edit the Claude Desktop config file (usually at `~/Library/Application Support/Claude/claude_desktop_config.json`)
    
    2. Add or modify the `mcpServers` section with your new server configuration:
    
    ```json
    "mcpServers": {
        "my-server-name": {
            "command": "python",
            "args": ["/path/to/your/server.py"],
            "env": {
                "OPTIONAL_ENV_VAR": "value"
            }
        },
        "another-server": {
            "command": "npx",
            "args": ["some-mcp-package"]
        }
    }
    ```
    
    3. Save the file and reload it in the MCP Dev Tools
    
    The `command` is the executable to run (e.g., `python`, `node`, `npx`), and `args` is an array of arguments to pass to the command.
    """)

```

--------------------------------------------------------------------------------
/frontend/app.py:
--------------------------------------------------------------------------------

```python
import streamlit as st
import os
import sys

# Add the parent directory to the Python path to import utils
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from frontend.utils import default_config_path, check_node_installations

# Set page config
st.set_page_config(
    page_title="MCP Dev Tools",
    page_icon="🔌",
    layout="wide"
)

# Initialize session state
if 'debug_messages' not in st.session_state:
    st.session_state.debug_messages = []
    
if 'config_path' not in st.session_state:
    st.session_state.config_path = default_config_path

if 'servers' not in st.session_state:
    st.session_state.servers = {}

if 'active_server' not in st.session_state:
    st.session_state.active_server = None

def add_debug_message(message):
    """Add a debug message to the session state"""
    st.session_state.debug_messages.append(message)
    # Keep only the last 10 messages
    if len(st.session_state.debug_messages) > 10:
        st.session_state.debug_messages = st.session_state.debug_messages[-10:]

# Main app container
st.title("🔌 MCP Dev Tools")
st.write("Explore and interact with Model Context Protocol (MCP) servers")

# Sidebar for configuration and debug
with st.sidebar:
    st.title("MCP Dev Tools")
    
    # Node.js status
    st.subheader("Environment Status")
    node_info = check_node_installations()
    
    # Display Node.js status
    if node_info['node']['installed']:
        st.success(f"✅ Node.js {node_info['node']['version']}")
    else:
        st.error("❌ Node.js not found")
        st.markdown("[Install Node.js](https://nodejs.org/)")
    
    # Display npm status
    if node_info['npm']['installed']:
        st.success(f"✅ npm {node_info['npm']['version']}")
    else:
        st.error("❌ npm not found")
    
    # Display npx status
    if node_info['npx']['installed']:
        st.success(f"✅ npx {node_info['npx']['version']}")
    else:
        st.error("❌ npx not found")
        
    # Warning if Node.js components are missing
    if not all(info['installed'] for info in node_info.values()):
        st.warning("⚠️ Some Node.js components are missing. MCP servers that depend on Node.js (using npx) will not work.")
    
    # Debug information section at the bottom of sidebar
    st.divider()
    st.subheader("Debug Information")
    
    # Display debug messages
    if st.session_state.debug_messages:
        for msg in st.session_state.debug_messages:
            st.text(msg)
    else:
        st.text("No debug messages")
        
    # Clear debug messages button
    if st.button("Clear Debug Messages"):
        st.session_state.debug_messages = []
        st.rerun()

# Add a message for pages selection
st.info("Select a page from the sidebar to get started")

# Add welcome info
st.markdown("""
## Welcome to MCP Dev Tools

This tool helps you explore and interact with Model Context Protocol (MCP) servers. You can:

1. View and connect to available MCP servers
2. Explore tools, resources, and prompts provided by each server 
3. Configure and manage server connections

Select an option from the sidebar to get started.
""")

# Footer
st.divider()
st.write("MCP Dev Tools | Built with Streamlit")

```

--------------------------------------------------------------------------------
/tools/ddg_search.py:
--------------------------------------------------------------------------------

```python
"""
DuckDuckGo search tool for MCP server.

This module provides functionality to search the web using DuckDuckGo's search engine.
It leverages the duckduckgo_search package to perform text-based web searches and
returns formatted results.

Features:
- Web search with customizable parameters
- Region-specific search support
- SafeSearch filtering options
- Time-limited search results
- Maximum results configuration
- Error handling for rate limits and timeouts
"""

from duckduckgo_search import DDGS
from duckduckgo_search.exceptions import (
    DuckDuckGoSearchException,
    RatelimitException,
    TimeoutException
)

async def search_duckduckgo(
    keywords: str,
    region: str = "wt-wt",
    safesearch: str = "moderate",
    timelimit: str | None = None,
    max_results: int = 10
) -> str:
    """
    Perform a web search using DuckDuckGo and return formatted results.
    
    Args:
        keywords (str): The search query/keywords to search for.
        region (str, optional): Region code for search results. Defaults to "wt-wt" (worldwide).
        safesearch (str, optional): SafeSearch level: "on", "moderate", or "off". Defaults to "moderate".
        timelimit (str, optional): Time limit for results: "d" (day), "w" (week), "m" (month), "y" (year).
            Defaults to None (no time limit).
        max_results (int, optional): Maximum number of results to return. Defaults to 10.
    
    Returns:
        str: Formatted search results as text, or an error message if the search fails.
    """
    try:
        # Create a DuckDuckGo search instance
        ddgs = DDGS()
        
        # Perform the search with the given parameters
        results = ddgs.text(
            keywords=keywords,
            region=region,
            safesearch=safesearch,
            timelimit=timelimit,
            max_results=max_results
        )
        
        # Format the results into a readable string
        formatted_results = []
        
        # Check if results is empty
        if not results:
            return "No results found for your search query."
        
        # Process and format each result
        for i, result in enumerate(results, 1):
            formatted_result = (
                f"{i}. {result.get('title', 'No title')}\n"
                f"   URL: {result.get('href', 'No URL')}\n"
                f"   {result.get('body', 'No description')}\n"
            )
            formatted_results.append(formatted_result)
        
        # Join all formatted results with a separator
        return "\n".join(formatted_results)
    
    except RatelimitException:
        return "Error: DuckDuckGo search rate limit exceeded. Please try again later."
    
    except TimeoutException:
        return "Error: The search request timed out. Please try again."
    
    except DuckDuckGoSearchException as e:
        return f"Error: DuckDuckGo search failed - {str(e)}"
    
    except Exception as e:
        return f"Error: An unexpected error occurred - {str(e)}"

# Standalone test functionality
if __name__ == "__main__":
    import asyncio
    
    async def test():
        # Example usage with a test query
        query = "Python programming language"
        result = await search_duckduckgo(query, max_results=3)
        print(f"Search results for '{query}':")
        print(result)
    
    # Run the test function in an async event loop
    asyncio.run(test())

```

--------------------------------------------------------------------------------
/server.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python3
"""
MCP Server with web scraping tool.

This server implements a Model Context Protocol (MCP) server that provides web scraping
functionality. It offers a tool to convert regular URLs into r.jina.ai prefixed URLs
and fetch their content as markdown. This allows for easy conversion of web content
into a markdown format suitable for various applications.

Key Features:
- URL conversion and fetching
- Support for both stdio and SSE transport mechanisms
- Command-line configuration options
- Asynchronous web scraping functionality
"""

import argparse
import sys

from mcp.server.fastmcp import FastMCP

# Import our custom tools
from tools.web_scrape import fetch_url_as_markdown
from tools.ddg_search import search_duckduckgo
from tools.crawl4ai_scraper import crawl_and_extract_markdown

# Initialize the MCP server with a descriptive name that reflects its purpose
mcp = FastMCP("Web Tools")

@mcp.tool()
async def web_scrape(url: str) -> str:
    """
    Convert a URL to use r.jina.ai as a prefix and fetch the markdown content.
    This tool wraps the fetch_url_as_markdown function to expose it as an MCP tool.
    
    Args:
        url (str): The URL to convert and fetch. Can be with or without http(s):// prefix.
        
    Returns:
        str: The markdown content if successful, or an error message if not.
    """
    return await fetch_url_as_markdown(url)

@mcp.tool()
async def ddg_search(query: str, region: str = "wt-wt", safesearch: str = "moderate", timelimit: str | None = None, max_results: int = 10) -> str:
    """
    Search the web using DuckDuckGo and return formatted results.
    
    Args:
        query (str): The search query to look for.
        region (str, optional): Region code for search results, e.g., "wt-wt" (worldwide), "us-en" (US English). Defaults to "wt-wt".
        safesearch (str, optional): SafeSearch level: "on", "moderate", or "off". Defaults to "moderate".
        timelimit (str, optional): Time limit for results: "d" (day), "w" (week), "m" (month), "y" (year). Defaults to None.
        max_results (int, optional): Maximum number of results to return. Defaults to 10.
        
    Returns:
        str: Formatted search results as text, or an error message if the search fails.
    """
    return await search_duckduckgo(keywords=query, region=region, safesearch=safesearch, timelimit=timelimit, max_results=max_results)

@mcp.tool()
async def advanced_scrape(url: str) -> str:
    """
    Scrape a webpage using advanced techniques and return clean, well-formatted markdown.
    
    This tool uses Crawl4AI to extract the main content from a webpage while removing
    navigation bars, sidebars, footers, ads, and other non-essential elements. The result
    is clean, well-formatted markdown focused on the actual content of the page.
    
    Args:
        url (str): The URL to scrape. Can be with or without http(s):// prefix.
        
    Returns:
        str: Well-formatted markdown content if successful, or an error message if not.
    """
    return await crawl_and_extract_markdown(url)

if __name__ == "__main__":
    # Log Python version for debugging purposes
    print(f"Using Python {sys.version}", file=sys.stderr)
    
    # Set up command-line argument parsing with descriptive help messages
    parser = argparse.ArgumentParser(description="MCP Server with web tools")
    parser.add_argument(
        "--transport", 
        choices=["stdio", "sse"], 
        default="stdio",
        help="Transport mechanism to use (default: stdio)"
    )
    parser.add_argument(
        "--host", 
        default="localhost",
        help="Host to bind to when using SSE transport (default: localhost)"
    )
    parser.add_argument(
        "--port", 
        type=int, 
        default=5000,
        help="Port to bind to when using SSE transport (default: 5000)"
    )    
    args = parser.parse_args()
    
    # Start the server with the specified transport mechanism
    if args.transport == "stdio":
        print("Starting MCP server with stdio transport...", file=sys.stderr)
        mcp.run(transport="stdio")
    else:
        # FastMCP reads the SSE host/port from its settings rather than
        # from run() arguments, so apply the CLI options there first.
        mcp.settings.host = args.host
        mcp.settings.port = args.port
        print(f"Starting MCP server with SSE transport on {args.host}:{args.port}...", file=sys.stderr)
        mcp.run(transport="sse")

```

--------------------------------------------------------------------------------
/docs/01-introduction-to-mcp.md:
--------------------------------------------------------------------------------

```markdown
# Introduction to Model Context Protocol (MCP)

## What is MCP?

The Model Context Protocol (MCP) is an open standard that defines how Large Language Models (LLMs) like Claude, GPT, and others can interact with external systems, data sources, and tools. MCP establishes a standardized way for applications to provide context to LLMs, enabling them to access real-time data and perform actions beyond their training data.

Think of MCP as a "USB-C for AI" - a standard interface that allows different LLMs to connect to various data sources and tools without requiring custom integrations for each combination.

## Why MCP Exists

Before MCP, integrating LLMs with external tools and data sources required:

1. Custom integrations for each LLM and tool combination
2. Proprietary protocols specific to each LLM provider
3. Directly exposing APIs and data to the LLM, raising security concerns
4. Duplicating integration efforts across different projects

MCP solves these problems by:

1. **Standardization**: Defining a common protocol for all LLMs and tools
2. **Separation of concerns**: Keeping LLM interactions separate from tool functionality
3. **Security**: Providing controlled access to external systems
4. **Reusability**: Allowing tools to be shared across different LLMs and applications

## Key Benefits of MCP

- **Consistency**: Common interface across different LLMs and tools
- **Modularity**: Tools can be developed independently of LLMs
- **Security**: Fine-grained control over LLM access to systems
- **Ecosystem**: Growing library of pre-built tools and integrations
- **Flexibility**: Support for different transport mechanisms and deployment models
- **Vendor Agnosticism**: Not tied to any specific LLM provider

## Core Architecture

MCP follows a client-server architecture:

```mermaid
flowchart LR
    subgraph "Host Application"
        LLM[LLM Interface]
        Client[MCP Client]
    end
    subgraph "External Systems"
        Server1[MCP Server 1]
        Server2[MCP Server 2]
        Server3[MCP Server 3]
    end
    LLM <--> Client
    Client <--> Server1
    Client <--> Server2
    Client <--> Server3
    Server1 <--> DB[(Database)]
    Server2 <--> API[API Service]
    Server3 <--> Files[(File System)]
```

- **MCP Host**: An application that hosts an LLM (like Claude desktop app)
- **MCP Client**: The component in the host that communicates with MCP servers
- **MCP Server**: A service that exposes tools, resources, and prompts to clients
- **Transport Layer**: The communication mechanism between clients and servers (stdio, SSE, etc.)

## Core Components

MCP is built around three core primitives:

### 1. Tools

Tools are functions that LLMs can call to perform actions or retrieve information. They follow a request-response pattern, where the LLM provides input parameters and receives a result.

Examples:
- Searching a database
- Calculating values
- Making API calls
- Manipulating files

```mermaid
sequenceDiagram
    LLM->>MCP Client: Request tool execution
    MCP Client->>User: Request permission
    User->>MCP Client: Grant permission
    MCP Client->>MCP Server: Execute tool
    MCP Server->>MCP Client: Return result
    MCP Client->>LLM: Provide result
```
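
Concretely, a tool is a typed function plus a machine-readable schema describing its inputs. The sketch below is illustrative only (the `add` tool and `tool_schema` helper are made up, not part of any SDK); real SDKs such as FastMCP derive the schema automatically from the function signature:

```python
import inspect

def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

def tool_schema(fn):
    """Build a minimal MCP-style tool descriptor from a signature (illustrative)."""
    params = inspect.signature(fn).parameters
    return {
        "name": fn.__name__,
        "description": fn.__doc__,
        "inputSchema": {
            "type": "object",
            # All parameters assumed integers here purely for brevity
            "properties": {name: {"type": "integer"} for name in params},
            "required": list(params),
        },
    }
```

The schema is what the client shows the LLM so it knows how to call the tool.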

### 2. Resources

Resources are data sources that LLMs can read. They are identified by URIs and can be static or dynamic.

Examples:
- File contents
- Database records
- API responses
- System information

```mermaid
sequenceDiagram
    LLM->>MCP Client: Request resource
    MCP Client->>MCP Server: Get resource
    MCP Server->>MCP Client: Return resource content
    MCP Client->>LLM: Provide content
```
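
To make the idea concrete, this sketch models resources as a URI-to-content mapping (the URIs and the `read_resource` helper are hypothetical; real servers register resource handlers through their SDK):

```python
# Illustrative only: a resource is read-only content addressed by a URI.
RESOURCES = {
    "file:///logs/app.log": "2025-01-01 12:00:00 INFO server started",
    "config://app": '{"debug": false}',
}

def read_resource(uri: str) -> str:
    """Return the content behind a URI, or raise if it is unknown."""
    if uri not in RESOURCES:
        raise KeyError(f"unknown resource: {uri}")
    return RESOURCES[uri]
```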

### 3. Prompts

Prompts are templates that help LLMs interact with servers effectively. They provide structured ways to formulate requests.

Examples:
- Query templates
- Analysis frameworks
- Structured response formats

```mermaid
sequenceDiagram
    User->>MCP Client: Select prompt
    MCP Client->>MCP Server: Get prompt template
    MCP Server->>MCP Client: Return template
    MCP Client->>LLM: Apply template to interaction
```
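
A prompt can be thought of as a parameterized message template. The helper below is a hypothetical sketch of that shape, not an SDK API:

```python
def review_code_prompt(code: str) -> list:
    """Fill a reusable code-review template with the user's code (illustrative)."""
    return [
        {
            "role": "user",
            "content": f"Please review the following code and point out any bugs:\n\n{code}",
        }
    ]
```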

## Control Flow

An important aspect of MCP is how control flows between components:

| Component | Control | Description |
|-----------|---------|-------------|
| Tools | Model-controlled | LLM decides when to use tools (with user permission) |
| Resources | Application-controlled | The client app determines when to provide resources |
| Prompts | User-controlled | Explicitly selected by users for specific interactions |

This separation of control ensures that each component is used appropriately and securely.

## Transport Mechanisms

MCP supports multiple transport mechanisms for communication between clients and servers:

### 1. Standard Input/Output (stdio)

Uses standard input and output streams for communication. Ideal for:
- Local processes
- Command-line tools
- Simple integrations

### 2. Server-Sent Events (SSE)

Uses HTTP with Server-Sent Events for server-to-client messages and HTTP POST for client-to-server messages. Suitable for:
- Web applications
- Remote services
- Distributed systems

Both transports use [JSON-RPC](https://www.jsonrpc.org/) 2.0 as the messaging format.
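
For example, a tool invocation travels over either transport as a JSON-RPC 2.0 request. The sketch below shows the approximate shape of a `tools/call` message (the tool name and arguments are illustrative):

```python
import json

request = {
    "jsonrpc": "2.0",          # protocol version, always "2.0"
    "id": 1,                   # correlates the response with this request
    "method": "tools/call",    # MCP method for invoking a tool
    "params": {
        "name": "web_scrape",
        "arguments": {"url": "https://example.com"},
    },
}
print(json.dumps(request))
```

The server replies with a JSON-RPC response carrying the same `id` and either a `result` or an `error` object.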

## The MCP Ecosystem

The MCP ecosystem consists of:

- **MCP Specification**: The formal protocol definition
- **SDKs**: Libraries for building clients and servers in different languages
- **Pre-built Servers**: Ready-to-use servers for common services
- **Hosts**: Applications that support MCP for LLM interactions
- **Tools**: Community-developed tools and integrations

## Getting Started

To start working with MCP, you'll need:

1. An MCP host (like Claude Desktop or a custom client)
2. Access to MCP servers (pre-built or custom)
3. Basic understanding of the MCP concepts

The following documents in this series will guide you through:
- Building your own MCP servers
- Using existing MCP servers
- Troubleshooting common issues
- Extending the ecosystem with new tools

## Resources

- [Official MCP Documentation](https://modelcontextprotocol.io/)
- [MCP GitHub Organization](https://github.com/modelcontextprotocol)
- [MCP Specification](https://spec.modelcontextprotocol.io/)
- [Example Servers](https://github.com/modelcontextprotocol/servers)

```

--------------------------------------------------------------------------------
/frontend/pages/03_Documentation.py:
--------------------------------------------------------------------------------

```python
import streamlit as st
import os
import sys
from pathlib import Path
import re

# Add the parent directory to the Python path to import utils
sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))
from frontend.utils import get_markdown_files

st.title("Documentation")

# Define the docs directory path
docs_dir = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))), "docs")

# Helper function to calculate the height for a mermaid diagram
def calculate_diagram_height(mermaid_code):
    # Count the number of lines in the diagram
    line_count = len(mermaid_code.strip().split('\n'))
    
    # Estimate the height based on complexity and type
    base_height = 100  # Minimum height
    
    # Add height based on the number of lines
    line_height = 30 if line_count <= 5 else 25  # Adjust per-line height based on total lines for density
    height = base_height + (line_count * line_height)
    
    # Extra height for different diagram types
    if "flowchart" in mermaid_code.lower() or "graph" in mermaid_code.lower():
        height += 50
    elif "sequenceDiagram" in mermaid_code:
        height += 100  # Sequence diagrams typically need more height
    elif "classDiagram" in mermaid_code:
        height += 75
    
    # Extra height for diagrams with many connections
    if mermaid_code.count("-->") + mermaid_code.count("<--") + mermaid_code.count("-.-") > 5:
        height += 100
    
    # Extra height if many items in diagram
    node_count = len(re.findall(r'\[[^\]]+\]', mermaid_code))
    if node_count > 5:
        height += node_count * 20
    
    return height

# Helper function to extract and render mermaid diagrams
def render_markdown_with_mermaid(content):
    # Regular expression to find mermaid code blocks
    mermaid_pattern = r"```mermaid\s*([\s\S]*?)\s*```"
    
    # Find all mermaid diagrams
    mermaid_blocks = re.findall(mermaid_pattern, content)
    
    # Replace mermaid blocks with placeholders
    content_with_placeholders = re.sub(mermaid_pattern, "MERMAID_DIAGRAM_PLACEHOLDER", content)
    
    # Split content by placeholders
    parts = content_with_placeholders.split("MERMAID_DIAGRAM_PLACEHOLDER")
    
    # Render each part with mermaid diagrams in between
    for i, part in enumerate(parts):
        if part.strip():
            st.markdown(part)
        
        # Add mermaid diagram after this part (if there is one)
        if i < len(mermaid_blocks):
            mermaid_code = mermaid_blocks[i]
            
            # Calculate appropriate height for this diagram
            diagram_height = calculate_diagram_height(mermaid_code)
            
            # Render mermaid diagram using streamlit components
            st.components.v1.html(
                f"""
                <div class="mermaid" style="margin: 20px 0;">
                {mermaid_code}
                </div>
                <script src="https://cdn.jsdelivr.net/npm/mermaid@9/dist/mermaid.min.js"></script>
                <script>
                    mermaid.initialize({{ 
                        startOnLoad: true,
                        theme: 'default',
                        flowchart: {{ 
                            useMaxWidth: false,
                            htmlLabels: true,
                            curve: 'cardinal'
                        }}
                    }});
                </script>
                """,
                height=diagram_height,
                scrolling=True
            )

# Check if docs directory exists
if not os.path.exists(docs_dir):
    st.error(f"Documentation directory not found: {docs_dir}")
else:
    # Get list of markdown files
    markdown_files = get_markdown_files(docs_dir)
    
    # Sidebar for document selection
    with st.sidebar:
        st.subheader("Select Document")
        
        if not markdown_files:
            st.info("No documentation files found")
        else:
            # Create options for the selectbox - use filenames without path and extension
            file_options = [f.stem for f in markdown_files]
            
            # Select document
            selected_doc = st.selectbox(
                "Choose a document", 
                file_options,
                format_func=lambda x: x.replace("-", " ").title(),
                key="doc_selection"
            )
            
            # Find the selected file path
            selected_file_path = next((f for f in markdown_files if f.stem == selected_doc), None)
            
            # Store selection in session state
            if selected_file_path:
                st.session_state["selected_doc_path"] = str(selected_file_path)

    # Display the selected markdown file
    if "selected_doc_path" in st.session_state:
        selected_path = st.session_state["selected_doc_path"]
        
        try:
            with open(selected_path, 'r') as f:
                content = f.read()
            
            # Set style for better code rendering
            st.markdown(
                """
                <style>
                code {
                    white-space: pre-wrap !important;
                }
                .mermaid {
                    text-align: center !important;
                }
                </style>
                """, 
                unsafe_allow_html=True
            )
            
            # Use the custom function to render markdown with mermaid
            render_markdown_with_mermaid(content)
            
        except Exception as e:
            st.error(f"Error loading document: {str(e)}")
    else:
        if markdown_files:
            # Display the first document by default
            try:
                with open(str(markdown_files[0]), 'r') as f:
                    content = f.read()
                
                # Use the custom function to render markdown with mermaid
                render_markdown_with_mermaid(content)
                
                # Store the selected doc in session state
                st.session_state["selected_doc_path"] = str(markdown_files[0])
            except Exception as e:
                st.error(f"Error loading default document: {str(e)}")
        else:
            st.info("Select a document from the sidebar to view documentation")

```

--------------------------------------------------------------------------------
/frontend/utils.py:
--------------------------------------------------------------------------------

```python
import os
import json
import json5
import streamlit as st
import subprocess
import asyncio
import sys
import shutil
from pathlib import Path
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Default config path for Claude Desktop (macOS location; adjust for Windows/Linux)
default_config_path = os.path.expanduser("~/Library/Application Support/Claude/claude_desktop_config.json")

def load_config(config_path):
    """Load the Claude Desktop config file"""
    try:
        with open(config_path, 'r') as f:
            # Use json5 to handle potential JSON5 format (comments, trailing commas)
            return json5.load(f)
    except Exception as e:
        st.error(f"Error loading config file: {str(e)}")
        return None

def find_executable(name):
    """Find the full path to an executable"""
    path = shutil.which(name)
    if path:
        return path
    
    # Try common locations for Node.js executables
    if name in ['node', 'npm', 'npx']:
        # Check user's home directory for nvm or other Node.js installations
        home = Path.home()
        possible_paths = [
            home / '.nvm' / 'versions' / 'node' / '*' / 'bin' / name,
            home / 'node_modules' / '.bin' / name,
            home / '.npm-global' / 'bin' / name,
            # Add Mac Homebrew path
            Path('/usr/local/bin') / name,
            Path('/opt/homebrew/bin') / name,
        ]
        
        for p in possible_paths:
            if isinstance(p, Path) and '*' in str(p):
                # Handle wildcard paths
                parent = p.parent.parent
                if parent.exists():
                    for version_dir in parent.glob('*'):
                        full_path = version_dir / 'bin' / name
                        if full_path.exists():
                            return str(full_path)
            elif Path(str(p)).exists():
                return str(p)
    
    return None

def check_node_installations():
    """Check if Node.js, npm, and npx are installed and return their versions"""
    node_installed = bool(find_executable('node'))
    node_version = None
    npm_installed = bool(find_executable('npm'))
    npm_version = None
    npx_installed = bool(find_executable('npx'))
    npx_version = None

    if node_installed:
        try:
            node_version = subprocess.check_output([find_executable('node'), '--version']).decode().strip()
        except Exception:
            pass

    if npm_installed:
        try:
            npm_version = subprocess.check_output([find_executable('npm'), '--version']).decode().strip()
        except Exception:
            pass
            
    if npx_installed:
        try:
            npx_version = subprocess.check_output([find_executable('npx'), '--version']).decode().strip()
        except Exception:
            pass
    
    return {
        'node': {'installed': node_installed, 'version': node_version},
        'npm': {'installed': npm_installed, 'version': npm_version},
        'npx': {'installed': npx_installed, 'version': npx_version}
    }

async def connect_to_server(command, args=None, env=None):
    """Connect to an MCP server and list its tools"""
    try:
        # Find the full path to the command
        print(f"Finding executable for command: {command}")
        full_command = find_executable(command)
        if not full_command:
            st.error(f"Command '{command}' not found. Make sure it's installed and in your PATH.")
            if command == 'npx':
                st.error("Node.js may not be installed or properly configured. Install Node.js from https://nodejs.org")
            return {"tools": [], "resources": [], "prompts": []}
        
        # Use the full path to the command
        command = full_command
        
        server_params = StdioServerParameters(
            command=command,
            args=args or [],
            env=env or {}
        )
        print(f"Connecting to server with command: {command} and args: {args}")
        
        async with stdio_client(server_params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                
                # List tools
                tools_result = await session.list_tools()
                
                # Try to list resources and prompts
                try:
                    resources_result = await session.list_resources()
                    resources = resources_result.resources if hasattr(resources_result, 'resources') else []
                except Exception:
                    resources = []
                
                try:
                    prompts_result = await session.list_prompts()
                    prompts = prompts_result.prompts if hasattr(prompts_result, 'prompts') else []
                except Exception:
                    prompts = []
                
                return {
                    "tools": tools_result.tools if hasattr(tools_result, 'tools') else [],
                    "resources": resources,
                    "prompts": prompts
                }
    except Exception as e:
        st.error(f"Error connecting to server: {str(e)}")
        return {"tools": [], "resources": [], "prompts": []}

async def call_tool(command, args, tool_name, tool_args):
    """Call a specific tool and return the result"""
    try:
        # Find the full path to the command
        full_command = find_executable(command)
        if not full_command:
            return f"Error: Command '{command}' not found. Make sure it's installed and in your PATH."
        
        # Use the full path to the command
        command = full_command
        
        server_params = StdioServerParameters(
            command=command,
            args=args or [],
            env={}
        )
        
        async with stdio_client(server_params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                
                # Call the tool
                result = await session.call_tool(tool_name, arguments=tool_args)
                
                # Format the result
                if hasattr(result, 'content') and result.content:
                    content_text = []
                    for item in result.content:
                        if hasattr(item, 'text'):
                            content_text.append(item.text)
                    return "\n".join(content_text)
                return "Tool executed, but no text content was returned."
    except Exception as e:
        return f"Error calling tool: {str(e)}"

def get_markdown_files(docs_folder):
    """Get list of markdown files in the docs folder"""
    docs_path = Path(docs_folder)
    if not docs_path.exists() or not docs_path.is_dir():
        return []
    
    return sorted([f for f in docs_path.glob('*.md')])

```

--------------------------------------------------------------------------------
/tools/crawl4ai_scraper.py:
--------------------------------------------------------------------------------

```python
"""
Crawl4AI web scraping tool for MCP server.

This module provides advanced web scraping functionality using Crawl4AI.
It extracts content from web pages, removes non-essential elements like
navigation bars, footers, and sidebars, and returns well-formatted markdown
that preserves document structure including headings, code blocks, tables,
and image references.

Features:
- Clean content extraction with navigation, sidebar, and footer removal
- Preserves document structure (headings, lists, tables, code blocks)
- Automatic conversion to well-formatted markdown
- Support for JavaScript-rendered content
- Content filtering to focus on the main article/content
- Comprehensive error handling
"""

import asyncio
import os
import re
import logging
from typing import Optional

from crawl4ai import AsyncWebCrawler, CrawlerRunConfig, BrowserConfig, CacheMode
from crawl4ai.content_filter_strategy import PruningContentFilter
from crawl4ai.markdown_generation_strategy import DefaultMarkdownGenerator

# Set up logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger("crawl4ai_scraper")

async def crawl_and_extract_markdown(url: str, query: Optional[str] = None) -> str:
    """
    Crawl a webpage and extract well-formatted markdown content.
    
    Args:
        url: The URL to crawl
        query: Optional search query to focus content on (if None, extracts main content)
    
    Returns:
        str: Well-formatted markdown content from the webpage
    
    Raises:
        Exception: If crawling fails or content extraction encounters errors
    """
    try:
        # Configure the browser for optimal rendering
        browser_config = BrowserConfig(
            headless=True,
            viewport_width=1920,  # Wider viewport to capture more content
            viewport_height=1080,  # Taller viewport for the same reason
            java_script_enabled=True,
            text_mode=False,  # Set to False to ensure all content is loaded
            user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"
        )
        
        # Create a content filter for removing unwanted elements
        content_filter = PruningContentFilter(
            threshold=0.1,  # Very low threshold to keep more content
            threshold_type="dynamic",  # Dynamic threshold based on page content
            min_word_threshold=2  # Include very short text blocks for headings/code
        )
        
        # Configure markdown generator with options for structure preservation
        markdown_generator = DefaultMarkdownGenerator(
            content_filter=content_filter,
            options={
                "body_width": 0,         # No wrapping
                "ignore_images": False,   # Keep image references
                "citations": True,        # Include link citations
                "escape_html": False,     # Don't escape HTML in code blocks
                "include_sup_sub": True,  # Preserve superscript/subscript
                "pad_tables": True,       # Better table formatting
                "mark_code": True,        # Better code block preservation
                "code_language": "",      # Default code language
                "wrap_links": False       # Preserve link formatting
            }
        )
        
        # Configure the crawler run for optimal structure extraction
        run_config = CrawlerRunConfig(
            verbose=False,
            # Content filtering
            markdown_generator=markdown_generator,
            word_count_threshold=2,  # Extremely low to include very short text blocks
            
            # Tag exclusions - remove unwanted elements
            excluded_tags=["nav", "footer", "aside"],
            excluded_selector=".nav, .navbar, .sidebar, .footer, #footer, #sidebar, " +
                             ".ads, .advertisement, .navigation, #navigation, " +
                             ".menu, #menu, .toc, .table-of-contents",
            
            # Wait conditions for JS content
            wait_until="networkidle",
            wait_for="css:pre, code, h1, h2, h3, table",  # Wait for important structural elements 
            page_timeout=60000,
            
            # Don't limit to specific selectors to get full content
            css_selector=None,
            
            # Other options
            remove_overlay_elements=True,    # Remove modal popups
            remove_forms=True,               # Remove forms
            scan_full_page=True,             # Scan the full page
            scroll_delay=0.5,                # Slower scroll for better content loading
            cache_mode=CacheMode.BYPASS      # Bypass cache for fresh content
        )
        
        # Create crawler and run it
        async with AsyncWebCrawler(config=browser_config) as crawler:
            result = await crawler.arun(url=url, config=run_config)
            
            if not result.success:
                raise Exception(f"Crawl failed: {result.error_message}")
            
            # Extract the title from metadata if available
            title = "Untitled Document"
            if result.metadata and "title" in result.metadata:
                title = result.metadata["title"]
            
            # Choose the best markdown content
            markdown_content = ""
            
            # Try to get the best version of the markdown
            if hasattr(result, "markdown_v2") and result.markdown_v2:
                if hasattr(result.markdown_v2, 'raw_markdown') and result.markdown_v2.raw_markdown:
                    markdown_content = result.markdown_v2.raw_markdown
                elif hasattr(result.markdown_v2, 'markdown_with_citations') and result.markdown_v2.markdown_with_citations:
                    markdown_content = result.markdown_v2.markdown_with_citations
            elif hasattr(result, "markdown") and result.markdown:
                if isinstance(result.markdown, str):
                    markdown_content = result.markdown
                elif hasattr(result.markdown, 'raw_markdown'):
                    markdown_content = result.markdown.raw_markdown
            elif result.cleaned_html:
                from html2text import html2text
                markdown_content = html2text(result.cleaned_html)
            
            # Post-process the markdown to fix common issues
            
            # 1. Fix code blocks - tag untagged opening fences as python,
            #    tracking open/close state so closing fences stay bare
            fixed_lines = []
            in_code_block = False
            for line in markdown_content.split('\n'):
                if line.strip().startswith('```'):
                    if not in_code_block and line.strip() == '```':
                        line = line.replace('```', '```python', 1)
                    in_code_block = not in_code_block
                fixed_lines.append(line)
            markdown_content = '\n'.join(fixed_lines)
            
            # 2. Fix broken headings - ensure space after # characters
            markdown_content = re.sub(r'^(#{1,6})([^#\s])', r'\1 \2', markdown_content, flags=re.MULTILINE)
            
            # 3. Add spacing between sections for readability
            markdown_content = re.sub(r'(\n#{1,6} .+?\n)(?=[^\n])', r'\1\n', markdown_content)
            
            # 4. Fix bullet points - ensure proper spacing
            markdown_content = re.sub(r'^\*([^\s])', r'* \1', markdown_content, flags=re.MULTILINE)
            
            # 5. Format the final content with title and URL
            final_content = f"Title: {title}\n\nURL Source: {result.url}\n\nMarkdown Content:\n{markdown_content}"
            
            return final_content
                
    except Exception as e:
        logger.error(f"Error crawling {url}: {str(e)}")
        raise Exception(f"Error crawling {url}: {str(e)}")

# Standalone test functionality
if __name__ == "__main__":
    import argparse
    
    parser = argparse.ArgumentParser(description="Extract structured markdown content from a webpage")
    parser.add_argument("url", nargs="?", default="https://docs.llamaindex.ai/en/stable/understanding/agent/", 
                        help="URL to crawl (default: https://docs.llamaindex.ai/en/stable/understanding/agent/)")
    parser.add_argument("--output", help="Output file to save the markdown (default: scraped_content.md)")
    parser.add_argument("--query", help="Optional search query to focus content")
    
    args = parser.parse_args()
    
    async def test():
        url = args.url
        print(f"Scraping {url}...")
        
        try:
            if args.query:
                result = await crawl_and_extract_markdown(url, args.query)
            else:
                result = await crawl_and_extract_markdown(url)
            
            # Show preview of content
            preview_length = min(1000, len(result))
            print("\nResult Preview (first 1000 chars):")
            print(result[:preview_length] + "...\n" if len(result) > preview_length else result)
            
            # Print statistics
            print(f"\nMarkdown length: {len(result)} characters")
            
            # Save to file
            output_file = args.output if args.output else "scraped_content.md"
            with open(output_file, "w", encoding="utf-8") as f:
                f.write(result)
            print(f"Full content saved to '{output_file}'")
            
            return 0
        except Exception as e:
            print(f"Error: {str(e)}")
            return 1
    
    # Run the test function in an async event loop
    import sys
    exit_code = asyncio.run(test())
    sys.exit(exit_code)

```

--------------------------------------------------------------------------------
/frontend/pages/01_My_Active_Servers.py:
--------------------------------------------------------------------------------

```python
import os
import json
import streamlit as st
import asyncio
import sys

# Add the parent directory to the Python path to import utils
sys.path.append(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))
from frontend.utils import load_config, connect_to_server, call_tool, default_config_path

st.title("My Active MCP Servers")

# Configuration and server selection in the sidebar
with st.sidebar:
    st.subheader("Configuration")
    
    # Config file path input with unique key
    config_path = st.text_input(
        "Path to config file", 
        value=st.session_state.get('config_path', default_config_path),
        key="config_path_input_sidebar"
    )
    
    # Update the session state with the new path
    st.session_state.config_path = config_path
    
    if st.button("Load Servers", key="load_servers_sidebar"):
        if os.path.exists(config_path):
            config_data = load_config(config_path)
            if config_data and 'mcpServers' in config_data:
                st.session_state.config_data = config_data
                st.session_state.servers = config_data.get('mcpServers', {})
                
                # Add debug message
                message = f"Found {len(st.session_state.servers)} MCP servers in config"
                if 'debug_messages' in st.session_state:
                    st.session_state.debug_messages.append(message)
                
                st.success(message)
            else:
                error_msg = "No MCP servers found in config"
                if 'debug_messages' in st.session_state:
                    st.session_state.debug_messages.append(error_msg)
                st.error(error_msg)
        else:
            error_msg = f"Config file not found: {config_path}"
            if 'debug_messages' in st.session_state:
                st.session_state.debug_messages.append(error_msg)
            st.error(error_msg)
    
    # Server selection dropdown
    st.divider()
    st.subheader("Server Selection")
    
    if 'servers' in st.session_state and st.session_state.servers:
        server_names = list(st.session_state.servers.keys())
        selected_server = st.selectbox(
            "Select an MCP server", 
            server_names,
            key="server_selection_sidebar"
        )
        
        if st.button("Connect", key="connect_button_sidebar"):
            server_config = st.session_state.servers.get(selected_server, {})
            command = server_config.get('command')
            args = server_config.get('args', [])
            env = server_config.get('env', {})
            
            with st.spinner(f"Connecting to {selected_server}..."):
                # Add debug message
                debug_msg = f"Connecting to {selected_server}..."
                if 'debug_messages' in st.session_state:
                    st.session_state.debug_messages.append(debug_msg)
                
                # Connect to the server
                server_info = asyncio.run(connect_to_server(command, args, env))
                st.session_state[f'server_info_{selected_server}'] = server_info
                st.session_state.active_server = selected_server
                
                # Add debug message about connection success/failure
                if server_info.get('tools'):
                    success_msg = f"Connected to {selected_server}: {len(server_info['tools'])} tools"
                    if 'debug_messages' in st.session_state:
                        st.session_state.debug_messages.append(success_msg)
                else:
                    error_msg = f"Connected to {selected_server} but no tools found"
                    if 'debug_messages' in st.session_state:
                        st.session_state.debug_messages.append(error_msg)
                
                # Force the page to refresh to show connected server details
                st.rerun()
    else:
        st.info("Load config to see servers")

# Main area: Only display content when a server is connected
if 'active_server' in st.session_state and st.session_state.active_server:
    active_server = st.session_state.active_server
    server_info_key = f'server_info_{active_server}'
    
    if server_info_key in st.session_state:
        st.subheader(f"Connected to: {active_server}")
        
        server_info = st.session_state[server_info_key]
        server_config = st.session_state.servers.get(active_server, {})
        
        # Display server configuration
        with st.expander("Server Configuration"):
            st.json(server_config)
        
        # Display tools
        if server_info.get('tools'):
            st.subheader("Available Tools")
            
            # Create tabs for each tool
            tool_tabs = st.tabs([tool.name for tool in server_info['tools']])
            
            for i, tool in enumerate(server_info['tools']):
                with tool_tabs[i]:
                    st.markdown(f"**Description:** {tool.description or 'No description provided'}")
                    
                    # Tool schema
                    if hasattr(tool, 'inputSchema') and tool.inputSchema:
                        with st.expander("Input Schema"):
                            st.json(tool.inputSchema)
                        
                        # Generate form for tool inputs
                        st.subheader("Call Tool")
                        
                        # Create a form
                        with st.form(key=f"tool_form_{active_server}_{tool.name}"):
                            # Fix duplicate ID error by adding unique keys for form fields
                            tool_inputs = {}
                            
                            # Check if input schema has properties
                            if 'properties' in tool.inputSchema:
                                # Create form inputs based on schema properties
                                for param_name, param_schema in tool.inputSchema['properties'].items():
                                    param_type = param_schema.get('type', 'string')
                                    
                                    # Create unique key for each form field
                                    field_key = f"{active_server}_{tool.name}_{param_name}"
                                    
                                    if param_type == 'string':
                                        tool_inputs[param_name] = st.text_input(
                                            f"{param_name}", 
                                            help=param_schema.get('description', ''),
                                            key=field_key
                                        )
                                    elif param_type == 'number' or param_type == 'integer':
                                        tool_inputs[param_name] = st.number_input(
                                            f"{param_name}", 
                                            help=param_schema.get('description', ''),
                                            key=field_key
                                        )
                                    elif param_type == 'boolean':
                                        tool_inputs[param_name] = st.checkbox(
                                            f"{param_name}", 
                                            help=param_schema.get('description', ''),
                                            key=field_key
                                        )
                                    # Add more types as needed
                            
                            # Submit button
                            submit_button = st.form_submit_button(f"Execute {tool.name}")
                            
                            if submit_button:
                                # Get server config
                                command = server_config.get('command')
                                args = server_config.get('args', [])
                                
                                with st.spinner(f"Executing {tool.name}..."):
                                    # Add debug message
                                    if 'debug_messages' in st.session_state:
                                        st.session_state.debug_messages.append(f"Executing {tool.name}")
                                    
                                    # Call the tool
                                    result = asyncio.run(call_tool(command, args, tool.name, tool_inputs))
                                    
                                    # Display result
                                    st.subheader("Result")
                                    st.write(result)
                    else:
                        st.warning("No input schema available for this tool")
        
        # Display resources if any
        if server_info.get('resources'):
            with st.expander("Resources"):
                for resource in server_info['resources']:
                    st.write(f"**{resource.name}:** {resource.uri}")
                    if hasattr(resource, 'description') and resource.description:
                        st.write(resource.description)
                    st.divider()
        
        # Display prompts if any
        if server_info.get('prompts'):
            with st.expander("Prompts"):
                for prompt in server_info['prompts']:
                    st.write(f"**{prompt.name}**")
                    if hasattr(prompt, 'description') and prompt.description:
                        st.write(prompt.description)
                    st.divider()
    else:
        st.info(f"Server {active_server} is selected but not connected. Click 'Connect' in the sidebar.")
else:
    # Initial state when no server is connected
    st.info("Select a server from the sidebar and click 'Connect' to start interacting with it.")

```

--------------------------------------------------------------------------------
/docs/08-advanced-mcp-features.md:
--------------------------------------------------------------------------------

```markdown
# Advanced MCP Features

This document explores advanced features and configurations for Model Context Protocol (MCP) servers. These techniques can help you build more powerful, secure, and maintainable MCP implementations.

## Advanced Configuration

### Server Lifecycle Management

The MCP server lifecycle can be managed with the `lifespan` parameter to set up resources on startup and clean them up on shutdown:

```python
from contextlib import asynccontextmanager
from typing import AsyncIterator, Dict, Any
from mcp.server.fastmcp import Context, FastMCP

@asynccontextmanager
async def server_lifespan(server: FastMCP) -> AsyncIterator[Dict[str, Any]]:
    """Manage server lifecycle."""
    print("Server starting up...")
    
    # Initialize resources
    db_connection = await initialize_database()
    cache = initialize_cache()
    
    try:
        # Yield context to server
        yield {
            "db": db_connection,
            "cache": cache
        }
    finally:
        # Clean up resources
        print("Server shutting down...")
        await db_connection.close()
        cache.clear()

# Create server with lifespan
mcp = FastMCP("AdvancedServer", lifespan=server_lifespan)

# Access lifespan context in tools
@mcp.tool()
async def query_database(sql: str, ctx: Context) -> str:
    """Run a database query."""
    db = ctx.request_context.lifespan_context["db"]
    results = await db.execute(sql)
    return results
```

### Dependency Specification

You can specify dependencies for your server to ensure it has everything it needs:

```python
# Specify dependencies for the server
mcp = FastMCP(
    "DependentServer",
    dependencies=[
        "pandas>=1.5.0",
        "numpy>=1.23.0",
        "scikit-learn>=1.1.0"
    ]
)
```

This helps with:
- Documentation for users
- Verification during installation
- Clarity about requirements

### Environment Variables

Use environment variables for configuration:

```python
import os

import httpx
from dotenv import load_dotenv
from mcp.server.fastmcp import Context, FastMCP

# Load environment variables from .env file
load_dotenv()

# Access environment variables
API_KEY = os.environ.get("MY_API_KEY")
BASE_URL = os.environ.get("MY_BASE_URL", "https://api.default.com")
DEBUG = os.environ.get("DEBUG", "false").lower() == "true"

# Create server with configuration
mcp = FastMCP(
    "ConfigurableServer",
    config={
        "api_key": API_KEY,
        "base_url": BASE_URL,
        "debug": DEBUG
    }
)

# Access configuration in tools
@mcp.tool()
async def call_api(endpoint: str, ctx: Context) -> str:
    """Call an API endpoint."""
    config = ctx.server.config
    base_url = config["base_url"]
    api_key = config["api_key"]
    
    # Use configuration
    async with httpx.AsyncClient() as client:
        response = await client.get(
            f"{base_url}/{endpoint}",
            headers={"Authorization": f"Bearer {api_key}"}
        )
        return response.text
```

## Advanced Logging

### Structured Logging

Implement structured logging for better analysis:

```python
import logging
import json
from datetime import datetime

class StructuredFormatter(logging.Formatter):
    """Format logs as JSON for structured logging."""
    
    def format(self, record):
        log_data = {
            "timestamp": datetime.utcnow().isoformat(),
            "level": record.levelname,
            "name": record.name,
            "message": record.getMessage(),
            "module": record.module,
            "function": record.funcName,
            "line": record.lineno
        }
        
        # Add exception info if present
        if record.exc_info:
            log_data["exception"] = self.formatException(record.exc_info)
        
        # Add custom fields if present
        if hasattr(record, "data"):
            log_data.update(record.data)
        
        return json.dumps(log_data)

# Set up structured logging
logger = logging.getLogger("mcp")
handler = logging.FileHandler("mcp_server.log")
handler.setFormatter(StructuredFormatter())
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

# Log with extra data
def log_with_data(level, message, **kwargs):
    record = logging.LogRecord(
        name="mcp",
        level=level,
        pathname="",
        lineno=0,
        msg=message,
        args=(),
        exc_info=None
    )
    record.data = kwargs
    logger.handle(record)

# Usage
log_with_data(
    logging.INFO,
    "Tool execution completed",
    tool="web_scrape",
    url="example.com",
    execution_time=1.25,
    result_size=1024
)
```

### Client Notifications

Send logging messages to clients:

```python
@mcp.tool()
async def process_data(data: str, ctx: Context) -> str:
    """Process data with client notifications."""
    try:
        # Send info message to client (Context log methods are async)
        await ctx.info("Starting data processing")
        
        # Process data in steps
        await ctx.info("Step 1: Parsing data")
        parsed_data = parse_data(data)
        
        await ctx.info("Step 2: Analyzing data")
        analysis = analyze_data(parsed_data)
        
        await ctx.info("Step 3: Generating report")
        report = generate_report(analysis)
        
        await ctx.info("Processing complete")
        return report
        
    except Exception as e:
        # Send error message to client
        await ctx.error(f"Processing failed: {str(e)}")
        raise
```

### Progress Reporting

Report progress for long-running operations:

```python
import os

import aiofiles

@mcp.tool()
async def process_large_file(file_path: str, ctx: Context) -> str:
    """Process a large file with progress reporting."""
    try:
        # Get file size
        file_size = os.path.getsize(file_path)
        bytes_processed = 0
        
        # Open file
        async with aiofiles.open(file_path, "rb") as f:
            # Process in chunks
            chunk_size = 1024 * 1024  # 1 MB
            while True:
                chunk = await f.read(chunk_size)
                if not chunk:
                    break
                    
                # Process chunk
                process_chunk(chunk)
                
                # Update progress
                bytes_processed += len(chunk)
                progress = min(100, int(bytes_processed * 100 / file_size))
                await ctx.report_progress(progress)
                
                # Log milestone
                if progress % 10 == 0:
                    await ctx.info(f"Processed {progress}% of file")
        
        return f"File processing complete. Processed {file_size} bytes."
        
    except Exception as e:
        await ctx.error(f"File processing failed: {str(e)}")
        return f"Error: {str(e)}"
```

## Security Features

### Input Validation

Implement thorough input validation:

```python
from pydantic import BaseModel, Field, validator

class SearchParams(BaseModel):
    """Validated search parameters."""
    query: str = Field(..., min_length=1, max_length=100)
    days: int = Field(7, ge=1, le=30)
    limit: int = Field(5, ge=1, le=100)
    
    @validator('query')
    def query_must_be_valid(cls, v):
        import re
        if not re.match(r'^[a-zA-Z0-9\s\-.,?!]+$', v):
            raise ValueError('Query contains invalid characters')
        return v

@mcp.tool()
async def search_with_validation(params: dict) -> str:
    """Search with validated parameters."""
    try:
        # Validate parameters
        validated = SearchParams(**params)
        
        # Proceed with validated parameters
        results = await perform_search(
            validated.query,
            validated.days,
            validated.limit
        )
        
        return format_results(results)
        
    except Exception as e:
        return f"Validation error: {str(e)}"
```

### Rate Limiting

Implement rate limiting to prevent abuse:

```python
import time

# Simple rate limiter
class RateLimiter:
    def __init__(self, calls_per_minute=60):
        self.calls_per_minute = calls_per_minute
        self.interval = 60 / calls_per_minute  # seconds per call
        self.last_call_times = {}
    
    async def limit(self, key):
        """Limit calls for a specific key."""
        now = time.time()
        
        # Initialize if first call
        if key not in self.last_call_times:
            self.last_call_times[key] = [now]
            return
        
        # Get calls within the last minute
        minute_ago = now - 60
        recent_calls = [t for t in self.last_call_times[key] if t > minute_ago]
        
        # Check if rate limit exceeded
        if len(recent_calls) >= self.calls_per_minute:
            oldest_call = min(recent_calls)
            wait_time = 60 - (now - oldest_call)
            raise ValueError(f"Rate limit exceeded. Try again in {wait_time:.1f} seconds.")
        
        # Update call times
        self.last_call_times[key] = recent_calls + [now]

# Create rate limiter
rate_limiter = RateLimiter(calls_per_minute=10)

# Apply rate limiting to a tool
@mcp.tool()
async def rate_limited_api_call(endpoint: str) -> str:
    """Call API with rate limiting."""
    try:
        # Apply rate limit
        await rate_limiter.limit("api_call")
        
        # Proceed with API call
        async with httpx.AsyncClient() as client:
            response = await client.get(f"https://api.example.com/{endpoint}")
            return response.text
            
    except ValueError as e:
        return f"Error: {str(e)}"
```

### Access Control

Implement access controls for sensitive operations:

```python
from functools import wraps

from mcp.server.fastmcp import Context

# Define access levels
class AccessLevel:
    READ = 1
    WRITE = 2
    ADMIN = 3

# Access control decorator
def require_access(level):
    def decorator(func):
        @wraps(func)
        async def wrapper(*args, **kwargs):
            # Get context from args
            ctx = None
            for arg in args:
                if isinstance(arg, Context):
                    ctx = arg
                    break
            
            if ctx is None:
                for arg_name, arg_value in kwargs.items():
                    if isinstance(arg_value, Context):
                        ctx = arg_value
                        break
            
            if ctx is None:
                return "Error: Context not provided"
            
            # Check access level
            user_level = get_user_access_level(ctx)
            if user_level < level:
                return "Error: Insufficient permissions"
            
            # Proceed with function
            return await func(*args, **kwargs)
        return wrapper
    return decorator

# Get user access level from context
def get_user_access_level(ctx):
    # In practice, this would use authentication information
    # For demonstration, return READ
    return AccessLevel.READ

```

--------------------------------------------------------------------------------
/docs/02-mcp-core-concepts.md:
--------------------------------------------------------------------------------

```markdown
# MCP Core Concepts: Tools, Resources, and Prompts

## Understanding the Core Primitives

The Model Context Protocol (MCP) is built around three foundational primitives that determine how LLMs interact with external systems:

1. **Tools**: Functions that LLMs can call to perform actions
2. **Resources**: Data sources that LLMs can access
3. **Prompts**: Templates that guide LLM interactions

Each primitive serves a distinct purpose in the MCP ecosystem and comes with its own control flow, usage patterns, and implementation considerations. Understanding when and how to use each is essential for effective MCP development.

## The Control Matrix

A key concept in MCP is who controls each primitive:

| Primitive | Control          | Access Pattern  | Typical Use Cases                     | Security Model                     |
|-----------|------------------|-----------------|--------------------------------------|-----------------------------------|
| Tools     | Model-controlled | Execute         | API calls, calculations, processing   | User permission before execution   |
| Resources | App-controlled   | Read            | Files, database records, context      | App decides which resources to use |
| Prompts   | User-controlled  | Apply template  | Structured queries, common workflows  | Explicitly user-selected          |

This control matrix ensures that each component operates within appropriate boundaries and security constraints.

## Tools in Depth

### What Are Tools?

Tools are executable functions that allow LLMs to perform actions and retrieve information. They are analogous to API endpoints but specifically designed for LLM consumption.

```mermaid
flowchart LR
    LLM[LLM] -->|Request + Parameters| Tool[Tool]
    Tool -->|Result| LLM
    Tool -->|Execute| Action[Action]
    Action -->|Result| Tool
```

### Key Characteristics of Tools

- **Model-controlled**: The LLM decides when to call a tool
- **Request-response pattern**: Tools accept parameters and return results
- **Side effects**: Tools may have side effects (e.g., modifying data)
- **Permission-based**: Tool execution typically requires user permission
- **Formal schema**: Tools have well-defined input and output schemas

### When to Use Tools

Use tools when:

- The LLM needs to perform an action (not just read data)
- The operation has potential side effects
- The operation requires specific parameters
- You want the LLM to decide when to use the functionality
- The operation produces results that affect further LLM reasoning

### Tool Example: Weather Service

```python
@mcp.tool()
async def get_forecast(latitude: float, longitude: float) -> str:
    """
    Get weather forecast for a location.
    
    Args:
        latitude: Latitude of the location
        longitude: Longitude of the location
    
    Returns:
        Formatted forecast text
    """
    # Implementation details...
    return forecast_text
```

### Tool Schema

Each tool provides a JSON Schema that defines its input parameters:

```json
{
  "name": "get_forecast",
  "description": "Get weather forecast for a location",
  "inputSchema": {
    "type": "object",
    "properties": {
      "latitude": {
        "type": "number",
        "description": "Latitude of the location"
      },
      "longitude": {
        "type": "number",
        "description": "Longitude of the location"
      }
    },
    "required": ["latitude", "longitude"]
  }
}
```
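
For intuition, the schema above can be derived mechanically from the function's signature. Here is a hedged sketch of that mapping (FastMCP generates the real schema for you; `derive_schema` and `TYPE_MAP` are illustrative helpers, not SDK APIs):

```python
import inspect
from typing import get_type_hints

# Maps Python annotations to JSON Schema type names (subset, for illustration)
TYPE_MAP = {float: "number", int: "integer", str: "string", bool: "boolean"}

def derive_schema(func) -> dict:
    """Derive a JSON Schema-style parameter description from a signature."""
    hints = get_type_hints(func)
    properties = {}
    required = []
    for name, param in inspect.signature(func).parameters.items():
        properties[name] = {"type": TYPE_MAP.get(hints.get(name), "string")}
        # Parameters without a default value become required
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {"type": "object", "properties": properties, "required": required}

async def get_forecast(latitude: float, longitude: float) -> str:
    ...

schema = derive_schema(get_forecast)
# schema["properties"]["latitude"] == {"type": "number"}
# schema["required"] == ["latitude", "longitude"]
```

This is why accurate type hints matter: they become the contract the LLM sees when deciding how to call the tool.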

### Tool Execution Flow

```mermaid
sequenceDiagram
    participant LLM
    participant Client
    participant User
    participant Server
    participant External as External System
    
    LLM->>Client: Request tool execution
    Client->>User: Request permission
    User->>Client: Grant permission
    Client->>Server: Call tool with parameters
    Server->>External: Execute operation
    External->>Server: Return operation result
    Server->>Client: Return formatted result
    Client->>LLM: Provide result for reasoning
```

## Resources in Depth

### What Are Resources?

Resources are data sources that provide context to LLMs. They represent content that an LLM can read but not modify directly.

```mermaid
flowchart LR
    Resource[Resource] -->|Content| Client[MCP Client]
    Client -->|Context| LLM[LLM]
    DB[(Database)] -->|Data| Resource
    Files[(Files)] -->|Data| Resource
    API[APIs] -->|Data| Resource
```

### Key Characteristics of Resources

- **Application-controlled**: The client app decides which resources to provide
- **Read-only**: Resources are for reading, not modification
- **URI-based**: Resources are identified by URI schemes
- **Content-focused**: Resources provide data, not functionality
- **Context-providing**: Resources enhance the LLM's understanding

### When to Use Resources

Use resources when:

- The LLM needs to read data but not modify it
- The data provides context for reasoning
- The content is static or infrequently changing
- You want control over what data the LLM can access
- The data is too large or complex to include in prompts

### Resource Example: File Reader

```python
@mcp.resource("file://{path}")
async def get_file_content(path: str) -> str:
    """
    Get the content of a file.
    
    Args:
        path: Path to the file
    
    Returns:
        File content as text
    """
    # Implementation details...
    return file_content
```

### Resource URI Templates

Resources often use URI templates to create dynamic resources:

```
file://{path}
database://{table}/{id}
api://{endpoint}/{parameter}
```

This allows for flexible resource addressing while maintaining structure.

### Resource Access Flow

```mermaid
sequenceDiagram
    participant LLM
    participant Client
    participant Server
    participant DataSource as Data Source
    
    Client->>Server: List available resources
    Server->>Client: Return resource list
    Client->>Client: Select relevant resources
    Client->>Server: Request resource content
    Server->>DataSource: Fetch data
    DataSource->>Server: Return data
    Server->>Client: Return formatted content
    Client->>LLM: Provide as context
```

## Prompts in Depth

### What Are Prompts?

Prompts are templates that guide LLM interactions with servers. They provide structured patterns for common operations and workflows.

```mermaid
flowchart LR
    User[User] -->|Select| Prompt[Prompt Template]
    Prompt -->|Apply| Interaction[LLM Interaction]
    Interaction -->|Result| User
```

### Key Characteristics of Prompts

- **User-controlled**: Explicitly selected by users for specific tasks
- **Template-based**: Provide structured formats for interactions
- **Parameterized**: Accept arguments to customize behavior
- **Workflow-oriented**: Often encapsulate multi-step processes
- **Reusable**: Designed for repeated use across similar tasks

### When to Use Prompts

Use prompts when:

- Users perform similar tasks repeatedly
- Complex interactions can be standardized
- You want to ensure consistent LLM behavior
- The interaction follows a predictable pattern
- Users need guidance on how to interact with a tool

### Prompt Example: Code Review

```python
@mcp.prompt()
def code_review(code: str) -> str:
    """
    Create a prompt for code review.
    
    Args:
        code: The code to review
    
    Returns:
        Formatted prompt for LLM
    """
    return f"""
    Please review this code:
    
    ```
    {code}
    ```
    
    Focus on:
    1. Potential bugs
    2. Performance issues
    3. Security concerns
    4. Code style and readability
    """
```

### Prompt Schema

Prompts define their parameters and description:

```json
{
  "name": "code_review",
  "description": "Generate a code review for the provided code",
  "arguments": [
    {
      "name": "code",
      "description": "The code to review",
      "required": true
    }
  ]
}
```

### Prompt Usage Flow

```mermaid
sequenceDiagram
    participant User
    participant Client
    participant Server
    participant LLM
    
    User->>Client: Browse available prompts
    Client->>Server: List prompts
    Server->>Client: Return prompt list
    Client->>User: Display prompt options
    User->>Client: Select prompt and provide args
    Client->>Server: Get prompt template
    Server->>Client: Return filled template
    Client->>LLM: Use template for interaction
    LLM->>Client: Generate response
    Client->>User: Show response
```

## Comparing the Primitives

### Tools vs. Resources

| Aspect           | Tools                          | Resources                      |
|------------------|--------------------------------|--------------------------------|
| **Purpose**      | Perform actions                | Provide data                   |
| **Control**      | Model-controlled (with permission) | Application-controlled         |
| **Operations**   | Execute functions              | Read content                   |
| **Side Effects** | May have side effects          | No side effects (read-only)    |
| **Schema**       | Input parameters, return value | URI template, content type     |
| **Use Case**     | API calls, calculations        | Files, database records        |
| **Security**     | Permission required            | Pre-selected by application    |

### Tools vs. Prompts

| Aspect           | Tools                          | Prompts                        |
|------------------|--------------------------------|--------------------------------|
| **Purpose**      | Perform actions                | Guide interactions             |
| **Control**      | Model-controlled               | User-controlled                |
| **Operations**   | Execute functions              | Apply templates                |
| **Customization**| Input parameters               | Template arguments             |
| **Use Case**     | Specific operations            | Standardized workflows         |
| **User Interface**| Usually invisible             | Typically visible in UI        |

### Resources vs. Prompts

| Aspect           | Resources                     | Prompts                        |
|------------------|-------------------------------|--------------------------------|
| **Purpose**      | Provide data                  | Guide interactions             |
| **Control**      | Application-controlled        | User-controlled                |
| **Content**      | Dynamic data                  | Structured templates           |
| **Use Case**     | Context enhancement           | Standardized workflows         |
| **Persistence**  | May be cached or real-time    | Generally static               |

## Deciding Which Primitive to Use

When designing MCP servers, choosing the right primitive is critical. Use this decision tree:

```mermaid
flowchart TD
    A[Start] --> B{Does it perform\nan action?}
    B -->|Yes| C{Should the LLM\ndecide when\nto use it?}
    B -->|No| D{Is it providing\ndata only?}
    
    C -->|Yes| E[Use a Tool]
    C -->|No| F{Is it a common\nworkflow pattern?}
    
    D -->|Yes| G[Use a Resource]
    D -->|No| F
    
    F -->|Yes| H[Use a Prompt]
    F -->|No| I{Does it modify\ndata?}
    
    I -->|Yes| E
    I -->|No| G
```

### Practical Guidelines

1. **Use Tools when**:
   - The operation performs actions or has side effects
   - The LLM should decide when to use the functionality
   - The operation requires specific input parameters
   - You need to run calculations or process data

2. **Use Resources when**:
   - You need to provide read-only data to the LLM
   - The content is large or structured
   - The data needs to be selected by the application
   - The data provides context for reasoning

3. **Use Prompts when**:
   - Users perform similar tasks repeatedly
   - The interaction follows a predictable pattern
   - You want to ensure consistent behavior
   - Users need guidance on complex interactions

## Combining Primitives

For complex systems, you'll often combine multiple primitives:

```mermaid
flowchart LR
    User[User] -->|Selects| Prompt[Prompt]
    Prompt -->|Guides| LLM[LLM]
    LLM -->|Reads| Resource[Resource]
    LLM -->|Calls| Tool[Tool]
    Resource -->|Informs| LLM
    Tool -->|Returns to| LLM
    LLM -->|Responds to| User
```

Example combinations:

1. **Resource + Tool**: Read a file (resource) then analyze its content (tool)
2. **Prompt + Tool**: Use a standard query format (prompt) to execute a search (tool)
3. **Resource + Prompt**: Load context (resource) then apply a structured analysis template (prompt)
4. **All Three**: Load context (resource), apply analysis template (prompt), and execute operations (tool)
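
The "All Three" combination can be sketched with plain functions standing in for each primitive (names and data are hypothetical; a real server would register them with `@mcp.resource()`, `@mcp.prompt()`, and `@mcp.tool()` respectively):

```python
import re

DOCS = {"report-1": "Revenue grew 12% while costs grew 3%."}  # hypothetical data

def load_context(doc_id: str) -> str:
    """Resource: read-only data selected by the application."""
    return DOCS.get(doc_id, "")

def analysis_prompt(text: str) -> str:
    """Prompt: a user-selected template that frames the task."""
    return f"Please analyze the following text and list the key figures:\n\n{text}"

def extract_percentages(text: str) -> list[str]:
    """Tool: an action the LLM can invoke during reasoning."""
    return re.findall(r"\d+%", text)

context = load_context("report-1")      # 1. app loads the resource
prompt = analysis_prompt(context)       # 2. prompt template wraps it for the LLM
figures = extract_percentages(context)  # 3. LLM calls the tool on the content
# figures == ["12%", "3%"]
```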

## Best Practices

### Tools
- Keep tools focused on single responsibilities
- Provide clear descriptions and parameter documentation
- Handle errors gracefully and return informative messages
- Implement timeouts for long-running operations
- Log tool usage for debugging and monitoring
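
The timeout bullet can be implemented with `asyncio.wait_for`; a minimal sketch (the slow operation and message text are illustrative):

```python
import asyncio

async def slow_operation() -> str:
    """Stand-in for a long-running external call."""
    await asyncio.sleep(5)
    return "done"

async def tool_with_timeout(timeout: float = 1.0) -> str:
    """Fail fast with an informative message instead of hanging the client."""
    try:
        return await asyncio.wait_for(slow_operation(), timeout=timeout)
    except asyncio.TimeoutError:
        return f"Error: operation timed out after {timeout} seconds"

result = asyncio.run(tool_with_timeout(timeout=0.1))
# result == "Error: operation timed out after 0.1 seconds"
```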

### Resources
- Use clear URI schemes that indicate content type
- Implement caching for frequently used resources
- Handle large resources efficiently (pagination, streaming)
- Provide metadata about resources (size, type, etc.)
- Secure access to sensitive resources

### Prompts
- Design for reusability across similar tasks
- Keep prompt templates simple and focused
- Document expected arguments clearly
- Provide examples of how to use prompts
- Test prompts with different inputs

## Conclusion

Understanding the differences between tools, resources, and prompts is fundamental to effective MCP development. By choosing the right primitives for each use case and following best practices, you can create powerful, flexible, and secure MCP servers that enhance LLM capabilities.

The next document in this series will guide you through building MCP servers using Python, where you'll implement these concepts in practice.

```

--------------------------------------------------------------------------------
/docs/03-building-mcp-servers-python.md:
--------------------------------------------------------------------------------

```markdown
# Building MCP Servers with Python

This guide provides a comprehensive walkthrough for building Model Context Protocol (MCP) servers using Python. We'll cover everything from basic setup to advanced techniques, with practical examples and best practices.

## Prerequisites

Before starting, ensure you have:

- Python 3.10 or higher installed
- Basic knowledge of Python and async programming
- Understanding of MCP core concepts (tools, resources, prompts)
- A development environment with your preferred code editor

## Setting Up Your Environment

### Installation

Start by creating a virtual environment and installing the MCP package:

```bash
# Create a virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install MCP
pip install mcp
```

Alternatively, if you're using [uv](https://github.com/astral-sh/uv) for package management:

```bash
# Create a virtual environment
uv venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install MCP
uv pip install mcp
```

### Project Structure

A well-organized MCP server project typically follows this structure:

```
my-mcp-server/
├── requirements.txt
├── server.py
├── tools/
│   ├── __init__.py
│   ├── tool_module1.py
│   └── tool_module2.py
├── resources/
│   ├── __init__.py
│   └── resource_modules.py
└── prompts/
    ├── __init__.py
    └── prompt_modules.py
```

This modular structure keeps your code organized and makes it easier to add new functionality over time.

## Creating Your First MCP Server

### Basic Server Structure

Let's create a simple MCP server with a "hello world" tool:

```python
# server.py
from mcp.server.fastmcp import FastMCP

# Create a server
mcp = FastMCP("HelloWorld")

@mcp.tool()
def hello(name: str = "World") -> str:
    """
    Say hello to a name.
    
    Args:
        name: The name to greet (default: "World")
    
    Returns:
        A greeting message
    """
    return f"Hello, {name}!"

if __name__ == "__main__":
    # Run the server
    mcp.run()
```

This basic server:
1. Creates a FastMCP server named "HelloWorld"
2. Defines a simple tool called "hello" that takes a name parameter
3. Runs the server using the default stdio transport

### Running Your Server

To run your server:

```bash
python server.py
```

The server will start and wait for connections on the standard input/output streams.

### FastMCP vs. Low-Level API

The MCP Python SDK provides two ways to create servers:

1. **FastMCP**: A high-level API that simplifies server creation through decorators
2. **Low-Level API**: Provides more control but requires more boilerplate code

Most developers should start with FastMCP, as it handles many details automatically.

## Implementing Tools

Tools are the most common primitive in MCP servers. They allow LLMs to perform actions and retrieve information.

### Basic Tool Example

Here's how to implement a simple calculator tool:

```python
@mcp.tool()
def calculate(operation: str, a: float, b: float) -> float:
    """
    Perform basic arithmetic operations.
    
    Args:
        operation: The operation to perform (add, subtract, multiply, divide)
        a: First number
        b: Second number
    
    Returns:
        The result of the operation
    """
    if operation == "add":
        return a + b
    elif operation == "subtract":
        return a - b
    elif operation == "multiply":
        return a * b
    elif operation == "divide":
        if b == 0:
            raise ValueError("Cannot divide by zero")
        return a / b
    else:
        raise ValueError(f"Unknown operation: {operation}")
```

### Asynchronous Tools

For operations that involve I/O or might take time, use async tools:

```python
import httpx

@mcp.tool()
async def fetch_weather(city: str) -> str:
    """
    Fetch weather information for a city.
    
    Args:
        city: The city name
    
    Returns:
        Weather information
    """
    async with httpx.AsyncClient() as client:
        response = await client.get(f"https://weather-api.example.com/{city}")
        data = response.json()
        return f"Temperature: {data['temp']}°C, Conditions: {data['conditions']}"
```

### Tool Parameters

Tools can have:

- Required parameters
- Optional parameters with defaults
- Type hints that are used to generate schema
- Docstrings that provide descriptions

```python
@mcp.tool()
def search_database(
    query: str,
    limit: int = 10,
    offset: int = 0,
    sort_by: str = "relevance"
) -> list:
    """
    Search the database for records matching the query.
    
    Args:
        query: The search query string
        limit: Maximum number of results to return (default: 10)
        offset: Number of results to skip (default: 0)
        sort_by: Field to sort results by (default: "relevance")
    
    Returns:
        List of matching records
    """
    # Implementation details...
    return results
```

### Error Handling in Tools

Proper error handling is essential for robust tools:

```python
import logging

@mcp.tool()
def divide(a: float, b: float) -> float:
    """
    Divide two numbers.
    
    Args:
        a: Numerator
        b: Denominator
    
    Returns:
        The division result
    
    Raises:
        ValueError: If attempting to divide by zero
    """
    try:
        if b == 0:
            raise ValueError("Cannot divide by zero")
        return a / b
    except Exception as e:
        # Log the error for debugging
        logging.error(f"Error in divide tool: {str(e)}")
        # Re-raise with a user-friendly message
        raise ValueError(f"Division failed: {str(e)}")
```

### Grouping Related Tools

For complex servers, organize related tools into modules:

```python
# tools/math_tools.py
def register_math_tools(mcp):
    @mcp.tool()
    def add(a: float, b: float) -> float:
        """Add two numbers."""
        return a + b
    
    @mcp.tool()
    def subtract(a: float, b: float) -> float:
        """Subtract b from a."""
        return a - b
    
    # More math tools...

# server.py
from tools.math_tools import register_math_tools

mcp = FastMCP("MathServer")
register_math_tools(mcp)
```

## Implementing Resources

Resources provide data to LLMs through URI-based access patterns.

### Basic Resource Example

Here's a simple file resource:

```python
import aiofiles

@mcp.resource("file://{path}")
async def get_file(path: str) -> str:
    """
    Get the content of a file.
    
    Args:
        path: Path to the file
    
    Returns:
        The file content
    """
    try:
        async with aiofiles.open(path, "r") as f:
            return await f.read()
    except Exception as e:
        raise ValueError(f"Failed to read file: {str(e)}")
```

### Dynamic Resources

Resources can be dynamic and parameterized:

```python
import json

@mcp.resource("database://{table}/{id}")
async def get_database_record(table: str, id: str) -> str:
    """
    Get a record from the database.
    
    Args:
        table: The table name
        id: The record ID
    
    Returns:
        The record data
    """
    # Implementation details...
    return json.dumps(record)
```

### Resource Metadata

Resources can include metadata:

```python
import httpx

@mcp.resource("api://{endpoint}")
async def get_api_data(endpoint: str) -> tuple:
    """
    Get data from an API endpoint.
    
    Args:
        endpoint: The API endpoint path
    
    Returns:
        A tuple of (content, mime_type)
    """
    async with httpx.AsyncClient() as client:
        response = await client.get(f"https://api.example.com/{endpoint}")
        return response.text, response.headers.get("content-type", "text/plain")
```

### Binary Resources

Resources can return binary data:

```python
from mcp.server.fastmcp import Image

@mcp.resource("image://{path}")
async def get_image(path: str) -> Image:
    """
    Get an image file.
    
    Args:
        path: Path to the image
    
    Returns:
        The image data
    """
    with open(path, "rb") as f:
        data = f.read()
    return Image(data=data, format=path.split(".")[-1])
```

## Implementing Prompts

Prompts are templates that help LLMs interact with your server effectively.

### Basic Prompt Example

Here's a simple query prompt:

```python
@mcp.prompt()
def search_query(query: str) -> str:
    """
    Create a search query prompt.
    
    Args:
        query: The search query
    
    Returns:
        Formatted search query prompt
    """
    return f"""
    Please search for information about:
    
    {query}
    
    Focus on the most relevant and up-to-date information.
    """
```

### Multi-Message Prompts

Prompts can include multiple messages:

```python
from mcp.server.fastmcp.prompts import base

@mcp.prompt()
def debug_error(error: str) -> list[base.Message]:
    """
    Create a debugging conversation.
    
    Args:
        error: The error message
    
    Returns:
        A list of messages
    """
    return [
        base.UserMessage(f"I'm getting this error: {error}"),
        base.AssistantMessage("Let me help debug that. What have you tried so far?")
    ]
```

## Transport Options

MCP supports different transport mechanisms for communication between clients and servers.

### STDIO Transport (Default)

The default transport uses standard input/output streams:

```python
if __name__ == "__main__":
    mcp.run(transport="stdio")
```

This is ideal for local processes and command-line tools.

### SSE Transport

Server-Sent Events (SSE) transport is used for web applications:

```python
# Host and port are configured on the server, e.g. FastMCP("MyServer", host="localhost", port=5000)
if __name__ == "__main__":
    mcp.run(transport="sse")
```

This starts an HTTP server that accepts MCP connections through SSE.

## Context and Lifespan

### Using Context

The `Context` object provides access to the current request context:

```python
from mcp.server.fastmcp import Context

@mcp.tool()
async def log_message(message: str, ctx: Context) -> str:
    """
    Log a message and return a confirmation.
    
    Args:
        message: The message to log
        ctx: The request context
    
    Returns:
        Confirmation message
    """
    ctx.info(f"User logged: {message}")
    return f"Message logged: {message}"
```

### Progress Reporting

For long-running tools, report progress:

```python
@mcp.tool()
async def process_files(files: list[str], ctx: Context) -> str:
    """
    Process multiple files with progress tracking.
    
    Args:
        files: List of file paths
        ctx: The request context
    
    Returns:
        Processing summary
    """
    total = len(files)
    for i, file in enumerate(files):
        # Report progress (0-100%)
        await ctx.report_progress(i * 100 // total)
        # Process the file...
        ctx.info(f"Processing {file}")
    
    return f"Processed {total} files"
```

### Lifespan Management

For servers that need initialization and cleanup:

```python
from contextlib import asynccontextmanager
from typing import AsyncIterator

@asynccontextmanager
async def lifespan(server: FastMCP) -> AsyncIterator[dict]:
    """Manage server lifecycle."""
    # Setup (runs on startup)
    db = await Database.connect()
    try:
        yield {"db": db}  # Pass context to handlers
    finally:
        # Cleanup (runs on shutdown)
        await db.disconnect()

# Create server with lifespan
mcp = FastMCP("DatabaseServer", lifespan=lifespan)

@mcp.tool()
async def query_db(sql: str, ctx: Context) -> list:
    """Run a database query."""
    db = ctx.request_context.lifespan_context["db"]
    return await db.execute(sql)
```

## Testing MCP Servers

### Using the MCP Inspector

The MCP Inspector is a tool for testing MCP servers:

```bash
# Install the inspector
npm install -g @modelcontextprotocol/inspector

# Run your server with the inspector
npx @modelcontextprotocol/inspector python server.py
```

This opens a web interface where you can:
- See available tools, resources, and prompts
- Test tools with different parameters
- View tool execution results
- Explore resource content

### Manual Testing

You can also test your server programmatically:

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def test_server():
    # Connect to the server
    server_params = StdioServerParameters(
        command="python",
        args=["server.py"]
    )
    
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            # Initialize the connection
            await session.initialize()
            
            # List tools
            tools = await session.list_tools()
            print(f"Available tools: {[tool.name for tool in tools.tools]}")
            
            # Call a tool
            result = await session.call_tool("hello", {"name": "MCP"})
            print(f"Tool result: {result.content[0].text}")

if __name__ == "__main__":
    asyncio.run(test_server())
```

## Debugging MCP Servers

### Logging

Use logging to debug your server:

```python
import logging

# Configure logging
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
)

# Access the MCP logger
logger = logging.getLogger("mcp")
```

### Common Issues

1. **Schema Generation**:
   - Ensure type hints are accurate
   - Provide docstrings for tools
   - Check parameter names and types

2. **Async/Sync Mismatch**:
   - Use `async def` for tools that use async operations
   - Don't mix async and sync code without proper handling

3. **Transport Issues**:
   - Check that stdio is not mixed with print statements
   - Ensure ports are available for SSE transport
   - Verify network settings for remote connections
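The first transport issue deserves emphasis: with stdio transport, stdout carries the JSON-RPC protocol stream, so any stray `print()` corrupts it. A minimal sketch of safe logging configuration (`my-mcp-server` is a placeholder logger name; `force=True` replaces any handlers configured earlier):

```python
import logging
import sys

# With stdio transport, stdout is the protocol channel.
# Send all diagnostics to stderr instead of printing to stdout.
logging.basicConfig(
    stream=sys.stderr,
    level=logging.DEBUG,
    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
    force=True,
)
logger = logging.getLogger("my-mcp-server")
logger.debug("Safe: this goes to stderr, not the protocol stream")
```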

## Deployment Options

### Local Deployment

For local use with Claude Desktop:

1. Edit the Claude Desktop config file:
   ```json
   {
     "mcpServers": {
       "my-server": {
         "command": "python",
         "args": ["/path/to/server.py"]
       }
     }
   }
   ```

2. Restart Claude Desktop

### Web Deployment

For web deployment with SSE transport:

1. Set up a web server (e.g., nginx) to proxy requests
2. Use a process manager (e.g., systemd, supervisor) to keep the server running
3. Configure the server to use SSE transport with appropriate host/port

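For step 1, a hypothetical nginx site block might look like the following. The domain, location path, and upstream port are placeholders for your own deployment; `proxy_buffering off` matters because SSE delivers events incrementally and buffering would delay them.

```nginx
server {
    listen 443 ssl;
    server_name mcp.example.com;

    location /mcp/ {
        proxy_pass http://127.0.0.1:5000/;
        proxy_http_version 1.1;
        # SSE needs buffering disabled so events stream immediately
        proxy_buffering off;
        proxy_set_header Connection "";
        proxy_read_timeout 3600s;
    }
}
```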
Example systemd service:

```ini
[Unit]
Description=MCP Server
After=network.target

[Service]
User=mcp
WorkingDirectory=/path/to/server
ExecStart=/path/to/venv/bin/python server.py --transport sse --host 127.0.0.1 --port 5000
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

## Security Considerations

When building MCP servers, consider these security aspects:

1. **Input Validation**:
   - Validate all parameters
   - Sanitize file paths and system commands
   - Use allowlists for sensitive operations

2. **Resource Access**:
   - Limit access to specific directories
   - Avoid exposing sensitive information
   - Use proper permissions for files

3. **Error Handling**:
   - Don't expose internal errors to clients
   - Log security-relevant errors
   - Implement proper error recovery

4. **Authentication**:
   - Implement authentication for sensitive operations
   - Use secure tokens or credentials
   - Verify client identity when needed
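To illustrate the path-sanitization point above, here is a minimal sketch. The `/srv/mcp-data` root and the `safe_resolve` helper are hypothetical; the key idea is resolving the candidate path and checking it stays inside the allowed directory.

```python
from pathlib import Path

# Hypothetical root directory the server is allowed to expose
ALLOWED_ROOT = Path("/srv/mcp-data").resolve()

def safe_resolve(user_path: str) -> Path:
    """Resolve a user-supplied relative path, rejecting directory traversal."""
    candidate = (ALLOWED_ROOT / user_path).resolve()
    if not candidate.is_relative_to(ALLOWED_ROOT):
        raise ValueError(f"Access denied: {user_path!r} escapes the allowed directory")
    return candidate

print(safe_resolve("reports/2024.txt"))
# safe_resolve("../../etc/passwd") raises ValueError
```

A file tool or resource handler would call `safe_resolve` before opening anything, so `..` sequences and absolute-path tricks are rejected uniformly.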

## Example: Web Scraping Server

Let's build a complete web scraping server that fetches and returns content from URLs:

```python
# server.py
import httpx
from mcp.server.fastmcp import FastMCP

# Create the server
mcp = FastMCP("WebScraper")

@mcp.tool()
async def web_scrape(url: str) -> str:
    """
    Fetch content from a URL and return it.
    
    Args:
        url: The URL to scrape
    
    Returns:
        The page content
    """
    # Ensure URL has a scheme
    if not url.startswith(('http://', 'https://')):
        url = 'https://' + url
    
    # Fetch the content
    try:
        async with httpx.AsyncClient() as client:
            response = await client.get(url, follow_redirects=True)
            response.raise_for_status()
            return response.text
    except httpx.HTTPStatusError as e:
        return f"Error: HTTP status error - {e.response.status_code}"
    except httpx.RequestError as e:
        return f"Error: Request failed - {str(e)}"
    except Exception as e:
        return f"Error: Unexpected error occurred - {str(e)}"

if __name__ == "__main__":
    mcp.run()
```

## Conclusion

Building MCP servers with Python is a powerful way to extend LLM capabilities. By following the patterns and practices in this guide, you can create robust, maintainable MCP servers that integrate with Claude and other LLMs.

In the next document, we'll explore how to connect to MCP servers from different clients.

```

--------------------------------------------------------------------------------
/docs/00-important-python-mcp-sdk.md:
--------------------------------------------------------------------------------

```markdown
# MCP Python SDK

<div align="center">

<strong>Python implementation of the Model Context Protocol (MCP)</strong>

[![PyPI][pypi-badge]][pypi-url]
[![MIT licensed][mit-badge]][mit-url]
[![Python Version][python-badge]][python-url]
[![Documentation][docs-badge]][docs-url]
[![Specification][spec-badge]][spec-url]
[![GitHub Discussions][discussions-badge]][discussions-url]

</div>

<!-- omit in toc -->
## Table of Contents

- [Overview](#overview)
- [Installation](#installation)
- [Quickstart](#quickstart)
- [What is MCP?](#what-is-mcp)
- [Core Concepts](#core-concepts)
  - [Server](#server)
  - [Resources](#resources)
  - [Tools](#tools)
  - [Prompts](#prompts)
  - [Images](#images)
  - [Context](#context)
- [Running Your Server](#running-your-server)
  - [Development Mode](#development-mode)
  - [Claude Desktop Integration](#claude-desktop-integration)
  - [Direct Execution](#direct-execution)
- [Examples](#examples)
  - [Echo Server](#echo-server)
  - [SQLite Explorer](#sqlite-explorer)
- [Advanced Usage](#advanced-usage)
  - [Low-Level Server](#low-level-server)
  - [Writing MCP Clients](#writing-mcp-clients)
  - [MCP Primitives](#mcp-primitives)
  - [Server Capabilities](#server-capabilities)
- [Documentation](#documentation)
- [Contributing](#contributing)
- [License](#license)

[pypi-badge]: https://img.shields.io/pypi/v/mcp.svg
[pypi-url]: https://pypi.org/project/mcp/
[mit-badge]: https://img.shields.io/pypi/l/mcp.svg
[mit-url]: https://github.com/modelcontextprotocol/python-sdk/blob/main/LICENSE
[python-badge]: https://img.shields.io/pypi/pyversions/mcp.svg
[python-url]: https://www.python.org/downloads/
[docs-badge]: https://img.shields.io/badge/docs-modelcontextprotocol.io-blue.svg
[docs-url]: https://modelcontextprotocol.io
[spec-badge]: https://img.shields.io/badge/spec-spec.modelcontextprotocol.io-blue.svg
[spec-url]: https://spec.modelcontextprotocol.io
[discussions-badge]: https://img.shields.io/github/discussions/modelcontextprotocol/python-sdk
[discussions-url]: https://github.com/modelcontextprotocol/python-sdk/discussions

## Overview

The Model Context Protocol allows applications to provide context for LLMs in a standardized way, separating the concerns of providing context from the actual LLM interaction. This Python SDK implements the full MCP specification, making it easy to:

- Build MCP clients that can connect to any MCP server
- Create MCP servers that expose resources, prompts and tools
- Use standard transports like stdio and SSE
- Handle all MCP protocol messages and lifecycle events

## Installation

We recommend using [uv](https://docs.astral.sh/uv/) to manage your Python projects:

```bash
uv add "mcp[cli]"
```

Alternatively:
```bash
pip install mcp
```

## Quickstart

Let's create a simple MCP server that exposes a calculator tool and some data:

```python
# server.py
from mcp.server.fastmcp import FastMCP

# Create an MCP server
mcp = FastMCP("Demo")

# Add an addition tool
@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

# Add a dynamic greeting resource
@mcp.resource("greeting://{name}")
def get_greeting(name: str) -> str:
    """Get a personalized greeting"""
    return f"Hello, {name}!"
```

You can install this server in [Claude Desktop](https://claude.ai/download) and interact with it right away by running:
```bash
mcp install server.py
```

Alternatively, you can test it with the MCP Inspector:
```bash
mcp dev server.py
```

## What is MCP?

The [Model Context Protocol (MCP)](https://modelcontextprotocol.io) lets you build servers that expose data and functionality to LLM applications in a secure, standardized way. Think of it like a web API, but specifically designed for LLM interactions. MCP servers can:

- Expose data through **Resources** (think of these sort of like GET endpoints; they are used to load information into the LLM's context)
- Provide functionality through **Tools** (sort of like POST endpoints; they are used to execute code or otherwise produce a side effect)
- Define interaction patterns through **Prompts** (reusable templates for LLM interactions)
- And more!

## Core Concepts

### Server

The FastMCP server is your core interface to the MCP protocol. It handles connection management, protocol compliance, and message routing:

```python
# Add lifespan support for startup/shutdown with strong typing
from contextlib import asynccontextmanager
from dataclasses import dataclass
from typing import AsyncIterator
from mcp.server.fastmcp import Context, FastMCP

# Create a named server
mcp = FastMCP("My App")

# Specify dependencies for deployment and development
mcp = FastMCP("My App", dependencies=["pandas", "numpy"])

@dataclass
class AppContext:
    db: Database  # Replace with your actual DB type

@asynccontextmanager
async def app_lifespan(server: FastMCP) -> AsyncIterator[AppContext]:
    """Manage application lifecycle with type-safe context"""
    try:
        # Initialize on startup
        await db.connect()
        yield AppContext(db=db)
    finally:
        # Cleanup on shutdown
        await db.disconnect()

# Pass lifespan to server
mcp = FastMCP("My App", lifespan=app_lifespan)

# Access type-safe lifespan context in tools
@mcp.tool()
def query_db(ctx: Context) -> str:
    """Tool that uses initialized resources"""
    db = ctx.request_context.lifespan_context["db"]
    return db.query()
```

### Resources

Resources are how you expose data to LLMs. They're similar to GET endpoints in a REST API - they provide data but shouldn't perform significant computation or have side effects:

```python
@mcp.resource("config://app")
def get_config() -> str:
    """Static configuration data"""
    return "App configuration here"

@mcp.resource("users://{user_id}/profile")
def get_user_profile(user_id: str) -> str:
    """Dynamic user data"""
    return f"Profile data for user {user_id}"
```

### Tools

Tools let LLMs take actions through your server. Unlike resources, tools are expected to perform computation and have side effects:

```python
import httpx

@mcp.tool()
def calculate_bmi(weight_kg: float, height_m: float) -> float:
    """Calculate BMI given weight in kg and height in meters"""
    return weight_kg / (height_m ** 2)

@mcp.tool()
async def fetch_weather(city: str) -> str:
    """Fetch current weather for a city"""
    async with httpx.AsyncClient() as client:
        response = await client.get(f"https://api.weather.com/{city}")
        return response.text
```

### Prompts

Prompts are reusable templates that help LLMs interact with your server effectively:

```python
from mcp.server.fastmcp.prompts import base

@mcp.prompt()
def review_code(code: str) -> str:
    return f"Please review this code:\n\n{code}"

@mcp.prompt()
def debug_error(error: str) -> list[base.Message]:
    return [
        base.UserMessage("I'm seeing this error:"),
        base.UserMessage(error),
        base.AssistantMessage("I'll help debug that. What have you tried so far?")
    ]
```

### Images

FastMCP provides an `Image` class that automatically handles image data:

```python
import io

from mcp.server.fastmcp import FastMCP, Image
from PIL import Image as PILImage

@mcp.tool()
def create_thumbnail(image_path: str) -> Image:
    """Create a thumbnail from an image"""
    img = PILImage.open(image_path)
    img.thumbnail((100, 100))
    buffer = io.BytesIO()
    img.save(buffer, format="PNG")  # Encode as PNG; tobytes() would yield raw pixels
    return Image(data=buffer.getvalue(), format="png")
```

### Context

The Context object gives your tools and resources access to MCP capabilities:

```python
from mcp.server.fastmcp import FastMCP, Context

@mcp.tool()
async def long_task(files: list[str], ctx: Context) -> str:
    """Process multiple files with progress tracking"""
    for i, file in enumerate(files):
        ctx.info(f"Processing {file}")
        await ctx.report_progress(i, len(files))
        data, mime_type = await ctx.read_resource(f"file://{file}")
    return "Processing complete"
```

## Running Your Server

### Development Mode

The fastest way to test and debug your server is with the MCP Inspector:

```bash
mcp dev server.py

# Add dependencies
mcp dev server.py --with pandas --with numpy

# Mount local code
mcp dev server.py --with-editable .
```

### Claude Desktop Integration

Once your server is ready, install it in Claude Desktop:

```bash
mcp install server.py

# Custom name
mcp install server.py --name "My Analytics Server"

# Environment variables
mcp install server.py -v API_KEY=abc123 -v DB_URL=postgres://...
mcp install server.py -f .env
```

### Direct Execution

For advanced scenarios like custom deployments:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("My App")

if __name__ == "__main__":
    mcp.run()
```

Run it with:
```bash
python server.py
# or
mcp run server.py
```

## Examples

### Echo Server

A simple server demonstrating resources, tools, and prompts:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Echo")

@mcp.resource("echo://{message}")
def echo_resource(message: str) -> str:
    """Echo a message as a resource"""
    return f"Resource echo: {message}"

@mcp.tool()
def echo_tool(message: str) -> str:
    """Echo a message as a tool"""
    return f"Tool echo: {message}"

@mcp.prompt()
def echo_prompt(message: str) -> str:
    """Create an echo prompt"""
    return f"Please process this message: {message}"
```

### SQLite Explorer

A more complex example showing database integration:

```python
from mcp.server.fastmcp import FastMCP
import sqlite3

mcp = FastMCP("SQLite Explorer")

@mcp.resource("schema://main")
def get_schema() -> str:
    """Provide the database schema as a resource"""
    conn = sqlite3.connect("database.db")
    schema = conn.execute(
        "SELECT sql FROM sqlite_master WHERE type='table'"
    ).fetchall()
    return "\n".join(sql[0] for sql in schema if sql[0])

@mcp.tool()
def query_data(sql: str) -> str:
    """Execute SQL queries safely"""
    conn = sqlite3.connect("database.db")
    try:
        result = conn.execute(sql).fetchall()
        return "\n".join(str(row) for row in result)
    except Exception as e:
        return f"Error: {str(e)}"
```

## Advanced Usage

### Low-Level Server

For more control, you can use the low-level server implementation directly. This gives you full access to the protocol and allows you to customize every aspect of your server, including lifecycle management through the lifespan API:

```python
from contextlib import asynccontextmanager
from typing import AsyncIterator

@asynccontextmanager
async def server_lifespan(server: Server) -> AsyncIterator[dict]:
    """Manage server startup and shutdown lifecycle."""
    try:
        # Initialize resources on startup
        await db.connect()
        yield {"db": db}
    finally:
        # Clean up on shutdown
        await db.disconnect()

# Pass lifespan to server
server = Server("example-server", lifespan=server_lifespan)

# Access lifespan context in handlers
@server.call_tool()
async def query_db(name: str, arguments: dict) -> list:
    ctx = server.request_context
    db = ctx.lifespan_context["db"]
    return await db.query(arguments["query"])
```

The lifespan API provides:
- A way to initialize resources when the server starts and clean them up when it stops
- Access to initialized resources through the request context in handlers
- Type-safe context passing between lifespan and request handlers

```python
from mcp.server.lowlevel import Server, NotificationOptions
from mcp.server.models import InitializationOptions
import mcp.server.stdio
import mcp.types as types

# Create a server instance
server = Server("example-server")

@server.list_prompts()
async def handle_list_prompts() -> list[types.Prompt]:
    return [
        types.Prompt(
            name="example-prompt",
            description="An example prompt template",
            arguments=[
                types.PromptArgument(
                    name="arg1",
                    description="Example argument",
                    required=True
                )
            ]
        )
    ]

@server.get_prompt()
async def handle_get_prompt(
    name: str,
    arguments: dict[str, str] | None
) -> types.GetPromptResult:
    if name != "example-prompt":
        raise ValueError(f"Unknown prompt: {name}")

    return types.GetPromptResult(
        description="Example prompt",
        messages=[
            types.PromptMessage(
                role="user",
                content=types.TextContent(
                    type="text",
                    text="Example prompt text"
                )
            )
        ]
    )

async def run():
    async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
        await server.run(
            read_stream,
            write_stream,
            InitializationOptions(
                server_name="example",
                server_version="0.1.0",
                capabilities=server.get_capabilities(
                    notification_options=NotificationOptions(),
                    experimental_capabilities={},
                )
            )
        )

if __name__ == "__main__":
    import asyncio
    asyncio.run(run())
```

### Writing MCP Clients

The SDK provides a high-level client interface for connecting to MCP servers:

```python
from mcp import ClientSession, StdioServerParameters, types
from mcp.client.stdio import stdio_client

# Create server parameters for stdio connection
server_params = StdioServerParameters(
    command="python", # Executable
    args=["example_server.py"], # Optional command line arguments
    env=None # Optional environment variables
)

# Optional: create a sampling callback
async def handle_sampling_message(message: types.CreateMessageRequestParams) -> types.CreateMessageResult:
    return types.CreateMessageResult(
        role="assistant",
        content=types.TextContent(
            type="text",
            text="Hello, world! from model",
        ),
        model="gpt-3.5-turbo",
        stopReason="endTurn",
    )

async def run():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write, sampling_callback=handle_sampling_message) as session:
            # Initialize the connection
            await session.initialize()

            # List available prompts
            prompts = await session.list_prompts()

            # Get a prompt
            prompt = await session.get_prompt("example-prompt", arguments={"arg1": "value"})

            # List available resources
            resources = await session.list_resources()

            # List available tools
            tools = await session.list_tools()

            # Read a resource
            content, mime_type = await session.read_resource("file://some/path")

            # Call a tool
            result = await session.call_tool("tool-name", arguments={"arg1": "value"})

if __name__ == "__main__":
    import asyncio
    asyncio.run(run())
```

### MCP Primitives

The MCP protocol defines three core primitives that servers can implement:

| Primitive | Control               | Description                                         | Example Use                  |
|-----------|-----------------------|-----------------------------------------------------|------------------------------|
| Prompts   | User-controlled       | Interactive templates invoked by user choice        | Slash commands, menu options |
| Resources | Application-controlled| Contextual data managed by the client application   | File contents, API responses |
| Tools     | Model-controlled      | Functions exposed to the LLM to take actions        | API calls, data updates      |

### Server Capabilities

MCP servers declare capabilities during initialization:

| Capability  | Feature Flag                 | Description                        |
|-------------|------------------------------|------------------------------------|
| `prompts`   | `listChanged`                | Prompt template management         |
| `resources` | `subscribe`<br/>`listChanged`| Resource exposure and updates      |
| `tools`     | `listChanged`                | Tool discovery and execution       |
| `logging`   | -                            | Server logging configuration       |
| `completion`| -                            | Argument completion suggestions    |

## Documentation

- [Model Context Protocol documentation](https://modelcontextprotocol.io)
- [Model Context Protocol specification](https://spec.modelcontextprotocol.io)
- [Officially supported servers](https://github.com/modelcontextprotocol/servers)

## Contributing

We are passionate about supporting contributors of all levels of experience and would love to see you get involved in the project. See the [contributing guide](CONTRIBUTING.md) to get started.

## License

This project is licensed under the MIT License - see the LICENSE file for details.
```

--------------------------------------------------------------------------------
/docs/05-communication-protocols.md:
--------------------------------------------------------------------------------

```markdown
# MCP Communication Protocols

This document provides a detailed exploration of the communication protocols used in the Model Context Protocol (MCP). Understanding these protocols is essential for developing robust MCP servers and clients, and for troubleshooting connection issues.

## Protocol Overview

MCP uses a layered protocol architecture:

```mermaid
flowchart TB
    subgraph Application
        Tools["Tools, Resources, Prompts"]
    end
    subgraph Protocol
        Messages["MCP Message Format"]
        JSONRPC["JSON-RPC 2.0"]
    end
    subgraph Transport
        STDIO["STDIO Transport"]
        SSE["SSE Transport"]
    end
    
    Tools <--> Messages
    Messages <--> JSONRPC
    JSONRPC <--> STDIO
    JSONRPC <--> SSE
```

The layers are:

1. **Application Layer**: Defines tools, resources, and prompts
2. **Protocol Layer**: Specifies message formats and semantics
3. **Transport Layer**: Handles the physical transmission of messages

## Message Format

MCP uses [JSON-RPC 2.0](https://www.jsonrpc.org/specification) as its message format. This provides a standardized way to structure requests, responses, and notifications.

### JSON-RPC Structure

There are three types of messages in JSON-RPC:

1. **Requests**: Messages that require a response
2. **Responses**: Replies to requests (success or error)
3. **Notifications**: One-way messages that don't expect a response

### Request Format

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "tool_name",
    "arguments": {
      "param1": "value1",
      "param2": 42
    }
  }
}
```

Key components:
- `jsonrpc`: Always "2.0" to indicate JSON-RPC 2.0
- `id`: A unique identifier for matching responses to requests
- `method`: The operation to perform (e.g., "tools/call")
- `params`: Parameters for the method

### Response Format (Success)

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "Operation result"
      }
    ]
  }
}
```

Key components:
- `jsonrpc`: Always "2.0"
- `id`: Matches the id from the request
- `result`: The operation result (structure depends on the method)

### Response Format (Error)

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "error": {
    "code": -32602,
    "message": "Invalid parameters",
    "data": {
      "details": "Parameter 'param1' is required"
    }
  }
}
```

Key components:
- `jsonrpc`: Always "2.0"
- `id`: Matches the id from the request
- `error`: Error information with code, message, and optional data

### Notification Format

```json
{
  "jsonrpc": "2.0",
  "method": "notifications/resources/list_changed",
  "params": {}
}
```

Key components:
- `jsonrpc`: Always "2.0"
- `method`: The notification type
- `params`: Parameters for the notification (if any)
- No `id` field (distinguishes notifications from requests)
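Because the three message types are distinguished purely by which fields are present, a small helper can route incoming messages. This is an illustrative sketch, not part of any MCP SDK:

```python
import json

def classify_message(raw: str) -> str:
    """Classify a JSON-RPC 2.0 message as request, response, or notification."""
    msg = json.loads(raw)
    if "method" in msg:
        # Requests carry an id; notifications do not
        return "request" if "id" in msg else "notification"
    if "result" in msg or "error" in msg:
        return "response"
    raise ValueError("Not a valid JSON-RPC 2.0 message")
```

For example, `classify_message('{"jsonrpc":"2.0","id":1,"method":"tools/list"}')` returns `"request"`, while the same message without an `id` is a notification.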

## Transport Methods

MCP supports two main transport methods:

### STDIO Transport

Standard Input/Output (STDIO) transport uses standard input and output streams for communication. This is particularly useful for local processes.

```mermaid
flowchart LR
    Client["MCP Client"]
    Server["MCP Server Process"]
    
    Client -->|stdin| Server
    Server -->|stdout| Client
```

#### Message Framing

STDIO transport uses a simple message framing format:

```
Content-Length: <length>\r\n
\r\n
<message>
```

Where:
- `<length>` is the length of the message in bytes
- `<message>` is the JSON-RPC message

Example:

```
Content-Length: 75

{"jsonrpc":"2.0","method":"initialize","id":0,"params":{"version":"1.0.0"}}
```

#### Implementation Details

STDIO transport is implemented by:

1. Starting a child process
2. Writing to the process's standard input
3. Reading from the process's standard output
4. Parsing messages according to the framing format

Python implementation example:

```python
import json

async def read_message(reader):
    # Read headers
    headers = {}
    while True:
        line = await reader.readline()
        line = line.decode('utf-8').strip()
        if not line:
            break
        key, value = line.split(': ', 1)
        headers[key] = value
    
    # Get content length
    content_length = int(headers.get('Content-Length', 0))
    
    # Read exactly content_length bytes (read() may return fewer)
    content = await reader.readexactly(content_length)
    return json.loads(content)

async def write_message(writer, message):
    # Serialize message
    content = json.dumps(message).encode('utf-8')
    
    # Write headers
    header = f'Content-Length: {len(content)}\r\n\r\n'
    writer.write(header.encode('utf-8'))
    
    # Write content
    writer.write(content)
    await writer.drain()
```

#### Advantages and Limitations

Advantages:
- Simple to implement
- Works well for local processes
- No network configuration required
- Natural process lifecycle management

Limitations:
- Only works for local processes
- Limited to one client per server
- No built-in authentication
- Potential blocking issues

### SSE Transport

Server-Sent Events (SSE) transport uses HTTP for client-to-server requests and SSE for server-to-client messages. This is suitable for web applications and remote servers.

```mermaid
flowchart LR
    Client["MCP Client"]
    Server["MCP Server (HTTP)"]
    
    Client -->|HTTP POST| Server
    Server -->|SSE Events| Client
```

#### Client-to-Server Messages

Client-to-server messages are sent using HTTP POST requests:

```
POST /message HTTP/1.1
Content-Type: application/json

{"jsonrpc":"2.0","method":"tools/call","id":1,"params":{...}}
```

#### Server-to-Client Messages

Server-to-client messages are sent using SSE events:

```
event: message
data: {"jsonrpc":"2.0","id":1,"result":{...}}

```

#### Implementation Details

SSE transport implementation requires:

1. An HTTP server endpoint for accepting client POST requests
2. An SSE endpoint for sending server messages to clients
3. Proper HTTP and SSE headers and formatting

Python implementation example (using aiohttp):

```python
import asyncio
import json
from aiohttp import web

# Active SSE connections, keyed by client ID
clients = {}

# For server-to-client messages (SSE)
async def sse_handler(request):
    response = web.StreamResponse(
        headers={
            'Content-Type': 'text/event-stream',
            'Cache-Control': 'no-cache',
            'Connection': 'keep-alive',
            'Access-Control-Allow-Origin': '*'
        }
    )
    await response.prepare(request)
    
    # Store the client connection
    client_id = request.query.get('id', 'unknown')
    clients[client_id] = response
    
    # Keep the connection open until the client disconnects
    try:
        while True:
            await asyncio.sleep(1)
    finally:
        clients.pop(client_id, None)
    
    return response

# For client-to-server messages (HTTP POST)
async def message_handler(request):
    # Parse the message
    data = await request.json()
    
    # Process the message
    result = await process_message(data)
    
    # If it's a request (has an ID), send the response via SSE
    if 'id' in data:
        client_id = request.query.get('id', 'unknown')
        await send_sse_message(client_id, result)
    
    # Return an acknowledgment
    return web.Response(text='OK')

# Send an SSE message to a client
async def send_sse_message(client_id, message):
    if client_id in clients:
        response = clients[client_id]
        data = json.dumps(message)
        await response.write(f'event: message\ndata: {data}\n\n'.encode('utf-8'))
```

#### Advantages and Limitations

Advantages:
- Works over standard HTTP
- Supports remote clients
- Can serve multiple clients
- Integrates with web infrastructure

Limitations:
- More complex to implement
- Requires HTTP server
- Connection management is more challenging
- Potential firewall issues

## Protocol Lifecycle

The MCP protocol follows a defined lifecycle:

```mermaid
sequenceDiagram
    participant Client
    participant Server
    
    Note over Client,Server: Initialization Phase
    
    Client->>Server: initialize request
    Server->>Client: initialize response
    Client->>Server: initialized notification
    
    Note over Client,Server: Operation Phase
    
    Client->>Server: tools/list request
    Server->>Client: tools/list response
    Client->>Server: tools/call request
    Server->>Client: tools/call response
    
    Note over Client,Server: Termination Phase
    
    Client->>Server: exit notification
    Note over Client,Server: Connection Closed
```

### Initialization Phase

The initialization phase establishes the connection and negotiates capabilities:

1. **initialize request**: Client sends protocol version and supported capabilities
2. **initialize response**: Server responds with its version and capabilities
3. **initialized notification**: Client acknowledges initialization

Initialize request example:

```json
{
  "jsonrpc": "2.0",
  "id": 0,
  "method": "initialize",
  "params": {
    "clientInfo": {
      "name": "example-client",
      "version": "1.0.0"
    },
    "capabilities": {
      "tools": {
        "listChanged": true
      },
      "resources": {
        "listChanged": true,
        "subscribe": true
      },
      "prompts": {
        "listChanged": true
      }
    }
  }
}
```

Initialize response example:

```json
{
  "jsonrpc": "2.0",
  "id": 0,
  "result": {
    "serverInfo": {
      "name": "example-server",
      "version": "1.0.0"
    },
    "capabilities": {
      "tools": {
        "listChanged": true
      },
      "resources": {
        "listChanged": true,
        "subscribe": true
      },
      "prompts": {
        "listChanged": true
      },
      "experimental": {}
    }
  }
}
```

Initialized notification example:

```json
{
  "jsonrpc": "2.0",
  "method": "notifications/initialized",
  "params": {}
}
```

### Operation Phase

During the operation phase, clients and servers exchange various requests and notifications:

1. **Feature Discovery**: Listing tools, resources, and prompts
2. **Tool Execution**: Calling tools and receiving results
3. **Resource Access**: Reading resources and subscribing to changes
4. **Prompt Usage**: Getting prompt templates
5. **Notifications**: Receiving updates about changes
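With the Python SDK, these operations map directly onto `ClientSession` methods. A hedged sketch, assuming an already-initialized session like the one in the SDK examples (the tool and resource names are placeholders):

```python
async def operate(session):
    # 1. Feature discovery
    tools = await session.list_tools()
    resources = await session.list_resources()
    prompts = await session.list_prompts()

    # 2. Tool execution
    result = await session.call_tool("web_scrape", arguments={"url": "https://example.com"})

    # 3. Resource access
    content, mime_type = await session.read_resource("file://some/path")

    # 4. Prompt usage
    prompt = await session.get_prompt("example-prompt", arguments={"arg1": "value"})

    return result
```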

### Termination Phase

The termination phase cleanly closes the connection:

1. **exit notification**: Client indicates it's closing the connection
2. **Connection closure**: Transport connection is closed

Exit notification example:

```json
{
  "jsonrpc": "2.0",
  "method": "exit",
  "params": {}
}
```

## Message Types and Methods

MCP defines several standard message types for different operations:

### Tools Methods

| Method | Type | Description |
|--------|------|-------------|
| `tools/list` | Request/Response | List available tools |
| `tools/call` | Request/Response | Execute a tool with parameters |
| `notifications/tools/list_changed` | Notification | Notify that the tool list has changed |

Example tools/list request:
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list"
}
```

Example tools/list response:
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "web_scrape",
        "description": "Scrape content from a URL",
        "inputSchema": {
          "type": "object",
          "properties": {
            "url": {
              "type": "string",
              "description": "The URL to scrape"
            }
          },
          "required": ["url"]
        }
      }
    ]
  }
}
```

### Resources Methods

| Method | Type | Description |
|--------|------|-------------|
| `resources/list` | Request/Response | List available resources |
| `resources/read` | Request/Response | Read a resource by URI |
| `resources/subscribe` | Request/Response | Subscribe to resource updates |
| `resources/unsubscribe` | Request/Response | Unsubscribe from resource updates |
| `notifications/resources/list_changed` | Notification | Notify that the resource list has changed |
| `notifications/resources/updated` | Notification | Notify that a resource has been updated |

Example resources/read request:
```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "resources/read",
  "params": {
    "uri": "file:///path/to/file.txt"
  }
}
```

Example resources/read response:
```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "contents": [
      {
        "uri": "file:///path/to/file.txt",
        "text": "File content goes here",
        "mimeType": "text/plain"
      }
    ]
  }
}
```

### Prompts Methods

| Method | Type | Description |
|--------|------|-------------|
| `prompts/list` | Request/Response | List available prompts |
| `prompts/get` | Request/Response | Get a prompt by name |
| `notifications/prompts/list_changed` | Notification | Notify that the prompt list has changed |

Example prompts/get request:
```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "prompts/get",
  "params": {
    "name": "code_review",
    "arguments": {
      "language": "python",
      "code": "def hello(): print('Hello, world!')"
    }
  }
}
```

Example prompts/get response:
```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "result": {
    "messages": [
      {
        "role": "user",
        "content": {
          "type": "text",
          "text": "Please review this Python code:\n\ndef hello(): print('Hello, world!')"
        }
      }
    ]
  }
}
```

### Logging and Progress

| Method | Type | Description |
|--------|------|-------------|
| `notifications/logging/message` | Notification | Log a message |
| `notifications/progress` | Notification | Report progress of a long-running operation |

Example logging notification:
```json
{
  "jsonrpc": "2.0",
  "method": "notifications/logging/message",
  "params": {
    "level": "info",
    "message": "Operation started",
    "data": { 
      "operation": "file_processing" 
    }
  }
}
```

Example progress notification:
```json
{
  "jsonrpc": "2.0",
  "method": "notifications/progress",
  "params": {
    "progressToken": "operation-123",
    "progress": 50,
    "total": 100
  }
}
```

## Error Codes

MCP uses standard JSON-RPC error codes plus additional codes for specific errors:

| Code | Name | Description |
|------|------|-------------|
| -32700 | Parse Error | Invalid JSON |
| -32600 | Invalid Request | Request not conforming to JSON-RPC |
| -32601 | Method Not Found | Method not supported |
| -32602 | Invalid Params | Invalid parameters |
| -32603 | Internal Error | Internal server error |
| -32000 | Server Error | Server-specific error |
| -32001 | Resource Not Found | Resource URI not found |
| -32002 | Tool Not Found | Tool name not found |
| -32003 | Prompt Not Found | Prompt name not found |
| -32004 | Execution Failed | Tool execution failed |
| -32005 | Permission Denied | Operation not permitted |
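On the server side, these codes are typically produced by mapping internal failure kinds onto JSON-RPC error objects. A minimal sketch (the failure-kind names are illustrative, not SDK constants):

```python
# Map internal failure kinds to JSON-RPC / MCP error codes
ERROR_CODES = {
    "parse": -32700,
    "invalid_request": -32600,
    "method_not_found": -32601,
    "invalid_params": -32602,
    "internal": -32603,
    "resource_not_found": -32001,
    "tool_not_found": -32002,
    "execution_failed": -32004,
}

def error_response(request_id, kind: str, message: str, data=None):
    """Build a JSON-RPC error response for the given request id."""
    error = {"code": ERROR_CODES.get(kind, -32000), "message": message}
    if data is not None:
        error["data"] = data
    return {"jsonrpc": "2.0", "id": request_id, "error": error}
```

Unknown kinds fall back to the generic server error code `-32000`.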

## Protocol Extensions

The MCP protocol supports extensions through the "experimental" capability field:

```json
{
  "capabilities": {
    "experimental": {
      "customFeature": {
        "enabled": true,
        "options": { ... }
      }
    }
  }
}
```

Extensions should follow these guidelines:

1. Use namespaced method names (e.g., "customFeature/operation")
2. Document the extension clearly
3. Provide fallback behavior when the extension is not supported
4. Consider standardization for widely used extensions
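Guideline 3 (fallback behavior) usually means checking the negotiated capabilities before calling an experimental method. A minimal sketch, assuming the server's initialize result has been stored as a dict:

```python
def supports_extension(server_capabilities: dict, name: str) -> bool:
    """Check whether the server advertised an experimental extension as enabled."""
    experimental = server_capabilities.get("experimental", {})
    return bool(experimental.get(name, {}).get("enabled", False))

# Usage: fall back to a standard method when the extension is absent
capabilities = {"experimental": {"customFeature": {"enabled": True}}}
if supports_extension(capabilities, "customFeature"):
    method = "customFeature/operation"   # namespaced extension method
else:
    method = "tools/call"                # standard fallback
```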

## Troubleshooting Protocol Issues

Common protocol issues include:

### Initialization Problems

1. **Version Mismatch**: Client and server using incompatible protocol versions
   - Check version in initialize request/response
   - Update client or server to compatible versions

2. **Capability Negotiation Failure**: Client and server capabilities don't match
   - Verify capabilities in initialize request/response
   - Update client or server to support required capabilities

### Message Format Issues

1. **Invalid JSON**: Message contains malformed JSON
   - Check message format before sending
   - Validate JSON with a schema

2. **Missing Fields**: Required fields are missing
   - Ensure all required fields are present
   - Use a protocol validation library

3. **Incorrect Types**: Fields have incorrect types
   - Validate field types before sending
   - Use typed interfaces for messages

### Transport Issues

1. **Connection Lost**: Transport connection unexpectedly closed
   - Implement reconnection logic
   - Handle connection failures gracefully

2. **Message Framing**: Incorrect message framing (STDIO)
   - Ensure Content-Length is correct
   - Validate message framing format

3. **SSE Connection**: SSE connection issues
   - Check network connectivity
   - Verify SSE endpoint is accessible

### Tool Call Issues

1. **Invalid Parameters**: Tool parameters don't match schema
   - Validate parameters against schema
   - Provide descriptive error messages

2. **Execution Failure**: Tool execution fails
   - Handle exceptions in tool implementation
   - Return appropriate error responses

### Debugging Techniques

1. **Message Logging**: Log all protocol messages
   - Set up logging before and after sending/receiving
   - Log both raw and parsed messages

2. **Protocol Tracing**: Enable protocol tracing
   - Set environment variables for trace logging
   - Use MCP Inspector for visual tracing

3. **Transport Monitoring**: Monitor transport state
   - Check connection status
   - Log transport events
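Technique 1 can be as simple as a pass-through wrapper placed around your transport's send and receive calls. A sketch (the surrounding `write_message`/`read_message` helpers are assumed to exist, as in the STDIO example above):

```python
import json
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("mcp.trace")

def log_message(direction: str, message: dict) -> dict:
    """Log a protocol message and return it unchanged."""
    log.debug("%s %s", direction, json.dumps(message))
    return message

# Usage around a transport:
#   await write_message(writer, log_message("->", request))
#   response = log_message("<-", await read_message(reader))
```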

## Conclusion

Understanding the MCP communication protocols is essential for building robust MCP servers and clients. By following the standard message formats and transport mechanisms, you can ensure reliable communication between LLMs and external tools and data sources.

In the next document, we'll explore common troubleshooting techniques and solutions for MCP servers.

```

--------------------------------------------------------------------------------
/docs/07-extending-the-repo.md:
--------------------------------------------------------------------------------

```markdown
# Extending the Repository with New Tools

This guide explains how to add new tools to the MCP repository. You'll learn best practices for tool design, implementation strategies, and integration techniques that maintain the repository's modular structure.

## Understanding the Repository Structure

Before adding new tools, it's important to understand the existing structure:

```
/MCP/
├── LICENSE
├── README.md
├── requirements.txt
├── server.py
├── run.sh
├── run.bat
├── frontend/
│   ├── app.py
│   ├── pages/
│   └── utils.py
├── tools/
│   ├── __init__.py
│   └── web_scrape.py
└── docs/
    └── *.md

Key components:

1. **server.py**: The main MCP server that registers and exposes tools
2. **tools/**: Directory containing individual tool implementations
3. **frontend/**: Streamlit UI for interacting with MCP servers
4. **requirements.txt**: Python dependencies
5. **run.sh/run.bat**: Convenience scripts for running the server or UI

## Planning Your New Tool

Before implementation, plan your tool carefully:

### 1. Define the Purpose

Clearly define what your tool will do:

- What problem does it solve?
- How does it extend the capabilities of an LLM?
- Does it retrieve information, process data, or perform actions?

### 2. Choose a Tool Type

MCP supports different types of tools:

- **Information retrieval tools**: Fetch information from external sources
- **Processing tools**: Transform or analyze data
- **Action tools**: Perform operations with side effects
- **Integration tools**: Connect to external services or APIs

### 3. Design the Interface

Consider the tool's interface:

- What parameters does it need?
- What will it return?
- How will it handle errors?
- What schema will describe it?

Example interface design:

```
Tool: search_news
Purpose: Search for recent news articles by keyword
Parameters:
  - query (string): Search query
  - days (int, optional): How recent the news should be (default: 7)
  - limit (int, optional): Maximum number of results (default: 5)
Returns:
  - List of articles with titles, sources, and summaries
Errors:
  - Handle API timeouts
  - Handle rate limiting
  - Handle empty results
```
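This interface design translates directly into the JSON Schema that clients receive from `tools/list`. A sketch of the resulting `inputSchema` for the hypothetical `search_news` tool (written as a Python dict, as FastMCP would derive it from type hints):

```python
# Hypothetical inputSchema for the search_news tool designed above
search_news_schema = {
    "type": "object",
    "properties": {
        "query": {"type": "string", "description": "Search query"},
        "days": {
            "type": "integer",
            "description": "How recent the news should be (days)",
            "default": 7,
        },
        "limit": {
            "type": "integer",
            "description": "Maximum number of results",
            "default": 5,
        },
    },
    "required": ["query"],
}
```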

## Implementing Your Tool

Now that you've planned your tool, it's time to implement it.

### 1. Create a New Tool Module

Create a new Python file in the `tools` directory:

```bash
touch tools/my_new_tool.py
```

### 2. Implement the Tool Function

Write the core functionality in your new tool file:

```python
# tools/my_new_tool.py
"""
MCP tool for [description of your tool].
"""

import httpx
import asyncio
import json
from typing import List, Dict, Any, Optional


async def search_news(query: str, days: int = 7, limit: int = 5) -> List[Dict[str, Any]]:
    """
    Search for recent news articles based on a query.
    
    Args:
        query: Search terms
        days: How recent the news should be (in days)
        limit: Maximum number of results to return
        
    Returns:
        List of news articles with title, source, and summary
    """
    # Implementation details
    try:
        # API call
        async with httpx.AsyncClient() as client:
            response = await client.get(
                "https://newsapi.example.com/v2/everything",
                params={
                    "q": query,
                    "from": f"-{days}d",
                    "pageSize": limit,
                    "apiKey": "YOUR_API_KEY"  # In production, use environment variables
                }
            )
            response.raise_for_status()
            data = response.json()
            
            # Process and return results
            articles = data.get("articles", [])
            results = []
            
            for article in articles[:limit]:
                results.append({
                    "title": article.get("title", "No title"),
                    "source": article.get("source", {}).get("name", "Unknown source"),
                    "url": article.get("url", ""),
                    "summary": article.get("description", "No description")
                })
                
            return results
            
    except httpx.HTTPStatusError as e:
        # Handle API errors
        return [{"error": f"API error: {e.response.status_code}"}]
    except httpx.RequestError as e:
        # Handle connection errors
        return [{"error": f"Connection error: {str(e)}"}]
    except Exception as e:
        # Handle unexpected errors
        return [{"error": f"Unexpected error: {str(e)}"}]


# For testing outside of MCP
if __name__ == "__main__":
    async def test():
        results = await search_news("python programming")
        print(json.dumps(results, indent=2))
    
    asyncio.run(test())
```

### 3. Add Required Dependencies

If your tool needs additional dependencies, add them to the requirements.txt file:

```bash
# Add to requirements.txt
httpx>=0.24.0
python-dateutil>=2.8.2
```

### 4. Register the Tool in the Server

Update the main server.py file to import and register your new tool:

```python
# server.py
from mcp.server.fastmcp import FastMCP

# Import existing tools
from tools.web_scrape import fetch_url_as_markdown

# Import your new tool
from tools.my_new_tool import search_news

# Create an MCP server
mcp = FastMCP("Web Tools")

# Register existing tools
@mcp.tool()
async def web_scrape(url: str) -> str:
    """
    Convert a URL to use r.jina.ai as a prefix and fetch the markdown content.
    
    Args:
        url (str): The URL to convert and fetch.
        
    Returns:
        str: The markdown content if successful, or an error message if not.
    """
    return await fetch_url_as_markdown(url)

# Register your new tool
@mcp.tool()
async def news_search(query: str, days: int = 7, limit: int = 5) -> str:
    """
    Search for recent news articles based on a query.
    
    Args:
        query: Search terms
        days: How recent the news should be (in days, default: 7)
        limit: Maximum number of results to return (default: 5)
        
    Returns:
        Formatted text with news article information
    """
    articles = await search_news(query, days, limit)
    
    # Format the results as text
    if articles and "error" in articles[0]:
        return articles[0]["error"]
    
    if not articles:
        return "No news articles found for the given query."
    
    results = []
    for i, article in enumerate(articles, 1):
        results.append(f"## {i}. {article['title']}")
        results.append(f"Source: {article['source']}")
        results.append(f"URL: {article['url']}")
        results.append(f"\n{article['summary']}\n")
    
    return "\n".join(results)

if __name__ == "__main__":
    mcp.run()
```

## Best Practices for Tool Implementation

### Error Handling

Robust error handling is essential for reliable tools:

```python
try:
    # Operation that might fail
    result = await perform_operation()
    return result
except SpecificError as e:
    # Handle specific error cases
    return f"Operation failed: {str(e)}"
except Exception as e:
    # Catch-all for unexpected errors
    logging.error(f"Unexpected error: {str(e)}")
    return "An unexpected error occurred. Please try again later."
```

### Input Validation

Validate inputs before processing:

```python
def validate_search_params(query: str, days: int, limit: int) -> Optional[str]:
    """Validate search parameters and return error message if invalid."""
    if not query or len(query.strip()) == 0:
        return "Search query cannot be empty"
    
    if days < 1 or days > 30:
        return "Days must be between 1 and 30"
    
    if limit < 1 or limit > 100:
        return "Limit must be between 1 and 100"
    
    return None

# In the tool function
error = validate_search_params(query, days, limit)
if error:
    return error
```

### Security Considerations

Implement security best practices:

```python
# Sanitize inputs
def sanitize_query(query: str) -> str:
    """Remove potentially dangerous characters from query."""
    import re
    return re.sub(r'[^\w\s\-.,?!]', '', query)

# Use environment variables for secrets
import os
api_key = os.environ.get("NEWS_API_KEY")
if not api_key:
    return "API key not configured. Please set the NEWS_API_KEY environment variable."

# Implement simple rate limiting
import time

_last_call_time = 0.0

def respect_rate_limit(min_interval=1.0):
    """Ensure a minimum time between API calls."""
    global _last_call_time
    elapsed = time.time() - _last_call_time
    if elapsed < min_interval:
        time.sleep(min_interval - elapsed)
    _last_call_time = time.time()
```

### Docstrings and Comments

Write clear documentation:

```python
async def translate_text(text: str, target_language: str) -> str:
    """
    Translate text to another language.
    
    This tool uses an external API to translate text from one language to another.
    It automatically detects the source language and translates to the specified
    target language.
    
    Args:
        text: The text to translate
        target_language: ISO 639-1 language code (e.g., 'es' for Spanish)
        
    Returns:
        Translated text in the target language
        
    Raises:
        ValueError: If the target language is not supported
    """
    # Implementation
```

### Testing

Include tests for your tools:

```python
# tools/tests/test_my_new_tool.py
import pytest
import asyncio
from tools.my_new_tool import search_news

@pytest.mark.asyncio
async def test_search_news_valid_query():
    """Test search_news with a valid query."""
    results = await search_news("test query")
    assert isinstance(results, list)
    assert len(results) > 0

@pytest.mark.asyncio
async def test_search_news_empty_query():
    """Test search_news with an empty query."""
    results = await search_news("")
    assert isinstance(results, list)
    assert "error" in results[0]

# Run tests
if __name__ == "__main__":
    pytest.main(["-xvs", __file__])
```

## Managing Tool Configurations

For tools that require configuration, follow these practices:

### Environment Variables

Use environment variables for configuration:

```python
# tools/my_new_tool.py
import os

API_KEY = os.environ.get("MY_TOOL_API_KEY")
BASE_URL = os.environ.get("MY_TOOL_BASE_URL", "https://api.default.com")
```

### Configuration Files

For more complex configurations, use configuration files:

```python
# tools/config.py
import json
import os
from pathlib import Path

def load_config(tool_name):
    """Load tool-specific configuration."""
    config_dir = Path(os.environ.get("MCP_CONFIG_DIR", "~/.mcp")).expanduser()
    config_path = config_dir / f"{tool_name}.json"
    
    if not config_path.exists():
        return {}
    
    try:
        with open(config_path, "r") as f:
            return json.load(f)
    except Exception as e:
        print(f"Error loading config: {str(e)}")
        return {}

# In your tool file
from tools.config import load_config

config = load_config("my_new_tool")
api_key = config.get("api_key", os.environ.get("MY_TOOL_API_KEY", ""))
```

## Advanced Tool Patterns

### Composition

Compose multiple tools for complex functionality:

```python
async def search_and_summarize(query: str) -> str:
    """Search for news and summarize the results."""
    # First search for news
    articles = await search_news(query, days=3, limit=3)
    
    if not articles or "error" in articles[0]:
        return "Failed to find news articles."
    
    # Then summarize each article
    summaries = []
    for article in articles:
        summary = await summarize_text(article["summary"])
        summaries.append(f"Title: {article['title']}\nSummary: {summary}")
    
    return "\n\n".join(summaries)
```

### Stateful Tools

For tools that need to maintain state:

```python
# tools/stateful_tool.py
from typing import Dict, Any
import json
import os
from pathlib import Path

class SessionStore:
    """Simple file-based session store."""
    
    def __init__(self, tool_name):
        self.storage_dir = Path(os.environ.get("MCP_STORAGE_DIR", "~/.mcp/storage")).expanduser()
        self.storage_dir.mkdir(parents=True, exist_ok=True)
        self.tool_name = tool_name
        self.sessions: Dict[str, Dict[str, Any]] = {}
        self._load()
    
    def _get_storage_path(self):
        return self.storage_dir / f"{self.tool_name}_sessions.json"
    
    def _load(self):
        path = self._get_storage_path()
        if path.exists():
            try:
                with open(path, "r") as f:
                    self.sessions = json.load(f)
            except Exception:
                self.sessions = {}
    
    def _save(self):
        with open(self._get_storage_path(), "w") as f:
            json.dump(self.sessions, f, indent=2)
    
    def get(self, session_id, key, default=None):
        session = self.sessions.get(session_id, {})
        return session.get(key, default)
    
    def set(self, session_id, key, value):
        if session_id not in self.sessions:
            self.sessions[session_id] = {}
        self.sessions[session_id][key] = value
        self._save()
    
    def clear(self, session_id):
        if session_id in self.sessions:
            del self.sessions[session_id]
            self._save()

# Usage in a tool
from tools.stateful_tool import SessionStore

# Initialize store
session_store = SessionStore("conversation")

async def remember_fact(session_id: str, fact: str) -> str:
    """Remember a fact for later recall."""
    facts = session_store.get(session_id, "facts", [])
    facts.append(fact)
    session_store.set(session_id, "facts", facts)
    return f"I'll remember that: {fact}"

async def recall_facts(session_id: str) -> str:
    """Recall previously stored facts."""
    facts = session_store.get(session_id, "facts", [])
    if not facts:
        return "I don't have any facts stored for this session."
    
    return "Here are the facts I remember:\n- " + "\n- ".join(facts)
```

### Long-Running Operations

For tools that take time to complete:

```python
import asyncio

from mcp.server.fastmcp import FastMCP, Context

@mcp.tool()
async def process_large_dataset(dataset_url: str, ctx: Context) -> str:
    """Process a large dataset with progress reporting."""
    try:
        # Download dataset
        await ctx.info(f"Downloading dataset from {dataset_url}")
        await ctx.report_progress(10)
        
        # Process in chunks
        total_chunks = 10
        for i in range(total_chunks):
            await ctx.info(f"Processing chunk {i+1}/{total_chunks}")
            # Process chunk
            await asyncio.sleep(1)  # Simulate work
            await ctx.report_progress(10 + (i+1) * 80 // total_chunks)
        
        # Finalize
        await ctx.info("Finalizing results")
        await ctx.report_progress(90)
        await asyncio.sleep(1)  # Simulate work
        
        # Complete
        await ctx.report_progress(100)
        return "Dataset processing complete. Found 42 insights."
        
    except Exception as e:
        await ctx.info(f"Error: {str(e)}")
        return f"Processing failed: {str(e)}"
```

## Adding a Resource

In addition to tools, you might want to add a resource to your MCP server:

```python
# server.py
import os

import httpx

@mcp.resource("weather://{location}")
async def get_weather(location: str) -> str:
    """
    Get weather information for a location.
    
    Args:
        location: City name or coordinates
    
    Returns:
        Weather information as text
    """
    try:
        # Fetch weather data
        async with httpx.AsyncClient() as client:
            response = await client.get(
                f"https://api.weatherapi.com/v1/current.json",
                params={
                    "q": location,
                    "key": os.environ.get("WEATHER_API_KEY", "")
                }
            )
            response.raise_for_status()
            data = response.json()
        
        # Format weather data
        location_data = data.get("location", {})
        current_data = data.get("current", {})
        
        weather_info = f"""
        Weather for {location_data.get('name', location)}, {location_data.get('country', '')}
        
        Temperature: {current_data.get('temp_c', 'N/A')}°C / {current_data.get('temp_f', 'N/A')}°F
        Condition: {current_data.get('condition', {}).get('text', 'N/A')}
        Wind: {current_data.get('wind_kph', 'N/A')} kph, {current_data.get('wind_dir', 'N/A')}
        Humidity: {current_data.get('humidity', 'N/A')}%
        Updated: {current_data.get('last_updated', 'N/A')}
        """
        
        return weather_info
        
    except Exception as e:
        return f"Error fetching weather: {str(e)}"
```

## Adding a Prompt

You can also add a prompt to your MCP server:

```python
# server.py
@mcp.prompt()
def analyze_sentiment(text: str) -> str:
    """
    Create a prompt for sentiment analysis.
    
    Args:
        text: The text to analyze
    
    Returns:
        A prompt for sentiment analysis
    """
    return f"""
    Please analyze the sentiment of the following text and categorize it as positive, negative, or neutral. 
    Provide a brief explanation for your categorization and highlight key phrases that indicate the sentiment.
    
    Text to analyze:
    
    {text}
    
    Your analysis:
    """
```

## Conclusion

Extending the MCP repository with new tools is a powerful way to enhance the capabilities of LLMs. By following the patterns and practices outlined in this guide, you can create robust, reusable tools that integrate seamlessly with the existing repository structure.

Remember these key principles:

1. **Plan before coding**: Define the purpose and interface of your tool
2. **Follow best practices**: Implement proper error handling, input validation, and security
3. **Document thoroughly**: Write clear docstrings and comments
4. **Test rigorously**: Create tests for your tools
5. **Consider configurations**: Use environment variables or configuration files
6. **Explore advanced patterns**: Implement composition, state, and long-running operations as needed

In the next document, we'll explore example use cases for your MCP server and tools.

```

--------------------------------------------------------------------------------
/docs/04-connecting-to-mcp-servers.md:
--------------------------------------------------------------------------------

```markdown
# Connecting to MCP Servers

This document explains the different methods for connecting to Model Context Protocol (MCP) servers. Whether you're using Claude Desktop, a custom client, or programmatic access, this guide will help you establish and manage connections to MCP servers.

## Overview of MCP Clients

Before diving into implementation details, it's important to understand what an MCP client does:

1. **Discovers** MCP servers (through configuration or discovery mechanisms)
2. **Establishes** connections to servers using appropriate transport methods
3. **Negotiates** capabilities through protocol initialization
4. **Lists** available tools, resources, and prompts
5. **Facilitates** tool execution, resource retrieval, and prompt application
6. **Handles** errors, timeouts, and reconnection

```mermaid
flowchart LR
    Client[MCP Client]
    Server1[MCP Server 1]
    Server2[MCP Server 2]
    LLM[LLM]
    User[User]
    
    User <--> Client
    Client <--> Server1
    Client <--> Server2
    Client <--> LLM
```

## Client Types

There are several ways to connect to MCP servers:

1. **Integrated Clients**: Built into applications like Claude Desktop
2. **Standalone Clients**: Dedicated applications for MCP interaction (like our Streamlit UI)
3. **SDK Clients**: Using MCP SDKs for programmatic access
4. **Development Tools**: Tools like MCP Inspector for testing and development

## Using Claude Desktop

[Claude Desktop](https://claude.ai/download) is an integrated client that can connect to MCP servers through configuration.

### Configuration Setup

To configure Claude Desktop to use MCP servers:

1. Locate the configuration file:
   - macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
   - Windows: `%APPDATA%\Claude\claude_desktop_config.json`

2. Create or edit the file to include your MCP servers:

```json
{
  "mcpServers": {
    "web-tools": {
      "command": "python",
      "args": ["/absolute/path/to/server.py"]
    },
    "database-tools": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgres://user:pass@localhost/db"]
    }
  }
}
```

Each server configuration includes:
- A unique name (e.g., "web-tools")
- The command to run the server
- Arguments to pass to the command
- Optional environment variables
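
For example, a complete entry that also passes an environment variable to the server process (the key name and paths below are placeholders):

```json
{
  "mcpServers": {
    "web-tools": {
      "command": "python",
      "args": ["/absolute/path/to/server.py"],
      "env": {
        "WEATHER_API_KEY": "your-key-here"
      }
    }
  }
}
```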

### Starting Servers

After configuring Claude Desktop:

1. Restart the application
2. Claude will automatically start configured servers
3. You'll see the MCP tools icon in the interface
4. You can now use the servers in conversations

### Using MCP Features in Claude

With MCP servers configured, you can:

1. **Use tools**: Ask Claude to perform actions using server tools
2. **Access resources**: Request information from resources
3. **Apply prompts**: Use the prompts menu for standardized interactions

## Using the Streamlit UI

The Streamlit UI included in this repository provides a graphical interface for interacting with MCP servers.

### Running the UI

```bash
streamlit run frontend/app.py
```

This will open a web browser with the UI.

### Connecting to Servers

1. Enter the path to your Claude Desktop config file
2. Click "Load Servers" to see all configured servers
3. Select a server tab and click "Connect"
4. The UI will display tools, resources, and prompts

### Using Tools

1. Select a tool tab
2. Fill in the required parameters
3. Click "Execute" to run the tool
4. View the results in the UI

## Programmatic Access with Python

For programmatic access, you can use the MCP Python SDK.

### Basic Client Example

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def connect_to_server():
    # Set up server parameters
    server_params = StdioServerParameters(
        command="python",
        args=["server.py"]
    )
    
    # Connect to the server
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            # Initialize the connection
            await session.initialize()
            
            # List tools
            tools_result = await session.list_tools()
            print(f"Available tools: {[tool.name for tool in tools_result.tools]}")
            
            # Call a tool
            result = await session.call_tool("web_scrape", {"url": "example.com"})
            print(f"Result: {result.content[0].text if result.content else 'No content'}")

# Run the async function
if __name__ == "__main__":
    asyncio.run(connect_to_server())
```

### Tool Execution

To call a tool programmatically:

```python
# Call a tool with parameters
result = await session.call_tool("tool_name", {
    "param1": "value1",
    "param2": 42
})

# Process the result
if hasattr(result, 'content') and result.content:
    for item in result.content:
        if hasattr(item, 'text'):
            print(item.text)
```

### Resource Access

To access resources programmatically:

```python
# List available resources
resources_result = await session.list_resources()
for resource in resources_result.resources:
    print(f"Resource: {resource.name} ({resource.uri})")

# Read a resource
result = await session.read_resource("resource://uri")
content, mime_type = result.contents[0].text, result.contents[0].mimeType
print(f"Content ({mime_type}): {content[:100]}...")
```

### Prompt Usage

To use prompts programmatically:

```python
# List available prompts
prompts_result = await session.list_prompts()
for prompt in prompts_result.prompts:
    print(f"Prompt: {prompt.name}")

# Get a prompt
result = await session.get_prompt("prompt_name", {"arg1": "value1"})
for message in result.messages:
    print(f"{message.role}: {message.content.text}")
```

## Transport Methods

MCP supports different transport methods for client-server communication.

### STDIO Transport

Standard Input/Output (STDIO) transport is the simplest method:

```python
# STDIO server parameters
server_params = StdioServerParameters(
    command="python",  # Command to run the server
    args=["server.py"],  # Arguments
    env={"ENV_VAR": "value"}  # Optional environment variables
)

# Connect using STDIO
async with stdio_client(server_params) as (read, write):
    # Use the connection...
```

STDIO transport:
- Is simple to set up
- Works well for local processes
- Doesn't require network configuration
- Automatically terminates when the process ends

### SSE Transport

Server-Sent Events (SSE) transport is used for web-based connections:

```python
from mcp.client.sse import sse_client

# Connect to an SSE server
async with sse_client("http://localhost:5000") as (read, write):
    async with ClientSession(read, write) as session:
        # Use the session...
```

SSE transport:
- Supports remote connections
- Works over standard HTTP
- Can be used with web servers
- Supports multiple clients per server

## Connection Lifecycle

Understanding the connection lifecycle is important for robust implementations:

```mermaid
sequenceDiagram
    participant Client
    participant Server
    
    Client->>Server: initialize request
    Server->>Client: initialize response (capabilities)
    Client->>Server: initialized notification
    
    Note over Client,Server: Connection Ready
    
    loop Normal Operation
        Client->>Server: Requests (list_tools, call_tool, etc.)
        Server->>Client: Responses
    end
    
    Note over Client,Server: Termination
    
    Client->>Server: exit notification
    Client->>Server: Close connection
```

### Initialization

When a connection is established:

1. Client sends `initialize` request with supported capabilities
2. Server responds with its capabilities
3. Client sends `initialized` notification
4. Normal operation begins
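
Under the hood these are plain JSON-RPC messages; `ClientSession.initialize()` sends them for you. A sketch of their shape (the protocol version string and capability fields below are illustrative):

```python
import json

# Step 1: client -> server
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# Step 3: client -> server, sent after the server's initialize response.
# Notifications carry no "id" because no response is expected.
initialized_notification = {
    "jsonrpc": "2.0",
    "method": "notifications/initialized",
}

# Messages are serialized as JSON on the wire
print(json.dumps(initialize_request))
```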

### Normal Operation

During normal operation:

1. Client sends requests (e.g., `list_tools`, `call_tool`)
2. Server processes requests and sends responses
3. Server may send notifications (e.g., `resources/list_changed`)

### Termination

When ending a connection:

1. Client sends `exit` notification
2. Client closes the connection
3. Server cleans up resources

## Error Handling

Robust error handling is essential for MCP clients:

```python
from mcp.shared.exceptions import McpError

try:
    result = await session.call_tool("tool_name", params)
except McpError as e:
    # Protocol-level error reported by the server
    print(f"Protocol error {e.error.code}: {e.error.message}")
except Exception as e:
    # Transport errors, timeouts, or connection loss
    print(f"Error calling tool: {str(e)}")
```

Common error scenarios:

1. **Connection Failures**: Server not found or refused connection
2. **Initialization Errors**: Protocol incompatibility or capability mismatch
3. **Request Errors**: Invalid parameters or tool not found
4. **Execution Errors**: Tool execution failed or timed out
5. **Connection Loss**: Server terminated unexpectedly
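
For transient failures (timeouts, dropped connections), a retry wrapper is often enough. A minimal sketch, assuming every exception is retryable (adjust the exception types and backoff schedule for your client):

```python
import asyncio

async def call_tool_with_retry(session, tool_name, args,
                               attempts=3, base_delay=1.0):
    """Call an MCP tool, retrying with exponential backoff on failure."""
    delay = base_delay
    for attempt in range(1, attempts + 1):
        try:
            return await session.call_tool(tool_name, arguments=args)
        except Exception:
            if attempt == attempts:
                raise  # out of retries; surface the error to the caller
            await asyncio.sleep(delay)
            delay *= 2  # exponential backoff: base, 2x, 4x, ...
```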

## Building Your Own Client

To build a custom MCP client, follow these steps:

### 1. Set Up Transport

Choose a transport method and establish a connection:

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Set up server parameters
server_params = StdioServerParameters(
    command="python",
    args=["server.py"]
)

# Establish connection
async with stdio_client(server_params) as (read, write):
    # Create session and use it...
```

### 2. Create a Session

The `ClientSession` manages the protocol interaction:

```python
async with ClientSession(read, write) as session:
    # Initialize the connection
    await session.initialize()
    
    # Now you can use the session
```

### 3. Implement Feature Discovery

List available features from the server:

```python
# List tools
tools_result = await session.list_tools()
tools = tools_result.tools if hasattr(tools_result, 'tools') else []

# List resources
resources_result = await session.list_resources()
resources = resources_result.resources if hasattr(resources_result, 'resources') else []

# List prompts
prompts_result = await session.list_prompts()
prompts = prompts_result.prompts if hasattr(prompts_result, 'prompts') else []
```

### 4. Implement Tool Execution

Create a function to call tools:

```python
async def call_tool(session, tool_name, tool_args):
    try:
        result = await session.call_tool(tool_name, arguments=tool_args)
        
        # Format the result
        if hasattr(result, 'content') and result.content:
            content_text = []
            for item in result.content:
                if hasattr(item, 'text'):
                    content_text.append(item.text)
            return "\n".join(content_text)
        return "Tool executed, but no text content was returned."
    except Exception as e:
        return f"Error calling tool: {str(e)}"
```

### 5. Implement Resource Access

Create a function to read resources:

```python
async def read_resource(session, resource_uri):
    try:
        result = await session.read_resource(resource_uri)
        
        # Format the result
        content_items = []
        for content in result.contents:
            if hasattr(content, 'text'):
                content_items.append(content.text)
            elif hasattr(content, 'blob'):
                content_items.append(f"[Binary data: {len(content.blob)} bytes]")
        
        return "\n".join(content_items)
    except Exception as e:
        return f"Error reading resource: {str(e)}"
```

### 6. Implement User Interface

Create a user interface appropriate for your application:

- Command-line interface
- Web UI (like our Streamlit example)
- GUI application
- Integration with existing tools

## Example: Command-Line Client

Here's a simple command-line client example:

```python
import asyncio
import argparse
import json
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main(args):
    server_params = StdioServerParameters(
        command=args.command,
        args=args.args
    )
    
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            
            if args.action == "list-tools":
                tools_result = await session.list_tools()
                tools = tools_result.tools if hasattr(tools_result, 'tools') else []
                print(json.dumps([{
                    "name": tool.name,
                    "description": tool.description
                } for tool in tools], indent=2))
            
            elif args.action == "call-tool":
                tool_args = json.loads(args.params)
                result = await session.call_tool(args.tool, arguments=tool_args)
                if hasattr(result, 'content') and result.content:
                    for item in result.content:
                        if hasattr(item, 'text'):
                            print(item.text)
            
            elif args.action == "list-resources":
                resources_result = await session.list_resources()
                resources = resources_result.resources if hasattr(resources_result, 'resources') else []
                print(json.dumps([{
                    "name": resource.name,
                    "uri": resource.uri
                } for resource in resources], indent=2))
            
            elif args.action == "read-resource":
                result = await session.read_resource(args.uri)
                for content in result.contents:
                    if hasattr(content, 'text'):
                        print(content.text)

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="MCP Command Line Client")
    parser.add_argument("--command", required=True, help="Server command")
    parser.add_argument("--args", nargs="*", default=[], help="Server arguments")
    
    subparsers = parser.add_subparsers(dest="action", required=True)
    
    list_tools_parser = subparsers.add_parser("list-tools")
    
    call_tool_parser = subparsers.add_parser("call-tool")
    call_tool_parser.add_argument("--tool", required=True, help="Tool name")
    call_tool_parser.add_argument("--params", required=True, help="Tool parameters (JSON)")
    
    list_resources_parser = subparsers.add_parser("list-resources")
    
    read_resource_parser = subparsers.add_parser("read-resource")
    read_resource_parser.add_argument("--uri", required=True, help="Resource URI")
    
    args = parser.parse_args()
    asyncio.run(main(args))
```

## Integration with LLMs

To integrate MCP clients with LLMs like Claude:

1. **Tool Registration**: Register MCP tools with the LLM system
2. **Resource Loading**: Provide a way to load resources into LLM context
3. **Permission Handling**: Implement approval flows for tool execution
4. **Result Processing**: Process and present tool results to the LLM

Example integration with Anthropic Claude:

```python
import anthropic
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def process_claude_query(client, query):
    # Connect to MCP server
    server_params = StdioServerParameters(
        command="python",
        args=["server.py"]
    )
    
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            # Initialize
            await session.initialize()
            
            # Get available tools
            tools_result = await session.list_tools()
            tools = []
            for tool in tools_result.tools:
                tools.append({
                    "name": tool.name,
                    "description": tool.description,
                    "input_schema": tool.inputSchema
                })
            
            # Initial Claude query
            messages = [{"role": "user", "content": query}]
            response = client.messages.create(
                model="claude-3-opus-20240229",
                max_tokens=1000,
                messages=messages,
                tools=tools
            )
            
            # Process tool calls
            for content in response.content:
                if content.type == "tool_use":
                    # Execute the tool
                    tool_name = content.name
                    tool_args = content.input
                    
                    # Call MCP tool
                    result = await session.call_tool(tool_name, arguments=tool_args)
                    
                    # Format result
                    result_text = ""
                    if hasattr(result, 'content') and result.content:
                        for item in result.content:
                            if hasattr(item, 'text'):
                                result_text += item.text
                    
                    # Send result back to Claude
                    messages.append({
                        "role": "assistant",
                        "content": [content]
                    })
                    messages.append({
                        "role": "user",
                        "content": [
                            {
                                "type": "tool_result",
                                "tool_use_id": content.id,
                                "content": result_text
                            }
                        ]
                    })
                    
                    # Get final response
                    final_response = client.messages.create(
                        model="claude-3-opus-20240229",
                        max_tokens=1000,
                        messages=messages
                    )
                    
                    return final_response.content[0].text
            
            # If no tool calls, return initial response
            return response.content[0].text
```

## Troubleshooting Connection Issues

### Common Problems and Solutions

1. **Server Not Found**:
   - Check that the command path is correct
   - Verify the server file exists
   - Check that Python or Node.js is properly installed

2. **Connection Refused**:
   - For SSE, verify the port is available
   - Check for firewall or network issues
   - Ensure the server is running

3. **Protocol Errors**:
   - Verify MCP versions are compatible
   - Check for syntax errors in tool schemas
   - Ensure tools are properly registered

4. **Tool Execution Failures**:
   - Validate input parameters
   - Check for runtime errors in tool implementation
   - Verify external dependencies are available

5. **Node.js Environment Issues**:
   - Ensure Node.js is properly installed
   - Check for proper paths to node, npm, and npx
   - Verify global packages are accessible

### Debugging Techniques

1. **Logging**:
   - Enable debug logging in your client
   - Check server logs for errors
   - Use the MCP Inspector for detailed message logs

2. **Environment Variables**:
   - Set `MCP_DEBUG=1` for verbose logging
   - Use appropriate environment variables for servers

3. **Manual Testing**:
   - Test servers directly with the MCP Inspector
   - Try simple tools first to isolate issues
   - Verify transport works with echo tools

## Conclusion

Connecting to MCP servers opens up powerful capabilities for extending LLMs with custom tools and data sources. Whether using existing clients like Claude Desktop, building custom integrations, or developing your own applications, the MCP protocol provides a standardized way to enhance LLM interactions.

In the next document, we'll explore the communication protocols used by MCP in more detail.

```

--------------------------------------------------------------------------------
/docs/06-troubleshooting-guide.md:
--------------------------------------------------------------------------------

```markdown
# MCP Troubleshooting Guide

This comprehensive guide addresses common issues encountered when working with Model Context Protocol (MCP) servers and clients. It provides step-by-step solutions, diagnostic techniques, and best practices for resolving problems efficiently.

## Environment Setup Issues

### Python Environment Problems

#### Missing Dependencies

**Symptoms:**
- Import errors when running server code
- "Module not found" errors
- Unexpected version conflicts

**Solutions:**
1. Verify all dependencies are installed:
   ```bash
   pip install -r requirements.txt
   ```

2. Check for version conflicts:
   ```bash
   pip list
   ```

3. Consider using a virtual environment:
   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   pip install -r requirements.txt
   ```

4. Try using `uv` for faster, more reliable installation:
   ```bash
   uv pip install -r requirements.txt
   ```

#### Incompatible Python Version

**Symptoms:**
- Syntax errors in valid code
- Feature not found errors
- Type hint errors

**Solutions:**
1. Check your Python version:
   ```bash
   python --version
   ```

2. Ensure you're using Python 3.10 or higher (required for MCP):
   ```bash
   # Install or update Python if needed
   # Then create a new virtual environment with the correct version
   python3.10 -m venv venv
   ```

### Node.js Environment Problems

#### Missing or Inaccessible Node.js

**Symptoms:**
- "Command not found: npx" errors
- "npx is not recognized as an internal or external command"
- Node.js servers fail to start

**Solutions:**
1. Verify Node.js installation:
   ```bash
   node --version
   npm --version
   npx --version
   ```

2. Install Node.js if needed (from [nodejs.org](https://nodejs.org/))

3. Check PATH environment variable:
   ```bash
   # On Unix-like systems
   echo $PATH
   
   # On Windows
   echo %PATH%
   ```

4. Find the location of Node.js binaries:
   ```bash
   # On Unix-like systems
   which node
   which npm
   which npx
   
   # On Windows
   where node
   where npm
   where npx
   ```

5. Add the Node.js bin directory to your PATH if needed

#### NPM Package Issues

**Symptoms:**
- NPM packages fail to install
- "Error: Cannot find module" when using npx
- Permission errors during installation

**Solutions:**
1. Clear npm cache:
   ```bash
   npm cache clean --force
   ```

2. Try installing packages globally:
   ```bash
   npm install -g @modelcontextprotocol/server-name
   ```

3. Check npm permissions:
   ```bash
   # Fix ownership issues on Unix-like systems
   sudo chown -R $(whoami) ~/.npm
   ```

4. Use npx with explicit paths:
   ```bash
   npx --no-install @modelcontextprotocol/server-name
   ```

## Server Connection Issues

### STDIO Connection Problems

#### Process Launch Failures

**Symptoms:**
- "No such file or directory" errors
- "Cannot execute binary file" errors
- Process exits immediately

**Solutions:**
1. Check that the command exists and is executable:
   ```bash
   # For Python servers
   which python
   
   # For Node.js servers
   which node
   ```

2. Verify file paths are correct:
   ```bash
   # Check if file exists
   ls -l /path/to/server.py
   ```

3. Use absolute paths in configuration:
   ```json
   {
     "command": "/usr/bin/python",
     "args": ["/absolute/path/to/server.py"]
   }
   ```

4. Check file permissions:
   ```bash
   # Make script executable if needed
   chmod +x /path/to/server.py
   ```

#### STDIO Protocol Errors

**Symptoms:**
- "Unexpected message format" errors
- "Invalid JSON" errors
- Connection dropped after initialization

**Solutions:**
1. Avoid mixing regular print statements with MCP protocol:
   ```python
   # Bad: writes to stdout, interfering with protocol
   print("Debug info")
   
   # Good: writes to stderr, doesn't interfere
   import sys
   print("Debug info", file=sys.stderr)
   ```

2. Enable protocol logging for debugging:
   ```python
   import logging
   logging.basicConfig(level=logging.DEBUG)
   ```

3. Check for blocked I/O operations
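
One common fix for blocked I/O is to push the blocking work off the event loop with `asyncio.to_thread` (the slow function below is a stand-in for any blocking library call):

```python
import asyncio
import time

def slow_blocking_io():
    # Stand-in for a blocking call (file read, requests.get, etc.)
    time.sleep(0.1)
    return "done"

async def tool_body():
    # Runs the blocking call in a worker thread, keeping the event
    # loop free to service the MCP connection.
    return await asyncio.to_thread(slow_blocking_io)

print(asyncio.run(tool_body()))  # prints "done"
```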

### SSE Connection Problems

#### HTTP Server Issues

**Symptoms:**
- "Connection refused" errors
- Timeout errors
- SSE connection fails

**Solutions:**
1. Verify server is running on the correct host/port:
   ```bash
   # Check if something is listening on the port
   netstat -tuln | grep 5000
   ```

2. Check for firewall or network issues:
   ```bash
   # Test connection to server
   curl http://localhost:5000/
   ```

3. Ensure CORS is properly configured (for web clients):
   ```python
   # Example CORS headers in aiohttp
   response.headers.update({
       'Access-Control-Allow-Origin': '*',
       'Access-Control-Allow-Methods': 'GET, POST, OPTIONS',
       'Access-Control-Allow-Headers': 'Content-Type'
   })
   ```

#### SSE Message Problems

**Symptoms:**
- Messages not received
- "Invalid SSE format" errors
- Connection closes unexpectedly

**Solutions:**
1. Check SSE message format:
   ```
   event: message
   data: {"jsonrpc":"2.0","id":1,"result":{...}}
   
   ```
   (Note the double newline at the end)

2. Verify content-type header:
   ```
   Content-Type: text/event-stream
   ```

3. Ensure Keep-Alive is properly configured:
   ```
   Connection: keep-alive
   Cache-Control: no-cache
   ```

## Claude Desktop Integration Issues

### Configuration Problems

#### Configuration File Issues

**Symptoms:**
- MCP servers don't appear in Claude
- "Failed to start server" errors
- No MCP icon in Claude interface

**Solutions:**
1. Check configuration file location:
   - macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
   - Windows: `%APPDATA%\Claude\claude_desktop_config.json`

2. Verify JSON syntax is valid:
   ```json
   {
     "mcpServers": {
       "web-tools": {
         "command": "python",
         "args": ["/absolute/path/to/server.py"]
       }
     }
   }
   ```

3. Create the file if it doesn't exist:
   ```bash
   # Create directory if needed
   mkdir -p ~/Library/Application\ Support/Claude/
   
   # Create basic config file
   echo '{"mcpServers":{}}' > ~/Library/Application\ Support/Claude/claude_desktop_config.json
   ```

4. Check file permissions:
   ```bash
   # Ensure user can read/write the file
   chmod 600 ~/Library/Application\ Support/Claude/claude_desktop_config.json
   ```

#### Server Path Issues

**Symptoms:**
- "Command not found" errors
- "No such file or directory" errors

**Solutions:**
1. Use absolute paths in configuration:
   ```json
   {
     "mcpServers": {
       "web-tools": {
         "command": "/usr/bin/python",
         "args": ["/Users/username/Documents/Personal/MCP/server.py"]
       }
     }
   }
   ```

2. Avoid environment variables and relative paths (the `//` comments below are illustrative only; real JSON does not allow comments):
   ```json
   // Bad: relative path
   "args": ["./server.py"]

   // Good: absolute path
   "args": ["/Users/username/Documents/Personal/MCP/server.py"]
   ```

3. Escape backslashes properly on Windows:
   ```json
   "args": ["C:\\Users\\username\\Documents\\Personal\\MCP\\server.py"]
   ```

### Tool Execution Problems

#### Permission Denials

**Symptoms:**
- "Permission denied" errors
- Tools fail silently
- Claude cannot access files or resources

**Solutions:**
1. Check file and directory permissions:
   ```bash
   ls -la /path/to/files/
   ```

2. Run Claude Desktop with appropriate permissions

3. Check for sandboxing restrictions

#### Command Execution Failures

**Symptoms:**
- Tools fail but not due to permission issues
- Timeouts during tool execution
- Tool returns error message

**Solutions:**
1. Check logs for detailed error messages:
   ```bash
   # View Claude Desktop MCP logs
   tail -f ~/Library/Logs/Claude/mcp*.log
   ```

2. Test tools directly outside of Claude:
   ```bash
   # Run the server directly and test with MCP Inspector
   npx @modelcontextprotocol/inspector python server.py
   ```

3. Implement better error handling in tools
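One way to implement that error handling uniformly is a small wrapper applied to every tool, so a failing tool returns a readable message instead of crashing the server. This is a sketch; the decorator name `tool_errors_to_text` is ours:

```python
import functools
import logging
import traceback

def tool_errors_to_text(func):
    """Wrap a tool so exceptions become readable error strings.

    The full traceback is logged for debugging, while the caller
    (e.g., Claude) receives a short, friendly message.
    """
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as e:
            logging.error("Tool %s failed:\n%s", func.__name__, traceback.format_exc())
            return f"Error in {func.__name__}: {e}"
    return wrapper
```

Applied under the `@mcp.tool()` decorator, this keeps tool failures visible in the logs while the client sees a clean error string.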

## Streamlit UI Issues

### Connection Problems

#### Config File Access

**Symptoms:**
- "File not found" errors
- Cannot load servers from config file
- Permission errors

**Solutions:**
1. Verify the config file path is correct
2. Check file permissions
3. Use the pre-filled default path if available
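Those three checks can be rolled into a single pre-flight diagnosis before the UI tries to load servers. A minimal sketch (the function name `check_config_path` is ours) that reports exactly which check failed:

```python
import json
import os

def check_config_path(path: str) -> str:
    """Return a human-readable diagnosis of a config file path."""
    path = os.path.expanduser(path)
    if not os.path.exists(path):
        return f"not found: {path}"
    if not os.access(path, os.R_OK):
        return f"not readable: {path}"
    try:
        with open(path) as f:
            json.load(f)
    except json.JSONDecodeError as e:
        return f"invalid JSON: {e}"
    return "ok"
```

Surfacing the result in the UI (e.g., via `st.error`) tells the user whether the problem is the path, the permissions, or the file contents.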

#### Server Command Execution

**Symptoms:**
- "Command not found" errors
- Node.js/Python not found
- Server fails to start

**Solutions:**
1. Check environment detection in the UI:
   ```python
   # Are Node.js and other tools detected?
   node_installed = bool(find_executable('node'))
   ```

2. Add logging to track command execution:
   ```python
   print(f"Trying to execute: {command} {' '.join(args)}")
   ```

3. Use full paths to executables
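Rather than hard-coding full paths, you can resolve them at startup with the standard library's `shutil.which` and fail with a clear message before the launch fails cryptically. A sketch (the helper name `resolve_command` is ours):

```python
import shutil

def resolve_command(command: str) -> str:
    """Resolve a bare command name to an absolute path on PATH.

    Raising early with a clear message beats a cryptic
    'command not found' from a subprocess later.
    """
    path = shutil.which(command)
    if path is None:
        raise FileNotFoundError(
            f"'{command}' not found on PATH; install it or configure an absolute path"
        )
    return path
```

For example, `resolve_command("node")` returns the absolute path to the Node.js binary if it is installed, which can then be used verbatim in the server launch command.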

### UI Display Issues

#### Tool Schema Problems

**Symptoms:**
- Tool parameters not displayed correctly
- Input fields missing
- Form submission fails

**Solutions:**
1. Check tool schema format:
   ```python
   # Ensure schema has proper structure
   @mcp.tool()
   def my_tool(param1: str, param2: int = 0) -> str:
       """
       Tool description.
       
       Args:
           param1: Description of param1
           param2: Description of param2 (default: 0)
       
       Returns:
           Result description
       """
       # Implementation
   ```

2. Verify all required schema fields are present
3. Check for type conversion issues
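Type conversion issues usually stem from form widgets returning strings while the tool schema declares integers or booleans. A sketch of a coercion helper (the name `coerce_input` is ours; the type names follow JSON Schema conventions):

```python
def coerce_input(value: str, type_name: str):
    """Convert a string form value to the type a tool schema declares.

    Type names follow JSON Schema: 'integer', 'number', 'boolean';
    anything else passes through unchanged as a string.
    """
    if type_name == "integer":
        return int(value)
    if type_name == "number":
        return float(value)
    if type_name == "boolean":
        return value.strip().lower() in ("true", "1", "yes", "on")
    return value
```

Running each form value through a converter like this before calling the tool avoids validation errors on the server side.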

#### Tool Execution Display

**Symptoms:**
- Results not displayed
- Format issues in results
- Truncated output

**Solutions:**
1. Check error handling in result processing:
   ```python
   try:
       result = asyncio.run(call_tool(command, args, tool_name, tool_inputs))
       st.subheader("Result")
       st.write(result)
   except Exception as e:
       st.error(f"Error: {str(e)}")
   ```

2. Improve content type handling:
   ```python
   # Process different content types
   for item in result.content:
       if hasattr(item, 'text'):
           st.write(item.text)
       elif hasattr(item, 'blob'):
           st.write("Binary data: use appropriate display method")
   ```

3. Add pagination for large results
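Pagination can be as simple as splitting the result into fixed-size pages and letting the UI render one at a time (e.g., via a Streamlit selectbox feeding `st.write`). A pure-Python sketch (the function name `paginate_text` is ours):

```python
def paginate_text(text: str, page_size: int = 2000):
    """Split a long result into fixed-size pages for display.

    Always returns at least one (possibly empty) page so the UI
    never has to special-case empty results.
    """
    return [text[i:i + page_size] for i in range(0, len(text), page_size)] or [""]
```

The page size is a display choice; a couple of thousand characters per page keeps Streamlit responsive without truncating the underlying result.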

## Tool-Specific Issues

### Web Scraping Tool Problems

#### URL Formatting Issues

**Symptoms:**
- "Invalid URL" errors
- Requests to wrong domain
- URL protocol issues

**Solutions:**
1. Ensure proper URL formatting:
   ```python
   # Add protocol if missing
   if not url.startswith(('http://', 'https://')):
       url = 'https://' + url
   ```

2. Handle URL encoding properly:
   ```python
   from urllib.parse import quote_plus
   
   # Encode URL components
   safe_url = quote_plus(url)
   ```

3. Validate URLs before processing:
   ```python
   import re
   
   # Simple URL validation
   if not re.match(r'^(https?://)?[a-zA-Z0-9][-a-zA-Z0-9.]*\.[a-zA-Z]{2,}(/.*)?$', url):
       raise ValueError("Invalid URL format")
   ```

#### HTTP Request Failures

**Symptoms:**
- Timeouts
- Rate limiting errors
- Connection refused errors

**Solutions:**
1. Implement proper error handling:
   ```python
   try:
       async with httpx.AsyncClient() as client:
           response = await client.get(url, timeout=30.0)
           response.raise_for_status()
           return response.text
   except httpx.HTTPStatusError as e:
       return f"Error: HTTP status error - {e.response.status_code}"
   except httpx.RequestError as e:
       return f"Error: Request failed - {str(e)}"
   ```

2. Add retries for transient errors:
   ```python
   for attempt in range(3):
       try:
           async with httpx.AsyncClient() as client:
               response = await client.get(url, timeout=30.0)
               response.raise_for_status()
               return response.text
       except (httpx.HTTPStatusError, httpx.RequestError) as e:
           if attempt == 2:  # Last attempt
               raise
           await asyncio.sleep(1)  # Wait before retry
   ```

3. Add user-agent headers:
   ```python
   headers = {
       "User-Agent": "MCP-WebScraper/1.0",
       "Accept": "text/html,application/xhtml+xml,application/xml"
   }
   response = await client.get(url, headers=headers, timeout=30.0)
   ```

#### Content Processing Issues

**Symptoms:**
- Empty or malformed content
- Encoding issues
- Content too large

**Solutions:**
1. Handle different content types:
   ```python
   if "application/json" in response.headers.get("content-type", ""):
       return json.dumps(response.json(), indent=2)
   elif "text/html" in response.headers.get("content-type", ""):
       # Extract main content
       soup = BeautifulSoup(response.text, 'html.parser')
       # Remove scripts, styles, etc.
       for script in soup(["script", "style", "meta", "noscript"]):
           script.extract()
       return soup.get_text()
   else:
       return response.text
   ```

2. Handle encoding properly:
   ```python
   # Detect encoding
   encoding = response.encoding
   # Fix common encoding issues
   if not encoding or encoding == 'ISO-8859-1':
       encoding = 'utf-8'
   text = response.content.decode(encoding, errors='replace')
   ```

3. Implement content size limits:
   ```python
   # Limit content size
   max_size = 100 * 1024  # 100 KB
   if len(response.content) > max_size:
       return response.content[:max_size].decode('utf-8', errors='replace') + "\n[Content truncated...]"
   ```

## Protocol and Message Issues

### JSON-RPC Issues

#### Invalid Message Format

**Symptoms:**
- "Invalid request" errors
- "Parse error" errors
- Unexpected protocol errors

**Solutions:**
1. Validate JSON-RPC message structure:
   ```python
   def validate_jsonrpc_message(message):
       if "jsonrpc" not in message or message["jsonrpc"] != "2.0":
           raise ValueError("Invalid jsonrpc version")
       
       if "method" in message:
           if "id" in message:
               # It's a request
               if "params" in message and not isinstance(message["params"], (dict, list)):
                   raise ValueError("Params must be object or array")
           else:
               # It's a notification
               pass
       elif "id" in message:
           # It's a response
           if "result" not in message and "error" not in message:
               raise ValueError("Response must have result or error")
           if "error" in message and "result" in message:
               raise ValueError("Response cannot have both result and error")
       else:
           raise ValueError("Invalid message format")
   ```

2. Use proper JSON-RPC libraries:
   ```python
   from jsonrpcserver import method, async_dispatch
   from jsonrpcclient import request, parse
   ```

3. Check for JSON encoding/decoding issues:
   ```python
   try:
       json_str = json.dumps(message)
       decoded = json.loads(json_str)
       # Compare decoded with original to check for precision loss
   except Exception as e:
       print(f"JSON error: {str(e)}")
   ```

#### Method Not Found

**Symptoms:**
- "Method not found" errors
- Methods available but not accessible
- Methods incorrectly implemented

**Solutions:**
1. Check method registration:
   ```python
   # For FastMCP, ensure methods are properly decorated
   @mcp.tool()
   def my_tool():
       pass
       
    # For the low-level Server API, handlers are registered via
    # decorators in the Python SDK (registration style differs by SDK)
    @server.call_tool()
    async def handle_tool_call(name, arguments):
        ...
   ```

2. Verify method names exactly match specifications:
   ```
   tools/list
   tools/call
   resources/list
   resources/read
   prompts/list
   prompts/get
   ```

3. Check capability negotiation:
   ```python
   # Capabilities must be declared at initialization; FastMCP normally
   # derives them from registered tools, so this explicit form is
   # schematic and may vary by SDK version
   server = FastMCP(
       "MyServer",
       capabilities={
           "tools": {
               "listChanged": True
           }
       }
   )
   ```

### Error Handling Issues

#### Unhandled Exceptions

**Symptoms:**
- Crashes during operation
- Unexpected termination
- Missing error responses

**Solutions:**
1. Wrap operations in try-except blocks:
   ```python
   @mcp.tool()
   async def risky_operation(param: str) -> str:
       try:
           # Potentially dangerous operation
           result = await perform_operation(param)
           return result
       except Exception as e:
           # Log the error
           logging.error(f"Error in risky_operation: {str(e)}")
           # Return a friendly error message
           return f"Operation failed: {str(e)}"
   ```

2. Use context managers for resource cleanup:
   ```python
   @mcp.tool()
   async def file_operation(path: str) -> str:
       try:
           async with aiofiles.open(path, "r") as f:
               content = await f.read()
           return content
       except FileNotFoundError:
           return f"File not found: {path}"
       except PermissionError:
           return f"Permission denied: {path}"
       except Exception as e:
           return f"Error reading file: {str(e)}"
   ```

3. Implement proper error responses:
   ```python
   # Return error in tool result
   return {
       "isError": True,
       "content": [
           {
               "type": "text",
               "text": f"Error: {str(e)}"
           }
       ]
   }
   ```

#### Error Response Format

**Symptoms:**
- Clients can't parse error responses
- Errors not displayed properly
- Missing error details

**Solutions:**
1. Use standard error codes:
   ```python
   # JSON-RPC standard error codes
   PARSE_ERROR = -32700
   INVALID_REQUEST = -32600
   METHOD_NOT_FOUND = -32601
   INVALID_PARAMS = -32602
   INTERNAL_ERROR = -32603
   
   # MCP-specific error codes
   RESOURCE_NOT_FOUND = -32001
   TOOL_NOT_FOUND = -32002
   PROMPT_NOT_FOUND = -32003
   EXECUTION_FAILED = -32004
   ```

2. Include helpful error messages:
   ```python
   raise McpError(
       code=INVALID_PARAMS,
       message="Invalid parameters",
       data={
           "details": "Parameter 'url' must be a valid URL",
           "parameter": "url"
       }
   )
   ```

3. Log detailed errors but return simplified messages:
   ```python
   try:
       # Operation
   except Exception as e:
       # Log detailed error
       logging.error(f"Detailed error: {str(e)}", exc_info=True)
       # Return simplified error to client
       raise McpError(
           code=INTERNAL_ERROR,
           message="Operation failed"
       )
   ```

## Advanced Troubleshooting Techniques

### Logging and Monitoring

#### Setting Up Comprehensive Logging

**Approach:**
1. Configure logging at different levels:
   ```python
   import logging
   
   # Set up file handler
   file_handler = logging.FileHandler("mcp_server.log")
   file_handler.setLevel(logging.DEBUG)
   
   # Set up console handler
   console_handler = logging.StreamHandler()
   console_handler.setLevel(logging.INFO)
   
   # Set up formatter
   formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
   file_handler.setFormatter(formatter)
   console_handler.setFormatter(formatter)
   
   # Configure logger
   logger = logging.getLogger("mcp")
   logger.setLevel(logging.DEBUG)
   logger.addHandler(file_handler)
   logger.addHandler(console_handler)
   ```

2. Log at appropriate levels:
   ```python
   logger.debug("Detailed debug info")
   logger.info("General operational info")
   logger.warning("Warning - something unexpected")
   logger.error("Error - operation failed")
   logger.critical("Critical - system failure")
   ```

3. Use structured logging for better analysis:
   ```python
   import json
   
   def log_structured(level, message, **kwargs):
       log_data = {
           "message": message,
           **kwargs
       }
       log_str = json.dumps(log_data)
       
       if level == "debug":
           logger.debug(log_str)
       elif level == "info":
           logger.info(log_str)
       # etc.
   
   # Usage
   log_structured("info", "Tool called", tool="web_scrape", url="example.com")
   ```

#### Protocol Tracing

**Approach:**
1. Set up protocol tracing:
   ```python
   # Enable detailed protocol tracing
   os.environ["MCP_TRACE"] = "1"
   ```

2. Log all protocol messages:
   ```python
   async def log_protocol_message(direction, message):
       log_structured(
           "debug",
           f"MCP {direction}",
           message=message,
           timestamp=datetime.now().isoformat()
       )
   
   # Intercept all messages
   original_send = protocol.send
   
   async def logged_send(message):
       await log_protocol_message("SEND", message)
       return await original_send(message)
   
   protocol.send = logged_send
   ```

3. Use MCP Inspector for visual tracing

### Performance Diagnosis

#### Identifying Bottlenecks

**Approach:**
1. Time operations:
   ```python
   import time
   
   @mcp.tool()
   async def slow_operation(param: str) -> str:
       start_time = time.time()
       
       # Operation
       result = await perform_operation(param)
       
       elapsed_time = time.time() - start_time
       logger.info(f"Operation took {elapsed_time:.3f} seconds")
       
       return result
   ```

2. Profile code:
   ```python
   import cProfile
   import pstats
   
   def profile_function(func, *args, **kwargs):
       profiler = cProfile.Profile()
       profiler.enable()
       result = func(*args, **kwargs)
       profiler.disable()
       
       stats = pstats.Stats(profiler).sort_stats("cumtime")
       stats.print_stats(20)  # Print top 20 items
       
       return result
   ```

3. Monitor resource usage:
   ```python
   import psutil
   
   def log_resource_usage():
       process = psutil.Process()
       memory_info = process.memory_info()
       cpu_percent = process.cpu_percent(interval=1)
       
       logger.info(f"Memory usage: {memory_info.rss / 1024 / 1024:.2f} MB")
       logger.info(f"CPU usage: {cpu_percent:.2f}%")
   ```

#### Optimizing Performance

**Approach:**
1. Use connection pooling:
   ```python
   # Create a shared HTTP client
   http_client = httpx.AsyncClient()
   
   @mcp.tool()
   async def fetch_url(url: str) -> str:
       # Use shared client instead of creating a new one each time
       response = await http_client.get(url)
       return response.text
   
    # Clean up on shutdown (the exact hook depends on your framework;
    # e.g., call this from a lifespan context manager at exit)
   async def close_http_client():
       await http_client.aclose()
   ```

2. Implement caching:
   ```python
   # Simple in-memory cache
   cache = {}
   cache_ttl = {}
   
   async def cached_fetch(url, ttl=300):
       now = time.time()
       
       # Check if in cache and not expired
       if url in cache and now < cache_ttl.get(url, 0):
           return cache[url]
       
       # Fetch and cache
       response = await http_client.get(url)
       result = response.text
       
       cache[url] = result
       cache_ttl[url] = now + ttl
       
       return result
   ```

3. Use async operations effectively:
   ```python
   # Run operations in parallel
   async def fetch_multiple(urls):
       tasks = [http_client.get(url) for url in urls]
       responses = await asyncio.gather(*tasks)
       return [response.text for response in responses]
   ```

### Debugging Complex Servers

#### Interactive Debugging

**Approach:**
1. Set up Python debugger:
   ```python
   import pdb
   
   @mcp.tool()
   def debug_tool(param: str) -> str:
       # Set breakpoint
       pdb.set_trace()
       # Rest of function
   ```

2. Use remote debugging for production:
   ```python
   from debugpy import listen, wait_for_client
   
   # Set up remote debugger
   listen(("0.0.0.0", 5678))
   wait_for_client()  # Wait for the debugger to attach
   ```

3. Use logging-based debugging:
   ```python
   def trace_function(func):
       def wrapper(*args, **kwargs):
           arg_str = ", ".join([
               *[repr(arg) for arg in args],
               *[f"{k}={repr(v)}" for k, v in kwargs.items()]
           ])
           logger.debug(f"CALL: {func.__name__}({arg_str})")
           
           try:
               result = func(*args, **kwargs)
               logger.debug(f"RETURN: {func.__name__} -> {repr(result)}")
               return result
           except Exception as e:
               logger.debug(f"EXCEPTION: {func.__name__} -> {str(e)}")
               raise
       
       return wrapper
   ```

#### Reproducing Issues

**Approach:**
1. Create minimal test cases:
   ```python
   # test_web_scrape.py
   import asyncio
   from server import mcp
   
   async def test_web_scrape():
        # Get the tool function (reaches into internals; the attribute
        # name may differ between SDK versions)
        web_scrape = mcp._tools["web_scrape"]
       
       # Test with different inputs
       result1 = await web_scrape("example.com")
       print(f"Result 1: {result1[:100]}...")
       
       result2 = await web_scrape("invalid^^url")
       print(f"Result 2: {result2}")
       
       # Add more test cases
   
   if __name__ == "__main__":
       asyncio.run(test_web_scrape())
   ```

2. Record and replay protocol sessions:
   ```python
   # Record session
   async def record_session(file_path):
       messages = []
       
       # Intercept messages
       original_send = protocol.send
       original_receive = protocol.receive
       
       async def logged_send(message):
           messages.append({"direction": "send", "message": message})
           return await original_send(message)
       
       async def logged_receive():
           message = await original_receive()
           messages.append({"direction": "receive", "message": message})
           return message
       
       protocol.send = logged_send
       protocol.receive = logged_receive
       
       # Run session
       # ...
       
       # Save recorded session
       with open(file_path, "w") as f:
           json.dump(messages, f, indent=2)
   ```

3. Use request/response mocking:
   ```python
   # Mock HTTP responses
   class MockResponse:
       def __init__(self, text, status_code=200):
           self.text = text
           self.status_code = status_code
       
       def raise_for_status(self):
           if self.status_code >= 400:
               raise httpx.HTTPStatusError(f"HTTP Error: {self.status_code}", request=None, response=self)
   
   # Replace httpx client get method
   async def mock_get(url, **kwargs):
       if url == "https://example.com":
           return MockResponse("<html><body>Example content</body></html>")
       elif url == "https://error.example.com":
           return MockResponse("Error", status_code=500)
       else:
           raise httpx.RequestError(f"Connection error: {url}")
   
    # Apply mock (get is an instance method, so the patch must accept self)
    async def patched_get(self, url, **kwargs):
        return await mock_get(url, **kwargs)

    httpx.AsyncClient.get = patched_get
   ```

## Conclusion

Troubleshooting MCP servers requires a systematic approach and understanding of the various components involved. By following the guidelines in this document, you should be able to diagnose and resolve most common issues.

Remember these key principles:

1. **Start simple**: Check the basics first (environment, commands, paths)
2. **Use logging**: Enable detailed logging to understand what's happening
3. **Test incrementally**: Test individual components before full integration
4. **Check documentation**: Consult MCP documentation for specifications
5. **Use tools**: Leverage MCP Inspector and other debugging tools

The next document will explain how to extend this repository with new tools.
