# Directory Structure

```
├── .gitignore
├── container-guide.md
├── docker-compose.txt
├── Dockerfile
├── github-module.py
├── gitlab-module.py
├── gmaps-module.py
├── LICENSE
├── llm-integration.txt
├── mcp-architecture.mermaid
├── mcp-config.py
├── mcp-server-code.py
├── memory-module.py
├── package.json
├── production-deployment.md
├── puppeteer-module.py
├── python-client.py
├── README.md
├── redhat-deployment.md
├── requirements.txt
├── summary.md
├── test-script.py
└── tools-init.py
```

# Files

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
#   For a library or package, you might want to ignore these files since the code is
#   intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
#   However, in case of collaboration, if having platform-specific dependencies or dependencies
#   having no cross-platform support, pipenv may install dependencies that don't work, or not
#   install all needed dependencies.
#Pipfile.lock

# UV
#   Similar to Pipfile.lock, it is generally recommended to include uv.lock in version control.
#   This is especially recommended for binary packages to ensure reproducibility, and is more
#   commonly ignored for libraries.
#uv.lock

# poetry
#   Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
#   This is especially recommended for binary packages to ensure reproducibility, and is more
#   commonly ignored for libraries.
#   https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock

# pdm
#   Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
#   pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
#   in version control.
#   https://pdm.fming.dev/latest/usage/project/#working-with-version-control
.pdm.toml
.pdm-python
.pdm-build/

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
#  JetBrains specific template is maintained in a separate JetBrains.gitignore that can
#  be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
#  and can be added to the global gitignore or merged into this file.  For a more nuclear
#  option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/

# PyPI configuration file
.pypirc

```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
**NOTE:** This project is no longer maintained. Memory limitations in smaller models mean my preferred approach to MCP servers is now individual containerized server providers rather than a monolithic routing provider. The code is kept available in case others still want to go down this route.

# Model Context Protocol (MCP) Server

A modular server that implements the [Model Context Protocol](https://modelcontextprotocol.io/) standard, providing tools for GitHub, GitLab, Google Maps, Memory storage, and Puppeteer web automation.

## Architecture

The MCP server is built with a modular architecture, where each tool is implemented as a separate module. The server provides a unified gateway that routes requests to the appropriate tool.

![MCP Server Architecture](./architecture.png)

## Features

- **MCP Gateway**: A unified endpoint for all tool requests following the MCP standard
- **MCP Manifest**: An endpoint that describes all available tools and their capabilities
- **Direct Tool Access**: Each tool can be accessed directly via its own API endpoints
- **Modular Design**: Easy to add or remove tools as needed

### Included Tools

1. **GitHub Tool**: Interact with GitHub repositories, issues, and search
2. **GitLab Tool**: Interact with GitLab projects, issues, and pipelines
3. **Google Maps Tool**: Geocoding, directions, and places search
4. **Memory Tool**: Store and retrieve data persistently
5. **Puppeteer Tool**: Take screenshots, generate PDFs, and extract content from websites

## Getting Started

### Prerequisites

- Python 3.8 or higher
- Node.js 14 or higher
- A Red Hat-based Linux distribution (RHEL, CentOS, Fedora) or any Linux/macOS system

### Installation

1. Clone this repository:
   ```bash
   git clone https://github.com/yourusername/mcp-server.git
   cd mcp-server
   ```

2. Install Python dependencies:
   ```bash
   pip install -r requirements.txt
   ```

3. Install Node.js dependencies:
   ```bash
   npm install
   ```

4. Create a `.env` file with your configuration:
   ```
   SECRET_KEY=your-secret-key
   DEBUG=False
   
   # GitHub configuration
   GITHUB_TOKEN=your-github-token
   
   # GitLab configuration
   GITLAB_TOKEN=your-gitlab-token
   
   # Google Maps configuration
   GMAPS_API_KEY=your-google-maps-api-key
   
   # Memory configuration
   MEMORY_DB_URI=sqlite:///memory.db
   
   # Puppeteer configuration
   PUPPETEER_HEADLESS=true
   CHROME_PATH=/usr/bin/chromium-browser
   ```

5. Start the server:
   ```bash
   python app.py
   ```

### Containerized Deployment

You can run the server using either Docker or Podman (Red Hat's container engine).

#### Docker Deployment

If you already have Docker and docker-compose installed:

1. Build the Docker image:
   ```bash
   docker build -t mcp-server .
   ```

2. Run the container:
   ```bash
   docker run -p 5000:5000 --env-file .env mcp-server
   ```

3. Alternatively, use docker-compose:
   
   Create a `docker-compose.yml` file:
   ```yaml
   version: '3'
   services:
     mcp-server:
       build: .
       ports:
         - "5000:5000"
       volumes:
         - ./data:/app/data
       env_file:
         - .env
       restart: unless-stopped
   ```

   Then run:
   ```bash
   docker-compose up -d
   ```

#### Podman Deployment

For Red Hat based systems (RHEL, CentOS, Fedora) using Podman:

1. Build the container image:
   ```bash
   podman build -t mcp-server .
   ```

2. Run the container:
   ```bash
   podman run -p 5000:5000 --env-file .env mcp-server
   ```

3. If you need persistent storage:
   ```bash
   mkdir -p ./data
   podman run -p 5000:5000 --env-file .env -v ./data:/app/data:Z mcp-server
   ```
   Note: The `:Z` suffix is important for SELinux-enabled systems.

4. Using Podman Compose (if installed):
   ```bash
   # Install podman-compose if needed
   pip install podman-compose
   
   # Use the same docker-compose.yml file as above
   podman-compose up -d
   ```

## Using the MCP Server

### MCP Gateway

The MCP Gateway is the main endpoint for accessing all tools using the MCP standard.

**Endpoint**: `POST /mcp/gateway`

**Request format**:
```json
{
  "tool": "github",
  "action": "listRepos",
  "parameters": {
    "username": "octocat"
  }
}
```

**Response format**:
```json
{
  "tool": "github",
  "action": "listRepos",
  "status": "success",
  "result": [
    {
      "id": 1296269,
      "name": "Hello-World",
      "full_name": "octocat/Hello-World",
      "owner": {
        "login": "octocat",
        "id": 1
      },
      ...
    }
  ]
}
```
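
For example, a minimal Python call to the gateway might look like this (a sketch, assuming the server is running locally on port 5000):

```python
import requests

# Invoke the GitHub tool's listRepos action through the MCP gateway
payload = {
    "tool": "github",
    "action": "listRepos",
    "parameters": {"username": "octocat"},
}
data = requests.post("http://localhost:5000/mcp/gateway", json=payload, timeout=30).json()

if data.get("status") == "success":
    for repo in data["result"]:
        print(repo["full_name"])
else:
    print("Gateway call failed:", data)
```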

### MCP Manifest

The MCP Manifest describes all available tools and their capabilities.

**Endpoint**: `GET /mcp/manifest`

**Response format**:
```json
{
  "manifestVersion": "1.0",
  "tools": {
    "github": {
      "actions": {
        "listRepos": {
          "description": "List repositories for a user or organization",
          "parameters": {
            "username": {
              "type": "string",
              "description": "GitHub username or organization name"
            }
          },
          "returns": {
            "type": "array",
            "description": "List of repository objects"
          }
        },
        ...
      }
    },
    ...
  }
}
```
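
To discover tools programmatically, fetch the manifest and inspect its `tools` map (a short sketch along the lines of the included integration test script):

```python
import requests

# Retrieve the MCP manifest and list the registered tools
manifest = requests.get("http://localhost:5000/mcp/manifest", timeout=30).json()
print("Available tools:", ", ".join(manifest["tools"].keys()))
```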

### Direct Tool Access

Each tool can also be accessed directly via its own API endpoints:

- GitHub: `/tool/github/...`
- GitLab: `/tool/gitlab/...`
- Google Maps: `/tool/gmaps/...`
- Memory: `/tool/memory/...`
- Puppeteer: `/tool/puppeteer/...`

See the API documentation for each tool for details on the available endpoints.
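
For instance, the GitHub tool's `listRepos` action is also reachable as a plain GET request (illustrative, mirroring the integration test script):

```python
import requests

# Call the GitHub tool's direct endpoint instead of the MCP gateway
resp = requests.get(
    "http://localhost:5000/tool/github/listRepos",
    params={"username": "octocat"},
    timeout=30,
)
print(resp.status_code, len(resp.json()))
```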

## Tool Documentation

### GitHub Tool

The GitHub tool provides access to the GitHub API for repositories, issues, and search.

**Actions**:
- `listRepos`: List repositories for a user or organization
- `getRepo`: Get details for a specific repository
- `searchRepos`: Search for repositories
- `getIssues`: Get issues for a repository
- `createIssue`: Create a new issue in a repository

### GitLab Tool

The GitLab tool provides access to the GitLab API for projects, issues, and pipelines.

**Actions**:
- `listProjects`: List all projects accessible by the authenticated user
- `getProject`: Get details for a specific project
- `searchProjects`: Search for projects on GitLab
- `getIssues`: Get issues for a project
- `createIssue`: Create a new issue in a project
- `getPipelines`: Get pipelines for a project

### Google Maps Tool

The Google Maps tool provides access to the Google Maps API for geocoding, directions, and places search.

**Actions**:
- `geocode`: Convert an address to geographic coordinates
- `reverseGeocode`: Convert geographic coordinates to an address
- `getDirections`: Get directions between two locations
- `searchPlaces`: Search for places using the Google Places API
- `getPlaceDetails`: Get details for a specific place

### Memory Tool

The Memory tool provides a persistent key-value store for storing and retrieving data.

**Actions**:
- `get`: Get a memory item by key
- `set`: Create or update a memory item
- `delete`: Delete a memory item by key
- `list`: List all memory items, with optional filtering
- `search`: Search memory items by value
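
A typical round trip through the gateway stores a value and reads it back (a sketch based on the integration test script):

```python
import requests

GATEWAY = "http://localhost:5000/mcp/gateway"

# Store a value with optional metadata...
requests.post(GATEWAY, json={
    "tool": "memory",
    "action": "set",
    "parameters": {"key": "test-key", "value": "test-value", "metadata": {"test": True}},
}, timeout=30)

# ...then retrieve it by key
got = requests.post(GATEWAY, json={
    "tool": "memory",
    "action": "get",
    "parameters": {"key": "test-key"},
}, timeout=30).json()
print(got["result"]["value"])  # "test-value"
```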

### Puppeteer Tool

The Puppeteer tool provides web automation capabilities for taking screenshots, generating PDFs, and extracting content from websites.

**Actions**:
- `screenshot`: Take a screenshot of a webpage
- `pdf`: Generate a PDF of a webpage
- `extract`: Extract content from a webpage
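
The screenshot action returns the image as base64 (the integration test script checks for a `base64Image` field in the result), so saving it to disk looks roughly like this:

```python
import base64
import requests

resp = requests.post("http://localhost:5000/mcp/gateway", json={
    "tool": "puppeteer",
    "action": "screenshot",
    "parameters": {"url": "https://example.com", "fullPage": False, "type": "png"},
}, timeout=120).json()

if resp.get("status") == "success":
    # Decode the base64-encoded PNG and write it out
    with open("example.png", "wb") as f:
        f.write(base64.b64decode(resp["result"]["base64Image"]))
```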

## Contributing

Contributions are welcome! Here's how you can extend the MCP server:

### Adding a New Tool

1. Create a new file in the `tools` directory, e.g., `tools/newtool_tool.py`
2. Implement the tool with actions following the same pattern as existing tools
3. Add the tool to the manifest in `app.py`
4. Register the tool's blueprint in `tools/__init__.py`
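
A new tool module can follow the same shape as the existing ones: a `handle_action` dispatcher for the gateway plus a Flask blueprint for direct access. A minimal skeleton (the `echo` tool here is a made-up example):

```python
# tools/echo_tool.py (hypothetical example)
from flask import Blueprint, request, jsonify

echo_routes = Blueprint('echo', __name__)

def handle_action(action, parameters):
    """Handle Echo tool actions according to MCP standard"""
    action_handlers = {"echo": echo}
    if action not in action_handlers:
        raise ValueError(f"Unknown action: {action}")
    return action_handlers[action](parameters)

def echo(parameters):
    """Return the supplied message unchanged"""
    message = parameters.get('message')
    if not message:
        raise ValueError("Message parameter is required")
    return {'message': message}

# Direct API route (register the blueprint under /tool/echo in tools/__init__.py)
@echo_routes.route('/echo', methods=['GET'])
def api_echo():
    try:
        return jsonify(echo({'message': request.args.get('message')}))
    except Exception as e:
        return jsonify({'error': str(e)}), 400
```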

## License

This project is licensed under the MIT License - see the LICENSE file for details.

## Acknowledgements

- [Model Context Protocol](https://modelcontextprotocol.io/) for the standard specification
- [Flask](https://flask.palletsprojects.com/) for the web framework
- [Puppeteer](https://pptr.dev/) for web automation

```

--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------

```
# Python dependencies
flask==2.0.1
flask-cors==3.0.10
requests==2.32.3
python-dotenv==1.0.0
sqlalchemy==1.4.26
pyjwt==2.3.0
polyline==1.4.0

```

--------------------------------------------------------------------------------
/package.json:
--------------------------------------------------------------------------------

```json
{
  "name": "mcp-server",
  "version": "1.0.0",
  "description": "Model Context Protocol server with tools for GitHub, GitLab, Google Maps, Memory, and Puppeteer",
  "main": "index.js",
  "scripts": {
    "start": "python app.py",
    "install-node-deps": "npm install"
  },
  "keywords": [
    "mcp",
    "ai",
    "tools",
    "puppeteer"
  ],
  "dependencies": {
    "puppeteer": "^13.0.0"
  },
  "engines": {
    "node": ">=14.0.0"
  }
}

```

--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------

```dockerfile
# Dockerfile
FROM registry.access.redhat.com/ubi8/python-39:latest

WORKDIR /app

# Copy requirements first for better layer caching
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code
COPY . .

# Run as non-root user
USER 1001

# Expose the application port
EXPOSE 5000

# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1 \
    PORT=5000

# Start the application
CMD ["python", "app.py"]

```

--------------------------------------------------------------------------------
/tools-init.py:
--------------------------------------------------------------------------------

```python
# tools/__init__.py
"""
MCP Tools Package
Contains the various tool implementations for the Model Context Protocol server.
"""

from flask import Flask

def register_tools(app: Flask):
    """
    Register all tool blueprints with the Flask application.
    
    Args:
        app: The Flask application instance
    """
    from .github_tool import github_routes
    from .gitlab_tool import gitlab_routes
    from .gmaps_tool import gmaps_routes
    from .memory_tool import memory_routes
    from .puppeteer_tool import puppeteer_routes
    
    # Register blueprints
    app.register_blueprint(github_routes, url_prefix='/tool/github')
    app.register_blueprint(gitlab_routes, url_prefix='/tool/gitlab')
    app.register_blueprint(gmaps_routes, url_prefix='/tool/gmaps')
    app.register_blueprint(memory_routes, url_prefix='/tool/memory')
    app.register_blueprint(puppeteer_routes, url_prefix='/tool/puppeteer')

```

--------------------------------------------------------------------------------
/mcp-config.py:
--------------------------------------------------------------------------------

```python
# config.py
import os
from dotenv import load_dotenv

load_dotenv()

class Config:
    # Flask configuration
    SECRET_KEY = os.environ.get('SECRET_KEY', 'dev-secret-key')
    DEBUG = os.environ.get('DEBUG', 'False').lower() in ('true', '1', 't')
    
    # GitHub module configuration
    GITHUB_API_URL = os.environ.get('GITHUB_API_URL', 'https://api.github.com')
    GITHUB_TOKEN = os.environ.get('GITHUB_TOKEN')
    
    # GitLab module configuration
    GITLAB_API_URL = os.environ.get('GITLAB_API_URL', 'https://gitlab.com/api/v4')
    GITLAB_TOKEN = os.environ.get('GITLAB_TOKEN')
    
    # Google Maps module configuration
    GMAPS_API_KEY = os.environ.get('GMAPS_API_KEY')
    
    # Memory module configuration
    MEMORY_DB_URI = os.environ.get('MEMORY_DB_URI', 'sqlite:///memory.db')
    
    # Puppeteer module configuration
    PUPPETEER_HEADLESS = os.environ.get('PUPPETEER_HEADLESS', 'true').lower() in ('true', '1', 't')
    CHROME_PATH = os.environ.get('CHROME_PATH', '/usr/bin/chromium-browser')

```

--------------------------------------------------------------------------------
/docker-compose.txt:
--------------------------------------------------------------------------------

```
version: '3'

services:
  mcp-server:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: mcp-server
    restart: unless-stopped
    ports:
      - "5000:5000"
    volumes:
      - ./data:/app/data
      - ./node_scripts:/app/node_scripts
    env_file:
      - .env
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

  # Optional database service for Memory tool
  # Uncomment to use PostgreSQL instead of SQLite
  #
  # db:
  #   image: postgres:13-alpine
  #   container_name: mcp-db
  #   restart: unless-stopped
  #   environment:
  #     POSTGRES_USER: mcp
  #     POSTGRES_PASSWORD: mcppassword
  #     POSTGRES_DB: mcp
  #   volumes:
  #     - postgres-data:/var/lib/postgresql/data
  #   healthcheck:
  #     test: ["CMD-SHELL", "pg_isready -U mcp"]
  #     interval: 10s
  #     timeout: 5s
  #     retries: 5

# Uncomment if using PostgreSQL
# volumes:
#   postgres-data:

```

--------------------------------------------------------------------------------
/summary.md:
--------------------------------------------------------------------------------

```markdown

# MCP Server Implementation Summary

We've built a comprehensive Model Context Protocol (MCP) server that provides a standardized way for AI models to interact with tools and services. This implementation aligns with the [Model Context Protocol standards](https://modelcontextprotocol.io/) and provides a modular architecture for integrating various tools.

## Key Components

1. **MCP Gateway**: A unified entry point that routes requests to the appropriate tool
2. **MCP Manifest**: Provides a standardized description of all available tools and their capabilities
3. **Modular Tool Architecture**: Each tool is implemented as a separate module that can be easily added or removed
4. **Direct API Access**: Each tool can be accessed directly via RESTful API endpoints
5. **Integration with Language Models**: Examples for integrating with OpenAI and Anthropic LLMs

## Implemented Tools

Our MCP server includes five key tools:

1. **GitHub Tool**: For interacting with GitHub repositories, issues, and search
2. **GitLab Tool**: For interacting with GitLab projects, issues, and pipelines
3. **Google Maps Tool**: For geocoding, directions, and places search
4. **Memory Tool**: For persistent storage and retrieval of data
5. **Puppeteer Tool**: For web automation, screenshots, PDFs, and content extraction

## MCP Protocol Compliance

This implementation follows the Model Context Protocol specification by:

1. **Standardized Request Format**:
   ```json
   {
     "tool": "github",
     "action": "listRepos",
     "parameters": {
       "username": "octocat"
     }
   }
   ```

2. **Standardized Response Format**:
   ```json
   {
     "tool": "github",
     "action": "listRepos",
     "status": "success",
     "result": [...]
   }
   ```

3. **Tool Discovery via Manifest**:
   - Provides a comprehensive manifest at `/mcp/manifest`
   - Documents all tools, actions, parameters, and return types

4. **Error Handling**:
   - Consistent error reporting across all tools
   - Error responses include type and message
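
The gateway itself lives in `mcp-server-code.py` (not reproduced in this summary), but the error-handling pattern described above amounts to something like the following sketch, where `wrap_tool_call` and the exact error shape are illustrative:

```python
def wrap_tool_call(handle_action, tool, action, parameters):
    """Illustrative wrapper: surface tool failures with a type and a message."""
    try:
        result = handle_action(action, parameters)
        return {"tool": tool, "action": action, "status": "success", "result": result}
    except Exception as exc:
        return {
            "tool": tool,
            "action": action,
            "status": "error",
            "error": {"type": type(exc).__name__, "message": str(exc)},
        }
```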

## Modularity and Extensibility

The architecture is designed for modularity and extensibility:

1. **Tool Module Structure**:
   - Each tool is contained in its own module
   - Modules implement standard interfaces for actions

2. **Adding New Tools**:
   - Create a new module file in the `tools` directory
   - Implement action handlers and API endpoints
   - Register the tool in the MCP manifest

3. **Configuration and Deployment**:
   - Environment-based configuration
   - Multiple deployment options (direct, container, OpenShift)
   - Red Hat specific optimizations

## Integration with LLMs

The MCP server integrates seamlessly with Large Language Models:

1. **OpenAI Integration**:
   - Converts MCP tool definitions to OpenAI function calling format
   - Handles multi-step interactions with tool calling

2. **Anthropic Integration**:
   - Adapts to Anthropic's tool calling format
   - Maps between different message formats

3. **Tool Execution**:
   - Provides a standardized interface for executing tool actions
   - Handles errors and formats responses for the LLM
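
The OpenAI-side conversion is essentially a mapping from manifest entries to function-calling tool definitions. A rough sketch (the real code lives in `llm-integration.txt`; the `tool_action` naming convention here is illustrative):

```python
def manifest_to_openai_tools(manifest):
    """Convert MCP manifest actions into OpenAI function-calling tool definitions."""
    tools = []
    for tool_name, tool in manifest["tools"].items():
        for action_name, action in tool["actions"].items():
            tools.append({
                "type": "function",
                "function": {
                    # e.g. "github_listRepos" routes back to tool="github", action="listRepos"
                    "name": f"{tool_name}_{action_name}",
                    "description": action.get("description", ""),
                    "parameters": {
                        "type": "object",
                        "properties": action.get("parameters", {}),
                        # Simplification: treat every documented parameter as required
                        "required": list(action.get("parameters", {}).keys()),
                    },
                },
            })
    return tools
```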

## Visual Architecture

![MCP Server Architecture](./architecture.png)

The modular architecture follows first principles and separates concerns into distinct layers:

1. **Gateway Layer**: Handles routing and protocol compliance
2. **Tool Layer**: Implements specific tool functionality
3. **External Service Layer**: Connects to external APIs and services

## Next Steps and Future Enhancements

Potential enhancements for the MCP server:

1. **Additional Tools**:
   - Adding file storage/retrieval tools
   - Database interaction tools
   - Email sending/receiving tools

2. **Authentication and Authorization**:
   - Implementing OAuth for GitHub/GitLab
   - Role-based access control for tools

3. **Performance Optimizations**:
   - Caching frequently used results
   - Connection pooling for external services

4. **Monitoring and Observability**:
   - Metrics collection via Prometheus
   - Distributed tracing with OpenTelemetry

5. **Streaming Support**:
   - Adding support for streaming responses
   - WebSocket integration for real-time updates

```

--------------------------------------------------------------------------------
/github-module.py:
--------------------------------------------------------------------------------

```python
# tools/github_tool.py
from flask import Blueprint, request, jsonify, current_app
import requests

github_routes = Blueprint('github', __name__)

def handle_action(action, parameters):
    """Handle GitHub tool actions according to MCP standard"""
    action_handlers = {
        "listRepos": list_repos,
        "getRepo": get_repo,
        "searchRepos": search_repos,
        "getIssues": get_issues,
        "createIssue": create_issue
    }
    
    if action not in action_handlers:
        raise ValueError(f"Unknown action: {action}")
    
    return action_handlers[action](parameters)

def list_repos(parameters):
    """List repositories for a user or organization"""
    username = parameters.get('username')
    if not username:
        raise ValueError("Username parameter is required")
    
    headers = {'Authorization': f'token {current_app.config["GITHUB_TOKEN"]}'}
    response = requests.get(f'{current_app.config["GITHUB_API_URL"]}/users/{username}/repos', headers=headers)
    
    if response.status_code != 200:
        raise Exception(f"GitHub API error: {response.json()}")
    
    return response.json()

def get_repo(parameters):
    """Get details for a specific repository"""
    owner = parameters.get('owner')
    repo = parameters.get('repo')
    
    if not owner or not repo:
        raise ValueError("Owner and repo parameters are required")
    
    headers = {'Authorization': f'token {current_app.config["GITHUB_TOKEN"]}'}
    response = requests.get(f'{current_app.config["GITHUB_API_URL"]}/repos/{owner}/{repo}', headers=headers)
    
    if response.status_code != 200:
        raise Exception(f"GitHub API error: {response.json()}")
    
    return response.json()

def search_repos(parameters):
    """Search for repositories"""
    query = parameters.get('query')
    
    if not query:
        raise ValueError("Query parameter is required")
    
    headers = {'Authorization': f'token {current_app.config["GITHUB_TOKEN"]}'}
    response = requests.get(
        f'{current_app.config["GITHUB_API_URL"]}/search/repositories',
        params={'q': query},
        headers=headers
    )
    
    if response.status_code != 200:
        raise Exception(f"GitHub API error: {response.json()}")
    
    return response.json()

def get_issues(parameters):
    """Get issues for a repository"""
    owner = parameters.get('owner')
    repo = parameters.get('repo')
    state = parameters.get('state', 'open')
    
    if not owner or not repo:
        raise ValueError("Owner and repo parameters are required")
    
    headers = {'Authorization': f'token {current_app.config["GITHUB_TOKEN"]}'}
    response = requests.get(
        f'{current_app.config["GITHUB_API_URL"]}/repos/{owner}/{repo}/issues',
        params={'state': state},
        headers=headers
    )
    
    if response.status_code != 200:
        raise Exception(f"GitHub API error: {response.json()}")
    
    return response.json()

def create_issue(parameters):
    """Create a new issue in a repository"""
    owner = parameters.get('owner')
    repo = parameters.get('repo')
    title = parameters.get('title')
    body = parameters.get('body', '')
    
    if not owner or not repo:
        raise ValueError("Owner and repo parameters are required")
    if not title:
        raise ValueError("Title parameter is required")
    
    headers = {'Authorization': f'token {current_app.config["GITHUB_TOKEN"]}'}
    response = requests.post(
        f'{current_app.config["GITHUB_API_URL"]}/repos/{owner}/{repo}/issues',
        json={'title': title, 'body': body},
        headers=headers
    )
    
    if response.status_code not in (201, 200):
        raise Exception(f"GitHub API error: {response.json()}")
    
    return response.json()

# API routes for direct access (not through MCP gateway)
@github_routes.route('/listRepos', methods=['GET'])
def api_list_repos():
    """API endpoint for listing repositories"""
    try:
        username = request.args.get('username')
        result = list_repos({'username': username})
        return jsonify(result)
    except Exception as e:
        return jsonify({'error': str(e)}), 400

@github_routes.route('/getRepo/<owner>/<repo>', methods=['GET'])
def api_get_repo(owner, repo):
    """API endpoint for getting a specific repository"""
    try:
        result = get_repo({'owner': owner, 'repo': repo})
        return jsonify(result)
    except Exception as e:
        return jsonify({'error': str(e)}), 400

@github_routes.route('/searchRepos', methods=['GET'])
def api_search_repos():
    """API endpoint for searching repositories"""
    try:
        query = request.args.get('query')
        result = search_repos({'query': query})
        return jsonify(result)
    except Exception as e:
        return jsonify({'error': str(e)}), 400

@github_routes.route('/getIssues/<owner>/<repo>', methods=['GET'])
def api_get_issues(owner, repo):
    """API endpoint for getting issues for a repository"""
    try:
        state = request.args.get('state', 'open')
        result = get_issues({'owner': owner, 'repo': repo, 'state': state})
        return jsonify(result)
    except Exception as e:
        return jsonify({'error': str(e)}), 400

@github_routes.route('/createIssue/<owner>/<repo>', methods=['POST'])
def api_create_issue(owner, repo):
    """API endpoint for creating a new issue"""
    try:
        data = request.get_json()
        parameters = {
            'owner': owner,
            'repo': repo,
            'title': data.get('title'),
            'body': data.get('body', '')
        }
        result = create_issue(parameters)
        return jsonify(result), 201
    except Exception as e:
        return jsonify({'error': str(e)}), 400

```

--------------------------------------------------------------------------------
/test-script.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python
"""
Integration test script for the MCP Server.
Tests the MCP Gateway and direct API endpoints for each tool.
"""

import requests
import json
import sys
import os
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Server URL
BASE_URL = os.getenv('MCP_SERVER_URL', 'http://localhost:5000')

def test_health():
    """Test the health check endpoint"""
    print("Testing health check endpoint...")
    response = requests.get(f"{BASE_URL}/health")
    
    if response.status_code == 200:
        print("✅ Health check successful")
    else:
        print(f"❌ Health check failed: {response.status_code}")
        print(response.text)

def test_manifest():
    """Test the MCP manifest endpoint"""
    print("\nTesting MCP manifest endpoint...")
    response = requests.get(f"{BASE_URL}/mcp/manifest")
    
    if response.status_code == 200:
        manifest = response.json()
        print("✅ Manifest retrieved successfully")
        print(f"Available tools: {', '.join(manifest['tools'].keys())}")
    else:
        print(f"❌ Manifest retrieval failed: {response.status_code}")
        print(response.text)

def test_github_tool():
    """Test the GitHub tool using MCP gateway"""
    print("\nTesting GitHub tool via MCP gateway...")
    
    # Define a GitHub username to test with
    github_username = "octocat"
    
    payload = {
        "tool": "github",
        "action": "listRepos",
        "parameters": {
            "username": github_username
        }
    }
    
    response = requests.post(f"{BASE_URL}/mcp/gateway", json=payload)
    
    if response.status_code == 200:
        result = response.json()
        if result['status'] == 'success':
            print(f"✅ GitHub listRepos successful - found {len(result['result'])} repos")
        else:
            print(f"❌ GitHub listRepos request failed: {result.get('error')}")
    else:
        print(f"❌ GitHub tool request failed: {response.status_code}")
        print(response.text)
    
    # Test direct API endpoint
    print("Testing GitHub tool via direct API...")
    response = requests.get(f"{BASE_URL}/tool/github/listRepos?username={github_username}")
    
    if response.status_code == 200:
        print("✅ Direct GitHub API request successful")
    else:
        print(f"❌ Direct GitHub API request failed: {response.status_code}")
        print(response.text)

def test_memory_tool():
    """Test the Memory tool for setting and retrieving data"""
    print("\nTesting Memory tool...")
    
    # Test set operation via MCP gateway
    key = "test-key"
    value = "test-value"
    
    set_payload = {
        "tool": "memory",
        "action": "set",
        "parameters": {
            "key": key,
            "value": value,
            "metadata": {
                "test": True,
                "timestamp": "2023-01-01T00:00:00Z"
            }
        }
    }
    
    response = requests.post(f"{BASE_URL}/mcp/gateway", json=set_payload)
    
    if response.status_code == 200:
        result = response.json()
        if result['status'] == 'success':
            print("✅ Memory set successful")
        else:
            print(f"❌ Memory set failed: {result.get('error')}")
    else:
        print(f"❌ Memory set request failed: {response.status_code}")
        print(response.text)
    
    # Test get operation via MCP gateway
    get_payload = {
        "tool": "memory",
        "action": "get",
        "parameters": {
            "key": key
        }
    }
    
    response = requests.post(f"{BASE_URL}/mcp/gateway", json=get_payload)
    
    if response.status_code == 200:
        result = response.json()
        if result['status'] == 'success' and result['result']['value'] == value:
            print(f"✅ Memory get successful - retrieved value: {result['result']['value']}")
        else:
            print(f"❌ Memory get failed or incorrect value: {result}")
    else:
        print(f"❌ Memory get request failed: {response.status_code}")
        print(response.text)
    
    # Test direct API endpoint for list operation
    print("Testing Memory tool via direct API...")
    response = requests.get(f"{BASE_URL}/tool/memory/list")
    
    if response.status_code == 200:
        result = response.json()
        print(f"✅ Direct Memory API request successful - found {result['total']} items")
    else:
        print(f"❌ Direct Memory API request failed: {response.status_code}")
        print(response.text)

def test_puppeteer_tool():
    """Test the Puppeteer tool for taking a screenshot"""
    print("\nTesting Puppeteer tool...")
    
    # Test screenshot operation via MCP gateway
    screenshot_payload = {
        "tool": "puppeteer",
        "action": "screenshot",
        "parameters": {
            "url": "https://example.com",
            "fullPage": False,
            "type": "png"
        }
    }
    
    response = requests.post(f"{BASE_URL}/mcp/gateway", json=screenshot_payload)
    
    if response.status_code == 200:
        result = response.json()
        if result['status'] == 'success' and 'base64Image' in result['result']:
            print("✅ Puppeteer screenshot successful - image received")
        else:
            print(f"❌ Puppeteer screenshot failed: {result}")
    else:
        print(f"❌ Puppeteer screenshot request failed: {response.status_code}")
        print(response.text)

def main():
    """Run all tests"""
    print("=== MCP Server Integration Tests ===")
    print(f"Testing server at: {BASE_URL}")
    
    try:
        test_health()
        test_manifest()
        test_github_tool()
        test_memory_tool()
        test_puppeteer_tool()
        
        print("\n✅ All tests completed!")
    except Exception as e:
        print(f"\n❌ Tests failed with error: {str(e)}")
        sys.exit(1)

if __name__ == "__main__":
    main()

```

--------------------------------------------------------------------------------
/production-deployment.md:
--------------------------------------------------------------------------------

```markdown
# Production Deployment Guide

This guide outlines the best practices for deploying the MCP Server in production environments using containers (Docker or Podman).

## Production-Ready Configuration

### Recommended Setup

For a robust production deployment, we recommend:

1. Using a proper database backend (PostgreSQL/MySQL) instead of SQLite
2. Setting up a reverse proxy (Nginx/Traefik) with TLS
3. Implementing proper authentication
4. Setting up monitoring and logging
5. Configuring automatic restarts and health checks

## Container Orchestration

### Docker Compose for Production

The included `docker-compose.yml` file provides a good starting point for production:

```bash
# Start the production stack
docker-compose up -d

# Scale if needed (for the web service)
docker-compose up -d --scale mcp-server=2
```

### Using Podman in Production

For Red Hat environments, Podman provides a more secure alternative:

```bash
# Using podman-compose
podman-compose up -d

# Or manual pod creation
podman pod create --name mcp-pod -p 5000:5000
podman run -d --pod mcp-pod --name mcp-db postgres:13-alpine
podman run -d --pod mcp-pod --name mcp-server mcp-server
```

### Kubernetes/OpenShift Deployment

For larger scale deployments, use Kubernetes or OpenShift:

1. Create Kubernetes manifests in `k8s/` directory
2. Apply the configuration:

```bash
kubectl apply -f k8s/

# Or for OpenShift
oc apply -f k8s/
```

## Database Configuration

### Using PostgreSQL

Update your `.env` file to use PostgreSQL:

```
MEMORY_DB_URI=postgresql://mcp:mcppassword@db:5432/mcp
```

### Database Migrations

If you're upgrading or need to migrate data:

```bash
# Inside the container
flask db upgrade
```

## Web Server Configuration

### Using Gunicorn

For production, replace the development server with Gunicorn:

1. Update the Dockerfile:

```dockerfile
CMD ["gunicorn", "--workers=4", "--bind=0.0.0.0:5000", "app:app"]
```

2. Or override the command in docker-compose.yml:

```yaml
command: gunicorn --workers=4 --bind=0.0.0.0:5000 app:app
```

### Reverse Proxy with Nginx

Set up Nginx as a reverse proxy in front of the application:

```nginx
server {
    listen 80;
    server_name mcp.example.com;
    
    location / {
        proxy_pass http://mcp-server:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

## Security Hardening

### Environment Variables

Never commit sensitive environment variables. Use secrets management:

```bash
# For Docker Swarm
docker secret create mcp_env .env
docker service create --secret mcp_env mcp-server

# For Kubernetes
kubectl create secret generic mcp-secrets --from-env-file=.env
```

### Container Security

1. Run as non-root user:

```dockerfile
USER 1001
```

2. Use read-only file systems where possible:

```yaml
volumes:
  - ./data:/app/data:ro
```

3. Use security scanning:

```bash
# Scan the image
docker scan mcp-server

# Or, for Podman-built images, use an external scanner such as Trivy
trivy image mcp-server
```

## Monitoring and Logging

### Prometheus Metrics

Expose metrics for Prometheus monitoring:

```python
# Add to app.py
from prometheus_client import Counter, Histogram, start_http_server
```
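
For example, a request counter plus a separate metrics port could be wired up like this (a sketch; the metric name and port are illustrative, not part of the current code):

```python
from prometheus_client import Counter, start_http_server

# Count gateway requests, labelled by tool and action
REQUESTS = Counter("mcp_gateway_requests_total",
                   "Total MCP gateway requests", ["tool", "action"])

# Serve /metrics for Prometheus on a separate port
start_http_server(8000)

# ...then, inside the gateway handler:
# REQUESTS.labels(tool=tool_name, action=action_name).inc()
```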

### Centralized Logging

Configure logging to a central service:

```yaml
# docker-compose.yml
logging:
  driver: "json-file"
  options:
    max-size: "10m"
    max-file: "3"
```

Or integrate with ELK/Graylog:

```yaml
logging:
  driver: "gelf"
  options:
    gelf-address: "udp://localhost:12201"
```

## High Availability Setup

### Load Balancing

Use a load balancer in front of multiple instances:

```yaml
# docker-compose.yml
services:
  mcp-server:
    deploy:
      replicas: 3
  
  lb:
    image: traefik:v2.4
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```

### Health Checks and Auto-healing

Configure health checks for automatic recovery:

```yaml
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost:5000/health"]
  interval: 30s
  timeout: 10s
  retries: 3
  start_period: 40s
```

## Backup Strategy

### Database Backups

Regularly backup the database:

```bash
# For PostgreSQL
docker exec mcp-db pg_dump -U mcp mcp > backup.sql

# Restore if needed
cat backup.sql | docker exec -i mcp-db psql -U mcp mcp
```

### Volume Backups

Backup mounted volumes:

```bash
docker run --rm -v mcp_data:/source:ro -v $(pwd):/backup alpine tar -czvf /backup/mcp-data.tar.gz -C /source .
```

## Rolling Updates

### Zero-Downtime Deployment

Perform rolling updates without downtime:

```bash
# With Docker Compose
docker-compose up -d --no-deps --build mcp-server

# With Kubernetes
kubectl set image deployment/mcp-server mcp-server=mcp-server:new
```

## Testing Your Production Deployment

### Smoke Tests

Run basic smoke tests against the production instance:

```bash
# Test API endpoints
curl -f http://mcp.example.com/health
curl -f http://mcp.example.com/mcp/manifest
```

### Load Testing

Test performance under load:

```bash
# Using Apache Bench
ab -n 1000 -c 10 http://mcp.example.com/health
```

## CI/CD Pipeline Integration

### Docker Hub / GitHub Actions

Example GitHub Actions workflow for automatic builds:

```yaml
name: Build and Push Docker Image

on:
  push:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build and Push
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: yourusername/mcp-server:latest
```

## Disaster Recovery

### Failover Strategy

Document your failover process:

1. Identify backup servers or cloud regions
2. Maintain recent backups
3. Test recovery procedures regularly
4. Document recovery time objectives (RTO)

### Recovery Procedure

Steps to recover from a disaster:

1. Deploy new infrastructure
2. Restore database from backup
3. Validate application functionality
4. Update DNS records if needed

## Conclusion

By following these production deployment best practices, you will have a robust, secure, and maintainable MCP Server deployment that can handle production workloads reliably.

Remember to regularly review logs, monitor performance, and update dependencies to maintain a healthy production environment.

```

--------------------------------------------------------------------------------
/gmaps-module.py:
--------------------------------------------------------------------------------

```python
# tools/gmaps_tool.py
from flask import Blueprint, request, jsonify, current_app
import requests

gmaps_routes = Blueprint('gmaps', __name__)

def handle_action(action, parameters):
    """Handle Google Maps tool actions according to MCP standard"""
    action_handlers = {
        "geocode": geocode,
        "reverseGeocode": reverse_geocode,
        "getDirections": get_directions,
        "searchPlaces": search_places,
        "getPlaceDetails": get_place_details
    }
    
    if action not in action_handlers:
        raise ValueError(f"Unknown action: {action}")
    
    return action_handlers[action](parameters)

def geocode(parameters):
    """Convert an address to geographic coordinates"""
    address = parameters.get('address')
    
    if not address:
        raise ValueError("Address parameter is required")
    
    params = {
        'address': address,
        'key': current_app.config['GMAPS_API_KEY']
    }
    
    response = requests.get('https://maps.googleapis.com/maps/api/geocode/json', params=params)
    
    if response.status_code != 200:
        raise Exception(f"Google Maps API error: {response.json()}")
    
    return response.json()

def reverse_geocode(parameters):
    """Convert geographic coordinates to an address"""
    lat = parameters.get('lat')
    lng = parameters.get('lng')
    
    # Zero is a valid coordinate, so check for missing values explicitly
    if lat is None or lng is None:
        raise ValueError("Latitude and longitude parameters are required")
    
    params = {
        'latlng': f'{lat},{lng}',
        'key': current_app.config['GMAPS_API_KEY']
    }
    
    response = requests.get('https://maps.googleapis.com/maps/api/geocode/json', params=params)
    
    if response.status_code != 200:
        raise Exception(f"Google Maps API error: {response.json()}")
    
    return response.json()

def get_directions(parameters):
    """Get directions between two locations"""
    origin = parameters.get('origin')
    destination = parameters.get('destination')
    mode = parameters.get('mode', 'driving')
    
    if not origin or not destination:
        raise ValueError("Origin and destination parameters are required")
    
    params = {
        'origin': origin,
        'destination': destination,
        'mode': mode,
        'key': current_app.config['GMAPS_API_KEY']
    }
    
    response = requests.get('https://maps.googleapis.com/maps/api/directions/json', params=params)
    
    if response.status_code != 200:
        raise Exception(f"Google Maps API error: {response.json()}")
    
    return response.json()

def search_places(parameters):
    """Search for places using the Google Places API"""
    query = parameters.get('query')
    location = parameters.get('location')
    radius = parameters.get('radius', 1000)
    place_type = parameters.get('type')
    
    if not query and not (location and place_type):
        raise ValueError("Either query or location with type parameters are required")
    
    params = {
        'key': current_app.config['GMAPS_API_KEY']
    }
    
    if query:
        params['query'] = query
        url = 'https://maps.googleapis.com/maps/api/place/textsearch/json'
    else:
        params['location'] = location
        params['radius'] = radius
        params['type'] = place_type
        url = 'https://maps.googleapis.com/maps/api/place/nearbysearch/json'
    
    response = requests.get(url, params=params)
    
    if response.status_code != 200:
        raise Exception(f"Google Maps API error: {response.json()}")
    
    return response.json()

def get_place_details(parameters):
    """Get details for a specific place"""
    place_id = parameters.get('placeId')
    
    if not place_id:
        raise ValueError("Place ID parameter is required")
    
    params = {
        'place_id': place_id,
        'fields': 'name,rating,formatted_address,geometry,photo,opening_hours,price_level,website,formatted_phone_number',
        'key': current_app.config['GMAPS_API_KEY']
    }
    
    response = requests.get('https://maps.googleapis.com/maps/api/place/details/json', params=params)
    
    if response.status_code != 200:
        raise Exception(f"Google Maps API error: {response.json()}")
    
    return response.json()

# API routes for direct access (not through MCP gateway)
@gmaps_routes.route('/geocode', methods=['GET'])
def api_geocode():
    """API endpoint for geocoding an address"""
    try:
        address = request.args.get('address')
        result = geocode({'address': address})
        return jsonify(result)
    except Exception as e:
        return jsonify({'error': str(e)}), 400

@gmaps_routes.route('/reverseGeocode', methods=['GET'])
def api_reverse_geocode():
    """API endpoint for reverse geocoding coordinates"""
    try:
        lat = request.args.get('lat')
        lng = request.args.get('lng')
        result = reverse_geocode({'lat': lat, 'lng': lng})
        return jsonify(result)
    except Exception as e:
        return jsonify({'error': str(e)}), 400

@gmaps_routes.route('/getDirections', methods=['GET'])
def api_get_directions():
    """API endpoint for getting directions"""
    try:
        origin = request.args.get('origin')
        destination = request.args.get('destination')
        mode = request.args.get('mode', 'driving')
        result = get_directions({'origin': origin, 'destination': destination, 'mode': mode})
        return jsonify(result)
    except Exception as e:
        return jsonify({'error': str(e)}), 400

@gmaps_routes.route('/searchPlaces', methods=['GET'])
def api_search_places():
    """API endpoint for searching places"""
    try:
        parameters = {
            'query': request.args.get('query'),
            'location': request.args.get('location'),
            'radius': request.args.get('radius', 1000),
            'type': request.args.get('type')
        }
        result = search_places(parameters)
        return jsonify(result)
    except Exception as e:
        return jsonify({'error': str(e)}), 400

@gmaps_routes.route('/getPlaceDetails', methods=['GET'])
def api_get_place_details():
    """API endpoint for getting place details"""
    try:
        place_id = request.args.get('placeId')
        result = get_place_details({'placeId': place_id})
        return jsonify(result)
    except Exception as e:
        return jsonify({'error': str(e)}), 400

```

--------------------------------------------------------------------------------
/gitlab-module.py:
--------------------------------------------------------------------------------

```python
# tools/gitlab_tool.py
from flask import Blueprint, request, jsonify, current_app
import requests

gitlab_routes = Blueprint('gitlab', __name__)

def handle_action(action, parameters):
    """Handle GitLab tool actions according to MCP standard"""
    action_handlers = {
        "listProjects": list_projects,
        "getProject": get_project,
        "searchProjects": search_projects,
        "getIssues": get_issues,
        "createIssue": create_issue,
        "getPipelines": get_pipelines
    }
    
    if action not in action_handlers:
        raise ValueError(f"Unknown action: {action}")
    
    return action_handlers[action](parameters)

def list_projects(parameters):
    """List all projects accessible by the authenticated user"""
    headers = {'Private-Token': current_app.config['GITLAB_TOKEN']}
    response = requests.get(f'{current_app.config["GITLAB_API_URL"]}/projects', headers=headers)
    
    if response.status_code != 200:
        raise Exception(f"GitLab API error: {response.json()}")
    
    return response.json()

def get_project(parameters):
    """Get details for a specific project"""
    project_id = parameters.get('projectId')
    
    if not project_id:
        raise ValueError("Project ID parameter is required")
    
    headers = {'Private-Token': current_app.config['GITLAB_TOKEN']}
    response = requests.get(f'{current_app.config["GITLAB_API_URL"]}/projects/{project_id}', headers=headers)
    
    if response.status_code != 200:
        raise Exception(f"GitLab API error: {response.json()}")
    
    return response.json()

def search_projects(parameters):
    """Search for projects on GitLab"""
    query = parameters.get('query')
    
    if not query:
        raise ValueError("Query parameter is required")
    
    headers = {'Private-Token': current_app.config['GITLAB_TOKEN']}
    response = requests.get(
        f'{current_app.config["GITLAB_API_URL"]}/search',
        params={'scope': 'projects', 'search': query},
        headers=headers
    )
    
    if response.status_code != 200:
        raise Exception(f"GitLab API error: {response.json()}")
    
    return response.json()

def get_issues(parameters):
    """Get issues for a project"""
    project_id = parameters.get('projectId')
    state = parameters.get('state', 'opened')
    
    if not project_id:
        raise ValueError("Project ID parameter is required")
    
    headers = {'Private-Token': current_app.config['GITLAB_TOKEN']}
    response = requests.get(
        f'{current_app.config["GITLAB_API_URL"]}/projects/{project_id}/issues',
        params={'state': state},
        headers=headers
    )
    
    if response.status_code != 200:
        raise Exception(f"GitLab API error: {response.json()}")
    
    return response.json()

def create_issue(parameters):
    """Create a new issue in a project"""
    project_id = parameters.get('projectId')
    title = parameters.get('title')
    description = parameters.get('description', '')
    
    if not project_id:
        raise ValueError("Project ID parameter is required")
    if not title:
        raise ValueError("Title parameter is required")
    
    headers = {'Private-Token': current_app.config['GITLAB_TOKEN']}
    response = requests.post(
        f'{current_app.config["GITLAB_API_URL"]}/projects/{project_id}/issues',
        json={'title': title, 'description': description},
        headers=headers
    )
    
    if response.status_code not in (201, 200):
        raise Exception(f"GitLab API error: {response.json()}")
    
    return response.json()

def get_pipelines(parameters):
    """Get pipelines for a project"""
    project_id = parameters.get('projectId')
    
    if not project_id:
        raise ValueError("Project ID parameter is required")
    
    headers = {'Private-Token': current_app.config['GITLAB_TOKEN']}
    response = requests.get(
        f'{current_app.config["GITLAB_API_URL"]}/projects/{project_id}/pipelines',
        headers=headers
    )
    
    if response.status_code != 200:
        raise Exception(f"GitLab API error: {response.json()}")
    
    return response.json()

# API routes for direct access (not through MCP gateway)
@gitlab_routes.route('/listProjects', methods=['GET'])
def api_list_projects():
    """API endpoint for listing projects"""
    try:
        result = list_projects({})
        return jsonify(result)
    except Exception as e:
        return jsonify({'error': str(e)}), 400

@gitlab_routes.route('/getProject/<project_id>', methods=['GET'])
def api_get_project(project_id):
    """API endpoint for getting a specific project"""
    try:
        result = get_project({'projectId': project_id})
        return jsonify(result)
    except Exception as e:
        return jsonify({'error': str(e)}), 400

@gitlab_routes.route('/searchProjects', methods=['GET'])
def api_search_projects():
    """API endpoint for searching projects"""
    try:
        query = request.args.get('query')
        result = search_projects({'query': query})
        return jsonify(result)
    except Exception as e:
        return jsonify({'error': str(e)}), 400

@gitlab_routes.route('/getIssues/<project_id>', methods=['GET'])
def api_get_issues(project_id):
    """API endpoint for getting issues for a project"""
    try:
        state = request.args.get('state', 'opened')
        result = get_issues({'projectId': project_id, 'state': state})
        return jsonify(result)
    except Exception as e:
        return jsonify({'error': str(e)}), 400

@gitlab_routes.route('/createIssue/<project_id>', methods=['POST'])
def api_create_issue(project_id):
    """API endpoint for creating a new issue"""
    try:
        data = request.get_json()
        parameters = {
            'projectId': project_id,
            'title': data.get('title'),
            'description': data.get('description', '')
        }
        result = create_issue(parameters)
        return jsonify(result), 201
    except Exception as e:
        return jsonify({'error': str(e)}), 400

@gitlab_routes.route('/getPipelines/<project_id>', methods=['GET'])
def api_get_pipelines(project_id):
    """API endpoint for getting pipelines for a project"""
    try:
        result = get_pipelines({'projectId': project_id})
        return jsonify(result)
    except Exception as e:
        return jsonify({'error': str(e)}), 400

```

--------------------------------------------------------------------------------
/memory-module.py:
--------------------------------------------------------------------------------

```python
# tools/memory_tool.py
from flask import Blueprint, request, jsonify, current_app
from sqlalchemy import create_engine, Column, Integer, String, Text, DateTime, JSON
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
import datetime
import json
import uuid

memory_routes = Blueprint('memory', __name__)

# Initialize SQLAlchemy
Base = declarative_base()

class MemoryItem(Base):
    """Model for storing memory items"""
    __tablename__ = 'memory_items'
    
    id = Column(Integer, primary_key=True)
    key = Column(String(100), unique=True, nullable=False)
    value = Column(Text, nullable=True)
    # 'metadata' is reserved by SQLAlchemy's declarative base, so use a different
    # attribute name and map it to the 'metadata' column explicitly
    item_metadata = Column('metadata', JSON, nullable=True)
    created_at = Column(DateTime, default=datetime.datetime.utcnow)
    updated_at = Column(DateTime, default=datetime.datetime.utcnow, onupdate=datetime.datetime.utcnow)
    
    def to_dict(self):
        return {
            'id': self.id,
            'key': self.key,
            'value': self.value,
            'metadata': self.item_metadata,
            'created_at': self.created_at.isoformat(),
            'updated_at': self.updated_at.isoformat()
        }

# Initialize database
engine = None
Session = None

def initialize_db(app):
    """Initialize the database with the Flask app context"""
    global engine, Session
    engine = create_engine(app.config['MEMORY_DB_URI'])
    Base.metadata.create_all(engine)
    Session = sessionmaker(bind=engine)

def handle_action(action, parameters):
    """Handle Memory tool actions according to MCP standard"""
    action_handlers = {
        "get": get_memory,
        "set": set_memory,
        "delete": delete_memory,
        "list": list_memory,
        "search": search_memory
    }
    
    if action not in action_handlers:
        raise ValueError(f"Unknown action: {action}")
    
    return action_handlers[action](parameters)

def get_memory(parameters):
    """Get a memory item by key"""
    key = parameters.get('key')
    
    if not key:
        raise ValueError("Key parameter is required")
    
    # Initialize DB if needed
    if engine is None:
        initialize_db(current_app)
    
    session = Session()
    item = session.query(MemoryItem).filter_by(key=key).first()
    session.close()
    
    if not item:
        raise ValueError(f"Memory item with key '{key}' not found")
    
    return item.to_dict()

def set_memory(parameters):
    """Create or update a memory item"""
    key = parameters.get('key')
    value = parameters.get('value')
    metadata = parameters.get('metadata', {})
    
    if not key:
        key = str(uuid.uuid4())
    
    # Initialize DB if needed
    if engine is None:
        initialize_db(current_app)
    
    session = Session()
    item = session.query(MemoryItem).filter_by(key=key).first()
    
    if item:
        item.value = value
        item.item_metadata = metadata
        item.updated_at = datetime.datetime.utcnow()
    else:
        item = MemoryItem(key=key, value=value, item_metadata=metadata)
        session.add(item)
    
    session.commit()
    result = item.to_dict()
    session.close()
    
    return result

def delete_memory(parameters):
    """Delete a memory item by key"""
    key = parameters.get('key')
    
    if not key:
        raise ValueError("Key parameter is required")
    
    # Initialize DB if needed
    if engine is None:
        initialize_db(current_app)
    
    session = Session()
    item = session.query(MemoryItem).filter_by(key=key).first()
    
    if not item:
        session.close()
        raise ValueError(f"Memory item with key '{key}' not found")
    
    session.delete(item)
    session.commit()
    session.close()
    
    return {'success': True, 'message': f'Memory item with key {key} deleted successfully'}

def list_memory(parameters):
    """List all memory items, with optional filtering"""
    filter_key = parameters.get('filterKey')
    limit = int(parameters.get('limit', 100))
    offset = int(parameters.get('offset', 0))
    
    # Initialize DB if needed
    if engine is None:
        initialize_db(current_app)
    
    session = Session()
    query = session.query(MemoryItem)
    
    if filter_key:
        query = query.filter(MemoryItem.key.like(f'%{filter_key}%'))
    
    total = query.count()
    items = query.limit(limit).offset(offset).all()
    result = [item.to_dict() for item in items]
    session.close()
    
    return {
        'items': result,
        'total': total,
        'limit': limit,
        'offset': offset
    }

def search_memory(parameters):
    """Search memory items by value"""
    query_string = parameters.get('q')
    
    if not query_string:
        raise ValueError("Query parameter is required")
    
    # Initialize DB if needed
    if engine is None:
        initialize_db(current_app)
    
    session = Session()
    items = session.query(MemoryItem).filter(MemoryItem.value.like(f'%{query_string}%')).all()
    result = [item.to_dict() for item in items]
    session.close()
    
    return {
        'items': result,
        'count': len(result),
        'query': query_string
    }

# API routes for direct access (not through MCP gateway)
@memory_routes.route('/get', methods=['GET'])
def api_get_memory():
    """API endpoint for getting a memory item"""
    try:
        key = request.args.get('key')
        result = get_memory({'key': key})
        return jsonify(result)
    except Exception as e:
        return jsonify({'error': str(e)}), 400

@memory_routes.route('/set', methods=['POST'])
def api_set_memory():
    """API endpoint for setting a memory item"""
    try:
        data = request.get_json()
        parameters = {
            'key': data.get('key'),
            'value': data.get('value'),
            'metadata': data.get('metadata', {})
        }
        result = set_memory(parameters)
        return jsonify(result)
    except Exception as e:
        return jsonify({'error': str(e)}), 400

@memory_routes.route('/delete', methods=['DELETE'])
def api_delete_memory():
    """API endpoint for deleting a memory item"""
    try:
        key = request.args.get('key')
        result = delete_memory({'key': key})
        return jsonify(result)
    except Exception as e:
        return jsonify({'error': str(e)}), 400

@memory_routes.route('/list', methods=['GET'])
def api_list_memory():
    """API endpoint for listing memory items"""
    try:
        parameters = {
            'filterKey': request.args.get('filterKey'),
            'limit': request.args.get('limit', 100),
            'offset': request.args.get('offset', 0)
        }
        result = list_memory(parameters)
        return jsonify(result)
    except Exception as e:
        return jsonify({'error': str(e)}), 400

@memory_routes.route('/search', methods=['GET'])
def api_search_memory():
    """API endpoint for searching memory items"""
    try:
        query = request.args.get('q')
        result = search_memory({'q': query})
        return jsonify(result)
    except Exception as e:
        return jsonify({'error': str(e)}), 400

# Initialize the database when the blueprint is registered with the app
# (Blueprint.before_app_first_request was deprecated in Flask 2.2 and removed in 2.3)
@memory_routes.record_once
def on_blueprint_registered(state):
    initialize_db(state.app)

```

--------------------------------------------------------------------------------
/container-guide.md:
--------------------------------------------------------------------------------

```markdown
# Container Deployment Guide

This guide covers deploying the MCP Server using containers on both Docker and Podman, with specific instructions for Fedora and other Red Hat based systems.

## Docker vs Podman on Red Hat Systems

### Docker

Docker is a widely used container runtime that works across multiple platforms. If you already have Docker installed on your Fedora system, you can use it to deploy the MCP server.

### Podman

Podman is Red Hat's alternative to Docker with several key advantages:
- Does not require a daemon process
- Can run containers without root privileges (rootless containers)
- Better integration with systemd
- Compatible with Docker commands and Dockerfiles
- Native SELinux integration

Podman is the default container engine in Fedora, RHEL, and CentOS.

## Deployment with Docker

### Prerequisites

If you already have Docker and Docker Compose installed on your Fedora system:

```bash
# Verify Docker installation
docker --version
docker-compose --version
```

### Building and Running with Docker

1. Clone the repository:
   ```bash
   git clone https://github.com/yourusername/mcp-server.git
   cd mcp-server
   ```

2. Create your `.env` file:
   ```bash
   cp .env.example .env
   # Edit .env with your configuration
   ```

3. Build the Docker image:
   ```bash
   docker build -t mcp-server .
   ```

4. Run the container:
   ```bash
   docker run -d --name mcp-server -p 5000:5000 --env-file .env mcp-server
   ```

5. Check container logs:
   ```bash
   docker logs mcp-server
   ```

### Using Docker Compose

1. Create a `docker-compose.yml` file:
   ```yaml
   version: '3'
   services:
     mcp-server:
       build: .
       container_name: mcp-server
       ports:
         - "5000:5000"
       volumes:
         - ./data:/app/data
       env_file:
         - .env
       restart: unless-stopped
   ```

2. Start the service:
   ```bash
   docker-compose up -d
   ```

3. Check the service:
   ```bash
   docker-compose ps
   docker-compose logs
   ```

## Deployment with Podman

### Prerequisites

Podman comes pre-installed on Fedora. If it is missing or you need a newer version:

```bash
# Install Podman
sudo dnf install -y podman

# Verify installation
podman --version
```

### Building and Running with Podman

1. Clone the repository:
   ```bash
   git clone https://github.com/yourusername/mcp-server.git
   cd mcp-server
   ```

2. Create your `.env` file:
   ```bash
   cp .env.example .env
   # Edit .env with your configuration
   ```

3. Build the container image:
   ```bash
   podman build -t mcp-server .
   ```

4. Run the container:
   ```bash
   podman run -d --name mcp-server -p 5000:5000 --env-file .env mcp-server
   ```

5. For SELinux-enabled systems, use the `:Z` volume mount flag for proper labeling:
   ```bash
   mkdir -p ./data
   podman run -d --name mcp-server -p 5000:5000 --env-file .env -v "$(pwd)/data:/app/data:Z" mcp-server
   ```

6. Check container logs:
   ```bash
   podman logs mcp-server
   ```

### Using Podman Compose

1. Install Podman Compose:
   ```bash
   pip install podman-compose
   # or
   sudo dnf install podman-compose   # On Fedora 33+
   ```

2. Create a `docker-compose.yml` file (same as for Docker):
   ```yaml
   version: '3'
   services:
     mcp-server:
       build: .
       container_name: mcp-server
       ports:
         - "5000:5000"
       volumes:
         - ./data:/app/data:Z  # Note the :Z for SELinux
       env_file:
         - .env
       restart: unless-stopped
   ```

3. Start the service:
   ```bash
   podman-compose up -d
   ```

4. Check the service:
   ```bash
   podman-compose ps
   podman-compose logs
   ```

### Setting Up as a Systemd Service with Podman

Podman integrates well with systemd, allowing you to manage containers as systemd services:

1. Generate a systemd service file:
   ```bash
   mkdir -p ~/.config/systemd/user
   podman generate systemd --name mcp-server --files --new
   ```

2. Move the generated file to your user's systemd directory:
   ```bash
   mv container-mcp-server.service ~/.config/systemd/user/
   ```

3. Enable and start the service:
   ```bash
   systemctl --user daemon-reload
   systemctl --user enable container-mcp-server.service
   systemctl --user start container-mcp-server.service
   ```

4. Check the service status:
   ```bash
   systemctl --user status container-mcp-server.service
   ```

5. To allow the service to run even when you're not logged in:
   ```bash
   sudo loginctl enable-linger $USER
   ```

## Migrating from Docker to Podman on Fedora

If you're migrating from Docker to Podman on Fedora:

1. Make Podman use Docker Compose files:
   ```bash
   # Create an alias
   echo 'alias docker-compose="podman-compose"' >> ~/.bashrc
   source ~/.bashrc
   ```

2. Set up Docker compatibility:
   ```bash
   # Make docker command invoke podman
   echo 'alias docker="podman"' >> ~/.bashrc
   source ~/.bashrc
   ```

3. Migrate existing containers (if needed):
   ```bash
   # Stop Docker services
   sudo systemctl stop docker
   
   # Pull the same images with Podman
   podman pull [your-images]
   
   # Create new containers using the same configurations
   podman run [your-container-configs]
   ```

## Container Configuration Tips

### Environment Variables

Both Docker and Podman support passing environment variables from a file:

```bash
# With Docker
docker run --env-file .env mcp-server

# With Podman
podman run --env-file .env mcp-server
```

### Persistent Storage

To persist data between container restarts:

```bash
# With Docker (bind mounts require an absolute path)
docker run -v "$(pwd)/data:/app/data" mcp-server

# With Podman on SELinux systems
podman run -v "$(pwd)/data:/app/data:Z" mcp-server
```

### Resource Limits

Set CPU and memory limits for the container:

```bash
# With Docker
docker run --memory="1g" --cpus="1.0" mcp-server

# With Podman
podman run --memory="1g" --cpus="1.0" mcp-server
```

### Network Configuration

By default, the container exposes port 5000. If you need to use a different port:

```bash
# Map container port 5000 to host port 8080
docker run -p 8080:5000 mcp-server
# or
podman run -p 8080:5000 mcp-server
```

## Troubleshooting Container Deployments

### SELinux Issues

If you encounter permission denied errors on Fedora:

```bash
# Check SELinux denials
sudo ausearch -m avc -ts recent

# If needed, set the correct context for mounted volumes
sudo chcon -Rt container_file_t ./data
```

### Networking Issues

If you can't access the container from the host:

```bash
# Check if the container is running
podman ps

# Verify the port mapping
podman port mcp-server

# Check firewall rules
sudo firewall-cmd --list-all
```

### Resource Limitations

If the container is terminated unexpectedly:

```bash
# Check container logs
podman logs mcp-server

# Check system resources
podman stats

# Increase container memory limit
podman run --memory="2g" mcp-server
```

### Container Healthchecks

Add a healthcheck to your container:

```yaml
# In docker-compose.yml
services:
  mcp-server:
    build: .
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
```

## Conclusion

Both Docker and Podman provide robust container environments for deploying the MCP server. On Fedora and other Red Hat based systems, Podman offers better integration with system services and security features.

The main differences in deployment are:
- SELinux context handling with `:Z` volume mounts in Podman
- Rootless execution model in Podman
- Systemd integration in Podman
- Docker requires a daemon while Podman is daemonless

Choose the approach that best fits your existing infrastructure and security requirements.

```

--------------------------------------------------------------------------------
/redhat-deployment.md:
--------------------------------------------------------------------------------

```markdown
# MCP Server Deployment Guide for Red Hat Environments

This guide covers deploying the Model Context Protocol (MCP) server on Red Hat based environments including RHEL, CentOS, and Fedora.

## Prerequisites

- Red Hat Enterprise Linux 8/9, CentOS Stream 8/9, or Fedora 35+
- `podman` or `docker` installed
- Python 3.8+ and pip
- Node.js 14+ and npm
- Git

## Installation Methods

There are several ways to deploy the MCP server in a Red Hat environment:

1. Direct deployment on the host
2. Containerized deployment using Podman (Red Hat's container engine)
3. Deployment on OpenShift (Red Hat's Kubernetes platform)

## Method 1: Direct Deployment

### Install Dependencies

```bash
# Install Python and Node.js
sudo dnf install -y python39 python39-devel python39-pip nodejs npm

# Install development tools (for building Python extensions)
sudo dnf group install -y "Development Tools"

# Install Chromium (for Puppeteer)
sudo dnf install -y chromium
```

### Set Up the MCP Server

```bash
# Clone the repository
git clone https://github.com/yourusername/mcp-server.git
cd mcp-server

# Create a Python virtual environment
python3.9 -m venv venv
source venv/bin/activate

# Install Python dependencies
pip install -r requirements.txt

# Install Node.js dependencies
npm install

# Create .env file with your configuration
cat > .env << EOF
SECRET_KEY=your-secret-key
DEBUG=False

# GitHub configuration
GITHUB_TOKEN=your-github-token

# GitLab configuration
GITLAB_TOKEN=your-gitlab-token

# Google Maps configuration
GMAPS_API_KEY=your-google-maps-api-key

# Memory configuration
MEMORY_DB_URI=sqlite:///memory.db

# Puppeteer configuration
PUPPETEER_HEADLESS=true
CHROME_PATH=/usr/bin/chromium-browser
EOF

# Run the server
python app.py
```

### Setting Up as a Systemd Service

To run the MCP server as a background service:

```bash
# Create a systemd service file
sudo tee /etc/systemd/system/mcp-server.service > /dev/null << EOF
[Unit]
Description=MCP Server
After=network.target

[Service]
User=mcp
WorkingDirectory=/opt/mcp-server
ExecStart=/opt/mcp-server/venv/bin/python /opt/mcp-server/app.py
Restart=on-failure
Environment=PATH=/opt/mcp-server/venv/bin:/usr/local/bin:/usr/bin:/bin
EnvironmentFile=/opt/mcp-server/.env

[Install]
WantedBy=multi-user.target
EOF

# Create a dedicated user for the service
sudo useradd -r -s /bin/false mcp

# Set up the application directory
sudo mkdir -p /opt/mcp-server
sudo cp -r * /opt/mcp-server/
sudo cp .env /opt/mcp-server/
sudo chown -R mcp:mcp /opt/mcp-server

# Enable and start the service
sudo systemctl enable mcp-server
sudo systemctl start mcp-server
sudo systemctl status mcp-server
```

## Method 2: Containerized Deployment with Podman

Podman is Red Hat's container engine, compatible with Docker commands but with added security features.

### Building and Running the Container

```bash
# Build the container image
podman build -t mcp-server .

# Create a directory for persistent data
mkdir -p ~/mcp-data

# Run the container
podman run --name mcp-server \
  -p 5000:5000 \
  -v ~/mcp-data:/app/data \
  --env-file .env \
  -d mcp-server
```

### Setting Up as a Systemd Service with Podman

```bash
# Generate a systemd service file for the container
mkdir -p ~/.config/systemd/user
podman generate systemd --name mcp-server --files --new

# Move the generated file
mv container-mcp-server.service ~/.config/systemd/user/

# Enable linger for your user (allows services to run without being logged in)
sudo loginctl enable-linger $USER

# Enable and start the service
systemctl --user enable container-mcp-server.service
systemctl --user start container-mcp-server.service
systemctl --user status container-mcp-server.service
```

## Method 3: Deployment on OpenShift

OpenShift is Red Hat's enterprise Kubernetes platform.

### Prerequisites

- Access to an OpenShift cluster
- OpenShift CLI (`oc`) installed and configured
- Container image pushed to a registry accessible from OpenShift

### Deploying to OpenShift

1. Create a new project:

```bash
oc new-project mcp-server
```

2. Create a ConfigMap for configuration:

```bash
oc create configmap mcp-config \
  --from-literal=DEBUG=False \
  --from-literal=CHROME_PATH=/usr/bin/chromium-browser \
  --from-literal=PUPPETEER_HEADLESS=true
```

3. Create secrets for sensitive data:

```bash
oc create secret generic mcp-secrets \
  --from-literal=SECRET_KEY=your-secret-key \
  --from-literal=GITHUB_TOKEN=your-github-token \
  --from-literal=GITLAB_TOKEN=your-gitlab-token \
  --from-literal=GMAPS_API_KEY=your-google-maps-api-key
```

4. Create a YAML file for the deployment:

```yaml
# mcp-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mcp-server
  labels:
    app: mcp-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mcp-server
  template:
    metadata:
      labels:
        app: mcp-server
    spec:
      containers:
      - name: mcp-server
        image: your-registry/mcp-server:latest
        ports:
        - containerPort: 5000
        envFrom:
        - configMapRef:
            name: mcp-config
        - secretRef:
            name: mcp-secrets
        volumeMounts:
        - name: mcp-data
          mountPath: /app/data
      volumes:
      - name: mcp-data
        persistentVolumeClaim:
          claimName: mcp-data-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: mcp-server
spec:
  selector:
    app: mcp-server
  ports:
  - port: 80
    targetPort: 5000
  type: ClusterIP
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mcp-data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: mcp-server
spec:
  to:
    kind: Service
    name: mcp-server
  port:
    targetPort: 5000
```

5. Apply the deployment:

```bash
oc apply -f mcp-deployment.yaml
```

6. Verify the deployment:

```bash
oc get pods
oc get routes
```

The route URL will be the publicly accessible endpoint for your MCP server.

## Security Considerations

### SELinux Configuration

Red Hat systems use SELinux by default. If you're running the server directly:

```bash
# For the direct deployment method
sudo semanage fcontext -a -t httpd_sys_content_t "/opt/mcp-server(/.*)?"
sudo restorecon -Rv /opt/mcp-server

# If using socket connections
sudo setsebool -P httpd_can_network_connect 1
```

### Firewall Configuration

```bash
# Open the port in the firewall
sudo firewall-cmd --permanent --add-port=5000/tcp
sudo firewall-cmd --reload
```

## Monitoring and Logging

### Setting up Prometheus Monitoring

1. Install the Prometheus client for Python:

```bash
pip install prometheus-client
```

2. Add Prometheus metrics to the MCP server (add to app.py).
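
As a rough sketch of what step 2 could look like (assuming the `prometheus-client` package installed above and the existing Flask `app` object from app.py; the metric names, labels, and the `/metrics` route shown here are illustrative, not part of the project):

```python
# Illustrative sketch only -- metric names, labels, and the /metrics route are assumptions.
import time

from flask import Response, g, request
from prometheus_client import CONTENT_TYPE_LATEST, Counter, Histogram, generate_latest

REQUEST_COUNT = Counter(
    'mcp_requests_total', 'Total HTTP requests handled',
    ['method', 'endpoint', 'status'])
REQUEST_LATENCY = Histogram(
    'mcp_request_latency_seconds', 'Request latency in seconds', ['endpoint'])

@app.before_request
def start_timer():
    # Remember when the request started so latency can be observed later
    g.start_time = time.time()

@app.after_request
def record_metrics(response):
    # Record latency and a request counter labelled by method, path, and status code
    REQUEST_LATENCY.labels(request.path).observe(time.time() - g.get('start_time', time.time()))
    REQUEST_COUNT.labels(request.method, request.path, str(response.status_code)).inc()
    return response

@app.route('/metrics')
def metrics():
    # Expose the metrics in Prometheus text format for scraping
    return Response(generate_latest(), mimetype=CONTENT_TYPE_LATEST)
```

Prometheus can then scrape the `/metrics` endpoint alongside the existing MCP routes.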

### Configuring Logging with rsyslog

1. Create a rsyslog configuration:

```bash
sudo tee /etc/rsyslog.d/mcp-server.conf > /dev/null << EOF
if \$programname == 'mcp-server' then /var/log/mcp-server.log
& stop
EOF
```

2. Restart rsyslog:

```bash
sudo systemctl restart rsyslog
```

## Performance Tuning

### Running with gunicorn

For production environments, use gunicorn:

```bash
# Install gunicorn
pip install gunicorn

# Run with gunicorn
gunicorn --workers 4 --bind 0.0.0.0:5000 app:app
```

Update the systemd service file to use gunicorn instead of direct Python execution.

## Troubleshooting

### Common Issues and Solutions

1. **SELinux denials**:
   - Check audit logs: `sudo ausearch -m avc -ts recent`
   - Create a policy module: `sudo audit2allow -a -M mcp-server`
   - Apply the policy: `sudo semodule -i mcp-server.pp`

2. **Puppeteer/Chrome issues**:
   - Ensure Chromium is installed: `sudo dnf install chromium`
   - Check for missing dependencies: `ldd /usr/bin/chromium-browser`
   - Install additional libraries if needed: `sudo dnf install libXcomposite libXcursor libXi libXtst cups-libs libXScrnSaver alsa-lib pango at-spi2-atk gtk3`

3. **Node.js compatibility**:
   - If you need a newer Node.js version than what's in the repositories:
   ```bash
   # Install Node.js 16.x
   sudo dnf module reset nodejs
   sudo dnf module enable nodejs:16
   sudo dnf install nodejs
   ```

For any other issues, check the logs:
```bash
sudo journalctl -u mcp-server.service
```

## Conclusion

This deployment guide provides multiple methods for deploying the MCP server on Red Hat environments. Choose the method that best fits your infrastructure and operational requirements.

For further assistance, refer to the Red Hat documentation or open an issue in the MCP server repository.

```

--------------------------------------------------------------------------------
/python-client.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python
"""
Example Python client for the MCP Server.
Demonstrates how to interact with the MCP Gateway.
"""

import requests
import json
import os
import argparse
from dotenv import load_dotenv
from typing import Dict, Any, List, Optional

# Load environment variables
load_dotenv()

class MCPClient:
    """
    Client for the Model Context Protocol (MCP) server.
    """
    
    def __init__(self, base_url: str):
        """
        Initialize the MCP client.
        
        Args:
            base_url: The base URL of the MCP server
        """
        self.base_url = base_url
        self.gateway_url = f"{base_url}/mcp/gateway"
        self.manifest_url = f"{base_url}/mcp/manifest"
        self.manifest = None
    
    def get_manifest(self) -> Dict[str, Any]:
        """
        Get the MCP manifest describing available tools.
        
        Returns:
            The MCP manifest as a dictionary
        """
        response = requests.get(self.manifest_url)
        response.raise_for_status()
        self.manifest = response.json()
        return self.manifest
    
    def call_tool(self, tool: str, action: str, parameters: Dict[str, Any]) -> Dict[str, Any]:
        """
        Call a tool action via the MCP gateway.
        
        Args:
            tool: The name of the tool to call
            action: The action to perform
            parameters: The parameters for the action
        
        Returns:
            The response from the tool as a dictionary
        """
        payload = {
            "tool": tool,
            "action": action,
            "parameters": parameters
        }
        
        response = requests.post(self.gateway_url, json=payload)
        response.raise_for_status()
        return response.json()
    
    def list_tools(self) -> List[str]:
        """
        List the names of all available tools.
        
        Returns:
            A list of tool names
        """
        if not self.manifest:
            self.get_manifest()
        
        return list(self.manifest["tools"].keys())
    
    def list_actions(self, tool: str) -> Dict[str, Dict[str, Any]]:
        """
        List all available actions for a tool.
        
        Args:
            tool: The name of the tool
        
        Returns:
            A dictionary of action names to action descriptions
        """
        if not self.manifest:
            self.get_manifest()
        
        if tool not in self.manifest["tools"]:
            raise ValueError(f"Unknown tool: {tool}")
        
        return self.manifest["tools"][tool]["actions"]

def example_github_repos(client: MCPClient, username: str) -> None:
    """
    Example: List GitHub repositories for a user.
    
    Args:
        client: The MCP client
        username: The GitHub username to list repositories for
    """
    print(f"\n=== Listing GitHub repositories for {username} ===")
    
    result = client.call_tool("github", "listRepos", {"username": username})
    
    if result["status"] == "success":
        repos = result["result"]
        print(f"Found {len(repos)} repositories:")
        
        for i, repo in enumerate(repos[:5], 1):  # Show only first 5 repos
            print(f"{i}. {repo['name']} - {repo.get('description', 'No description')}")
        
        if len(repos) > 5:
            print(f"... and {len(repos) - 5} more")
    else:
        print(f"Error: {result.get('error')}")

def example_memory_operations(client: MCPClient) -> None:
    """
    Example: Perform memory operations (set, get, list).
    
    Args:
        client: The MCP client
    """
    print("\n=== Memory Tool Examples ===")
    
    # Set a memory item
    key = "example-key"
    value = {"name": "Example Data", "timestamp": "2023-01-01T00:00:00Z", "count": 42}
    
    print(f"Setting memory item with key '{key}'...")
    result = client.call_tool("memory", "set", {
        "key": key,
        "value": json.dumps(value),
        "metadata": {"type": "example", "temporary": True}
    })
    
    if result["status"] == "success":
        print("Memory item set successfully")
    else:
        print(f"Error setting memory item: {result.get('error')}")
        return
    
    # Get the memory item
    print(f"Getting memory item with key '{key}'...")
    result = client.call_tool("memory", "get", {"key": key})
    
    if result["status"] == "success":
        item = result["result"]
        print(f"Retrieved memory item: {item['value']}")
    else:
        print(f"Error getting memory item: {result.get('error')}")
    
    # List memory items
    print("Listing all memory items...")
    result = client.call_tool("memory", "list", {"limit": 10})
    
    if result["status"] == "success":
        items = result["result"]["items"]
        print(f"Found {result['result']['total']} memory items:")
        
        for item in items[:3]:  # Show only first 3 items
            print(f"- {item['key']}: {item['value'][:30]}...")
        
        if len(items) > 3:
            print(f"... and {len(items) - 3} more")
    else:
        print(f"Error listing memory items: {result.get('error')}")

def example_google_maps(client: MCPClient, address: str) -> None:
    """
    Example: Geocode an address using Google Maps.
    
    Args:
        client: The MCP client
        address: The address to geocode
    """
    print(f"\n=== Geocoding address: {address} ===")
    
    result = client.call_tool("gmaps", "geocode", {"address": address})
    
    if result["status"] == "success":
        geocode_result = result["result"]
        
        if geocode_result["status"] == "OK" and geocode_result["results"]:
            location = geocode_result["results"][0]["geometry"]["location"]
            formatted_address = geocode_result["results"][0]["formatted_address"]
            
            print(f"Formatted address: {formatted_address}")
            print(f"Coordinates: {location['lat']}, {location['lng']}")
            
            # Get reverse geocoding result
            print("\nReverse geocoding these coordinates...")
            reverse_result = client.call_tool("gmaps", "reverseGeocode", {
                "lat": location["lat"],
                "lng": location["lng"]
            })
            
            if reverse_result["status"] == "success" and reverse_result["result"]["status"] == "OK":
                print(f"Reverse geocoded address: {reverse_result['result']['results'][0]['formatted_address']}")
            else:
                print("Reverse geocoding failed")
        else:
            print(f"Geocoding failed: {geocode_result['status']}")
    else:
        print(f"Error: {result.get('error')}")

def example_puppeteer(client: MCPClient, url: str) -> None:
    """
    Example: Take a screenshot of a webpage using Puppeteer.
    
    Args:
        client: The MCP client
        url: The URL to screenshot
    """
    print(f"\n=== Taking screenshot of {url} ===")
    
    result = client.call_tool("puppeteer", "screenshot", {
        "url": url,
        "fullPage": False,
        "type": "png"
    })
    
    if result["status"] == "success":
        screenshot_result = result["result"]
        
        if screenshot_result["success"]:
            # Save the screenshot to a file
            img_data = screenshot_result["base64Image"]
            filename = "screenshot.png"
            
            import base64
            with open(filename, "wb") as f:
                f.write(base64.b64decode(img_data))
            
            print(f"Screenshot saved to {filename}")
        else:
            print(f"Screenshot failed: {screenshot_result.get('error')}")
    else:
        print(f"Error: {result.get('error')}")

def main():
    """Run the example client"""
    parser = argparse.ArgumentParser(description="MCP Client Example")
    parser.add_argument("--url", default=os.getenv("MCP_SERVER_URL", "http://localhost:5000"),
                        help="MCP server URL (default: http://localhost:5000)")
    parser.add_argument("--github-user", default="octocat",
                        help="GitHub username for repository listing example (default: octocat)")
    parser.add_argument("--address", default="1600 Amphitheatre Parkway, Mountain View, CA",
                        help="Address for geocoding example")
    parser.add_argument("--webpage", default="https://example.com",
                        help="Webpage URL for screenshot example")
    args = parser.parse_args()
    
    client = MCPClient(args.url)
    
    try:
        # Show available tools
        print(f"Connecting to MCP server at {args.url}...")
        tools = client.list_tools()
        print(f"Available tools: {', '.join(tools)}")
        
        # Run examples
        example_github_repos(client, args.github_user)
        example_memory_operations(client)
        example_google_maps(client, args.address)
        example_puppeteer(client, args.webpage)
        
        print("\n✅ All examples completed successfully!")
    
    except requests.exceptions.RequestException as e:
        print(f"Error connecting to MCP server: {str(e)}")
    except Exception as e:
        print(f"Error: {str(e)}")

if __name__ == "__main__":
    main()

```

--------------------------------------------------------------------------------
/puppeteer-module.py:
--------------------------------------------------------------------------------

```python
# tools/puppeteer_tool.py
from flask import Blueprint, request, jsonify, current_app
import os
import json
import base64
import tempfile
import subprocess
from pathlib import Path

puppeteer_routes = Blueprint('puppeteer', __name__)

# Path to the Node.js scripts
SCRIPT_DIR = Path(__file__).parent.parent / 'node_scripts'

def ensure_script_dir():
    """Ensure the puppeteer scripts directory exists and create necessary scripts"""
    os.makedirs(SCRIPT_DIR, exist_ok=True)
    
    # Create the screenshot script if it doesn't exist
    screenshot_script = SCRIPT_DIR / 'screenshot.js'
    if not screenshot_script.exists():
        with open(screenshot_script, 'w') as f:
            f.write("""
const puppeteer = require('puppeteer');
const fs = require('fs');

(async () => {
  const args = JSON.parse(process.argv[2]);
  const browser = await puppeteer.launch({
    headless: args.headless !== false,
    executablePath: args.executablePath || null,
    args: ['--no-sandbox', '--disable-setuid-sandbox']
  });
  
  const page = await browser.newPage();
  
  if (args.viewport) {
    await page.setViewport(args.viewport);
  }
  
  if (args.userAgent) {
    await page.setUserAgent(args.userAgent);
  }
  
  await page.goto(args.url, { 
    waitUntil: args.waitUntil || 'networkidle2',
    timeout: args.timeout || 30000
  });
  
  if (args.waitForSelector) {
    await page.waitForSelector(args.waitForSelector, { timeout: args.selectorTimeout || 30000 });
  }
  
  if (args.waitTime) {
    await new Promise(resolve => setTimeout(resolve, args.waitTime));
  }
  
  const screenshotOptions = {
    path: args.outputPath,
    fullPage: args.fullPage === true,
    type: args.type || 'png',
    quality: args.type === 'jpeg' ? (args.quality || 80) : undefined
  };
  
  await page.screenshot(screenshotOptions);
  await browser.close();
  
  console.log(JSON.stringify({ success: true, outputPath: args.outputPath }));
})().catch(err => {
  console.error(JSON.stringify({ success: false, error: err.message }));
  process.exit(1);
});
            """)
    
    # Create the PDF script if it doesn't exist
    pdf_script = SCRIPT_DIR / 'pdf.js'
    if not pdf_script.exists():
        with open(pdf_script, 'w') as f:
            f.write("""
const puppeteer = require('puppeteer');
const fs = require('fs');

(async () => {
  const args = JSON.parse(process.argv[2]);
  const browser = await puppeteer.launch({
    headless: args.headless !== false,
    executablePath: args.executablePath || null,
    args: ['--no-sandbox', '--disable-setuid-sandbox']
  });
  
  const page = await browser.newPage();
  
  if (args.viewport) {
    await page.setViewport(args.viewport);
  }
  
  if (args.userAgent) {
    await page.setUserAgent(args.userAgent);
  }
  
  await page.goto(args.url, { 
    waitUntil: args.waitUntil || 'networkidle2',
    timeout: args.timeout || 30000
  });
  
  if (args.waitForSelector) {
    await page.waitForSelector(args.waitForSelector, { timeout: args.selectorTimeout || 30000 });
  }
  
  if (args.waitTime) {
    await new Promise(resolve => setTimeout(resolve, args.waitTime));
  }
  
  const pdfOptions = {
    path: args.outputPath,
    format: args.format || 'A4',
    printBackground: args.printBackground !== false,
    margin: args.margin || { top: '1cm', right: '1cm', bottom: '1cm', left: '1cm' }
  };
  
  await page.pdf(pdfOptions);
  await browser.close();
  
  console.log(JSON.stringify({ success: true, outputPath: args.outputPath }));
})().catch(err => {
  console.error(JSON.stringify({ success: false, error: err.message }));
  process.exit(1);
});
            """)
    
    # Create the content extraction script if it doesn't exist
    extract_script = SCRIPT_DIR / 'extract.js'
    if not extract_script.exists():
        with open(extract_script, 'w') as f:
            f.write("""
const puppeteer = require('puppeteer');
const fs = require('fs');

(async () => {
  const args = JSON.parse(process.argv[2]);
  const browser = await puppeteer.launch({
    headless: args.headless !== false,
    executablePath: args.executablePath || null,
    args: ['--no-sandbox', '--disable-setuid-sandbox']
  });
  
  const page = await browser.newPage();
  
  if (args.userAgent) {
    await page.setUserAgent(args.userAgent);
  }
  
  await page.goto(args.url, { 
    waitUntil: args.waitUntil || 'networkidle2',
    timeout: args.timeout || 30000
  });
  
  if (args.waitForSelector) {
    await page.waitForSelector(args.waitForSelector, { timeout: args.selectorTimeout || 30000 });
  }
  
  if (args.waitTime) {
    await new Promise(resolve => setTimeout(resolve, args.waitTime));
  }
  
  let result;
  
  if (args.selector) {
    if (args.extractHtml) {
      result = await page.evaluate((selector) => {
        const elements = Array.from(document.querySelectorAll(selector));
        return elements.map(el => el.outerHTML);
      }, args.selector);
    } else {
      result = await page.evaluate((selector) => {
        const elements = Array.from(document.querySelectorAll(selector));
        return elements.map(el => el.textContent.trim());
      }, args.selector);
    }
  } else {
    if (args.extractHtml) {
      result = await page.content();
    } else {
      result = await page.evaluate(() => document.body.innerText);
    }
  }
  
  await browser.close();
  
  console.log(JSON.stringify({ success: true, content: result }));
})().catch(err => {
  console.error(JSON.stringify({ success: false, error: err.message }));
  process.exit(1);
});
            """)

def handle_action(action, parameters):
    """Handle Puppeteer tool actions according to MCP standard"""
    ensure_script_dir()
    
    action_handlers = {
        "screenshot": take_screenshot,
        "pdf": generate_pdf,
        "extract": extract_content
    }
    
    if action not in action_handlers:
        raise ValueError(f"Unknown action: {action}")
    
    return action_handlers[action](parameters)

def take_screenshot(parameters):
    """Take a screenshot of a webpage"""
    url = parameters.get('url')
    full_page = parameters.get('fullPage', False)
    image_type = parameters.get('type', 'png')
    
    if not url:
        raise ValueError("URL parameter is required")
    
    # Create a temporary file for the screenshot
    with tempfile.NamedTemporaryFile(suffix=f'.{image_type}', delete=False) as tmp_file:
        output_path = tmp_file.name
    
    # Prepare arguments for the Node.js script
    script_args = {
        'url': url,
        'outputPath': output_path,
        'fullPage': full_page,
        'type': image_type,
        'headless': current_app.config.get('PUPPETEER_HEADLESS', True),
        'executablePath': current_app.config.get('CHROME_PATH')
    }
    
    # Add optional parameters if provided
    for param in ['waitForSelector', 'waitTime', 'viewport', 'userAgent', 'quality']:
        if param in parameters:
            script_args[param] = parameters[param]
    
    # Execute the Node.js script
    script_path = SCRIPT_DIR / 'screenshot.js'
    
    try:
        process = subprocess.run(
            ['node', str(script_path), json.dumps(script_args)],
            capture_output=True,
            text=True,
            check=True
        )
        
        # Parse the output
        result = json.loads(process.stdout)
        
        # Read the screenshot file
        with open(output_path, 'rb') as f:
            image_data = f.read()
        
        # Encode as base64
        base64_image = base64.b64encode(image_data).decode('utf-8')
        
        # Clean up the file
        os.unlink(output_path)
        
        return {
            'success': True,
            'imageType': image_type,
            'base64Image': base64_image
        }
    
    except subprocess.CalledProcessError as e:
        # Clean up the file
        if os.path.exists(output_path):
            os.unlink(output_path)
        
        error_message = e.stderr
        try:
            error_data = json.loads(error_message)
            return {
                'success': False,
                'error': error_data.get('error', error_message)
            }
        except json.JSONDecodeError:
            return {
                'success': False,
                'error': error_message
            }
    
    except Exception as e:
        # Clean up the file
        if os.path.exists(output_path):
            os.unlink(output_path)
        
        return {
            'success': False,
            'error': str(e)
        }

def generate_pdf(parameters):
    """Generate a PDF of a webpage"""
    url = parameters.get('url')
    print_background = parameters.get('printBackground', True)
    
    if not url:
        raise ValueError("URL parameter is required")
    
    # Create a temporary file for the PDF
    with tempfile.NamedTemporaryFile(suffix='.pdf', delete=False) as tmp_file:
        output_path = tmp_file.name
    
    # Prepare arguments for the Node.js script
    script_args = {
        'url': url,
        'outputPath': output_path,
        'printBackground': print_background,
        'headless': current_app.config.get('PUPPETEER_HEADLESS', True),
        'executablePath': current_app.config.get('CHROME_PATH')
    }
    
    # Add optional parameters if provided
    for param in ['format', 'margin', 'waitForSelector', 'waitTime', 'viewport', 'userAgent']:
        if param in parameters:
            script_args[param] = parameters[param]
    
    # Execute the Node.js script
    script_path = SCRIPT_DIR / 'pdf.js'
    
    try:
        process = subprocess.run(
            ['node', str(script_path), json.dumps(script_args)],
            capture_output=True,
            text=True,
            check=True
        )
        
        # Parse the output
        result = json.loads(process.stdout)
        
        # Read the PDF file
        with open(output_path, 'rb') as f:
            pdf_data = f.read()
        
        # Encode as base64
        base64_pdf = base64.b64encode(pdf_data).decode('utf-8')
        
        # Clean up the file
        os.unlink(output_path)
        
        return {
            'success': True,
            'base64Pdf': base64_pdf
        }
    
    except subprocess.CalledProcessError as e:
        # Clean up the file
        if os.path.exists(output_path):
            os.unlink(output_path)
        
        error_message = e.stderr
        try:
            error_data = json.loads(error_message)
            return {
                'success': False,
                'error': error_data.get('error', error_message)
            }
        except json.JSONDecodeError:
            return {
                'success': False,
                'error': error_message
            }
    
    except Exception as e:
        # Clean up the file
        if os.path.exists(output_path):
            os.unlink(output_path)
        
        return {
            'success': False,
            'error': str(e)
        }

def extract_content(parameters):
    """Extract content from a webpage"""
    url = parameters.get('url')
    selector = parameters.get('selector')
    extract_html = parameters.get('extractHtml', False)
    
    if not url:
        raise ValueError("URL parameter is required")
    
    # Prepare arguments for the Node.js script
    script_args = {
        'url': url,
        'selector': selector,
        'extractHtml': extract_html,
        'headless': current_app.config.get('PUPPETEER_HEADLESS', True),
        'executablePath': current_app.config.get('CHROME_PATH')
    }
    
    # Add optional parameters if provided
    for param in ['waitForSelector', 'waitTime', 'userAgent']:
        if param in parameters:
            script_args[param] = parameters[param]
    
    # Execute the Node.js script
    script_path = SCRIPT_DIR / 'extract.js'
    
    try:
        process = subprocess.run(
            ['node', str(script_path), json.dumps(script_args)],
            capture_output=True,
            text=True,
            check=True
        )
        
        # Parse the output
        result = json.loads(process.stdout)
        
        return {
            'success': True,
            'content': result.get('content')
        }
    
    except subprocess.CalledProcessError as e:
        error_message = e.stderr
        try:
            error_data = json.loads(error_message)
            return {
                'success': False,
                'error': error_data.get('error', error_message)
            }
        except json.JSONDecodeError:
            return {
                'success': False,
                'error': error_message
            }
    
    except Exception as e:
        return {
            'success': False,
            'error': str(e)
        }

# API routes for direct access (not through MCP gateway)
@puppeteer_routes.route('/screenshot', methods=['POST'])
def api_screenshot():
    """API endpoint for taking a screenshot"""
    try:
        data = request.get_json()
        result = take_screenshot(data)
        return jsonify(result)
    except Exception as e:
        return jsonify({'success': False, 'error': str(e)}), 400

@puppeteer_routes.route('/pdf', methods=['POST'])
def api_pdf():
    """API endpoint for generating a PDF"""
    try:
        data = request.get_json()
        result = generate_pdf(data)
        return jsonify(result)
    except Exception as e:
        return jsonify({'success': False, 'error': str(e)}), 400

@puppeteer_routes.route('/extract', methods=['POST'])
def api_extract():
    """API endpoint for extracting content"""
    try:
        data = request.get_json()
        result = extract_content(data)
        return jsonify(result)
    except Exception as e:
        return jsonify({'success': False, 'error': str(e)}), 400
```

--------------------------------------------------------------------------------
/llm-integration.txt:
--------------------------------------------------------------------------------

```
#!/usr/bin/env python
"""
Example of integrating the MCP Server with a language model.
This example shows how to use the MCP server as a tool provider for an LLM.
"""

import os
import json
import requests
import argparse
from dotenv import load_dotenv
from typing import Dict, Any, List, Optional

# Load environment variables
load_dotenv()

# Configure API keys
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
ANTHROPIC_API_KEY = os.getenv("ANTHROPIC_API_KEY")
MCP_SERVER_URL = os.getenv("MCP_SERVER_URL", "http://localhost:5000")

class MCPToolProvider:
    """
    Tool provider for language models using the MCP server.
    """
    
    def __init__(self, base_url: str):
        """
        Initialize the MCP tool provider.
        
        Args:
            base_url: The base URL of the MCP server
        """
        self.base_url = base_url
        self.gateway_url = f"{base_url}/mcp/gateway"
        self.manifest_url = f"{base_url}/mcp/manifest"
        self.manifest = None
        self.tools_schema = []
    
    def load_tools(self) -> List[Dict[str, Any]]:
        """
        Load tool definitions from the MCP server manifest and convert to OpenAI tools format.
        
        Returns:
            A list of tools in OpenAI format
        """
        response = requests.get(self.manifest_url)
        response.raise_for_status()
        self.manifest = response.json()
        
        tools_schema = []
        
        for tool_name, tool_info in self.manifest["tools"].items():
            for action_name, action_info in tool_info["actions"].items():
                # Create OpenAI function calling format
                function_def = {
                    "type": "function",
                    "function": {
                        "name": f"{tool_name}_{action_name}",
                        "description": action_info["description"],
                        "parameters": {
                            "type": "object",
                            "properties": {},
                            "required": []
                        }
                    }
                }
                
                # Add parameters
                for param_name, param_info in action_info.get("parameters", {}).items():
                    function_def["function"]["parameters"]["properties"][param_name] = {
                        "type": param_info.get("type", "string"),
                        "description": param_info.get("description", "")
                    }
                    
                    # Add to required list if no default value
                    if "default" not in param_info:
                        function_def["function"]["parameters"]["required"].append(param_name)
                
                tools_schema.append(function_def)
        
        self.tools_schema = tools_schema
        return tools_schema
    
    def execute_tool(self, function_name: str, arguments: Dict[str, Any]) -> Dict[str, Any]:
        """
        Execute a tool function via the MCP gateway.
        
        Args:
            function_name: The name of the function to execute (in format "tool_action")
            arguments: The arguments for the function
        
        Returns:
            The result of the function execution
        """
        # Split function name into tool and action
        parts = function_name.split("_", 1)
        if len(parts) != 2:
            raise ValueError(f"Invalid function name format: {function_name}")
        
        tool, action = parts
        
        # Call the MCP gateway
        payload = {
            "tool": tool,
            "action": action,
            "parameters": arguments
        }
        
        response = requests.post(self.gateway_url, json=payload)
        response.raise_for_status()
        result = response.json()
        
        if result["status"] != "success":
            raise Exception(f"Tool execution failed: {result.get('error', 'Unknown error')}")
        
        return result["result"]


class OpenAIClient:
    """
    OpenAI API client with MCP tool integration.
    """
    
    def __init__(self, api_key: str, model: str = "gpt-4-0125-preview"):
        """
        Initialize the OpenAI client.
        
        Args:
            api_key: The OpenAI API key
            model: The model to use
        """
        self.api_key = api_key
        self.model = model
        self.headers = {
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}"
        }
        self.endpoint = "https://api.openai.com/v1/chat/completions"
    
    def chat(self, 
             messages: List[Dict[str, Any]], 
             tools: Optional[List[Dict[str, Any]]] = None,
             tool_provider: Optional[MCPToolProvider] = None) -> Dict[str, Any]:
        """
        Chat with the OpenAI model using tools.
        
        Args:
            messages: The conversation messages
            tools: The tools to make available to the model
            tool_provider: The tool provider for executing tool calls
        
        Returns:
            The model's response
        """
        payload = {
            "model": self.model,
            "messages": messages,
            "temperature": 0.7,
        }
        
        if tools:
            payload["tools"] = tools
            payload["tool_choice"] = "auto"
        
        # Make the initial API call
        response = requests.post(self.endpoint, headers=self.headers, json=payload)
        response.raise_for_status()
        result = response.json()
        
        # Handle tool calls if present
        while tool_provider and result["choices"][0]["message"].get("tool_calls"):
            tool_message = result["choices"][0]["message"]
            messages.append(tool_message)
            
            # Process each tool call
            for tool_call in tool_message["tool_calls"]:
                function_name = tool_call["function"]["name"]
                arguments = json.loads(tool_call["function"]["arguments"])
                
                try:
                    # Execute the tool
                    function_response = tool_provider.execute_tool(function_name, arguments)
                    
                    # Add the tool response to messages
                    messages.append({
                        "role": "tool",
                        "tool_call_id": tool_call["id"],
                        "name": function_name,
                        "content": json.dumps(function_response)
                    })
                except Exception as e:
                    # Handle tool execution errors
                    messages.append({
                        "role": "tool",
                        "tool_call_id": tool_call["id"],
                        "name": function_name,
                        "content": json.dumps({"error": str(e)})
                    })
            
            # Make another API call with the tool responses
            payload["messages"] = messages
            response = requests.post(self.endpoint, headers=self.headers, json=payload)
            response.raise_for_status()
            result = response.json()
        
        return result


class AnthropicClient:
    """
    Anthropic API client with MCP tool integration.
    """
    
    def __init__(self, api_key: str, model: str = "claude-3-opus-20240229"):
        """
        Initialize the Anthropic client.
        
        Args:
            api_key: The Anthropic API key
            model: The model to use
        """
        self.api_key = api_key
        self.model = model
        self.headers = {
            "Content-Type": "application/json",
            "X-API-Key": api_key,
            "anthropic-version": "2023-06-01"
        }
        self.endpoint = "https://api.anthropic.com/v1/messages"
    
    def chat(self, 
             messages: List[Dict[str, Any]], 
             tools: Optional[List[Dict[str, Any]]] = None,
             tool_provider: Optional[MCPToolProvider] = None) -> Dict[str, Any]:
        """
        Chat with the Anthropic model using tools.
        
        Args:
            messages: The conversation messages
            tools: The tools to make available to the model
            tool_provider: The tool provider for executing tool calls
        
        Returns:
            The model's response
        """
        # Convert OpenAI message format to Anthropic format
        anthropic_messages = []
        system_content = ""
        for msg in messages:
            if msg["role"] == "user":
                anthropic_messages.append({"role": "user", "content": msg["content"]})
            elif msg["role"] == "assistant":
                anthropic_messages.append({"role": "assistant", "content": msg["content"]})
            elif msg["role"] == "system":
                # The Anthropic API takes the system prompt as a top-level field
                system_content = msg["content"]
        
        payload = {
            "model": self.model,
            "messages": anthropic_messages,
            "temperature": 0.7,
            "system": system_content,
            "max_tokens": 1024
        }
        
        if tools:
            # Convert OpenAI tools format to Anthropic tools format
            anthropic_tools = []
            for tool in tools:
                if tool["type"] == "function":
                    anthropic_tools.append({
                        "name": tool["function"]["name"],
                        "description": tool["function"]["description"],
                        "input_schema": tool["function"]["parameters"]
                    })
            
            payload["tools"] = anthropic_tools
        
        # Make the initial API call
        response = requests.post(self.endpoint, headers=self.headers, json=payload)
        response.raise_for_status()
        result = response.json()
        
        # Handle tool calls if present
        # Note: This is simplified and would need to be expanded for actual Anthropic tool use
        # as the formats between OpenAI and Anthropic differ
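        #
        # A rough sketch of that expansion (based on an assumed Anthropic
        # Messages API tool-use flow -- tool_use/tool_result content blocks and
        # a "tool_use" stop_reason; verify against the current API docs before
        # relying on it):
        #
        #     while tool_provider and result.get("stop_reason") == "tool_use":
        #         tool_results = []
        #         for block in result.get("content", []):
        #             if block.get("type") != "tool_use":
        #                 continue
        #             output = tool_provider.execute_tool(block["name"], block["input"])
        #             tool_results.append({
        #                 "type": "tool_result",
        #                 "tool_use_id": block["id"],
        #                 "content": json.dumps(output)
        #             })
        #         anthropic_messages.append({"role": "assistant", "content": result["content"]})
        #         anthropic_messages.append({"role": "user", "content": tool_results})
        #         payload["messages"] = anthropic_messages
        #         response = requests.post(self.endpoint, headers=self.headers, json=payload)
        #         response.raise_for_status()
        #         result = response.json()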
        
        return result


def run_github_example():
    """
    Run an example of GitHub tool integration with an LLM.
    """
    # Initialize the MCP tool provider
    tool_provider = MCPToolProvider(MCP_SERVER_URL)
    tools = tool_provider.load_tools()
    
    # Filter to just GitHub tools for this example
    github_tools = [tool for tool in tools if tool["function"]["name"].startswith("github_")]
    
    # Initialize the OpenAI client
    openai_client = OpenAIClient(OPENAI_API_KEY)
    
    # Set up the conversation
    messages = [
        {"role": "system", "content": "You are a helpful assistant that can use GitHub tools to retrieve information."},
        {"role": "user", "content": "What repositories does the user 'octocat' have on GitHub?"}
    ]
    
    # Chat with the model using GitHub tools
    print("Sending query to the language model with GitHub tools...")
    result = openai_client.chat(messages, github_tools, tool_provider)
    
    # Print the result
    assistant_message = result["choices"][0]["message"]
    print("\nAssistant response:")
    print(assistant_message["content"])


def run_maps_memory_example():
    """
    Run an example combining Google Maps and Memory tools with an LLM.
    """
    # Initialize the MCP tool provider
    tool_provider = MCPToolProvider(MCP_SERVER_URL)
    tools = tool_provider.load_tools()
    
    # Filter to just Google Maps and Memory tools for this example
    selected_tools = [
        tool for tool in tools 
        if tool["function"]["name"].startswith("gmaps_") or tool["function"]["name"].startswith("memory_")
    ]
    
    # Initialize the OpenAI client
    openai_client = OpenAIClient(OPENAI_API_KEY)
    
    # Set up the conversation
    messages = [
        {"role": "system", "content": "You are a helpful assistant that can look up locations and store information in memory."},
        {"role": "user", "content": "Find the coordinates for the Empire State Building, then save that information in memory with the key 'empire_state_building'."}
    ]
    
    # Chat with the model using the selected tools
    print("Sending query to the language model with Google Maps and Memory tools...")
    result = openai_client.chat(messages, selected_tools, tool_provider)
    
    # Print the result
    assistant_message = result["choices"][0]["message"]
    print("\nAssistant response:")
    print(assistant_message["content"])
    
    # Continue the conversation
    messages.append(assistant_message)
    messages.append({"role": "user", "content": "Now retrieve the information you stored about the Empire State Building and use Google Maps to find a nearby coffee shop."})
    
    # Chat with the model again
    print("\nSending follow-up query...")
    result = openai_client.chat(messages, selected_tools, tool_provider)
    
    # Print the result
    assistant_message = result["choices"][0]["message"]
    print("\nAssistant response:")
    print(assistant_message["content"])


def main():
    """Run the example integrations"""
    parser = argparse.ArgumentParser(description="MCP Integration with LLMs")
    parser.add_argument("--example", choices=["github", "maps_memory"], default="github",
                        help="Example to run (default: github)")
    args = parser.parse_args()
    
    if not OPENAI_API_KEY:
        print("Error: OPENAI_API_KEY environment variable is not set")
        return
    
    try:
        if args.example == "github":
            run_github_example()
        elif args.example == "maps_memory":
            run_maps_memory_example()
    
    except requests.exceptions.RequestException as e:
        print(f"Error connecting to server: {str(e)}")
    except Exception as e:
        print(f"Error: {str(e)}")


if __name__ == "__main__":
    main()
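
# Example usage (assuming this script is saved as llm-integration.py, the MCP
# server is running at MCP_SERVER_URL, and OPENAI_API_KEY is exported):
#   python llm-integration.py --example github
#   python llm-integration.py --example maps_memory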

```

--------------------------------------------------------------------------------
/mcp-server-code.py:
--------------------------------------------------------------------------------

```python
# Model Context Protocol (MCP) Server - Main Application
# project structure:
# mcp_server/
# ├── app.py
# ├── config.py
# ├── tools/
# │   ├── __init__.py
# │   ├── github_tool.py
# │   ├── gitlab_tool.py
# │   ├── gmaps_tool.py
# │   ├── memory_tool.py
# │   └── puppeteer_tool.py
# ├── static/
# └── templates/

# app.py
from flask import Flask, request, jsonify
from flask_cors import CORS
import json
import os
from config import Config
from tools.github_tool import github_routes
from tools.gitlab_tool import gitlab_routes
from tools.gmaps_tool import gmaps_routes
from tools.memory_tool import memory_routes
from tools.puppeteer_tool import puppeteer_routes

app = Flask(__name__)
CORS(app)
app.config.from_object(Config)

# Register tool routes
app.register_blueprint(github_routes, url_prefix='/tool/github')
app.register_blueprint(gitlab_routes, url_prefix='/tool/gitlab')
app.register_blueprint(gmaps_routes, url_prefix='/tool/gmaps')
app.register_blueprint(memory_routes, url_prefix='/tool/memory')
app.register_blueprint(puppeteer_routes, url_prefix='/tool/puppeteer')

# MCP Gateway endpoint
@app.route('/mcp/gateway', methods=['POST'])
def mcp_gateway():
    data = request.get_json()
    
    if not data:
        return jsonify({"error": "Request body is required"}), 400
    
    # Parse the MCP request
    tool_name = data.get('tool')
    action = data.get('action')
    parameters = data.get('parameters', {})
    
    # Check for required fields
    if not tool_name:
        return jsonify({"error": "Tool name is required"}), 400
    if not action:
        return jsonify({"error": "Action is required"}), 400
    
    # Route to the appropriate tool
    try:
        # Forward the request to the matching tool handler.
        # In a real deployment this could forward to the tool's HTTP endpoint
        # (f"/tool/{tool_name}/{action}") via Flask's test_client or the requests
        # library; for this demo the handlers are called directly (a table-driven
        # alternative is sketched after this function).
        if tool_name == "github":
            from tools.github_tool import handle_action
            result = handle_action(action, parameters)
        elif tool_name == "gitlab":
            from tools.gitlab_tool import handle_action
            result = handle_action(action, parameters)
        elif tool_name == "gmaps":
            from tools.gmaps_tool import handle_action
            result = handle_action(action, parameters)
        elif tool_name == "memory":
            from tools.memory_tool import handle_action
            result = handle_action(action, parameters)
        elif tool_name == "puppeteer":
            from tools.puppeteer_tool import handle_action
            result = handle_action(action, parameters)
        else:
            return jsonify({"error": f"Unknown tool: {tool_name}"}), 404
        
        # Format the response according to MCP
        mcp_response = {
            "tool": tool_name,
            "action": action,
            "status": "success",
            "result": result
        }
        
        return jsonify(mcp_response)
    
    except Exception as e:
        # Handle errors according to MCP
        mcp_error = {
            "tool": tool_name,
            "action": action,
            "status": "error",
            "error": {
                "type": type(e).__name__,
                "message": str(e)
            }
        }
        
        return jsonify(mcp_error), 500
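
# --- Illustrative alternative (not part of the original gateway) ---
# A minimal sketch of table-driven dispatch instead of the if/elif chain above,
# assuming every tool module exposes handle_action(action, parameters) exactly
# as the branches in mcp_gateway do.
import importlib

def dispatch_tool_action(tool_name, action, parameters):
    """Look up tools.<tool_name>_tool.handle_action and invoke it."""
    module = importlib.import_module(f"tools.{tool_name}_tool")
    return module.handle_action(action, parameters)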

# MCP manifest endpoint
@app.route('/mcp/manifest', methods=['GET'])
def mcp_manifest():
    """Returns the MCP manifest describing available tools"""
    
    manifest = {
        "manifestVersion": "1.0",
        "tools": {
            "github": {
                "actions": {
                    "listRepos": {
                        "description": "List repositories for a user or organization",
                        "parameters": {
                            "username": {
                                "type": "string",
                                "description": "GitHub username or organization name"
                            }
                        },
                        "returns": {
                            "type": "array",
                            "description": "List of repository objects"
                        }
                    },
                    "getRepo": {
                        "description": "Get details for a specific repository",
                        "parameters": {
                            "owner": {
                                "type": "string",
                                "description": "Repository owner"
                            },
                            "repo": {
                                "type": "string",
                                "description": "Repository name"
                            }
                        },
                        "returns": {
                            "type": "object",
                            "description": "Repository details"
                        }
                    },
                    "searchRepos": {
                        "description": "Search for repositories",
                        "parameters": {
                            "query": {
                                "type": "string",
                                "description": "Search query"
                            }
                        },
                        "returns": {
                            "type": "object",
                            "description": "Search results"
                        }
                    },
                    "getIssues": {
                        "description": "Get issues for a repository",
                        "parameters": {
                            "owner": {
                                "type": "string",
                                "description": "Repository owner"
                            },
                            "repo": {
                                "type": "string",
                                "description": "Repository name"
                            },
                            "state": {
                                "type": "string",
                                "description": "Issue state (open, closed, all)",
                                "default": "open"
                            }
                        },
                        "returns": {
                            "type": "array",
                            "description": "List of issue objects"
                        }
                    },
                    "createIssue": {
                        "description": "Create a new issue in a repository",
                        "parameters": {
                            "owner": {
                                "type": "string",
                                "description": "Repository owner"
                            },
                            "repo": {
                                "type": "string",
                                "description": "Repository name"
                            },
                            "title": {
                                "type": "string",
                                "description": "Issue title"
                            },
                            "body": {
                                "type": "string",
                                "description": "Issue body"
                            }
                        },
                        "returns": {
                            "type": "object",
                            "description": "Created issue"
                        }
                    }
                }
            },
            "gitlab": {
                "actions": {
                    "listProjects": {
                        "description": "List all projects accessible by the authenticated user",
                        "parameters": {},
                        "returns": {
                            "type": "array",
                            "description": "List of project objects"
                        }
                    },
                    "getProject": {
                        "description": "Get details for a specific project",
                        "parameters": {
                            "projectId": {
                                "type": "string",
                                "description": "GitLab project ID"
                            }
                        },
                        "returns": {
                            "type": "object",
                            "description": "Project details"
                        }
                    },
                    "searchProjects": {
                        "description": "Search for projects on GitLab",
                        "parameters": {
                            "query": {
                                "type": "string",
                                "description": "Search query"
                            }
                        },
                        "returns": {
                            "type": "object",
                            "description": "Search results"
                        }
                    }
                }
            },
            "gmaps": {
                "actions": {
                    "geocode": {
                        "description": "Convert an address to geographic coordinates",
                        "parameters": {
                            "address": {
                                "type": "string",
                                "description": "Address to geocode"
                            }
                        },
                        "returns": {
                            "type": "object",
                            "description": "Geocoding results"
                        }
                    },
                    "reverseGeocode": {
                        "description": "Convert geographic coordinates to an address",
                        "parameters": {
                            "lat": {
                                "type": "number",
                                "description": "Latitude"
                            },
                            "lng": {
                                "type": "number",
                                "description": "Longitude"
                            }
                        },
                        "returns": {
                            "type": "object",
                            "description": "Reverse geocoding results"
                        }
                    },
                    "getDirections": {
                        "description": "Get directions between two locations",
                        "parameters": {
                            "origin": {
                                "type": "string",
                                "description": "Origin address or coordinates"
                            },
                            "destination": {
                                "type": "string",
                                "description": "Destination address or coordinates"
                            },
                            "mode": {
                                "type": "string",
                                "description": "Travel mode (driving, walking, bicycling, transit)",
                                "default": "driving"
                            }
                        },
                        "returns": {
                            "type": "object",
                            "description": "Directions results"
                        }
                    }
                }
            },
            "memory": {
                "actions": {
                    "get": {
                        "description": "Get a memory item by key",
                        "parameters": {
                            "key": {
                                "type": "string",
                                "description": "Memory item key"
                            }
                        },
                        "returns": {
                            "type": "object",
                            "description": "Memory item"
                        }
                    },
                    "set": {
                        "description": "Create or update a memory item",
                        "parameters": {
                            "key": {
                                "type": "string",
                                "description": "Memory item key"
                            },
                            "value": {
                                "type": "any",
                                "description": "Memory item value"
                            },
                            "metadata": {
                                "type": "object",
                                "description": "Optional metadata",
                                "default": {}
                            }
                        },
                        "returns": {
                            "type": "object",
                            "description": "Created or updated memory item"
                        }
                    },
                    "delete": {
                        "description": "Delete a memory item by key",
                        "parameters": {
                            "key": {
                                "type": "string",
                                "description": "Memory item key"
                            }
                        },
                        "returns": {
                            "type": "object",
                            "description": "Deletion result"
                        }
                    },
                    "list": {
                        "description": "List all memory items, with optional filtering",
                        "parameters": {
                            "filterKey": {
                                "type": "string",
                                "description": "Optional key filter"
                            },
                            "limit": {
                                "type": "number",
                                "description": "Maximum number of items to return",
                                "default": 100
                            },
                            "offset": {
                                "type": "number",
                                "description": "Number of items to skip",
                                "default": 0
                            }
                        },
                        "returns": {
                            "type": "object",
                            "description": "List of memory items with pagination info"
                        }
                    }
                }
            },
            "puppeteer": {
                "actions": {
                    "screenshot": {
                        "description": "Take a screenshot of a webpage",
                        "parameters": {
                            "url": {
                                "type": "string",
                                "description": "URL to screenshot"
                            },
                            "fullPage": {
                                "type": "boolean",
                                "description": "Whether to capture the full page",
                                "default": False
                            },
                            "type": {
                                "type": "string",
                                "description": "Image type (png or jpeg)",
                                "default": "png"
                            }
                        },
                        "returns": {
                            "type": "object",
                            "description": "Screenshot result with base64-encoded image"
                        }
                    },
                    "pdf": {
                        "description": "Generate a PDF of a webpage",
                        "parameters": {
                            "url": {
                                "type": "string",
                                "description": "URL to convert to PDF"
                            },
                            "printBackground": {
                                "type": "boolean",
                                "description": "Whether to print background graphics",
                                "default": True
                            }
                        },
                        "returns": {
                            "type": "object",
                            "description": "PDF result with base64-encoded document"
                        }
                    },
                    "extract": {
                        "description": "Extract content from a webpage",
                        "parameters": {
                            "url": {
                                "type": "string",
                                "description": "URL to extract content from"
                            },
                            "selector": {
                                "type": "string",
                                "description": "CSS selector for content to extract"
                            }
                        },
                        "returns": {
                            "type": "object",
                            "description": "Extracted content"
                        }
                    }
                }
            }
        }
    }
    
    return jsonify(manifest)
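
# --- Illustrative example (not part of the original server) ---
# A sample request body matching the manifest above; a client would POST this
# JSON to /mcp/gateway. The address is a placeholder.
EXAMPLE_GATEWAY_REQUEST = {
    "tool": "gmaps",
    "action": "geocode",
    "parameters": {"address": "350 5th Ave, New York, NY"}
}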

# Health check endpoint
@app.route('/health', methods=['GET'])
def health_check():
    return jsonify({'status': 'ok'})
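
# Quick manual checks once the server is running (default port 5000):
#   curl http://localhost:5000/health
#   curl http://localhost:5000/mcp/manifest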

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=int(os.environ.get('PORT', 5000)), debug=Config.DEBUG)

```