# Directory Structure

```
├── .gitignore
├── .python-version
├── Dockerfile
├── LICENSE
├── main.py
├── mcp_client_bedrock
│   ├── converse_agent.py
│   ├── converse_tools.py
│   ├── main.py
│   ├── mcp_client.py
│   ├── pyproject.toml
│   ├── README.md
│   └── uv.lock
├── pyproject.toml
├── README.md
├── sample_functions
│   ├── customer-id-from-email
│   │   └── app.py
│   ├── customer-info-from-id
│   │   └── app.py
│   ├── run-python-code
│   │   ├── app.py
│   │   └── lambda_function.py
│   ├── samconfig.toml
│   └── template.yml
├── smithery.yaml
└── uv.lock
```

# Files

--------------------------------------------------------------------------------
/.python-version:
--------------------------------------------------------------------------------

```
3.12

```

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
# Python-generated files
__pycache__/
*.py[oc]
build/
dist/
wheels/
*.egg-info

# Virtual environments
.venv
.DS_Store
```

--------------------------------------------------------------------------------
/mcp_client_bedrock/README.md:
--------------------------------------------------------------------------------

```markdown
This is a demo of Anthropic's open-source Model Context Protocol (MCP) used with the Amazon Bedrock Converse API. This combination allows MCP to be used with any of the many models supported by the Converse API.

See https://github.com/mikegc-aws/amazon-bedrock-mcp for more information.
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
# MCP2Lambda

[![smithery badge](https://smithery.ai/badge/@danilop/MCP2Lambda)](https://smithery.ai/server/@danilop/MCP2Lambda)

<a href="https://glama.ai/mcp/servers/4hokv207sz">
  <img width="380" height="200" src="https://glama.ai/mcp/servers/4hokv207sz/badge" alt="MCP2Lambda MCP server" />
</a>

Run any [AWS Lambda](https://aws.amazon.com/lambda/) function as a Large Language Model (LLM) **tool** without code changes using [Anthropic](https://www.anthropic.com)'s [Model Context Protocol (MCP)](https://github.com/modelcontextprotocol).

```mermaid
graph LR
    A[Model] <--> B[MCP Client]
    B <--> C["MCP2Lambda<br>(MCP Server)"]
    C <--> D[Lambda Function]
    D <--> E[Other AWS Services]
    D <--> F[Internet]
    D <--> G[VPC]
    
    style A fill:#f9f,stroke:#333,stroke-width:2px
    style B fill:#bbf,stroke:#333,stroke-width:2px
    style C fill:#bfb,stroke:#333,stroke-width:4px
    style D fill:#fbb,stroke:#333,stroke-width:2px
    style E fill:#fbf,stroke:#333,stroke-width:2px
    style F fill:#dff,stroke:#333,stroke-width:2px
    style G fill:#ffd,stroke:#333,stroke-width:2px
```

This MCP server acts as a **bridge** between MCP clients and AWS Lambda functions, allowing generative AI models to access and run Lambda functions as tools. This is useful, for example, to access private resources such as internal applications and databases without the need to provide public network access. This approach allows the model to use other AWS services, private networks, and the public internet.

From a **security** perspective, this approach implements segregation of duties by allowing the model to invoke the Lambda functions but not to access the other AWS services directly. The client only needs AWS credentials to invoke the Lambda functions. The Lambda functions can then interact with other AWS services (using the function role) and access public or private networks.
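As an illustration of this segregation of duties, the client-side credentials can be scoped down to only invoking the exposed functions. A minimal sketch of such an IAM policy, assuming the default `mcp2lambda-` prefix (the ARN pattern here is an assumption for illustration, not taken from this repository):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:*:*:function:mcp2lambda-*"
    }
  ]
}
```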

The MCP server gives access to two tools:

1. The first tool can **autodiscover** all Lambda functions in your account that match a prefix or an allowed list of names. This tool shares the names of the functions and their descriptions with the model.

2. The second tool allows the model to **invoke** those Lambda functions by name, passing the required parameters.
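In generic mode, a conversation typically chains these two tools: the model first lists the available functions, then invokes one by name. The argument key names below are hypothetical (the server's actual input schema is defined in its `main.py`, not reproduced here); this sketch only illustrates the shape of such a call:

```python
import json

# Hypothetical arguments for the generic 'invoke_lambda_function' tool.
# 'function_name' and 'parameters' are illustrative key names, not the
# server's confirmed schema.
arguments = {
    "function_name": "mcp2lambda-CustomerIdFromEmail",
    "parameters": {"email": "<customer email>"},
}
print(json.dumps(arguments))
```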

No code changes are required. However, you should adjust the following configurations to get the best results:

## Strategy Selection

The gateway supports two different strategies for handling Lambda functions:

1. **Pre-Discovery Mode** (default: enabled): Registers each Lambda function as an individual tool at startup. This provides a more intuitive interface where each function appears as its own named tool.

2. **Generic Mode**: Uses two generic tools (`list_lambda_functions` and `invoke_lambda_function`) to interact with Lambda functions.

You can control this behavior through:

- Environment variable: `PRE_DISCOVERY=true|false`
- CLI flag: `--no-pre-discovery` (disables pre-discovery mode)

Example:
```bash
# Disable pre-discovery mode
export PRE_DISCOVERY=false
python main.py

# Or using CLI flag to disable pre-discovery
python main.py --no-pre-discovery
```

1. To give the MCP client the knowledge it needs to use a Lambda function, the **description of the Lambda function** should state what the function does and which parameters it expects. See the sample functions for a quick demo and more details.

2. To help the model use the tools available via AWS Lambda, you can add something like this to your **system prompt**:

```
Use the AWS Lambda tools to improve your answers.
```

## Overview

MCP2Lambda enables LLMs to interact with AWS Lambda functions as tools, extending their capabilities beyond text generation. This allows models to:

- Access real-time and private data, including data sources in your VPCs
- Execute custom code, using a Lambda function as a sandbox environment
- Interact with external services and APIs, using Lambda functions' internet access (and bandwidth)
- Perform specialized calculations or data processing

The server uses the MCP protocol, which standardizes the way AI models can access external tools.

By default, only functions whose name starts with `mcp2lambda-` will be available to the model.
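The effect of this default can be sketched as a simple predicate. This is a hypothetical illustration of the filtering rule, not the server's actual code; `FUNCTION_PREFIX`, the allow-list name, and the sample function names are assumptions:

```python
# Hypothetical sketch of the filtering rule: a function is visible to the
# model only if its name starts with the prefix or appears in an explicit
# allow-list.
FUNCTION_PREFIX = "mcp2lambda-"
ALLOWED_FUNCTIONS: list[str] = []  # optional explicit allow-list

def is_function_exposed(name: str) -> bool:
    """Return True if this Lambda function should be offered as a tool."""
    return name.startswith(FUNCTION_PREFIX) or name in ALLOWED_FUNCTIONS

print(is_function_exposed("mcp2lambda-CustomerIdFromEmail"))  # True
print(is_function_exposed("internal-billing-job"))            # False
```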

## Prerequisites

- Python 3.12 or higher
- AWS account with configured credentials
- AWS Lambda functions (sample functions provided in the repo)
- An application using [Amazon Bedrock](https://aws.amazon.com/bedrock/) with the [Converse API](https://docs.aws.amazon.com/bedrock/latest/userguide/converse.html)
- An MCP-compatible client like [Claude Desktop](https://docs.anthropic.com/en/docs/claude-desktop)

## Installation

### Installing via Smithery

To install MCP2Lambda for Claude Desktop automatically via [Smithery](https://smithery.ai/server/@danilop/MCP2Lambda):

```bash
npx -y @smithery/cli install @danilop/MCP2Lambda --client claude
```

### Manual Installation
1. Clone the repository:
   ```
   git clone https://github.com/yourusername/mcp2lambda.git
   cd mcp2lambda
   ```

2. Configure AWS credentials. For example, using the [AWS CLI](https://aws.amazon.com/cli):
   ```
   aws configure
   ```

## Sample Lambda Functions

This repository includes three *sample* Lambda functions that demonstrate different use cases. These functions have basic permissions and can only write to CloudWatch logs.

### CustomerIdFromEmail
Retrieves a customer ID based on an email address. This function takes an email parameter and returns the associated customer ID, demonstrating how to build simple lookup tools. The function is hard coded to reply to the `[email protected]` email address. For example, you can ask the model to get the customer ID for the email `[email protected]`.

### CustomerInfoFromId
Retrieves detailed customer information based on a customer ID. This function returns customer details like name, email, and status, showing how Lambda can provide context-specific data. The function is hard coded to reply to the customer ID returned by the previous function. For example, you can ask the model to "Get the customer status for email `[email protected]`". This will use both functions to get to the result.

### RunPythonCode
Executes arbitrary Python code within a Lambda sandbox environment. This powerful function allows Claude to write and run Python code to perform calculations, data processing, or other operations not built into the model. For example, you can ask the model to "Calculate the number of prime numbers between 1 and 10, 1 and 100, and so on up to 1M".
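The sandbox behavior can be previewed locally: the handler shown later in this repository runs the script with `subprocess`. This minimal sketch mimics that step outside Lambda (the payload keys `input_script` and `install_modules` come from the sample function; the prime-counting script itself is illustrative):

```python
import subprocess
import sys

# Event payload in the shape RunPythonCode expects ('input_script' and
# 'install_modules' are the keys used by the sample function).
event = {
    "input_script": "print(sum(1 for n in range(2, 100) if all(n % d for d in range(2, n))))",
    "install_modules": [],
}

# Mimic the handler locally: run the script in a subprocess and capture output.
result = subprocess.run(
    [sys.executable, "-c", event["input_script"]],
    capture_output=True, text=True,
)
print(result.stdout.strip())  # prints 25, the number of primes below 100
```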

## Deploying Sample Lambda Functions

The repository includes sample Lambda functions in the `sample_functions` directory.

1. Install the AWS SAM CLI: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/install-sam-cli.html

2. Deploy the sample functions:
   ```
   cd sample_functions
   sam build
   sam deploy
   ```

The sample functions will be deployed with the prefix `mcp2lambda-`.

## Using with Amazon Bedrock

MCP2Lambda can also be used with Amazon Bedrock's Converse API, allowing you to use the MCP protocol with any of the models supported by Bedrock.

The `mcp_client_bedrock` directory contains a client implementation that connects MCP2Lambda to Amazon Bedrock models.

See https://github.com/mikegc-aws/amazon-bedrock-mcp for more information.

### Prerequisites

- Amazon Bedrock access and permissions to use models like Claude, Mistral, Llama, etc.
- Boto3 configured with appropriate credentials

### Installation and Setup

1. Navigate to the mcp_client_bedrock directory:
   ```
   cd mcp_client_bedrock
   ```

2. Install dependencies:
   ```
   uv pip install -e .
   ```

3. Run the client:
   ```
   python main.py
   ```

### Configuration

The client is configured to use Anthropic's Claude 3.7 Sonnet by default, but you can modify the `model_id` in `main.py` to use other Bedrock models:

```python
# Examples of supported models:
model_id = "us.anthropic.claude-3-7-sonnet-20250219-v1:0"
#model_id = "us.amazon.nova-pro-v1:0"
```

You can also customize the system prompt in the same file to change how the model behaves.

### Usage

1. Start the MCP2Lambda server in one terminal:
   ```
   cd mcp2lambda
   uv run main.py
   ```

2. Run the Bedrock client in another terminal:
   ```
   cd mcp_client_bedrock
   python main.py
   ```

3. Interact with the model through the command-line interface. The model will have access to the Lambda functions deployed earlier.

## Using with Claude Desktop

Add the following to your Claude Desktop configuration file:

```json
{
  "mcpServers": {
    "mcp2lambda": {
      "command": "uv",
      "args": [
        "--directory",
        "<full path to the mcp2lambda directory>",
        "run",
        "main.py"
      ]
    }
  }
}
```

To help the model use tools via AWS Lambda, you can add a sentence like the following to your personal preferences in your settings profile:

```
Use the AWS Lambda tools to improve your answers.
```

## Starting the MCP Server

Start the MCP server locally:

```sh
cd mcp2lambda
uv run main.py
```
```

--------------------------------------------------------------------------------
/sample_functions/samconfig.toml:
--------------------------------------------------------------------------------

```toml
version = 0.1
[default.deploy.parameters]
stack_name = "mcp2lambda"
resolve_s3 = true
s3_prefix = "mcp2lambda"
region = "us-east-1"
capabilities = "CAPABILITY_IAM"
image_repositories = []

```

--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------

```toml
[project]
name = "mcp2lambda"
version = "0.1.0"
description = "MCP2Lambda - A bridge between MCP clients and AWS Lambda functions"
readme = "README.md"
requires-python = ">=3.12"
dependencies = [
    "boto3>=1.37.0",
    "mcp==1.3.0",
]

[tool.uv.workspace]
members = ["mcp_bedrock"]

```

--------------------------------------------------------------------------------
/mcp_client_bedrock/pyproject.toml:
--------------------------------------------------------------------------------

```toml
[project]
name = "mcp-client-bedrock"
version = "0.1.0"
description = "Sample MCP client implementation for Amazon Bedrock (see https://github.com/mikegc-aws/amazon-bedrock-mcp for more information)"
readme = "README.md"
requires-python = ">=3.12"
dependencies = [
    "boto3>=1.37.0",
    "mcp==1.3.0",
]

```

--------------------------------------------------------------------------------
/smithery.yaml:
--------------------------------------------------------------------------------

```yaml
# Smithery configuration file: https://smithery.ai/docs/config#smitheryyaml

startCommand:
  type: stdio
  configSchema:
    # JSON Schema defining the configuration options for the MCP.
    {}
  commandFunction:
    # A function that produces the CLI command to start the MCP on stdio.
    |-
    (config) => ({ command: 'python', args: ['main.py'] })
  exampleConfig: {}

```

--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------

```dockerfile
# Generated by https://smithery.ai. See: https://smithery.ai/docs/config#dockerfile
FROM python:3.12-slim

WORKDIR /app

# Copy all project files into container
COPY . .

# Create a setup.cfg to restrict package discovery and avoid multiple top-level packages
RUN echo "[metadata]\nname = mcp2lambda\nversion = 0.1.0\n\n[options]\npy_modules = main" > setup.cfg

# Upgrade pip and install the package without caching
RUN pip install --upgrade pip \
    && pip install . --no-cache-dir

CMD ["python", "main.py"]

```

--------------------------------------------------------------------------------
/sample_functions/customer-id-from-email/app.py:
--------------------------------------------------------------------------------

```python
def lambda_handler(event: dict, context: dict) -> dict:
    """
    AWS Lambda function to retrieve customer ID based on customer email address.
    
    Args:
        event (dict): The Lambda event object containing the customer email
                      Expected format: {"email": "[email protected]"}
        context (dict): AWS Lambda context object
        
    Returns:
        dict: Customer ID if found, otherwise an error message
              Success format: {"customerId": "123"}
              Error format: {"error": "Customer not found"}
    """
    try:
        # Extract email from the event
        email = event.get('email')
        
        if not email:
            return {"error": "Missing email parameter"}
            
        # This would normally query a database
        # For demo purposes, we'll return mock data
        
        # Simulate database lookup
        if email == "[email protected]":
            return {"customerId": "12345"}
        else:
            return {"error": "Customer not found"}
            
    except Exception as e:
        return {"error": str(e)}

```

--------------------------------------------------------------------------------
/sample_functions/template.yml:
--------------------------------------------------------------------------------

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Sample functions for MCP servers.

Resources:

  CustomerInfoFromId:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ./customer-info-from-id
      Description: Customer status from { 'customerId' }
      MemorySize: 128
      Timeout: 3
      Handler: app.lambda_handler
      Runtime: python3.13
      Architectures:
        - arm64

  CustomerIdFromEmail:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ./customer-id-from-email
      Description: Get customer ID from { 'email' }
      MemorySize: 128
      Timeout: 3
      Handler: app.lambda_handler
      Runtime: python3.13
      Architectures:
        - arm64
        
  RunPythonCode:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ./run-python-code
      Description: Run Python code in the { 'input_script' }. Install modules if { 'install_modules' } is not an empty list.
      MemorySize: 1024
      Timeout: 60
      Handler: app.lambda_handler
      Runtime: python3.13
      Architectures:
        - arm64

Outputs:

  CustomerInfoFromId:
    Description: "CustomerInfoFromId Function ARN"
    Value: !GetAtt CustomerInfoFromId.Arn
    
  CustomerIdFromEmail:
    Description: "CustomerIdFromEmail Function ARN"
    Value: !GetAtt CustomerIdFromEmail.Arn
```

--------------------------------------------------------------------------------
/sample_functions/customer-info-from-id/app.py:
--------------------------------------------------------------------------------

```python
import json

def lambda_handler(event: dict, context: dict) -> dict:
    """
    AWS Lambda function to retrieve customer information based on customer ID.
    
    Args:
        event (dict): The Lambda event object containing the customer ID
                      Expected format: {"customerId": "123"}
        context (dict): AWS Lambda context object
        
    Returns:
        dict: Customer information if found, otherwise an error message
              Success format: {"customerId": "123", "name": "John Doe", "email": "[email protected]", ...}
              Error format: {"error": "Customer not found"}
    """
    try:
        # Extract customer ID from the event
        customer_id = event.get('customerId')
        
        if not customer_id:
            return {"error": "Missing customerId parameter"}
            
        # This would normally query a database
        # For demo purposes, we'll return mock data
        
        # Simulate database lookup
        if customer_id == "12345":
            return {
                "customerId": "12345",
                "name": "John Doe",
                "email": "[email protected]",
                "phone": "+1-555-123-4567",
                "address": {
                    "street": "123 Main St",
                    "city": "Anytown",
                    "state": "CA",
                    "zipCode": "12345"
                },
                "accountCreated": "2022-01-15"
            }
        else:
            return {"error": "Customer not found"}
            
    except Exception as e:
        return {"error": str(e)}

```

--------------------------------------------------------------------------------
/mcp_client_bedrock/mcp_client.py:
--------------------------------------------------------------------------------

```python
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from typing import Any, List

class MCPClient:
    def __init__(self, server_params: StdioServerParameters):
        self.server_params = server_params
        self.session = None
        self._client = None
        
    async def __aenter__(self):
        """Async context manager entry"""
        await self.connect()
        return self
        
    async def __aexit__(self, exc_type, exc_val, exc_tb):
        """Async context manager exit"""
        if self.session:
            await self.session.__aexit__(exc_type, exc_val, exc_tb)
        if self._client:
            await self._client.__aexit__(exc_type, exc_val, exc_tb)

    async def connect(self):
        """Establishes connection to MCP server"""
        self._client = stdio_client(self.server_params)
        self.read, self.write = await self._client.__aenter__()
        session = ClientSession(self.read, self.write)
        self.session = await session.__aenter__()
        await self.session.initialize()

    async def get_available_tools(self) -> List[Any]:
        """List available tools"""
        if not self.session:
            raise RuntimeError("Not connected to MCP server")
            
        tools = await self.session.list_tools()
        return tools.tools

    async def call_tool(self, tool_name: str, arguments: dict) -> Any:
        """Call a tool with given arguments"""
        if not self.session:
            raise RuntimeError("Not connected to MCP server")
            
        result = await self.session.call_tool(tool_name, arguments=arguments)
        return result

```

--------------------------------------------------------------------------------
/mcp_client_bedrock/main.py:
--------------------------------------------------------------------------------

```python
import asyncio
from mcp import StdioServerParameters
from converse_agent import ConverseAgent
from converse_tools import ConverseToolManager
from mcp_client import MCPClient

async def main():
    """
    Main function that sets up and runs an interactive AI agent with tool integration.
    The agent can process user prompts and utilize registered tools to perform tasks.
    """
    # Initialize model configuration
    model_id = "us.anthropic.claude-3-7-sonnet-20250219-v1:0"
    #model_id = "us.amazon.nova-pro-v1:0"
        
    # Set up the agent and tool manager
    agent = ConverseAgent(model_id)
    agent.tools = ConverseToolManager()

    # Define the agent's behavior through system prompt
    agent.system_prompt = """You are a helpful assistant that can use tools to help you answer 
questions and perform tasks."""

    # Create server parameters to launch the MCP2Lambda server over stdio
    server_params = StdioServerParameters(
        command="uv",
#       args=["--directory", "..", "run", "main.py", "--no-pre-discovery"],
        args=["--directory", "..", "run", "main.py"],
        env=None
    )

    # Initialize MCP client with server parameters
    async with MCPClient(server_params) as mcp_client:

        # Fetch available tools from the MCP client
        tools = await mcp_client.get_available_tools()

        # Register each available tool with the agent
        for tool in tools:
            agent.tools.register_tool(
                name=tool.name,
                func=mcp_client.call_tool,
                description=tool.description,
                input_schema={'json': tool.inputSchema}
            )

        # Start interactive prompt loop
        while True:
            try:
                # Get user input and check for exit commands
                user_prompt = input("\nEnter your prompt (or 'quit' to exit): ")
                if user_prompt.lower() in ['quit', 'exit', 'q']:
                    break
                
                # Process the prompt and display the response
                response = await agent.invoke_with_prompt(user_prompt)
                print("\nResponse:", response)
                
            except KeyboardInterrupt:
                print("\nExiting...")
                break
            except Exception as e:
                print(f"\nError occurred: {e}")

if __name__ == "__main__":
    # Run the async main function
    asyncio.run(main()) 
```

--------------------------------------------------------------------------------
/mcp_client_bedrock/converse_tools.py:
--------------------------------------------------------------------------------

```python
from typing import Any, Dict, List, Callable

class ConverseToolManager:
    def __init__(self):
        self._tools = {}
        self._name_mapping = {}  # Maps sanitized names to original names
    
    def _sanitize_name(self, name: str) -> str:
        """Convert hyphenated names to underscore format"""
        return name.replace('-', '_')
    
    def register_tool(self, name: str, func: Callable, description: str, input_schema: Dict):
        """
        Register a new tool with the system, sanitizing the name for Bedrock compatibility
        """
        sanitized_name = self._sanitize_name(name)
        self._name_mapping[sanitized_name] = name
        self._tools[sanitized_name] = {
            'function': func,
            'description': description,
            'input_schema': input_schema,
            'original_name': name
        }

    def get_tools(self) -> Dict[str, List[Dict]]:
        """
        Generate the tools specification using sanitized names
        """
        tool_specs = []
        for sanitized_name, tool in self._tools.items():
            tool_specs.append({
                'toolSpec': {
                    'name': sanitized_name,  # Use sanitized name for Bedrock
                    'description': tool['description'],
                    'inputSchema': tool['input_schema']
                }
            })
        
        return {'tools': tool_specs}

    async def execute_tool(self, payload: Dict[str, Any]) -> Dict[str, Any]:
        """
        Execute a tool based on the agent's request, handling name translation
        """
        tool_use_id = payload['toolUseId']
        sanitized_name = payload['name']
        tool_input = payload['input']

        if sanitized_name not in self._tools:
            raise ValueError(f"Unknown tool: {sanitized_name}")
        try:
            tool_func = self._tools[sanitized_name]['function']
            # Use original name when calling the actual function
            original_name = self._tools[sanitized_name]['original_name']
            result = await tool_func(original_name, tool_input)
            return {
                'toolUseId': tool_use_id,
                'content': [{
                    'text': str(result)
                }],
                'status': 'success'
            }
        except Exception as e:
            return {
                'toolUseId': tool_use_id,
                'content': [{
                    'text': f"Error executing tool: {str(e)}"
                }],
                'status': 'error'
            }

    def clear_tools(self):
        """Clear all registered tools"""
        self._tools.clear()
    
```

--------------------------------------------------------------------------------
/sample_functions/run-python-code/app.py:
--------------------------------------------------------------------------------

```python
import os
import subprocess
import json

TMP_DIR = "/tmp"


def remove_tmp_contents() -> None:
    """
    Remove all contents (files and directories) from the temporary directory.

    This function traverses the /tmp directory tree and removes all files and empty
    directories. It handles exceptions for each removal attempt and prints any
    errors encountered.
    """
    # Traverse the /tmp directory tree
    for root, dirs, files in os.walk(TMP_DIR, topdown=False):
        # Remove files
        for file in files:
            file_path: str = os.path.join(root, file)
            try:
                os.remove(file_path)
            except Exception as e:
                print(f"Error removing {file_path}: {e}")
        
        # Remove empty directories
        for dir in dirs:
            dir_path: str = os.path.join(root, dir)
            try:
                os.rmdir(dir_path)
            except Exception as e:
                print(f"Error removing {dir_path}: {e}")


def do_install_modules(modules: list[str], current_env: dict[str, str]) -> str:
    """
    Install Python modules using pip.

    This function takes a list of module names and attempts to install them
    into the /tmp directory using pip. Any installation error is printed and
    included in the returned output.

    Args:
        modules (list[str]): A list of module names to install.
        current_env (dict[str, str]): Environment variables, updated so that
                                      PYTHONPATH points to the install directory.

    Returns:
        str: An error message if an installation failed, otherwise an empty string.
    """

    output = ''

    if isinstance(modules, list) and len(modules) > 0:
        current_env["PYTHONPATH"] = TMP_DIR
        module = "pip"  # Tracks the package being installed, for error reporting
        try:
            _ = subprocess.run(f"pip install -U pip setuptools wheel -t {TMP_DIR} --no-cache-dir".split(), capture_output=True, text=True, check=True)
            for module in modules:
                _ = subprocess.run(f"pip install {module} -t {TMP_DIR} --no-cache-dir".split(), capture_output=True, text=True, check=True)
        except Exception as e:
            error_message = f"Error installing {module}: {e}"
            print(error_message)
            output += error_message

    return output


def lambda_handler(event: dict, context: dict) -> dict:
    """
    AWS Lambda function handler to execute Python code provided in the event.
    
    Args:
        event (dict): The Lambda event object containing the Python code to execute
                      Expected format: {"input_script": "your_python_code_as_string", "install_modules": ["optional_module_names"]}
        context (dict): AWS Lambda context object
        
    Returns:
        dict: Results of the code execution containing:
              - output (str): Output of the executed code or error message
    """
    remove_tmp_contents()

    output = ""
    current_env = os.environ.copy()

    # No need to go further if there is no script to run
    input_script = event.get('input_script', '')
    if len(input_script) == 0:
        return {
            'statusCode': 400,
            'body': 'Input script is required'
        }

    install_modules = event.get('install_modules', [])
    output += do_install_modules(install_modules, current_env)

    print(f"Script:\n{input_script}")
    
    result = subprocess.run(["python", "-c", input_script], env=current_env, capture_output=True, text=True)
    output += result.stdout + result.stderr

    print(f"Output: {output}")
    print(f"Len: {len(output)}")

    # After running the script
    remove_tmp_contents()

    result = {
        'output': output
    }

    return {
        'statusCode': 200,
        'body': json.dumps(result)
    }

```

--------------------------------------------------------------------------------
/mcp_client_bedrock/converse_agent.py:
--------------------------------------------------------------------------------

```python
import json
import re

import boto3

class ConverseAgent:
    def __init__(self, model_id, region='us-west-2', system_prompt='You are a helpful assistant.'):
        self.model_id = model_id
        self.region = region
        self.client = boto3.client('bedrock-runtime', region_name=self.region)
        self.system_prompt = system_prompt
        self.messages = []
        self.tools = None
        self.response_output_tags = [] # ['<response>', '</response>']

    async def invoke_with_prompt(self, prompt):
        content = [
            {
                'text': prompt
            }
        ]
        return await self.invoke(content)

    async def invoke(self, content):

        print(f"User: {json.dumps(content, indent=2)}")

        self.messages.append(
            {
                "role": "user", 
                "content": content
            }
        )
        response = self._get_converse_response()

        print(f"Agent: {json.dumps(response, indent=2)}")

        return await self._handle_response(response)

    def _get_converse_response(self):
        """
        https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/bedrock-runtime/client/converse.html
        """
        
        # print(f"Invoking with messages: {json.dumps(self.messages, indent=2)}")
        
        response = self.client.converse(
            modelId=self.model_id,
            messages=self.messages,
            system=[
                {
                    "text": self.system_prompt
                }
            ],
            inferenceConfig={
                "maxTokens": 4096,
                "temperature": 0.7,
            },
            toolConfig=self.tools.get_tools()
        )
        return response
    
    async def _handle_response(self, response):
        # Add the response to the conversation history
        self.messages.append(response['output']['message'])

        # Do we need to do anything else?
        stop_reason = response['stopReason']

        if stop_reason in ['end_turn', 'stop_sequence']:
            # Safely extract the text from the nested response structure
            try:
                message = response.get('output', {}).get('message', {})
                content = message.get('content', [])
                text = content[0].get('text', '')
                if hasattr(self, 'response_output_tags') and len(self.response_output_tags) == 2:
                    pattern = f"(?s).*{re.escape(self.response_output_tags[0])}(.*?){re.escape(self.response_output_tags[1])}"
                    match = re.search(pattern, text)
                    if match:
                        return match.group(1)
                return text
            except (KeyError, IndexError):
                return ''

        elif stop_reason == 'tool_use':
            try:
                # Extract tool use details from response
                tool_response = []
                for content_item in response['output']['message']['content']:
                    if 'toolUse' in content_item:
                        tool_request = {
                            "toolUseId": content_item['toolUse']['toolUseId'],
                            "name": content_item['toolUse']['name'],
                            "input": content_item['toolUse']['input']
                        }
                        
                        tool_result = await self.tools.execute_tool(tool_request)
                        tool_response.append({'toolResult': tool_result})
                
                return await self.invoke(tool_response)
                
            except KeyError as e:
                raise ValueError(f"Missing required tool use field: {e}")
            except Exception as e:
                raise ValueError(f"Failed to execute tool: {e}")

        elif stop_reason == 'max_tokens':
            # Hit the token limit. One way to handle it: ask the model to
            # continue and return its follow-up response.
            return await self.invoke_with_prompt('Please continue.')

        else:
            raise ValueError(f"Unknown stop reason: {stop_reason}")


```
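The tag extraction in `_handle_response` above can be tried in isolation. A minimal sketch, with illustrative tags and text (`response_output_tags` is an optional attribute that is not set anywhere in this file):

```python
import re

# Hypothetical output tags, matching the pattern built in _handle_response
response_output_tags = ["<answer>", "</answer>"]
text = "Some chain of thought... <answer>42</answer>"

# (?s) lets .* span newlines; the greedy prefix skips everything before the tags
pattern = f"(?s).*{re.escape(response_output_tags[0])}(.*?){re.escape(response_output_tags[1])}"
match = re.search(pattern, text)
result = match.group(1) if match else text  # fall back to the full text when no tags are found
print(result)  # 42
```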

--------------------------------------------------------------------------------
/sample_functions/run-python-code/lambda_function.py:
--------------------------------------------------------------------------------

```python
import base64
import json
import os
import subprocess
from typing import Dict, Any

TMP_DIR = "/tmp"

IMAGE_EXTENSIONS = ['png', 'jpeg', 'jpg', 'gif', 'webp']

# To avoid "Matplotlib created a temporary cache directory..." warning
os.environ['MPLCONFIGDIR'] = os.path.join(TMP_DIR, f'matplotlib_{os.getpid()}')


def remove_tmp_contents() -> None:
    """
    Remove all contents (files and directories) from the temporary directory.

    This function traverses the /tmp directory tree and removes all files and empty
    directories. It handles exceptions for each removal attempt and prints any
    errors encountered.
    """
    # Traverse the /tmp directory tree
    for root, dirs, files in os.walk(TMP_DIR, topdown=False):
        # Remove files
        for file in files:
            file_path: str = os.path.join(root, file)
            try:
                os.remove(file_path)
            except Exception as e:
                print(f"Error removing {file_path}: {e}")
        
        # Remove empty directories
        for dir in dirs:
            dir_path: str = os.path.join(root, dir)
            try:
                os.rmdir(dir_path)
            except Exception as e:
                print(f"Error removing {dir_path}: {e}")


def do_install_modules(modules: list[str], current_env: dict[str, str]) -> str:
    """
    Install Python modules into the temporary directory using pip.

    Because the Lambda filesystem is read-only outside of /tmp, modules are
    installed with pip's --target option pointing at the temporary directory,
    and PYTHONPATH is updated so the executed script can import them. Any
    installation error is printed and appended to the returned output.

    Args:
        modules (list[str]): A list of module names to install.
        current_env (dict[str, str]): Environment variables for the script run;
            PYTHONPATH is set to the temporary directory.

    Returns:
        str: Error messages accumulated during installation, if any.
    """

    output = ''

    if isinstance(modules, list) and len(modules) > 0:
        current_env["PYTHONPATH"] = TMP_DIR
        module = "pip setuptools wheel"  # tracks the current install target for error reporting
        try:
            _ = subprocess.run(f"pip install -U pip setuptools wheel -t {TMP_DIR} --no-cache-dir".split(), capture_output=True, text=True, check=True)
            for module in modules:
                _ = subprocess.run(f"pip install {module} -t {TMP_DIR} --no-cache-dir".split(), capture_output=True, text=True, check=True)
        except Exception as e:
            error_message = f"Error installing {module}: {e}"
            print(error_message)
            output += error_message

    return output


def lambda_handler(event: Dict[str, Any], context: Any) -> Dict[str, Any]:
    """
    AWS Lambda function handler that executes a Python script and processes its output.

    This function takes an input Python script, executes it, captures the output,
    and processes any generated images. It also handles temporary file management.

    Args:
        event (Dict[str, Any]): The event dict containing the Lambda function input.
        context (Any): The context object provided by AWS Lambda.

    Returns:
        Dict[str, Any]: A dictionary containing the execution results:
            - statusCode (int): HTTP status code (200 for success, 400 for bad request)
            - body (str): Error message on bad request; on success, a JSON string with:
                - output (str): The combined stdout and stderr output from the script execution
                - images (List[Dict[str, str]]): List of dictionaries containing image data
    """
    # Before running the script
    remove_tmp_contents()

    output = ""
    current_env = os.environ.copy()

    # No need to go further if there is no script to run
    input_script = event.get('input_script', '')
    if len(input_script) == 0:
        return {
            'statusCode': 400,
            'body': 'Input script is required'
        }

    install_modules = event.get('install_modules', [])
    output += do_install_modules(install_modules, current_env)

    print(f"Script:\n{input_script}")
    
    result = subprocess.run(["python", "-c", input_script], env=current_env, capture_output=True, text=True)
    output += result.stdout + result.stderr

    # Search for images and convert them to base64
    images = []

    for file in os.listdir(TMP_DIR):
        file_path: str = os.path.join(TMP_DIR, file)
        if os.path.isfile(file_path) and any(file.lower().endswith(f".{ext}") for ext in IMAGE_EXTENSIONS):
            try:
                # Read file content
                with open(file_path, "rb") as f:
                    file_content: bytes = f.read()
                    images.append({
                        "path": file_path,
                        "base64": base64.b64encode(file_content).decode('utf-8')
                    })
                output += f"File {file_path} loaded.\n"
            except Exception as e:
                output += f"Error loading {file_path}: {e}"

    print(f"Output: {output}")
    print(f"Len: {len(output)}")
    print(f"Images: {len(images)}")

    # After running the script
    remove_tmp_contents()

    result: Dict[str, Any] = {
        'output': output,
        'images': images
    }

    return {
        'statusCode': 200,
        'body': json.dumps(result)
    }

```
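The response body returned by `lambda_handler` is a JSON string wrapping `output` and `images`. A minimal client-side sketch of decoding it (the body below is hand-built to match that shape, not a real invocation):

```python
import base64
import json

# Hand-built response body mirroring what lambda_handler returns on success
body = json.dumps({
    "output": "hello\n",
    "images": [{"path": "/tmp/plot.png", "base64": base64.b64encode(b"\x89PNG").decode("utf-8")}],
})

result = json.loads(body)
print(result["output"])  # the script's combined stdout and stderr

for image in result["images"]:
    data = base64.b64decode(image["base64"])  # raw image bytes, ready to write to disk
```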

--------------------------------------------------------------------------------
/main.py:
--------------------------------------------------------------------------------

```python
import json
import os
import re
import argparse

from mcp.server.fastmcp import FastMCP, Context
import boto3

# Strategy selection: pre-discovery (the default) registers each Lambda function
# as an individual tool at startup; disabling it falls back to the generic
# list and invoke tools
parser = argparse.ArgumentParser(description='MCP Gateway to AWS Lambda')
parser.add_argument('--no-pre-discovery', 
                   action='store_true',
                   help='Disable registering Lambda functions as individual tools at startup')

# Parse arguments and set default configuration
args = parser.parse_args()

# Check environment variable first (takes precedence if set)
if 'PRE_DISCOVERY' in os.environ:
    PRE_DISCOVERY = os.environ.get('PRE_DISCOVERY').lower() == 'true'
else:
    # Otherwise use CLI argument (default is enabled, --no-pre-discovery disables)
    PRE_DISCOVERY = not args.no_pre_discovery

AWS_REGION = os.environ.get("AWS_REGION", "us-east-1")
FUNCTION_PREFIX = os.environ.get("FUNCTION_PREFIX", "mcp2lambda-")
FUNCTION_LIST = json.loads(os.environ.get("FUNCTION_LIST", "[]"))

mcp = FastMCP("MCP Gateway to AWS Lambda")

lambda_client = boto3.client("lambda", region_name=AWS_REGION)


def validate_function_name(function_name: str) -> bool:
    """Validate that the function name is valid and can be called."""
    return function_name.startswith(FUNCTION_PREFIX) or function_name in FUNCTION_LIST


def sanitize_tool_name(name: str) -> str:
    """Sanitize a Lambda function name to be used as a tool name."""
    # Remove prefix if present
    if name.startswith(FUNCTION_PREFIX):
        name = name[len(FUNCTION_PREFIX):]
    
    # Replace invalid characters with underscore
    name = re.sub(r'[^a-zA-Z0-9_]', '_', name)
    
    # Ensure name doesn't start with a number
    if name and name[0].isdigit():
        name = "_" + name
    
    return name


def format_lambda_response(function_name: str, payload: bytes) -> str:
    """Format the Lambda function response payload."""
    try:
        # Try to parse the payload as JSON
        payload_json = json.loads(payload)
        return f"Function {function_name} returned: {json.dumps(payload_json, indent=2)}"
    except (json.JSONDecodeError, UnicodeDecodeError):
        # Return the raw payload, decoded leniently, if it is not valid JSON
        return f"Function {function_name} returned payload: {payload.decode('utf-8', errors='replace')}"


# Define the generic tool functions that can be used directly or as fallbacks
def list_lambda_functions_impl(ctx: Context) -> str:
    """Tool that lists all AWS Lambda functions that you can call as tools.
    Use this list to understand what these functions are and what they do.
    This functions can help you in many different ways."""

    ctx.info("Calling AWS Lambda ListFunctions...")

    functions = lambda_client.list_functions()

    ctx.info(f"Found {len(functions['Functions'])} functions")

    functions_with_prefix = [
        f for f in functions["Functions"] if validate_function_name(f["FunctionName"])
    ]

    ctx.info(f"Found {len(functions_with_prefix)} functions with prefix {FUNCTION_PREFIX}")
    
    # Pass only function names and descriptions to the model
    function_names_and_descriptions = [ 
        {field: f[field] for field in ["FunctionName", "Description"] if field in f}
        for f in functions_with_prefix
    ]
    
    return json.dumps(function_names_and_descriptions)


def invoke_lambda_function_impl(function_name: str, parameters: dict, ctx: Context) -> str:
    """Tool that invokes an AWS Lambda function with a JSON payload.
    Before using this tool, list the functions available to you."""
    
    if not validate_function_name(function_name):
        return f"Function {function_name} is not valid"

    ctx.info(f"Invoking {function_name} with parameters: {parameters}")

    response = lambda_client.invoke(
        FunctionName=function_name,
        InvocationType="RequestResponse",
        Payload=json.dumps(parameters),
    )

    ctx.info(f"Function {function_name} returned with status code: {response['StatusCode']}")

    if "FunctionError" in response:
        error_message = f"Function {function_name} returned with error: {response['FunctionError']}"
        ctx.error(error_message)
        return error_message

    payload = response["Payload"].read()
    
    # Format the response payload
    return format_lambda_response(function_name, payload)


# Register the original tools if not using dynamic tools
if not PRE_DISCOVERY:
    # Register the generic tool functions with MCP
    mcp.tool()(list_lambda_functions_impl)
    mcp.tool()(invoke_lambda_function_impl)
    print("Using generic Lambda tools strategy...")


def create_lambda_tool(function_name: str, description: str):
    """Create a tool function for a Lambda function."""
    # Create a meaningful tool name
    tool_name = sanitize_tool_name(function_name)
    
    # Define the inner function
    def lambda_function(parameters: dict, ctx: Context) -> str:
        """Tool for invoking a specific AWS Lambda function with parameters."""
        # Use the same implementation as the generic invoke function
        return invoke_lambda_function_impl(function_name, parameters, ctx)
    
    # Set the function's documentation
    lambda_function.__doc__ = description
    
    # Apply the decorator manually with the specific name
    decorated_function = mcp.tool(name=tool_name)(lambda_function)
    
    return decorated_function


# Register Lambda functions as individual tools if dynamic strategy is enabled
if PRE_DISCOVERY:
    try:
        print("Using dynamic Lambda function registration strategy...")
        functions = lambda_client.list_functions()
        valid_functions = [
            f for f in functions["Functions"] if validate_function_name(f["FunctionName"])
        ]
        
        print(f"Dynamically registering {len(valid_functions)} Lambda functions as tools...")
        
        for function in valid_functions:
            function_name = function["FunctionName"]
            description = function.get("Description", f"AWS Lambda function: {function_name}")
            
            # Extract information about parameters from the description if available
            if "Expected format:" in description:
                # Add parameter information to the description
                parameter_info = description.split("Expected format:")[1].strip()
                description = f"{description}\n\nParameters: {parameter_info}"
            
            # Register the Lambda function as a tool
            create_lambda_tool(function_name, description)
        
        print("Lambda functions registered successfully as individual tools.")
    
    except Exception as e:
        print(f"Error registering Lambda functions as tools: {e}")
        print("Falling back to generic Lambda tools...")
        
        # Register the generic tool functions with MCP as fallback
        mcp.tool()(list_lambda_functions_impl)
        mcp.tool()(invoke_lambda_function_impl)


if __name__ == "__main__":
    mcp.run()

```
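As a quick illustration of the naming rules in `sanitize_tool_name`, the snippet below repeats the same logic standalone, using the default `mcp2lambda-` prefix:

```python
import re

FUNCTION_PREFIX = "mcp2lambda-"  # same default as in main.py

def sanitize_tool_name(name: str) -> str:
    # Mirrors main.py: strip the prefix, replace invalid characters,
    # and avoid a leading digit
    if name.startswith(FUNCTION_PREFIX):
        name = name[len(FUNCTION_PREFIX):]
    name = re.sub(r'[^a-zA-Z0-9_]', '_', name)
    if name and name[0].isdigit():
        name = "_" + name
    return name

print(sanitize_tool_name("mcp2lambda-customer-info-from-id"))  # customer_info_from_id
print(sanitize_tool_name("123-report"))                        # _123_report
```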