# Directory Structure

```
├── .gitignore
├── .python-version
├── Dockerfile
├── LICENSE
├── main.py
├── mcp_client_bedrock
│   ├── converse_agent.py
│   ├── converse_tools.py
│   ├── main.py
│   ├── mcp_client.py
│   ├── pyproject.toml
│   ├── README.md
│   └── uv.lock
├── pyproject.toml
├── README.md
├── sample_functions
│   ├── customer-id-from-email
│   │   └── app.py
│   ├── customer-info-from-id
│   │   └── app.py
│   ├── run-python-code
│   │   ├── app.py
│   │   └── lambda_function.py
│   ├── samconfig.toml
│   └── template.yml
├── smithery.yaml
└── uv.lock
```

# Files

--------------------------------------------------------------------------------
/.python-version:
--------------------------------------------------------------------------------

```
1 | 3.12
2 | 
```

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
 1 | # Python-generated files
 2 | __pycache__/
 3 | *.py[oc]
 4 | build/
 5 | dist/
 6 | wheels/
 7 | *.egg-info
 8 | 
 9 | # Virtual environments
10 | .venv
11 | .DS_Store
```

--------------------------------------------------------------------------------
/mcp_client_bedrock/README.md:
--------------------------------------------------------------------------------

```markdown
1 | This is a demo of Anthropic's open source Model Context Protocol (MCP) used with the Amazon Bedrock Converse API. This combination allows MCP to be used with any of the many models supported by the Converse API.
2 | 
3 | See https://github.com/mikegc-aws/amazon-bedrock-mcp for more information.
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
  1 | # MCP2Lambda
  2 | 
  3 | [![smithery badge](https://smithery.ai/badge/@danilop/MCP2Lambda)](https://smithery.ai/server/@danilop/MCP2Lambda)
  4 | 
  5 | <a href="https://glama.ai/mcp/servers/4hokv207sz">
  6 |   <img width="380" height="200" src="https://glama.ai/mcp/servers/4hokv207sz/badge" alt="MCP2Lambda MCP server" />
  7 | </a>
  8 | 
  9 | Run any [AWS Lambda](https://aws.amazon.com/lambda/) function as a Large Language Model (LLM) **tool** without code changes using [Anthropic](https://www.anthropic.com)'s [Model Context Protocol (MCP)](https://github.com/modelcontextprotocol).
 10 | 
 11 | ```mermaid
 12 | graph LR
 13 |     A[Model] <--> B[MCP Client]
 14 |     B <--> C["MCP2Lambda<br>(MCP Server)"]
 15 |     C <--> D[Lambda Function]
 16 |     D <--> E[Other AWS Services]
 17 |     D <--> F[Internet]
 18 |     D <--> G[VPC]
 19 |     
 20 |     style A fill:#f9f,stroke:#333,stroke-width:2px
 21 |     style B fill:#bbf,stroke:#333,stroke-width:2px
 22 |     style C fill:#bfb,stroke:#333,stroke-width:4px
 23 |     style D fill:#fbb,stroke:#333,stroke-width:2px
 24 |     style E fill:#fbf,stroke:#333,stroke-width:2px
 25 |     style F fill:#dff,stroke:#333,stroke-width:2px
 26 |     style G fill:#ffd,stroke:#333,stroke-width:2px
 27 | ```
 28 | 
 29 | This MCP server acts as a **bridge** between MCP clients and AWS Lambda functions, allowing generative AI models to access and run Lambda functions as tools. This is useful, for example, to access private resources such as internal applications and databases without the need to provide public network access. This approach allows the model to use other AWS services, private networks, and the public internet.
 30 | 
 31 | From a **security** perspective, this approach implements segregation of duties by allowing the model to invoke the Lambda functions but not to access the other AWS services directly. The client only needs AWS credentials to invoke the Lambda functions. The Lambda functions can then interact with other AWS services (using the function role) and access public or private networks.
 32 | 
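As a minimal illustration of this segregation of duties (not part of the repository), the client credentials can be limited to an invoke-only IAM policy scoped to the MCP2Lambda functions; the policy name and resource pattern below are assumptions for the sketch:

```python
import json

import boto3

# Invoke-only policy: the MCP client can call the Lambda functions,
# but has no direct access to the services those functions use.
invoke_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "lambda:InvokeFunction",
            "Resource": "arn:aws:lambda:*:*:function:mcp2lambda-*",
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="mcp2lambda-invoke-only",  # hypothetical name
    PolicyDocument=json.dumps(invoke_only_policy),
)
```
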
 33 | The MCP server gives access to two tools:
 34 | 
 35 | 1. The first tool can **autodiscover** all Lambda functions in your account that match a prefix or an allowed list of names. This tool shares the names of the functions and their descriptions with the model.
 36 | 
 37 | 2. The second tool can **invoke** those Lambda functions by name, passing the required parameters, as sketched in the example below.
 38 | 
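Here is a minimal sketch of how an MCP client could call these two tools over stdio, using the same `mcp` Python SDK that `mcp_client_bedrock/mcp_client.py` uses. It assumes the server runs in generic mode (for example `PRE_DISCOVERY=false`); the registered tool names and the deployed function name are placeholders, so check `main.py` and your AWS account for the real values:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def demo() -> None:
    # Launch the MCP2Lambda server as a subprocess speaking MCP over stdio
    server = StdioServerParameters(command="uv", args=["run", "main.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # 1. Autodiscover the Lambda functions exposed as tools
            listing = await session.call_tool("list_lambda_functions", arguments={})
            print(listing)
            # 2. Invoke one of them by name with its parameters
            result = await session.call_tool(
                "invoke_lambda_function",  # tool name as documented above
                arguments={
                    "function_name": "mcp2lambda-CustomerIdFromEmail",  # placeholder
                    "parameters": {"email": "user@example.com"},  # hypothetical value
                },
            )
            print(result)


asyncio.run(demo())
```
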
 39 | No code changes are required. However, you can adjust the following configuration options to improve results:
 40 | 
 41 | ## Strategy Selection
 42 | 
 43 | The gateway supports two different strategies for handling Lambda functions:
 44 | 
 45 | 1. **Pre-Discovery Mode** (default: enabled): Registers each Lambda function as an individual tool at startup. This provides a more intuitive interface where each function appears as its own named tool.
 46 | 
 47 | 2. **Generic Mode**: Uses two generic tools (`list_lambda_functions` and `invoke_lambda_function`) to interact with Lambda functions.
 48 | 
 49 | You can control this behavior through:
 50 | 
 51 | - Environment variable: `PRE_DISCOVERY=true|false`
 52 | - CLI flag: `--no-pre-discovery` (disables pre-discovery mode)
 53 | 
 54 | Example:
 55 | ```bash
 56 | # Disable pre-discovery mode
 57 | export PRE_DISCOVERY=false
 58 | python main.py
 59 | 
 60 | # Or using CLI flag to disable pre-discovery
 61 | python main.py --no-pre-discovery
 62 | ```
 63 | 
 64 | 1. To provide the MCP client with the knowledge to use a Lambda function, the **description of the Lambda function** should indicate what the function does and which parameters it uses (see the sketch after this list). See the sample functions for a quick demo and more details.
 65 | 
 66 | 2. To help the model use the tools available via AWS Lambda, you can add something like this to your **system prompt**:
 67 | 
 68 | ```
 69 | Use the AWS Lambda tools to improve your answers.
 70 | ```
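
Expanding on point 1 above, the sample functions set their parameter-describing descriptions in `sample_functions/template.yml`. For a function you already own, one hypothetical way to set such a description is through the AWS API; the function name below is a placeholder:

```python
import boto3

lambda_client = boto3.client("lambda")

# Describe what the function does and which parameters it expects,
# so the model knows how to call it as a tool.
lambda_client.update_function_configuration(
    FunctionName="mcp2lambda-CustomerIdFromEmail",  # placeholder name
    Description="Get customer ID from { 'email' }",
)
```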
 71 | 
 72 | ## Overview
 73 | 
 74 | MCP2Lambda enables LLMs to interact with AWS Lambda functions as tools, extending their capabilities beyond text generation. This allows models to:
 75 | 
 76 | - Access real-time and private data, including data sources in your VPCs
 77 | - Execute custom code using a Lambda function as a sandbox environment
 78 | - Interact with external services and APIs using a Lambda function's internet access (and bandwidth)
 79 | - Perform specialized calculations or data processing
 80 | 
 81 | The server uses the MCP protocol, which standardizes the way AI models can access external tools.
 82 | 
 83 | By default, only functions whose names start with `mcp2lambda-` are available to the model. You can change this prefix with the `FUNCTION_PREFIX` environment variable, or provide an explicit allow list of function names with the `FUNCTION_LIST` environment variable (a JSON array of names).
 84 | 
 85 | ## Prerequisites
 86 | 
 87 | - Python 3.12 or higher
 88 | - AWS account with configured credentials
 89 | - AWS Lambda functions (sample functions provided in the repo)
 90 | - An application using [Amazon Bedrock](https://aws.amazon.com/bedrock/) with the [Converse API](https://docs.aws.amazon.com/bedrock/latest/userguide/converse.html)
 91 | - An MCP-compatible client like [Claude Desktop](https://docs.anthropic.com/en/docs/claude-desktop)
 92 | 
 93 | ## Installation
 94 | 
 95 | ### Installing via Smithery
 96 | 
 97 | To install MCP2Lambda for Claude Desktop automatically via [Smithery](https://smithery.ai/server/@danilop/MCP2Lambda):
 98 | 
 99 | ```bash
100 | npx -y @smithery/cli install @danilop/MCP2Lambda --client claude
101 | ```
102 | 
103 | ### Manual Installation
104 | 1. Clone the repository:
105 |    ```
106 |    git clone https://github.com/yourusername/mcp2lambda.git
107 |    cd mcp2lambda
108 |    ```
109 | 
110 | 2. Configure AWS credentials. For example, using the [AWS CLI](https://aws.amazon.com/cli):
111 |    ```
112 |    aws configure
113 |    ```
114 | 
115 | ## Sample Lambda Functions
116 | 
117 | This repository includes three *sample* Lambda functions that demonstrate different use cases. These functions have basic permissions and can only write to CloudWatch logs.
118 | 
119 | ### CustomerIdFromEmail
120 | Retrieves a customer ID based on an email address. This function takes an email parameter and returns the associated customer ID, demonstrating how to build simple lookup tools. The function is hard coded to reply to the `[email protected]` email address. For example, you can ask the model to get the customer ID for the email `[email protected]`.
121 | 
122 | ### CustomerInfoFromId
123 | Retrieves detailed customer information based on a customer ID. This function returns customer details like name, email, and status, showing how Lambda can provide context-specific data. The function is hard coded to reply to the customer ID returned by the previous function. For example, you can ask the model to "Get the customer status for email `[email protected]`". This will use both functions to get to the result.
124 | 
125 | ### RunPythonCode
126 | Executes arbitrary Python code within a Lambda sandbox environment. This powerful function allows Claude to write and run Python code to perform calculations, data processing, or other operations not built into the model. For example, you can ask the model to "Calculate the number of prime numbers between 1 and 10, 1 and 100, and so on up to 1M".
127 | 
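For reference, the handler for this sample (see `sample_functions/run-python-code/app.py`) expects an event with an `input_script` string and an optional `install_modules` list. A hypothetical direct invocation, outside of MCP, could look like this (the function name suffix assigned by SAM is a placeholder):

```python
import json

import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")

response = lambda_client.invoke(
    FunctionName="mcp2lambda-RunPythonCode-XXXXXXXXXXXX",  # placeholder
    InvocationType="RequestResponse",
    Payload=json.dumps({
        # Count the primes below 100 (prints 25)
        "input_script": "print(sum(1 for n in range(2, 100) if all(n % d for d in range(2, n))))",
        "install_modules": [],
    }),
)
print(response["Payload"].read().decode())
```
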
128 | ## Deploying Sample Lambda Functions
129 | 
130 | The repository includes sample Lambda functions in the `sample_functions` directory.
131 | 
132 | 1. Install the AWS SAM CLI: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/install-sam-cli.html
133 | 
134 | 2. Deploy the sample functions:
135 |    ```
136 |    cd sample_functions
137 |    sam build
138 |    sam deploy
139 |    ```
140 | 
141 | The sample functions will be deployed with the prefix `mcp2lambda-`.
142 | 
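A quick, optional way to verify the deployment is to list the functions that MCP2Lambda will expose, i.e. those whose names start with the default prefix (a small sketch using boto3):

```python
import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")

# Page through all functions and keep the ones MCP2Lambda will expose
paginator = lambda_client.get_paginator("list_functions")
for page in paginator.paginate():
    for function in page["Functions"]:
        if function["FunctionName"].startswith("mcp2lambda-"):
            print(function["FunctionName"], "-", function.get("Description", ""))
```
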
143 | ## Using with Amazon Bedrock
144 | 
145 | MCP2Lambda can also be used with Amazon Bedrock's Converse API, allowing you to use the MCP protocol with any of the models supported by Bedrock.
146 | 
147 | The `mcp_client_bedrock` directory contains a client implementation that connects MCP2Lambda to Amazon Bedrock models.
148 | 
149 | See https://github.com/mikegc-aws/amazon-bedrock-mcp for more information.
150 | 
151 | ### Prerequisites
152 | 
153 | - Amazon Bedrock access and permissions to use models like Claude, Mistral, Llama, etc.
154 | - Boto3 configured with appropriate credentials
155 | 
156 | ### Installation and Setup
157 | 
158 | 1. Navigate to the mcp_client_bedrock directory:
159 |    ```
160 |    cd mcp_client_bedrock
161 |    ```
162 | 
163 | 2. Install dependencies:
164 |    ```
165 |    uv pip install -e .
166 |    ```
167 | 
168 | 3. Run the client:
169 |    ```
170 |    python main.py
171 |    ```
172 | 
173 | ### Configuration
174 | 
175 | The client is configured to use Anthropic's Claude 3.7 Sonnet by default, but you can modify the `model_id` in `main.py` to use other Bedrock models:
176 | 
177 | ```python
178 | # Examples of supported models:
179 | model_id = "us.anthropic.claude-3-7-sonnet-20250219-v1:0"
180 | #model_id = "us.amazon.nova-pro-v1:0"
181 | ```
182 | 
183 | You can also customize the system prompt in the same file to change how the model behaves.
184 | 
185 | ### Usage
186 | 
187 | 1. Start the MCP2Lambda server in one terminal:
188 |    ```
189 |    cd mcp2lambda
190 |    uv run main.py
191 |    ```
192 | 
193 | 2. Run the Bedrock client in another terminal:
194 |    ```
195 |    cd mcp_client_bedrock
196 |    python main.py
197 |    ```
198 | 
199 | 3. Interact with the model through the command-line interface. The model will have access to the Lambda functions deployed earlier.
200 | 
201 | ## Using with Claude Desktop
202 | 
203 | Add the following to your Claude Desktop configuration file:
204 | 
205 | ```json
206 | {
207 |   "mcpServers": {
208 |     "mcp2lambda": {
209 |       "command": "uv",
210 |       "args": [
211 |         "--directory",
212 |         "<full path to the mcp2lambda directory>",
213 |         "run",
214 |         "main.py"
215 |       ]
216 |     }
217 |   }
218 | }
219 | ```
220 | 
221 | To help the model use the tools available via AWS Lambda, you can add a sentence like the following to the personal preferences in your Claude settings profile:
222 | 
223 | ```
224 | Use the AWS Lambda tools to improve your answers.
225 | ```
226 | 
227 | ## Starting the MCP Server
228 | 
229 | Start the MCP server locally:
230 | 
231 | ```sh
232 | cd mcp2lambda
233 | uv run main.py
234 | ```
```

--------------------------------------------------------------------------------
/sample_functions/samconfig.toml:
--------------------------------------------------------------------------------

```toml
1 | version = 0.1
2 | [default.deploy.parameters]
3 | stack_name = "mcp2lambda"
4 | resolve_s3 = true
5 | s3_prefix = "mcp2lambda"
6 | region = "us-east-1"
7 | capabilities = "CAPABILITY_IAM"
8 | image_repositories = []
9 | 
```

--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------

```toml
 1 | [project]
 2 | name = "mcp2lambda"
 3 | version = "0.1.0"
 4 | description = "MCP2Lambda - A bridge between MCP clients and AWS Lambda functions"
 5 | readme = "README.md"
 6 | requires-python = ">=3.12"
 7 | dependencies = [
 8 |     "boto3>=1.37.0",
 9 |     "mcp==1.3.0",
10 | ]
11 | 
12 | [tool.uv.workspace]
 13 | members = ["mcp_client_bedrock"]
14 | 
```

--------------------------------------------------------------------------------
/mcp_client_bedrock/pyproject.toml:
--------------------------------------------------------------------------------

```toml
 1 | [project]
 2 | name = "mcp-client-bedrock"
 3 | version = "0.1.0"
 4 | description = "Sample MCP client implementation for Amazon Bedrock (see https://github.com/mikegc-aws/amazon-bedrock-mcp for more information)"
 5 | readme = "README.md"
 6 | requires-python = ">=3.12"
 7 | dependencies = [
 8 |     "boto3>=1.37.0",
 9 |     "mcp==1.3.0",
10 | ]
11 | 
```

--------------------------------------------------------------------------------
/smithery.yaml:
--------------------------------------------------------------------------------

```yaml
 1 | # Smithery configuration file: https://smithery.ai/docs/config#smitheryyaml
 2 | 
 3 | startCommand:
 4 |   type: stdio
 5 |   configSchema:
 6 |     # JSON Schema defining the configuration options for the MCP.
 7 |     {}
 8 |   commandFunction:
 9 |     # A function that produces the CLI command to start the MCP on stdio.
10 |     |-
11 |     (config) => ({ command: 'python', args: ['main.py'] })
12 |   exampleConfig: {}
13 | 
```

--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------

```dockerfile
 1 | # Generated by https://smithery.ai. See: https://smithery.ai/docs/config#dockerfile
 2 | FROM python:3.12-slim
 3 | 
 4 | WORKDIR /app
 5 | 
 6 | # Copy all project files into container
 7 | COPY . .
 8 | 
 9 | # Create a setup.cfg to restrict package discovery and avoid multiple top-level packages
10 | RUN echo "[metadata]\nname = mcp2lambda\nversion = 0.1.0\n\n[options]\npy_modules = main" > setup.cfg
11 | 
12 | # Upgrade pip and install the package without caching
13 | RUN pip install --upgrade pip \
14 |     && pip install . --no-cache-dir
15 | 
16 | CMD ["python", "main.py"]
17 | 
```

--------------------------------------------------------------------------------
/sample_functions/customer-id-from-email/app.py:
--------------------------------------------------------------------------------

```python
 1 | def lambda_handler(event: dict, context: dict) -> dict:
 2 |     """
 3 |     AWS Lambda function to retrieve customer ID based on customer email address.
 4 |     
 5 |     Args:
 6 |         event (dict): The Lambda event object containing the customer email
 7 |                       Expected format: {"email": "[email protected]"}
 8 |         context (dict): AWS Lambda context object
 9 |         
10 |     Returns:
11 |         dict: Customer ID if found, otherwise an error message
12 |               Success format: {"customerId": "123"}
13 |               Error format: {"error": "Customer not found"}
14 |     """
15 |     try:
16 |         # Extract email from the event
17 |         email = event.get('email')
18 |         
19 |         if not email:
20 |             return {"error": "Missing email parameter"}
21 |             
22 |         # This would normally query a database
23 |         # For demo purposes, we'll return mock data
24 |         
25 |         # Simulate database lookup
26 |         if email == "[email protected]":
27 |             return {"customerId": "12345"}
28 |         else:
29 |             return {"error": "Customer not found"}
30 |             
31 |     except Exception as e:
32 |         return {"error": str(e)}
33 | 
```

--------------------------------------------------------------------------------
/sample_functions/template.yml:
--------------------------------------------------------------------------------

```yaml
 1 | AWSTemplateFormatVersion: '2010-09-09'
 2 | Transform: AWS::Serverless-2016-10-31
 3 | Description: Sample functions for MCP servers.
 4 | 
 5 | Resources:
 6 | 
 7 |   CustomerInfoFromId:
 8 |     Type: AWS::Serverless::Function
 9 |     Properties:
10 |       CodeUri: ./customer-info-from-id
11 |       Description: Customer status from { 'customerId' }
12 |       MemorySize: 128
13 |       Timeout: 3
14 |       Handler: app.lambda_handler
15 |       Runtime: python3.13
16 |       Architectures:
17 |         - arm64
18 | 
19 |   CustomerIdFromEmail:
20 |     Type: AWS::Serverless::Function
21 |     Properties:
22 |       CodeUri: ./customer-id-from-email
23 |       Description: Get customer ID from { 'email' }
24 |       MemorySize: 128
25 |       Timeout: 3
26 |       Handler: app.lambda_handler
27 |       Runtime: python3.13
28 |       Architectures:
29 |         - arm64
30 |         
31 |   RunPythonCode:
32 |     Type: AWS::Serverless::Function
33 |     Properties:
34 |       CodeUri: ./run-python-code
35 |       Description: Run Python code in the { 'input_script' }. Install modules if { 'install_modules' } is not an empty list.
36 |       MemorySize: 1024
37 |       Timeout: 60
38 |       Handler: app.lambda_handler
39 |       Runtime: python3.13
40 |       Architectures:
41 |         - arm64
42 | 
43 | Outputs:
44 | 
45 |   CustomerInfoFromId:
46 |     Description: "CustomerInfoFromId Function ARN"
47 |     Value: !GetAtt CustomerInfoFromId.Arn
48 |     
49 |   CustomerIdFromEmail:
50 |     Description: "CustomerIdFromEmail Function ARN"
51 |     Value: !GetAtt CustomerIdFromEmail.Arn
```

--------------------------------------------------------------------------------
/sample_functions/customer-info-from-id/app.py:
--------------------------------------------------------------------------------

```python
 1 | import json
 2 | 
 3 | def lambda_handler(event: dict, context: dict) -> dict:
 4 |     """
 5 |     AWS Lambda function to retrieve customer information based on customer ID.
 6 |     
 7 |     Args:
 8 |         event (dict): The Lambda event object containing the customer ID
 9 |                       Expected format: {"customerId": "123"}
10 |         context (dict): AWS Lambda context object
11 |         
12 |     Returns:
13 |         dict: Customer information if found, otherwise an error message
14 |               Success format: {"customerId": "123", "name": "John Doe", "email": "[email protected]", ...}
15 |               Error format: {"error": "Customer not found"}
16 |     """
17 |     try:
18 |         # Extract customer ID from the event
19 |         customer_id = event.get('customerId')
20 |         
21 |         if not customer_id:
22 |             return {"error": "Missing customerId parameter"}
23 |             
24 |         # This would normally query a database
25 |         # For demo purposes, we'll return mock data
26 |         
27 |         # Simulate database lookup
28 |         if customer_id == "12345":
29 |             return {
30 |                 "customerId": "12345",
31 |                 "name": "John Doe",
32 |                 "email": "[email protected]",
33 |                 "phone": "+1-555-123-4567",
34 |                 "address": {
35 |                     "street": "123 Main St",
36 |                     "city": "Anytown",
37 |                     "state": "CA",
38 |                     "zipCode": "12345"
39 |                 },
40 |                 "accountCreated": "2022-01-15"
41 |             }
42 |         else:
43 |             return {"error": "Customer not found"}
44 |             
45 |     except Exception as e:
46 |         return {"error": str(e)}
47 | 
```

--------------------------------------------------------------------------------
/mcp_client_bedrock/mcp_client.py:
--------------------------------------------------------------------------------

```python
 1 | from mcp import ClientSession, StdioServerParameters
 2 | from mcp.client.stdio import stdio_client
 3 | from typing import Any, List
 4 | 
 5 | class MCPClient:
 6 |     def __init__(self, server_params: StdioServerParameters):
 7 |         self.server_params = server_params
 8 |         self.session = None
 9 |         self._client = None
10 |         
11 |     async def __aenter__(self):
12 |         """Async context manager entry"""
13 |         await self.connect()
14 |         return self
15 |         
16 |     async def __aexit__(self, exc_type, exc_val, exc_tb):
17 |         """Async context manager exit"""
18 |         if self.session:
19 |             await self.session.__aexit__(exc_type, exc_val, exc_tb)
20 |         if self._client:
21 |             await self._client.__aexit__(exc_type, exc_val, exc_tb)
22 | 
23 |     async def connect(self):
24 |         """Establishes connection to MCP server"""
25 |         self._client = stdio_client(self.server_params)
26 |         self.read, self.write = await self._client.__aenter__()
27 |         session = ClientSession(self.read, self.write)
28 |         self.session = await session.__aenter__()
29 |         await self.session.initialize()
30 | 
31 |     async def get_available_tools(self) -> List[Any]:
32 |         """List available tools"""
33 |         if not self.session:
34 |             raise RuntimeError("Not connected to MCP server")
35 |             
36 |         tools = await self.session.list_tools()
37 |         return tools.tools
38 | 
39 |     async def call_tool(self, tool_name: str, arguments: dict) -> Any:
40 |         """Call a tool with given arguments"""
41 |         if not self.session:
42 |             raise RuntimeError("Not connected to MCP server")
43 |             
44 |         result = await self.session.call_tool(tool_name, arguments=arguments)
45 |         return result
46 | 
```

--------------------------------------------------------------------------------
/mcp_client_bedrock/main.py:
--------------------------------------------------------------------------------

```python
 1 | import asyncio
 2 | from mcp import StdioServerParameters
 3 | from converse_agent import ConverseAgent
 4 | from converse_tools import ConverseToolManager
 5 | from mcp_client import MCPClient
 6 | 
 7 | async def main():
 8 |     """
 9 |     Main function that sets up and runs an interactive AI agent with tool integration.
10 |     The agent can process user prompts and utilize registered tools to perform tasks.
11 |     """
12 |     # Initialize model configuration
13 |     model_id = "us.anthropic.claude-3-7-sonnet-20250219-v1:0"
14 |     #model_id = "us.amazon.nova-pro-v1:0"
15 |         
16 |     # Set up the agent and tool manager
17 |     agent = ConverseAgent(model_id)
18 |     agent.tools = ConverseToolManager()
19 | 
20 |     # Define the agent's behavior through system prompt
21 |     agent.system_prompt = """You are a helpful assistant that can use tools to help you answer 
22 | questions and perform tasks."""
23 | 
 24 |     # Create server parameters to launch the MCP2Lambda server over stdio
25 |     server_params = StdioServerParameters(
26 |         command="uv",
27 | #       args=["--directory", "..", "run", "main.py", "--no-pre-discovery"],
28 |         args=["--directory", "..", "run", "main.py"],
29 |         env=None
30 |     )
31 | 
32 |     # Initialize MCP client with server parameters
33 |     async with MCPClient(server_params) as mcp_client:
34 | 
35 |         # Fetch available tools from the MCP client
36 |         tools = await mcp_client.get_available_tools()
37 | 
38 |         # Register each available tool with the agent
39 |         for tool in tools:
40 |             agent.tools.register_tool(
41 |                 name=tool.name,
42 |                 func=mcp_client.call_tool,
43 |                 description=tool.description,
44 |                 input_schema={'json': tool.inputSchema}
45 |             )
46 | 
47 |         # Start interactive prompt loop
48 |         while True:
49 |             try:
50 |                 # Get user input and check for exit commands
51 |                 user_prompt = input("\nEnter your prompt (or 'quit' to exit): ")
52 |                 if user_prompt.lower() in ['quit', 'exit', 'q']:
53 |                     break
54 |                 
55 |                 # Process the prompt and display the response
56 |                 response = await agent.invoke_with_prompt(user_prompt)
57 |                 print("\nResponse:", response)
58 |                 
59 |             except KeyboardInterrupt:
60 |                 print("\nExiting...")
61 |                 break
62 |             except Exception as e:
63 |                 print(f"\nError occurred: {e}")
64 | 
65 | if __name__ == "__main__":
66 |     # Run the async main function
67 |     asyncio.run(main()) 
```

--------------------------------------------------------------------------------
/mcp_client_bedrock/converse_tools.py:
--------------------------------------------------------------------------------

```python
 1 | from typing import Any, Dict, List, Callable
 2 | 
 3 | class ConverseToolManager:
 4 |     def __init__(self):
 5 |         self._tools = {}
 6 |         self._name_mapping = {}  # Maps sanitized names to original names
 7 |     
 8 |     def _sanitize_name(self, name: str) -> str:
 9 |         """Convert hyphenated names to underscore format"""
10 |         return name.replace('-', '_')
11 |     
12 |     def register_tool(self, name: str, func: Callable, description: str, input_schema: Dict):
13 |         """
14 |         Register a new tool with the system, sanitizing the name for Bedrock compatibility
15 |         """
16 |         sanitized_name = self._sanitize_name(name)
17 |         self._name_mapping[sanitized_name] = name
18 |         self._tools[sanitized_name] = {
19 |             'function': func,
20 |             'description': description,
21 |             'input_schema': input_schema,
22 |             'original_name': name
23 |         }
24 | 
25 |     def get_tools(self) -> Dict[str, List[Dict]]:
26 |         """
27 |         Generate the tools specification using sanitized names
28 |         """
29 |         tool_specs = []
30 |         for sanitized_name, tool in self._tools.items():
31 |             tool_specs.append({
32 |                 'toolSpec': {
33 |                     'name': sanitized_name,  # Use sanitized name for Bedrock
34 |                     'description': tool['description'],
35 |                     'inputSchema': tool['input_schema']
36 |                 }
37 |             })
38 |         
39 |         return {'tools': tool_specs}
40 | 
41 |     async def execute_tool(self, payload: Dict[str, Any]) -> Dict[str, Any]:
42 |         """
43 |         Execute a tool based on the agent's request, handling name translation
44 |         """
45 |         tool_use_id = payload['toolUseId']
46 |         sanitized_name = payload['name']
47 |         tool_input = payload['input']
48 | 
49 |         if sanitized_name not in self._tools:
50 |             raise ValueError(f"Unknown tool: {sanitized_name}")
51 |         try:
52 |             tool_func = self._tools[sanitized_name]['function']
53 |             # Use original name when calling the actual function
54 |             original_name = self._tools[sanitized_name]['original_name']
55 |             result = await tool_func(original_name, tool_input)
56 |             return {
57 |                 'toolUseId': tool_use_id,
58 |                 'content': [{
59 |                     'text': str(result)
60 |                 }],
61 |                 'status': 'success'
62 |             }
63 |         except Exception as e:
64 |             return {
65 |                 'toolUseId': tool_use_id,
66 |                 'content': [{
67 |                     'text': f"Error executing tool: {str(e)}"
68 |                 }],
69 |                 'status': 'error'
70 |             }
71 | 
72 |     def clear_tools(self):
73 |         """Clear all registered tools"""
74 |         self._tools.clear()
 75 |         self._name_mapping.clear()
```

--------------------------------------------------------------------------------
/sample_functions/run-python-code/app.py:
--------------------------------------------------------------------------------

```python
  1 | import os
  2 | import subprocess
  3 | import json
  4 | 
  5 | TMP_DIR = "/tmp"
  6 | 
  7 | 
  8 | def remove_tmp_contents() -> None:
  9 |     """
 10 |     Remove all contents (files and directories) from the temporary directory.
 11 | 
 12 |     This function traverses the /tmp directory tree and removes all files and empty
 13 |     directories. It handles exceptions for each removal attempt and prints any
 14 |     errors encountered.
 15 |     """
 16 |     # Traverse the /tmp directory tree
 17 |     for root, dirs, files in os.walk(TMP_DIR, topdown=False):
 18 |         # Remove files
 19 |         for file in files:
 20 |             file_path: str = os.path.join(root, file)
 21 |             try:
 22 |                 os.remove(file_path)
 23 |             except Exception as e:
 24 |                 print(f"Error removing {file_path}: {e}")
 25 |         
 26 |         # Remove empty directories
 27 |         for dir in dirs:
 28 |             dir_path: str = os.path.join(root, dir)
 29 |             try:
 30 |                 os.rmdir(dir_path)
 31 |             except Exception as e:
 32 |                 print(f"Error removing {dir_path}: {e}")
 33 | 
 34 | 
 35 | def do_install_modules(modules: list[str], current_env: dict[str, str]) -> str:    
 36 |     """
 37 |     Install Python modules using pip.
 38 | 
 39 |     This function takes a list of module names and attempts to install them
 40 |     using pip. It handles exceptions for each module installation and prints
 41 |     any errors encountered.
 42 | 
 43 |     Args:
 44 |         modules (list[str]): A list of module names to install.
 45 |     """
 46 | 
 47 |     output = ''
 48 | 
 49 |     if type(modules) is list and len(modules) > 0:
 50 |         current_env["PYTHONPATH"] = TMP_DIR
 51 |         try:
 52 |             _ = subprocess.run(f"pip install -U pip setuptools wheel -t {TMP_DIR} --no-cache-dir".split(), capture_output=True, text=True, check=True)
 53 |             for module in modules:
 54 |                 _ = subprocess.run(f"pip install {module} -t {TMP_DIR} --no-cache-dir".split(), capture_output=True, text=True, check=True)
 55 |         except Exception as e:
 56 |             error_message = f"Error installing modules {modules}: {e}"
 57 |             print(error_message)
 58 |             output += error_message
 59 | 
 60 |     return output
 61 | 
 62 | 
 63 | def lambda_handler(event: dict, context: dict) -> dict:
 64 |     """
 65 |     AWS Lambda function handler to execute Python code provided in the event.
 66 |     
 67 |     Args:
 68 |         event (dict): The Lambda event object containing the Python code to execute
 69 |                       Expected format: {"code": "your_python_code_as_string"}
 70 |         context (dict): AWS Lambda context object
 71 |         
 72 |     Returns:
 73 |         dict: Results of the code execution containing:
 74 |               - output (str): Output of the executed code or error message
 75 |     """
 76 |     remove_tmp_contents()
 77 | 
 78 |     output = ""
 79 |     current_env = os.environ.copy()
 80 | 
 81 |     # No need to go further if there is no script to run
 82 |     input_script = event.get('input_script', '')
 83 |     if len(input_script) == 0:
 84 |         return {
 85 |             'statusCode': 400,
 86 |             'body': 'Input script is required'
 87 |         }
 88 | 
 89 |     install_modules = event.get('install_modules', [])
 90 |     output += do_install_modules(install_modules, current_env)
 91 | 
 92 |     print(f"Script:\n{input_script}")
 93 |     
 94 |     result = subprocess.run(["python", "-c", input_script], env=current_env, capture_output=True, text=True)
 95 |     output += result.stdout + result.stderr
 96 | 
 97 |     print(f"Output: {output}")
 98 |     print(f"Len: {len(output)}")
 99 | 
100 |     # After running the script
101 |     remove_tmp_contents()
102 | 
103 |     result = {
104 |         'output': output
105 |     }
106 | 
107 |     return {
108 |         'statusCode': 200,
109 |         'body': json.dumps(result)
110 |     }
111 | 
```

--------------------------------------------------------------------------------
/mcp_client_bedrock/converse_agent.py:
--------------------------------------------------------------------------------

```python
  1 | import json
  2 | import re
  3 | 
  4 | import boto3
  5 | 
  6 | class ConverseAgent:
  7 |     def __init__(self, model_id, region='us-west-2', system_prompt='You are a helpful assistant.'):
  8 |         self.model_id = model_id
  9 |         self.region = region
 10 |         self.client = boto3.client('bedrock-runtime', region_name=self.region)
 11 |         self.system_prompt = system_prompt
 12 |         self.messages = []
 13 |         self.tools = None
 14 |         self.response_output_tags = [] # ['<response>', '</response>']
 15 | 
 16 |     async def invoke_with_prompt(self, prompt):
 17 |         content = [
 18 |             {
 19 |                 'text': prompt
 20 |             }
 21 |         ]
 22 |         return await self.invoke(content)
 23 | 
 24 |     async def invoke(self, content):
 25 | 
 26 |         print(f"User: {json.dumps(content, indent=2)}")
 27 | 
 28 |         self.messages.append(
 29 |             {
 30 |                 "role": "user", 
 31 |                 "content": content
 32 |             }
 33 |         )
 34 |         response = self._get_converse_response()
 35 | 
 36 |         print(f"Agent: {json.dumps(response, indent=2)}")
 37 | 
 38 |         return await self._handle_response(response)
 39 | 
 40 |     def _get_converse_response(self):
 41 |         """
 42 |         https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/bedrock-runtime/client/converse.html
 43 |         """
 44 |         
 45 |         # print(f"Invoking with messages: {json.dumps(self.messages, indent=2)}")
 46 |         
 47 |         response = self.client.converse(
 48 |             modelId=self.model_id,
 49 |             messages=self.messages,
 50 |             system=[
 51 |                 {
 52 |                     "text": self.system_prompt
 53 |                 }
 54 |             ],
 55 |             inferenceConfig={
 56 |                 "maxTokens": 4096,
 57 |                 "temperature": 0.7,
 58 |             },
 59 |             toolConfig=self.tools.get_tools()
 60 |         )
 61 |         return(response)
 62 |     
 63 |     async def _handle_response(self, response):
 64 |         # Add the response to the conversation history
 65 |         self.messages.append(response['output']['message'])
 66 | 
 67 |         # Do we need to do anything else?
 68 |         stop_reason = response['stopReason']
 69 | 
 70 |         if stop_reason in ['end_turn', 'stop_sequence']:
 71 |             # Safely extract the text from the nested response structure
 72 |             try:
 73 |                 message = response.get('output', {}).get('message', {})
 74 |                 content = message.get('content', [])
 75 |                 text = content[0].get('text', '')
 76 |                 if hasattr(self, 'response_output_tags') and len(self.response_output_tags) == 2:
 77 |                     pattern = f"(?s).*{re.escape(self.response_output_tags[0])}(.*?){re.escape(self.response_output_tags[1])}"
 78 |                     match = re.search(pattern, text)
 79 |                     if match:
 80 |                         return match.group(1)
 81 |                 return text
 82 |             except (KeyError, IndexError):
 83 |                 return ''
 84 | 
 85 |         elif stop_reason == 'tool_use':
 86 |             try:
 87 |                 # Extract tool use details from response
 88 |                 tool_response = []
 89 |                 for content_item in response['output']['message']['content']:
 90 |                     if 'toolUse' in content_item:
 91 |                         tool_request = {
 92 |                             "toolUseId": content_item['toolUse']['toolUseId'],
 93 |                             "name": content_item['toolUse']['name'],
 94 |                             "input": content_item['toolUse']['input']
 95 |                         }
 96 |                         
 97 |                         tool_result = await self.tools.execute_tool(tool_request)
 98 |                         tool_response.append({'toolResult': tool_result})
 99 |                 
100 |                 return await self.invoke(tool_response)
101 |                 
102 |             except KeyError as e:
103 |                 raise ValueError(f"Missing required tool use field: {e}")
104 |             except Exception as e:
105 |                 raise ValueError(f"Failed to execute tool: {e}")
106 | 
107 |         elif stop_reason == 'max_tokens':
108 |             # Hit token limit (this is one way to handle it.)
109 |             await self.invoke_with_prompt('Please continue.')
110 | 
111 |         else:
112 |             raise ValueError(f"Unknown stop reason: {stop_reason}")
113 | 
114 | 
```

--------------------------------------------------------------------------------
/sample_functions/run-python-code/lambda_function.py:
--------------------------------------------------------------------------------

```python
  1 | import base64
  2 | import json
  3 | import os
  4 | import subprocess
  5 | from typing import Dict, Any
  6 | 
  7 | TMP_DIR = "/tmp"
  8 | 
  9 | IMAGE_EXTENSIONS = ['png', 'jpeg', 'jpg', 'gif', 'webp']
 10 | 
 11 | # To avoid "Matplotlib created a temporary cache directory..." warning
 12 | os.environ['MPLCONFIGDIR'] = os.path.join(TMP_DIR, f'matplotlib_{os.getpid()}')
 13 | 
 14 | 
 15 | def remove_tmp_contents() -> None:
 16 |     """
 17 |     Remove all contents (files and directories) from the temporary directory.
 18 | 
 19 |     This function traverses the /tmp directory tree and removes all files and empty
 20 |     directories. It handles exceptions for each removal attempt and prints any
 21 |     errors encountered.
 22 |     """
 23 |     # Traverse the /tmp directory tree
 24 |     for root, dirs, files in os.walk(TMP_DIR, topdown=False):
 25 |         # Remove files
 26 |         for file in files:
 27 |             file_path: str = os.path.join(root, file)
 28 |             try:
 29 |                 os.remove(file_path)
 30 |             except Exception as e:
 31 |                 print(f"Error removing {file_path}: {e}")
 32 |         
 33 |         # Remove empty directories
 34 |         for dir in dirs:
 35 |             dir_path: str = os.path.join(root, dir)
 36 |             try:
 37 |                 os.rmdir(dir_path)
 38 |             except Exception as e:
 39 |                 print(f"Error removing {dir_path}: {e}")
 40 | 
 41 | 
 42 | def do_install_modules(modules: list[str], current_env: dict[str, str]) -> str:    
 43 |     """
 44 |     Install Python modules using pip.
 45 | 
 46 |     This function takes a list of module names and attempts to install them
 47 |     using pip. It handles exceptions for each module installation and prints
 48 |     any errors encountered.
 49 | 
 50 |     Args:
 51 |         modules (list[str]): A list of module names to install.
 52 |     """
 53 | 
 54 |     output = ''
 55 | 
 61 | 
 62 |     if type(modules) is list and len(modules) > 0:
 63 |         current_env["PYTHONPATH"] = TMP_DIR
 64 |         try:
 65 |             _ = subprocess.run(f"pip install -U pip setuptools wheel -t {TMP_DIR} --no-cache-dir".split(), capture_output=True, text=True, check=True)
 66 |             for module in modules:
 67 |                 _ = subprocess.run(f"pip install {module} -t {TMP_DIR} --no-cache-dir".split(), capture_output=True, text=True, check=True)
 68 |         except Exception as e:
 69 |             error_message = f"Error installing modules {modules}: {e}"
 70 |             print(error_message)
 71 |             output += error_message
 72 | 
 73 |     return output
 74 | 
 75 | 
 76 | def lambda_handler(event: Dict[str, Any], context: Any) -> Dict[str, Any]:
 77 |     """
 78 |     AWS Lambda function handler that executes a Python script and processes its output.
 79 | 
 80 |     This function takes an input Python script, executes it, captures the output,
 81 |     and processes any generated images. It also handles temporary file management.
 82 | 
 83 |     Args:
 84 |         event (Dict[str, Any]): The event dict containing the Lambda function input.
 85 |         context (Any): The context object provided by AWS Lambda.
 86 | 
 87 |     Returns:
 88 |         Dict[str, Any]: A dictionary containing the execution results, including:
 89 |             - statusCode (int): HTTP status code (200 for success, 400 for bad request)
 90 |             - body (str): Error message in case of bad request
 91 |             - output (str): The combined stdout and stderr output from the script execution
 92 |             - images (List[Dict[str, str]]): List of dictionaries containing image data
 93 |     """
 94 |     # Before running the script
 95 |     remove_tmp_contents()
 96 | 
 97 |     output = ""
 98 |     current_env = os.environ.copy()
 99 | 
100 |     # No need to go further if there is no script to run
101 |     input_script = event.get('input_script', '')
102 |     if len(input_script) == 0:
103 |         return {
104 |             'statusCode': 400,
105 |             'body': 'Input script is required'
106 |         }
107 | 
108 |     install_modules = event.get('install_modules', [])
109 |     output += do_install_modules(install_modules, current_env)
110 | 
111 |     print(f"Script:\n{input_script}")
112 |     
113 |     result = subprocess.run(["python", "-c", input_script], env=current_env, capture_output=True, text=True)
114 |     output += result.stdout + result.stderr
115 | 
116 |     # Search for images and convert them to base64
117 |     images = []
118 | 
119 |     for file in os.listdir(TMP_DIR):
120 |         file_path: str = os.path.join(TMP_DIR, file)
121 |         if os.path.isfile(file_path) and any(file.lower().endswith(f".{ext}") for ext in IMAGE_EXTENSIONS):
122 |             try:
123 |                 # Read file content
124 |                 with open(file_path, "rb") as f:
125 |                     file_content: bytes = f.read()
126 |                     images.append({
127 |                         "path": file_path,
128 |                         "base64": base64.b64encode(file_content).decode('utf-8')
129 |                     })
130 |                 output += f"File {file_path} loaded.\n"
131 |             except Exception as e:
132 |                 output += f"Error loading {file_path}: {e}"
133 | 
134 |     print(f"Output: {output}")
135 |     print(f"Len: {len(output)}")
136 |     print(f"Images: {len(images)}")
137 | 
138 |     # After running the script
139 |     remove_tmp_contents()
140 | 
141 |     result: Dict[str, Any] = {
142 |         'output': output,
143 |         'images': images
144 |     }
145 | 
146 |     return {
147 |         'statusCode': 200,
148 |         'body': json.dumps(result)
149 |     }
150 | 
```

--------------------------------------------------------------------------------
/main.py:
--------------------------------------------------------------------------------

```python
  1 | import json
  2 | import os
  3 | import re
  4 | import argparse
  5 | 
  6 | from mcp.server.fastmcp import FastMCP, Context
  7 | import boto3
  8 | 
  9 | # Strategy selection: when pre-discovery is enabled, Lambda functions are registered
 10 | # as individual tools at startup; otherwise the generic list and invoke tools are used
 11 | parser = argparse.ArgumentParser(description='MCP Gateway to AWS Lambda')
 12 | parser.add_argument('--no-pre-discovery', 
 13 |                    action='store_true',
 14 |                    help='Disable registering Lambda functions as individual tools at startup')
 15 | 
 16 | # Parse arguments and set default configuration
 17 | args = parser.parse_args()
 18 | 
 19 | # Check environment variable first (takes precedence if set)
 20 | if 'PRE_DISCOVERY' in os.environ:
 21 |     PRE_DISCOVERY = os.environ.get('PRE_DISCOVERY').lower() == 'true'
 22 | else:
 23 |     # Otherwise use CLI argument (default is enabled, --no-pre-discovery disables)
 24 |     PRE_DISCOVERY = not args.no_pre_discovery
 25 | 
 26 | AWS_REGION = os.environ.get("AWS_REGION", "us-east-1")
 27 | FUNCTION_PREFIX = os.environ.get("FUNCTION_PREFIX", "mcp2lambda-")
 28 | FUNCTION_LIST = json.loads(os.environ.get("FUNCTION_LIST", "[]"))
 29 | 
 30 | mcp = FastMCP("MCP Gateway to AWS Lambda")
 31 | 
 32 | lambda_client = boto3.client("lambda", region_name=AWS_REGION)
 33 | 
 34 | 
 35 | def validate_function_name(function_name: str) -> bool:
 36 |     """Check that the function name matches the allowed prefix or the explicit allow list."""
 37 |     return function_name.startswith(FUNCTION_PREFIX) or function_name in FUNCTION_LIST
 38 | 
 39 | 
 40 | def sanitize_tool_name(name: str) -> str:
 41 |     """Sanitize a Lambda function name to be used as a tool name."""
 42 |     # Remove prefix if present
 43 |     if name.startswith(FUNCTION_PREFIX):
 44 |         name = name[len(FUNCTION_PREFIX):]
 45 |     
 46 |     # Replace invalid characters with underscore
 47 |     name = re.sub(r'[^a-zA-Z0-9_]', '_', name)
 48 |     
 49 |     # Ensure name doesn't start with a number
 50 |     if name and name[0].isdigit():
 51 |         name = "_" + name
 52 |     
 53 |     return name
 54 | 
 55 | 
 56 | def format_lambda_response(function_name: str, payload: bytes) -> str:
 57 |     """Format the Lambda function response payload."""
 58 |     try:
 59 |         # Try to parse the payload as JSON
 60 |         payload_json = json.loads(payload)
 61 |         return f"Function {function_name} returned: {json.dumps(payload_json, indent=2)}"
 62 |     except (json.JSONDecodeError, UnicodeDecodeError):
 63 |         # Return raw payload if not JSON
 64 |         return f"Function {function_name} returned payload: {payload}"
 65 | 
 66 | 
 67 | # Define the generic tool functions that can be used directly or as fallbacks
 68 | def list_lambda_functions_impl(ctx: Context) -> str:
 69 |     """Tool that lists all AWS Lambda functions that you can call as tools.
 70 |     Use this list to understand what these functions are and what they do.
 71 |     These functions can help you in many different ways."""
 72 | 
 73 |     ctx.info("Calling AWS Lambda ListFunctions...")
 74 | 
 75 |     functions = lambda_client.list_functions()
 76 | 
 77 |     ctx.info(f"Found {len(functions['Functions'])} functions")
 78 | 
 79 |     functions_with_prefix = [
 80 |         f for f in functions["Functions"] if validate_function_name(f["FunctionName"])
 81 |     ]
 82 | 
 83 |     ctx.info(f"Found {len(functions_with_prefix)} functions with prefix {FUNCTION_PREFIX}")
 84 |     
 85 |     # Pass only function names and descriptions to the model
 86 |     function_names_and_descriptions = [ 
 87 |         {field: f[field] for field in ["FunctionName", "Description"] if field in f}
 88 |         for f in functions_with_prefix
 89 |     ]
 90 |     
 91 |     return json.dumps(function_names_and_descriptions)
 92 | 
 93 | 
 94 | def invoke_lambda_function_impl(function_name: str, parameters: dict, ctx: Context) -> str:
 95 |     """Tool that invokes an AWS Lambda function with a JSON payload.
 96 |     Before using this tool, list the functions available to you."""
 97 |     
 98 |     if not validate_function_name(function_name):
 99 |         return f"Function {function_name} is not valid"
100 | 
101 |     ctx.info(f"Invoking {function_name} with parameters: {parameters}")
102 | 
103 |     response = lambda_client.invoke(
104 |         FunctionName=function_name,
105 |         InvocationType="RequestResponse",
106 |         Payload=json.dumps(parameters),
107 |     )
108 | 
109 |     ctx.info(f"Function {function_name} returned with status code: {response['StatusCode']}")
110 | 
111 |     if "FunctionError" in response:
112 |         error_message = f"Function {function_name} returned with error: {response['FunctionError']}"
113 |         ctx.error(error_message)
114 |         return error_message
115 | 
116 |     payload = response["Payload"].read()
117 |     
118 |     # Format the response payload
119 |     return format_lambda_response(function_name, payload)
120 | 
121 | 
122 | # Register the original tools if not using dynamic tools
123 | if not PRE_DISCOVERY:
124 |     # Register the generic tool functions with MCP
125 |     mcp.tool()(list_lambda_functions_impl)
126 |     mcp.tool()(invoke_lambda_function_impl)
127 |     print("Using generic Lambda tools strategy...")
128 | 
129 | 
130 | def create_lambda_tool(function_name: str, description: str):
131 |     """Create a tool function for a Lambda function."""
132 |     # Create a meaningful tool name
133 |     tool_name = sanitize_tool_name(function_name)
134 |     
135 |     # Define the inner function
136 |     def lambda_function(parameters: dict, ctx: Context) -> str:
137 |         """Tool for invoking a specific AWS Lambda function with parameters."""
138 |         # Use the same implementation as the generic invoke function
139 |         return invoke_lambda_function_impl(function_name, parameters, ctx)
140 |     
141 |     # Set the function's documentation
142 |     lambda_function.__doc__ = description
143 |     
144 |     # Apply the decorator manually with the specific name
145 |     decorated_function = mcp.tool(name=tool_name)(lambda_function)
146 |     
147 |     return decorated_function
148 | 
149 | 
150 | # Register Lambda functions as individual tools if dynamic strategy is enabled
151 | if PRE_DISCOVERY:
152 |     try:
153 |         print("Using dynamic Lambda function registration strategy...")
154 |         functions = lambda_client.list_functions()
155 |         valid_functions = [
156 |             f for f in functions["Functions"] if validate_function_name(f["FunctionName"])
157 |         ]
158 |         
159 |         print(f"Dynamically registering {len(valid_functions)} Lambda functions as tools...")
160 |         
161 |         for function in valid_functions:
162 |             function_name = function["FunctionName"]
163 |             description = function.get("Description", f"AWS Lambda function: {function_name}")
164 |             
165 |             # Extract information about parameters from the description if available
166 |             if "Expected format:" in description:
167 |                 # Add parameter information to the description
168 |                 parameter_info = description.split("Expected format:")[1].strip()
169 |                 description = f"{description}\n\nParameters: {parameter_info}"
170 |             
171 |             # Register the Lambda function as a tool
172 |             create_lambda_tool(function_name, description)
173 |         
174 |         print("Lambda functions registered successfully as individual tools.")
175 |     
176 |     except Exception as e:
177 |         print(f"Error registering Lambda functions as tools: {e}")
178 |         print("Falling back to generic Lambda tools...")
179 |         
180 |         # Register the generic tool functions with MCP as fallback
181 |         mcp.tool()(list_lambda_functions_impl)
182 |         mcp.tool()(invoke_lambda_function_impl)
183 | 
184 | 
185 | if __name__ == "__main__":
186 |     mcp.run()
187 | 
```