# Directory Structure
```
├── .cursor
│   ├── mcp.json
│   └── python.mdc
├── .env.example
├── .gitignore
├── browser.py
├── llms-install.md
├── README.md
├── requirements.txt
└── server.py
```
# Files
--------------------------------------------------------------------------------
/.env.example:
--------------------------------------------------------------------------------
```
ANTHROPIC_API_KEY=api-key-here
LOG_LEVEL=CRITICAL
BROWSER_USE_LOGGING_LEVEL=CRITICAL
LANGCHAIN_TRACING_V2=false
LANGCHAIN_VERBOSE=false
```
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
```
# Environment variables
.env
# Virtual Environment
.venv/
venv/
env/
ENV/
# Python cache files
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
# Distribution / packaging
dist/
build/
*.egg-info/
*.egg
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
# Jupyter Notebook
.ipynb_checkpoints
# IDE specific files
.idea/
.vscode/
*.swp
*.swo
.DS_Store
# Browser-use specific
*.gif
browser_screenshots/ 
```
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
```markdown
# Uber Eats MCP Server
This is a proof of concept (POC) showing how you can build an MCP server on top of Uber Eats.
https://github.com/user-attachments/assets/05efbf51-1b95-4bd2-a327-55f1fe2f958b
## What is MCP?
The [Model Context Protocol (MCP)](https://modelcontextprotocol.io/) is an open protocol that enables seamless integration between LLM applications and external tools.
## Prerequisites
- Python 3.12 or higher
- Anthropic API key or other supported LLM provider
## Setup
1. Ensure you have a virtual environment activated:
   ```
   uv venv
   source .venv/bin/activate  # On Unix/Mac
   ```
2. Install required packages:
   ```
   uv pip install -r requirements.txt
   playwright install
   ```
3. Update the `.env` file with your API key:
   ```
   ANTHROPIC_API_KEY=your_anthropic_api_key_here
   ```
## Note
Since we're using stdio as the MCP transport, we have to disable all output from browser-use.
## Debugging
You can run the MCP inspector tool with this command:
```bash
uv run mcp dev server.py
```
```
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
```
browser-use
langchain-anthropic
langchain-openai
python-dotenv
fastmcp 
mcp[cli]
```
--------------------------------------------------------------------------------
/.cursor/mcp.json:
--------------------------------------------------------------------------------
```json
{
  "mcpServers": {
      "uber_eats": {
          "command": "uv",
          "args": [
              "--directory",
              "/Users/ericzakariasson/dev/uber-eats-mcp-server",
              "run",
              "server.py"
          ]
      }
  }
}
```
--------------------------------------------------------------------------------
/browser.py:
--------------------------------------------------------------------------------
```python
from typing import Awaitable, Callable
from browser_use import Agent, Browser, BrowserConfig
from dotenv import load_dotenv
from langchain_anthropic import ChatAnthropic
import warnings
load_dotenv()
warnings.filterwarnings("ignore")
browser = Browser(
    config=BrowserConfig(
        chrome_instance_path='/Applications/Google Chrome.app/Contents/MacOS/Google Chrome',  # macOS path
    )
)
llm = ChatAnthropic(model_name="claude-3-5-sonnet-latest")
task_template = """
perform the following task
{task}
"""
async def run_browser_agent(task: str, on_step: Callable[[], Awaitable[None]]):
    """Run the browser-use agent with the specified task."""
    agent = Agent(
        task=task_template.format(task=task),
        browser=browser,
        llm=llm,
        register_new_step_callback=on_step,
        register_done_callback=on_step,
    )
    result = await agent.run()
    await browser.close()
    
    return result.final_result()
```
--------------------------------------------------------------------------------
/llms-install.md:
--------------------------------------------------------------------------------
```markdown
# AI Agents -- Read this!
This guide provides instructions for AI agents to set up the Uber Eats MCP server on behalf of the user.
## Prerequisites
- Python 3.12 or higher
- Anthropic API key or other supported LLM provider
## Setup
1. Create and activate a virtual environment:
   ```
   uv venv
   source .venv/bin/activate  # On Unix/Mac
   ```
2. Install required packages:
   ```
   uv pip install -r requirements.txt
   playwright install
   ```
## MCP Configuration
To use this server with MCP-compatible applications, you need to add it to your MCP settings file:
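For example (a sketch modeled on the `.cursor/mcp.json` in this repository; the directory path and API key are placeholders you must replace):

```json
{
  "mcpServers": {
      "uber_eats": {
          "command": "uv",
          "args": [
              "--directory",
              "/path/to/uber-eats-mcp-server",
              "run",
              "server.py"
          ],
          "env": {
              "ANTHROPIC_API_KEY": "your_anthropic_api_key_here"
          }
      }
  }
}
```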
**Important Notes:**
- Replace `/path/to/uber-eats-mcp-server` with the actual path to your installation
- Replace the placeholder `ANTHROPIC_API_KEY` value with your actual Anthropic API key
- All environment variables can be set directly in the MCP settings JSON file, so you don't need to update the .env file separately
- The command uses `/bin/bash` to activate the virtual environment before running the server
- You may need to restart your application after updating the MCP settings
## Available Tools
This MCP server provides the following tools:
1. `find_menu_options`: Search Uber Eats for restaurants or food items
   - Parameters: `search_term` (string) - Food or restaurant to search for
   - Returns a resource URI that can be used to retrieve the results after a few minutes
2. `order_food`: Order food from a restaurant
   - Parameters:
     - `item_url` (string) - URL of the item to order
     - `item_name` (string) - Name of the item to order
## Example Usage
```python
# Search for pizza options
result = await use_mcp_tool(
    server_name="github.com/ericzakariasson/uber-eats-mcp-server",
    tool_name="find_menu_options",
    arguments={"search_term": "pizza"}
)
# Wait for the search to complete (about 2 minutes)
# Then retrieve the results using the resource URI
search_results = await access_mcp_resource(
    server_name="github.com/ericzakariasson/uber-eats-mcp-server",
    uri="resource://search_results/{request_id}"  # request_id from the previous result
)
# Order food using the URL from the search results
order_result = await use_mcp_tool(
    server_name="github.com/ericzakariasson/uber-eats-mcp-server",
    tool_name="order_food",
    arguments={
        "item_url": "https://www.ubereats.com/...",  # URL from search results
        "item_name": "Pepperoni Pizza"
    }
)
```
## Troubleshooting
If you encounter connection issues:
1. Make sure the virtual environment is activated in the MCP settings file command
2. Check that the paths in your MCP settings file are correct
3. Verify that your Anthropic API key is valid
4. Try adjusting the log levels in the env section of your MCP settings
5. Restart your application after making changes to the MCP settings
```
--------------------------------------------------------------------------------
/server.py:
--------------------------------------------------------------------------------
```python
#!/usr/bin/env python3
import asyncio
from dotenv import load_dotenv
from mcp.server.fastmcp import FastMCP, Context
from browser import run_browser_agent
# Load environment variables from .env file
load_dotenv()
# Initialize FastMCP server
mcp = FastMCP("uber_eats")
# In-memory storage for search results
search_results = {}
@mcp.tool()
async def find_menu_options(search_term: str, context: Context) -> str:
    """Search Uber Eats for restaurants or food items.
    
    Args:
        search_term: Food or restaurant to search for
    """
    
    # Create the search task
    task = f"""
0. Start by going to: https://www.ubereats.com/se-en/
1. Type "{search_term}" in the global search bar and press enter
2. Go to the first search result (this is the most popular restaurant).
3. When you can see the menu options for the restaurant, use the restaurant-specific search input located under the banner (identify it by the placeholder "Search in [restaurant name]")
4. Click the input field and type "{search_term}", then press enter
5. Check for menu options related to "{search_term}"
6. Get the name, url and price of the top 3 items related to "{search_term}". URL is very important
"""
    
    search_results[context.request_id] = f"Search for '{search_term}' in progress. Check back in about 2 minutes."
    asyncio.create_task(
        perform_search(context.request_id, search_term, task, context)
    )    
    
    return f"Search for '{search_term}' started. Please wait for 2 minutes, then you can retrieve results using the resource URI: resource://search_results/{context.request_id}. Use a terminal sleep statement to wait for 2 minutes."
async def perform_search(request_id: str, search_term: str, task: str, context: Context):
    """Perform the actual search in the background."""
    try:
        step_count = 0
        
        async def step_handler(*args, **kwargs):
            nonlocal step_count
            step_count += 1
            await context.info(f"Step {step_count} completed")
            await context.report_progress(step_count)
        
        result = await run_browser_agent(task=task, on_step=step_handler)
        
        search_results[request_id] = result
    
    except Exception as e:
        # Store the error with the request ID
        search_results[request_id] = f"Error: {str(e)}"
        await context.error(f"Error searching for '{search_term}': {str(e)}")
@mcp.resource(uri="resource://search_results/{request_id}")
async def get_search_results(request_id: str) -> str:
    """Get the search results for a given request ID.
    
    Args:
        request_id: The ID of the request to get the search results for
    """
    # Check if the results exist
    if request_id not in search_results:
        return f"No search results found for request ID: {request_id}"
    
    # Return the successful search results
    return search_results[request_id]
@mcp.tool()
async def order_food(item_url: str, item_name: str, context: Context) -> str:
    """Order food from a restaurant.
    
    Args:
        item_url: URL of the item to order
        item_name: Name of the item to order
    """
    
    task = f"""
1. Go to {item_url}
2. Click "Add to order"
3. Wait 3 seconds
4. Click "Go to checkout"
5. If there are upsell modals, click "Skip"
6. Click "Place order"
"""
    
    # Start the background task for ordering
    asyncio.create_task(
        perform_order(item_url, item_name, task, context)
    )
    
    # Return a message immediately
    return f"Order for '{item_name}' started. Your order is being processed."
async def perform_order(restaurant_url: str, item_name: str, task: str, context: Context):
    """Perform the actual food ordering in the background."""
    try:
        step_count = 0
        
        async def step_handler(*args, **kwargs):
            nonlocal step_count
            step_count += 1
            await context.info(f"Order step {step_count} completed")
            await context.report_progress(step_count)
        
        result = await run_browser_agent(task=task, on_step=step_handler)
        
        # Report completion
        await context.info(f"Order for '{item_name}' has been placed successfully!")
        return result
    
    except Exception as e:
        error_msg = f"Error ordering '{item_name}': {str(e)}"
        await context.error(error_msg)
        return error_msg
if __name__ == "__main__":
    mcp.run(transport='stdio') 
```