# Directory Structure
```
├── .github
│   └── FUNDING.yml
├── .gitignore
├── assets
│   └── mcp.png
├── docker-compose.yml
├── Dockerfile
├── LICENSE
├── pyrightconfig.json
├── README.md
└── src
    ├── __init__.py
    ├── api
    │   ├── alexa_api.py
    │   ├── config.py
    │   ├── main.py
    │   └── requirements.txt
    ├── auth
    │   ├── config.py
    │   ├── login.py
    │   └── requirements.txt
    └── mcp
        ├── __init__.py
        ├── config.py
        ├── mcp_server.py
        └── requirements.txt
```
# Files
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
```
# Environment variables
.env
# Python cache
__pycache__/
src/__pycache__/
*.pyc
*.pyo
*.pyd
# Virtual environments
venv/
env/
.venv/
# OS-specific files
.DS_Store
# Cookie file
*.pickle
# Example data directory
app_data_host/
# Build/install artifacts
*.egg-info/
# Custom Exclusions
.cursor/
```
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
```markdown
# Alexa Shopping List

## About
Seamlessly manage your Alexa shopping list: add, remove, and view items instantly.
Interact with it via MCP, using AI assistants like Claude or Cursor.
> [!WARNING]
> **Requires Manual Authentication & Cookie Refresh**
>
> This tool uses browser cookies extracted via a manual login process.
> Amazon sessions expire.
> You **will** need to re-run the login script periodically (Steps 5 & 6) when the tool stops working.
## Components
1. **API Server (`src/api`):** Docker container (FastAPI) talking to Alexa.
2. **MCP Server (`src/mcp`):** Local script providing MCP tools. Proxies to the API server.
3. **Login Script (`src/auth`):** Local script using `nodriver` (Chrome) for manual login and uploading the session cookies to the API server.
## Prerequisites
- Python 3.10+
- `uv` (Install: `pip install uv` or see [astral.sh/uv](https://astral.sh/uv))
- Docker & Docker Compose (or Docker Desktop)
- Google Chrome (for login script)
- Amazon Account (with Alexa)
## Setup & Run
**1. Clone Repository**
```bash
# git clone <repository_url>
cd alexa-mcp
```
**2. Configure Components**
Adjust settings in the `config.py` file within each component directory:
- `src/api/config.py`: API server settings (port, internal paths).
- `src/auth/config.py`: Login script settings (Amazon URL, API location).
- `src/mcp/config.py`: MCP server settings (API location).
*Ensure `AMAZON_URL` matches your Amazon region in both `src/auth/config.py` and `src/api/config.py`.* Login is performed manually in a browser window (Steps 5 & 6), so the script does not read Amazon credentials from the config.
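For example, for a UK account you would point both configs at the UK site (illustrative value; the default targets the US site):
```python
# In src/auth/config.py and src/api/config.py
AMAZON_URL = "https://www.amazon.co.uk"
```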
**3. Start API Server Container**
Builds the image and runs the API server in the background.
```bash
docker compose up --build -d alexa_api
```
*(Use `docker compose logs -f alexa_api` to view logs; `docker compose down` to stop.)*
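Once it is up, the container should report as healthy (the compose file defines a healthcheck against the API's root endpoint):
```bash
docker compose ps alexa_api
```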
**4. Set Up Local Environment & Install Auth Dependencies**
```bash
# In the project root (alexa-mcp)
uv venv
source .venv/bin/activate
uv pip install -r src/auth/requirements.txt
```
**5. Run Login Script**
This opens a browser window to the Amazon sign-in page.
```bash
# Ensure virtual env is active
python -m src.auth.login
```
**6. Manual Login & Confirmation**
Log in manually using the browser window opened by the script. Handle any 2FA or CAPTCHA steps presented by Amazon.
Once you are successfully logged into Amazon in that browser window, return to the terminal where you ran the script and press `ENTER`.
The script will then attempt to extract the session cookies and send them to the API server.
**7. Test API**
Verify the API server received the cookies and can access your list by opening this URL in your browser (or using `curl`):
[http://127.0.0.1:8000/items/all](http://127.0.0.1:8000/items/all)
You should see a JSON response containing your current Alexa shopping list items. If you get an error (like 401 Unauthorized or 503 Service Unavailable), check the API logs (`docker compose logs alexa_api`) and potentially rerun steps 5 & 6.
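For example, from a terminal:
```bash
# Health check (should return {"status": "Alexa Shopping List API is running"})
curl http://127.0.0.1:8000/
# Full shopping list as JSON
curl http://127.0.0.1:8000/items/all
```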
* **API Documentation:** FastAPI automatically generates interactive documentation. You can explore all available endpoints and test them directly in your browser at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs).
## Troubleshooting
- **MCP Server Issues:**
- `spawn ENOENT` (Claude Desktop): Verify absolute paths in `mcp.json`.
- Connection Errors/Disconnects: Check API container logs (`docker compose logs alexa_api`). Ensure API container is running and accessible (check `src/mcp/config.py`).
- Import Errors: Ensure dependencies installed in the correct venv (`uv pip install -r src/mcp/requirements.txt`).
- **API Container Issues:**
- Startup Failure: Check logs (`docker compose logs alexa_api`).
- Config Errors: Verify settings in `src/api/config.py`.
- Port Conflicts: Ensure host port `8000` (or mapped port) is free.
- **Login Script Issues (`src/auth/login.py`):**
- Import Errors: Ensure dependencies installed (`uv pip install -r src/auth/requirements.txt`).
- `ModuleNotFoundError: No module named 'distutils'` (on Python 3.12+): Ensure `setuptools` is included in `src/auth/requirements.txt` and dependencies are reinstalled.
- WebDriver Errors: Ensure Chrome is installed/updated. Check `nodriver` compatibility.
- Cookie Errors: Occurs if login fails or cookies cannot be extracted after successful login.
- API Connection Error: Ensure API container is running and reachable (check `src/auth/config.py`). Check `docker compose logs alexa_api`.
- Login Failures: Complete the sign-in manually in the opened browser window, including any CAPTCHA or 2FA prompts, before pressing `ENTER` in the terminal. If the page looks unexpected, check the script's log output.
- **Tool Errors (401 Unauthorized):** Cookies expired or were never uploaded. Rerun the login script (`python -m src.auth.login`), complete the manual login again, and retest the API (Step 7).
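A typical recovery sequence for the 401 case, assuming the setup from the steps above:
```bash
source .venv/bin/activate
python -m src.auth.login               # Steps 5 & 6: complete the manual login, then press ENTER
docker compose logs alexa_api          # confirm the API received and saved the new cookies
curl http://127.0.0.1:8000/items/all   # should return your list items again
```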
## Connecting an MCP Client (Claude Desktop / Cursor)
To use this server with an MCP client like Claude Desktop or Cursor, you need to add its configuration to your client's `mcp.json` file. This file tells the client how to find and run your local MCP server.
1. Locate your MCP client's configuration file (often named `mcp.json`). The location varies depending on the client.
2. Open the file and add the following entry within the main `"mcpServers": { ... }` object:
```json
"alexa-shopping-list": {
"displayName": "Alexa Shopping List MCP",
"description": "MCP Server for interacting with Alexa shopping list via local API",
"command": "/path/to/your/alexa-mcp/.venv/bin/python",
"args": [
"-m",
"src.mcp.mcp_server"
],
"workingDirectory": "/path/to/your/alexa-mcp",
"env": {
"PYTHONPATH": "/path/to/your/alexa-mcp"
}
}
```
**IMPORTANT:**
* You **MUST** replace the placeholder absolute paths `/path/to/your/alexa-mcp` in the `command`, `workingDirectory`, and `env.PYTHONPATH` fields with the actual absolute path to **your** project directory on your machine.
* Ensure the `.venv` virtual environment exists at that location and has the MCP dependencies installed (`uv pip install -r src/mcp/requirements.txt`).
3. Save the `mcp.json` file.
4. Restart your MCP client. The "Alexa Shopping List MCP" server should now be available.
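Before restarting the client, you can sanity-check the configured interpreter and its MCP dependencies (same placeholder path as above):
```bash
/path/to/your/alexa-mcp/.venv/bin/python -c "import fastmcp, requests; print('MCP dependencies OK')"
```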
## Sponsorship
Like this tool? Consider sponsoring the developer:
[Sponsor on GitHub](https://github.com/sponsors/TheSethRose)
```
--------------------------------------------------------------------------------
/src/__init__.py:
--------------------------------------------------------------------------------
```python
```
--------------------------------------------------------------------------------
/src/mcp/requirements.txt:
--------------------------------------------------------------------------------
```
fastmcp
requests
```
--------------------------------------------------------------------------------
/src/auth/requirements.txt:
--------------------------------------------------------------------------------
```
nodriver
requests
```
--------------------------------------------------------------------------------
/.github/FUNDING.yml:
--------------------------------------------------------------------------------
```yaml
github: TheSethRose
```
--------------------------------------------------------------------------------
/src/mcp/__init__.py:
--------------------------------------------------------------------------------
```python
# This file marks src/mcp as a Python package
```
--------------------------------------------------------------------------------
/pyrightconfig.json:
--------------------------------------------------------------------------------
```json
{
"executionEnvironments": [
{
"root": "src"
}
]
}
```
--------------------------------------------------------------------------------
/src/api/requirements.txt:
--------------------------------------------------------------------------------
```
fastapi
uvicorn[standard]
requests
python-multipart
apscheduler
pydantic>=2.0
python-dotenv
```
--------------------------------------------------------------------------------
/src/mcp/config.py:
--------------------------------------------------------------------------------
```python
# Configuration for the MCP Server
import logging
# Logging level for the MCP server
LOG_LEVEL = "INFO"
# Host and Port where the API container is running
# Assumes API container is accessible on localhost from where MCP server runs
API_HOST = "localhost"
API_PORT = 8000
# --- Derived --- #
LOG_LEVEL_INT = getattr(logging, LOG_LEVEL.upper(), logging.INFO)
API_BASE_URL = f"http://{API_HOST}:{API_PORT}"
```
--------------------------------------------------------------------------------
/src/auth/config.py:
--------------------------------------------------------------------------------
```python
# Configuration for the Auth (Login) Script
import logging
# Amazon URL for your locale (e.g., amazon.com, amazon.co.uk)
AMAZON_URL = "https://www.amazon.com"
# Path where the login script temporarily saves the cookie file locally
# before sending it to the API container.
LOCAL_TEMP_COOKIE_PATH = "./alexa_cookie.pickle"
# Logging level for the login script
LOG_LEVEL = "INFO"
# Host and Port of the running API container to send cookies to
# Assumes API container is accessible on localhost from where login script runs
API_HOST = "localhost"
API_PORT = 8000
# --- Derived --- #
LOG_LEVEL_INT = getattr(logging, LOG_LEVEL.upper(), logging.INFO)
API_COOKIE_ENDPOINT = f"http://{API_HOST}:{API_PORT}/auth/cookies"
```
--------------------------------------------------------------------------------
/docker-compose.yml:
--------------------------------------------------------------------------------
```yaml
services:
alexa_api:
build:
context: .
dockerfile: Dockerfile # Use the new Dockerfile at the root
container_name: shopping_list_api
ports:
# Map host port 8000 to container port 8000 (which is defined in src/api/config.py)
- "8000:8000"
volumes:
# Use a named volume for persistent data (e.g., cookies)
- cookie_data:/app/data
# Healthcheck to ensure the application is running
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8000/"] # Use the root endpoint
interval: 30s
timeout: 10s
retries: 3
# Optional: Resource limits (uncomment if needed)
# deploy:
# resources:
# limits:
# cpus: '1'
# memory: 512M
restart: unless-stopped
# Declare the named volume used by the service
volumes:
cookie_data:
```
--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------
```dockerfile
# Use a slim Python base image
FROM python:3.12-slim-bookworm
# Set environment variables
ENV PYTHONUNBUFFERED=1
ENV DEBIAN_FRONTEND=noninteractive
# Add the root app directory to PYTHONPATH so src.* imports work
ENV PYTHONPATH="/app"
# Install tini for proper signal handling and curl for the compose healthcheck
RUN apt-get update && apt-get install -y --no-install-recommends tini curl \
&& rm -rf /var/lib/apt/lists/*
# Set working directory
WORKDIR /app
# Copy only the API requirements file first to leverage Docker cache
COPY src/api/requirements.txt ./
# Install API Python dependencies
RUN pip install --no-cache-dir --upgrade pip \
&& pip install --no-cache-dir -r requirements.txt
# Copy the entire src directory which contains api, mcp, auth
# The API needs access to shared modules like config if they exist at the src level
COPY ./src ./src
# Create the data directory for cookies using an absolute path
# This assumes the API server might write cookies here, adjust if needed
RUN mkdir -p /app/data
# Use tini as the entrypoint
ENTRYPOINT ["/usr/bin/tini", "--"]
# Command to run the application using uvicorn
# Points to the FastAPI app instance within the copied src structure
CMD ["uvicorn", "src.api.main:app", "--host", "0.0.0.0", "--port", "8000"]
```
--------------------------------------------------------------------------------
/src/api/config.py:
--------------------------------------------------------------------------------
```python
"""Configuration management using environment variables."""
import os
import logging
from dataclasses import dataclass
from typing import Optional
import sys
logger = logging.getLogger(__name__)
# Configuration for the API Server (inside Docker)
COOKIE_PATH = "/app/data/cookies.json"
# Amazon URL for your locale (e.g., amazon.com, amazon.co.uk)
# Needs to match the one used for login to construct API paths correctly.
AMAZON_URL = "https://www.amazon.com"
# Logging level for the API server
LOG_LEVEL = "INFO"
# Port the API server listens on inside the container
API_PORT = 8000
# --- Derived --- #
LOG_LEVEL_INT = getattr(logging, LOG_LEVEL.upper(), logging.INFO)
#-def load_config(project_root: Optional[str] = None) -> AppConfig:
#- """Loads configuration from .env file and environment variables."""
#- # Construct the path to the .env file
#- if project_root is None:
#- project_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
#- dotenv_path = os.path.join(project_root, '.env')
#-
#- logger.debug(f"Attempting to load .env file from: {dotenv_path}")
#-
#- # Use find_dotenv to locate the .env file reliably
#- dotenv_path_found = find_dotenv(filename='.env', raise_error_if_not_found=False)
#-
#- if dotenv_path_found:
#- logger.info(f"Loading environment variables from: {dotenv_path_found}")
#- load_dotenv(dotenv_path=dotenv_path_found)
#- else:
#- logger.warning(".env file not found. Relying on environment variables or defaults.")
#-
#- # Load values, providing defaults
#- amazon_url = os.getenv("AMAZON_URL")
#- cookie_path = os.getenv("COOKIE_PATH", "./alexa_cookie.pickle") # Default local path
#- log_level = os.getenv("LOG_LEVEL", "INFO")
#- api_port_str = os.getenv("API_PORT", "8000")
#-
#- # Basic validation
#- if not amazon_url:
#- raise EnvironmentError("AMAZON_URL environment variable is not set.")
#- if not cookie_path:
#- logger.warning("COOKIE_PATH not set, using default '{cookie_path}'.")
#- if log_level.upper() not in ["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"]:
#- logger.warning(f"Invalid LOG_LEVEL '{log_level}', using default 'INFO'.")
#- log_level = "INFO"
#-
#- try:
#- api_port = int(api_port_str)
#- if not (1024 <= api_port <= 65535):
#- logger.warning(f"Invalid API_PORT '{api_port_str}', using default 8000.")
#- api_port = 8000
#- except ValueError:
#- logger.warning(f"Invalid API_PORT '{api_port_str}', using default 8000.")
#- api_port = 8000
#-
#- return AppConfig(
#- amazon_url=amazon_url,
#- cookie_path=cookie_path,
#- log_level=log_level,
#- api_port=api_port
#- )
```
--------------------------------------------------------------------------------
/src/auth/login.py:
--------------------------------------------------------------------------------
```python
"""Script to force login/re-login, generate the Alexa cookie file, and send it to the API container."""
import logging
import sys
import asyncio
import os
from pathlib import Path
import requests # Needed for POSTing cookies
from typing import List, Dict # For cookie formatting type hint
import nodriver as uc
import json # To save cookies locally
import datetime
try:
# Import the local auth config
from . import config as auth_config
except ImportError as e:
print(f"Error importing local config: {e}", file=sys.stderr)
print("Ensure you are running from the project root or have activated the correct environment.", file=sys.stderr)
sys.exit(1)
logger = logging.getLogger("login_script") # Renamed logger for clarity
# Open the configured Amazon URL for the user's locale; sign-in happens manually from there.
direct_signin_url = auth_config.AMAZON_URL
async def post_cookies_to_api(cookies_for_requests: List[Dict]):
"""Posts the cookies (in requests format) to the API endpoint as JSON."""
# Post to API
logger.info(f"Attempting to send cookies as JSON to API endpoint: {auth_config.API_COOKIE_ENDPOINT}")
try:
# Send the list of cookie dictionaries as JSON
response = requests.post(auth_config.API_COOKIE_ENDPOINT, json=cookies_for_requests, timeout=15)
response.raise_for_status()
logger.info(f"Successfully sent cookies to API. Status: {response.status_code}")
return True
except requests.exceptions.ConnectionError as conn_err:
logger.error(f"Could not connect to the API server at {auth_config.API_COOKIE_ENDPOINT}. Is it running?")
logger.error(f"Error: {conn_err}")
except requests.exceptions.Timeout:
logger.error(f"Timeout while sending cookie file to {auth_config.API_COOKIE_ENDPOINT}.")
except requests.exceptions.RequestException as req_err:
logger.error(f"Error sending cookie file to API: {req_err}")
if req_err.response is not None:
logger.error(f"API Response Status: {req_err.response.status_code}")
logger.error(f"API Response Body: {req_err.response.text}")
except Exception as upload_err:
logger.error(f"Unexpected error during cookie file upload: {upload_err}", exc_info=True)
return False
async def main():
"""Opens browser to sign-in page, waits for manual login, then extracts cookies."""
logging.basicConfig(
level=auth_config.LOG_LEVEL_INT,
format='%(asctime)s - %(name)s [%(levelname)s] %(message)s',
stream=sys.stdout
)
logger.info("Starting Alexa authentication process with nodriver...")
browser = None
page = None
# Flag for final outcome
cookie_upload_success = False
try:
# --- Step 1: Open Browser to Sign-in Page --- #
logger.info("Initializing nodriver browser...")
browser = await uc.start() # Headless by default
page = await browser.get(direct_signin_url)
logger.info(f"Navigated directly to: {direct_signin_url}")
# --- Step 2: Wait for Manual Login --- #
print("-" * 60)
print("*** MANUAL LOGIN REQUIRED ***")
print("A browser window should have opened to the Amazon sign-in page.")
print("Please log in manually within that browser window.")
print("If you encounter 2FA or CAPTCHA, complete those steps in the browser.")
input("--> Press Enter here AFTER you have successfully logged in... ")
print("-" * 60)
logger.info("User indicated manual login complete. Attempting to extract cookies.")
        # --- Step 3: Extract and Post Cookies --- #
        logger.info("Proceeding to Step 3: Extract and Post Cookies...")
# --- Start Cookie Extraction Logic --- #
logger.info("Attempting to extract cookies...")
# Use the documented way to get all cookies for the current context/page
# Access cookies through the browser object, not the page/tab object
raw_cookies = await browser.cookies.get_all(requests_cookie_format=True)
if not raw_cookies:
logger.error("Failed to extract cookies after manual login attempt.")
# Add a re-check here based on greeting if possible (might be useful even after manual intervention)
try:
final_greeting = await page.evaluate('''() => {
const el = document.querySelector('#nav-link-accountList .nav-line-1');
return el ? el.innerText.trim() : null;
}''')
if not (final_greeting and "Hello," in final_greeting):
logger.error("Double-check: Still not logged in according to greeting element, despite manual confirmation.")
# No need for specific log if cookies extraction failed BUT login_success was True and greeting was found
except Exception:
logger.warning("Could not perform final greeting check.")
# login_and_upload_success remains False
else:
logger.info(f"Successfully extracted {len(raw_cookies)} raw cookie objects.")
# Convert Cookie objects to JSON serializable list of dicts
serializable_cookies = []
for cookie in raw_cookies:
# Extract common attributes, handle potential None values
cookie_dict = {
"name": getattr(cookie, 'name', None),
"value": getattr(cookie, 'value', None),
"domain": getattr(cookie, 'domain', None),
"path": getattr(cookie, 'path', None),
"expires": getattr(cookie, 'expires', None), # May need conversion if not serializable
"secure": getattr(cookie, 'secure', False),
"httpOnly": getattr(cookie, 'httpOnly', False), # Try direct access
# Add other relevant fields if needed, e.g., sameSite
}
# Filter out None values if necessary, or handle expires conversion
serializable_cookies.append({k: v for k, v in cookie_dict.items() if v is not None})
logger.info(f"Formatted {len(serializable_cookies)} cookies for JSON.")
# Send the *serializable* list to the API
cookie_upload_success = await post_cookies_to_api(serializable_cookies)
if cookie_upload_success:
logger.info("Manual login confirmation received and cookies sent successfully.")
else:
logger.error("Cookie extraction may have succeeded, but upload to API failed. See previous logs.")
# --- End Cookie Extraction Logic --- #
if not cookie_upload_success:
logger.error("Overall process failed (cookie extraction or upload failed). Exiting with error status.")
sys.exit(1)
else:
logger.info("Process completed successfully.")
except Exception as e:
# Catch top-level errors in the main login flow
logger.exception(f"An unexpected error occurred during the nodriver login process: {e}")
sys.exit(1)
finally:
if browser:
logger.info("Closing nodriver browser...")
try:
browser.stop() # Use stop() to close nodriver browser (synchronous)
logger.info("Browser closed.")
except Exception as close_err:
logger.warning(f"Error closing browser: {close_err}")
if __name__ == "__main__":
try:
uc.loop().run_until_complete(main())
except KeyboardInterrupt:
logging.getLogger().info("Login process interrupted by user.")
sys.exit(0)
except Exception as main_err:
# Catch errors happening outside the main async function itself
logging.getLogger().exception(f"Critical error running main: {main_err}")
sys.exit(1)
```
--------------------------------------------------------------------------------
/src/api/alexa_api.py:
--------------------------------------------------------------------------------
```python
"""Functions for interacting with the Alexa API (shopping list)."""
import json
import logging
import requests
from http.cookies import SimpleCookie
from collections import defaultdict
from typing import Optional, List, Dict, Any
# Import the local config module itself
from . import config as api_config
logger = logging.getLogger(__name__)
# Define headers for requests (Consider making configurable or dynamic)
DEFAULT_HEADERS = {
"User-Agent": ("Mozilla/5.0 (iPhone; CPU iPhone OS 13_5_1 like Mac OS X)"
" AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148"
" PitanguiBridge/2.2.345247.0-[HARDWARE=iPhone10_4][SOFTWARE=13.5.1]"),
"Accept": "*/*",
"Accept-Language": "*",
"DNT": "1",
"Upgrade-Insecure-Requests": "1"
}
# --- Cookie Handling ---
# Hardcoded path for cookie loading *within the container*
# Adjusted for JSON format
CONTAINER_COOKIE_PATH = "/app/data/cookies.json"
def load_cookies_from_json_file(cookie_file_path: str) -> Optional[List[Dict[str, Any]]]:
"""Loads cookies from a JSON file (expected list of dicts)."""
try:
with open(cookie_file_path, 'r', encoding='utf-8') as f:
cookies_list = json.load(f) # Load list of cookie dicts
if not isinstance(cookies_list, list):
logger.error(f"Expected a list in {cookie_file_path}, got {type(cookies_list)}.")
return None
# Return the full list of dictionaries for detailed processing
logger.debug(f"Successfully loaded {len(cookies_list)} cookie dicts from JSON: {cookie_file_path}")
return cookies_list
except FileNotFoundError:
logger.error(f"Cookie file not found: {cookie_file_path}")
return None
except json.JSONDecodeError as json_err:
logger.error(f"Failed to decode JSON from {cookie_file_path}: {json_err}")
return None
except Exception as err:
logger.error(f"Failed to load or parse cookies from JSON file {cookie_file_path}: {err}", exc_info=True)
return None
# --- API Request Function ---
def make_authenticated_request(
url: str,
# cookie_file_path: str, # No longer needed as parameter
method: str = 'GET',
payload: Optional[Dict[str, Any]] = None
) -> Optional[requests.Response]:
"""Makes an authenticated request using cookies from the fixed container path."""
try:
session = requests.Session()
session.headers.update(DEFAULT_HEADERS)
# Always load from the container path
cookie_list_of_dicts = load_cookies_from_json_file(CONTAINER_COOKIE_PATH)
if not cookie_list_of_dicts:
logger.error(f"No cookies loaded from {CONTAINER_COOKIE_PATH} for authenticated request.")
return None
# Set cookies individually using requests' set method
for cookie_dict in cookie_list_of_dicts:
name = cookie_dict.get('name')
value = cookie_dict.get('value')
domain = cookie_dict.get('domain')
path = cookie_dict.get('path')
if name and value:
logger.debug(f"Setting cookie: name={name}, domain={domain}, path={path}")
session.cookies.set(
name=name,
value=value,
domain=domain,
path=path
# requests automatically handles secure/expires/httpOnly for its context
# We mainly need name, value, domain, path for session management
)
else:
logger.warning(f"Skipping cookie dict with missing name/value: {cookie_dict}")
logger.debug(f"Making {method} request to {url}")
if method.upper() == 'GET':
response = session.get(url)
elif method.upper() == 'PUT':
logger.debug(f"PUT payload: {payload}")
response = session.put(url, json=payload)
elif method.upper() == 'POST':
logger.debug(f"POST payload: {payload}")
response = session.post(url, json=payload)
elif method.upper() == 'DELETE':
logger.debug(f"DELETE request to {url}")
# Allow DELETE with an optional payload (needed for Alexa API)
response = session.delete(url, json=payload)
else:
logger.error(f"Unsupported method specified: {method}")
return None
response.raise_for_status() # Raise HTTPError for bad responses (4xx or 5xx)
logger.debug(f"Request successful ({response.status_code})")
return response
except requests.exceptions.RequestException as err:
logger.error(f"HTTP request failed: {err}")
return None
except Exception as e:
logger.exception(f"Unexpected error during authenticated request: {e}")
return None
# --- Shopping List Specific Functions ---
def extract_list_items(response_data: Dict[str, Any]) -> Optional[List[Dict[str, Any]]]:
"""Extracts list items from the API response."""
# Adapt based on actual API response structure if needed
for key in response_data.keys():
if isinstance(response_data[key], dict) and 'listItems' in response_data[key]:
return response_data[key]['listItems']
logger.warning("Could not find 'listItems' in response data structure.")
logger.debug(f"Full response keys: {list(response_data.keys())}")
return None
def filter_incomplete_items(list_items: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
"""Filters a list of items to include only those not marked completed."""
return [item for item in list_items if not item.get('completed', False)]
def get_shopping_list_items() -> Optional[List[Dict[str, Any]]]:
"""Gets all items from the Alexa shopping list."""
list_items_url = f"{api_config.AMAZON_URL}/alexashoppinglists/api/getlistitems"
    # Cookies are loaded inside make_authenticated_request from the fixed container path
response = make_authenticated_request(list_items_url, method='GET')
if response:
try:
response_data = response.json()
logger.debug("Successfully retrieved shopping list data.")
return extract_list_items(response_data)
except requests.exceptions.JSONDecodeError as e:
logger.error(f"Failed to decode JSON response from shopping list API: {e}")
logger.debug(f"Response text: {response.text[:500]}") # Log first 500 chars
return None
else:
logger.error("Failed to retrieve shopping list data.")
return None
def add_shopping_list_item(item_value: str) -> bool:
"""Adds a new item to the Alexa shopping list."""
logger.info(f"Adding item to shopping list: {item_value}")
# Use the correct endpoint from documentation
add_item_path = "/alexashoppinglists/api/addlistitem/YW16bjEuYWNjb3VudC5BSERXNEkyVE00U1I0UVQ2VUpINzNWUVpaQU5BLVNIT1BQSU5HX0lURU0="
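    # NOTE: this base64 path segment appears to encode an account/list-specific ID
    # (an "amzn1.account...-SHOPPING_ITEM" identifier when decoded); it may need to be
    # replaced with the ID for your own shopping list.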
url = f"{api_config.AMAZON_URL}{add_item_path}"
payload = {
"value": item_value,
"type": "TASK" # Assuming 'TASK' type, common for shopping/todo lists
}
response = make_authenticated_request(
url,
# config.cookie_path, # Removed
method='POST', # Assuming POST for creation
payload=payload
)
if response and response.status_code == 200: # Assuming 200 OK for success
logger.info(f"Successfully added item: {item_value}")
return True
else:
status = response.status_code if response else 'No Response'
logger.error(f"Failed to add item: {item_value} (Status: {status})")
# Log response text for debugging if available and failed
if response is not None:
logger.debug(f"Add item response text: {response.text[:500]}")
return False
def mark_item_as_completed(list_item: Dict[str, Any]) -> bool:
"""Marks a specific shopping list item as completed via the API."""
return _update_item_completion_status(list_item, completed_status=True)
def delete_shopping_list_item(list_item: Dict[str, Any]) -> bool:
"""Deletes a specific shopping list item via the API."""
item_value = list_item.get('value', 'unknown')
item_id = list_item.get('id')
if not item_id:
logger.error(f"Cannot delete item '{item_value}' without an ID.")
return False
logger.info(f"Deleting item: {item_value} (ID: {item_id})")
# Use the correct base endpoint from documentation
delete_item_path = "/alexashoppinglists/api/deletelistitem"
url = f"{api_config.AMAZON_URL}{delete_item_path}"
# Send the item dict (containing ID) as payload
response = make_authenticated_request(
url,
# config.cookie_path, # Removed
method='DELETE',
payload=list_item # Send the whole item dict
)
# Check for successful deletion (often 200 OK or 204 No Content)
if response and (response.status_code == 200 or response.status_code == 204):
logger.info(f"Successfully deleted item: {item_value}")
return True
else:
status = response.status_code if response else 'No Response'
logger.error(f"Failed to delete item: {item_value} (Status: {status})")
# Log response text for debugging if available and failed
if response is not None:
logger.debug(f"Delete item response text: {response.text[:500]}")
return False
def unmark_item_as_completed(list_item: Dict[str, Any]) -> bool:
"""Unmarks a specific shopping list item as completed via the API."""
return _update_item_completion_status(list_item, completed_status=False)
def _update_item_completion_status(list_item: Dict[str, Any], completed_status: bool) -> bool:
"""Internal helper to update the completed status of an item."""
item_value = list_item.get('value', 'unknown')
action = "Marking" if completed_status else "Unmarking"
action_past = "marked" if completed_status else "unmarked"
logger.info(f"{action} item as completed: {item_value}")
url = f"{api_config.AMAZON_URL}/alexashoppinglists/api/updatelistitem"
list_item_copy = list_item.copy()
list_item_copy['completed'] = completed_status
response = make_authenticated_request(
url,
# config.cookie_path, # Removed
method='PUT',
payload=list_item_copy
)
if response and response.status_code == 200:
logger.info(f"Successfully {action_past} item as completed: {item_value}")
return True
else:
status = response.status_code if response else 'No Response'
logger.error(f"Failed to {action.lower()} item as completed: {item_value} (Status: {status})")
if response is not None:
logger.debug(f"{action} item response text: {response.text[:500]}")
return False
```
--------------------------------------------------------------------------------
/src/api/main.py:
--------------------------------------------------------------------------------
```python
import sys
import os
import logging
from typing import List, Dict, Any, Optional, Union
import json # Added json for saving cookies
# --- Path Modification ---
# Add the project root directory to the Python path
# This allows importing modules from the 'src' directory (e.g., alexa_shopping_list)
project_root = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
if project_root not in sys.path:
sys.path.append(project_root)
# --- End Path Modification ---
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel, Field # For request body validation
# --- Scheduler Imports ---
from apscheduler.schedulers.asyncio import AsyncIOScheduler
from contextlib import asynccontextmanager
import asyncio # For potential sleep in task
# --- End Scheduler Imports ---
# Import necessary components using relative imports
try:
# Use the new local config
from . import config as api_config # Alias to avoid name clashes
from .alexa_api import ( # Relative import
get_shopping_list_items,
add_shopping_list_item,
delete_shopping_list_item,
mark_item_as_completed,
unmark_item_as_completed,
filter_incomplete_items,
# No filter_completed_items, we'll do it inline
)
except ImportError as e:
print(f"FATAL ERROR: Could not import alexa_shopping_list modules: {e}", file=sys.stderr)
print("Ensure the script is run from the project root or the src directory is in PYTHONPATH.", file=sys.stderr)
sys.exit(1)
# --- Globals & Setup ---
# Configure basic logging based on the local config
# Note: Uvicorn will likely handle more advanced logging config when run
logging.basicConfig(level=api_config.LOG_LEVEL_INT, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)
# Suppress noisy library logs based on loaded config
if api_config.LOG_LEVEL_INT > logging.DEBUG:
logging.getLogger("urllib3").setLevel(logging.WARNING)
logging.getLogger("selenium").setLevel(logging.WARNING) # Likely not needed here, but safe
logging.getLogger("webdriver_manager").setLevel(logging.WARNING) # Likely not needed here
logger.debug("Suppressed noisy library logs.")
# --- Scheduler Setup ---
scheduler = AsyncIOScheduler()
async def perform_keep_alive():
"""Task to periodically fetch shopping list to keep session active."""
logger.info("Keep-alive task started: Attempting to fetch shopping list...")
# Check if cookies exist before attempting the request
cookie_path = api_config.COOKIE_PATH
if not os.path.exists(cookie_path):
logger.info(f"Keep-alive skipped: Cookie file not found at {cookie_path}. Login required.")
return # Skip this interval
try:
# Call the function that gets all items, which uses make_authenticated_request
items = get_shopping_list_items()
if items is not None:
logger.info(f"Keep-alive successful: Fetched {len(items)} items.")
else:
# This likely means cookies are invalid/expired or another API error occurred
logger.warning("Keep-alive failed: Could not retrieve shopping list (cookies might be expired). Re-authentication needed.")
except Exception as e:
# Catch any unexpected error during the keep-alive attempt
logger.error(f"Keep-alive task encountered an unexpected error: {e}", exc_info=True)
@asynccontextmanager
async def lifespan(app: FastAPI):
# Startup
logger.info("Starting keep-alive scheduler...")
# Schedule the job to run every 60 seconds
scheduler.add_job(perform_keep_alive, 'interval', seconds=60, id='keep_alive_job')
scheduler.start()
yield
# Shutdown
logger.info("Shutting down keep-alive scheduler...")
scheduler.shutdown()
# --- FastAPI App Instance ---
app = FastAPI(
title="Alexa Shopping List API",
description="API to interact with an Alexa Shopping List using pre-generated cookies.",
version="1.0.0",
lifespan=lifespan # Add the lifespan manager
)
# --- Helper Function ---
def find_item_by_name(items: List[Dict[str, Any]], name: str) -> Dict[str, Any] | None:
"""Finds the first item in a list matching the name (case-insensitive)."""
if items is None:
return None
for item in items:
if item.get("value", "").lower() == name.lower():
return item
return None
# --- Pydantic Models (for Request Bodies) ---
class ItemNameModel(BaseModel):
item_name: str = Field(..., description="The name of the shopping list item.")
# Define a Pydantic model for the expected cookie structure (adjust if needed)
class CookieModel(BaseModel):
name: str
value: str
domain: Optional[str] = None
path: Optional[str] = None
# Add missing fields based on what login.py sends
expires: Optional[Union[str, int, float]] = None # Allow various types for expiry
secure: Optional[bool] = None
httpOnly: Optional[bool] = None
# sameSite: Optional[str] = None # Could add if needed
# --- API Endpoints ---
@app.get("/", tags=["Status"])
async def read_root():
"""Simple health check endpoint."""
return {"status": "Alexa Shopping List API is running"}
@app.get("/items/all", tags=["Items"], response_model=List[Dict[str, Any]])
async def get_all_list_items():
"""Retrieves all items (completed and incomplete) from the shopping list."""
logger.info("Endpoint GET /items/all called.")
items = get_shopping_list_items()
if items is None:
logger.error("Failed to retrieve items from Alexa API.")
raise HTTPException(status_code=503, detail="Could not retrieve shopping list from Alexa.")
return items
@app.get("/items/incomplete", tags=["Items"], response_model=List[Dict[str, Any]])
async def get_incomplete_list_items():
"""Retrieves only the incomplete items from the shopping list."""
logger.info("Endpoint GET /items/incomplete called.")
all_items = get_shopping_list_items() # No longer needs config passed
if all_items is None:
logger.error("Failed to retrieve items from Alexa API.")
raise HTTPException(status_code=503, detail="Could not retrieve shopping list from Alexa.")
incomplete_items = filter_incomplete_items(all_items)
return incomplete_items
@app.get("/items/completed", tags=["Items"], response_model=List[Dict[str, Any]])
async def get_completed_list_items():
"""Retrieves only the completed items from the shopping list."""
logger.info("Endpoint GET /items/completed called.")
all_items = get_shopping_list_items() # No longer needs config passed
if all_items is None:
logger.error("Failed to retrieve items from Alexa API.")
raise HTTPException(status_code=503, detail="Could not retrieve shopping list from Alexa.")
# Filter completed items directly
completed_items = [item for item in all_items if item.get('completed', False)]
return completed_items
@app.post("/items", tags=["Items"], status_code=201) # 201 Created
async def add_new_item(item_data: ItemNameModel):
"""Adds a new item to the shopping list."""
item_name = item_data.item_name
logger.info(f"Endpoint POST /items called to add: '{item_name}'")
success = add_shopping_list_item(item_name) # No longer needs config passed
if not success:
logger.error(f"Failed to add item '{item_name}' via Alexa API.")
raise HTTPException(status_code=500, detail=f"Failed to add item '{item_name}'.")
return {"message": f"Item '{item_name}' added successfully."}
@app.delete("/items", tags=["Items"])
async def remove_item(item_data: ItemNameModel):
"""Deletes an item from the shopping list by name (case-insensitive)."""
item_name = item_data.item_name
logger.info(f"Endpoint DELETE /items called for: '{item_name}'")
all_items = get_shopping_list_items() # No longer needs config passed
item_to_delete = find_item_by_name(all_items or [], item_name)
if not item_to_delete:
logger.warning(f"Item '{item_name}' not found for deletion.")
raise HTTPException(status_code=404, detail=f"Item '{item_name}' not found.")
success = delete_shopping_list_item(item_to_delete) # No longer needs config passed
if not success:
logger.error(f"Failed to delete item '{item_name}' via Alexa API.")
raise HTTPException(status_code=500, detail=f"Failed to delete item '{item_name}'.")
return {"message": f"Item '{item_name}' deleted successfully."}
@app.put("/items/mark_completed", tags=["Items"])
async def mark_item_complete(item_data: ItemNameModel):
"""Marks an item as completed by name (case-insensitive)."""
item_name = item_data.item_name
logger.info(f"Endpoint PUT /items/mark_completed called for: '{item_name}'")
all_items = get_shopping_list_items() # No longer needs config passed
# Find an *incomplete* item matching the name
item_to_mark = find_item_by_name(filter_incomplete_items(all_items or []), item_name)
if not item_to_mark:
logger.warning(f"Incomplete item '{item_name}' not found to mark complete.")
raise HTTPException(status_code=404, detail=f"Incomplete item '{item_name}' not found.")
success = mark_item_as_completed(item_to_mark) # No longer needs config passed
if not success:
logger.error(f"Failed to mark item '{item_name}' completed via Alexa API.")
raise HTTPException(status_code=500, detail=f"Failed to mark item '{item_name}' as completed.")
return {"message": f"Item '{item_name}' marked as completed."}
@app.put("/items/mark_incomplete", tags=["Items"])
async def mark_item_incomplete_endpoint(item_data: ItemNameModel):
"""Marks an item as incomplete by name (case-insensitive)."""
item_name = item_data.item_name
logger.info(f"Endpoint PUT /items/mark_incomplete called for: '{item_name}'")
all_items = get_shopping_list_items() # No longer needs config passed
# Find a *complete* item matching the name
completed_items = [item for item in (all_items or []) if item.get('completed', False)]
item_to_mark = find_item_by_name(completed_items, item_name)
if not item_to_mark:
logger.warning(f"Completed item '{item_name}' not found to mark incomplete.")
raise HTTPException(status_code=404, detail=f"Completed item '{item_name}' not found.")
success = unmark_item_as_completed(item_to_mark) # No longer needs config passed
if not success:
logger.error(f"Failed to mark item '{item_name}' incomplete via Alexa API.")
raise HTTPException(status_code=500, detail=f"Failed to mark item '{item_name}' as incomplete.")
return {"message": f"Item '{item_name}' marked as incomplete."}
# --- Authentication Endpoint ---
@app.post("/auth/cookies", tags=["Authentication"], status_code=200)
async def receive_cookies(cookies_data: List[CookieModel]): # Expect a list of CookieModel
"""Accepts cookies as JSON and saves them to the persistent data volume."""
# Use the COOKIE_PATH directly from the local API config
cookie_path = api_config.COOKIE_PATH
data_dir_container = os.path.dirname(cookie_path) # Get directory from the path
logger.info(f"Received {len(cookies_data)} cookies. Attempting to save as JSON to: {cookie_path}")
# Create directory if it doesn't exist
try:
os.makedirs(data_dir_container, exist_ok=True)
except OSError as e:
logger.error(f"Could not create data directory '{data_dir_container}': {e}", exc_info=True)
raise HTTPException(status_code=500, detail=f"Server error: Could not create data directory.")
try:
# Convert Pydantic models back to dicts for JSON serialization
cookies_list_of_dicts = [cookie.model_dump(exclude_unset=True) for cookie in cookies_data]
# Save the received cookie list as a JSON file
with open(cookie_path, "w", encoding="utf-8") as f:
json.dump(cookies_list_of_dicts, f, indent=2)
logger.info(f"Successfully saved cookie data as JSON to {cookie_path}")
return {"message": "Cookie data received and saved successfully."}
except Exception as e:
logger.error(f"Failed to save cookie data as JSON to {cookie_path}: {e}", exc_info=True)
raise HTTPException(status_code=500, detail=f"Failed to save cookie data: {e}")
# --- Optional: Add main block to run with uvicorn for direct execution ---
if __name__ == "__main__":
import uvicorn
logger.info("Starting Uvicorn server directly for development (keep-alive active)...")
# Note: Host '0.0.0.0' makes it accessible on your network
# Use '127.0.0.1' for local access only
# Reload=True is for development, disable for production
uvicorn.run("main:app", host="127.0.0.1", port=8000, reload=True)
```
--------------------------------------------------------------------------------
/src/mcp/mcp_server.py:
--------------------------------------------------------------------------------
```python
#!/usr/bin/env python3
import sys
import os
import logging
import requests # For making API calls
import json
from typing import List, Dict, Any, Optional, Union
from pathlib import Path
# (No sys.path modification needed; configuration is read from the local config module below.)
# Import the local config
try:
from . import config as mcp_config
except ImportError as e:
print(f"Error importing local MCP config: {e}", file=sys.stderr)
print("Ensure you are running from the project root or have activated the correct environment.", file=sys.stderr)
sys.exit(1)
from fastmcp import FastMCP
# Configure logging based on local config
logging.basicConfig(level=mcp_config.LOG_LEVEL_INT, format='%(asctime)s - %(name)s [%(levelname)s] %(message)s')
logger = logging.getLogger(__name__)
# (File logging removed; log output goes to the stream configured by basicConfig above.)
# API server configuration
# Use base URL directly from local config
API_BASE_URL = mcp_config.API_BASE_URL
logger.info(f"MCP Server configured to connect to API at: {mcp_config.API_BASE_URL}")
# Suppress noisy library logs based on loaded config
if mcp_config.LOG_LEVEL_INT > logging.DEBUG:
logging.getLogger("requests").setLevel(logging.WARNING)
logging.getLogger("urllib3").setLevel(logging.WARNING)
logger.debug("Suppressed noisy library logs.")
print("--- DEBUG: Initializing FastMCP server...", file=sys.stderr)
# --- FastMCP Server Instance ---
mcp = FastMCP("Alexa Shopping List")
print("--- DEBUG: FastMCP server instance created.", file=sys.stderr)
# --- Helper Functions ---
def make_api_request(method: str, endpoint: str, json_data: Optional[Dict] = None) -> Dict:
"""Makes a request to the FastAPI server and handles errors."""
url = f"{API_BASE_URL}{endpoint}"
logger.debug(f"Making {method} request to FastAPI: {url}")
try:
if method.upper() == "GET":
response = requests.get(url)
elif method.upper() == "POST":
response = requests.post(url, json=json_data)
elif method.upper() == "PUT":
response = requests.put(url, json=json_data)
elif method.upper() == "DELETE":
response = requests.delete(url, json=json_data)
else:
logger.error(f"Unsupported HTTP method: {method}")
return {"error": f"Unsupported HTTP method: {method}"}
# Raise exception for 4xx/5xx status codes
response.raise_for_status()
# Try to parse JSON, fall back to text if not JSON
try:
return response.json()
except json.JSONDecodeError:
return {"message": response.text}
except requests.exceptions.ConnectionError:
logger.error(f"Connection error: Could not connect to FastAPI server at {API_BASE_URL}")
return {"error": "Could not connect to FastAPI server. Is it running?"}
except requests.exceptions.HTTPError as e:
logger.error(f"HTTP error: {e}")
# Try to get error details from the response
try:
error_detail = response.json().get("detail", str(e))
except (json.JSONDecodeError, AttributeError):
error_detail = str(e)
return {"error": error_detail}
except Exception as e:
logger.error(f"Error making API request: {e}")
return {"error": str(e)}
# --- Tool Definitions ---
# These now proxy requests to our FastAPI server
@mcp.tool()
def get_all_items() -> list[dict]:
"""
Retrieves all items currently on the Alexa shopping list, including both active (incomplete) and completed items.
Returns a list of dictionaries, where each dictionary represents an item and includes keys like 'id', 'value', and 'completed'.
An empty list is returned if the shopping list is empty or an error occurs.
"""
logger.info("Tool 'get_all_items' called.")
response = make_api_request("GET", "/items/all")
if "error" in response:
logger.error(f"Error in get_all_items: {response['error']}")
return [] # Return empty list on error
# Make sure we return a list even if API somehow returns something else
if isinstance(response, list):
return response # API already returns the list format we need
else:
logger.warning(f"Unexpected response format from API, expected list but got: {type(response)}")
return []
@mcp.tool()
def get_incomplete_items() -> list[dict]:
"""
Retrieves only the active (incomplete) items currently on the Alexa shopping list.
This is useful for seeing what still needs to be purchased.
Returns a list of dictionaries, each representing an item with keys like 'id', 'value', and 'completed' (which will be false).
An empty list is returned if there are no active items or an error occurs.
"""
logger.info("Tool 'get_incomplete_items' called.")
response = make_api_request("GET", "/items/incomplete")
if "error" in response:
logger.error(f"Error in get_incomplete_items: {response['error']}")
return []
# Make sure we return a list even if API somehow returns something else
if isinstance(response, list):
return response
else:
logger.warning(f"Unexpected response format from API, expected list but got: {type(response)}")
return []
@mcp.tool()
def get_completed_items() -> list[dict]:
"""
Retrieves only the completed items currently on the Alexa shopping list.
This shows items that have been marked as done.
Returns a list of dictionaries, each representing an item with keys like 'id', 'value', and 'completed' (which will be true).
An empty list is returned if there are no completed items or an error occurs.
"""
logger.info("Tool 'get_completed_items' called.")
response = make_api_request("GET", "/items/completed")
if "error" in response:
logger.error(f"Error in get_completed_items: {response['error']}")
return []
# Make sure we return a list even if API somehow returns something else
if isinstance(response, list):
return response
else:
logger.warning(f"Unexpected response format from API, expected list but got: {type(response)}")
return []
@mcp.tool()
def add_item(item_name: Union[str, List[str]]) -> dict:
"""
Adds one or more new items to the Alexa shopping list.
Input can be a single item name as a string (e.g., "Milk") or a list of item names as strings (e.g., ["Eggs", "Bread"]).
Returns a dictionary indicating the overall success or failure and a summary message.
If adding multiple items, it attempts to add each one; the overall result is success only if all additions succeed.
"""
logger.info(f"Tool 'add_item' called with item_name(s): '{item_name}'")
item_names = [item_name] if isinstance(item_name, str) else item_name
results = []
all_succeeded = True
for name in item_names:
if not isinstance(name, str) or not name.strip():
logger.warning(f"Skipping invalid item name: {name}")
results.append({"item": name, "success": False, "message": "Invalid item name provided."})
all_succeeded = False
continue
response = make_api_request("POST", "/items", {"item_name": name.strip()})
success = "error" not in response
message = response.get("message", response.get("error", "Unknown result"))
results.append({"item": name.strip(), "success": success, "message": message})
if not success:
all_succeeded = False
logger.error(f"Error adding item '{name.strip()}': {message}")
# Construct summary message
if len(item_names) == 1:
summary_message = results[0]['message']
else:
success_count = sum(1 for r in results if r['success'])
fail_count = len(results) - success_count
if all_succeeded:
summary_message = f"Successfully added {success_count} items."
elif success_count > 0:
summary_message = f"Added {success_count} items, failed to add {fail_count} items. Check logs for details."
else:
summary_message = f"Failed to add all {fail_count} items. Check logs for details."
return {"success": all_succeeded, "message": summary_message, "details": results}
@mcp.tool()
def delete_item(item_name: Union[str, List[str]]) -> dict:
"""
Deletes one or more items from the Alexa shopping list by their exact name (case-insensitive).
Input can be a single item name as a string (e.g., "Milk") or a list of item names as strings (e.g., ["Old Bread", "Expired Yogurt"]).
Requires an exact match of the item name to find it on the list. If multiple items have the same name, only one might be deleted per name provided.
Returns a dictionary indicating the overall success or failure and a summary message.
If deleting multiple items, it attempts each one; the overall result is success only if all deletions succeed.
"""
logger.info(f"Tool 'delete_item' called with item_name(s): '{item_name}'")
item_names = [item_name] if isinstance(item_name, str) else item_name
results = []
all_succeeded = True
for name in item_names:
if not isinstance(name, str) or not name.strip():
logger.warning(f"Skipping invalid item name for deletion: {name}")
results.append({"item": name, "success": False, "message": "Invalid item name provided."})
all_succeeded = False
continue
response = make_api_request("DELETE", "/items", {"item_name": name.strip()})
success = "error" not in response
message = response.get("message", response.get("error", "Unknown result"))
results.append({"item": name.strip(), "success": success, "message": message})
if not success:
all_succeeded = False
logger.error(f"Error deleting item '{name.strip()}': {message}")
# Construct summary message
if len(item_names) == 1:
summary_message = results[0]['message']
else:
success_count = sum(1 for r in results if r['success'])
fail_count = len(results) - success_count
if all_succeeded:
summary_message = f"Successfully deleted {success_count} items."
elif success_count > 0:
summary_message = f"Deleted {success_count} items, failed to delete {fail_count} items (may not exist or error occurred). Check logs."
else:
summary_message = f"Failed to delete any of the {fail_count} specified items (may not exist or error occurred). Check logs."
return {"success": all_succeeded, "message": summary_message, "details": results}
@mcp.tool()
def mark_item_completed(item_name: Union[str, List[str]]) -> dict:
"""
Marks one or more items on the Alexa shopping list as completed by their exact name (case-insensitive).
Input can be a single item name as a string (e.g., "Milk") or a list of item names as strings (e.g., ["Eggs", "Bread"]).
Requires an exact match of the item name to find it on the list. If multiple items have the same name, only one might be marked per name provided.
Returns a dictionary indicating the overall success or failure and a summary message.
If marking multiple items, it attempts each one; the overall result is success only if all attempts succeed.
"""
logger.info(f"Tool 'mark_item_completed' called with item_name(s): '{item_name}'")
item_names = [item_name] if isinstance(item_name, str) else item_name
results = []
all_succeeded = True
for name in item_names:
if not isinstance(name, str) or not name.strip():
logger.warning(f"Skipping invalid item name for completion: {name}")
results.append({"item": name, "success": False, "message": "Invalid item name provided."})
all_succeeded = False
continue
response = make_api_request("PUT", "/items/mark_completed", {"item_name": name.strip()})
success = "error" not in response
message = response.get("message", response.get("error", "Unknown result"))
results.append({"item": name.strip(), "success": success, "message": message})
if not success:
all_succeeded = False
logger.error(f"Error marking item '{name.strip()}' completed: {message}")
# Construct summary message
if len(item_names) == 1:
summary_message = results[0]['message']
else:
success_count = sum(1 for r in results if r['success'])
fail_count = len(results) - success_count
if all_succeeded:
summary_message = f"Successfully marked {success_count} items as completed."
elif success_count > 0:
summary_message = f"Marked {success_count} items completed, failed to mark {fail_count} items (may not exist or error occurred). Check logs."
else:
summary_message = f"Failed to mark any of the {fail_count} specified items as completed (may not exist or error occurred). Check logs."
return {"success": all_succeeded, "message": summary_message, "details": results}
@mcp.tool()
def mark_item_incomplete(item_name: Union[str, List[str]]) -> dict:
"""
Marks one or more previously completed items on the Alexa shopping list as incomplete (active) by their exact name (case-insensitive).
Input can be a single item name as a string (e.g., "Milk") or a list of item names as strings (e.g., ["Eggs", "Bread"]).
Requires an exact match of the item name to find it on the list. If multiple items have the same name, only one might be marked per name provided.
Use this if an item was marked completed by mistake.
Returns a dictionary indicating the overall success or failure and a summary message.
If marking multiple items, it attempts each one; the overall result is success only if all attempts succeed.
"""
logger.info(f"Tool 'mark_item_incomplete' called with item_name(s): '{item_name}'")
item_names = [item_name] if isinstance(item_name, str) else item_name
results = []
all_succeeded = True
for name in item_names:
if not isinstance(name, str) or not name.strip():
logger.warning(f"Skipping invalid item name for marking incomplete: {name}")
results.append({"item": name, "success": False, "message": "Invalid item name provided."})
all_succeeded = False
continue
response = make_api_request("PUT", "/items/mark_incomplete", {"item_name": name.strip()})
success = "error" not in response
message = response.get("message", response.get("error", "Unknown result"))
results.append({"item": name.strip(), "success": success, "message": message})
if not success:
all_succeeded = False
logger.error(f"Error marking item '{name.strip()}' incomplete: {message}")
# Construct summary message
if len(item_names) == 1:
summary_message = results[0]['message']
else:
success_count = sum(1 for r in results if r['success'])
fail_count = len(results) - success_count
if all_succeeded:
summary_message = f"Successfully marked {success_count} items as incomplete."
elif success_count > 0:
summary_message = f"Marked {success_count} items incomplete, failed to mark {fail_count} items (may not exist or error occurred). Check logs."
else:
summary_message = f"Failed to mark any of the {fail_count} specified items as incomplete (may not exist or error occurred). Check logs."
return {"success": all_succeeded, "message": summary_message, "details": results}
# --- API Status Check ---
@mcp.tool()
def check_api_status() -> dict:
"""
Checks if the backend FastAPI server (responsible for communicating with the actual Alexa API) is running and accessible.
This is useful for diagnosing connection issues between the MCP server and the FastAPI server.
Returns a dictionary with 'status' ('OK' or 'ERROR') and a descriptive 'message'.
"""
logger.info("Tool 'check_api_status' called.")
response = make_api_request("GET", "/")
if "error" in response:
logger.error(f"API status check failed: {response['error']}")
return {
"status": "ERROR",
"message": f"FastAPI server is not accessible: {response['error']}"
}
return {
"status": "OK",
"message": "FastAPI server is running and accessible.",
"details": response
}
# --- Run Server ---
if __name__ == "__main__":
print("--- DEBUG: Entering __main__ block.", file=sys.stderr)
logger.info("Starting FastMCP server...")
print("--- MCP Server: Starting ---", file=sys.stderr); sys.stderr.flush()
# Initial API health check with added error handling
# --- TEMPORARILY DISABLED Initial API Health Check for Debugging Startup ---
print("--- MCP Server: Skipping initial API health check... ---", file=sys.stderr); sys.stderr.flush()
# --- End Disabled Check ---
try:
print("--- DEBUG: Calling mcp.run()...", file=sys.stderr)
print("--- MCP Server: Entering mcp.run() ---", file=sys.stderr); sys.stderr.flush()
mcp.run()
print("--- DEBUG: mcp.run() completed (or exited).", file=sys.stderr)
except Exception as e:
print(f"--- MCP Server FATAL ERROR: Exception from mcp.run(): {e} ---", file=sys.stderr)
logger.exception(f"Exception from mcp.run(): {e}") # Log with traceback via logger
import traceback
traceback.print_exc(file=sys.stderr) # Also print traceback directly
sys.stderr.flush()
sys.exit(1) # Ensure exit on error from run
finally:
print("--- MCP Server: mcp.run() exited ---", file=sys.stderr); sys.stderr.flush()
logger.info("FastMCP server finished.")
print("--- DEBUG: Exiting __main__ block normally.", file=sys.stderr)
```