# Directory Structure

```
├── .env
├── Dockerfile
├── example
│   ├── config-smithery.json
│   ├── config.json
│   ├── docker-config.json
│   ├── pydantic_ai_repl.py
│   └── README.md
├── LICENSE
├── pyproject.toml
├── README.md
├── smithery.yaml
└── src
    └── mem0_mcp_server
        ├── __init__.py
        ├── config.json
        ├── http_entry.py
        ├── mcp.json
        ├── py.typed
        ├── schemas.py
        └── server.py
```

# Files

--------------------------------------------------------------------------------
/.env:
--------------------------------------------------------------------------------

```
MEM0_API_KEY=<your-api-key>
OPENAI_API_KEY=<your-openai-api-key>
MEM0_DEFAULT_USER_ID=<your-mem0-user-id>

```

--------------------------------------------------------------------------------
/example/README.md:
--------------------------------------------------------------------------------

```markdown
# Pydantic AI Demo

This directory contains a Pydantic AI agent to interactively test the Mem0 MCP server.

## Quick Start

```bash
# Install the package
pip install mem0-mcp-server
# Or with uv
uv pip install mem0-mcp-server

# Set your API keys
export MEM0_API_KEY="m0-..."
export OPENAI_API_KEY="sk-..."

# Run the REPL
python example/pydantic_ai_repl.py
```

## Using Different Server Configurations

### Local Server (default)
```bash
python example/pydantic_ai_repl.py
```

### Docker Container
```bash
# Start Docker container
docker run --rm -d \
  --name mem0-mcp \
  -e MEM0_API_KEY="m0-..." \
  -p 8080:8081 \
  mem0-mcp-server

# Run agent pointing to Docker
export MEM0_MCP_CONFIG_PATH=example/docker-config.json
export MEM0_MCP_CONFIG_SERVER=mem0-docker
python example/pydantic_ai_repl.py
```

### Smithery Remote Server
```bash
export MEM0_MCP_CONFIG_PATH=example/config-smithery.json
export MEM0_MCP_CONFIG_SERVER=mem0-memory-mcp
python example/pydantic_ai_repl.py
```

## What Happens

1. The script loads the configuration from `example/config.json` by default
2. It starts or connects to the Mem0 MCP server
3. A Pydantic AI agent (Mem0Guide) connects to the server
4. You get an interactive REPL to test memory operations

## Example Prompts

- "Remember that I love tiramisu"
- "Search for my food preferences"
- "Update my project: the mobile app is now 80% complete"
- "Show me all memories about project Phoenix"
- "Delete memories from 2023"

## Config Files

- `config.json` - Local server (default)
- `docker-config.json` - Connect to Docker container on port 8080
- `config-smithery.json` - Connect to Smithery remote server

You can create custom configs by copying and modifying these files.
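
For example, a custom config that points the agent at an HTTP server on a different port might look like this (the server name and port below are placeholders; pair it with `MEM0_MCP_CONFIG_SERVER=mem0-remote`):

```json
{
  "mcpServers": {
    "mem0-remote": {
      "type": "http",
      "url": "http://localhost:9000/mcp"
    }
  }
}
```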
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
# Mem0 MCP Server

[![PyPI version](https://img.shields.io/pypi/v/mem0-mcp-server.svg)](https://pypi.org/project/mem0-mcp-server/) [![License: Apache 2.0](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](LICENSE) [![smithery badge](https://smithery.ai/badge/@mem0ai/mem0-memory-mcp)](https://smithery.ai/server/@mem0ai/mem0-memory-mcp)

`mem0-mcp-server` wraps the official [Mem0](https://mem0.ai) Memory API as a Model Context Protocol (MCP) server so any MCP-compatible client (Claude Desktop, Cursor, custom agents) can add, search, update, and delete long-term memories.

## Tools

The server exposes the following tools to your LLM:

| Tool                  | Description                                                                       |
| --------------------- | --------------------------------------------------------------------------------- |
| `add_memory`          | Save text or conversation history (or explicit message objects) for a user/agent. |
| `search_memories`     | Semantic search across existing memories (filters + limit supported).             |
| `get_memories`        | List memories with structured filters and pagination.                             |
| `get_memory`          | Retrieve one memory by its `memory_id`.                                           |
| `update_memory`       | Overwrite a memory's text once the user confirms the `memory_id`.                 |
| `delete_memory`       | Delete a single memory by `memory_id`.                                            |
| `delete_all_memories` | Bulk delete all memories in the confirmed scope (user/agent/app/run).             |
| `delete_entities`     | Delete a user/agent/app/run entity (and its memories).                            |
| `list_entities`       | Enumerate users/agents/apps/runs stored in Mem0.                                  |

All responses are JSON strings returned directly from the Mem0 API.
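
Since every tool call yields a JSON string, a client typically runs it through `json.loads` before use. A minimal sketch (the payload shape below is illustrative, not the exact Mem0 schema):

```python
import json

# Illustrative payload only; consult the Mem0 API docs for the real schema.
raw = '{"results": [{"id": "mem-123", "memory": "Prefers dark roast coffee"}]}'

payload = json.loads(raw)
memory_ids = [item["id"] for item in payload.get("results", [])]
print(memory_ids)  # ['mem-123']
```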

## Usage Options

There are three ways to use the Mem0 MCP Server:

1. **Python Package** - Install and run locally using `uvx` with any MCP client
2. **Docker** - Containerized deployment that exposes an `/mcp` HTTP endpoint
3. **Smithery** - Remote hosted service for managed deployments

## Quick Start

### Installation

```bash
uv pip install mem0-mcp-server
```

Or with pip:

```bash
pip install mem0-mcp-server
```

### Client Configuration

Add this configuration to your MCP client:

```json
{
  "mcpServers": {
    "mem0": {
      "command": "uvx",
      "args": ["mem0-mcp-server"],
      "env": {
        "MEM0_API_KEY": "m0-...",
        "MEM0_DEFAULT_USER_ID": "your-handle"
      }
    }
  }
}
```

### Test with the Python Agent

<details>
<summary><strong>Click to expand: Test with the Python Agent</strong></summary>

To test the server immediately, use the included Pydantic AI agent:

```bash
# Install the package
pip install mem0-mcp-server
# Or with uv
uv pip install mem0-mcp-server

# Set your API keys
export MEM0_API_KEY="m0-..."
export OPENAI_API_KEY="sk-..."

# Clone and test with the agent
git clone https://github.com/mem0ai/mem0-mcp.git
cd mem0-mcp
python example/pydantic_ai_repl.py
```

**Using different server configurations:**

```bash
# Use with Docker container
export MEM0_MCP_CONFIG_PATH=example/docker-config.json
export MEM0_MCP_CONFIG_SERVER=mem0-docker
python example/pydantic_ai_repl.py

# Use with Smithery remote server
export MEM0_MCP_CONFIG_PATH=example/config-smithery.json
export MEM0_MCP_CONFIG_SERVER=mem0-memory-mcp
python example/pydantic_ai_repl.py
```

</details>

## What You Can Do

The Mem0 MCP server enables powerful memory capabilities for your AI applications:

- "Remember that I'm allergic to peanuts and shellfish" (add new health information to memory)
- "Store these trial parameters: 200 participants, double-blind, placebo-controlled study" (save research data)
- "What do you know about my dietary preferences?" (search and retrieve all food-related memories)
- "Update my project status: the mobile app is now 80% complete" (modify an existing memory with new info)
- "Delete all memories from 2023, I need a fresh start" (bulk remove outdated memories)
- "Show me everything I've saved about the Phoenix project" (list all memories for a specific topic)

## Configuration

### Environment Variables

- `MEM0_API_KEY` (required) – Mem0 platform API key.
- `MEM0_DEFAULT_USER_ID` (optional) – default `user_id` injected into filters and write requests (defaults to `mem0-mcp`).
- `MEM0_ENABLE_GRAPH_DEFAULT` (optional) – Enable graph memories by default (defaults to `false`).
- `MEM0_MCP_AGENT_MODEL` (optional) – default LLM for the bundled agent example (defaults to `openai:gpt-5`).
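
Boolean flags such as `MEM0_ENABLE_GRAPH_DEFAULT` are parsed case-insensitively against a small truthy set; this sketch mirrors the check in `server.py`:

```python
import os

def env_flag(name: str, default: str = "false") -> bool:
    # Accept the same truthy spellings server.py uses: "1", "true", "yes".
    return os.getenv(name, default).lower() in {"1", "true", "yes"}

os.environ["MEM0_ENABLE_GRAPH_DEFAULT"] = "TRUE"
print(env_flag("MEM0_ENABLE_GRAPH_DEFAULT"))  # True
```

Any other value (including an unset variable) is treated as `false`.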

## Advanced Setup

<details>
<summary><strong>Click to expand: Docker, Smithery, and Development</strong></summary>

### Docker Deployment

To run with Docker:

1. Build the image:

   ```bash
   docker build -t mem0-mcp-server .
   ```

2. Run the container:

   ```bash
   docker run --rm -d \
     --name mem0-mcp \
     -e MEM0_API_KEY=m0-... \
     -p 8080:8081 \
     mem0-mcp-server
   ```

3. Monitor the container:

   ```bash
   # View logs
   docker logs -f mem0-mcp

   # Check status
   docker ps
   ```

### Running with Smithery Remote Server

To connect to a Smithery-hosted server:

1. Install the MCP server (Smithery dependencies are now bundled):

   ```bash
   pip install mem0-mcp-server
   ```

2. Configure MCP client with Smithery:
   ```json
   {
     "mcpServers": {
       "mem0-memory-mcp": {
         "command": "npx",
         "args": [
           "-y",
           "@smithery/cli@latest",
           "run",
           "@mem0ai/mem0-memory-mcp",
           "--key",
           "your-smithery-key",
           "--profile",
           "your-profile-name"
         ],
         "env": {
           "MEM0_API_KEY": "m0-..."
         }
       }
     }
   }
   ```

### Development Setup

Clone and run from source:

```bash
git clone https://github.com/mem0ai/mem0-mcp.git
cd mem0-mcp
pip install -e ".[dev]"

# Run locally
mem0-mcp-server

# Or with uv
uv sync
uv run mem0-mcp-server
```

</details>

## License

[Apache License 2.0](https://github.com/mem0ai/mem0-mcp/blob/main/LICENSE)

```

--------------------------------------------------------------------------------
/smithery.yaml:
--------------------------------------------------------------------------------

```yaml
runtime: "python"

```

--------------------------------------------------------------------------------
/src/mem0_mcp_server/__init__.py:
--------------------------------------------------------------------------------

```python
"""Mem0 MCP server package."""

from .server import main

__all__ = ["main"]

```

--------------------------------------------------------------------------------
/example/docker-config.json:
--------------------------------------------------------------------------------

```json
{
  "mcpServers": {
    "mem0-docker": {
      "type": "http",
      "url": "http://localhost:8080/mcp"
    }
  }
}

```

--------------------------------------------------------------------------------
/src/mem0_mcp_server/mcp.json:
--------------------------------------------------------------------------------

```json
{
  "name": "Mem0 Memory",
  "description": "Full read/write access to your Mem0 long-term memory",
  "url": "stdio"
}

```

--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------

```dockerfile
FROM python:3.12-slim

ENV PYTHONUNBUFFERED=1 \
    UV_SYSTEM_PYTHON=1

WORKDIR /app

RUN pip install --no-cache-dir uv

COPY pyproject.toml README.md ./
COPY src ./src

RUN uv pip install --system .

ENV PORT=8081

CMD ["python", "-m", "mem0_mcp_server.http_entry"]

```

--------------------------------------------------------------------------------
/src/mem0_mcp_server/config.json:
--------------------------------------------------------------------------------

```json
{
  "mcpServers": {
    "mem0": {
      "command": "${MEM0_MCP_COMMAND:-uvx}",
      "args": ["${MEM0_MCP_BINARY:-mem0-mcp-server}"],
      "env": {
        "MEM0_API_KEY": "${MEM0_API_KEY}",
        "MEM0_DEFAULT_USER_ID": "${MEM0_DEFAULT_USER_ID:-mem0-mcp}"
      }
    }
  }
}

```
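
The `${VAR}` and `${VAR:-default}` placeholders above are resolved from the environment when the config is loaded. A hedged sketch of the expansion semantics (the real implementation lives in the config loader, not shown here):

```python
import os
import re

_PLACEHOLDER = re.compile(r"\$\{(\w+)(?::-([^}]*))?\}")

def expand(value: str) -> str:
    """Substitute ${VAR} and ${VAR:-default} from the environment."""
    def replace(match: re.Match[str]) -> str:
        env_value = os.getenv(match.group(1))
        return env_value if env_value is not None else (match.group(2) or "")
    return _PLACEHOLDER.sub(replace, value)

os.environ["MEM0_DEFAULT_USER_ID"] = "alice"
os.environ.pop("MEM0_MCP_COMMAND", None)
print(expand("${MEM0_DEFAULT_USER_ID:-mem0-mcp}"))  # alice
print(expand("${MEM0_MCP_COMMAND:-uvx}"))           # uvx
```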

--------------------------------------------------------------------------------
/example/config.json:
--------------------------------------------------------------------------------

```json
{
  "mcpServers": {
    "mem0-local": {
      "command": "${MEM0_MCP_COMMAND:-python}",
      "args": ["-m", "mem0_mcp_server.server"],
      "env": {
        "MEM0_API_KEY": "${MEM0_API_KEY}",
        "MEM0_DEFAULT_USER_ID": "${MEM0_DEFAULT_USER_ID:-mem0-mcp}"
      },
      "timeout": "${MEM0_MCP_SERVER_TIMEOUT:-30}"
    }
  }
}
```

--------------------------------------------------------------------------------
/example/config-smithery.json:
--------------------------------------------------------------------------------

```json
{
  "mcpServers": {
    "mem0-memory-mcp": {
      "command": "npx",
      "args": [
        "-y",
        "@smithery/cli@latest",
        "run",
        "@mem0ai/mem0-memory-mcp",
        "--key",
        "your-smithery-key-here",
        "--profile",
        "your-profile-name-here"
      ],
      "env": {
        "MEM0_API_KEY": "${MEM0_API_KEY}",
        "MEM0_DEFAULT_USER_ID": "${MEM0_DEFAULT_USER_ID:-mem0-mcp}"
      },
      "timeout": "${MEM0_MCP_SERVER_TIMEOUT:-30}"
    }
  }
}
```

--------------------------------------------------------------------------------
/src/mem0_mcp_server/http_entry.py:
--------------------------------------------------------------------------------

```python
"""Production HTTP entry point for Smithery and other container hosts."""

from __future__ import annotations

import os

from .server import create_server


def main() -> None:
    server = create_server()
    # Ensure runtime overrides are respected if Smithery injects a different port/host.
    server.settings.host = os.getenv("HOST", server.settings.host)
    server.settings.port = int(os.getenv("PORT", server.settings.port))
    server.run(transport="streamable-http")


if __name__ == "__main__":
    main()

```

--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------

```toml
[build-system]
requires = ["hatchling>=1.27.0"]
build-backend = "hatchling.build"

[project]
name = "mem0-mcp-server"
version = "0.2.1"
description = "Model Context Protocol server that exposes the Mem0 long-term memory API as tools"
readme = "README.md"
license = {text = "Apache-2.0"}
authors = [{name = "Mem0"}]
requires-python = ">=3.10"
keywords = [
    "mcp",
    "mem0",
    "memory",
    "agents",
    "tooling",
    "llm",
    "anthropic",
    "claude",
]
classifiers = [
    "Intended Audience :: Developers",
    "License :: OSI Approved :: Apache Software License",
    "Programming Language :: Python",
    "Programming Language :: Python :: 3",
    "Programming Language :: Python :: 3.10",
    "Programming Language :: Python :: 3.11",
    "Programming Language :: Python :: 3.12",
    "Operating System :: OS Independent",
    "Topic :: Software Development :: Libraries",
    "Typing :: Typed",
]

dependencies = [
    "mcp[cli]>=1.6.0",
    "mem0ai>=1.0.1",
    "python-dotenv>=1.2.1",
    "requests>=2.32.5",
    "pydantic-ai-slim[mcp]>=1.14.1",
    "smithery>=0.4.2",
]

[project.urls]
Homepage = "https://mem0.ai"
Repository = "https://github.com/mem0ai/mem0-mcp"
Documentation = "https://docs.mem0.ai"

[project.optional-dependencies]
agent = ["pydantic-ai-slim[mcp]>=1.14.1", "python-dotenv>=1.2.1"]

[dependency-groups]
dev = [
    "pytest>=8.3.4",
    "ruff>=0.7.0",
    "mypy>=1.18.2",
]

[project.scripts]
mem0-mcp-server = "mem0_mcp_server.server:main"
dev = "smithery.cli.dev:main"
start = "smithery.cli.start:main"
playground = "smithery.cli.playground:main"

[project.entry-points."mcp.servers"]
mem0 = "mem0_mcp_server:mcp.json"

[tool.smithery]
server = "mem0_mcp_server.server:create_server"

[tool.hatch.build.targets.wheel]
packages = ["src/mem0_mcp_server"]

[tool.hatch.build.targets.wheel.shared-data]
"src/mem0_mcp_server/mcp.json" = "share/mcp/servers/mem0-mcp-server.json"
"src/mem0_mcp_server/py.typed" = "mem0_mcp_server/py.typed"
"src/mem0_mcp_server/config.json" = "share/mcp/configs/mem0-mcp-server.json"

[tool.ruff]
target-version = "py310"
line-length = 100

[tool.mypy]
python_version = "3.10"
strict = true

```

--------------------------------------------------------------------------------
/src/mem0_mcp_server/schemas.py:
--------------------------------------------------------------------------------

```python
"""Shared Pydantic models for the Mem0 MCP server."""

from __future__ import annotations

from typing import Any, Dict, Optional

from pydantic import BaseModel, Field


# Classic role/content message structure shared by all payloads; it does not change.
class ToolMessage(BaseModel):
    role: str = Field(..., description="Role of the speaker, e.g., user or assistant.")
    content: str = Field(..., description="Full text of the utterance to store.")


class ConfigSchema(BaseModel):
    """Session-level overrides used when hosting via Smithery or HTTP."""

    mem0_api_key: str = Field(..., description="Mem0 API key (required)")
    default_user_id: Optional[str] = Field(
        None, description="Default user_id injected into filters when unspecified."
    )
    enable_graph_default: Optional[bool] = Field(
        None, description="Default enable_graph toggle when clients omit the flag."
    )


class AddMemoryArgs(BaseModel):
    text: Optional[str] = Field(
        None, description="Simple sentence to remember; converted into a user message when set."
    )
    messages: Optional[list[ToolMessage]] = Field(
        None,
        description=(
            "Explicit role/content history for durable storage. Provide this OR `text`; defaults "
            "to the server user_id."
        ),
    )
    user_id: Optional[str] = Field(None, description="Override for the Mem0 user ID.")
    agent_id: Optional[str] = Field(None, description="Optional agent identifier.")
    app_id: Optional[str] = Field(None, description="Optional app identifier.")
    run_id: Optional[str] = Field(None, description="Optional run identifier.")
    metadata: Optional[Dict[str, Any]] = Field(None, description="Opaque metadata to persist.")
    enable_graph: Optional[bool] = Field(
        None, description="Only set True if the user explicitly opts into graph storage."
    )


# Request models that carry filters start here.
class SearchMemoriesArgs(BaseModel):
    query: str = Field(..., description="Describe what you want to find.")
    filters: Optional[Dict[str, Any]] = Field(
        None, description="Additional filter clauses; user_id is injected automatically."
    )
    limit: Optional[int] = Field(None, description="Optional maximum number of matches.")
    enable_graph: Optional[bool] = Field(
        None, description="Set True only when the user asks for graph knowledge."
    )


class GetMemoriesArgs(BaseModel):
    filters: Optional[Dict[str, Any]] = Field(
        None, description="Structured filters; user_id injected automatically."
    )
    page: Optional[int] = Field(None, description="1-indexed page number.")
    page_size: Optional[int] = Field(None, description="Number of memories per page.")
    enable_graph: Optional[bool] = Field(
        None, description="Set True only when the user wants graph knowledge."
    )


class DeleteAllArgs(BaseModel):
    user_id: Optional[str] = Field(
        None, description="User scope to delete; defaults to server user."
    )
    agent_id: Optional[str] = Field(None, description="Optional agent scope filter.")
    app_id: Optional[str] = Field(None, description="Optional app scope filter.")
    run_id: Optional[str] = Field(None, description="Optional run scope filter.")


class DeleteEntitiesArgs(BaseModel):
    user_id: Optional[str] = Field(None, description="Delete this user and all related memories.")
    agent_id: Optional[str] = Field(None, description="Delete this agent and its memories.")
    app_id: Optional[str] = Field(None, description="Delete this app and its memories.")
    run_id: Optional[str] = Field(None, description="Delete this run and its memories.")

```

--------------------------------------------------------------------------------
/example/pydantic_ai_repl.py:
--------------------------------------------------------------------------------

```python
"""Standalone Pydantic AI REPL wired to the Mem0 MCP server.

Run this script from the repo root after installing the package (e.g.,
`pip install -e ".[agent]"`). It defaults to the bundled `example/config.json`
so you can connect to the local `mem0_mcp_server.server` entry point without
touching `uvx`.
"""

from __future__ import annotations

import asyncio
import json
import os
import sys
from pathlib import Path

from dotenv import load_dotenv
from pydantic_ai import Agent
from pydantic_ai.messages import ModelMessage
from pydantic_ai.mcp import MCPServerStdio, load_mcp_servers

EXAMPLE_DIR = Path(__file__).resolve().parent
PROJECT_ROOT = EXAMPLE_DIR.parent

# Ensure `src/` is importable when running directly from the repo without
# installing the editable package first. Safe no-op if already installed.
SRC_PATH = PROJECT_ROOT / "src"
if SRC_PATH.exists() and str(SRC_PATH) not in sys.path:
    sys.path.insert(0, str(SRC_PATH))

BASE_DIR = Path(__file__).resolve().parent
DEFAULT_CONFIG_PATH = BASE_DIR / "config.json"
_env_config_raw = os.getenv("MEM0_MCP_CONFIG_PATH")
if not _env_config_raw:
    CONFIG_PATH = DEFAULT_CONFIG_PATH
else:
    CONFIG_PATH = Path(_env_config_raw).expanduser()
CONFIG_SERVER_KEY = os.getenv("MEM0_MCP_CONFIG_SERVER", "mem0-local")
DEFAULT_MODEL = os.getenv("MEM0_MCP_AGENT_MODEL", "openai:gpt-5")
DEFAULT_TIMEOUT = int(os.getenv("MEM0_MCP_SERVER_TIMEOUT", "30"))


def _require_env(var_name: str) -> str:
    value = os.getenv(var_name)
    if not value:
        raise RuntimeError(f"{var_name} must be set before running the agent.")
    return value


def _select_server_index() -> int:
    """Return the index of the requested server key inside the config file."""

    try:
        config = json.loads(CONFIG_PATH.read_text())
    except FileNotFoundError:
        return -1
    servers = config.get("mcpServers") or {}
    if not servers:
        raise RuntimeError(f"No 'mcpServers' definitions found in {CONFIG_PATH}")
    keys = list(servers.keys())
    if CONFIG_SERVER_KEY not in servers:
        if CONFIG_SERVER_KEY:
            raise RuntimeError(
                f"Server '{CONFIG_SERVER_KEY}' not found in {CONFIG_PATH}. Available: {keys}"
            )
        return 0
    return keys.index(CONFIG_SERVER_KEY)


def _load_server_from_config() -> MCPServerStdio | None:
    """Load the MCP server definition from config.json if present."""

    if not CONFIG_PATH.exists():
        return None
    index = _select_server_index()
    servers = load_mcp_servers(CONFIG_PATH)
    if not servers:
        raise RuntimeError(f"{CONFIG_PATH} did not produce any MCP servers.")
    if index >= len(servers):
        raise RuntimeError(
            f"Server index {index} is out of range for {CONFIG_PATH}; found {len(servers)} servers."
        )
    return servers[index]


def build_server() -> MCPServerStdio:
    """Launch the Mem0 MCP server over stdio with inherited env vars."""

    env = os.environ.copy()
    _require_env("MEM0_API_KEY")  # fail fast with a helpful error

    configured = _load_server_from_config()
    if configured:
        return configured

    server_path = PROJECT_ROOT / "src" / "mem0_mcp_server" / "server.py"
    return MCPServerStdio(
        sys.executable,
        args=[str(server_path)],
        env=env,
        timeout=DEFAULT_TIMEOUT,
    )


def build_agent(server: MCPServerStdio) -> tuple[Agent, str]:
    """Create a Pydantic AI agent that can use the Mem0 MCP tools."""

    default_user = os.getenv("MEM0_DEFAULT_USER_ID", "mem0-mcp")
    system_prompt = (
        "You are Mem0Guide, a friendly assistant whose ONLY external actions are the Mem0 MCP tools.\n"
        f"Default to user_id='{default_user}' unless the user gives another value, and inject it into every filter.\n"
        "Operating loop:\n"
        "  1) Treat every new preference/fact/personal detail as durable—call add_memory right away (even if they never say “remember”) unless they opt out. "
        "When a new detail replaces an older one, summarize both so the latest truth is clear (e.g., “was planning Berlin; now relocating to San Francisco”).\n"
        "  2) Only run the search → list IDs → confirm → update/delete flow when the user references an existing memory or ambiguity would be risky.\n"
        "  3) For get/show/list requests, use a single get_memories or search_memories call and expand synonyms yourself.\n"
        "  4) For destructive bulk actions (delete_all_memories, delete_entities) ask for scope once; if the user immediately confirms, execute without re-asking.\n"
        "  5) Keep graph opt-in only.\n"
        "Act decisively: remember the latest confirmation context so you can honor a follow-up “yes/confirm” without repeating questions, run the best-fit tool, mention what you ran, summarize the outcome naturally, and suggest one concise next step. "
        "Mention memory_ids only when needed. Ask clarifying questions only when you truly lack enough info or safety is at risk."
    )
    model = os.getenv("MEM0_MCP_AGENT_MODEL", DEFAULT_MODEL)
    agent = Agent(model=model, toolsets=[server], system_prompt=system_prompt)
    return agent, model


def _print_banner(model: str) -> None:
    print("Mem0 Pydantic AI agent ready. Type a prompt or 'exit' to quit.\n")
    print(f"Model: {model}")
    print("Tools: Mem0 MCP (add/search/get/update/delete)\n")


async def chat_loop(agent: Agent, server: MCPServerStdio, model_name: str) -> None:
    """Interactive REPL that streams requests through the agent."""

    message_history: list[ModelMessage] = []
    async with server:
        async with agent:
            _print_banner(model_name)
            while True:
                try:
                    user_input = input("You> ").strip()
                except (EOFError, KeyboardInterrupt):
                    print("\nBye!")
                    return
                if not user_input:
                    continue
                if user_input.lower() in {"exit", "quit"}:
                    print("Bye!")
                    return
                result = await agent.run(user_input, message_history=message_history)
                message_history.extend(result.new_messages())
                print(f"\nAgent> {result.output}\n")


async def main() -> None:
    load_dotenv()
    server = build_server()
    agent, model_name = build_agent(server)
    await chat_loop(agent, server, model_name)


if __name__ == "__main__":
    asyncio.run(main())

```

--------------------------------------------------------------------------------
/src/mem0_mcp_server/server.py:
--------------------------------------------------------------------------------

```python
"""MCP server that exposes Mem0 REST endpoints as MCP tools."""

from __future__ import annotations

import json
import logging
import os
from typing import Annotated, Any, Callable, Dict, Optional, TypeVar

from dotenv import load_dotenv
from mcp.server.fastmcp import Context, FastMCP
from mcp.server.transport_security import TransportSecuritySettings
from mem0 import MemoryClient
from mem0.exceptions import MemoryError
from pydantic import Field

try:  # Support both package (`python -m mem0_mcp_server.server`) and script (`python mem0_mcp_server/server.py`) runs.
    from .schemas import (
        AddMemoryArgs,
        ConfigSchema,
        DeleteAllArgs,
        DeleteEntitiesArgs,
        GetMemoriesArgs,
        SearchMemoriesArgs,
        ToolMessage,
    )
except ImportError:  # pragma: no cover - fallback for script execution
    from schemas import (
        AddMemoryArgs,
        ConfigSchema,
        DeleteAllArgs,
        DeleteEntitiesArgs,
        GetMemoriesArgs,
        SearchMemoriesArgs,
        ToolMessage,
    )

load_dotenv()

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(name)s | %(message)s")
logger = logging.getLogger("mem0_mcp_server")


T = TypeVar("T")

try:
    from smithery.decorators import smithery
except ImportError:  # pragma: no cover - Smithery optional

    class _SmitheryFallback:
        @staticmethod
        def server(*args, **kwargs):  # type: ignore[misc]
            def decorator(func: Callable[..., T]) -> Callable[..., T]:  # type: ignore[type-var]
                return func

            return decorator

    smithery = _SmitheryFallback()  # type: ignore[assignment]


# Graph memory stays off by default; user_id falls back to "mem0-mcp" when nothing is set.
ENV_API_KEY = os.getenv("MEM0_API_KEY")
ENV_DEFAULT_USER_ID = os.getenv("MEM0_DEFAULT_USER_ID", "mem0-mcp")
ENV_ENABLE_GRAPH_DEFAULT = os.getenv("MEM0_ENABLE_GRAPH_DEFAULT", "false").lower() in {
    "1",
    "true",
    "yes",
}

_CLIENT_CACHE: Dict[str, MemoryClient] = {}


def _config_value(source: Any, field: str):
    if source is None:
        return None
    if isinstance(source, dict):
        return source.get(field)
    return getattr(source, field, None)


def _with_default_filters(
    default_user_id: str, filters: Optional[Dict[str, Any]] = None
) -> Dict[str, Any]:
    """Ensure filters exist and include the default user_id at the top level."""
    if not filters:
        return {"AND": [{"user_id": default_user_id}]}
    if not any(key in filters for key in ("AND", "OR", "NOT")):
        filters = {"AND": [filters]}
    has_user = '"user_id"' in json.dumps(filters, sort_keys=True)
    if not has_user:
        and_list = filters.setdefault("AND", [])
        if not isinstance(and_list, list):
            raise ValueError("filters['AND'] must be a list when present.")
        and_list.insert(0, {"user_id": default_user_id})
    return filters


def _mem0_call(func, *args, **kwargs):
    try:
        result = func(*args, **kwargs)
    except MemoryError as exc:  # surface structured error back to MCP client
        logger.error("Mem0 call failed: %s", exc)
        # Return the structured error to the model instead of raising.
        return json.dumps(
            {
                "error": str(exc),
                "status": getattr(exc, "status", None),
                "payload": getattr(exc, "payload", None),
            },
            ensure_ascii=False,
        )
    return json.dumps(result, ensure_ascii=False)


def _resolve_settings(ctx: Context | None) -> tuple[str, str, bool]:
    session_config = getattr(ctx, "session_config", None)
    api_key = _config_value(session_config, "mem0_api_key") or ENV_API_KEY
    if not api_key:
        raise RuntimeError(
            "MEM0_API_KEY is required (via Smithery config, session config, or environment) to run the Mem0 MCP server."
        )

    default_user = _config_value(session_config, "default_user_id") or ENV_DEFAULT_USER_ID
    enable_graph_default = _config_value(session_config, "enable_graph_default")
    if enable_graph_default is None:
        enable_graph_default = ENV_ENABLE_GRAPH_DEFAULT

    return api_key, default_user, enable_graph_default


# init the client
def _mem0_client(api_key: str) -> MemoryClient:
    client = _CLIENT_CACHE.get(api_key)
    if client is None:
        client = MemoryClient(api_key=api_key)
        _CLIENT_CACHE[api_key] = client
    return client


def _default_enable_graph(enable_graph: Optional[bool], default: bool) -> bool:
    if enable_graph is None:
        return default
    return enable_graph


@smithery.server(config_schema=ConfigSchema)
def create_server() -> FastMCP:
    """Create a FastMCP server usable via stdio, Docker, or Smithery."""

    # When running inside Smithery, the platform probes the server without user-provided
    # session config, so we defer the hard requirement for MEM0_API_KEY until a tool call.
    if not ENV_API_KEY:
        logger.warning(
            "MEM0_API_KEY is not set; Smithery health checks will pass, but every tool "
            "invocation will fail until a key is supplied via session config or env vars."
        )

    server = FastMCP(
        "mem0",
        host=os.getenv("HOST", "0.0.0.0"),
        port=int(os.getenv("PORT", "8081")),
        transport_security=TransportSecuritySettings(enable_dns_rebinding_protection=False),
    )

    # Graph memory is disabled by default to keep queries simple and fast.
    # Add "enable/use graph while calling memory" to your system prompt to opt in per call.

    @server.tool(description="Store a new preference, fact, or conversation snippet. Requires at least one: user_id, agent_id, or run_id.")
    def add_memory(
        text: Annotated[
            str,
            Field(
                description="Plain sentence summarizing what to store. Required even if `messages` is provided."
            ),
        ],
        messages: Annotated[
            Optional[list[Dict[str, str]]],
            Field(
                default=None,
                description="Structured conversation history with `role`/`content`. "
                "Use when you have multiple turns.",
            ),
        ] = None,
        user_id: Annotated[
            Optional[str],
            Field(default=None, description="Override the default user scope for this write."),
        ] = None,
        agent_id: Annotated[
            Optional[str], Field(default=None, description="Optional agent identifier.")
        ] = None,
        app_id: Annotated[
            Optional[str], Field(default=None, description="Optional app identifier.")
        ] = None,
        run_id: Annotated[
            Optional[str], Field(default=None, description="Optional run identifier.")
        ] = None,
        metadata: Annotated[
            Optional[Dict[str, Any]],
            Field(default=None, description="Attach arbitrary metadata JSON to the memory."),
        ] = None,
        enable_graph: Annotated[
            Optional[bool],
            Field(
                default=None,
                description="Set true only if the caller explicitly wants Mem0 graph memory.",
            ),
        ] = None,
        ctx: Context | None = None,
    ) -> str:
        """Write durable information to Mem0."""

        api_key, default_user, graph_default = _resolve_settings(ctx)
        args = AddMemoryArgs(
            text=text,
            messages=[ToolMessage(**msg) for msg in messages] if messages else None,
            user_id=user_id if user_id else (default_user if not (agent_id or run_id) else None),
            agent_id=agent_id,
            app_id=app_id,
            run_id=run_id,
            metadata=metadata,
            enable_graph=_default_enable_graph(enable_graph, graph_default),
        )
        payload = args.model_dump(exclude_none=True)
        payload.setdefault("enable_graph", graph_default)
        conversation = payload.pop("messages", None)
        if not conversation:
            derived_text = payload.pop("text", None)
            if derived_text:
                conversation = [{"role": "user", "content": derived_text}]
            else:
                return json.dumps(
                    {
                        "error": "messages_missing",
                        "detail": "Provide either `text` or `messages` so Mem0 knows what to store.",
                    },
                    ensure_ascii=False,
                )
        else:
            payload.pop("text", None)

        client = _mem0_client(api_key)
        return _mem0_call(client.add, conversation, **payload)

    @server.tool(
        description="""Run a semantic search over existing memories.

        Use filters to narrow results. Common filter patterns:
        - Single user: {"AND": [{"user_id": "john"}]}
        - Agent memories: {"AND": [{"agent_id": "agent_name"}]}
        - Recent memories: {"AND": [{"user_id": "john"}, {"created_at": {"gte": "2024-01-01"}}]}
        - Multiple users: {"AND": [{"user_id": {"in": ["john", "jane"]}}]}
        - Cross-entity: {"OR": [{"user_id": "john"}, {"agent_id": "agent_name"}]}

        user_id is automatically added to filters if not provided.
        """
    )
    def search_memories(
        query: Annotated[str, Field(description="Natural language description of what to find.")],
        filters: Annotated[
            Optional[Dict[str, Any]],
            Field(default=None, description="Additional filter clauses (user_id injected automatically)."),
        ] = None,
        limit: Annotated[
            Optional[int], Field(default=None, description="Maximum number of results to return.")
        ] = None,
        enable_graph: Annotated[
            Optional[bool],
            Field(
                default=None,
                description="Set true only when the user explicitly wants graph-derived memories.",
            ),
        ] = None,
        ctx: Context | None = None,
    ) -> str:
        """Semantic search against existing memories."""

        api_key, default_user, graph_default = _resolve_settings(ctx)
        args = SearchMemoriesArgs(
            query=query,
            filters=filters,
            limit=limit,
            enable_graph=_default_enable_graph(enable_graph, graph_default),
        )
        payload = args.model_dump(exclude_none=True)
        payload["filters"] = _with_default_filters(default_user, payload.get("filters"))
        payload.setdefault("enable_graph", graph_default)
        client = _mem0_client(api_key)
        return _mem0_call(client.search, **payload)

    @server.tool(
        description="""Page through memories using filters instead of search.

        Use filters to list specific memories. Common filter patterns:
        - Single user: {"AND": [{"user_id": "john"}]}
        - Agent memories: {"AND": [{"agent_id": "agent_name"}]}
        - Recent memories: {"AND": [{"user_id": "john"}, {"created_at": {"gte": "2024-01-01"}}]}
        - Multiple users: {"AND": [{"user_id": {"in": ["john", "jane"]}}]}

        Pagination: Use page (1-indexed) and page_size for browsing results.
        user_id is automatically added to filters if not provided.
        """
    )
    def get_memories(
        filters: Annotated[
            Optional[Dict[str, Any]],
            Field(default=None, description="Structured filters; user_id injected automatically."),
        ] = None,
        page: Annotated[
            Optional[int], Field(default=None, description="1-indexed page number when paginating.")
        ] = None,
        page_size: Annotated[
            Optional[int], Field(default=None, description="Number of memories per page (default 10).")
        ] = None,
        enable_graph: Annotated[
            Optional[bool],
            Field(
                default=None,
                description="Set true only if the caller explicitly wants graph-derived memories.",
            ),
        ] = None,
        ctx: Context | None = None,
    ) -> str:
        """List memories via structured filters or pagination."""

        api_key, default_user, graph_default = _resolve_settings(ctx)
        args = GetMemoriesArgs(
            filters=filters,
            page=page,
            page_size=page_size,
            enable_graph=_default_enable_graph(enable_graph, graph_default),
        )
        payload = args.model_dump(exclude_none=True)
        payload["filters"] = _with_default_filters(default_user, payload.get("filters"))
        payload.setdefault("enable_graph", graph_default)
        client = _mem0_client(api_key)
        return _mem0_call(client.get_all, **payload)

    @server.tool(
        description="Delete every memory in the given user/agent/app/run but keep the entity."
    )
    def delete_all_memories(
        user_id: Annotated[
            Optional[str], Field(default=None, description="User scope to delete; defaults to server user.")
        ] = None,
        agent_id: Annotated[
            Optional[str], Field(default=None, description="Optional agent scope to delete.")
        ] = None,
        app_id: Annotated[
            Optional[str], Field(default=None, description="Optional app scope to delete.")
        ] = None,
        run_id: Annotated[
            Optional[str], Field(default=None, description="Optional run scope to delete.")
        ] = None,
        ctx: Context | None = None,
    ) -> str:
        """Bulk-delete every memory in the confirmed scope."""

        api_key, default_user, _ = _resolve_settings(ctx)
        args = DeleteAllArgs(
            user_id=user_id or default_user,
            agent_id=agent_id,
            app_id=app_id,
            run_id=run_id,
        )
        payload = args.model_dump(exclude_none=True)
        client = _mem0_client(api_key)
        return _mem0_call(client.delete_all, **payload)

    @server.tool(description="List which users/agents/apps/runs currently hold memories.")
    def list_entities(ctx: Context | None = None) -> str:
        """List users/agents/apps/runs with stored memories."""

        api_key, _, _ = _resolve_settings(ctx)
        client = _mem0_client(api_key)
        return _mem0_call(client.users)

    @server.tool(description="Fetch a single memory once you know its memory_id.")
    def get_memory(
        memory_id: Annotated[str, Field(description="Exact memory_id to fetch.")],
        ctx: Context | None = None,
    ) -> str:
        """Retrieve a single memory once the user has picked an exact ID."""

        api_key, _, _ = _resolve_settings(ctx)
        client = _mem0_client(api_key)
        return _mem0_call(client.get, memory_id)

    @server.tool(description="Overwrite an existing memory's text.")
    def update_memory(
        memory_id: Annotated[str, Field(description="Exact memory_id to overwrite.")],
        text: Annotated[str, Field(description="Replacement text for the memory.")],
        ctx: Context | None = None,
    ) -> str:
        """Overwrite an existing memory’s text after the user confirms the exact memory_id."""

        api_key, _, _ = _resolve_settings(ctx)
        client = _mem0_client(api_key)
        return _mem0_call(client.update, memory_id=memory_id, text=text)

    @server.tool(description="Delete one memory after the user confirms its memory_id.")
    def delete_memory(
        memory_id: Annotated[str, Field(description="Exact memory_id to delete.")],
        ctx: Context | None = None,
    ) -> str:
        """Delete a memory once the user explicitly confirms the memory_id to remove."""

        api_key, _, _ = _resolve_settings(ctx)
        client = _mem0_client(api_key)
        return _mem0_call(client.delete, memory_id)

    @server.tool(
        description="Remove a user/agent/app/run record entirely (and cascade-delete its memories)."
    )
    def delete_entities(
        user_id: Annotated[
            Optional[str], Field(default=None, description="Delete this user and its memories.")
        ] = None,
        agent_id: Annotated[
            Optional[str], Field(default=None, description="Delete this agent and its memories.")
        ] = None,
        app_id: Annotated[
            Optional[str], Field(default=None, description="Delete this app and its memories.")
        ] = None,
        run_id: Annotated[
            Optional[str], Field(default=None, description="Delete this run and its memories.")
        ] = None,
        ctx: Context | None = None,
    ) -> str:
        """Delete a user/agent/app/run (and its memories) once the user confirms the scope."""

        api_key, _, _ = _resolve_settings(ctx)
        args = DeleteEntitiesArgs(
            user_id=user_id,
            agent_id=agent_id,
            app_id=app_id,
            run_id=run_id,
        )
        if not any([args.user_id, args.agent_id, args.app_id, args.run_id]):
            return json.dumps(
                {
                    "error": "scope_missing",
                    "detail": "Provide user_id, agent_id, app_id, or run_id before calling delete_entities.",
                },
                ensure_ascii=False,
            )
        payload = args.model_dump(exclude_none=True)
        client = _mem0_client(api_key)
        return _mem0_call(client.delete_users, **payload)

    # Add a simple prompt for server capabilities
    @server.prompt()
    def memory_assistant() -> str:
        """Get help with memory operations and best practices."""
        return """You are using the Mem0 MCP server for long-term memory management.

Quick Start:
1. Store memories: Use add_memory to save facts, preferences, or conversations
2. Search memories: Use search_memories for semantic queries
3. List memories: Use get_memories for filtered browsing
4. Update/Delete: Use update_memory and delete_memory for modifications

Filter Examples:
- User memories: {"AND": [{"user_id": "john"}]}
- Agent memories: {"AND": [{"agent_id": "agent_name"}]}
- Recent only: {"AND": [{"user_id": "john"}, {"created_at": {"gte": "2024-01-01"}}]}

Tips:
- user_id is automatically added to filters
- Use "*" as wildcard for any non-null value
- Combine filters with AND/OR/NOT for complex queries"""

    return server


def main() -> None:
    """Run the MCP server over stdio."""

    server = create_server()
    logger.info("Starting Mem0 MCP server (default user=%s)", ENV_DEFAULT_USER_ID)
    server.run(transport="stdio")


if __name__ == "__main__":
    main()

```
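
The tool descriptions above embed Mem0's filter DSL (AND/OR clauses over `user_id`, `agent_id`, `created_at`, and so on). As a quick sketch of what a client would send, here are the documented filter shapes composed into a `search_memories`-style payload; the identifiers (`john`, `agent_name`) are placeholders, not real entities:

```python
import json

# Filter shapes taken from the tool descriptions above.
single_user = {"AND": [{"user_id": "john"}]}
recent = {"AND": [{"user_id": "john"}, {"created_at": {"gte": "2024-01-01"}}]}
cross_entity = {"OR": [{"user_id": "john"}, {"agent_id": "agent_name"}]}

# A hypothetical search payload a client might construct before calling the tool.
payload = {"query": "What does John like?", "filters": recent, "limit": 5}
print(json.dumps(payload, ensure_ascii=False))
```

Note that the server injects the default `user_id` into `filters` automatically (via `_with_default_filters`), so callers only need explicit filters when scoping beyond the default user.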