# Directory Structure
```
├── docker-compose.yml
├── Dockerfile
├── docs
│   ├── architecture.md
│   ├── claude_integration.md
│   ├── compatibility.md
│   ├── docker_usage.md
│   └── user_guide.md
├── examples
│   ├── claude_desktop_config.md
│   ├── retrieve_memory_example.py
│   └── store_memory_example.py
├── LICENSE
├── memory_mcp
│   ├── __init__.py
│   ├── __main__.py
│   ├── auto_memory
│   │   ├── __init__.py
│   │   ├── auto_capture.py
│   │   └── system_prompt.py
│   ├── domains
│   │   ├── __init__.py
│   │   ├── episodic.py
│   │   ├── manager.py
│   │   ├── persistence.py
│   │   ├── semantic.py
│   │   └── temporal.py
│   ├── mcp
│   │   ├── __init__.py
│   │   ├── server.py
│   │   └── tools.py
│   └── utils
│       ├── __init__.py
│       ├── compatibility
│       │   ├── __init__.py
│       │   └── version_checker.py
│       ├── config.py
│       ├── embeddings.py
│       └── schema.py
├── pyproject.toml
├── README.md
├── requirements.txt
├── setup.sh
└── tests
    ├── __init__.py
    └── test_memory_mcp.py
```
# Files
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
```markdown
# Claude Memory MCP Server

An MCP (Model Context Protocol) server implementation that provides persistent memory capabilities for Large Language Models, specifically designed to integrate with the Claude desktop application.

## Overview

This project implements memory techniques drawn from a review of current approaches in the field. It provides a standardized way for Claude to maintain persistent memory across conversations and sessions.

## Features

- **Tiered Memory Architecture**: Short-term, long-term, and archival memory tiers
- **Multiple Memory Types**: Support for conversations, knowledge, entities, and reflections
- **Semantic Search**: Retrieve memories by semantic similarity
- **Automatic Memory Capture**: Intelligent memory capture without explicit commands
- **Memory Consolidation**: Automatic consolidation of short-term memories into long-term memory
- **Importance-Based Retention**: Memories are kept or forgotten based on importance scores
- **Claude Integration**: Ready-to-use integration with the Claude desktop application
- **MCP Protocol Support**: Compatible with the Model Context Protocol
- **Docker Support**: Easy deployment using Docker containers

## Quick Start

### Option 1: Using Docker (Recommended)

```bash
# Clone the repository
git clone https://github.com/WhenMoon-afk/claude-memory-mcp.git
cd claude-memory-mcp

# Start with Docker Compose
docker-compose up -d
```

Configure Claude Desktop to use the containerized MCP server (see the [Docker Usage Guide](docs/docker_usage.md) for details).

### Option 2: Standard Installation

1. **Prerequisites**:
   - Python 3.8-3.12
   - pip package manager

2. **Installation**:

   ```bash
   # Clone the repository
   git clone https://github.com/WhenMoon-afk/claude-memory-mcp.git
   cd claude-memory-mcp

   # Install dependencies
   pip install -r requirements.txt

   # Run the setup script
   chmod +x setup.sh
   ./setup.sh
   ```

3. **Claude Desktop Integration**:

   Add the following to your Claude configuration file:

   ```json
   {
     "mcpServers": {
       "memory": {
         "command": "python",
         "args": ["-m", "memory_mcp"],
         "env": {
           "MEMORY_FILE_PATH": "/path/to/your/memory.json"
         }
       }
     }
   }
   ```

## Using Memory with Claude

The Memory MCP Server enables Claude to remember information across conversations without requiring explicit commands.

1. **Automatic Memory**: Claude will automatically:
   - Remember important details you share
   - Store user preferences and facts
   - Recall relevant information when needed

2. **Memory Recall**: To see what Claude remembers, simply ask:
   - "What do you remember about me?"
   - "What do you know about my preferences?"

3. **System Prompt**: For optimal memory usage, add this to your Claude system prompt:

   ```
   This Claude instance has been enhanced with persistent memory capabilities.
   Claude will automatically remember important details about you across
   conversations and recall them when relevant, without needing explicit commands.
   ```

See the [User Guide](docs/user_guide.md) for detailed usage instructions and examples.

## Documentation

- [User Guide](docs/user_guide.md)
- [Docker Usage Guide](docs/docker_usage.md)
- [Compatibility Guide](docs/compatibility.md)
- [Architecture](docs/architecture.md)
- [Claude Integration Guide](docs/claude_integration.md)

## Examples

The `examples` directory contains scripts demonstrating how to interact with the Memory MCP Server:

- `store_memory_example.py`: Example of storing a memory
- `retrieve_memory_example.py`: Example of retrieving memories
114 | ## Troubleshooting
115 |
116 | If you encounter issues:
117 |
118 | 1. Check the [Compatibility Guide](docs/compatibility.md) for dependency requirements
119 | 2. Ensure your Python version is 3.8-3.12
120 | 3. For NumPy issues, use: `pip install "numpy>=1.20.0,<2.0.0"`
121 | 4. Try using Docker for simplified deployment
122 |
123 | ## Contributing
124 |
125 | Contributions are welcome! Please feel free to submit a Pull Request.
126 |
127 | ## License
128 |
129 | This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
```
--------------------------------------------------------------------------------
/tests/__init__.py:
--------------------------------------------------------------------------------
```python
1 | """
2 | Test package for the Memory MCP Server.
3 | """
4 |
```
--------------------------------------------------------------------------------
/memory_mcp/utils/__init__.py:
--------------------------------------------------------------------------------
```python
1 | """
2 | Utility modules for the memory MCP server.
3 | """
4 |
```
--------------------------------------------------------------------------------
/memory_mcp/mcp/__init__.py:
--------------------------------------------------------------------------------
```python
1 | """
2 | MCP (Model Context Protocol) functionality for the Memory MCP Server.
3 | """
4 |
```
--------------------------------------------------------------------------------
/memory_mcp/utils/compatibility/__init__.py:
--------------------------------------------------------------------------------
```python
1 | """
2 | Compatibility utility for checking and reporting version issues.
3 | """
4 |
5 | from .version_checker import check_compatibility, CompatibilityReport
6 |
```
--------------------------------------------------------------------------------
/memory_mcp/__init__.py:
--------------------------------------------------------------------------------
```python
1 | """
2 | Claude Memory MCP Server
3 |
4 | An MCP server implementation that provides persistent memory capabilities for Large Language Models,
5 | specifically designed to work with the Claude desktop application.
6 | """
7 |
8 | __version__ = "0.1.0"
9 |
```
--------------------------------------------------------------------------------
/docker-compose.yml:
--------------------------------------------------------------------------------
```yaml
version: '3'

services:
  memory-mcp:
    build: .
    volumes:
      - ./config:/app/config
      - ./data:/app/data
    environment:
      - MEMORY_FILE_PATH=/app/data/memory.json
      - MCP_CONFIG_DIR=/app/config
      - MCP_DATA_DIR=/app/data
    restart: unless-stopped
```
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
```
mcp-cli>=0.1.0,<0.3.0
mcp-server>=0.1.0,<0.3.0
pydantic>=2.4.0,<3.0.0
sentence-transformers>=2.2.2,<3.0.0
numpy>=1.20.0,<2.0.0
hnswlib>=0.7.0,<0.8.0
fastapi>=0.100.0,<0.110.0
uvicorn>=0.23.0,<0.30.0
python-dotenv>=1.0.0,<2.0.0
pytest>=7.3.1,<8.0.0
python-jose>=3.3.0,<4.0.0
loguru>=0.7.0,<0.8.0
```
--------------------------------------------------------------------------------
/memory_mcp/auto_memory/__init__.py:
--------------------------------------------------------------------------------
```python
1 | """
2 | Automatic memory management module.
3 |
4 | This module provides automatic memory capture and retrieval capabilities
5 | to make memory functionality more intuitive and seamless.
6 | """
7 |
8 | from .system_prompt import get_memory_system_prompt, get_memory_integration_template
9 | from .auto_capture import should_store_memory, extract_memory_content
10 |
```
--------------------------------------------------------------------------------
/memory_mcp/domains/__init__.py:
--------------------------------------------------------------------------------
```python
1 | """
2 | Domain modules for the memory system.
3 |
4 | The memory system is organized into functional domains:
5 | - Episodic Domain: Manages episodic memories (conversations, experiences)
6 | - Semantic Domain: Manages semantic memories (facts, knowledge)
7 | - Temporal Domain: Manages time-aware memory processing
8 | - Persistence Domain: Manages storage and retrieval
9 | """
10 |
```
--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------
```dockerfile
FROM python:3.10-slim AS builder

WORKDIR /app
COPY requirements.txt pyproject.toml ./
RUN pip install --user --no-warn-script-location -r requirements.txt

FROM python:3.10-slim

WORKDIR /app
COPY --from=builder /root/.local /root/.local
COPY . .

ENV PATH=/root/.local/bin:$PATH
ENV PYTHONPATH=/app

# Default configuration
ENV MCP_CONFIG_DIR=/app/config
ENV MCP_DATA_DIR=/app/data
ENV MEMORY_FILE_PATH=/app/data/memory.json

# Create necessary directories
RUN mkdir -p /app/config /app/data /app/cache

# Make the setup script executable
RUN chmod +x setup.sh

# Create volume mount points for persistence
VOLUME ["/app/config", "/app/data"]

ENTRYPOINT ["python", "-m", "memory_mcp"]
```
--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------
```toml
[build-system]
requires = ["setuptools>=42", "wheel"]
build-backend = "setuptools.build_meta"

[project]
name = "memory_mcp"
version = "0.1.0"
description = "MCP server implementation for LLM persistent memory"
readme = "README.md"
authors = [
    {name = "Aurora", email = "[email protected]"}
]
license = {text = "MIT"}
classifiers = [
    "Programming Language :: Python :: 3",
    "Programming Language :: Python :: 3.8",
    "Programming Language :: Python :: 3.9",
    "Programming Language :: Python :: 3.10",
    "Programming Language :: Python :: 3.11",
    "Programming Language :: Python :: 3.12",
    "License :: OSI Approved :: MIT License",
    "Operating System :: OS Independent",
]
requires-python = ">=3.8"
dependencies = [
    "mcp-cli>=0.1.0,<0.3.0",
    "mcp-server>=0.1.0,<0.3.0",
    "pydantic>=2.4.0,<3.0.0",
    "sentence-transformers>=2.2.2,<3.0.0",
    "numpy>=1.20.0,<2.0.0",
    "hnswlib>=0.7.0,<0.8.0",
    "fastapi>=0.100.0,<0.110.0",
    "uvicorn>=0.23.0,<0.30.0",
    "python-dotenv>=1.0.0,<2.0.0",
    "python-jose>=3.3.0,<4.0.0",
    "loguru>=0.7.0,<0.8.0",
]

[project.optional-dependencies]
dev = [
    "pytest>=7.3.1,<8.0.0",
    "pytest-cov>=4.1.0,<5.0.0",
    "black>=23.3.0,<24.0.0",
    "isort>=5.12.0,<6.0.0",
    "mypy>=1.3.0,<2.0.0",
]

# Use automatic discovery so subpackages (memory_mcp.domains, memory_mcp.mcp,
# memory_mcp.utils, ...) are included, not just the top-level package.
[tool.setuptools.packages.find]
include = ["memory_mcp*"]

[tool.black]
line-length = 88
target-version = ["py38", "py39", "py310", "py311", "py312"]

[tool.isort]
profile = "black"
line_length = 88

[tool.mypy]
python_version = "3.8"
warn_return_any = true
warn_unused_configs = true
disallow_untyped_defs = true
disallow_incomplete_defs = true
```
--------------------------------------------------------------------------------
/memory_mcp/__main__.py:
--------------------------------------------------------------------------------
```python
1 | """
2 | Command-line entry point for the Memory MCP Server
3 | """
4 |
5 | import os
6 | import logging
7 | import argparse
8 | from pathlib import Path
9 |
10 | from loguru import logger
11 |
12 | from memory_mcp.mcp.server import MemoryMcpServer
13 | from memory_mcp.utils.config import load_config
14 |
15 |
16 | def main() -> None:
17 | """Entry point for the Memory MCP Server."""
18 | parser = argparse.ArgumentParser(description="Memory MCP Server")
19 | parser.add_argument(
20 | "--config",
21 | type=str,
22 | help="Path to configuration file"
23 | )
24 | parser.add_argument(
25 | "--memory-file",
26 | type=str,
27 | help="Path to memory file"
28 | )
29 | parser.add_argument(
30 | "--debug",
31 | action="store_true",
32 | help="Enable debug mode"
33 | )
34 |
35 | args = parser.parse_args()
36 |
37 | # Configure logging
38 | log_level = "DEBUG" if args.debug else "INFO"
39 | logger.remove()
40 | logger.add(
41 | os.sys.stderr,
42 | level=log_level,
43 | format="<green>{time:YYYY-MM-DD HH:mm:ss}</green> | <level>{level: <8}</level> | <cyan>{name}</cyan>:<cyan>{function}</cyan>:<cyan>{line}</cyan> - <level>{message}</level>",
44 | )
45 |
46 | # Load configuration
47 | config_path = args.config
48 | if not config_path:
49 | config_dir = os.environ.get("MCP_CONFIG_DIR", os.path.expanduser("~/.memory_mcp/config"))
50 | config_path = os.path.join(config_dir, "config.json")
51 |
52 | config = load_config(config_path)
53 |
54 | # Override memory file path if specified
55 | if args.memory_file:
56 | config["memory"]["file_path"] = args.memory_file
57 | elif "MEMORY_FILE_PATH" in os.environ:
58 | config["memory"]["file_path"] = os.environ["MEMORY_FILE_PATH"]
59 |
60 | memory_file_path = config["memory"]["file_path"]
61 |
62 | # Ensure memory file path exists
63 | memory_file_dir = os.path.dirname(memory_file_path)
64 | os.makedirs(memory_file_dir, exist_ok=True)
65 |
66 | logger.info(f"Starting Memory MCP Server")
67 | logger.info(f"Using configuration from {config_path}")
68 | logger.info(f"Using memory file: {memory_file_path}")
69 |
70 | # Start the server
71 | server = MemoryMcpServer(config)
72 | server.start()
73 |
74 |
75 | if __name__ == "__main__":
76 | main()
77 |
```
--------------------------------------------------------------------------------
/docs/docker_usage.md:
--------------------------------------------------------------------------------
```markdown
# Docker Deployment

This document explains how to run the Memory MCP Server using Docker.

## Prerequisites

- Docker installed on your system
- Docker Compose (optional, for easier deployment)

## Option 1: Using Docker Compose (Recommended)

1. Clone the repository:

   ```bash
   git clone https://github.com/WhenMoon-afk/claude-memory-mcp.git
   cd claude-memory-mcp
   ```

2. Start the service:

   ```bash
   docker-compose up -d
   ```

3. Configure Claude Desktop to use the containerized MCP server by adding the following to your Claude configuration file (the container name can vary between Docker Compose versions; verify it with `docker ps`):

   ```json
   {
     "mcpServers": {
       "memory": {
         "command": "docker",
         "args": [
           "exec",
           "-i",
           "claude-memory-mcp_memory-mcp_1",
           "python",
           "-m",
           "memory_mcp"
         ],
         "env": {
           "MEMORY_FILE_PATH": "/app/data/memory.json"
         }
       }
     }
   }
   ```

## Option 2: Using Docker Directly

1. Build the Docker image:

   ```bash
   docker build -t memory-mcp .
   ```

2. Create directories for configuration and data:

   ```bash
   mkdir -p config data
   ```

3. Run the container:

   ```bash
   docker run -d \
     --name memory-mcp \
     -v "$(pwd)/config:/app/config" \
     -v "$(pwd)/data:/app/data" \
     memory-mcp
   ```

4. Configure Claude Desktop to use the containerized MCP server by adding the following to your Claude configuration file:

   ```json
   {
     "mcpServers": {
       "memory": {
         "command": "docker",
         "args": [
           "exec",
           "-i",
           "memory-mcp",
           "python",
           "-m",
           "memory_mcp"
         ],
         "env": {
           "MEMORY_FILE_PATH": "/app/data/memory.json"
         }
       }
     }
   }
   ```

## Using Prebuilt Images

You can also use the prebuilt Docker image from Docker Hub:

```bash
docker run -d \
  --name memory-mcp \
  -v "$(pwd)/config:/app/config" \
  -v "$(pwd)/data:/app/data" \
  whenmoon-afk/claude-memory-mcp
```

## Customizing Configuration

You can customize the server configuration by creating a `config.json` file in the `config` directory before starting the container.
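
For example, a minimal `config/config.json` (the keys shown mirror the defaults generated by `setup.sh`; adjust the values to your deployment):

```json
{
  "memory": {
    "max_short_term_items": 100,
    "max_long_term_items": 1000,
    "max_archival_items": 10000,
    "consolidation_interval_hours": 24,
    "file_path": "/app/data/memory.json"
  },
  "embedding": {
    "model": "sentence-transformers/all-MiniLM-L6-v2",
    "dimensions": 384
  }
}
```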
```
--------------------------------------------------------------------------------
/docs/compatibility.md:
--------------------------------------------------------------------------------
```markdown
# Compatibility Guide

This guide helps you resolve compatibility issues with the Memory MCP Server.

## Supported Environments

The Memory MCP Server is compatible with:

- **Python Versions**: 3.8, 3.9, 3.10, 3.11, 3.12
- **Operating Systems**: Windows, macOS, Linux

## Key Dependencies

| Dependency | Supported Versions | Notes |
|------------|--------------------|-------|
| NumPy | 1.20.0 - 1.x | **Not compatible with NumPy 2.x** |
| Pydantic | 2.4.0 - 2.x | |
| sentence-transformers | 2.2.2 - 2.x | |
| MCP libraries | 0.1.0 - 0.2.x | |

## Common Issues and Solutions

### NumPy 2.x Incompatibility

**Issue**: The error message mentions NumPy version incompatibility.

**Solution**:
```bash
pip uninstall numpy
pip install "numpy>=1.20.0,<2.0.0"
```

### Python Version Errors

**Issue**: You see an error about an unsupported Python version.

**Solution**:
1. Check your Python version: `python --version`
2. Install a supported Python version (3.8-3.12)
3. Create a new virtual environment with the supported version:

   ```bash
   python3.10 -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   pip install -r requirements.txt
   ```

### MCP Libraries Not Found

**Issue**: Error about missing MCP libraries.

**Solution** (quote the specifiers so the shell does not treat `>` as output redirection):
```bash
pip install "mcp-cli>=0.1.0,<0.3.0" "mcp-server>=0.1.0,<0.3.0"
```

If you need a newer version of the MCP libraries, you can install them directly:
```bash
pip install git+https://github.com/anthropics/mcp-cli.git
pip install git+https://github.com/anthropics/mcp-server.git
```

### Other Dependency Issues

**Solution**:
1. Create a fresh virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

## Docker Option

If you continue to have dependency issues, consider using Docker instead:

```bash
docker run -d \
  --name memory-mcp \
  -v "$(pwd)/config:/app/config" \
  -v "$(pwd)/data:/app/data" \
  whenmoon-afk/claude-memory-mcp
```

See the [Docker Usage Guide](docker_usage.md) for more details.

## Bypassing Compatibility Check

If you want to skip the compatibility check (not recommended):

```bash
python -m memory_mcp --skip-compatibility-check
```
```
--------------------------------------------------------------------------------
/memory_mcp/auto_memory/system_prompt.py:
--------------------------------------------------------------------------------
```python
1 | """
2 | System prompt templates for memory integration.
3 |
4 | This module provides system prompt templates that instruct Claude
5 | how to effectively use the memory tools without requiring explicit
6 | commands from the user.
7 | """
8 |
9 | def get_memory_system_prompt() -> str:
10 | """
11 | Get the system prompt template for memory integration.
12 |
13 | Returns:
14 | System prompt template for memory integration
15 | """
16 | return """
17 | When starting a new conversation or when relevant to the current topic, automatically check your memory to retrieve relevant information about the user or topic without being explicitly asked to do so.
18 |
19 | Follow these memory guidelines:
20 |
21 | 1. Automatic Memory Retrieval:
22 | - At the start of conversations, silently use the retrieve_memory tool to find relevant memories
23 | - Do not mention the retrieval process to the user unless they ask about your memory directly
24 | - Naturally incorporate relevant memories into your responses
25 |
26 | 2. Automatic Memory Storage:
27 | - Store important user information when learned (preferences, facts, personal details)
28 | - Capture key facts or information shared in conversation
29 | - Don't explicitly tell the user you're storing information unless they ask
30 | - Assign higher importance (0.7-0.9) to personal user information
31 | - Assign medium importance (0.4-0.6) to general facts and preferences
32 |
33 | 3. Memory Types Usage:
34 | - Use "entity" type for user preferences, traits, and personal information
35 | - Use "fact" type for factual information shared by the user
36 | - Use "conversation" type for significant conversational exchanges
37 | - Use "reflection" type for insights about the user
38 |
39 | 4. When Asked About Memory:
40 | - If the user asks what you remember, use the retrieve_memory tool with their name/topic
41 | - Present the information in a natural, conversational way
42 | - If asked how your memory works, explain you maintain persistent memory across conversations
43 |
44 | Always prioritize creating a natural conversation experience where memory augments the interaction without becoming the focus.
45 | """
46 |
47 |
48 | def get_memory_integration_template() -> str:
49 | """
50 | Get the template for instructing Claude how to integrate with memory.
51 |
52 | Returns:
53 | Template for memory integration instructions
54 | """
55 | return """
56 | This Claude instance has been enhanced with persistent memory capabilities.
57 | Claude will automatically:
58 | 1. Remember important details about you across conversations
59 | 2. Store key facts and preferences you share
60 | 3. Recall relevant information when needed
61 |
62 | You don't need to explicitly ask Claude to remember or recall information.
63 | Simply have natural conversations, and Claude will maintain memory of important details.
64 |
65 | To see what Claude remembers about you, just ask "What do you remember about me?"
66 | """
```
--------------------------------------------------------------------------------
/setup.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash

# Claude Memory MCP Setup Script

echo "Setting up Claude Memory MCP Server..."

# Create configuration and data directories
CONFIG_DIR="$HOME/.memory_mcp/config"
DATA_DIR="$HOME/.memory_mcp/data"

mkdir -p "$CONFIG_DIR"
mkdir -p "$DATA_DIR"

# Generate default configuration if it doesn't exist
if [ ! -f "$CONFIG_DIR/config.json" ]; then
    echo "Creating default configuration..."
    cat > "$CONFIG_DIR/config.json" << EOF
{
    "server": {
        "host": "127.0.0.1",
        "port": 8000,
        "debug": false
    },
    "memory": {
        "max_short_term_items": 100,
        "max_long_term_items": 1000,
        "max_archival_items": 10000,
        "consolidation_interval_hours": 24,
        "file_path": "$DATA_DIR/memory.json"
    },
    "embedding": {
        "model": "sentence-transformers/all-MiniLM-L6-v2",
        "dimensions": 384,
        "cache_dir": "$HOME/.memory_mcp/cache"
    }
}
EOF
    echo "Default configuration created at $CONFIG_DIR/config.json"
fi

# Create default memory file if it doesn't exist
if [ ! -f "$DATA_DIR/memory.json" ]; then
    echo "Creating empty memory file..."
    cat > "$DATA_DIR/memory.json" << EOF
{
    "metadata": {
        "version": "1.0",
        "created_at": "$(date -u +"%Y-%m-%dT%H:%M:%SZ")",
        "updated_at": "$(date -u +"%Y-%m-%dT%H:%M:%SZ")",
        "memory_stats": {
            "total_memories": 0,
            "active_memories": 0,
            "archived_memories": 0
        }
    },
    "memory_index": {
        "index_type": "hnsw",
        "index_parameters": {
            "m": 16,
            "ef_construction": 200,
            "ef": 50
        },
        "entries": {}
    },
    "short_term_memory": [],
    "long_term_memory": [],
    "archived_memory": [],
    "memory_schema": {
        "conversation": {
            "required_fields": ["role", "message"],
            "optional_fields": ["summary", "entities", "sentiment", "intent"]
        },
        "fact": {
            "required_fields": ["fact", "confidence"],
            "optional_fields": ["domain", "entities", "references"]
        },
        "document": {
            "required_fields": ["title", "text"],
            "optional_fields": ["summary", "chunks", "metadata"]
        },
        "code": {
            "required_fields": ["language", "code"],
            "optional_fields": ["description", "purpose", "dependencies"]
        }
    },
    "config": {
        "memory_management": {
            "max_short_term_memories": 100,
            "max_long_term_memories": 10000,
            "archival_threshold_days": 30,
            "deletion_threshold_days": 365,
            "importance_decay_rate": 0.01,
            "minimum_importance_threshold": 0.2
        },
        "retrieval": {
            "default_top_k": 5,
            "semantic_threshold": 0.75,
            "recency_weight": 0.3,
            "importance_weight": 0.7
        },
        "embedding": {
            "default_model": "sentence-transformers/all-MiniLM-L6-v2",
            "dimensions": 384,
            "batch_size": 8
        }
    }
}
EOF
    echo "Empty memory file created at $DATA_DIR/memory.json"
fi

# Install dependencies
echo "Installing dependencies..."
pip install -r requirements.txt

echo "Setup complete! You can now start the memory MCP server with: python -m memory_mcp"
```
--------------------------------------------------------------------------------
/examples/store_memory_example.py:
--------------------------------------------------------------------------------
```python
#!/usr/bin/env python3
"""
Example script showing how to store a memory using the Memory MCP Server API.
"""

import argparse
import asyncio
import json
import os
import subprocess
import sys
from typing import Any, Dict

# Add project root to path
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))


async def store_memory_example(memory_type: str, content: Dict[str, Any], importance: float) -> None:
    """
    Example of storing a memory by using a subprocess to communicate with the MCP server.

    Args:
        memory_type: Type of memory (conversation, fact, entity, etc.)
        content: Memory content as a dictionary
        importance: Importance score (0.0-1.0)
    """
    # Construct the request
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "executeFunction",
        "params": {
            "name": "store_memory",
            "arguments": {
                "type": memory_type,
                "content": content,
                "importance": importance
            }
        }
    }

    # Convert to JSON
    request_json = json.dumps(request)

    # Execute the MCP server process
    process = subprocess.Popen(
        ["python", "-m", "memory_mcp"],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        text=True
    )

    # Send the request
    stdout, stderr = process.communicate(input=request_json + "\n")

    # Parse the response
    try:
        response = json.loads(stdout)
        if "result" in response and "value" in response["result"]:
            result = json.loads(response["result"]["value"][0]["text"])
            if result.get("success"):
                print(f"Memory stored successfully with ID: {result.get('memory_id')}")
            else:
                print(f"Error storing memory: {result.get('error')}")
        else:
            print(f"Unexpected response: {response}")
    except json.JSONDecodeError:
        print(f"Error parsing response: {stdout}")
        print(f"Error output: {stderr}")


def main() -> None:
    """Main function for the example script."""
    parser = argparse.ArgumentParser(description="Memory MCP Store Example")
    parser.add_argument("--type", choices=["conversation", "fact", "entity", "reflection", "code"], default="fact")
    parser.add_argument("--content", help="Content string for the memory")
    parser.add_argument("--importance", type=float, default=0.7, help="Importance score (0.0-1.0)")

    args = parser.parse_args()

    # Construct memory content based on type
    if args.type == "fact":
        content = {
            "fact": args.content or "Paris is the capital of France",
            "confidence": 0.95,
            "domain": "geography"
        }
    elif args.type == "entity":
        content = {
            "name": "user",
            "entity_type": "person",
            "attributes": {
                "preference": args.content or "Python programming language"
            }
        }
    elif args.type == "conversation":
        content = {
            "role": "user",
            "message": args.content or "I really enjoy machine learning and data science."
        }
    elif args.type == "reflection":
        content = {
            "subject": "user preferences",
            "reflection": args.content or "The user seems to prefer technical discussions about AI and programming."
        }
    else:  # "code" (argparse choices guarantee this is the only remaining type)
        content = {
            "language": "python",
            "code": args.content or "print('Hello, world!')",
            "description": "Simple hello world program"
        }

    # Run the example
    asyncio.run(store_memory_example(args.type, content, args.importance))


if __name__ == "__main__":
    main()
```
--------------------------------------------------------------------------------
/memory_mcp/domains/episodic.py:
--------------------------------------------------------------------------------
```python
1 | """
2 | Episodic Domain for managing episodic memories.
3 |
4 | The Episodic Domain is responsible for:
5 | - Recording and retrieving conversation histories
6 | - Managing session-based interactions
7 | - Contextualizing memories with temporal and situational details
8 | - Narrative memory construction
9 | - Recording agent reflections and observations
10 | """
11 |
12 | from typing import Any, Dict, List
13 |
14 | from loguru import logger
15 |
16 | from memory_mcp.domains.persistence import PersistenceDomain
17 |
18 |
19 | class EpisodicDomain:
20 | """
21 | Manages episodic memories (conversations, experiences, reflections).
22 |
23 | This domain handles memories that are experiential in nature,
24 | including conversation histories, reflections, and interactions.
25 | """
26 |
27 | def __init__(self, config: Dict[str, Any], persistence_domain: PersistenceDomain) -> None:
28 | """
29 | Initialize the episodic domain.
30 |
31 | Args:
32 | config: Configuration dictionary
33 | persistence_domain: Reference to the persistence domain
34 | """
35 | self.config = config
36 | self.persistence_domain = persistence_domain
37 |
38 | async def initialize(self) -> None:
39 | """Initialize the episodic domain."""
40 | logger.info("Initializing Episodic Domain")
41 | # Initialization logic will be implemented here
42 |
43 | async def process_memory(self, memory: Dict[str, Any]) -> Dict[str, Any]:
44 | """
45 | Process an episodic memory.
46 |
47 | This includes extracting key information, generating embeddings,
48 | and enriching the memory with additional metadata.
49 |
50 | Args:
51 | memory: The memory to process
52 |
53 | Returns:
54 | Processed memory
55 | """
56 | logger.debug(f"Processing episodic memory: {memory['id']}")
57 |
58 | # Extract text representation for embedding
59 | text_content = self._extract_text_content(memory)
60 |
61 | # Generate embedding
62 | embedding = await self.persistence_domain.generate_embedding(text_content)
63 | memory["embedding"] = embedding
64 |
65 | # Additional processing will be implemented here
66 |
67 | return memory
68 |
69 | def _extract_text_content(self, memory: Dict[str, Any]) -> str:
70 | """
71 | Extract text content from a memory for embedding generation.
72 |
73 | Args:
74 | memory: The memory to extract text from
75 |
76 | Returns:
77 | Text representation of the memory
78 | """
79 | if memory["type"] == "conversation":
80 | # For conversation memories, extract from the message content
81 | if "role" in memory["content"] and "message" in memory["content"]:
82 | return f"{memory['content']['role']}: {memory['content']['message']}"
83 |
84 | # Handle conversation arrays
85 | if "messages" in memory["content"]:
86 | messages = memory["content"]["messages"]
87 | if isinstance(messages, list):
88 | return "\n".join([f"{m.get('role', 'unknown')}: {m.get('content', '')}" for m in messages])
89 |
90 | elif memory["type"] == "reflection":
91 | # For reflection memories, combine subject and reflection
92 | if "subject" in memory["content"] and "reflection" in memory["content"]:
93 | return f"{memory['content']['subject']}: {memory['content']['reflection']}"
94 |
95 | # Fallback: try to convert content to string
96 | try:
97 | return str(memory["content"])
 98 |         except Exception:
99 | return f"Memory {memory['id']} of type {memory['type']}"
100 |
101 | async def get_stats(self) -> Dict[str, Any]:
102 | """
103 | Get statistics about the episodic domain.
104 |
105 | Returns:
106 | Episodic domain statistics
107 | """
108 | return {
109 | "memory_types": {
110 | "conversation": 0,
111 | "reflection": 0
112 | },
113 | "status": "initialized"
114 | }
115 |
```
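A standalone sketch of the processing flow above, using an illustrative `FakePersistence` stub and an inline copy of the conversation branch of `_extract_text_content` (neither is part of the package):

```python
import asyncio
from typing import Any, Dict, List


class FakePersistence:
    """Illustrative stand-in for PersistenceDomain (not part of the package)."""

    async def generate_embedding(self, text: str) -> List[float]:
        # Deterministic toy "embedding"; the real domain delegates to
        # sentence-transformers via the persistence layer.
        return [float(ord(c) % 7) for c in text[:4]]


def extract_text(memory: Dict[str, Any]) -> str:
    """Mirrors EpisodicDomain._extract_text_content for single-message conversations."""
    content = memory["content"]
    if memory["type"] == "conversation" and "role" in content and "message" in content:
        return f"{content['role']}: {content['message']}"
    return str(content)


async def process(memory: Dict[str, Any], persistence: FakePersistence) -> Dict[str, Any]:
    # Same shape as EpisodicDomain.process_memory: extract text, attach embedding
    memory["embedding"] = await persistence.generate_embedding(extract_text(memory))
    return memory


memory = {
    "id": "mem_example",
    "type": "conversation",
    "content": {"role": "user", "message": "I enjoy hiking."},
}
processed = asyncio.run(process(memory, FakePersistence()))
print(len(processed["embedding"]))  # 4
```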
--------------------------------------------------------------------------------
/examples/retrieve_memory_example.py:
--------------------------------------------------------------------------------
```python
1 | #!/usr/bin/env python3
2 | """
3 | Example script showing how to retrieve memories using the Memory MCP Server API.
4 | """
5 |
6 | import json
7 | import asyncio
8 | import argparse
9 | import sys
10 | import os
11 | import subprocess
12 | from typing import Any, Dict, List, Optional
13 | 
14 | # Add project root to path
15 | sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
16 | 
17 | 
18 | async def retrieve_memory_example(query: str, limit: int = 5, memory_types: Optional[List[str]] = None,
19 |                                   min_similarity: float = 0.6) -> None:
20 | """
21 | Example of retrieving memories using subprocess to communicate with the MCP server.
22 |
23 | Args:
24 | query: Query string to search for memories
25 | limit: Maximum number of memories to retrieve
26 | memory_types: Types of memories to include (None for all types)
27 | min_similarity: Minimum similarity score for results
28 | """
29 | # Construct the request
30 | request = {
31 | "jsonrpc": "2.0",
32 | "id": 1,
33 | "method": "executeFunction",
34 | "params": {
35 | "name": "retrieve_memory",
36 | "arguments": {
37 | "query": query,
38 | "limit": limit,
39 | "types": memory_types,
40 | "min_similarity": min_similarity,
41 | "include_metadata": True
42 | }
43 | }
44 | }
45 |
46 | # Convert to JSON
47 | request_json = json.dumps(request)
48 |
49 | # Execute MCP server process
50 | process = subprocess.Popen(
51 | ["python", "-m", "memory_mcp"],
52 | stdin=subprocess.PIPE,
53 | stdout=subprocess.PIPE,
54 | stderr=subprocess.PIPE,
55 | text=True
56 | )
57 |
58 | # Send request
59 | stdout, stderr = process.communicate(input=request_json + "\n")
60 |
61 | # Parse response
62 | try:
63 | response = json.loads(stdout)
64 | if "result" in response and "value" in response["result"]:
65 | result = json.loads(response["result"]["value"][0]["text"])
66 | if result.get("success"):
67 | memories = result.get("memories", [])
68 | if not memories:
69 | print(f"No memories found for query: '{query}'")
70 | else:
71 | print(f"Found {len(memories)} memories for query: '{query}'")
72 | for i, memory in enumerate(memories):
73 | print(f"\nMemory {i+1}:")
74 | print(f" Type: {memory['type']}")
75 | print(f" Similarity: {memory.get('similarity', 0.0):.2f}")
76 |
77 | if memory["type"] == "fact":
78 | print(f" Fact: {memory['content'].get('fact', 'N/A')}")
79 | elif memory["type"] == "entity":
80 | print(f" Entity: {memory['content'].get('name', 'N/A')}")
81 | print(f" Attributes: {memory['content'].get('attributes', {})}")
82 | elif memory["type"] == "conversation":
83 | print(f" Role: {memory['content'].get('role', 'N/A')}")
84 | print(f" Message: {memory['content'].get('message', 'N/A')}")
85 |
86 | if "metadata" in memory:
87 | print(f" Created: {memory.get('created_at', 'N/A')}")
88 | print(f" Last Accessed: {memory.get('last_accessed', 'N/A')}")
89 | print(f" Importance: {memory.get('importance', 0.0)}")
90 | else:
91 | print(f"Error retrieving memories: {result.get('error')}")
92 | else:
93 | print(f"Unexpected response: {response}")
94 | except json.JSONDecodeError:
95 | print(f"Error parsing response: {stdout}")
96 | print(f"Error output: {stderr}")
97 |
98 |
99 | def main() -> None:
100 | """Main function for the example script."""
101 | parser = argparse.ArgumentParser(description="Memory MCP Retrieve Example")
102 | parser.add_argument("--query", default="user preferences", help="Query string to search for memories")
103 | parser.add_argument("--limit", type=int, default=5, help="Maximum number of memories to retrieve")
104 | parser.add_argument("--types", nargs="+", choices=["conversation", "fact", "entity", "reflection", "code"],
105 | help="Types of memories to include")
106 | parser.add_argument("--min-similarity", type=float, default=0.6, help="Minimum similarity score (0.0-1.0)")
107 |
108 | args = parser.parse_args()
109 |
110 | # Run the example
111 | asyncio.run(retrieve_memory_example(args.query, args.limit, args.types, args.min_similarity))
112 |
113 |
114 | if __name__ == "__main__":
115 | main()
```
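The parsing above decodes JSON twice: once for the JSON-RPC envelope and once for the tool result embedded as a text payload. A minimal sketch against a synthetic response (the envelope shape is taken from the example script, not from a live server):

```python
import json

# Synthetic response shaped like the one the example script parses
raw = json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "value": [{
            "type": "text",
            "text": json.dumps({
                "success": True,
                "memories": [{
                    "type": "fact",
                    "content": {"fact": "Favorite color is blue"},
                    "similarity": 0.91,
                }],
            }),
        }]
    },
})

envelope = json.loads(raw)                                    # outer: JSON-RPC envelope
payload = json.loads(envelope["result"]["value"][0]["text"])  # inner: tool output
print(payload["memories"][0]["content"]["fact"])  # Favorite color is blue
```

Missing the second decode is the most common mistake here: the inner `text` field is a JSON string, not an already-parsed object.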
--------------------------------------------------------------------------------
/memory_mcp/domains/semantic.py:
--------------------------------------------------------------------------------
```python
1 | """
2 | Semantic Domain for managing semantic memories.
3 |
4 | The Semantic Domain is responsible for:
5 | - Managing factual information and knowledge
6 | - Organizing categorical and conceptual information
7 | - Handling entity relationships and attributes
8 | - Knowledge consolidation and organization
9 | - Abstract concept representation
10 | """
11 |
12 | from typing import Any, Dict, List
13 |
14 | from loguru import logger
15 |
16 | from memory_mcp.domains.persistence import PersistenceDomain
17 |
18 |
19 | class SemanticDomain:
20 | """
21 | Manages semantic memories (facts, knowledge, entities).
22 |
23 | This domain handles factual information, knowledge, and
24 | entity-relationship structures.
25 | """
26 |
27 | def __init__(self, config: Dict[str, Any], persistence_domain: PersistenceDomain) -> None:
28 | """
29 | Initialize the semantic domain.
30 |
31 | Args:
32 | config: Configuration dictionary
33 | persistence_domain: Reference to the persistence domain
34 | """
35 | self.config = config
36 | self.persistence_domain = persistence_domain
37 |
38 | async def initialize(self) -> None:
39 | """Initialize the semantic domain."""
40 | logger.info("Initializing Semantic Domain")
41 | # Initialization logic will be implemented here
42 |
43 | async def process_memory(self, memory: Dict[str, Any]) -> Dict[str, Any]:
44 | """
45 | Process a semantic memory.
46 |
47 | This includes extracting key information, generating embeddings,
48 | and enriching the memory with additional metadata.
49 |
50 | Args:
51 | memory: The memory to process
52 |
53 | Returns:
54 | Processed memory
55 | """
56 | logger.debug(f"Processing semantic memory: {memory['id']}")
57 |
58 | # Extract text representation for embedding
59 | text_content = self._extract_text_content(memory)
60 |
61 | # Generate embedding
62 | embedding = await self.persistence_domain.generate_embedding(text_content)
63 | memory["embedding"] = embedding
64 |
65 | # Additional processing based on memory type
66 | if memory["type"] == "entity":
67 | memory = self._process_entity_memory(memory)
68 | elif memory["type"] == "fact":
69 | memory = self._process_fact_memory(memory)
70 |
71 | return memory
72 |
73 | def _extract_text_content(self, memory: Dict[str, Any]) -> str:
74 | """
75 | Extract text content from a memory for embedding generation.
76 |
77 | Args:
78 | memory: The memory to extract text from
79 |
80 | Returns:
81 | Text representation of the memory
82 | """
83 | if memory["type"] == "fact":
84 | # For fact memories, use the fact text
85 | if "fact" in memory["content"]:
86 | return memory["content"]["fact"]
87 |
88 | elif memory["type"] == "document":
89 | # For document memories, combine title and text
90 | title = memory["content"].get("title", "")
91 | text = memory["content"].get("text", "")
92 | return f"{title}\n{text}"
93 |
94 | elif memory["type"] == "entity":
95 | # For entity memories, combine name and attributes
96 | name = memory["content"].get("name", "")
97 | entity_type = memory["content"].get("entity_type", "")
98 |
99 | # Extract attributes as text
100 | attributes = memory["content"].get("attributes", {})
101 | attr_text = ""
102 | if attributes and isinstance(attributes, dict):
103 | attr_text = "\n".join([f"{k}: {v}" for k, v in attributes.items()])
104 |
105 | return f"{name} ({entity_type})\n{attr_text}"
106 |
107 | # Fallback: try to convert content to string
108 | try:
109 | return str(memory["content"])
110 |         except Exception:
111 | return f"Memory {memory['id']} of type {memory['type']}"
112 |
113 | def _process_entity_memory(self, memory: Dict[str, Any]) -> Dict[str, Any]:
114 | """
115 | Process an entity memory.
116 |
117 | Args:
118 | memory: The entity memory to process
119 |
120 | Returns:
121 | Processed memory
122 | """
123 | # Entity-specific processing
124 | # This is a placeholder for future implementation
125 | return memory
126 |
127 | def _process_fact_memory(self, memory: Dict[str, Any]) -> Dict[str, Any]:
128 | """
129 | Process a fact memory.
130 |
131 | Args:
132 | memory: The fact memory to process
133 |
134 | Returns:
135 | Processed memory
136 | """
137 | # Fact-specific processing
138 | # This is a placeholder for future implementation
139 | return memory
140 |
141 | async def get_stats(self) -> Dict[str, Any]:
142 | """
143 | Get statistics about the semantic domain.
144 |
145 | Returns:
146 | Semantic domain statistics
147 | """
148 | return {
149 | "memory_types": {
150 | "fact": 0,
151 | "document": 0,
152 | "entity": 0
153 | },
154 | "status": "initialized"
155 | }
156 |
```
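A standalone sketch of the entity branch of `_extract_text_content` above (`entity_to_text` is an illustrative copy, not the package API):

```python
from typing import Any, Dict


def entity_to_text(content: Dict[str, Any]) -> str:
    """Mirrors SemanticDomain._extract_text_content for entity memories."""
    name = content.get("name", "")
    entity_type = content.get("entity_type", "")
    attributes = content.get("attributes", {})
    attr_text = ""
    if attributes and isinstance(attributes, dict):
        # Flatten attributes into one "key: value" line each
        attr_text = "\n".join(f"{k}: {v}" for k, v in attributes.items())
    return f"{name} ({entity_type})\n{attr_text}"


text = entity_to_text({
    "name": "user",
    "entity_type": "person",
    "attributes": {"preference": "Python"},
})
print(text)
# user (person)
# preference: Python
```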
--------------------------------------------------------------------------------
/memory_mcp/utils/config.py:
--------------------------------------------------------------------------------
```python
1 | """
2 | Configuration utilities for the memory MCP server.
3 | """
4 |
5 | import os
6 | import json
7 | from pathlib import Path
8 | from typing import Any, Dict
9 |
10 | from loguru import logger
11 |
12 |
13 | def load_config(config_path: str) -> Dict[str, Any]:
14 | """
15 | Load configuration from a JSON file.
16 |
17 | Args:
18 | config_path: Path to the configuration file
19 |
20 | Returns:
21 | Configuration dictionary
22 | """
23 | config_path = os.path.expanduser(config_path)
24 |
25 | # Check if config file exists
26 | if not os.path.exists(config_path):
27 | logger.warning(f"Configuration file not found: {config_path}")
28 | return create_default_config(config_path)
29 |
30 | try:
31 | with open(config_path, "r") as f:
32 | config = json.load(f)
33 | logger.info(f"Loaded configuration from {config_path}")
34 |
35 | # Validate and merge with defaults
36 | config = validate_config(config)
37 |
38 | return config
39 | except json.JSONDecodeError:
40 | logger.error(f"Error parsing configuration file: {config_path}")
41 | return create_default_config(config_path)
42 | except Exception as e:
43 | logger.error(f"Error loading configuration: {str(e)}")
44 | return create_default_config(config_path)
45 |
46 |
47 | def create_default_config(config_path: str) -> Dict[str, Any]:
48 | """
49 | Create default configuration.
50 |
51 | Args:
52 | config_path: Path to save the configuration file
53 |
54 | Returns:
55 | Default configuration dictionary
56 | """
57 | logger.info(f"Creating default configuration at {config_path}")
58 |
59 | # Create config directory if it doesn't exist
60 |     os.makedirs(os.path.dirname(config_path) or ".", exist_ok=True)
61 |
62 | # Default configuration
63 | config = {
64 | "server": {
65 | "host": "127.0.0.1",
66 | "port": 8000,
67 | "debug": False
68 | },
69 | "memory": {
70 | "max_short_term_items": 100,
71 | "max_long_term_items": 1000,
72 | "max_archival_items": 10000,
73 | "consolidation_interval_hours": 24,
74 | "short_term_threshold": 0.3,
75 | "file_path": os.path.join(
76 | os.path.expanduser("~/.memory_mcp/data"),
77 | "memory.json"
78 | )
79 | },
80 | "embedding": {
81 | "model": "sentence-transformers/all-MiniLM-L6-v2",
82 | "dimensions": 384,
83 | "cache_dir": os.path.expanduser("~/.memory_mcp/cache")
84 | },
85 | "retrieval": {
86 | "default_top_k": 5,
87 | "semantic_threshold": 0.75,
88 | "recency_weight": 0.3,
89 | "importance_weight": 0.7
90 | }
91 | }
92 |
93 | # Save default config
94 | try:
95 | with open(config_path, "w") as f:
96 | json.dump(config, f, indent=2)
97 | except Exception as e:
98 | logger.error(f"Error saving default configuration: {str(e)}")
99 |
100 | return config
101 |
102 |
103 | def validate_config(config: Dict[str, Any]) -> Dict[str, Any]:
104 | """
105 | Validate and normalize configuration.
106 |
107 | Args:
108 | config: Configuration dictionary
109 |
110 | Returns:
111 | Validated configuration dictionary
112 | """
113 | # Create default config
114 | default_config = {
115 | "server": {
116 | "host": "127.0.0.1",
117 | "port": 8000,
118 | "debug": False
119 | },
120 | "memory": {
121 | "max_short_term_items": 100,
122 | "max_long_term_items": 1000,
123 | "max_archival_items": 10000,
124 | "consolidation_interval_hours": 24,
125 | "short_term_threshold": 0.3,
126 | "file_path": os.path.join(
127 | os.path.expanduser("~/.memory_mcp/data"),
128 | "memory.json"
129 | )
130 | },
131 | "embedding": {
132 | "model": "sentence-transformers/all-MiniLM-L6-v2",
133 | "dimensions": 384,
134 | "cache_dir": os.path.expanduser("~/.memory_mcp/cache")
135 | },
136 | "retrieval": {
137 | "default_top_k": 5,
138 | "semantic_threshold": 0.75,
139 | "recency_weight": 0.3,
140 | "importance_weight": 0.7
141 | }
142 | }
143 |
144 | # Merge with user config (deep merge)
145 | merged_config = deep_merge(default_config, config)
146 |
147 | # Convert relative paths to absolute
148 | if "memory" in merged_config and "file_path" in merged_config["memory"]:
149 | file_path = merged_config["memory"]["file_path"]
150 | if not os.path.isabs(file_path):
151 | merged_config["memory"]["file_path"] = os.path.abspath(file_path)
152 |
153 | return merged_config
154 |
155 |
156 | def deep_merge(base: Dict[str, Any], override: Dict[str, Any]) -> Dict[str, Any]:
157 | """
158 | Deep merge two dictionaries.
159 |
160 | Args:
161 | base: Base dictionary
162 | override: Override dictionary
163 |
164 | Returns:
165 | Merged dictionary
166 | """
167 | result = base.copy()
168 |
169 | for key, value in override.items():
170 | if key in result and isinstance(result[key], dict) and isinstance(value, dict):
171 | result[key] = deep_merge(result[key], value)
172 | else:
173 | result[key] = value
174 |
175 | return result
176 |
```
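The `deep_merge` helper above can be exercised standalone; the function body below is a verbatim copy so the demo is self-contained:

```python
from typing import Any, Dict


def deep_merge(base: Dict[str, Any], override: Dict[str, Any]) -> Dict[str, Any]:
    """Same recursive merge as in config.py: nested dicts merge, scalars override."""
    result = base.copy()
    for key, value in override.items():
        if key in result and isinstance(result[key], dict) and isinstance(value, dict):
            result[key] = deep_merge(result[key], value)
        else:
            result[key] = value
    return result


defaults = {"server": {"host": "127.0.0.1", "port": 8000},
            "memory": {"max_short_term_items": 100}}
user = {"server": {"port": 9000}}

merged = deep_merge(defaults, user)
print(merged["server"])  # {'host': '127.0.0.1', 'port': 9000}
```

Note that merged levels get fresh dicts, so `defaults` keeps its original `port`; nested dicts that are not overridden are shared by reference, which is fine for read-only configuration.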
--------------------------------------------------------------------------------
/examples/claude_desktop_config.md:
--------------------------------------------------------------------------------
```markdown
1 | # Claude Desktop Integration Guide
2 |
3 | This guide explains how to integrate the Memory MCP Server with the Claude Desktop application for enhanced memory capabilities.
4 |
5 | ## Overview
6 |
7 | The Memory MCP Server implements the Model Context Protocol (MCP) to provide Claude with persistent memory capabilities. After setting up the server, you can configure Claude Desktop to use it for remembering information across conversations.
8 |
9 | ## Prerequisites
10 |
11 | - Claude Desktop application installed
12 | - Memory MCP Server installed and configured
13 |
14 | ## Configuration
15 |
16 | ### 1. Locate Claude Desktop Configuration
17 |
18 | The Claude Desktop configuration file is typically located at:
19 |
20 | - **Windows**: `%APPDATA%\Claude\claude_desktop_config.json`
21 | - **macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json`
22 | - **Linux**: `~/.config/Claude/claude_desktop_config.json`
23 |
24 | ### 2. Add Memory MCP Server Configuration
25 |
26 | Edit your `claude_desktop_config.json` file to include the Memory MCP Server:
27 |
28 | ```json
29 | {
30 | "mcpServers": {
31 | "memory": {
32 | "command": "python",
33 | "args": ["-m", "memory_mcp"],
34 | "env": {
35 | "MEMORY_FILE_PATH": "/path/to/your/memory.json"
36 | }
37 | }
38 | }
39 | }
40 | ```
41 |
42 | Replace `/path/to/your/memory.json` with your desired memory file location.
43 |
44 | ### 3. Optional: Configure MCP Server
45 |
46 | You can customize the Memory MCP Server by creating a configuration file at `~/.memory_mcp/config/config.json`:
47 |
48 | ```json
49 | {
50 | "server": {
51 | "host": "127.0.0.1",
52 | "port": 8000,
53 | "debug": false
54 | },
55 | "memory": {
56 | "max_short_term_items": 100,
57 | "max_long_term_items": 1000,
58 | "max_archival_items": 10000,
59 | "consolidation_interval_hours": 24,
60 | "short_term_threshold": 0.3,
61 | "file_path": "/path/to/your/memory.json"
62 | },
63 | "embedding": {
64 | "model": "sentence-transformers/all-MiniLM-L6-v2",
65 | "dimensions": 384,
66 | "cache_dir": "~/.memory_mcp/cache"
67 | },
68 | "retrieval": {
69 | "default_top_k": 5,
70 | "semantic_threshold": 0.75,
71 | "recency_weight": 0.3,
72 | "importance_weight": 0.7
73 | }
74 | }
75 | ```
76 |
77 | ### 4. Docker Container Option
78 |
79 | Alternatively, you can run the Memory MCP Server as a Docker container:
80 |
81 | ```json
82 | {
83 | "mcpServers": {
84 | "memory": {
85 | "command": "docker",
86 | "args": [
87 | "run",
88 | "-i",
89 | "-v", "/path/to/memory/directory:/app/memory",
90 | "--rm",
91 | "whenmoon-afk/claude-memory-mcp"
92 | ],
93 | "env": {
94 | "MEMORY_FILE_PATH": "/app/memory/memory.json"
95 | }
96 | }
97 | }
98 | }
99 | ```
100 |
101 | Make sure to create the directory `/path/to/memory/directory` on your host system before running.
102 |
103 | ## Using Memory Tools in Claude
104 |
105 | Once configured, Claude Desktop will automatically connect to the Memory MCP Server. You can use the provided memory tools in your conversations with Claude:
106 |
107 | ### Store Memory
108 |
109 | To explicitly store information in memory:
110 |
111 | ```
112 | Could you remember that my favorite color is blue?
113 | ```
114 |
115 | Claude will use the `store_memory` tool to save this information.
116 |
117 | ### Retrieve Memory
118 |
119 | To recall information from memory:
120 |
121 | ```
122 | What's my favorite color?
123 | ```
124 |
125 | Claude will use the `retrieve_memory` tool to search for relevant memories.
126 |
127 | ### System Prompt
128 |
129 | For optimal memory usage, consider adding these instructions to your Claude Desktop System Prompt:
130 |
131 | ```
132 | Follow these steps for each interaction:
133 |
134 | 1. Memory Retrieval:
135 |    - Always begin your chat by saying only "Remembering..." and retrieve all relevant information with the retrieve_memory tool
136 |    - Always refer to the retrieved information as your "memory"
137 |
138 | 2. Memory Update:
139 | - While conversing with the user, be attentive to any new information about the user
140 | - If any new information was gathered during the interaction, update your memory
141 | ```
142 |
143 | ## Troubleshooting
144 |
145 | ### Memory Server Not Starting
146 |
147 | If the Memory MCP Server fails to start:
148 |
149 | 1. Check your Python installation and ensure all dependencies are installed
150 | 2. Verify the configuration file paths are correct
151 | 3. Check if the memory file directory exists and is writable
152 | 4. Look for error messages in the Claude Desktop logs
153 |
154 | ### Memory Not Being Stored
155 |
156 | If Claude is not storing memories:
157 |
158 | 1. Ensure the MCP server is running (check Claude Desktop logs)
159 | 2. Verify that your system prompt includes instructions to use memory
160 | 3. Make sure Claude has clear information to store (be explicit)
161 |
162 | ### Memory File Corruption
163 |
164 | If the memory file becomes corrupted:
165 |
166 | 1. Stop Claude Desktop
167 | 2. Rename the corrupted file
168 | 3. The MCP server will create a new empty memory file on next start
169 |
170 | ## Advanced Configuration
171 |
172 | ### Custom Embedding Models
173 |
174 | You can use different embedding models by changing the `embedding.model` configuration:
175 |
176 | ```json
177 | "embedding": {
178 | "model": "sentence-transformers/paraphrase-MiniLM-L6-v2",
179 | "dimensions": 384
180 | }
181 | ```
182 |
183 | ### Memory Consolidation Settings
184 |
185 | Adjust memory consolidation behavior:
186 |
187 | ```json
188 | "memory": {
189 | "consolidation_interval_hours": 12,
190 | "importance_decay_rate": 0.02
191 | }
192 | ```
193 |
194 | ### Retrieval Fine-Tuning
195 |
196 | Fine-tune memory retrieval by adjusting these parameters:
197 |
198 | ```json
199 | "retrieval": {
200 | "recency_weight": 0.4,
201 | "importance_weight": 0.6
202 | }
203 | ```
204 |
205 | Increase `recency_weight` to prioritize recent memories, or increase `importance_weight` to prioritize important memories.
206 |
```
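The recency/importance weighting described at the end of the guide above can be illustrated with a toy scoring function. This is a sketch of how such weights could combine with a similarity score; the server's actual ranking formula is not shown in the guide:

```python
def combined_score(similarity: float, recency: float, importance: float,
                   recency_weight: float = 0.3, importance_weight: float = 0.7) -> float:
    # Hypothetical combination: similarity gates the candidate set,
    # recency and importance re-rank it according to the configured weights.
    return similarity * (recency_weight * recency + importance_weight * importance)


# With the default weights, an old-but-important memory outranks a
# fresh-but-trivial one at equal similarity.
old_important = combined_score(similarity=0.8, recency=0.2, importance=0.9)
new_trivial = combined_score(similarity=0.8, recency=0.9, importance=0.1)
print(old_important > new_trivial)  # True
```

Raising `recency_weight` toward the values suggested above would shift the balance back toward fresher memories.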
--------------------------------------------------------------------------------
/memory_mcp/utils/compatibility/version_checker.py:
--------------------------------------------------------------------------------
```python
1 | """
2 | Version compatibility checker for memory_mcp.
3 | """
4 |
5 | import importlib
6 | import importlib.metadata
7 | import sys
8 | from dataclasses import dataclass
9 | from typing import Dict, List, Optional, Tuple
10 |
11 | from loguru import logger
12 |
13 |
14 | @dataclass
15 | class CompatibilityReport:
16 | """Compatibility report for dependency versions."""
17 |
18 | compatible: bool
19 | issues: List[str]
20 | python_version: str
21 |
22 |
23 | def check_python_version() -> Tuple[bool, Optional[str]]:
24 | """
25 | Check if the current Python version is compatible.
26 |
27 | Returns:
28 | Tuple of (is_compatible, error_message)
29 | """
30 | python_version = sys.version_info
31 |
32 | # We support Python 3.8 to 3.12
33 | if python_version.major != 3 or python_version.minor < 8 or python_version.minor > 12:
34 | return False, f"Python version {python_version.major}.{python_version.minor}.{python_version.micro} is not supported. Please use Python 3.8-3.12."
35 |
36 | return True, None
37 |
38 |
39 | def check_dependency_version(package_name: str, min_version: str, max_version: str) -> Tuple[bool, Optional[str]]:
40 | """
41 | Check if a dependency version is within the expected range.
42 |
43 | Args:
44 | package_name: Name of the package to check
45 | min_version: Minimum supported version (inclusive)
46 | max_version: Maximum supported version (exclusive)
47 |
48 | Returns:
49 | Tuple of (is_compatible, error_message)
50 | """
51 | try:
52 | version = importlib.metadata.version(package_name)
53 |
54 |         # Simple version comparison (assumes numeric, dot-separated versions).
55 |         # Components are compared as zero-padded integer tuples so that a
56 |         # version sharing a leading component with a bound (e.g. 0.7.5
57 |         # against an exclusive maximum of 0.8.0) is classified correctly.
58 |         min_parts = [int(x) for x in min_version.split('.')]
59 |         max_parts = [int(x) for x in max_version.split('.')]
60 |         version_parts = [int(x) for x in version.split('.')]
61 | 
62 |         # Zero-pad so that e.g. "1.2" compares as (1, 2, 0)
63 |         width = max(len(min_parts), len(max_parts), len(version_parts))
64 |         min_t = tuple(min_parts + [0] * (width - len(min_parts)))
65 |         max_t = tuple(max_parts + [0] * (width - len(max_parts)))
66 |         ver_t = tuple(version_parts + [0] * (width - len(version_parts)))
67 | 
68 |         # Check minimum version (inclusive)
69 |         if ver_t < min_t:
70 |             return False, f"{package_name} version {version} is lower than the minimum supported version {min_version}"
71 | 
72 |         # Check maximum version (exclusive): the upper bound itself and
73 |         # anything above it are unsupported
74 |         if ver_t >= max_t:
75 |             return False, f"{package_name} version {version} is not below the maximum supported version {max_version}"
76 | 
77 |         return True, None
78 | except importlib.metadata.PackageNotFoundError:
79 | return False, f"{package_name} is not installed"
80 | except Exception as e:
81 | return False, f"Error checking {package_name} version: {str(e)}"
82 |
83 |
84 | def check_compatibility() -> CompatibilityReport:
85 | """
86 | Check compatibility of the current environment.
87 |
88 | Returns:
89 | CompatibilityReport with details about compatibility
90 | """
91 | issues = []
92 |
93 | # Check Python version
94 | python_compatible, python_error = check_python_version()
95 | if not python_compatible:
96 | issues.append(python_error)
97 |
98 | # Critical dependencies and their version ranges
99 | dependencies = {
100 | "numpy": ("1.20.0", "2.0.0"),
101 | "pydantic": ("2.4.0", "3.0.0"),
102 | "sentence-transformers": ("2.2.2", "3.0.0"),
103 | "hnswlib": ("0.7.0", "0.8.0"),
104 | "mcp-cli": ("0.1.0", "0.3.0"),
105 | "mcp-server": ("0.1.0", "0.3.0")
106 | }
107 |
108 | # Check each dependency
109 | for package, (min_version, max_version) in dependencies.items():
110 | try:
111 | compatible, error = check_dependency_version(package, min_version, max_version)
112 | if not compatible:
113 | issues.append(error)
114 | except Exception as e:
115 | issues.append(f"Error checking {package}: {str(e)}")
116 |
117 | # Special check for NumPy to ensure it's not v2.x
118 | try:
119 | import numpy
120 | numpy_version = numpy.__version__
121 | if numpy_version.startswith("2."):
122 | issues.append(f"NumPy version {numpy_version} is not supported. Please use NumPy 1.x (e.g., 1.20.0 or higher).")
123 | except ImportError:
124 | # Already reported by the dependency check
125 | pass
126 |
127 | return CompatibilityReport(
128 | compatible=len(issues) == 0,
129 | issues=issues,
130 | python_version=f"{sys.version_info.major}.{sys.version_info.minor}.{sys.version_info.micro}"
131 | )
132 |
133 |
134 | def print_compatibility_report(report: CompatibilityReport) -> None:
135 | """
136 | Print a compatibility report to the logger.
137 |
138 | Args:
139 | report: The compatibility report to print
140 | """
141 | if report.compatible:
142 | logger.info(f"Environment is compatible (Python {report.python_version})")
143 | else:
144 | logger.error(f"Environment has compatibility issues (Python {report.python_version}):")
145 | for issue in report.issues:
146 | logger.error(f" - {issue}")
147 |
148 | # Print helpful message
149 | logger.info("To resolve these issues, you can try:")
150 | logger.info(" - Use Python 3.8-3.12")
151 | logger.info(" - Install dependencies with: pip install -r requirements.txt")
152 | logger.info(" - If using NumPy 2.x, downgrade with: pip install \"numpy>=1.20.0,<2.0.0\"")
153 |
```
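The range check above (inclusive minimum, exclusive maximum) can be expressed compactly with Python's built-in tuple comparison; a standalone sketch, not the package API:

```python
def version_in_range(version: str, min_version: str, max_version: str) -> bool:
    """Range check with inclusive min and exclusive max (numeric parts only)."""
    def parse(v: str) -> tuple:
        return tuple(int(x) for x in v.split("."))

    ver, lo, hi = parse(version), parse(min_version), parse(max_version)
    width = max(len(ver), len(lo), len(hi))

    def pad(t: tuple) -> tuple:
        # Zero-pad so "1.2" compares as (1, 2, 0)
        return t + (0,) * (width - len(t))

    return pad(lo) <= pad(ver) < pad(hi)


print(version_in_range("0.7.5", "0.7.0", "0.8.0"))   # True
print(version_in_range("2.1.0", "1.20.0", "2.0.0"))  # False
```

For production use, `packaging.version.Version` handles pre-release and post-release tags that plain `int()` parsing rejects.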
--------------------------------------------------------------------------------
/memory_mcp/utils/schema.py:
--------------------------------------------------------------------------------
```python
1 | """
2 | Schema validation utilities for the memory MCP server.
3 | """
4 |
5 | import re
6 | from datetime import datetime
7 | from typing import Any, Dict, List, Optional, Union
8 |
9 | from pydantic import BaseModel, Field, validator
10 |
11 |
12 | class MemoryBase(BaseModel):
13 | """Base model for memory objects."""
14 | id: str
15 | type: str
16 | importance: float = 0.5
17 |
18 | @validator("importance")
19 | def validate_importance(cls, v: float) -> float:
20 | """Validate importance score."""
21 | if not 0.0 <= v <= 1.0:
22 | raise ValueError("Importance must be between 0.0 and 1.0")
23 | return v
24 |
25 |
26 | class ConversationMemory(MemoryBase):
27 | """Model for conversation memories."""
28 | type: str = "conversation"
29 | content: Dict[str, Any]
30 |
31 | @validator("content")
32 | def validate_content(cls, v: Dict[str, Any]) -> Dict[str, Any]:
33 | """Validate conversation content."""
34 | if "role" not in v and "messages" not in v:
35 | raise ValueError("Conversation must have either 'role' or 'messages'")
36 |
37 | if "role" in v and "message" not in v:
38 | raise ValueError("Conversation with 'role' must have 'message'")
39 |
40 | if "messages" in v and not isinstance(v["messages"], list):
41 | raise ValueError("Conversation 'messages' must be a list")
42 |
43 | return v
44 |
45 |
46 | class FactMemory(MemoryBase):
47 | """Model for fact memories."""
48 | type: str = "fact"
49 | content: Dict[str, Any]
50 |
51 | @validator("content")
52 | def validate_content(cls, v: Dict[str, Any]) -> Dict[str, Any]:
53 | """Validate fact content."""
54 | if "fact" not in v:
55 | raise ValueError("Fact must have 'fact' field")
56 |
57 | if "confidence" in v and not 0.0 <= v["confidence"] <= 1.0:
58 | raise ValueError("Fact confidence must be between 0.0 and 1.0")
59 |
60 | return v
61 |
62 |
63 | class DocumentMemory(MemoryBase):
64 | """Model for document memories."""
65 | type: str = "document"
66 | content: Dict[str, Any]
67 |
68 | @validator("content")
69 | def validate_content(cls, v: Dict[str, Any]) -> Dict[str, Any]:
70 | """Validate document content."""
71 | if "title" not in v or "text" not in v:
72 | raise ValueError("Document must have 'title' and 'text' fields")
73 |
74 | return v
75 |
76 |
77 | class EntityMemory(MemoryBase):
78 | """Model for entity memories."""
79 | type: str = "entity"
80 | content: Dict[str, Any]
81 |
82 | @validator("content")
83 | def validate_content(cls, v: Dict[str, Any]) -> Dict[str, Any]:
84 | """Validate entity content."""
85 | if "name" not in v or "entity_type" not in v:
86 | raise ValueError("Entity must have 'name' and 'entity_type' fields")
87 |
88 | return v
89 |
90 |
91 | class ReflectionMemory(MemoryBase):
92 | """Model for reflection memories."""
93 | type: str = "reflection"
94 | content: Dict[str, Any]
95 |
96 | @validator("content")
97 | def validate_content(cls, v: Dict[str, Any]) -> Dict[str, Any]:
98 | """Validate reflection content."""
99 | if "subject" not in v or "reflection" not in v:
100 | raise ValueError("Reflection must have 'subject' and 'reflection' fields")
101 |
102 | return v
103 |
104 |
105 | class CodeMemory(MemoryBase):
106 | """Model for code memories."""
107 | type: str = "code"
108 | content: Dict[str, Any]
109 |
110 | @validator("content")
111 | def validate_content(cls, v: Dict[str, Any]) -> Dict[str, Any]:
112 | """Validate code content."""
113 | if "language" not in v or "code" not in v:
114 | raise ValueError("Code must have 'language' and 'code' fields")
115 |
116 | return v
117 |
118 |
119 | def validate_memory(memory: Dict[str, Any]) -> Dict[str, Any]:
120 | """
121 | Validate a memory object against its schema.
122 |
123 | Args:
124 | memory: Memory dictionary
125 |
126 | Returns:
127 | Validated memory dictionary
128 |
129 | Raises:
130 | ValueError: If memory is invalid
131 | """
132 | if "type" not in memory:
133 | raise ValueError("Memory must have a 'type' field")
134 |
135 | memory_type = memory["type"]
136 |
137 | # Choose validator based on type
138 | validators = {
139 | "conversation": ConversationMemory,
140 | "fact": FactMemory,
141 | "document": DocumentMemory,
142 | "entity": EntityMemory,
143 | "reflection": ReflectionMemory,
144 | "code": CodeMemory
145 | }
146 |
147 | if memory_type not in validators:
148 | raise ValueError(f"Unknown memory type: {memory_type}")
149 |
150 | # Validate using Pydantic model
151 | model = validators[memory_type](**memory)
152 |
153 | # Return validated model as dict
154 | return model.dict()
155 |
156 |
157 | def validate_iso_timestamp(timestamp: str) -> bool:
158 | """
159 | Validate ISO timestamp format.
160 |
161 | Args:
162 | timestamp: Timestamp string
163 |
164 | Returns:
165 | True if valid, False otherwise
166 | """
167 | try:
168 | datetime.fromisoformat(timestamp)
169 | return True
170 | except ValueError:
171 | return False
172 |
173 |
174 | def validate_memory_id(memory_id: str) -> bool:
175 | """
176 | Validate memory ID format.
177 |
178 | Args:
179 | memory_id: Memory ID string
180 |
181 | Returns:
182 | True if valid, False otherwise
183 | """
184 |     # Memory IDs must start with "mem_" followed by alphanumerics, underscores, or hyphens
185 | pattern = r"^mem_[a-zA-Z0-9_-]+$"
186 | return bool(re.match(pattern, memory_id))
187 |
```
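The `validate_memory` dispatch above pairs each memory type with a Pydantic model. A standalone sketch of the same required-field checks, without Pydantic (field lists copied from the validators above; `validate_memory_sketch` and `validate_memory_id_sketch` are illustrative names, not part of the package):

```python
import re

# Required content fields per memory type, mirroring the validators above.
# The real conversation model accepts either role/message or a messages list,
# so no single field is required here.
REQUIRED_FIELDS = {
    "conversation": [],
    "fact": ["fact"],
    "document": ["title", "text"],
    "entity": ["name", "entity_type"],
    "reflection": ["subject", "reflection"],
    "code": ["language", "code"],
}

def validate_memory_sketch(memory):
    """Raise ValueError if the memory dict is missing type or content fields."""
    if "type" not in memory:
        raise ValueError("Memory must have a 'type' field")
    memory_type = memory["type"]
    if memory_type not in REQUIRED_FIELDS:
        raise ValueError(f"Unknown memory type: {memory_type}")
    content = memory.get("content", {})
    for field in REQUIRED_FIELDS[memory_type]:
        if field not in content:
            raise ValueError(f"{memory_type} must have '{field}' field")
    return memory

def validate_memory_id_sketch(memory_id):
    # IDs start with "mem_" followed by alphanumerics, underscores, or hyphens.
    return bool(re.match(r"^mem_[a-zA-Z0-9_-]+$", memory_id))
```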
--------------------------------------------------------------------------------
/memory_mcp/auto_memory/auto_capture.py:
--------------------------------------------------------------------------------
```python
1 | """
2 | Automatic memory capture utilities.
3 |
4 | This module provides functions for automatically determining
5 | when to store memories and extracting content from messages.
6 | """
7 |
8 | import re
9 | from typing import Any, Dict, List, Optional, Tuple
10 |
11 | import numpy as np
12 |
13 |
14 | def should_store_memory(message: str, threshold: float = 0.6) -> bool:
15 | """
16 | Determine if a message contains information worth storing in memory.
17 |
18 | Uses simple heuristics to decide if the message likely contains personal
19 | information, preferences, or important facts.
20 |
21 | Args:
22 | message: The message to analyze
23 | threshold: Threshold for importance (0.0-1.0)
24 |
25 | Returns:
26 | True if the message should be stored, False otherwise
27 | """
28 | # Check for personal preference indicators
29 | preference_patterns = [
30 | r"I (?:like|love|enjoy|prefer|favorite|hate|dislike)",
31 | r"my favorite",
32 | r"I am (?:a|an)",
33 | r"I'm (?:a|an)",
34 | r"my name is",
35 | r"call me",
36 | r"I work",
37 | r"I live",
38 | r"my (?:husband|wife|partner|spouse|child|son|daughter|pet)",
39 |         r"I have (?:a|an|\d+)",
40 | r"I often",
41 | r"I usually",
42 | r"I always",
43 | r"I never",
44 | ]
45 |
46 | # Check for factual information
47 | fact_patterns = [
48 | r"(?:is|are|was|were) (?:born|founded|created|established|started) (?:in|on|by)",
49 | r"(?:is|are|was|were) (?:the|a|an) (?:capital|largest|smallest|best|worst|most|least)",
50 | r"(?:is|are|was|were) (?:located|situated|found|discovered)",
51 | r"(?:is|are|was|were) (?:invented|designed|developed)",
52 | ]
53 |
54 | # Calculate message complexity (proxy for information richness)
55 | words = message.split()
56 | complexity = min(1.0, len(words) / 50.0) # Normalize to 0.0-1.0
57 |
58 | # Check for presence of preference indicators
59 | preference_score = 0.0
60 | for pattern in preference_patterns:
61 | if re.search(pattern, message, re.IGNORECASE):
62 | preference_score = 0.8
63 | break
64 |
65 | # Check for presence of fact indicators
66 | fact_score = 0.0
67 | for pattern in fact_patterns:
68 | if re.search(pattern, message, re.IGNORECASE):
69 | fact_score = 0.6
70 | break
71 |
72 | # Question sentences typically don't contain storable information
73 | question_ratio = len(re.findall(r"\?", message)) / max(1, len(re.findall(r"[.!?]", message)))
74 |
75 | # Combined score
76 | combined_score = max(preference_score, fact_score) * (1.0 - question_ratio) * complexity
77 |
78 | return combined_score >= threshold
79 |
80 |
81 | def extract_memory_content(message: str) -> Tuple[str, Dict[str, Any], float]:
82 | """
83 | Extract memory content, type, and importance from a message.
84 |
85 | Args:
86 | message: The message to extract from
87 |
88 | Returns:
89 | Tuple of (memory_type, content_dict, importance)
90 | """
91 | # Check if it's likely about the user (preferences, personal info)
92 | user_patterns = [
93 | r"I (?:like|love|enjoy|prefer|favorite|hate|dislike)",
94 | r"my favorite",
95 | r"I am (?:a|an)",
96 | r"I'm (?:a|an)",
97 | r"my name is",
98 | r"call me",
99 | r"I work",
100 | r"I live",
101 | ]
102 |
103 | # Check for fact patterns
104 | fact_patterns = [
105 | r"(?:is|are|was|were) (?:born|founded|created|established|started) (?:in|on|by)",
106 | r"(?:is|are|was|were) (?:the|a|an) (?:capital|largest|smallest|best|worst|most|least)",
107 | r"(?:is|are|was|were) (?:located|situated|found|discovered)",
108 | r"(?:is|are|was|were) (?:invented|designed|developed)",
109 | ]
110 |
111 | # Default values
112 | memory_type = "conversation"
113 | content = {"role": "user", "message": message}
114 | importance = 0.5
115 |
116 | # Check for user preferences or traits (entity memory)
117 | for pattern in user_patterns:
118 | if re.search(pattern, message, re.IGNORECASE):
119 | memory_type = "entity"
120 | # Basic extraction of attribute
121 | attribute_match = re.search(r"I (?:like|love|enjoy|prefer|hate|dislike) (.+?)(?:\.|$|,)", message, re.IGNORECASE)
122 | if attribute_match:
123 | attribute_value = attribute_match.group(1).strip()
124 | content = {
125 | "name": "user",
126 | "entity_type": "person",
127 | "attributes": {
128 | "preference": attribute_value
129 | }
130 | }
131 | importance = 0.7
132 | return memory_type, content, importance
133 |
134 | # Check for "I am" statements
135 | trait_match = re.search(r"I (?:am|'m) (?:a|an) (.+?)(?:\.|$|,)", message, re.IGNORECASE)
136 | if trait_match:
137 | trait_value = trait_match.group(1).strip()
138 | content = {
139 | "name": "user",
140 | "entity_type": "person",
141 | "attributes": {
142 | "trait": trait_value
143 | }
144 | }
145 | importance = 0.7
146 | return memory_type, content, importance
147 |
148 | # Default entity if specific extraction fails
149 | content = {
150 | "name": "user",
151 | "entity_type": "person",
152 | "attributes": {
153 | "statement": message
154 | }
155 | }
156 | importance = 0.6
157 | return memory_type, content, importance
158 |
159 | # Check for factual information
160 | for pattern in fact_patterns:
161 | if re.search(pattern, message, re.IGNORECASE):
162 | memory_type = "fact"
163 | content = {
164 | "fact": message,
165 | "confidence": 0.8,
166 | "domain": "general"
167 | }
168 | importance = 0.6
169 | return memory_type, content, importance
170 |
171 | # Default as conversation memory with moderate importance
172 | return memory_type, content, importance
```
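Condensed sketch of the scoring in `should_store_memory`: a preference-pattern hit contributes 0.8, the question ratio discounts the score, and message length caps "complexity" at 50 words. The `score_message` helper below is illustrative, using only one of the preference patterns from the module:

```python
import re

def score_message(message):
    """Simplified importance score: preference hit, discounted by questions,
    scaled by length-based complexity (as in should_store_memory)."""
    preference = 0.8 if re.search(
        r"I (?:like|love|enjoy|prefer|hate|dislike)", message, re.IGNORECASE
    ) else 0.0
    complexity = min(1.0, len(message.split()) / 50.0)
    sentences = max(1, len(re.findall(r"[.!?]", message)))
    question_ratio = len(re.findall(r"\?", message)) / sentences
    return preference * (1.0 - question_ratio) * complexity
```

A pure question scores zero, while a short personal preference scores low but nonzero, which is why the default threshold of 0.6 is only cleared by longer, information-dense statements.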
--------------------------------------------------------------------------------
/memory_mcp/utils/embeddings.py:
--------------------------------------------------------------------------------
```python
1 | """
2 | Embedding utilities for the memory MCP server.
3 | """
4 |
5 | import os
6 | from typing import Any, Dict, List, Optional, Union
7 |
8 | import numpy as np
9 | from loguru import logger
10 | from sentence_transformers import SentenceTransformer
11 |
12 |
13 | class EmbeddingManager:
14 | """
15 | Manages embedding generation and similarity calculations.
16 |
17 | This class handles the loading of embedding models, generation
18 | of embeddings for text, and calculation of similarity between
19 | embeddings.
20 | """
21 |
22 | def __init__(self, config: Dict[str, Any]) -> None:
23 | """
24 | Initialize the embedding manager.
25 |
26 | Args:
27 | config: Configuration dictionary
28 | """
29 | self.config = config
30 | self.model_name = config["embedding"].get("model", "sentence-transformers/all-MiniLM-L6-v2")
31 | self.dimensions = config["embedding"].get("dimensions", 384)
32 | self.cache_dir = config["embedding"].get("cache_dir", None)
33 |
34 | # Model will be loaded on first use
35 | self.model = None
36 |
37 | def get_model(self) -> SentenceTransformer:
38 | """
39 | Get or load the embedding model.
40 |
41 | Returns:
42 | SentenceTransformer model
43 | """
44 | if self.model is None:
45 | # Create cache directory if specified
46 | if self.cache_dir:
47 | os.makedirs(self.cache_dir, exist_ok=True)
48 |
49 | # Load model
50 | logger.info(f"Loading embedding model: {self.model_name}")
51 | try:
52 | self.model = SentenceTransformer(
53 | self.model_name,
54 | cache_folder=self.cache_dir
55 | )
56 | logger.info(f"Embedding model loaded: {self.model_name}")
57 | except Exception as e:
58 | logger.error(f"Error loading embedding model: {str(e)}")
59 | raise RuntimeError(f"Failed to load embedding model: {str(e)}")
60 |
61 | return self.model
62 |
63 | def generate_embedding(self, text: str) -> List[float]:
64 | """
65 | Generate an embedding vector for text.
66 |
67 | Args:
68 | text: Text to embed
69 |
70 | Returns:
71 | Embedding vector as a list of floats
72 | """
73 | model = self.get_model()
74 |
75 | # Generate embedding
76 | try:
77 | embedding = model.encode(text)
78 |
79 | # Convert to list of floats for JSON serialization
80 | return embedding.tolist()
81 | except Exception as e:
82 | logger.error(f"Error generating embedding: {str(e)}")
83 | # Return zero vector as fallback
84 | return [0.0] * self.dimensions
85 |
86 | def batch_generate_embeddings(self, texts: List[str]) -> List[List[float]]:
87 | """
88 | Generate embeddings for multiple texts.
89 |
90 | Args:
91 | texts: List of texts to embed
92 |
93 | Returns:
94 | List of embedding vectors
95 | """
96 | model = self.get_model()
97 |
98 | # Generate embeddings in batch
99 | try:
100 | embeddings = model.encode(texts)
101 |
102 | # Convert to list of lists for JSON serialization
103 | return [embedding.tolist() for embedding in embeddings]
104 | except Exception as e:
105 | logger.error(f"Error generating batch embeddings: {str(e)}")
106 | # Return zero vectors as fallback
107 |             return [[0.0] * self.dimensions for _ in texts]
108 |
109 | def calculate_similarity(
110 | self,
111 | embedding1: Union[List[float], np.ndarray],
112 | embedding2: Union[List[float], np.ndarray]
113 | ) -> float:
114 | """
115 | Calculate cosine similarity between two embeddings.
116 |
117 | Args:
118 | embedding1: First embedding vector
119 | embedding2: Second embedding vector
120 |
121 | Returns:
122 | Cosine similarity (0.0-1.0)
123 | """
124 | # Convert to numpy arrays if needed
125 | if isinstance(embedding1, list):
126 | embedding1 = np.array(embedding1)
127 | if isinstance(embedding2, list):
128 | embedding2 = np.array(embedding2)
129 |
130 | # Calculate cosine similarity
131 | norm1 = np.linalg.norm(embedding1)
132 | norm2 = np.linalg.norm(embedding2)
133 |
134 | if norm1 == 0 or norm2 == 0:
135 | return 0.0
136 |
137 | return float(np.dot(embedding1, embedding2) / (norm1 * norm2))
138 |
139 | def find_most_similar(
140 | self,
141 | query_embedding: Union[List[float], np.ndarray],
142 | embeddings: List[Union[List[float], np.ndarray]],
143 | min_similarity: float = 0.0,
144 | limit: int = 5
145 | ) -> List[Dict[str, Union[int, float]]]:
146 | """
147 | Find most similar embeddings to a query embedding.
148 |
149 | Args:
150 | query_embedding: Query embedding vector
151 | embeddings: List of embeddings to compare against
152 | min_similarity: Minimum similarity threshold
153 | limit: Maximum number of results
154 |
155 | Returns:
156 | List of dictionaries with index and similarity
157 | """
158 | # Convert query to numpy array if needed
159 | if isinstance(query_embedding, list):
160 | query_embedding = np.array(query_embedding)
161 |
162 | # Calculate similarities
163 | similarities = []
164 |
165 | for i, embedding in enumerate(embeddings):
166 | # Convert to numpy array if needed
167 | if isinstance(embedding, list):
168 | embedding = np.array(embedding)
169 |
170 | # Calculate similarity
171 | similarity = self.calculate_similarity(query_embedding, embedding)
172 |
173 | if similarity >= min_similarity:
174 | similarities.append({
175 | "index": i,
176 | "similarity": similarity
177 | })
178 |
179 | # Sort by similarity (descending)
180 | similarities.sort(key=lambda x: x["similarity"], reverse=True)
181 |
182 | # Limit results
183 | return similarities[:limit]
184 |
```
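The similarity math in `calculate_similarity` is plain cosine similarity with a zero-vector guard. A pure-Python equivalent (no numpy; `cosine_similarity` is an illustrative standalone function, not part of the package):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity of two equal-length vectors, 0.0 if either is zero."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)
```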
--------------------------------------------------------------------------------
/docs/user_guide.md:
--------------------------------------------------------------------------------
```markdown
1 | # User Guide: Claude Memory MCP Server
2 |
3 | This guide explains how to set up and use the Memory MCP Server with Claude Desktop for persistent memory capabilities.
4 |
5 | ## Table of Contents
6 |
7 | 1. [Installation](#installation)
8 | 2. [Configuration](#configuration)
9 | 3. [How Memory Works](#how-memory-works)
10 | 4. [Usage Examples](#usage-examples)
11 | 5. [Advanced Configuration](#advanced-configuration)
12 | 6. [Troubleshooting](#troubleshooting)
13 |
14 | ## Installation
15 |
16 | ### Option 1: Standard Installation
17 |
18 | 1. **Prerequisites**:
19 | - Python 3.8-3.12
20 | - pip package manager
21 |
22 | 2. **Clone the repository**:
23 | ```bash
24 | git clone https://github.com/WhenMoon-afk/claude-memory-mcp.git
25 | cd claude-memory-mcp
26 | ```
27 |
28 | 3. **Install dependencies**:
29 | ```bash
30 | pip install -r requirements.txt
31 | ```
32 |
33 | 4. **Run setup script**:
34 | ```bash
35 | chmod +x setup.sh
36 | ./setup.sh
37 | ```
38 |
39 | ### Option 2: Docker Installation (Recommended)
40 |
41 | See the [Docker Usage Guide](docker_usage.md) for detailed instructions on running the server in a container.
42 |
43 | ## Configuration
44 |
45 | ### Claude Desktop Integration
46 |
47 | To integrate with Claude Desktop, add the Memory MCP Server to your Claude configuration file:
48 |
49 | **Location**:
50 | - Windows: `%APPDATA%\Claude\claude_desktop_config.json`
51 | - macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
52 | - Linux: `~/.config/Claude/claude_desktop_config.json`
53 |
54 | **Configuration**:
55 | ```json
56 | {
57 | "mcpServers": {
58 | "memory": {
59 | "command": "python",
60 | "args": ["-m", "memory_mcp"],
61 | "env": {
62 | "MEMORY_FILE_PATH": "/path/to/your/memory.json"
63 | }
64 | }
65 | }
66 | }
67 | ```
68 |
69 | ### Memory System Prompt
70 |
71 | For optimal memory usage, add these instructions to your Claude Desktop System Prompt:
72 |
73 | ```
74 | This Claude instance has been enhanced with persistent memory capabilities.
75 | Claude will automatically:
76 | 1. Remember important details about you across conversations
77 | 2. Store key facts and preferences you share
78 | 3. Recall relevant information when needed
79 |
80 | You don't need to explicitly ask Claude to remember or recall information.
81 | Simply have natural conversations, and Claude will maintain memory of important details.
82 |
83 | To see what Claude remembers about you, just ask "What do you remember about me?"
84 | ```
85 |
86 | ## How Memory Works
87 |
88 | ### Memory Types
89 |
90 | The Memory MCP Server supports several types of memories:
91 |
92 | 1. **Entity Memories**: Information about people, places, things
93 | - User preferences and traits
94 | - Personal information
95 |
96 | 2. **Fact Memories**: Factual information
97 | - General knowledge
98 | - Specific facts shared by the user
99 |
100 | 3. **Conversation Memories**: Important parts of conversations
101 | - Significant exchanges
102 | - Key discussion points
103 |
104 | 4. **Reflection Memories**: Insights and patterns
105 | - Observations about the user
106 | - Recurring themes
107 |
108 | ### Memory Tiers
109 |
110 | Memories are stored in three tiers:
111 |
112 | 1. **Short-term Memory**: Recently created or accessed memories
113 | - Higher importance (>0.3 by default)
114 | - Frequently accessed
115 |
116 | 2. **Long-term Memory**: Older, less frequently accessed memories
117 | - Lower importance (<0.3 by default)
118 | - Less frequently accessed
119 |
120 | 3. **Archived Memory**: Rarely accessed but potentially valuable memories
121 | - Used for long-term storage
122 | - Still searchable but less likely to be retrieved
123 |
124 | ## Usage Examples
125 |
126 | ### Scenario 1: Remembering User Preferences
127 |
128 | **User**: "I really prefer to code in Python rather than JavaScript."
129 |
130 | *Claude will automatically store this preference without any explicit command. In future conversations, Claude will remember this preference and tailor responses accordingly.*
131 |
132 | **User**: "What programming language do I prefer?"
133 |
134 | *Claude will automatically retrieve the memory:*
135 |
136 | **Claude**: "You've mentioned that you prefer to code in Python rather than JavaScript."
137 |
138 | ### Scenario 2: Storing and Retrieving Personal Information
139 |
140 | **User**: "My dog's name is Buddy, he's a golden retriever."
141 |
142 | *Claude will automatically store this entity information.*
143 |
144 | **User**: "What do you remember about my pet?"
145 |
146 | **Claude**: "You mentioned that you have a golden retriever named Buddy."
147 |
148 | ### Scenario 3: Explicit Memory Operations (if needed)
149 |
150 | While automatic memory is enabled by default, you can still use explicit commands:
151 |
152 | **User**: "Please remember that my favorite color is blue."
153 |
154 | **Claude**: "I'll remember that your favorite color is blue."
155 |
156 | **User**: "What's my favorite color?"
157 |
158 | **Claude**: "Your favorite color is blue."
159 |
160 | ## Advanced Configuration
161 |
162 | ### Custom Configuration File
163 |
164 | Create a custom configuration file at `~/.memory_mcp/config/config.json`:
165 |
166 | ```json
167 | {
168 | "auto_memory": {
169 | "enabled": true,
170 | "threshold": 0.6,
171 | "store_assistant_messages": false,
172 | "entity_extraction_enabled": true
173 | },
174 | "memory": {
175 | "max_short_term_items": 200,
176 | "max_long_term_items": 2000,
177 | "consolidation_interval_hours": 48
178 | }
179 | }
180 | ```
181 |
182 | ### Auto-Memory Settings
183 |
184 | - `enabled`: Enable/disable automatic memory (default: true)
185 | - `threshold`: Minimum importance threshold for auto-storage (0.0-1.0)
186 | - `store_assistant_messages`: Whether to store assistant messages (default: false)
187 | - `entity_extraction_enabled`: Enable entity extraction from messages (default: true)
188 |
189 | ## Troubleshooting
190 |
191 | ### Memory Not Being Stored
192 |
193 | 1. **Check auto-memory settings**: Ensure auto_memory.enabled is true in config
194 | 2. **Check threshold**: Lower the auto_memory.threshold value (e.g., to 0.4)
195 | 3. **Use explicit commands**: You can always use explicit "please remember..." commands
196 |
197 | ### Memory Not Being Retrieved
198 |
199 | 1. **Check query relevance**: Ensure your query is related to stored memories
200 | 2. **Check memory existence**: Use the list_memories tool to see if the memory exists
201 | 3. **Try more specific queries**: Be more specific in your retrieval queries
202 |
203 | ### Server Not Starting
204 |
205 | See the [Compatibility Guide](compatibility.md) for resolving dependency and compatibility issues.
206 |
207 | ### Additional Help
208 |
209 | If you continue to experience issues, please:
210 | 1. Check the server logs for error messages
211 | 2. Refer to the [Compatibility Guide](compatibility.md)
212 | 3. Open an issue on GitHub with detailed information about your problem
```
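The tier descriptions above can be sketched as a routing function. This is an illustrative simplification only, assuming the 0.3 importance threshold mentioned in the guide; the server's actual consolidation policy also weighs recency and access frequency, and `assign_tier` is a hypothetical helper, not part of the package:

```python
def assign_tier(importance, days_since_access, threshold=0.3, archive_after_days=90):
    """Illustrative tier routing, not the server's actual policy."""
    if days_since_access >= archive_after_days:
        return "archived"
    if importance > threshold:
        return "short_term"
    return "long_term"
```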
--------------------------------------------------------------------------------
/tests/test_memory_mcp.py:
--------------------------------------------------------------------------------
```python
1 | """
2 | Tests for the Memory MCP Server.
3 | """
4 |
5 | import os
6 | import json
7 | import tempfile
8 | import unittest
9 | from typing import Dict, Any
10 |
11 | from memory_mcp.utils.config import load_config, create_default_config
12 | from memory_mcp.utils.schema import validate_memory
13 | from memory_mcp.utils.embeddings import EmbeddingManager
14 |
15 |
16 | class TestConfig(unittest.TestCase):
17 | """Tests for configuration utilities."""
18 |
19 | def test_create_default_config(self):
20 | """Test creating default configuration."""
21 | with tempfile.NamedTemporaryFile(suffix=".json", delete=False) as temp:
22 | try:
23 | # Create default config
24 | config = create_default_config(temp.name)
25 |
26 | # Check if config file was created
27 | self.assertTrue(os.path.exists(temp.name))
28 |
29 | # Check if config has expected sections
30 | self.assertIn("server", config)
31 | self.assertIn("memory", config)
32 | self.assertIn("embedding", config)
33 |
34 | # Load the created config
35 | loaded_config = load_config(temp.name)
36 |
37 | # Check if loaded config matches
38 | self.assertEqual(config, loaded_config)
39 | finally:
40 | # Clean up
41 | os.unlink(temp.name)
42 |
43 | def test_load_nonexistent_config(self):
44 | """Test loading nonexistent configuration."""
45 | # Use a path that doesn't exist
46 | with tempfile.NamedTemporaryFile(suffix=".json") as temp:
47 | pass # File is deleted on close
48 |
49 | # Load config (should create default)
50 | config = load_config(temp.name)
51 |
52 | # Check if config has expected sections
53 | self.assertIn("server", config)
54 | self.assertIn("memory", config)
55 | self.assertIn("embedding", config)
56 |
57 | # Clean up
58 | if os.path.exists(temp.name):
59 | os.unlink(temp.name)
60 |
61 |
62 | class TestSchema(unittest.TestCase):
63 | """Tests for schema validation utilities."""
64 |
65 | def test_validate_conversation_memory(self):
66 | """Test validating conversation memory."""
67 | # Valid conversation with role/message
68 | memory = {
69 | "id": "mem_test1",
70 | "type": "conversation",
71 | "importance": 0.8,
72 | "content": {
73 | "role": "user",
74 | "message": "Hello, Claude!"
75 | }
76 | }
77 |
78 | validated = validate_memory(memory)
79 | self.assertEqual(validated["id"], "mem_test1")
80 | self.assertEqual(validated["type"], "conversation")
81 |
82 | # Valid conversation with messages array
83 | memory = {
84 | "id": "mem_test2",
85 | "type": "conversation",
86 | "importance": 0.7,
87 | "content": {
88 | "messages": [
89 | {"role": "user", "content": "Hello"},
90 | {"role": "assistant", "content": "Hi there!"}
91 | ]
92 | }
93 | }
94 |
95 | validated = validate_memory(memory)
96 | self.assertEqual(validated["id"], "mem_test2")
97 | self.assertEqual(validated["type"], "conversation")
98 |
99 | # Invalid: missing required fields
100 | memory = {
101 | "id": "mem_test3",
102 | "type": "conversation",
103 | "importance": 0.5,
104 | "content": {}
105 | }
106 |
107 | with self.assertRaises(ValueError):
108 | validate_memory(memory)
109 |
110 | def test_validate_fact_memory(self):
111 | """Test validating fact memory."""
112 | # Valid fact
113 | memory = {
114 | "id": "mem_test4",
115 | "type": "fact",
116 | "importance": 0.9,
117 | "content": {
118 | "fact": "The capital of France is Paris.",
119 | "confidence": 0.95
120 | }
121 | }
122 |
123 | validated = validate_memory(memory)
124 | self.assertEqual(validated["id"], "mem_test4")
125 | self.assertEqual(validated["type"], "fact")
126 |
127 | # Invalid: missing fact field
128 | memory = {
129 | "id": "mem_test5",
130 | "type": "fact",
131 | "importance": 0.7,
132 | "content": {
133 | "confidence": 0.8
134 | }
135 | }
136 |
137 | with self.assertRaises(ValueError):
138 | validate_memory(memory)
139 |
140 |
141 | class TestEmbeddings(unittest.TestCase):
142 | """Tests for embedding utilities."""
143 |
144 | def test_embedding_manager_init(self):
145 | """Test initializing the embedding manager."""
146 | config = {
147 | "embedding": {
148 | "model": "sentence-transformers/paraphrase-MiniLM-L3-v2",
149 | "dimensions": 384,
150 | "cache_dir": None
151 | }
152 | }
153 |
154 | manager = EmbeddingManager(config)
155 | self.assertEqual(manager.model_name, "sentence-transformers/paraphrase-MiniLM-L3-v2")
156 | self.assertEqual(manager.dimensions, 384)
157 | self.assertIsNone(manager.model) # Model should be None initially
158 |
159 | def test_similarity_calculation(self):
160 | """Test similarity calculation between embeddings."""
161 | config = {
162 | "embedding": {
163 | "model": "sentence-transformers/paraphrase-MiniLM-L3-v2",
164 | "dimensions": 384
165 | }
166 | }
167 |
168 | manager = EmbeddingManager(config)
169 |
170 | # Test with numpy arrays
171 | import numpy as np
172 | v1 = np.array([1.0, 0.0, 0.0])
173 | v2 = np.array([0.0, 1.0, 0.0])
174 | v3 = np.array([1.0, 1.0, 0.0])
175 |
176 | # Orthogonal vectors should have similarity 0
177 | self.assertAlmostEqual(manager.calculate_similarity(v1, v2), 0.0)
178 |
179 | # Same vector should have similarity 1
180 | self.assertAlmostEqual(manager.calculate_similarity(v1, v1), 1.0)
181 |
182 | # Test with lists
183 | v1_list = [1.0, 0.0, 0.0]
184 | v2_list = [0.0, 1.0, 0.0]
185 |
186 | # Orthogonal vectors should have similarity 0
187 | self.assertAlmostEqual(manager.calculate_similarity(v1_list, v2_list), 0.0)
188 |
189 |
190 | if __name__ == "__main__":
191 | unittest.main()
192 |
```
--------------------------------------------------------------------------------
/memory_mcp/mcp/tools.py:
--------------------------------------------------------------------------------
```python
1 | """
2 | MCP tool definitions for the memory system.
3 | """
4 |
5 | from typing import Dict, Any
6 |
7 | from memory_mcp.domains.manager import MemoryDomainManager
8 |
9 |
10 | class MemoryToolDefinitions:
11 | """
12 | Defines MCP tools for the memory system.
13 |
14 | This class contains the schema definitions and validation for
15 | the MCP tools exposed by the memory server.
16 | """
17 |
18 | def __init__(self, domain_manager: MemoryDomainManager) -> None:
19 | """
20 | Initialize the tool definitions.
21 |
22 | Args:
23 | domain_manager: The memory domain manager
24 | """
25 | self.domain_manager = domain_manager
26 |
27 | @property
28 | def store_memory_schema(self) -> Dict[str, Any]:
29 | """Schema for the store_memory tool."""
30 | return {
31 | "type": "object",
32 | "properties": {
33 | "type": {
34 | "type": "string",
35 | "description": "Type of memory to store (conversation, fact, document, entity, reflection)",
36 | "enum": ["conversation", "fact", "document", "entity", "reflection", "code"]
37 | },
38 | "content": {
39 | "type": "object",
40 | "description": "Content of the memory (type-specific structure)"
41 | },
42 | "importance": {
43 | "type": "number",
44 | "description": "Importance score (0.0-1.0, higher is more important)",
45 | "minimum": 0.0,
46 | "maximum": 1.0
47 | },
48 | "metadata": {
49 | "type": "object",
50 | "description": "Additional metadata for the memory"
51 | },
52 | "context": {
53 | "type": "object",
54 | "description": "Contextual information for the memory"
55 | }
56 | },
57 | "required": ["type", "content"]
58 | }
59 |
60 | @property
61 | def retrieve_memory_schema(self) -> Dict[str, Any]:
62 | """Schema for the retrieve_memory tool."""
63 | return {
64 | "type": "object",
65 | "properties": {
66 | "query": {
67 | "type": "string",
68 | "description": "Query string to search for relevant memories"
69 | },
70 | "limit": {
71 | "type": "integer",
72 | "description": "Maximum number of memories to retrieve (default: 5)",
73 | "minimum": 1,
74 | "maximum": 50
75 | },
76 | "types": {
77 | "type": "array",
78 | "description": "Types of memories to include (null for all types)",
79 | "items": {
80 | "type": "string",
81 | "enum": ["conversation", "fact", "document", "entity", "reflection", "code"]
82 | }
83 | },
84 | "min_similarity": {
85 | "type": "number",
86 | "description": "Minimum similarity score (0.0-1.0) for results",
87 | "minimum": 0.0,
88 | "maximum": 1.0
89 | },
90 | "include_metadata": {
91 | "type": "boolean",
92 | "description": "Whether to include metadata in the results"
93 | }
94 | },
95 | "required": ["query"]
96 | }
97 |
98 | @property
99 | def list_memories_schema(self) -> Dict[str, Any]:
100 | """Schema for the list_memories tool."""
101 | return {
102 | "type": "object",
103 | "properties": {
104 | "types": {
105 | "type": "array",
106 | "description": "Types of memories to include (null for all types)",
107 | "items": {
108 | "type": "string",
109 | "enum": ["conversation", "fact", "document", "entity", "reflection", "code"]
110 | }
111 | },
112 | "limit": {
113 | "type": "integer",
114 | "description": "Maximum number of memories to retrieve (default: 20)",
115 | "minimum": 1,
116 | "maximum": 100
117 | },
118 | "offset": {
119 | "type": "integer",
120 | "description": "Offset for pagination (default: 0)",
121 | "minimum": 0
122 | },
123 | "tier": {
124 | "type": "string",
125 | "description": "Memory tier to retrieve from (null for all tiers)",
126 | "enum": ["short_term", "long_term", "archived"]
127 | },
128 | "include_content": {
129 | "type": "boolean",
130 | "description": "Whether to include memory content in the results (default: false)"
131 | }
132 | }
133 | }
134 |
135 | @property
136 | def update_memory_schema(self) -> Dict[str, Any]:
137 | """Schema for the update_memory tool."""
138 | return {
139 | "type": "object",
140 | "properties": {
141 | "memory_id": {
142 | "type": "string",
143 | "description": "ID of the memory to update"
144 | },
145 | "updates": {
146 | "type": "object",
147 | "description": "Updates to apply to the memory",
148 | "properties": {
149 | "content": {
150 | "type": "object",
151 | "description": "New content for the memory"
152 | },
153 | "importance": {
154 | "type": "number",
155 | "description": "New importance score (0.0-1.0)",
156 | "minimum": 0.0,
157 | "maximum": 1.0
158 | },
159 | "metadata": {
160 | "type": "object",
161 | "description": "Updates to memory metadata"
162 | },
163 | "context": {
164 | "type": "object",
165 | "description": "Updates to memory context"
166 | }
167 | }
168 | }
169 | },
170 | "required": ["memory_id", "updates"]
171 | }
172 |
173 | @property
174 | def delete_memory_schema(self) -> Dict[str, Any]:
175 | """Schema for the delete_memory tool."""
176 | return {
177 | "type": "object",
178 | "properties": {
179 | "memory_ids": {
180 | "type": "array",
181 | "description": "IDs of memories to delete",
182 | "items": {
183 | "type": "string"
184 | }
185 | }
186 | },
187 | "required": ["memory_ids"]
188 | }
189 |
190 | @property
191 | def memory_stats_schema(self) -> Dict[str, Any]:
192 | """Schema for the memory_stats tool."""
193 | return {
194 | "type": "object",
195 | "properties": {}
196 | }
197 |
```
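The schemas above follow JSON Schema conventions (`required`, `enum`, `minimum`/`maximum`). A minimal sketch of checking a `store_memory` call against the required fields, type enum, and importance bounds; a real server would hand this to a full JSON Schema validator, and `check_store_memory_args` is an illustrative helper, not part of the package:

```python
# Constraints lifted from store_memory_schema above.
REQUIRED = ["type", "content"]
TYPE_ENUM = ["conversation", "fact", "document", "entity", "reflection", "code"]

def check_store_memory_args(args):
    """Return a list of validation error strings (empty if args are valid)."""
    errors = []
    for field in REQUIRED:
        if field not in args:
            errors.append(f"missing required field: {field}")
    if "type" in args and args["type"] not in TYPE_ENUM:
        errors.append(f"invalid type: {args['type']}")
    importance = args.get("importance")
    if importance is not None and not 0.0 <= importance <= 1.0:
        errors.append("importance must be between 0.0 and 1.0")
    return errors
```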
--------------------------------------------------------------------------------
/docs/claude_integration.md:
--------------------------------------------------------------------------------
```markdown
1 | # Claude Desktop Integration Guide
2 |
3 | This guide explains how to set up and use the Memory MCP Server with the Claude Desktop application.
4 |
5 | ## Installation
6 |
7 | First, ensure you have installed the Memory MCP Server by following the instructions in the [README.md](../README.md) file.
8 |
9 | ## Configuration
10 |
11 | ### 1. Configure Claude Desktop
12 |
13 | To enable the Memory MCP Server in Claude Desktop, you need to add it to the Claude Desktop configuration file.
14 |
15 | The configuration file is typically located at:
16 | - Windows: `%APPDATA%\Claude\claude_desktop_config.json`
17 | - macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
18 | - Linux: `~/.config/Claude/claude_desktop_config.json`
19 |
20 | Edit the file to add the following MCP server configuration:
21 |
22 | ```json
23 | {
24 | "mcpServers": {
25 | "memory": {
26 | "command": "python",
27 | "args": ["-m", "memory_mcp"],
28 | "env": {
29 | "MEMORY_FILE_PATH": "/path/to/your/memory.json"
30 | }
31 | }
32 | }
33 | }
34 | ```
35 |
36 | ### 2. Configure Environment Variables (Optional)
37 |
38 | You can customize the behavior of the Memory MCP Server by setting environment variables:
39 |
40 | - `MCP_DATA_DIR`: Directory for memory data (default: `~/.memory_mcp`)
41 | - `MCP_CONFIG_DIR`: Directory for configuration files (default: `~/.memory_mcp/config`)
42 |
43 | ### 3. Customize Memory File Location (Optional)
44 |
45 | By default, the Memory MCP Server stores memory data in:
46 | - `~/.memory_mcp/data/memory.json`
47 |
48 | You can customize this location by setting the `MEMORY_FILE_PATH` environment variable in the Claude Desktop configuration.
49 |
50 | ## Using Memory Features in Claude
51 |
52 | ### 1. Starting Claude Desktop
53 |
54 | After configuring the MCP server, start Claude Desktop. Claude Desktop launches the Memory MCP Server automatically using the command specified in the configuration.
55 |
56 | ### 2. Available Memory Tools
57 |
58 | Claude has access to the following memory-related tools:
59 |
60 | #### store_memory
61 | Store new information in memory.
62 |
63 | ```json
64 | {
65 | "type": "conversation|fact|document|entity|reflection|code",
66 | "content": {
67 | // Type-specific content structure
68 | },
69 | "importance": 0.75, // Optional: 0.0-1.0 (higher is more important)
70 | "metadata": {}, // Optional: Additional metadata
71 | "context": {} // Optional: Contextual information
72 | }
73 | ```
74 |
75 | #### retrieve_memory
76 | Retrieve relevant memories based on a query.
77 |
78 | ```json
79 | {
80 | "query": "What is the capital of France?",
81 | "limit": 5, // Optional: Maximum number of results
82 | "types": ["fact", "document"], // Optional: Memory types to include
83 | "min_similarity": 0.6, // Optional: Minimum similarity score
84 | "include_metadata": true // Optional: Include metadata in results
85 | }
86 | ```
87 |
88 | #### list_memories
89 | List available memories with filtering options.
90 |
91 | ```json
92 | {
93 | "types": ["conversation", "fact"], // Optional: Memory types to include
94 | "limit": 20, // Optional: Maximum number of results
95 | "offset": 0, // Optional: Offset for pagination
96 | "tier": "short_term", // Optional: Memory tier to filter by
97 | "include_content": true // Optional: Include memory content in results
98 | }
99 | ```
100 |
101 | #### update_memory
102 | Update existing memory entries.
103 |
104 | ```json
105 | {
106 | "memory_id": "mem_1234567890",
107 | "updates": {
108 | "content": {}, // Optional: New content
109 | "importance": 0.8, // Optional: New importance score
110 | "metadata": {}, // Optional: Updates to metadata
111 | "context": {} // Optional: Updates to context
112 | }
113 | }
114 | ```
115 |
116 | #### delete_memory
117 | Remove specific memories.
118 |
119 | ```json
120 | {
121 | "memory_ids": ["mem_1234567890", "mem_0987654321"]
122 | }
123 | ```
124 |
125 | #### memory_stats
126 | Get statistics about the memory store.
127 |
128 | ```json
129 | {}
130 | ```
131 |
132 | ### 3. Example Usage
133 |
134 | Claude can use these memory tools to store and retrieve information. Here are some example prompts:
135 |
136 | #### Storing a Fact
137 |
138 | ```
139 | Please remember that Paris is the capital of France.
140 | ```
141 |
142 | Claude might use the `store_memory` tool to save this fact:
143 |
144 | ```json
145 | {
146 | "type": "fact",
147 | "content": {
148 | "fact": "Paris is the capital of France",
149 | "confidence": 0.98,
150 | "domain": "geography"
151 | },
152 | "importance": 0.7
153 | }
154 | ```
155 |
156 | #### Retrieving Information
157 |
158 | ```
159 | What important geographical facts do you remember?
160 | ```
161 |
162 | Claude might use the `retrieve_memory` tool to find relevant facts:
163 |
164 | ```json
165 | {
166 | "query": "important geographical facts",
167 | "types": ["fact"],
168 | "min_similarity": 0.6
169 | }
170 | ```
171 |
172 | #### Saving User Preferences
173 |
174 | ```
175 | Please remember that I prefer to see code examples in Python, not JavaScript.
176 | ```
177 |
178 | Claude might use the `store_memory` tool to save this preference:
179 |
180 | ```json
181 | {
182 | "type": "entity",
183 | "content": {
184 | "name": "user",
185 | "entity_type": "person",
186 | "attributes": {
187 | "code_preference": "Python"
188 | }
189 | },
190 | "importance": 0.8
191 | }
192 | ```
193 |
194 | ### 4. Memory Persistence
195 |
196 | The Memory MCP Server maintains memory persistence across conversations.
197 |
198 | When Claude starts a new conversation, it can access memories from previous conversations. The memory system uses a tiered approach:
199 |
200 | - **Short-term memory**: Recently created or accessed memories
201 | - **Long-term memory**: Older, less frequently accessed memories
202 | - **Archival memory**: Rarely accessed memories that may still be valuable
203 |
204 | The system automatically manages the movement of memories between tiers based on access patterns, importance, and other factors.
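
As a rough sketch (an illustration, not the server's exact implementation), the initial tier assignment is driven by the importance score and a configurable `short_term_threshold` (default 0.3): new memories start in short-term unless their importance falls below the threshold.

```python
def initial_tier(importance: float, short_term_threshold: float = 0.3) -> str:
    """Pick the starting tier for a newly stored memory."""
    # Low-importance memories go straight to long-term storage;
    # everything else starts in the short-term tier.
    return "long_term" if importance < short_term_threshold else "short_term"

print(initial_tier(0.8))  # short_term
print(initial_tier(0.1))  # long_term
```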
205 |
206 | ## Advanced Configuration
207 |
208 | ### Memory Consolidation
209 |
210 | The Memory MCP Server automatically consolidates memories based on the configured interval (default: 24 hours).
211 |
212 | You can customize this behavior by setting the `consolidation_interval_hours` parameter in the configuration file.
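
For example, to run consolidation every 12 hours, set the interval under the `memory` section of the configuration file:

```json
{
  "memory": {
    "consolidation_interval_hours": 12
  }
}
```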
213 |
214 | ### Memory Tiers
215 |
216 | The memory tiers have default size limits that you can adjust in the configuration:
217 |
218 | ```json
219 | {
220 | "memory": {
221 | "max_short_term_items": 100,
222 | "max_long_term_items": 1000,
223 | "max_archival_items": 10000
224 | }
225 | }
226 | ```
227 |
228 | ### Embedding Model
229 |
230 | The Memory MCP Server uses an embedding model to convert text into vector representations for semantic search.
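
Semantic search works by comparing the embedding vector of a query against the stored memory embeddings, typically via cosine similarity. A minimal illustration with toy vectors (not the server's actual code):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity in [-1, 1]; higher means more semantically similar."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query_vec = np.array([0.1, 0.9, 0.2])
memory_vec = np.array([0.15, 0.85, 0.25])
score = cosine_similarity(query_vec, memory_vec)
# Memories scoring below the min_similarity threshold (default 0.6) are filtered out.
```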
231 |
232 | You can customize the embedding model in the configuration:
233 |
234 | ```json
235 | {
236 | "embedding": {
237 | "model": "sentence-transformers/all-MiniLM-L6-v2",
238 | "dimensions": 384,
239 | "cache_dir": "~/.memory_mcp/cache"
240 | }
241 | }
242 | ```
243 |
244 | ## Troubleshooting
245 |
246 | ### Checking Server Status
247 |
248 | The Memory MCP Server logs to standard error. In the Claude Desktop console output, you should see messages indicating the server is running.
249 |
250 | ### Common Issues
251 |
252 | #### Server won't start
253 |
254 | - Check if the path to the memory file is valid
255 | - Verify that all dependencies are installed
256 | - Check permissions for data directories
257 |
258 | #### Memory not persisting
259 |
260 | - Verify that the memory file path is correct
261 | - Check if the memory file exists and is writable
262 | - Ensure Claude has permission to execute the MCP server
263 |
264 | #### Embedding model issues
265 |
266 | - Check if the embedding model is installed
267 | - Verify that the model name is correct
268 | - Ensure you have sufficient disk space for model caching
269 |
270 | ## Security Considerations
271 |
272 | The Memory MCP Server stores memories on your local file system. Consider these security aspects:
273 |
274 | - **Data Privacy**: The memory file contains all stored memories, which may include sensitive information.
275 | - **File Permissions**: Ensure the memory file has appropriate permissions to prevent unauthorized access.
276 | - **Encryption**: Consider encrypting the memory file if it contains sensitive information.
277 |
278 | ## Further Resources
279 |
280 | - [Model Context Protocol Documentation](https://modelcontextprotocol.io/)
281 | - [Claude Desktop Documentation](https://claude.ai/docs)
282 | - [Memory MCP Server GitHub Repository](https://github.com/WhenMoon-afk/claude-memory-mcp)
283 |
```
--------------------------------------------------------------------------------
/memory_mcp/domains/temporal.py:
--------------------------------------------------------------------------------
```python
1 | """
2 | Temporal Domain for time-aware memory processing.
3 |
4 | The Temporal Domain is responsible for:
5 | - Managing memory decay and importance over time
6 | - Temporal indexing and sequencing
7 | - Chronological relationship tracking
8 | - Time-based memory consolidation
9 | - Recency effects in retrieval
10 | """
11 |
13 | from datetime import datetime, timedelta
14 | from typing import Any, Dict, List
15 |
16 | from loguru import logger
17 |
18 | from memory_mcp.domains.persistence import PersistenceDomain
19 |
20 |
21 | class TemporalDomain:
22 | """
23 | Manages time-aware memory processing.
24 |
25 | This domain handles temporal aspects of memory, including
26 | decay over time, recency-based relevance, and time-based
27 | consolidation of memories.
28 | """
29 |
30 | def __init__(self, config: Dict[str, Any], persistence_domain: PersistenceDomain) -> None:
31 | """
32 | Initialize the temporal domain.
33 |
34 | Args:
35 | config: Configuration dictionary
36 | persistence_domain: Reference to the persistence domain
37 | """
38 | self.config = config
39 | self.persistence_domain = persistence_domain
40 | self.last_consolidation = datetime.now()
41 |
42 | async def initialize(self) -> None:
43 | """Initialize the temporal domain."""
44 | logger.info("Initializing Temporal Domain")
45 |
46 | # Schedule initial consolidation if needed
47 | consolidation_interval = self.config["memory"].get("consolidation_interval_hours", 24)
48 | self.consolidation_interval = timedelta(hours=consolidation_interval)
49 |
50 | # Get last consolidation time from persistence
51 | last_consolidation = await self.persistence_domain.get_metadata("last_consolidation")
52 | if last_consolidation:
53 | try:
54 | self.last_consolidation = datetime.fromisoformat(last_consolidation)
55 | except ValueError:
56 | logger.warning(f"Invalid last_consolidation timestamp: {last_consolidation}")
57 | self.last_consolidation = datetime.now()
58 |
59 | # Check if consolidation is due
60 | if datetime.now() - self.last_consolidation > self.consolidation_interval:
61 | logger.info("Consolidation is due. Will run after initialization.")
62 | # Note: We don't run consolidation here to avoid slow startup
63 | # It will run on the next memory operation
64 |
65 | async def process_new_memory(self, memory: Dict[str, Any]) -> Dict[str, Any]:
66 | """
67 | Process a new memory with temporal information.
68 |
69 | Args:
70 | memory: The memory to process
71 |
72 | Returns:
73 | Processed memory with temporal information
74 | """
75 | # Add timestamps
76 | now = datetime.now().isoformat()
77 | memory["created_at"] = now
78 | memory["last_accessed"] = now
79 | memory["last_modified"] = now
80 | memory["access_count"] = 0
81 |
82 | return memory
83 |
84 | async def update_memory_access(self, memory_id: str) -> None:
85 | """
86 | Update the access time for a memory.
87 |
88 | Args:
89 | memory_id: ID of the memory to update
90 | """
91 | # Get the memory
92 | memory = await self.persistence_domain.get_memory(memory_id)
93 | if not memory:
94 | logger.warning(f"Memory {memory_id} not found for access update")
95 | return
96 |
97 | # Update access time and count
98 | memory["last_accessed"] = datetime.now().isoformat()
99 | memory["access_count"] = memory.get("access_count", 0) + 1
100 |
101 | # Save the updated memory
102 | current_tier = await self.persistence_domain.get_memory_tier(memory_id)
103 | await self.persistence_domain.update_memory(memory, current_tier)
104 |
105 | # Check if consolidation is due
106 | await self._check_consolidation()
107 |
108 | async def update_memory_modification(self, memory: Dict[str, Any]) -> Dict[str, Any]:
109 | """
110 | Update the modification time for a memory.
111 |
112 | Args:
113 | memory: The memory to update
114 |
115 | Returns:
116 | Updated memory
117 | """
118 | memory["last_modified"] = datetime.now().isoformat()
119 | return memory
120 |
121 | async def adjust_memory_relevance(
122 | self,
123 | memories: List[Dict[str, Any]],
124 | query: str
125 | ) -> List[Dict[str, Any]]:
126 | """
127 | Adjust memory relevance based on temporal factors.
128 |
129 | Args:
130 | memories: List of memories to adjust
131 | query: The query string
132 |
133 | Returns:
134 | Adjusted memories
135 | """
136 | # Weight configuration
137 | recency_weight = self.config["memory"].get("retrieval", {}).get("recency_weight", 0.3)
138 | importance_weight = self.config["memory"].get("retrieval", {}).get("importance_weight", 0.7)
139 |
140 | now = datetime.now()
141 | adjusted_memories = []
142 |
143 | for memory in memories:
144 | # Calculate recency score
145 | last_accessed_str = memory.get("last_accessed", memory.get("created_at"))
146 | try:
147 | last_accessed = datetime.fromisoformat(last_accessed_str)
148 | days_since_access = (now - last_accessed).days
149 |                 # Recency score: 1/(1 + days since access), e.g. 1.0 just accessed, 0.5 after 1 day, 0.25 after 3
150 | recency_score = 1.0 / (1.0 + days_since_access)
151 | except (ValueError, TypeError):
152 | recency_score = 0.5 # Default if timestamp is invalid
153 |
154 | # Get importance score
155 | importance_score = memory.get("importance", 0.5)
156 |
157 | # Get similarity score (from semantic search)
158 | similarity_score = memory.get("similarity", 0.5)
159 |
160 |                 # Combine scores (similarity gets the residual weight; keep recency_weight + importance_weight below 1.0 so similarity still contributes)
161 | combined_score = (
162 | similarity_score * (1.0 - recency_weight - importance_weight) +
163 | recency_score * recency_weight +
164 | importance_score * importance_weight
165 | )
166 |
167 | # Update memory with combined score
168 | memory["adjusted_score"] = combined_score
169 | memory["recency_score"] = recency_score
170 |
171 | adjusted_memories.append(memory)
172 |
173 | # Sort by combined score
174 | adjusted_memories.sort(key=lambda m: m["adjusted_score"], reverse=True)
175 |
176 | return adjusted_memories
177 |
178 | async def _check_consolidation(self) -> None:
179 | """Check if memory consolidation is due and run if needed."""
180 | now = datetime.now()
181 |
182 | # Check if enough time has passed since last consolidation
183 | if now - self.last_consolidation > self.consolidation_interval:
184 | logger.info("Running memory consolidation")
185 | await self._consolidate_memories()
186 |
187 | # Update last consolidation time
188 | self.last_consolidation = now
189 | await self.persistence_domain.set_metadata("last_consolidation", now.isoformat())
190 |
191 | async def _consolidate_memories(self) -> None:
192 | """
193 | Consolidate memories based on temporal patterns.
194 |
195 | This includes:
196 | - Moving old short-term memories to long-term
197 | - Archiving rarely accessed long-term memories
198 | - Adjusting importance scores based on access patterns
199 | """
200 | # Placeholder for consolidation logic
201 | logger.info("Memory consolidation not yet implemented")
202 |
203 | async def get_stats(self) -> Dict[str, Any]:
204 | """
205 | Get statistics about the temporal domain.
206 |
207 | Returns:
208 | Temporal domain statistics
209 | """
210 | return {
211 | "last_consolidation": self.last_consolidation.isoformat(),
212 | "next_consolidation": (self.last_consolidation + self.consolidation_interval).isoformat(),
213 | "status": "initialized"
214 | }
215 |
```
--------------------------------------------------------------------------------
/memory_mcp/mcp/server.py:
--------------------------------------------------------------------------------
```python
1 | """
2 | MCP server implementation for the memory system.
3 | """
4 |
5 | import json
6 | from typing import Any, Dict, List
8 |
9 | from loguru import logger
10 | from mcp.server import Server
11 | from mcp.server.stdio import stdio_server
12 |
13 | from memory_mcp.mcp.tools import MemoryToolDefinitions
14 | from memory_mcp.domains.manager import MemoryDomainManager
15 |
16 |
17 | class MemoryMcpServer:
18 | """
19 | MCP server implementation for the memory system.
20 |
21 | This class sets up an MCP server that exposes memory-related tools
22 | and handles MCP protocol communication with Claude Desktop.
23 | """
24 |
25 | def __init__(self, config: Dict[str, Any]) -> None:
26 | """
27 | Initialize the Memory MCP Server.
28 |
29 | Args:
30 | config: Configuration dictionary
31 | """
32 | self.config = config
33 | self.domain_manager = MemoryDomainManager(config)
34 | self.app = Server("memory-mcp-server")
35 | self.tool_definitions = MemoryToolDefinitions(self.domain_manager)
36 |
37 | # Register tools
38 | self._register_tools()
39 |
40 | def _register_tools(self) -> None:
41 | """Register memory-related tools with the MCP server."""
42 |
43 | # Store memory
44 | @self.app.tool(
45 | name="store_memory",
46 | description="Store new information in memory",
47 | schema=self.tool_definitions.store_memory_schema
48 | )
49 | async def store_memory_handler(arguments: Dict[str, Any]) -> List[Dict[str, Any]]:
50 | """Handle store_memory tool requests."""
51 | try:
52 | memory_id = await self.domain_manager.store_memory(
53 | memory_type=arguments["type"],
54 | content=arguments["content"],
55 | importance=arguments.get("importance", 0.5),
56 | metadata=arguments.get("metadata", {}),
57 | context=arguments.get("context", {})
58 | )
59 |
60 | return [{
61 | "type": "text",
62 | "text": json.dumps({
63 | "success": True,
64 | "memory_id": memory_id
65 | })
66 | }]
67 | except Exception as e:
68 | logger.error(f"Error in store_memory: {str(e)}")
69 | return [{
70 | "type": "text",
71 | "text": json.dumps({
72 | "success": False,
73 | "error": str(e)
74 | }),
75 | "is_error": True
76 | }]
77 |
78 | # Retrieve memory
79 | @self.app.tool(
80 | name="retrieve_memory",
81 | description="Retrieve relevant memories based on query",
82 | schema=self.tool_definitions.retrieve_memory_schema
83 | )
84 | async def retrieve_memory_handler(arguments: Dict[str, Any]) -> List[Dict[str, Any]]:
85 | """Handle retrieve_memory tool requests."""
86 | try:
87 | query = arguments["query"]
88 | limit = arguments.get("limit", 5)
89 | memory_types = arguments.get("types", None)
90 | min_similarity = arguments.get("min_similarity", 0.6)
91 | include_metadata = arguments.get("include_metadata", False)
92 |
93 | memories = await self.domain_manager.retrieve_memories(
94 | query=query,
95 | limit=limit,
96 | memory_types=memory_types,
97 | min_similarity=min_similarity,
98 | include_metadata=include_metadata
99 | )
100 |
101 | return [{
102 | "type": "text",
103 | "text": json.dumps({
104 | "success": True,
105 | "memories": memories
106 | })
107 | }]
108 | except Exception as e:
109 | logger.error(f"Error in retrieve_memory: {str(e)}")
110 | return [{
111 | "type": "text",
112 | "text": json.dumps({
113 | "success": False,
114 | "error": str(e)
115 | }),
116 | "is_error": True
117 | }]
118 |
119 | # List memories
120 | @self.app.tool(
121 | name="list_memories",
122 | description="List available memories with filtering options",
123 | schema=self.tool_definitions.list_memories_schema
124 | )
125 | async def list_memories_handler(arguments: Dict[str, Any]) -> List[Dict[str, Any]]:
126 | """Handle list_memories tool requests."""
127 | try:
128 | memory_types = arguments.get("types", None)
129 | limit = arguments.get("limit", 20)
130 | offset = arguments.get("offset", 0)
131 | tier = arguments.get("tier", None)
132 | include_content = arguments.get("include_content", False)
133 |
134 | memories = await self.domain_manager.list_memories(
135 | memory_types=memory_types,
136 | limit=limit,
137 | offset=offset,
138 | tier=tier,
139 | include_content=include_content
140 | )
141 |
142 | return [{
143 | "type": "text",
144 | "text": json.dumps({
145 | "success": True,
146 | "memories": memories
147 | })
148 | }]
149 | except Exception as e:
150 | logger.error(f"Error in list_memories: {str(e)}")
151 | return [{
152 | "type": "text",
153 | "text": json.dumps({
154 | "success": False,
155 | "error": str(e)
156 | }),
157 | "is_error": True
158 | }]
159 |
160 | # Update memory
161 | @self.app.tool(
162 | name="update_memory",
163 | description="Update existing memory entries",
164 | schema=self.tool_definitions.update_memory_schema
165 | )
166 | async def update_memory_handler(arguments: Dict[str, Any]) -> List[Dict[str, Any]]:
167 | """Handle update_memory tool requests."""
168 | try:
169 | memory_id = arguments["memory_id"]
170 | updates = arguments["updates"]
171 |
172 | success = await self.domain_manager.update_memory(
173 | memory_id=memory_id,
174 | updates=updates
175 | )
176 |
177 | return [{
178 | "type": "text",
179 | "text": json.dumps({
180 | "success": success
181 | })
182 | }]
183 | except Exception as e:
184 | logger.error(f"Error in update_memory: {str(e)}")
185 | return [{
186 | "type": "text",
187 | "text": json.dumps({
188 | "success": False,
189 | "error": str(e)
190 | }),
191 | "is_error": True
192 | }]
193 |
194 | # Delete memory
195 | @self.app.tool(
196 | name="delete_memory",
197 | description="Remove specific memories",
198 | schema=self.tool_definitions.delete_memory_schema
199 | )
200 | async def delete_memory_handler(arguments: Dict[str, Any]) -> List[Dict[str, Any]]:
201 | """Handle delete_memory tool requests."""
202 | try:
203 | memory_ids = arguments["memory_ids"]
204 |
205 | success = await self.domain_manager.delete_memories(
206 | memory_ids=memory_ids
207 | )
208 |
209 | return [{
210 | "type": "text",
211 | "text": json.dumps({
212 | "success": success
213 | })
214 | }]
215 | except Exception as e:
216 | logger.error(f"Error in delete_memory: {str(e)}")
217 | return [{
218 | "type": "text",
219 | "text": json.dumps({
220 | "success": False,
221 | "error": str(e)
222 | }),
223 | "is_error": True
224 | }]
225 |
226 | # Memory stats
227 | @self.app.tool(
228 | name="memory_stats",
229 | description="Get statistics about the memory store",
230 | schema=self.tool_definitions.memory_stats_schema
231 | )
232 | async def memory_stats_handler(arguments: Dict[str, Any]) -> List[Dict[str, Any]]:
233 | """Handle memory_stats tool requests."""
234 | try:
235 | stats = await self.domain_manager.get_memory_stats()
236 |
237 | return [{
238 | "type": "text",
239 | "text": json.dumps({
240 | "success": True,
241 | "stats": stats
242 | })
243 | }]
244 | except Exception as e:
245 | logger.error(f"Error in memory_stats: {str(e)}")
246 | return [{
247 | "type": "text",
248 | "text": json.dumps({
249 | "success": False,
250 | "error": str(e)
251 | }),
252 | "is_error": True
253 | }]
254 |
255 | async def start(self) -> None:
256 | """Start the MCP server."""
257 | # Initialize the memory domain manager
258 | await self.domain_manager.initialize()
259 |
260 | logger.info("Starting Memory MCP Server using stdio transport")
261 |
262 | # Start the server using stdio transport
263 | async with stdio_server() as streams:
264 | await self.app.run(
265 | streams[0],
266 | streams[1],
267 | self.app.create_initialization_options()
268 | )
269 |
```
--------------------------------------------------------------------------------
/memory_mcp/domains/manager.py:
--------------------------------------------------------------------------------
```python
1 | """
2 | Memory Domain Manager that orchestrates all memory operations.
3 | """
4 |
5 | import uuid
6 | from typing import Any, Dict, List, Optional
7 |
8 | from loguru import logger
9 |
10 | from memory_mcp.domains.episodic import EpisodicDomain
11 | from memory_mcp.domains.semantic import SemanticDomain
12 | from memory_mcp.domains.temporal import TemporalDomain
13 | from memory_mcp.domains.persistence import PersistenceDomain
14 |
15 |
16 | class MemoryDomainManager:
17 | """
18 | Orchestrates operations across all memory domains.
19 |
20 | This class coordinates interactions between the different functional domains
21 | of the memory system. It provides a unified interface for memory operations
22 | while delegating specific tasks to the appropriate domain.
23 | """
24 |
25 | def __init__(self, config: Dict[str, Any]) -> None:
26 | """
27 | Initialize the memory domain manager.
28 |
29 | Args:
30 | config: Configuration dictionary
31 | """
32 | self.config = config
33 |
34 | # Initialize domains
35 | self.persistence_domain = PersistenceDomain(config)
36 | self.episodic_domain = EpisodicDomain(config, self.persistence_domain)
37 | self.semantic_domain = SemanticDomain(config, self.persistence_domain)
38 | self.temporal_domain = TemporalDomain(config, self.persistence_domain)
39 |
40 | async def initialize(self) -> None:
41 | """Initialize all domains."""
42 | logger.info("Initializing Memory Domain Manager")
43 |
44 | # Initialize domains in order (persistence first)
45 | await self.persistence_domain.initialize()
46 | await self.episodic_domain.initialize()
47 | await self.semantic_domain.initialize()
48 | await self.temporal_domain.initialize()
49 |
50 | logger.info("Memory Domain Manager initialized")
51 |
52 | async def store_memory(
53 | self,
54 | memory_type: str,
55 | content: Dict[str, Any],
56 | importance: float = 0.5,
57 | metadata: Optional[Dict[str, Any]] = None,
58 | context: Optional[Dict[str, Any]] = None
59 | ) -> str:
60 | """
61 | Store a new memory.
62 |
63 | Args:
64 | memory_type: Type of memory (conversation, fact, document, entity, reflection, code)
65 | content: Memory content (type-specific structure)
66 | importance: Importance score (0.0-1.0)
67 | metadata: Additional metadata
68 | context: Contextual information
69 |
70 | Returns:
71 | Memory ID
72 | """
73 | # Generate a unique ID for the memory
74 | memory_id = f"mem_{str(uuid.uuid4())}"
75 |
76 | # Create memory object
77 | memory = {
78 | "id": memory_id,
79 | "type": memory_type,
80 | "content": content,
81 | "importance": importance,
82 | "metadata": metadata or {},
83 | "context": context or {}
84 | }
85 |
86 | # Add temporal information
87 | memory = await self.temporal_domain.process_new_memory(memory)
88 |
89 | # Process based on memory type
90 | if memory_type in ["conversation", "reflection"]:
91 | memory = await self.episodic_domain.process_memory(memory)
92 | elif memory_type in ["fact", "document", "entity"]:
93 | memory = await self.semantic_domain.process_memory(memory)
94 | elif memory_type == "code":
95 | # Code memories get processed by both domains
96 | memory = await self.episodic_domain.process_memory(memory)
97 | memory = await self.semantic_domain.process_memory(memory)
98 |
99 |         # Determine initial memory tier based on importance
100 | tier = "short_term"
101 | if importance < self.config["memory"].get("short_term_threshold", 0.3):
102 | tier = "long_term"
103 |
104 | # Store the memory
105 | await self.persistence_domain.store_memory(memory, tier)
106 |
107 | logger.info(f"Stored {memory_type} memory with ID {memory_id} in {tier} tier")
108 |
109 | return memory_id
110 |
111 | async def retrieve_memories(
112 | self,
113 | query: str,
114 | limit: int = 5,
115 | memory_types: Optional[List[str]] = None,
116 | min_similarity: float = 0.6,
117 | include_metadata: bool = False
118 | ) -> List[Dict[str, Any]]:
119 | """
120 | Retrieve memories based on a query.
121 |
122 | Args:
123 | query: Query string
124 | limit: Maximum number of memories to retrieve
125 | memory_types: Types of memories to include (None for all types)
126 | min_similarity: Minimum similarity score for results
127 | include_metadata: Whether to include metadata in the results
128 |
129 | Returns:
130 | List of relevant memories
131 | """
132 | # Generate query embedding
133 | embedding = await self.persistence_domain.generate_embedding(query)
134 |
135 | # Retrieve memories using semantic search
136 | memories = await self.persistence_domain.search_memories(
137 | embedding=embedding,
138 | limit=limit,
139 | types=memory_types,
140 | min_similarity=min_similarity
141 | )
142 |
143 | # Apply temporal adjustments to relevance
144 | memories = await self.temporal_domain.adjust_memory_relevance(memories, query)
145 |
146 | # Format results
147 | result_memories = []
148 | for memory in memories:
149 | result_memory = {
150 | "id": memory["id"],
151 | "type": memory["type"],
152 | "content": memory["content"],
153 | "similarity": memory.get("similarity", 0.0)
154 | }
155 |
156 | # Include metadata if requested
157 | if include_metadata:
158 | result_memory["metadata"] = memory.get("metadata", {})
159 | result_memory["created_at"] = memory.get("created_at")
160 | result_memory["last_accessed"] = memory.get("last_accessed")
161 | result_memory["importance"] = memory.get("importance", 0.5)
162 |
163 | result_memories.append(result_memory)
164 |
165 | # Update access time for retrieved memories
166 | for memory in memories:
167 | await self.temporal_domain.update_memory_access(memory["id"])
168 |
169 | return result_memories
170 |
171 | async def list_memories(
172 | self,
173 | memory_types: Optional[List[str]] = None,
174 | limit: int = 20,
175 | offset: int = 0,
176 | tier: Optional[str] = None,
177 | include_content: bool = False
178 | ) -> List[Dict[str, Any]]:
179 | """
180 | List available memories with filtering options.
181 |
182 | Args:
183 | memory_types: Types of memories to include (None for all types)
184 | limit: Maximum number of memories to retrieve
185 | offset: Offset for pagination
186 | tier: Memory tier to retrieve from (None for all tiers)
187 | include_content: Whether to include memory content in the results
188 |
189 | Returns:
190 | List of memories
191 | """
192 | # Retrieve memories from persistence domain
193 | memories = await self.persistence_domain.list_memories(
194 | types=memory_types,
195 | limit=limit,
196 | offset=offset,
197 | tier=tier
198 | )
199 |
200 | # Format results
201 | result_memories = []
202 | for memory in memories:
203 | result_memory = {
204 | "id": memory["id"],
205 | "type": memory["type"],
206 | "created_at": memory.get("created_at"),
207 | "last_accessed": memory.get("last_accessed"),
208 | "importance": memory.get("importance", 0.5),
209 | "tier": memory.get("tier", "short_term")
210 | }
211 |
212 | # Include content if requested
213 | if include_content:
214 | result_memory["content"] = memory["content"]
215 |
216 | result_memories.append(result_memory)
217 |
218 | return result_memories
219 |
220 | async def update_memory(
221 | self,
222 | memory_id: str,
223 | updates: Dict[str, Any]
224 | ) -> bool:
225 | """
226 | Update an existing memory.
227 |
228 | Args:
229 | memory_id: ID of the memory to update
230 | updates: Updates to apply to the memory
231 |
232 | Returns:
233 | Success flag
234 | """
235 | # Retrieve the memory
236 | memory = await self.persistence_domain.get_memory(memory_id)
237 | if not memory:
238 | logger.error(f"Memory {memory_id} not found")
239 | return False
240 |
241 | # Apply updates
242 | if "content" in updates:
243 | memory["content"] = updates["content"]
244 |
245 | # Re-process embedding if content changes
246 | if memory["type"] in ["conversation", "reflection"]:
247 | memory = await self.episodic_domain.process_memory(memory)
248 | elif memory["type"] in ["fact", "document", "entity"]:
249 | memory = await self.semantic_domain.process_memory(memory)
250 | elif memory["type"] == "code":
251 | memory = await self.episodic_domain.process_memory(memory)
252 | memory = await self.semantic_domain.process_memory(memory)
253 |
254 | if "importance" in updates:
255 | memory["importance"] = updates["importance"]
256 |
257 | if "metadata" in updates:
258 | memory["metadata"].update(updates["metadata"])
259 |
260 | if "context" in updates:
261 | memory["context"].update(updates["context"])
262 |
263 | # Update last_modified timestamp
264 | memory = await self.temporal_domain.update_memory_modification(memory)
265 |
266 | # Determine if memory tier should change based on updates
267 | current_tier = await self.persistence_domain.get_memory_tier(memory_id)
268 | new_tier = current_tier
269 |
270 | if "importance" in updates:
271 | if updates["importance"] >= self.config["memory"].get("short_term_threshold", 0.3) and current_tier != "short_term":
272 | new_tier = "short_term"
273 | elif updates["importance"] < self.config["memory"].get("short_term_threshold", 0.3) and current_tier == "short_term":
274 | new_tier = "long_term"
275 |
276 | # Store the updated memory
277 | await self.persistence_domain.update_memory(memory, new_tier)
278 |
279 | logger.info(f"Updated memory {memory_id}")
280 |
281 | return True
282 |
283 | async def delete_memories(
284 | self,
285 | memory_ids: List[str]
286 | ) -> bool:
287 | """
288 | Delete memories.
289 |
290 | Args:
291 | memory_ids: IDs of memories to delete
292 |
293 | Returns:
294 | Success flag
295 | """
296 | success = await self.persistence_domain.delete_memories(memory_ids)
297 |
298 | if success:
299 | logger.info(f"Deleted {len(memory_ids)} memories")
300 | else:
301 |             logger.error("Failed to delete memories")
302 |
303 | return success
304 |
305 | async def get_memory_stats(self) -> Dict[str, Any]:
306 | """
307 | Get statistics about the memory store.
308 |
309 | Returns:
310 | Memory statistics
311 | """
312 | # Get basic stats from persistence domain
313 | stats = await self.persistence_domain.get_memory_stats()
314 |
315 | # Enrich with domain-specific stats
316 | episodic_stats = await self.episodic_domain.get_stats()
317 | semantic_stats = await self.semantic_domain.get_stats()
318 | temporal_stats = await self.temporal_domain.get_stats()
319 |
320 | stats.update({
321 | "episodic_domain": episodic_stats,
322 | "semantic_domain": semantic_stats,
323 | "temporal_domain": temporal_stats
324 | })
325 |
326 | return stats
327 |
```
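The importance-based tier rule in `update_memory` above can be sketched in isolation. This is a minimal illustration, not part of the module's API; the `resolve_tier` helper and the hard-coded `0.3` threshold (the manager's config default for `short_term_threshold`) are assumptions for the sketch:

```python
SHORT_TERM_THRESHOLD = 0.3  # mirrors the manager's config default


def resolve_tier(current_tier: str, new_importance: float) -> str:
    """Return the tier a memory should occupy after an importance update."""
    # At or above the threshold: promote anything not already short-term.
    if new_importance >= SHORT_TERM_THRESHOLD and current_tier != "short_term":
        return "short_term"
    # Below the threshold: demote short-term memories to long-term.
    if new_importance < SHORT_TERM_THRESHOLD and current_tier == "short_term":
        return "long_term"
    # Otherwise the tier is unchanged.
    return current_tier


print(resolve_tier("long_term", 0.8))   # promoted to short_term
print(resolve_tier("short_term", 0.1))  # demoted to long_term
print(resolve_tier("archived", 0.1))    # unchanged
```

Note that an archived memory is never promoted by an importance update alone under this rule; only the short-term/long-term boundary moves.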
--------------------------------------------------------------------------------
/memory_mcp/domains/persistence.py:
--------------------------------------------------------------------------------
```python
1 | """
2 | Persistence Domain for storage and retrieval of memories.
3 |
4 | The Persistence Domain is responsible for:
5 | - File system operations
6 | - Vector embedding generation and storage
7 | - Index management
8 | - Memory file structure
9 | - Backup and recovery
10 | - Efficient storage formats
11 | """
12 |
13 | import os
14 | import json
15 | import time
16 | from datetime import datetime
17 | from pathlib import Path
18 | from typing import Any, Dict, List, Optional, Tuple, Union
19 |
20 | import numpy as np
21 | from loguru import logger
22 | from sentence_transformers import SentenceTransformer
23 |
24 |
25 | class PersistenceDomain:
26 | """
27 | Manages the storage and retrieval of memories.
28 |
29 | This domain handles file operations, embedding generation,
30 | and index management for the memory system.
31 | """
32 |
33 | def __init__(self, config: Dict[str, Any]) -> None:
34 | """
35 | Initialize the persistence domain.
36 |
37 | Args:
38 | config: Configuration dictionary
39 | """
40 | self.config = config
41 |         self.memory_file_path = self.config.get("memory", {}).get("file_path", "memory.json")
42 |         self.embedding_model_name = self.config.get("embedding", {}).get("default_model", "sentence-transformers/all-MiniLM-L6-v2")
43 |         self.embedding_dimensions = self.config.get("embedding", {}).get("dimensions", 384)
44 |
45 | # Will be initialized during initialize()
46 | self.embedding_model = None
47 | self.memory_data = None
48 |
49 | async def initialize(self) -> None:
50 | """Initialize the persistence domain."""
51 | logger.info("Initializing Persistence Domain")
52 | logger.info(f"Using memory file: {self.memory_file_path}")
53 |
54 |         # Create memory file directory if it doesn't exist (abspath guards against a bare filename)
55 |         os.makedirs(os.path.dirname(os.path.abspath(self.memory_file_path)), exist_ok=True)
56 |
57 | # Load memory file or create if it doesn't exist
58 | self.memory_data = await self._load_memory_file()
59 |
60 | # Initialize embedding model
61 | logger.info(f"Loading embedding model: {self.embedding_model_name}")
62 | self.embedding_model = SentenceTransformer(self.embedding_model_name)
63 |
64 | logger.info("Persistence Domain initialized")
65 |
66 | async def generate_embedding(self, text: str) -> List[float]:
67 | """
68 | Generate an embedding vector for text.
69 |
70 | Args:
71 | text: Text to embed
72 |
73 | Returns:
74 | Embedding vector as a list of floats
75 | """
76 | if not self.embedding_model:
77 | raise RuntimeError("Embedding model not initialized")
78 |
79 | # Generate embedding
80 | embedding = self.embedding_model.encode(text)
81 |
82 | # Convert to list of floats for JSON serialization
83 | return embedding.tolist()
84 |
85 | async def store_memory(self, memory: Dict[str, Any], tier: str = "short_term") -> None:
86 | """
87 | Store a memory.
88 |
89 | Args:
90 | memory: Memory to store
91 | tier: Memory tier (short_term, long_term, archived)
92 | """
93 | # Ensure memory has all required fields
94 | if "id" not in memory:
95 | raise ValueError("Memory must have an ID")
96 |
97 | # Add to appropriate tier
98 | valid_tiers = ["short_term", "long_term", "archived"]
99 | if tier not in valid_tiers:
100 | raise ValueError(f"Invalid tier: {tier}. Must be one of {valid_tiers}")
101 |
102 | tier_key = f"{tier}_memory"
103 | if tier_key not in self.memory_data:
104 | self.memory_data[tier_key] = []
105 |
106 | # Check for existing memory with same ID
107 | existing_index = None
108 | for i, existing_memory in enumerate(self.memory_data[tier_key]):
109 | if existing_memory.get("id") == memory["id"]:
110 | existing_index = i
111 | break
112 |
113 | if existing_index is not None:
114 | # Update existing memory
115 | self.memory_data[tier_key][existing_index] = memory
116 | else:
117 | # Add new memory
118 | self.memory_data[tier_key].append(memory)
119 |
120 | # Update memory index if embedding exists
121 | if "embedding" in memory:
122 | await self._update_memory_index(memory, tier)
123 |
124 | # Update memory stats
125 | self._update_memory_stats()
126 |
127 | # Save memory file
128 | await self._save_memory_file()
129 |
130 | async def get_memory(self, memory_id: str) -> Optional[Dict[str, Any]]:
131 | """
132 | Get a memory by ID.
133 |
134 | Args:
135 | memory_id: Memory ID
136 |
137 | Returns:
138 | Memory dict or None if not found
139 | """
140 | # Check all tiers
141 | for tier in ["short_term_memory", "long_term_memory", "archived_memory"]:
142 | if tier not in self.memory_data:
143 | continue
144 |
145 | for memory in self.memory_data[tier]:
146 | if memory.get("id") == memory_id:
147 | return memory
148 |
149 | return None
150 |
151 | async def get_memory_tier(self, memory_id: str) -> Optional[str]:
152 | """
153 | Get the tier of a memory.
154 |
155 | Args:
156 | memory_id: Memory ID
157 |
158 | Returns:
159 | Memory tier or None if not found
160 | """
161 | # Check all tiers
162 | for tier_key in ["short_term_memory", "long_term_memory", "archived_memory"]:
163 | if tier_key not in self.memory_data:
164 | continue
165 |
166 | for memory in self.memory_data[tier_key]:
167 | if memory.get("id") == memory_id:
168 | # Convert tier_key to tier name
169 | return tier_key.replace("_memory", "")
170 |
171 | return None
172 |
173 | async def update_memory(self, memory: Dict[str, Any], tier: str) -> None:
174 | """
175 | Update an existing memory.
176 |
177 | Args:
178 | memory: Updated memory dict
179 | tier: Memory tier
180 | """
181 | # Get current tier
182 | current_tier = await self.get_memory_tier(memory["id"])
183 |
184 | if current_tier is None:
185 | # Memory doesn't exist, store as new
186 | await self.store_memory(memory, tier)
187 | return
188 |
189 | if current_tier == tier:
190 | # Same tier, just update the memory
191 | tier_key = f"{tier}_memory"
192 | for i, existing_memory in enumerate(self.memory_data[tier_key]):
193 | if existing_memory.get("id") == memory["id"]:
194 | self.memory_data[tier_key][i] = memory
195 | break
196 |
197 | # Update memory index if embedding exists
198 | if "embedding" in memory:
199 | await self._update_memory_index(memory, tier)
200 |
201 | # Save memory file
202 | await self._save_memory_file()
203 | else:
204 | # Different tier, remove from old tier and add to new tier
205 | old_tier_key = f"{current_tier}_memory"
206 |
207 | # Remove from old tier
208 | self.memory_data[old_tier_key] = [
209 | m for m in self.memory_data[old_tier_key]
210 | if m.get("id") != memory["id"]
211 | ]
212 |
213 | # Add to new tier
214 | await self.store_memory(memory, tier)
215 |
216 | async def delete_memories(self, memory_ids: List[str]) -> bool:
217 | """
218 | Delete memories.
219 |
220 | Args:
221 | memory_ids: List of memory IDs to delete
222 |
223 | Returns:
224 | Success flag
225 | """
226 | deleted_count = 0
227 |
228 | # Check all tiers
229 | for tier_key in ["short_term_memory", "long_term_memory", "archived_memory"]:
230 | if tier_key not in self.memory_data:
231 | continue
232 |
233 | # Filter out memories to delete
234 | original_count = len(self.memory_data[tier_key])
235 | self.memory_data[tier_key] = [
236 | memory for memory in self.memory_data[tier_key]
237 | if memory.get("id") not in memory_ids
238 | ]
239 | deleted_count += original_count - len(self.memory_data[tier_key])
240 |
241 | # Update memory index
242 | for memory_id in memory_ids:
243 | await self._remove_from_memory_index(memory_id)
244 |
245 | # Update memory stats
246 | self._update_memory_stats()
247 |
248 | # Save memory file
249 | await self._save_memory_file()
250 |
251 | return deleted_count > 0
252 |
253 | async def search_memories(
254 | self,
255 | embedding: List[float],
256 | limit: int = 5,
257 | types: Optional[List[str]] = None,
258 | min_similarity: float = 0.6
259 | ) -> List[Dict[str, Any]]:
260 | """
261 | Search for memories using vector similarity.
262 |
263 | Args:
264 | embedding: Query embedding vector
265 | limit: Maximum number of results
266 | types: Memory types to include (None for all)
267 | min_similarity: Minimum similarity score
268 |
269 | Returns:
270 | List of matching memories with similarity scores
271 | """
272 | # Convert embedding to numpy array
273 | query_embedding = np.array(embedding)
274 |
275 | # Get all memories with embeddings
276 | memories_with_embeddings = []
277 |
278 | for tier_key in ["short_term_memory", "long_term_memory", "archived_memory"]:
279 | if tier_key not in self.memory_data:
280 | continue
281 |
282 | for memory in self.memory_data[tier_key]:
283 | if "embedding" in memory:
284 | # Filter by type if specified
285 | if types and memory.get("type") not in types:
286 | continue
287 |
288 | memories_with_embeddings.append(memory)
289 |
290 | # Calculate similarities
291 | results_with_scores = []
292 |
293 | for memory in memories_with_embeddings:
294 | memory_embedding = np.array(memory["embedding"])
295 |
296 | # Calculate cosine similarity
297 | similarity = self._cosine_similarity(query_embedding, memory_embedding)
298 |
299 | if similarity >= min_similarity:
300 | # Create a copy to avoid modifying the original
301 | result = memory.copy()
302 | result["similarity"] = float(similarity)
303 | results_with_scores.append(result)
304 |
305 | # Sort by similarity
306 | results_with_scores.sort(key=lambda x: x["similarity"], reverse=True)
307 |
308 | # Limit results
309 | return results_with_scores[:limit]
310 |
311 | async def list_memories(
312 | self,
313 | types: Optional[List[str]] = None,
314 | limit: int = 20,
315 | offset: int = 0,
316 | tier: Optional[str] = None
317 | ) -> List[Dict[str, Any]]:
318 | """
319 | List memories with filtering options.
320 |
321 | Args:
322 | types: Memory types to include (None for all)
323 | limit: Maximum number of memories to return
324 | offset: Offset for pagination
325 | tier: Memory tier to filter by (None for all)
326 |
327 | Returns:
328 | List of memories
329 | """
330 | all_memories = []
331 |
332 | # Determine which tiers to include
333 | tiers_to_include = []
334 | if tier:
335 | tiers_to_include = [f"{tier}_memory"]
336 | else:
337 | tiers_to_include = ["short_term_memory", "long_term_memory", "archived_memory"]
338 |
339 | # Collect memories from selected tiers
340 | for tier_key in tiers_to_include:
341 | if tier_key not in self.memory_data:
342 | continue
343 |
344 | for memory in self.memory_data[tier_key]:
345 | # Filter by type if specified
346 | if types and memory.get("type") not in types:
347 | continue
348 |
349 | # Add tier info
350 | memory_copy = memory.copy()
351 | memory_copy["tier"] = tier_key.replace("_memory", "")
352 | all_memories.append(memory_copy)
353 |
354 | # Sort by creation time (newest first)
355 | all_memories.sort(
356 | key=lambda m: m.get("created_at", ""),
357 | reverse=True
358 | )
359 |
360 | # Apply pagination
361 | paginated_memories = all_memories[offset:offset+limit]
362 |
363 | return paginated_memories
364 |
365 | async def get_metadata(self, key: str) -> Optional[str]:
366 | """
367 | Get metadata value.
368 |
369 | Args:
370 | key: Metadata key
371 |
372 | Returns:
373 | Metadata value or None if not found
374 | """
375 | metadata = self.memory_data.get("metadata", {})
376 | return metadata.get(key)
377 |
378 | async def set_metadata(self, key: str, value: str) -> None:
379 | """
380 | Set metadata value.
381 |
382 | Args:
383 | key: Metadata key
384 | value: Metadata value
385 | """
386 | if "metadata" not in self.memory_data:
387 | self.memory_data["metadata"] = {}
388 |
389 | self.memory_data["metadata"][key] = value
390 |
391 | # Save memory file
392 | await self._save_memory_file()
393 |
394 | async def get_memory_stats(self) -> Dict[str, Any]:
395 | """
396 | Get memory statistics.
397 |
398 | Returns:
399 | Memory statistics
400 | """
401 | return self.memory_data.get("metadata", {}).get("memory_stats", {})
402 |
403 | async def _load_memory_file(self) -> Dict[str, Any]:
404 | """
405 | Load the memory file.
406 |
407 | Returns:
408 | Memory data
409 | """
410 | if not os.path.exists(self.memory_file_path):
411 | logger.info(f"Memory file not found, creating new file: {self.memory_file_path}")
412 | return self._create_empty_memory_file()
413 |
414 | try:
415 | with open(self.memory_file_path, "r") as f:
416 | data = json.load(f)
417 | logger.info(f"Loaded memory file with {self._count_memories(data)} memories")
418 | return data
419 | except json.JSONDecodeError:
420 | logger.error(f"Error parsing memory file: {self.memory_file_path}")
421 | logger.info("Creating new memory file")
422 | return self._create_empty_memory_file()
423 |
424 | def _create_empty_memory_file(self) -> Dict[str, Any]:
425 | """
426 | Create an empty memory file structure.
427 |
428 | Returns:
429 | Empty memory data
430 | """
431 | return {
432 | "metadata": {
433 | "version": "1.0",
434 | "created_at": datetime.now().isoformat(),
435 | "updated_at": datetime.now().isoformat(),
436 | "memory_stats": {
437 | "total_memories": 0,
438 | "active_memories": 0,
439 | "archived_memories": 0
440 | }
441 | },
442 | "memory_index": {
443 | "index_type": "hnsw",
444 | "index_parameters": {
445 | "m": 16,
446 | "ef_construction": 200,
447 | "ef": 50
448 | },
449 | "entries": {}
450 | },
451 | "short_term_memory": [],
452 | "long_term_memory": [],
453 | "archived_memory": [],
454 | "memory_schema": {
455 | "conversation": {
456 | "required_fields": ["role", "message"],
457 | "optional_fields": ["summary", "entities", "sentiment", "intent"]
458 | },
459 | "fact": {
460 | "required_fields": ["fact", "confidence"],
461 | "optional_fields": ["domain", "entities", "references"]
462 | },
463 | "document": {
464 | "required_fields": ["title", "text"],
465 | "optional_fields": ["summary", "chunks", "metadata"]
466 | },
467 | "code": {
468 | "required_fields": ["language", "code"],
469 | "optional_fields": ["description", "purpose", "dependencies"]
470 | }
471 | },
472 | "config": {
473 | "memory_management": {
474 | "max_short_term_memories": 100,
475 | "max_long_term_memories": 10000,
476 | "archival_threshold_days": 30,
477 | "deletion_threshold_days": 365,
478 | "importance_decay_rate": 0.01,
479 | "minimum_importance_threshold": 0.2
480 | },
481 | "retrieval": {
482 | "default_top_k": 5,
483 | "semantic_threshold": 0.75,
484 | "recency_weight": 0.3,
485 | "importance_weight": 0.7
486 | },
487 | "embedding": {
488 | "default_model": self.embedding_model_name,
489 | "dimensions": self.embedding_dimensions,
490 | "batch_size": 8
491 | }
492 | }
493 | }
494 |
495 | async def _save_memory_file(self) -> None:
496 | """Save the memory file."""
497 | # Update metadata
498 | self.memory_data["metadata"]["updated_at"] = datetime.now().isoformat()
499 |
500 | # Create temp file
501 | temp_file = f"{self.memory_file_path}.tmp"
502 |
503 | try:
504 | with open(temp_file, "w") as f:
505 | json.dump(self.memory_data, f, indent=2)
506 |
507 | # Rename temp file to actual file (atomic operation)
508 | os.replace(temp_file, self.memory_file_path)
509 | logger.debug(f"Memory file saved: {self.memory_file_path}")
510 | except Exception as e:
511 | logger.error(f"Error saving memory file: {str(e)}")
512 | # Clean up temp file if it exists
513 | if os.path.exists(temp_file):
514 | os.remove(temp_file)
515 |
516 | def _count_memories(self, data: Dict[str, Any]) -> int:
517 | """
518 | Count the total number of memories.
519 |
520 | Args:
521 | data: Memory data
522 |
523 | Returns:
524 | Total number of memories
525 | """
526 | count = 0
527 | for tier in ["short_term_memory", "long_term_memory", "archived_memory"]:
528 | if tier in data:
529 | count += len(data[tier])
530 | return count
531 |
532 | def _update_memory_stats(self) -> None:
533 | """Update memory statistics."""
534 | # Initialize stats if not present
535 | if "metadata" not in self.memory_data:
536 | self.memory_data["metadata"] = {}
537 |
538 | if "memory_stats" not in self.memory_data["metadata"]:
539 | self.memory_data["metadata"]["memory_stats"] = {}
540 |
541 | # Count memories in each tier
542 | short_term_count = len(self.memory_data.get("short_term_memory", []))
543 | long_term_count = len(self.memory_data.get("long_term_memory", []))
544 | archived_count = len(self.memory_data.get("archived_memory", []))
545 |
546 | # Update stats
547 | stats = self.memory_data["metadata"]["memory_stats"]
548 | stats["total_memories"] = short_term_count + long_term_count + archived_count
549 | stats["active_memories"] = short_term_count + long_term_count
550 | stats["archived_memories"] = archived_count
551 | stats["short_term_count"] = short_term_count
552 | stats["long_term_count"] = long_term_count
553 |
554 | async def _update_memory_index(self, memory: Dict[str, Any], tier: str) -> None:
555 | """
556 | Update the memory index.
557 |
558 | Args:
559 | memory: Memory to index
560 | tier: Memory tier
561 | """
562 | if "memory_index" not in self.memory_data:
563 | self.memory_data["memory_index"] = {
564 | "index_type": "hnsw",
565 | "index_parameters": {
566 | "m": 16,
567 | "ef_construction": 200,
568 | "ef": 50
569 | },
570 | "entries": {}
571 | }
572 |
573 | if "entries" not in self.memory_data["memory_index"]:
574 | self.memory_data["memory_index"]["entries"] = {}
575 |
576 | # Add to index
577 | memory_id = memory["id"]
578 |
579 | self.memory_data["memory_index"]["entries"][memory_id] = {
580 | "tier": tier,
581 | "type": memory.get("type", "unknown"),
582 | "importance": memory.get("importance", 0.5),
583 | "recency": memory.get("created_at", datetime.now().isoformat())
584 | }
585 |
586 | async def _remove_from_memory_index(self, memory_id: str) -> None:
587 | """
588 | Remove a memory from the index.
589 |
590 | Args:
591 | memory_id: Memory ID
592 | """
593 | if "memory_index" not in self.memory_data or "entries" not in self.memory_data["memory_index"]:
594 | return
595 |
596 | if memory_id in self.memory_data["memory_index"]["entries"]:
597 | del self.memory_data["memory_index"]["entries"][memory_id]
598 |
599 | def _cosine_similarity(self, a: np.ndarray, b: np.ndarray) -> float:
600 | """
601 | Calculate cosine similarity between two vectors.
602 |
603 | Args:
604 | a: First vector
605 | b: Second vector
606 |
607 | Returns:
608 |             Cosine similarity (-1.0 to 1.0; 0.0 for zero-norm input)
609 | """
610 | norm_a = np.linalg.norm(a)
611 | norm_b = np.linalg.norm(b)
612 |
613 | if norm_a == 0 or norm_b == 0:
614 | return 0.0
615 |
616 | return float(np.dot(a, b) / (norm_a * norm_b))
617 |
```
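The similarity search in `search_memories` reduces to the cosine computation plus a threshold cut-off and a best-first sort. A minimal sketch of that behavior, using toy 3-dimensional vectors in place of the 384-dimensional MiniLM embeddings the domain actually stores (the variable names and the sample vectors are illustrative, not from the module):

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity, mirroring PersistenceDomain._cosine_similarity."""
    norm_a = np.linalg.norm(a)
    norm_b = np.linalg.norm(b)
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return float(np.dot(a, b) / (norm_a * norm_b))


# Toy "memories" keyed by ID, each with a small embedding.
query = np.array([1.0, 0.0, 0.0])
memories = {
    "m1": np.array([1.0, 0.1, 0.0]),  # near-duplicate of the query
    "m2": np.array([0.0, 1.0, 0.0]),  # orthogonal -> similarity 0
    "m3": np.array([0.9, 0.5, 0.0]),  # partially related
}

# Score everything, sort best-first, then apply the min_similarity
# cut-off (0.6 is the default in search_memories).
hits = sorted(
    ((mid, cosine_similarity(query, emb)) for mid, emb in memories.items()),
    key=lambda kv: kv[1],
    reverse=True,
)
hits = [(mid, score) for mid, score in hits if score >= 0.6]
print(hits)  # m1 and m3 survive the cut-off; m2 is filtered out
```

Because every stored embedding is scanned linearly, the cost grows with the total memory count; the `hnsw` parameters recorded in `memory_index` describe an intended approximate index, but the search shown here does not yet use one.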