This is page 1 of 19. Use http://codebase.md/basicmachines-co/basic-memory?page={x} to view the full context.

# Directory Structure

```
├── .claude
│   ├── commands
│   │   ├── release
│   │   │   ├── beta.md
│   │   │   ├── changelog.md
│   │   │   ├── release-check.md
│   │   │   └── release.md
│   │   ├── spec.md
│   │   └── test-live.md
│   └── settings.json
├── .dockerignore
├── .env.example
├── .github
│   ├── dependabot.yml
│   ├── ISSUE_TEMPLATE
│   │   ├── bug_report.md
│   │   ├── config.yml
│   │   ├── documentation.md
│   │   └── feature_request.md
│   └── workflows
│       ├── claude-code-review.yml
│       ├── claude-issue-triage.yml
│       ├── claude.yml
│       ├── dev-release.yml
│       ├── docker.yml
│       ├── pr-title.yml
│       ├── release.yml
│       └── test.yml
├── .gitignore
├── .python-version
├── CHANGELOG.md
├── CITATION.cff
├── CLA.md
├── CLAUDE.md
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── docker-compose-postgres.yml
├── docker-compose.yml
├── Dockerfile
├── docs
│   ├── ai-assistant-guide-extended.md
│   ├── ARCHITECTURE.md
│   ├── character-handling.md
│   ├── cloud-cli.md
│   ├── Docker.md
│   └── testing-coverage.md
├── justfile
├── LICENSE
├── llms-install.md
├── pyproject.toml
├── README.md
├── SECURITY.md
├── smithery.yaml
├── specs
│   ├── SPEC-1 Specification-Driven Development Process.md
│   ├── SPEC-10 Unified Deployment Workflow and Event Tracking.md
│   ├── SPEC-11 Basic Memory API Performance Optimization.md
│   ├── SPEC-12 OpenTelemetry Observability.md
│   ├── SPEC-13 CLI Authentication with Subscription Validation.md
│   ├── SPEC-14 Cloud Git Versioning & GitHub Backup.md
│   ├── SPEC-14- Cloud Git Versioning & GitHub Backup.md
│   ├── SPEC-15 Configuration Persistence via Tigris for Cloud Tenants.md
│   ├── SPEC-16 MCP Cloud Service Consolidation.md
│   ├── SPEC-17 Semantic Search with ChromaDB.md
│   ├── SPEC-18 AI Memory Management Tool.md
│   ├── SPEC-19 Sync Performance and Memory Optimization.md
│   ├── SPEC-2 Slash Commands Reference.md
│   ├── SPEC-20 Simplified Project-Scoped Rclone Sync.md
│   ├── SPEC-3 Agent Definitions.md
│   ├── SPEC-4 Notes Web UI Component Architecture.md
│   ├── SPEC-5 CLI Cloud Upload via WebDAV.md
│   ├── SPEC-6 Explicit Project Parameter Architecture.md
│   ├── SPEC-7 POC to spike Tigris Turso for local access to cloud data.md
│   ├── SPEC-8 TigrisFS Integration.md
│   ├── SPEC-9 Multi-Project Bidirectional Sync Architecture.md
│   ├── SPEC-9 Signed Header Tenant Information.md
│   └── SPEC-9-1 Follow-Ups- Conflict, Sync, and Observability.md
├── src
│   └── basic_memory
│       ├── __init__.py
│       ├── alembic
│       │   ├── alembic.ini
│       │   ├── env.py
│       │   ├── migrations.py
│       │   ├── script.py.mako
│       │   └── versions
│       │       ├── 314f1ea54dc4_add_postgres_full_text_search_support_.py
│       │       ├── 3dae7c7b1564_initial_schema.py
│       │       ├── 502b60eaa905_remove_required_from_entity_permalink.py
│       │       ├── 5fe1ab1ccebe_add_projects_table.py
│       │       ├── 647e7a75e2cd_project_constraint_fix.py
│       │       ├── 6830751f5fb6_merge_multiple_heads.py
│       │       ├── 9d9c1cb7d8f5_add_mtime_and_size_columns_to_entity_.py
│       │       ├── a1b2c3d4e5f6_fix_project_foreign_keys.py
│       │       ├── a2b3c4d5e6f7_add_search_index_entity_cascade.py
│       │       ├── b3c3938bacdb_relation_to_name_unique_index.py
│       │       ├── cc7172b46608_update_search_index_schema.py
│       │       ├── e7e1f4367280_add_scan_watermark_tracking_to_project.py
│       │       ├── f8a9b2c3d4e5_add_pg_trgm_for_fuzzy_link_resolution.py
│       │       └── g9a0b3c4d5e6_add_external_id_to_project_and_entity.py
│       ├── api
│       │   ├── __init__.py
│       │   ├── app.py
│       │   ├── container.py
│       │   ├── routers
│       │   │   ├── __init__.py
│       │   │   ├── directory_router.py
│       │   │   ├── importer_router.py
│       │   │   ├── knowledge_router.py
│       │   │   ├── management_router.py
│       │   │   ├── memory_router.py
│       │   │   ├── project_router.py
│       │   │   ├── prompt_router.py
│       │   │   ├── resource_router.py
│       │   │   ├── search_router.py
│       │   │   └── utils.py
│       │   ├── template_loader.py
│       │   └── v2
│       │       ├── __init__.py
│       │       └── routers
│       │           ├── __init__.py
│       │           ├── directory_router.py
│       │           ├── importer_router.py
│       │           ├── knowledge_router.py
│       │           ├── memory_router.py
│       │           ├── project_router.py
│       │           ├── prompt_router.py
│       │           ├── resource_router.py
│       │           └── search_router.py
│       ├── cli
│       │   ├── __init__.py
│       │   ├── app.py
│       │   ├── auth.py
│       │   ├── commands
│       │   │   ├── __init__.py
│       │   │   ├── cloud
│       │   │   │   ├── __init__.py
│       │   │   │   ├── api_client.py
│       │   │   │   ├── bisync_commands.py
│       │   │   │   ├── cloud_utils.py
│       │   │   │   ├── core_commands.py
│       │   │   │   ├── rclone_commands.py
│       │   │   │   ├── rclone_config.py
│       │   │   │   ├── rclone_installer.py
│       │   │   │   ├── upload_command.py
│       │   │   │   └── upload.py
│       │   │   ├── command_utils.py
│       │   │   ├── db.py
│       │   │   ├── format.py
│       │   │   ├── import_chatgpt.py
│       │   │   ├── import_claude_conversations.py
│       │   │   ├── import_claude_projects.py
│       │   │   ├── import_memory_json.py
│       │   │   ├── mcp.py
│       │   │   ├── project.py
│       │   │   ├── status.py
│       │   │   ├── telemetry.py
│       │   │   └── tool.py
│       │   ├── container.py
│       │   └── main.py
│       ├── config.py
│       ├── db.py
│       ├── deps
│       │   ├── __init__.py
│       │   ├── config.py
│       │   ├── db.py
│       │   ├── importers.py
│       │   ├── projects.py
│       │   ├── repositories.py
│       │   └── services.py
│       ├── deps.py
│       ├── file_utils.py
│       ├── ignore_utils.py
│       ├── importers
│       │   ├── __init__.py
│       │   ├── base.py
│       │   ├── chatgpt_importer.py
│       │   ├── claude_conversations_importer.py
│       │   ├── claude_projects_importer.py
│       │   ├── memory_json_importer.py
│       │   └── utils.py
│       ├── markdown
│       │   ├── __init__.py
│       │   ├── entity_parser.py
│       │   ├── markdown_processor.py
│       │   ├── plugins.py
│       │   ├── schemas.py
│       │   └── utils.py
│       ├── mcp
│       │   ├── __init__.py
│       │   ├── async_client.py
│       │   ├── clients
│       │   │   ├── __init__.py
│       │   │   ├── directory.py
│       │   │   ├── knowledge.py
│       │   │   ├── memory.py
│       │   │   ├── project.py
│       │   │   ├── resource.py
│       │   │   └── search.py
│       │   ├── container.py
│       │   ├── project_context.py
│       │   ├── prompts
│       │   │   ├── __init__.py
│       │   │   ├── ai_assistant_guide.py
│       │   │   ├── continue_conversation.py
│       │   │   ├── recent_activity.py
│       │   │   ├── search.py
│       │   │   └── utils.py
│       │   ├── resources
│       │   │   ├── ai_assistant_guide.md
│       │   │   └── project_info.py
│       │   ├── server.py
│       │   └── tools
│       │       ├── __init__.py
│       │       ├── build_context.py
│       │       ├── canvas.py
│       │       ├── chatgpt_tools.py
│       │       ├── delete_note.py
│       │       ├── edit_note.py
│       │       ├── list_directory.py
│       │       ├── move_note.py
│       │       ├── project_management.py
│       │       ├── read_content.py
│       │       ├── read_note.py
│       │       ├── recent_activity.py
│       │       ├── search.py
│       │       ├── utils.py
│       │       ├── view_note.py
│       │       └── write_note.py
│       ├── models
│       │   ├── __init__.py
│       │   ├── base.py
│       │   ├── knowledge.py
│       │   ├── project.py
│       │   └── search.py
│       ├── project_resolver.py
│       ├── repository
│       │   ├── __init__.py
│       │   ├── entity_repository.py
│       │   ├── observation_repository.py
│       │   ├── postgres_search_repository.py
│       │   ├── project_info_repository.py
│       │   ├── project_repository.py
│       │   ├── relation_repository.py
│       │   ├── repository.py
│       │   ├── search_index_row.py
│       │   ├── search_repository_base.py
│       │   ├── search_repository.py
│       │   └── sqlite_search_repository.py
│       ├── runtime.py
│       ├── schemas
│       │   ├── __init__.py
│       │   ├── base.py
│       │   ├── cloud.py
│       │   ├── delete.py
│       │   ├── directory.py
│       │   ├── importer.py
│       │   ├── memory.py
│       │   ├── project_info.py
│       │   ├── prompt.py
│       │   ├── request.py
│       │   ├── response.py
│       │   ├── search.py
│       │   ├── sync_report.py
│       │   └── v2
│       │       ├── __init__.py
│       │       ├── entity.py
│       │       └── resource.py
│       ├── services
│       │   ├── __init__.py
│       │   ├── context_service.py
│       │   ├── directory_service.py
│       │   ├── entity_service.py
│       │   ├── exceptions.py
│       │   ├── file_service.py
│       │   ├── initialization.py
│       │   ├── link_resolver.py
│       │   ├── project_service.py
│       │   ├── search_service.py
│       │   └── service.py
│       ├── sync
│       │   ├── __init__.py
│       │   ├── background_sync.py
│       │   ├── coordinator.py
│       │   ├── sync_service.py
│       │   └── watch_service.py
│       ├── telemetry.py
│       ├── templates
│       │   └── prompts
│       │       ├── continue_conversation.hbs
│       │       └── search.hbs
│       └── utils.py
├── test-int
│   ├── BENCHMARKS.md
│   ├── cli
│   │   ├── test_project_commands_integration.py
│   │   └── test_version_integration.py
│   ├── conftest.py
│   ├── mcp
│   │   ├── test_build_context_underscore.py
│   │   ├── test_build_context_validation.py
│   │   ├── test_chatgpt_tools_integration.py
│   │   ├── test_default_project_mode_integration.py
│   │   ├── test_delete_note_integration.py
│   │   ├── test_edit_note_integration.py
│   │   ├── test_lifespan_shutdown_sync_task_cancellation_integration.py
│   │   ├── test_list_directory_integration.py
│   │   ├── test_move_note_integration.py
│   │   ├── test_project_management_integration.py
│   │   ├── test_project_state_sync_integration.py
│   │   ├── test_read_content_integration.py
│   │   ├── test_read_note_integration.py
│   │   ├── test_search_integration.py
│   │   ├── test_single_project_mcp_integration.py
│   │   └── test_write_note_integration.py
│   ├── test_db_wal_mode.py
│   └── test_disable_permalinks_integration.py
├── tests
│   ├── __init__.py
│   ├── api
│   │   ├── conftest.py
│   │   ├── test_api_container.py
│   │   ├── test_async_client.py
│   │   ├── test_continue_conversation_template.py
│   │   ├── test_directory_router.py
│   │   ├── test_importer_router.py
│   │   ├── test_knowledge_router.py
│   │   ├── test_management_router.py
│   │   ├── test_memory_router.py
│   │   ├── test_project_router_operations.py
│   │   ├── test_project_router.py
│   │   ├── test_prompt_router.py
│   │   ├── test_relation_background_resolution.py
│   │   ├── test_resource_router.py
│   │   ├── test_search_router.py
│   │   ├── test_search_template.py
│   │   ├── test_template_loader_helpers.py
│   │   ├── test_template_loader.py
│   │   └── v2
│   │       ├── __init__.py
│   │       ├── conftest.py
│   │       ├── test_directory_router.py
│   │       ├── test_importer_router.py
│   │       ├── test_knowledge_router.py
│   │       ├── test_memory_router.py
│   │       ├── test_project_router.py
│   │       ├── test_prompt_router.py
│   │       ├── test_resource_router.py
│   │       └── test_search_router.py
│   ├── cli
│   │   ├── cloud
│   │   │   ├── test_cloud_api_client_and_utils.py
│   │   │   ├── test_rclone_config_and_bmignore_filters.py
│   │   │   └── test_upload_path.py
│   │   ├── conftest.py
│   │   ├── test_auth_cli_auth.py
│   │   ├── test_cli_container.py
│   │   ├── test_cli_exit.py
│   │   ├── test_cli_tool_exit.py
│   │   ├── test_cli_tools.py
│   │   ├── test_cloud_authentication.py
│   │   ├── test_ignore_utils.py
│   │   ├── test_import_chatgpt.py
│   │   ├── test_import_claude_conversations.py
│   │   ├── test_import_claude_projects.py
│   │   ├── test_import_memory_json.py
│   │   ├── test_project_add_with_local_path.py
│   │   └── test_upload.py
│   ├── conftest.py
│   ├── db
│   │   └── test_issue_254_foreign_key_constraints.py
│   ├── importers
│   │   ├── test_conversation_indexing.py
│   │   ├── test_importer_base.py
│   │   └── test_importer_utils.py
│   ├── markdown
│   │   ├── __init__.py
│   │   ├── test_date_frontmatter_parsing.py
│   │   ├── test_entity_parser_error_handling.py
│   │   ├── test_entity_parser.py
│   │   ├── test_markdown_plugins.py
│   │   ├── test_markdown_processor.py
│   │   ├── test_observation_edge_cases.py
│   │   ├── test_parser_edge_cases.py
│   │   ├── test_relation_edge_cases.py
│   │   └── test_task_detection.py
│   ├── mcp
│   │   ├── clients
│   │   │   ├── __init__.py
│   │   │   └── test_clients.py
│   │   ├── conftest.py
│   │   ├── test_async_client_modes.py
│   │   ├── test_mcp_container.py
│   │   ├── test_obsidian_yaml_formatting.py
│   │   ├── test_permalink_collision_file_overwrite.py
│   │   ├── test_project_context.py
│   │   ├── test_prompts.py
│   │   ├── test_recent_activity_prompt_modes.py
│   │   ├── test_resources.py
│   │   ├── test_server_lifespan_branches.py
│   │   ├── test_tool_build_context.py
│   │   ├── test_tool_canvas.py
│   │   ├── test_tool_delete_note.py
│   │   ├── test_tool_edit_note.py
│   │   ├── test_tool_list_directory.py
│   │   ├── test_tool_move_note.py
│   │   ├── test_tool_project_management.py
│   │   ├── test_tool_read_content.py
│   │   ├── test_tool_read_note.py
│   │   ├── test_tool_recent_activity.py
│   │   ├── test_tool_resource.py
│   │   ├── test_tool_search.py
│   │   ├── test_tool_utils.py
│   │   ├── test_tool_view_note.py
│   │   ├── test_tool_write_note_kebab_filenames.py
│   │   ├── test_tool_write_note.py
│   │   └── tools
│   │       └── test_chatgpt_tools.py
│   ├── Non-MarkdownFileSupport.pdf
│   ├── README.md
│   ├── repository
│   │   ├── test_entity_repository_upsert.py
│   │   ├── test_entity_repository.py
│   │   ├── test_entity_upsert_issue_187.py
│   │   ├── test_observation_repository.py
│   │   ├── test_postgres_search_repository.py
│   │   ├── test_project_info_repository.py
│   │   ├── test_project_repository.py
│   │   ├── test_relation_repository.py
│   │   ├── test_repository.py
│   │   ├── test_search_repository_edit_bug_fix.py
│   │   └── test_search_repository.py
│   ├── schemas
│   │   ├── test_base_timeframe_minimum.py
│   │   ├── test_memory_serialization.py
│   │   ├── test_memory_url_validation.py
│   │   ├── test_memory_url.py
│   │   ├── test_relation_response_reference_resolution.py
│   │   ├── test_schemas.py
│   │   └── test_search.py
│   ├── Screenshot.png
│   ├── services
│   │   ├── test_context_service.py
│   │   ├── test_directory_service.py
│   │   ├── test_entity_service_disable_permalinks.py
│   │   ├── test_entity_service.py
│   │   ├── test_file_service.py
│   │   ├── test_initialization_cloud_mode_branches.py
│   │   ├── test_initialization.py
│   │   ├── test_link_resolver.py
│   │   ├── test_project_removal_bug.py
│   │   ├── test_project_service_operations.py
│   │   ├── test_project_service.py
│   │   └── test_search_service.py
│   ├── sync
│   │   ├── test_character_conflicts.py
│   │   ├── test_coordinator.py
│   │   ├── test_sync_service_incremental.py
│   │   ├── test_sync_service.py
│   │   ├── test_sync_wikilink_issue.py
│   │   ├── test_tmp_files.py
│   │   ├── test_watch_service_atomic_adds.py
│   │   ├── test_watch_service_edge_cases.py
│   │   ├── test_watch_service_reload.py
│   │   └── test_watch_service.py
│   ├── test_config.py
│   ├── test_deps.py
│   ├── test_production_cascade_delete.py
│   ├── test_project_resolver.py
│   ├── test_rclone_commands.py
│   ├── test_runtime.py
│   ├── test_telemetry.py
│   └── utils
│       ├── test_file_utils.py
│       ├── test_frontmatter_obsidian_compatible.py
│       ├── test_parse_tags.py
│       ├── test_permalink_formatting.py
│       ├── test_timezone_utils.py
│       ├── test_utf8_handling.py
│       └── test_validate_project_path.py
└── uv.lock
```

# Files

--------------------------------------------------------------------------------
/.python-version:
--------------------------------------------------------------------------------

```
3.14

```

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
*.py[cod]
__pycache__/
.pytest_cache/
.coverage
htmlcov/

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg

# Installer artifacts
installer/build/
installer/dist/
# Temporary disk images
rw.*.dmg

# Virtual environments
.env
.venv
env/
venv/
ENV/

# IDE
.idea/
.vscode/
*.swp
*.swo

# macOS
.DS_Store
.coverage.*

# obsidian docs:
/docs/.obsidian/
/examples/.obsidian/
/examples/.basic-memory/


# claude action
claude-output
**/.claude/settings.local.json
.mcp.json

```

--------------------------------------------------------------------------------
/.dockerignore:
--------------------------------------------------------------------------------

```
# Git files
.git/
.gitignore
.gitattributes

# Development files
.vscode/
.idea/
*.swp
*.swo
*~

# Testing files
tests/
test-int/
.pytest_cache/
.coverage
htmlcov/

# Build artifacts
build/
dist/
*.egg-info/
__pycache__/
*.pyc
*.pyo
*.pyd
.Python

# Virtual environments (uv creates these during build)
.venv/
venv/
.env

# CI/CD files
.github/

# Documentation (keep README.md and pyproject.toml)
docs/
CHANGELOG.md
CLAUDE.md
CONTRIBUTING.md

# Example files not needed for runtime
examples/

# Local development files
.basic-memory/
*.db
*.sqlite3

# OS files
.DS_Store
Thumbs.db

# Temporary files
tmp/
temp/
*.tmp
*.log
```

--------------------------------------------------------------------------------
/.env.example:
--------------------------------------------------------------------------------

```
# Basic Memory Environment Variables Example
# Copy this file to .env and customize as needed
# Note: .env files are gitignored and should never be committed

# ============================================================================
# PostgreSQL Test Database Configuration
# ============================================================================
# These variables allow you to override the default test database credentials
# Default values match docker-compose-postgres.yml for local development
#
# Only needed if you want to use different credentials or a remote test database
# By default, tests use: postgresql://basic_memory_user:dev_password@localhost:5433/basic_memory_test

# Full PostgreSQL test database URL (used by tests and migrations)
# POSTGRES_TEST_URL=postgresql+asyncpg://basic_memory_user:dev_password@localhost:5433/basic_memory_test

# Individual components (used by justfile postgres-reset command)
# POSTGRES_USER=basic_memory_user
# POSTGRES_TEST_DB=basic_memory_test

# ============================================================================
# Production Database Configuration
# ============================================================================
# For production use, set these in your deployment environment
# DO NOT use the test credentials above in production!

# BASIC_MEMORY_DATABASE_BACKEND=postgres  # or "sqlite"
# BASIC_MEMORY_DATABASE_URL=postgresql+asyncpg://user:password@host:port/database

```

--------------------------------------------------------------------------------
/tests/README.md:
--------------------------------------------------------------------------------

```markdown
# Dual-Backend Testing

Basic Memory tests run against both SQLite and Postgres backends to ensure compatibility.

## Quick Start

```bash
# Run tests against SQLite only (default, no setup needed)
pytest

# Run tests against Postgres only (requires docker-compose)
docker-compose -f docker-compose-postgres.yml up -d
pytest -m postgres

# Run tests against BOTH backends
docker-compose -f docker-compose-postgres.yml up -d
pytest --run-all-backends  # Not yet implemented - run both commands above
```

## How It Works

### Parametrized Backend Fixture

The `db_backend` fixture is parametrized to run tests against both `sqlite` and `postgres`:

```python
@pytest.fixture(
    params=[
        pytest.param("sqlite", id="sqlite"),
        pytest.param("postgres", id="postgres", marks=pytest.mark.postgres),
    ]
)
def db_backend(request) -> Literal["sqlite", "postgres"]:
    return request.param
```

### Backend-Specific Engine Factories

Each backend has its own engine factory implementation:

- **`sqlite_engine_factory`** - Uses in-memory SQLite (fast, isolated)
- **`postgres_engine_factory`** - Uses Postgres test database (realistic, requires Docker)

The main `engine_factory` fixture delegates to the appropriate implementation based on `db_backend`.
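
A minimal sketch of how that delegation might look (only the fixture names come from the list above; the wiring itself is an assumption, not the project's actual conftest):

```python
import pytest

@pytest.fixture
def engine_factory(request, db_backend):
    # Resolve the backend-specific factory lazily so only the selected
    # backend's fixture (and its setup cost) is exercised.
    if db_backend == "postgres":
        return request.getfixturevalue("postgres_engine_factory")
    return request.getfixturevalue("sqlite_engine_factory")
```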

### Configuration

The `app_config` fixture automatically configures the correct backend:

```python
# SQLite config
database_backend = DatabaseBackend.SQLITE
database_url = None  # Uses default SQLite path

# Postgres config
database_backend = DatabaseBackend.POSTGRES
database_url = "postgresql+asyncpg://basic_memory_user:dev_password@localhost:5433/basic_memory_test"
```

## Running Postgres Tests

### 1. Start Postgres Docker Container

```bash
docker-compose -f docker-compose-postgres.yml up -d
```

This starts:
- Postgres 17 on port **5433** (not 5432 to avoid conflicts)
- Test database: `basic_memory_test`
- Credentials: `basic_memory_user` / `dev_password`

### 2. Run Postgres Tests

```bash
# Run only Postgres tests
pytest -m postgres

# Run specific test with Postgres
pytest tests/test_entity_repository.py::test_create -m postgres

# Skip Postgres tests (default behavior)
pytest -m "not postgres"
```

### 3. Stop Docker Container

```bash
docker-compose -f docker-compose-postgres.yml down
```

## Test Isolation

### SQLite Tests
- Each test gets a fresh in-memory database
- Automatic cleanup (database destroyed after test)
- No setup required

### Postgres Tests
- Database is **cleaned before each test** (drop all tables, recreate) - see the sketch below
- Tests share the same Postgres instance but get isolated schemas
- Requires Docker Compose to be running
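
A sketch of that per-test reset using SQLAlchemy's async engine (the fixture name and the `Base` import path are assumptions, not this project's actual conftest):

```python
import pytest_asyncio

from basic_memory.models.base import Base  # assumed location of the declarative Base

@pytest_asyncio.fixture(autouse=True)
async def _reset_postgres_schema(engine):
    # Drop and recreate all tables so every test starts from a clean schema.
    async with engine.begin() as conn:
        await conn.run_sync(Base.metadata.drop_all)
        await conn.run_sync(Base.metadata.create_all)
    yield
```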

## Markers

- `postgres` - Marks tests that run against Postgres backend
- Use `-m postgres` to run only Postgres tests
- Use `-m "not postgres"` to skip Postgres tests (default)

## CI Integration

### GitHub Actions

Use service containers for Postgres (no Docker Compose needed):

```yaml
jobs:
  test:
    runs-on: ubuntu-latest

    # Postgres service container
    services:
      postgres:
        image: postgres:17
        env:
          POSTGRES_DB: basic_memory_test
          POSTGRES_USER: basic_memory_user
          POSTGRES_PASSWORD: dev_password
        ports:
          - 5433:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5

    steps:
      - name: Run SQLite tests
        run: pytest -m "not postgres"

      - name: Run Postgres tests
        run: pytest -m postgres
```

## Troubleshooting

### Postgres tests fail with "connection refused"

Make sure Docker Compose is running:
```bash
docker-compose -f docker-compose-postgres.yml ps
docker-compose -f docker-compose-postgres.yml logs postgres
```

### Port 5433 already in use

Either:
- Stop the conflicting service
- Change the port in `docker-compose-postgres.yml` and `tests/conftest.py`

### Tests hang or timeout

Check Postgres health:
```bash
docker-compose -f docker-compose-postgres.yml exec postgres pg_isready -U basic_memory_user
```

## Future Enhancements

- [ ] Add `--run-all-backends` CLI flag to run both backends in sequence
- [ ] Implement test fixtures for backend-specific features (e.g., Postgres full-text search vs SQLite FTS5)
- [ ] Add performance comparison benchmarks between backends
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
[![License: AGPL v3](https://img.shields.io/badge/License-AGPL_v3-blue.svg)](https://www.gnu.org/licenses/agpl-3.0)
[![PyPI version](https://badge.fury.io/py/basic-memory.svg)](https://badge.fury.io/py/basic-memory)
[![Python 3.12+](https://img.shields.io/badge/python-3.12+-blue.svg)](https://www.python.org/downloads/)
[![Tests](https://github.com/basicmachines-co/basic-memory/workflows/Tests/badge.svg)](https://github.com/basicmachines-co/basic-memory/actions)
[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff)
![](https://badge.mcpx.dev?type=server 'MCP Server')
![](https://badge.mcpx.dev?type=dev 'MCP Dev')
[![smithery badge](https://smithery.ai/badge/@basicmachines-co/basic-memory)](https://smithery.ai/server/@basicmachines-co/basic-memory)

## 🚀 Basic Memory Cloud is Live!

- **Cross-device and multi-platform support is here.** Your knowledge graph now works on desktop, web, and mobile - seamlessly synced across all your AI tools (Claude, ChatGPT, Gemini, Claude Code, and Codex).
- **Early Supporter Pricing:** Early users get 25% off forever.

The open source project continues as always. Cloud just makes it work everywhere.

[Sign up now →](https://basicmemory.com/beta) with a 7-day free trial.

# Basic Memory

Basic Memory lets you build persistent knowledge through natural conversations with Large Language Models (LLMs) like
Claude, while keeping everything in simple Markdown files on your computer. It uses the Model Context Protocol (MCP) to
enable any compatible LLM to read and write to your local knowledge base.

- Website: https://basicmachines.co
- Documentation: https://memory.basicmachines.co

## Pick up your conversation right where you left off

- AI assistants can load context from local files in a new conversation
- Notes are saved locally as Markdown files in real time
- No project knowledge or special prompting required

https://github.com/user-attachments/assets/a55d8238-8dd0-454a-be4c-8860dbbd0ddc

## Quick Start

```bash
# Install with uv (recommended)
uv tool install basic-memory

# Configure Claude Desktop (edit ~/Library/Application Support/Claude/claude_desktop_config.json)
# Add this to your config:
{
  "mcpServers": {
    "basic-memory": {
      "command": "uvx",
      "args": [
        "basic-memory",
        "mcp"
      ]
    }
  }
}
# Now in Claude Desktop, you can:
# - Write notes with "Create a note about coffee brewing methods"
# - Read notes with "What do I know about pour over coffee?"
# - Search with "Find information about Ethiopian beans"

```

You can view the shared context in the files under `~/basic-memory` (the default directory location).

### Alternative Installation via Smithery

You can use [Smithery](https://smithery.ai/server/@basicmachines-co/basic-memory) to automatically configure Basic
Memory for Claude Desktop:

```bash
npx -y @smithery/cli install @basicmachines-co/basic-memory --client claude
```

This installs and configures Basic Memory without requiring manual edits to the Claude Desktop configuration file. The
Smithery server hosts the MCP server component, while your data remains stored locally as Markdown files.

### Glama.ai

<a href="https://glama.ai/mcp/servers/o90kttu9ym">
  <img width="380" height="200" src="https://glama.ai/mcp/servers/o90kttu9ym/badge" alt="basic-memory MCP server" />
</a>

## Why Basic Memory?

Most LLM interactions are ephemeral - you ask a question, get an answer, and everything is forgotten. Each conversation
starts fresh, without the context or knowledge from previous ones. Current workarounds have limitations:

- Chat histories capture conversations but aren't structured knowledge
- RAG systems can query documents but don't let LLMs write back
- Vector databases require complex setups and often live in the cloud
- Knowledge graphs typically need specialized tools to maintain

Basic Memory addresses these problems with a simple approach: structured Markdown files that both humans and LLMs can
read
and write to. The key advantages:

- **Local-first:** All knowledge stays in files you control
- **Bi-directional:** Both you and the LLM read and write to the same files
- **Structured yet simple:** Uses familiar Markdown with semantic patterns
- **Traversable knowledge graph:** LLMs can follow links between topics
- **Standard formats:** Works with existing editors like Obsidian
- **Lightweight infrastructure:** Just local files indexed in a local SQLite database

With Basic Memory, you can:

- Have conversations with LLMs that build on previous knowledge and remember what you've discussed before
- Create structured notes during natural conversations
- Navigate your knowledge graph semantically
- Keep everything local and under your control
- Use familiar tools like Obsidian to view and edit notes
- Build a personal knowledge base that grows over time
- Sync your knowledge to the cloud with bidirectional synchronization
- Authenticate and manage cloud projects with subscription validation
- Mount cloud storage for direct file access

## How It Works in Practice

Let's say you're exploring coffee brewing methods and want to capture your knowledge. Here's how it works:

1. Start by chatting normally:

```
I've been experimenting with different coffee brewing methods. Key things I've learned:

- Pour over gives more clarity in flavor than French press
- Water temperature is critical - around 205°F seems best
- Freshly ground beans make a huge difference
```

... continue the conversation.

2. Ask the LLM to help structure this knowledge:

```
"Let's write a note about coffee brewing methods."
```

LLM creates a new Markdown file on your system (which you can see instantly in Obsidian or your editor):

```markdown
---
title: Coffee Brewing Methods
permalink: coffee-brewing-methods
tags:
- coffee
- brewing
---

# Coffee Brewing Methods

## Observations

- [method] Pour over provides more clarity and highlights subtle flavors
- [technique] Water temperature at 205°F (96°C) extracts optimal compounds
- [principle] Freshly ground beans preserve aromatics and flavor

## Relations

- relates_to [[Coffee Bean Origins]]
- requires [[Proper Grinding Technique]]
- affects [[Flavor Extraction]]
```

The note embeds semantic content and links to other topics via simple Markdown formatting.

3. You see this file on your computer in real time in the current project directory (default `~/basic-memory`).

- Real-time sync can be enabled by running `basic-memory sync --watch`

4. In a chat with the LLM, you can reference a topic:

```
Look at `coffee-brewing-methods` for context about pour over coffee
```

The LLM can now build rich context from the knowledge graph. For example:

```
Following relation 'relates_to [[Coffee Bean Origins]]':
- Found information about Ethiopian Yirgacheffe
- Notes on Colombian beans' nutty profile
- Altitude effects on bean characteristics

Following relation 'requires [[Proper Grinding Technique]]':
- Burr vs. blade grinder comparisons
- Grind size recommendations for different methods
- Impact of consistent particle size on extraction
```

Each related document can lead to more context, building a rich semantic understanding of your knowledge base.

This creates a two-way flow where:

- Humans write and edit Markdown files
- LLMs read and write through the MCP protocol
- Sync keeps everything consistent
- All knowledge stays in local files.

## Technical Implementation

Under the hood, Basic Memory:

1. Stores everything in Markdown files
2. Uses a SQLite database for searching and indexing
3. Extracts semantic meaning from simple Markdown patterns
    - Files become `Entity` objects
    - Each `Entity` can have `Observations`, or facts associated with it
    - `Relations` connect entities together to form the knowledge graph (see the sketch after this list)
4. Maintains the local knowledge graph derived from the files
5. Provides bidirectional synchronization between files and the knowledge graph
6. Implements the Model Context Protocol (MCP) for AI integration
7. Exposes tools that let AI assistants traverse and manipulate the knowledge graph
8. Uses memory:// URLs to reference entities across tools and conversations
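
As a rough mental model of those objects (illustrative dataclasses only - the actual ORM models live in `src/basic_memory/models/`):

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    category: str            # e.g. "method", "tip"
    content: str
    context: str | None = None

@dataclass
class Relation:
    relation_type: str       # e.g. "relates_to"
    target: str              # the title inside [[WikiLink]]

@dataclass
class Entity:
    title: str
    permalink: str
    observations: list[Observation] = field(default_factory=list)
    relations: list[Relation] = field(default_factory=list)
```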

The file format is just Markdown with some simple markup:

Each Markdown file has:

### Frontmatter

```markdown
---
title: <Entity title>
type: <The type of Entity> (e.g. note)
permalink: <a uri slug>
<optional metadata> (such as tags)
---
```

### Observations

Observations are facts about a topic.
They are added as Markdown list items in a special format that includes a `category`, optional `tags` marked with a
"#" character, and an optional `context`.

Observation Markdown format:

```markdown
- [category] content #tag (optional context)
```

Examples of observations:

```markdown
- [method] Pour over extracts more floral notes than French press
- [tip] Grind size should be medium-fine for pour over #brewing
- [preference] Ethiopian beans have bright, fruity flavors (especially from Yirgacheffe)
- [fact] Lighter roasts generally contain more caffeine than dark roasts
- [experiment] Tried 1:15 coffee-to-water ratio with good results
- [resource] James Hoffman's V60 technique on YouTube is excellent
- [question] Does water temperature affect extraction of different compounds differently?
- [note] My favorite local shop uses a 30-second bloom time
```
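
A sketch of how a line in this format could be matched (illustrative only - the project's real parser lives in `src/basic_memory/markdown/`):

```python
import re

OBSERVATION = re.compile(
    r"^- \[(?P<category>[^\]]+)\]\s+(?P<content>.*?)"
    r"(?:\s+\((?P<context>[^)]*)\))?\s*$"
)

m = OBSERVATION.match("- [tip] Grind size should be medium-fine for pour over #brewing")
assert m is not None
print(m.group("category"))  # "tip"; tags like #brewing stay inside content in this sketch
```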

### Relations

Relations are links to other topics. They define how entities connect in the knowledge graph.

Markdown format:

```markdown
- relation_type [[WikiLink]] (optional context)
```

Examples of relations:

```markdown
- pairs_well_with [[Chocolate Desserts]]
- grown_in [[Ethiopia]]
- contrasts_with [[Tea Brewing Methods]]
- requires [[Burr Grinder]]
- improves_with [[Fresh Beans]]
- relates_to [[Morning Routine]]
- inspired_by [[Japanese Coffee Culture]]
- documented_in [[Coffee Journal]]
```
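
The same illustrative approach works for relations (again a sketch, not the project's parser):

```python
import re

RELATION = re.compile(
    r"^- (?P<relation_type>\w+)\s+\[\[(?P<target>[^\]]+)\]\]"
    r"(?:\s+\((?P<context>[^)]*)\))?\s*$"
)

m = RELATION.match("- pairs_well_with [[Chocolate Desserts]]")
assert m is not None
print(m.group("relation_type"), "->", m.group("target"))  # pairs_well_with -> Chocolate Desserts
```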

## Using with VS Code

Add the following JSON block to your User Settings (JSON) file in VS Code. You can do this by pressing `Ctrl + Shift + P` and typing `Preferences: Open User Settings (JSON)`.

```json
{
  "mcp": {
    "servers": {
      "basic-memory": {
        "command": "uvx",
        "args": ["basic-memory", "mcp"]
      }
    }
  }
}
```

Optionally, you can add it to a file called `.vscode/mcp.json` in your workspace. This will allow you to share the configuration with others.

```json
{
  "servers": {
    "basic-memory": {
      "command": "uvx",
      "args": ["basic-memory", "mcp"]
    }
  }
}
```

You can use Basic Memory with VS Code to easily retrieve and store information while coding.

## Using with Claude Desktop

Basic Memory is built using the MCP (Model Context Protocol) and works with the Claude desktop app (https://claude.ai/):

1. Configure Claude Desktop to use Basic Memory:

Edit your MCP configuration file (usually located at `~/Library/Application Support/Claude/claude_desktop_config.json`
on macOS):

```json
{
  "mcpServers": {
    "basic-memory": {
      "command": "uvx",
      "args": [
        "basic-memory",
        "mcp"
      ]
    }
  }
}
```

If you want to use a specific project (see [Multiple Projects](#multiple-projects) below), update your Claude Desktop
config:

```json
{
  "mcpServers": {
    "basic-memory": {
      "command": "uvx",
      "args": [
        "basic-memory",
        "mcp",
        "--project",
        "your-project-name"
      ]
    }
  }
}
```

2. Sync your knowledge:

```bash
# One-time sync of local knowledge updates
basic-memory sync

# Run realtime sync process (recommended)
basic-memory sync --watch
```

3. Cloud features (optional, requires subscription):

```bash
# Authenticate with cloud
basic-memory cloud login

# Bidirectional sync with cloud
basic-memory cloud sync

# Verify cloud integrity
basic-memory cloud check

# Mount cloud storage
basic-memory cloud mount
```

4. In Claude Desktop, the LLM can now use these tools:

**Content Management:**
```
write_note(title, content, folder, tags) - Create or update notes
read_note(identifier, page, page_size) - Read notes by title or permalink
read_content(path) - Read raw file content (text, images, binaries)
view_note(identifier) - View notes as formatted artifacts
edit_note(identifier, operation, content) - Edit notes incrementally
move_note(identifier, destination_path) - Move notes with database consistency
delete_note(identifier) - Delete notes from knowledge base
```

**Knowledge Graph Navigation:**
```
build_context(url, depth, timeframe) - Navigate knowledge graph via memory:// URLs
recent_activity(type, depth, timeframe) - Find recently updated information
list_directory(dir_name, depth) - Browse directory contents with filtering
```

**Search & Discovery:**
```
search(query, page, page_size) - Search across your knowledge base
```

**Project Management:**
```
list_memory_projects() - List all available projects
create_memory_project(project_name, project_path) - Create new projects
get_current_project() - Show current project stats
sync_status() - Check synchronization status
```

**Visualization:**
```
canvas(nodes, edges, title, folder) - Generate knowledge visualizations
```

5. Example prompts to try:

```
"Create a note about our project architecture decisions"
"Find information about JWT authentication in my notes"
"Create a canvas visualization of my project components"
"Read my notes on the authentication system"
"What have I been working on in the past week?"
```

## Further info

See the [Documentation](https://memory.basicmachines.co/) for more info, including:

- [Complete User Guide](https://docs.basicmemory.com/user-guide/)
- [CLI tools](https://docs.basicmemory.com/guides/cli-reference/)
- [Cloud CLI and Sync](https://docs.basicmemory.com/guides/cloud-cli/)
- [Managing multiple Projects](https://docs.basicmemory.com/guides/cli-reference/#project)
- [Importing data from OpenAI/Claude Projects](https://docs.basicmemory.com/guides/cli-reference/#import)

## Logging

Basic Memory uses [Loguru](https://github.com/Delgan/loguru) for logging. The logging behavior varies by entry point:

| Entry Point | Default Behavior | Use Case |
|-------------|------------------|----------|
| CLI commands | File only | Prevents log output from interfering with command output |
| MCP server | File only | Stdout would corrupt the JSON-RPC protocol |
| API server | File (local) or stdout (cloud) | Docker/cloud deployments use stdout |

**Log file location:** `~/.basic-memory/basic-memory.log` (10MB rotation, 10 days retention)
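
A minimal Loguru sink matching those defaults might look like this (a sketch for illustration; the actual setup lives inside the package):

```python
import os

from loguru import logger

logger.add(
    os.path.expanduser("~/.basic-memory/basic-memory.log"),
    rotation="10 MB",     # start a new file once the log reaches 10MB
    retention="10 days",  # delete rotated files older than 10 days
    level=os.getenv("BASIC_MEMORY_LOG_LEVEL", "INFO"),
)
```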

### Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| `BASIC_MEMORY_LOG_LEVEL` | `INFO` | Log level: DEBUG, INFO, WARNING, ERROR |
| `BASIC_MEMORY_CLOUD_MODE` | `false` | When `true`, API logs to stdout with structured context |
| `BASIC_MEMORY_ENV` | `dev` | Set to `test` for test mode (stderr only) |

### Examples

```bash
# Enable debug logging
BASIC_MEMORY_LOG_LEVEL=DEBUG basic-memory sync

# View logs
tail -f ~/.basic-memory/basic-memory.log

# Cloud/Docker mode (stdout logging with structured context)
BASIC_MEMORY_CLOUD_MODE=true uvicorn basic_memory.api.app:app
```

## Telemetry

Basic Memory collects anonymous usage statistics to help improve the software. This follows the [Homebrew model](https://docs.brew.sh/Analytics) - telemetry is on by default with easy opt-out.

**What we collect:**
- App version, Python version, OS, architecture
- Feature usage (which MCP tools and CLI commands are used)
- Error types (sanitized - no file paths or personal data)

**What we NEVER collect:**
- Note content, file names, or paths
- Personal information
- IP addresses

**Opting out:**
```bash
# Disable telemetry
basic-memory telemetry disable

# Check status
basic-memory telemetry status

# Re-enable
basic-memory telemetry enable
```

Or set the environment variable:
```bash
export BASIC_MEMORY_TELEMETRY_ENABLED=false
```

For more details, see the [Telemetry documentation](https://basicmemory.com/telemetry).

## Development

### Running Tests

Basic Memory supports dual database backends (SQLite and Postgres). By default, tests run against SQLite. Set `BASIC_MEMORY_TEST_POSTGRES=1` to run against Postgres (uses testcontainers - Docker required).

**Quick Start:**
```bash
# Run all tests against SQLite (default, fast)
just test-sqlite

# Run all tests against Postgres (uses testcontainers)
just test-postgres

# Run both SQLite and Postgres tests
just test
```

**Available Test Commands:**

- `just test` - Run all tests against both SQLite and Postgres
- `just test-sqlite` - Run all tests against SQLite (fast, no Docker needed)
- `just test-postgres` - Run all tests against Postgres (uses testcontainers)
- `just test-unit-sqlite` - Run unit tests against SQLite
- `just test-unit-postgres` - Run unit tests against Postgres
- `just test-int-sqlite` - Run integration tests against SQLite
- `just test-int-postgres` - Run integration tests against Postgres
- `just test-windows` - Run Windows-specific tests (auto-skips on other platforms)
- `just test-benchmark` - Run performance benchmark tests

**Postgres Testing:**

Postgres tests use [testcontainers](https://testcontainers-python.readthedocs.io/) which automatically spins up a Postgres instance in Docker. No manual database setup required - just have Docker running.

**Test Markers:**

Tests use pytest markers for selective execution:
- `windows` - Windows-specific database optimizations
- `benchmark` - Performance tests (excluded from default runs)

**Other Development Commands:**
```bash
just install          # Install with dev dependencies
just lint             # Run linting checks
just typecheck        # Run type checking
just format           # Format code with ruff
just check            # Run all quality checks
just migration "msg"  # Create database migration
```

See the [justfile](justfile) for the complete list of development commands.

## License

AGPL-3.0

Contributions are welcome. See the [Contributing](CONTRIBUTING.md) guide for info about setting up the project locally
and submitting PRs.

## Star History

<a href="https://www.star-history.com/#basicmachines-co/basic-memory&Date">
 <picture>
   <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=basicmachines-co/basic-memory&type=Date&theme=dark" />
   <source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=basicmachines-co/basic-memory&type=Date" />
   <img alt="Star History Chart" src="https://api.star-history.com/svg?repos=basicmachines-co/basic-memory&type=Date" />
 </picture>
</a>

Built with ♥️ by Basic Machines

```

--------------------------------------------------------------------------------
/SECURITY.md:
--------------------------------------------------------------------------------

```markdown
# Security Policy

## Supported Versions

| Version | Supported          |
| ------- | ------------------ |
| 0.x.x   | :white_check_mark: |

## Reporting a Vulnerability

If you find a vulnerability, please contact [email protected]

```

--------------------------------------------------------------------------------
/CODE_OF_CONDUCT.md:
--------------------------------------------------------------------------------

```markdown
# Code of Conduct

## Purpose

Maintain a respectful and professional environment where contributions can be made without harassment or
negativity.

## Standards

Respectful communication and collaboration are expected. Offensive behavior, harassment, or personal attacks will not be
tolerated.

## Reporting Issues

To report inappropriate behavior, contact [[email protected]].

## Consequences

Violations of this code may lead to consequences, including being banned from contributing to the project.

```

--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------

```markdown
# Contributing to Basic Memory

Thank you for considering contributing to Basic Memory! This document outlines the process for contributing to the
project and how to get started as a developer.

## Getting Started

### Development Environment

1. **Clone the Repository**:
   ```bash
   git clone https://github.com/basicmachines-co/basic-memory.git
   cd basic-memory
   ```

2. **Install Dependencies**:
   ```bash
   # Using just (recommended)
   just install
   
   # Or using uv
   uv pip install -e ".[dev]"
   
   # Or using pip
   pip install -e ".[dev]"
   ```

   > **Note**: Basic Memory uses [just](https://just.systems) as a modern command runner. Install with `brew install just` or `cargo install just`.

3. **Activate the Virtual Environment**
   ```bash
   source .venv/bin/activate
   ```

4. **Run the Tests**:
   ```bash
   # Run all tests with unified coverage (unit + integration)
   just test

   # Run unit tests only (fast, no coverage)
   just test-unit

   # Run integration tests only (fast, no coverage)
   just test-int

   # Generate HTML coverage report
   just coverage

   # Run a specific test
   pytest tests/path/to/test_file.py::test_function_name
   ```

### Development Workflow

1. **Fork the Repo**: Fork the repository on GitHub and clone your copy.
2. **Create a Branch**: Create a new branch for your feature or fix.
   ```bash
   git checkout -b feature/your-feature-name
   # or
   git checkout -b fix/issue-you-are-fixing
   ```
3. **Make Your Changes**: Implement your changes with appropriate test coverage.
4. **Check Code Quality**:
   ```bash
   # Run all checks at once
   just check
   
   # Or run individual checks
   just lint      # Run linting
   just format    # Format code
   just typecheck   # Type checking
   ```
5. **Test Your Changes**: Ensure all tests pass locally and maintain 100% test coverage.
   ```bash
   just test
   ```
6. **Submit a PR**: Submit a pull request with a detailed description of your changes.

## LLM-Assisted Development

This project is designed for collaborative development between humans and LLMs (Large Language Models):

1. **CLAUDE.md**: The repository includes a `CLAUDE.md` file that serves as a project guide for both humans and LLMs.
   This file contains:
    - Key project information and architectural overview
    - Development commands and workflows
    - Code style guidelines
    - Documentation standards

2. **AI-Human Collaborative Workflow**:
    - We encourage using LLMs like Claude for code generation, reviews, and documentation
    - When possible, save context in markdown files that can be referenced later
    - This enables seamless knowledge transfer between different development sessions
    - Claude can help with implementation details while you focus on architecture and design

3. **Adding to CLAUDE.md**:
    - If you discover useful project information or common commands, consider adding them to CLAUDE.md
    - This helps all contributors (human and AI) maintain consistent knowledge of the project

## Pull Request Process

1. **Create a Pull Request**: Open a PR against the `main` branch with a clear title and description.
2. **Sign the Developer Certificate of Origin (DCO)**: All contributions require signing our DCO, which certifies that
   you have the right to submit your contributions. This will be automatically checked by our CLA assistant when you
   create a PR.
3. **PR Description**: Include:
    - What the PR changes
    - Why the change is needed
    - How you tested the changes
    - Any related issues (use "Fixes #123" to automatically close issues)
4. **Code Review**: Wait for code review and address any feedback.
5. **CI Checks**: Ensure all CI checks pass.
6. **Merge**: Once approved, a maintainer will merge your PR.

## Developer Certificate of Origin

By contributing to this project, you agree to the [Developer Certificate of Origin (DCO)](CLA.md). This means you
certify that:

- You have the right to submit your contributions
- You're not knowingly submitting code with patent or copyright issues
- Your contributions are provided under the project's license (AGPL-3.0)

This is a lightweight alternative to a Contributor License Agreement and helps ensure that all contributions can be
properly incorporated into the project and potentially used in commercial applications.

### Signing Your Commits

Sign your commit:

**Using the `-s` or `--signoff` flag**:

```bash
git commit -s -m "Your commit message"
```

This adds a `Signed-off-by` line to your commit message, certifying that you have the right to submit your
contribution under the project's license and that you agree to the DCO.

## Code Style Guidelines

- **Python Version**: Python 3.12+ with full type annotations (3.12+ required for type parameter syntax)
- **Line Length**: 100 characters maximum
- **Formatting**: Use ruff for consistent styling
- **Import Order**: Standard lib, third-party, local imports
- **Naming**: Use snake_case for functions/variables, PascalCase for classes
- **Documentation**: Add docstrings to public functions, classes, and methods
- **Type Annotations**: Use type hints for all functions and methods

## Testing Guidelines

### Test Structure

Basic Memory uses two test directories with unified coverage reporting:

- **`tests/`**: Unit tests that test individual components in isolation
  - Fast execution with extensive mocking
  - Test individual functions, classes, and modules
  - Run with: `just test-unit` (no coverage, fast)

- **`test-int/`**: Integration tests that test real-world scenarios
  - Test full workflows with real database and file operations
  - Include performance benchmarks
  - More realistic but slower than unit tests
  - Run with: `just test-int` (no coverage, fast)

### Running Tests

```bash
# Run all tests with unified coverage report
just test

# Run only unit tests (fast iteration)
just test-unit

# Run only integration tests
just test-int

# Generate HTML coverage report
just coverage

# Run specific test
pytest tests/path/to/test_file.py::test_function_name

# Run tests excluding benchmarks
pytest -m "not benchmark"

# Run only benchmark tests
pytest -m benchmark test-int/test_sync_performance_benchmark.py
```

### Performance Benchmarks

The `test-int/test_sync_performance_benchmark.py` file contains performance benchmarks that measure sync and indexing speed:

- `test_benchmark_sync_100_files` - Small repository performance
- `test_benchmark_sync_500_files` - Medium repository performance
- `test_benchmark_sync_1000_files` - Large repository performance (marked slow)
- `test_benchmark_resync_no_changes` - Re-sync performance baseline

Run benchmarks with:
```bash
# Run all benchmarks (excluding slow ones)
pytest test-int/test_sync_performance_benchmark.py -v -m "benchmark and not slow"

# Run all benchmarks including slow ones
pytest test-int/test_sync_performance_benchmark.py -v -m benchmark

# Run specific benchmark
pytest test-int/test_sync_performance_benchmark.py::test_benchmark_sync_100_files -v
```

See `test-int/BENCHMARKS.md` for detailed benchmark documentation.

### Testing Best Practices

- **Coverage Target**: We aim for high test coverage for all code
- **Test Framework**: Use pytest for unit and integration tests
- **Mocking**: Avoid mocking in integration tests; use sparingly in unit tests
- **Edge Cases**: Test both normal operation and edge cases
- **Database Testing**: Use in-memory SQLite for testing database operations
- **Fixtures**: Use async pytest fixtures for setup and teardown
- **Markers**: Use `@pytest.mark.benchmark` for benchmarks, `@pytest.mark.slow` for slow tests

## Release Process

Basic Memory uses automatic versioning based on git tags with `uv-dynamic-versioning`. Here's how releases work:

### Version Management
- **Development versions**: Automatically generated from git commits (e.g., `0.12.4.dev26+468a22f`)
- **Beta releases**: Created by tagging with beta suffixes (e.g., `git tag v0.13.0b1`)
- **Stable releases**: Created by tagging with version numbers (e.g., `git tag v0.13.0`)
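
To check which version is installed (standard library only, shown for illustration):

```python
from importlib.metadata import version

print(version("basic-memory"))  # e.g. "0.12.4.dev26+468a22f" on a dev build
```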

### Release Workflows

#### Development Builds
- Automatically published to PyPI on every commit to `main`
- Version format: `0.12.4.dev26+468a22f` (base version + dev + commit count + hash)
- Users install with: `pip install basic-memory --pre --force-reinstall`

#### Beta Releases
1. Create and push a beta tag: `git tag v0.13.0b1 && git push origin v0.13.0b1`
2. GitHub Actions automatically builds and publishes to PyPI
3. Users install with: `pip install basic-memory --pre`

#### Stable Releases
1. Create and push a version tag: `git tag v0.13.0 && git push origin v0.13.0`
2. GitHub Actions automatically:
   - Builds the package with version `0.13.0`
   - Creates GitHub release with auto-generated notes
   - Publishes to PyPI
3. Users install with: `pip install basic-memory`

### For Contributors
- No manual version bumping required
- Versions are automatically derived from git tags
- Focus on code changes, not version management

## Creating Issues

If you're planning to work on something, please create an issue first to discuss the approach. Include:

- A clear title and description
- Steps to reproduce if reporting a bug
- Expected behavior vs. actual behavior
- Any relevant logs or screenshots
- Your proposed solution, if you have one

## Code of Conduct

All contributors must follow the [Code of Conduct](CODE_OF_CONDUCT.md).

## Thank You!

Your contributions help make Basic Memory better. We appreciate your time and effort!
```

--------------------------------------------------------------------------------
/CLAUDE.md:
--------------------------------------------------------------------------------

```markdown
# CLAUDE.md - Basic Memory Project Guide

## Project Overview

Basic Memory is a local-first knowledge management system built on the Model Context Protocol (MCP). It enables
bidirectional communication between LLMs (like Claude) and markdown files, creating a personal knowledge graph that can
be traversed using links between documents.

## CODEBASE DEVELOPMENT

### Project information

See the [README.md](README.md) file for a project overview.

### Build and Test Commands

- Install: `just install` or `pip install -e ".[dev]"`
- Run all tests (SQLite + Postgres): `just test`
- Run all tests against SQLite: `just test-sqlite`
- Run all tests against Postgres: `just test-postgres` (uses testcontainers)
- Run unit tests (SQLite): `just test-unit-sqlite`
- Run unit tests (Postgres): `just test-unit-postgres`
- Run integration tests (SQLite): `just test-int-sqlite`
- Run integration tests (Postgres): `just test-int-postgres`
- Generate HTML coverage: `just coverage`
- Single test: `pytest tests/path/to/test_file.py::test_function_name`
- Run benchmarks: `pytest test-int/test_sync_performance_benchmark.py -v -m "benchmark and not slow"`
- Lint: `just lint` or `ruff check . --fix`
- Type check: `just typecheck` or `uv run pyright`
- Format: `just format` or `uv run ruff format .`
- Run all code checks: `just check` (runs lint, format, typecheck, test)
- Create db migration: `just migration "Your migration message"`
- Run development MCP Inspector: `just run-inspector`

**Note:** Project requires Python 3.12+ (uses type parameter syntax and `type` aliases introduced in 3.12)

**Postgres Testing:** Uses [testcontainers](https://testcontainers-python.readthedocs.io/) which automatically spins up a Postgres instance in Docker. No manual database setup required - just have Docker running.

### Test Structure

- `tests/` - Unit tests for individual components (mocked, fast)
- `test-int/` - Integration tests for real-world scenarios (no mocks, realistic)
- Both directories are covered by unified coverage reporting
- Benchmark tests in `test-int/` are marked with `@pytest.mark.benchmark`
- Slow tests are marked with `@pytest.mark.slow`

### Code Style Guidelines

- Line length: 100 characters max
- Python 3.12+ with full type annotations (uses type parameters and type aliases)
- Format with ruff (consistent styling)
- Import order: standard lib, third-party, local imports
- Naming: snake_case for functions/variables, PascalCase for classes
- Prefer async patterns with SQLAlchemy 2.0
- Use Pydantic v2 for data validation and schemas
- CLI uses Typer for command structure
- API uses FastAPI for endpoints
- Follow the repository pattern for data access
- Tools communicate with API routers via the httpx ASGI client (in process)
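
The in-process ASGI pattern amounts to pointing httpx at the FastAPI app instead of a network socket. A minimal sketch (the real wiring lives behind `get_client()`, described below):

```python
import httpx

from basic_memory.api import app


def make_asgi_client() -> httpx.AsyncClient:
    # Requests are dispatched to the FastAPI app in-process; no server runs
    transport = httpx.ASGITransport(app=app)
    return httpx.AsyncClient(transport=transport, base_url="http://test")
```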

### Code Change Guidelines

- **Full file read before edits**: Before editing any file, read it in full first to ensure complete context; partial reads lead to corrupted edits
- **Minimize diffs**: Prefer the smallest change that satisfies the request. Avoid unrelated refactors or style rewrites unless necessary for correctness
- **No speculative getattr**: Never use `getattr(obj, "attr", default)` when unsure about attribute names. Check the class definition or source code first
- **Fail fast**: Write code with fail-fast logic by default. Do not swallow exceptions by catching them and merely logging an error or warning
- **No fallback logic**: Do not add fallback logic unless explicitly told to and agreed with the user
- **No guessing**: Do not say "The issue is..." before you actually know what the issue is. Investigate first.

### Literate Programming Style

Code should tell a story. Comments must explain the "why" and narrative flow, not just the "what".

**Section Headers:**
For files with multiple phases of logic, add section headers so the control flow reads like chapters:
```python
# --- Authentication ---
# ... auth logic ...

# --- Data Validation ---
# ... validation logic ...

# --- Business Logic ---
# ... core logic ...
```

**Decision Point Comments:**
For conditionals that materially change behavior (gates, fallbacks, retries, feature flags), add comments with:
- **Trigger**: what condition causes this branch
- **Why**: the rationale (cost, correctness, UX, determinism)
- **Outcome**: what changes downstream

```python
# Trigger: project has no active sync watcher
# Why: avoid duplicate file system watchers consuming resources
# Outcome: starts new watcher, registers in active_watchers dict
if project_id not in active_watchers:
    start_watcher(project_id)
```

**Constraint Comments:**
If code exists because of a constraint (async requirements, rate limits, schema compatibility), explain the constraint near the code:
```python
# SQLite requires WAL mode for concurrent read/write access
connection.execute("PRAGMA journal_mode=WAL")
```

**What NOT to Comment:**
Avoid comments that restate obvious code:
```python
# Bad - restates code
counter += 1  # increment counter

# Good - explains why
counter += 1  # track retries for backoff calculation
```

### Codebase Architecture

See [docs/ARCHITECTURE.md](docs/ARCHITECTURE.md) for detailed architecture documentation.

**Directory Structure:**
- `/alembic` - Alembic db migrations
- `/api` - FastAPI REST endpoints + `container.py` composition root
- `/cli` - Typer CLI + `container.py` composition root
- `/deps` - Feature-scoped FastAPI dependencies (config, db, projects, repositories, services, importers)
- `/importers` - Import functionality for Claude, ChatGPT, and other sources
- `/markdown` - Markdown parsing and processing
- `/mcp` - MCP server + `container.py` composition root + `clients/` typed API clients
- `/models` - SQLAlchemy ORM models
- `/repository` - Data access layer
- `/schemas` - Pydantic models for validation
- `/services` - Business logic layer
- `/sync` - File synchronization services + `coordinator.py` for lifecycle management

**Composition Roots:**
Each entrypoint (API, MCP, CLI) has a composition root that:
- Reads `ConfigManager` (the only place that reads global config)
- Resolves runtime mode via `RuntimeMode` enum (TEST > CLOUD > LOCAL)
- Provides dependencies to downstream code explicitly
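
A minimal sketch of this pattern (the `RuntimeMode` values and helper names here are assumptions; the real enum and wiring live in the `container.py` modules):

```python
from enum import Enum

from basic_memory.config import BasicMemoryConfig, ConfigManager


class RuntimeMode(Enum):
    TEST = "test"
    CLOUD = "cloud"
    LOCAL = "local"


def resolve_mode(config: BasicMemoryConfig) -> RuntimeMode:
    # Precedence: TEST > CLOUD > LOCAL
    if config.is_test_env:
        return RuntimeMode.TEST
    if config.cloud_mode:
        return RuntimeMode.CLOUD
    return RuntimeMode.LOCAL


def build_container() -> tuple[BasicMemoryConfig, RuntimeMode]:
    # Composition root: the only place that reads global config
    config = ConfigManager().config
    return config, resolve_mode(config)
```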

**Typed API Clients (MCP):**
MCP tools use typed clients in `mcp/clients/` to communicate with the API:
- `KnowledgeClient` - Entity CRUD operations
- `SearchClient` - Search operations
- `MemoryClient` - Context building
- `DirectoryClient` - Directory listing
- `ResourceClient` - Resource reading
- `ProjectClient` - Project management

Flow: MCP Tool → Typed Client → HTTP API → Router → Service → Repository
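
Sketched end to end (adapted from the `mcp/clients` package docstring; `create_entity` and the `project_id` parameter are taken from that usage example):

```python
from basic_memory.mcp.async_client import get_client
from basic_memory.mcp.clients import KnowledgeClient


async def create_entity_tool(project_id: str, entity_data: dict):
    async with get_client() as http_client:
        knowledge = KnowledgeClient(http_client, project_id)
        # The typed client owns the API path, error handling, and validation
        return await knowledge.create_entity(entity_data)
```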

### Development Notes

- MCP tools are defined in src/basic_memory/mcp/tools/
- MCP prompts are defined in src/basic_memory/mcp/prompts/
- MCP tools should be atomic, composable operations
- Use `textwrap.dedent()` for multi-line string formatting in prompts and tools (see the sketch after this list)
- MCP Prompts are used to invoke tools and format content with instructions for an LLM
- Schema changes require Alembic migrations
- SQLite is used for indexing and full text search, files are source of truth
- Testing uses pytest with asyncio support (strict mode)
- Unit tests (`tests/`) use mocks when necessary; integration tests (`test-int/`) use real implementations
- By default, tests run against SQLite (fast, no Docker needed)
- Set `BASIC_MEMORY_TEST_POSTGRES=1` to run against Postgres (uses testcontainers - Docker required)
- Each test runs in a standalone environment with isolated database and tmp_path directory
- CI runs SQLite and Postgres tests in parallel for faster feedback
- Performance benchmarks are in `test-int/test_sync_performance_benchmark.py`
- Use pytest markers: `@pytest.mark.benchmark` for benchmarks, `@pytest.mark.slow` for slow tests
- **Coverage must stay at 100%**: Write tests for new code. Only use `# pragma: no cover` when tests would require excessive mocking (e.g., TYPE_CHECKING blocks, error handlers that need failure injection, runtime-mode-dependent code paths)
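
A small example of the `textwrap.dedent()` convention mentioned above (names are illustrative):

```python
import textwrap


def format_summary(title: str, count: int) -> str:
    # Keep the source indented naturally; dedent before returning to the LLM
    return textwrap.dedent(f"""
        # {title}

        Found {count} matching notes.
        """).strip()
```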

### Async Client Pattern (Important!)

**All MCP tools and CLI commands use the context manager pattern for HTTP clients:**

```python
from basic_memory.mcp.async_client import get_client
from basic_memory.mcp.tools.utils import call_get

async def my_mcp_tool():
    async with get_client() as client:
        # Use client for API calls
        response = await call_get(client, "/path")
        return response
```

**Do NOT use:**
- ❌ `from basic_memory.mcp.async_client import client` (deprecated module-level client)
- ❌ Manual auth header management
- ❌ `inject_auth_header()` (deleted)

**Key principles:**
- Auth happens at client creation, not per-request
- Proper resource management via context managers
- Supports three modes: Local (ASGI), CLI cloud (HTTP + auth), Cloud app (factory injection)
- Factory pattern enables dependency injection for cloud consolidation

**For cloud app integration:**
```python
from basic_memory.mcp import async_client

# Set custom factory before importing tools
async_client.set_client_factory(your_custom_factory)
```

See SPEC-16 for full context manager refactor details.

## BASIC MEMORY PRODUCT USAGE

### Knowledge Structure

- Entity: Any concept, document, or idea represented as a markdown file
- Observation: A categorized fact about an entity (`- [category] content`)
- Relation: A directional link between entities (`- relation_type [[Target]]`)
- Frontmatter: YAML metadata at the top of markdown files
- Knowledge representation follows precise markdown format:
    - Observations with [category] prefixes
    - Relations with WikiLinks [[Entity]]
    - Frontmatter with metadata
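
A minimal example note in this format (title, categories, and relation types are illustrative):

```markdown
---
title: Coffee Brewing
tags: [coffee]
---

# Coffee Brewing

## Observations
- [technique] Pour-over brings out floral notes
- [preference] Medium-light roasts work best

## Relations
- relates_to [[Espresso]]
- part_of [[Morning Routine]]
```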

### Basic Memory Commands

**Local Commands:**
- Check sync status: `basic-memory status`
- Import from Claude: `basic-memory import claude conversations`
- Import from ChatGPT: `basic-memory import chatgpt`
- Import from Memory JSON: `basic-memory import memory-json`
- Tool access: `basic-memory tool` (provides CLI access to MCP tools)
    - Continue: `basic-memory tool continue-conversation --topic="search"`

**Project Management:**
- List projects: `basic-memory project list`
- Add project: `basic-memory project add "name" ~/path`
- Project info: `basic-memory project info`
- One-way sync (local -> cloud): `basic-memory project sync`
- Bidirectional sync: `basic-memory project bisync`
- Integrity check: `basic-memory project check`

**Cloud Commands (requires subscription):**
- Authenticate: `basic-memory cloud login`
- Logout: `basic-memory cloud logout`
- Check cloud status: `basic-memory cloud status`
- Setup cloud sync: `basic-memory cloud setup`

### MCP Capabilities

- Basic Memory exposes these MCP tools to LLMs:

  **Content Management:**
    - `write_note(title, content, folder, tags)` - Create/update markdown notes with semantic observations and relations
    - `read_note(identifier, page, page_size)` - Read notes by title, permalink, or memory:// URL with knowledge graph awareness
    - `read_content(path)` - Read raw file content (text, images, binaries) without knowledge graph processing
    - `view_note(identifier, page, page_size)` - View notes as formatted artifacts for better readability
    - `edit_note(identifier, operation, content)` - Edit notes incrementally (append, prepend, find/replace, replace_section)
    - `move_note(identifier, destination_path)` - Move notes to new locations, updating database and maintaining links
    - `delete_note(identifier)` - Delete notes from the knowledge base

  **Knowledge Graph Navigation:**
    - `build_context(url, depth, timeframe)` - Navigate the knowledge graph via memory:// URLs for conversation continuity
    - `recent_activity(type, depth, timeframe)` - Get recently updated information with specified timeframe (e.g., "1d", "1 week")
    - `list_directory(dir_name, depth, file_name_glob)` - Browse directory contents with filtering and depth control

  **Search & Discovery:**
    - `search_notes(query, page, page_size, search_type, types, entity_types, after_date)` - Full-text search across all content with advanced filtering options

  **Project Management:**
    - `list_memory_projects()` - List all available projects with their status
    - `create_memory_project(project_name, project_path, set_default)` - Create new Basic Memory projects
    - `delete_project(project_name)` - Delete a project from configuration

  **Visualization:**
    - `canvas(nodes, edges, title, folder)` - Generate Obsidian canvas files for knowledge graph visualization

  **ChatGPT-Compatible Tools:**
    - `search(query)` - Search across knowledge base (OpenAI actions compatible)
    - `fetch(id)` - Fetch full content of a search result document

- MCP Prompts for better AI interaction:
    - `ai_assistant_guide()` - Guidance on effectively using Basic Memory tools for AI assistants
    - `continue_conversation(topic, timeframe)` - Continue previous conversations with relevant historical context
    - `search(query, after_date)` - Search with detailed, formatted results for better context understanding
    - `recent_activity(timeframe)` - View recently changed items with formatted output

### Cloud Features (v0.15.0+)

Basic Memory now supports cloud synchronization and storage (requires active subscription):

**Authentication:**
- JWT-based authentication with subscription validation
- Secure session management with token refresh
- Support for multiple cloud projects

**Bidirectional Sync:**
- rclone bisync integration for two-way synchronization
- Conflict resolution and integrity verification
- Real-time sync with change detection
- Mount/unmount cloud storage for direct file access

**Cloud Project Management:**
- Create and manage projects in the cloud
- Toggle between local and cloud modes
- Per-project sync configuration
- Subscription-based access control

**Security & Performance:**
- Removed .env file loading for improved security
- .gitignore integration (respects gitignored files)
- WAL mode for SQLite performance
- Background relation resolution (non-blocking startup)
- API performance optimizations (SPEC-11)

## AI-Human Collaborative Development

Basic Memory emerged from and enables a new kind of development process that combines human and AI capabilities. Instead
of using AI just for code generation, we've developed a true collaborative workflow:

1. AI (LLM) writes initial implementation based on specifications and context
2. Human reviews, runs tests, and commits code with any necessary adjustments
3. Knowledge persists across conversations using Basic Memory's knowledge graph
4. Development continues seamlessly across different AI sessions with consistent context
5. Results improve through iterative collaboration and shared understanding

This approach has allowed us to tackle more complex challenges and build a more robust system than either humans or AI
could achieve independently.

**Problem-Solving Guidance:**
- If a solution isn't working after reasonable effort, suggest alternative approaches
- Don't persist with a problematic library or pattern when better alternatives exist
- Example: When py-pglite caused cascading test failures, switching to testcontainers-postgres was the right call

## GitHub Integration

Basic Memory has taken AI-Human collaboration to the next level by integrating Claude directly into the development workflow through GitHub:

### GitHub MCP Tools

Using the GitHub Model Context Protocol server, Claude can now:

- **Repository Management**:
  - View repository files and structure
  - Read file contents
  - Create new branches
  - Create and update files

- **Issue Management**:
  - Create new issues
  - Comment on existing issues
  - Close and update issues
  - Search across issues

- **Pull Request Workflow**:
  - Create pull requests
  - Review code changes
  - Add comments to PRs

This integration enables Claude to participate as a full team member in the development process, not just as a code generation tool. Claude's GitHub account ([bm-claudeai](https://github.com/bm-claudeai)) is a member of the Basic Machines organization with direct contributor access to the codebase.

### Collaborative Development Process

With GitHub integration, the development workflow includes:

1. **Direct code review** - Claude can analyze PRs and provide detailed feedback
2. **Contribution tracking** - All of Claude's contributions are properly attributed in the Git history
3. **Branch management** - Claude can create feature branches for implementations
4. **Documentation maintenance** - Claude can keep documentation updated as the code evolves
5. **Code Commits**: ALWAYS sign off commits with `git commit -s`

This level of integration represents a new paradigm in AI-human collaboration, where the AI assistant becomes a full-fledged team member rather than just a tool for generating code snippets.

```

--------------------------------------------------------------------------------
/tests/markdown/__init__.py:
--------------------------------------------------------------------------------

```python

```

--------------------------------------------------------------------------------
/tests/mcp/clients/__init__.py:
--------------------------------------------------------------------------------

```python

```

--------------------------------------------------------------------------------
/tests/api/v2/__init__.py:
--------------------------------------------------------------------------------

```python
"""V2 API tests."""

```

--------------------------------------------------------------------------------
/src/basic_memory/cli/__init__.py:
--------------------------------------------------------------------------------

```python
"""CLI tools for basic-memory"""

```

--------------------------------------------------------------------------------
/src/basic_memory/mcp/__init__.py:
--------------------------------------------------------------------------------

```python
"""MCP server for basic-memory."""

```

--------------------------------------------------------------------------------
/.claude/settings.json:
--------------------------------------------------------------------------------

```json
{
  "enabledPlugins": {
    "basic-memory@basicmachines": true
  }
}

```

--------------------------------------------------------------------------------
/src/basic_memory/api/__init__.py:
--------------------------------------------------------------------------------

```python
"""Basic Memory API module."""

from .app import app

__all__ = ["app"]

```

--------------------------------------------------------------------------------
/tests/__init__.py:
--------------------------------------------------------------------------------

```python
import os

# Set BASIC_MEMORY_ENV to "test" so pytest runs don't log to a file in utils.setup_logging()
os.environ["BASIC_MEMORY_ENV"] = "test"

```

--------------------------------------------------------------------------------
/src/basic_memory/models/base.py:
--------------------------------------------------------------------------------

```python
"""Base model class for SQLAlchemy models."""

from sqlalchemy.ext.asyncio import AsyncAttrs
from sqlalchemy.orm import DeclarativeBase


class Base(AsyncAttrs, DeclarativeBase):
    """Base class for all models"""

    pass

```

--------------------------------------------------------------------------------
/src/basic_memory/sync/__init__.py:
--------------------------------------------------------------------------------

```python
"""Basic Memory sync services."""

from .coordinator import SyncCoordinator, SyncStatus
from .sync_service import SyncService
from .watch_service import WatchService

__all__ = ["SyncService", "WatchService", "SyncCoordinator", "SyncStatus"]

```

--------------------------------------------------------------------------------
/src/basic_memory/__init__.py:
--------------------------------------------------------------------------------

```python
"""basic-memory - Local-first knowledge management combining Zettelkasten with knowledge graphs"""

# Package version - updated by release automation
__version__ = "0.17.5"

# API version for FastAPI - independent of package version
__api_version__ = "v0"

```

--------------------------------------------------------------------------------
/src/basic_memory/services/__init__.py:
--------------------------------------------------------------------------------

```python
"""Services package."""

from .service import BaseService
from .file_service import FileService
from .entity_service import EntityService
from .project_service import ProjectService

__all__ = ["BaseService", "FileService", "EntityService", "ProjectService"]

```

--------------------------------------------------------------------------------
/src/basic_memory/repository/__init__.py:
--------------------------------------------------------------------------------

```python
from .entity_repository import EntityRepository
from .observation_repository import ObservationRepository
from .project_repository import ProjectRepository
from .relation_repository import RelationRepository

__all__ = [
    "EntityRepository",
    "ObservationRepository",
    "ProjectRepository",
    "RelationRepository",
]

```

--------------------------------------------------------------------------------
/src/basic_memory/models/__init__.py:
--------------------------------------------------------------------------------

```python
"""Models package for basic-memory."""

import basic_memory
from basic_memory.models.base import Base
from basic_memory.models.knowledge import Entity, Observation, Relation
from basic_memory.models.project import Project

__all__ = [
    "Base",
    "Entity",
    "Observation",
    "Relation",
    "Project",
    "basic_memory",
]

```

--------------------------------------------------------------------------------
/src/basic_memory/services/service.py:
--------------------------------------------------------------------------------

```python
"""Base service class."""

from typing import TypeVar, Generic

from basic_memory.models import Base

T = TypeVar("T", bound=Base)


class BaseService(Generic[T]):
    """Base service that takes a repository."""

    def __init__(self, repository):
        """Initialize service with repository."""
        self.repository = repository

```

--------------------------------------------------------------------------------
/src/basic_memory/repository/project_info_repository.py:
--------------------------------------------------------------------------------

```python
from basic_memory.repository.repository import Repository
from basic_memory.models.project import Project


class ProjectInfoRepository(Repository):
    """Repository for statistics queries."""

    def __init__(self, session_maker):
        # Initialize with Project model as a reference
        super().__init__(session_maker, Project)

```

--------------------------------------------------------------------------------
/src/basic_memory/cli/commands/cloud/__init__.py:
--------------------------------------------------------------------------------

```python
"""Cloud commands package."""

# Import all commands to register them with typer
from basic_memory.cli.commands.cloud.core_commands import *  # noqa: F401,F403
from basic_memory.cli.commands.cloud.api_client import get_authenticated_headers, get_cloud_config  # noqa: F401
from basic_memory.cli.commands.cloud.upload_command import *  # noqa: F401,F403

```

--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/config.yml:
--------------------------------------------------------------------------------

```yaml
blank_issues_enabled: false
contact_links:
  - name: Basic Memory Discussions
    url: https://github.com/basicmachines-co/basic-memory/discussions
    about: For questions, ideas, or more open-ended discussions
  - name: Documentation
    url: https://github.com/basicmachines-co/basic-memory#readme
    about: Please check the documentation first before reporting an issue
```

--------------------------------------------------------------------------------
/test-int/cli/test_version_integration.py:
--------------------------------------------------------------------------------

```python
"""Integration tests for version command."""

from typer.testing import CliRunner

from basic_memory.cli.main import app
import basic_memory


def test_version_command():
    """Test 'bm --version' command shows version."""
    runner = CliRunner()
    result = runner.invoke(app, ["--version"])

    assert result.exit_code == 0
    assert basic_memory.__version__ in result.stdout

```

--------------------------------------------------------------------------------
/src/basic_memory/api/routers/__init__.py:
--------------------------------------------------------------------------------

```python
"""API routers."""

from . import knowledge_router as knowledge
from . import management_router as management
from . import memory_router as memory
from . import project_router as project
from . import resource_router as resource
from . import search_router as search
from . import prompt_router as prompt

__all__ = ["knowledge", "management", "memory", "project", "resource", "search", "prompt"]

```

--------------------------------------------------------------------------------
/tests/markdown/test_task_detection.py:
--------------------------------------------------------------------------------

```python
"""Test how markdown-it handles task lists."""

import textwrap

from markdown_it import MarkdownIt


def test_task_token_type():
    """Verify how markdown-it parses task list items."""
    md = MarkdownIt()
    # Dedent so the items parse as a bullet list, not an indented code block
    content = textwrap.dedent("""
        - [ ] Unchecked task
        - [x] Completed task
        - [-] In progress task
        """)

    tokens = md.parse(content)
    for token in tokens:
        print(f"{token.type}: {token.content}")

```

--------------------------------------------------------------------------------
/src/basic_memory/cli/commands/__init__.py:
--------------------------------------------------------------------------------

```python
"""CLI commands for basic-memory."""

from . import status, db, import_memory_json, mcp, import_claude_conversations
from . import import_claude_projects, import_chatgpt, tool, project, format, telemetry

__all__ = [
    "status",
    "db",
    "import_memory_json",
    "mcp",
    "import_claude_conversations",
    "import_claude_projects",
    "import_chatgpt",
    "tool",
    "project",
    "format",
    "telemetry",
]

```

--------------------------------------------------------------------------------
/smithery.yaml:
--------------------------------------------------------------------------------

```yaml
# Smithery configuration file: https://smithery.ai/docs/config#smitheryyaml

startCommand:
  type: stdio
  configSchema:
    # JSON Schema defining the configuration options for the MCP.
    type: object
    properties: {}
    description: No configuration required. This MCP server runs using the default command.
  commandFunction: |-
    (config) => ({
      command: 'basic-memory',
      args: ['mcp']
    })
  exampleConfig: {}
```

--------------------------------------------------------------------------------
/src/basic_memory/alembic/versions/6830751f5fb6_merge_multiple_heads.py:
--------------------------------------------------------------------------------

```python
"""Merge multiple heads

Revision ID: 6830751f5fb6
Revises: a2b3c4d5e6f7, g9a0b3c4d5e6
Create Date: 2025-12-29 12:46:46.476268

"""

from typing import Sequence, Union


# revision identifiers, used by Alembic.
revision: str = "6830751f5fb6"
down_revision: Union[str, Sequence[str], None] = ("a2b3c4d5e6f7", "g9a0b3c4d5e6")
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    pass


def downgrade() -> None:
    pass

```

--------------------------------------------------------------------------------
/src/basic_memory/markdown/__init__.py:
--------------------------------------------------------------------------------

```python
"""Base package for markdown parsing."""

from basic_memory.file_utils import ParseError
from basic_memory.markdown.entity_parser import EntityParser
from basic_memory.markdown.markdown_processor import MarkdownProcessor
from basic_memory.markdown.schemas import (
    EntityMarkdown,
    EntityFrontmatter,
    Observation,
    Relation,
)

__all__ = [
    "EntityMarkdown",
    "EntityFrontmatter",
    "EntityParser",
    "MarkdownProcessor",
    "Observation",
    "Relation",
    "ParseError",
]

```

--------------------------------------------------------------------------------
/.github/dependabot.yml:
--------------------------------------------------------------------------------

```yaml
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://docs.github.com/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file

version: 2
updates:
  - package-ecosystem: "" # See documentation for possible values
    directory: "/" # Location of package manifests
    schedule:
      interval: "weekly"
      

```

--------------------------------------------------------------------------------
/tests/mcp/test_resources.py:
--------------------------------------------------------------------------------

```python
from basic_memory.mcp.prompts.ai_assistant_guide import ai_assistant_guide


import pytest


@pytest.mark.asyncio
async def test_ai_assistant_guide_exists(app):
    """Test that the canvas spec resource exists and returns content."""
    # Call the resource function
    guide = ai_assistant_guide.fn()

    # Verify basic characteristics of the content
    assert guide is not None
    assert isinstance(guide, str)
    assert len(guide) > 0

    # Verify it contains the expected guide heading
    assert "# AI Assistant Guide" in guide

```

--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/documentation.md:
--------------------------------------------------------------------------------

```markdown
---
name: Documentation improvement
about: Suggest improvements or report issues with documentation
title: '[DOCS] '
labels: documentation
assignees: ''
---

## Documentation Issue
Describe what's missing, unclear, or incorrect in the current documentation.

## Location
Where is the problematic documentation? (URL, file path, or section)

## Suggested Improvement
How would you improve this documentation? Please be as specific as possible.

## Additional Context
Any additional information or screenshots that might help explain the issue or improvement.
```

--------------------------------------------------------------------------------
/tests/api/v2/conftest.py:
--------------------------------------------------------------------------------

```python
"""Fixtures for V2 API tests."""

import pytest

from basic_memory.models import Project


@pytest.fixture
def v2_project_url(test_project: Project) -> str:
    """Create a URL prefix for v2 project-scoped routes using project external_id.

    This helps tests generate the correct URL for v2 project-scoped routes
    which use external_id UUIDs instead of permalinks or integer IDs.
    """
    return f"/v2/projects/{test_project.external_id}"


@pytest.fixture
def v2_projects_url() -> str:
    """Base URL for v2 project management endpoints."""
    return "/v2/projects"

```

--------------------------------------------------------------------------------
/src/basic_memory/mcp/prompts/__init__.py:
--------------------------------------------------------------------------------

```python
"""Basic Memory MCP prompts.

Prompts are a special type of tool that returns a string response
formatted for a user to read, typically invoking one or more tools
and transforming their results into user-friendly text.
"""

# Import individual prompt modules to register them with the MCP server
from basic_memory.mcp.prompts import continue_conversation
from basic_memory.mcp.prompts import recent_activity
from basic_memory.mcp.prompts import search
from basic_memory.mcp.prompts import ai_assistant_guide

__all__ = [
    "ai_assistant_guide",
    "continue_conversation",
    "recent_activity",
    "search",
]

```

--------------------------------------------------------------------------------
/tests/services/test_initialization_cloud_mode_branches.py:
--------------------------------------------------------------------------------

```python
import pytest

from basic_memory.services.initialization import (
    ensure_initialization,
    initialize_app,
    initialize_file_sync,
)


@pytest.mark.asyncio
async def test_initialize_app_noop_in_cloud_mode(app_config):
    app_config.cloud_mode = True
    await initialize_app(app_config)


def test_ensure_initialization_noop_in_cloud_mode(app_config):
    app_config.cloud_mode = True
    ensure_initialization(app_config)


@pytest.mark.asyncio
async def test_initialize_file_sync_skips_in_test_env(app_config):
    # app_config fixture uses env="test"
    assert app_config.is_test_env is True
    await initialize_file_sync(app_config)

```

--------------------------------------------------------------------------------
/src/basic_memory/schemas/v2/__init__.py:
--------------------------------------------------------------------------------

```python
"""V2 API schemas - ID-based entity and project references."""

from basic_memory.schemas.v2.entity import (
    EntityResolveRequest,
    EntityResolveResponse,
    EntityResponseV2,
    MoveEntityRequestV2,
    ProjectResolveRequest,
    ProjectResolveResponse,
)
from basic_memory.schemas.v2.resource import (
    CreateResourceRequest,
    UpdateResourceRequest,
    ResourceResponse,
)

__all__ = [
    "EntityResolveRequest",
    "EntityResolveResponse",
    "EntityResponseV2",
    "MoveEntityRequestV2",
    "ProjectResolveRequest",
    "ProjectResolveResponse",
    "CreateResourceRequest",
    "UpdateResourceRequest",
    "ResourceResponse",
]

```

--------------------------------------------------------------------------------
/src/basic_memory/deps.py:
--------------------------------------------------------------------------------

```python
"""Dependency injection functions for basic-memory services.

DEPRECATED: This module is a backwards-compatibility shim.
Import from basic_memory.deps package submodules instead:
- basic_memory.deps.config for configuration
- basic_memory.deps.db for database/session
- basic_memory.deps.projects for project resolution
- basic_memory.deps.repositories for data access
- basic_memory.deps.services for business logic
- basic_memory.deps.importers for import functionality

This file will be removed once all callers are migrated.
"""

# Re-export everything from the deps package for backwards compatibility
from basic_memory.deps import *  # noqa: F401, F403  # pragma: no cover

```

--------------------------------------------------------------------------------
/src/basic_memory/cli/main.py:
--------------------------------------------------------------------------------

```python
"""Main CLI entry point for basic-memory."""  # pragma: no cover

from basic_memory.cli.app import app  # pragma: no cover

# Register commands
from basic_memory.cli.commands import (  # noqa: F401  # pragma: no cover
    cloud,
    db,
    import_chatgpt,
    import_claude_conversations,
    import_claude_projects,
    import_memory_json,
    mcp,
    project,
    status,
    telemetry,
    tool,
)

# Re-apply warning filter AFTER all imports
# (authlib adds a DeprecationWarning filter that overrides ours)
import warnings  # pragma: no cover

warnings.filterwarnings("ignore")  # pragma: no cover

if __name__ == "__main__":  # pragma: no cover
    # start the app
    app()

```

--------------------------------------------------------------------------------
/src/basic_memory/schemas/importer.py:
--------------------------------------------------------------------------------

```python
"""Schemas for import services."""

from typing import Dict, Optional

from pydantic import BaseModel


class ImportResult(BaseModel):
    """Common import result schema."""

    import_count: Dict[str, int]
    success: bool
    error_message: Optional[str] = None


class ChatImportResult(ImportResult):
    """Result schema for chat imports."""

    conversations: int = 0
    messages: int = 0


class ProjectImportResult(ImportResult):
    """Result schema for project imports."""

    documents: int = 0
    prompts: int = 0


class EntityImportResult(ImportResult):
    """Result schema for entity imports."""

    entities: int = 0
    relations: int = 0
    skipped_entities: int = 0

```

--------------------------------------------------------------------------------
/src/basic_memory/alembic/migrations.py:
--------------------------------------------------------------------------------

```python
"""Functions for managing database migrations."""

from pathlib import Path
from loguru import logger
from alembic.config import Config
from alembic import command


def get_alembic_config() -> Config:  # pragma: no cover
    """Get alembic config with correct paths."""
    migrations_path = Path(__file__).parent
    alembic_ini = migrations_path / "alembic.ini"

    config = Config(alembic_ini)
    config.set_main_option("script_location", str(migrations_path))
    return config


def reset_database():  # pragma: no cover
    """Drop and recreate all tables."""
    logger.info("Resetting database...")
    config = get_alembic_config()
    command.downgrade(config, "base")
    command.upgrade(config, "head")

```

--------------------------------------------------------------------------------
/src/basic_memory/sync/background_sync.py:
--------------------------------------------------------------------------------

```python
import asyncio

from loguru import logger

from basic_memory.config import get_project_config
from basic_memory.sync import SyncService, WatchService


async def sync_and_watch(
    sync_service: SyncService, watch_service: WatchService
):  # pragma: no cover
    """Run sync and watch service."""

    config = get_project_config()
    logger.info(f"Starting watch service to sync file changes in dir: {config.home}")
    # full sync
    await sync_service.sync(config.home)

    # watch changes
    await watch_service.run()


async def create_background_sync_task(
    sync_service: SyncService, watch_service: WatchService
):  # pragma: no cover
    return asyncio.create_task(sync_and_watch(sync_service, watch_service))

```

--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/feature_request.md:
--------------------------------------------------------------------------------

```markdown
---
name: Feature request
about: Suggest an idea for Basic Memory
title: '[FEATURE] '
labels: enhancement
assignees: ''
---

## Feature Description
A clear and concise description of the feature you'd like to see implemented.

## Problem This Feature Solves
Describe the problem or limitation you're experiencing that this feature would address.

## Proposed Solution
Describe how you envision this feature working. Include:
- User workflow
- Interface design (if applicable)
- Technical approach (if you have ideas)

## Alternative Solutions
Have you considered any alternative solutions or workarounds?

## Additional Context
Add any other context, screenshots, or examples about the feature request here.

## Impact
How would this feature benefit you and other users of Basic Memory?
```

--------------------------------------------------------------------------------
/src/basic_memory/importers/__init__.py:
--------------------------------------------------------------------------------

```python
"""Import services for Basic Memory."""

from basic_memory.importers.base import Importer
from basic_memory.importers.chatgpt_importer import ChatGPTImporter
from basic_memory.importers.claude_conversations_importer import (
    ClaudeConversationsImporter,
)
from basic_memory.importers.claude_projects_importer import ClaudeProjectsImporter
from basic_memory.importers.memory_json_importer import MemoryJsonImporter
from basic_memory.schemas.importer import (
    ChatImportResult,
    EntityImportResult,
    ImportResult,
    ProjectImportResult,
)

__all__ = [
    "Importer",
    "ChatGPTImporter",
    "ClaudeConversationsImporter",
    "ClaudeProjectsImporter",
    "MemoryJsonImporter",
    "ImportResult",
    "ChatImportResult",
    "EntityImportResult",
    "ProjectImportResult",
]

```

--------------------------------------------------------------------------------
/tests/mcp/test_server_lifespan_branches.py:
--------------------------------------------------------------------------------

```python
import pytest

from basic_memory import db
from basic_memory.mcp.server import lifespan, mcp


@pytest.mark.asyncio
async def test_mcp_lifespan_sync_disabled_branch(config_manager, monkeypatch):
    cfg = config_manager.load_config()
    cfg.sync_changes = False
    cfg.cloud_mode = False
    config_manager.save_config(cfg)

    async with lifespan(mcp):
        pass


@pytest.mark.asyncio
async def test_mcp_lifespan_cloud_mode_branch(config_manager):
    cfg = config_manager.load_config()
    cfg.sync_changes = True
    cfg.cloud_mode = True
    config_manager.save_config(cfg)

    async with lifespan(mcp):
        pass


@pytest.mark.asyncio
async def test_mcp_lifespan_shuts_down_db_when_engine_was_none(config_manager):
    db._engine = None
    async with lifespan(mcp):
        pass

```

--------------------------------------------------------------------------------
/src/basic_memory/deps/config.py:
--------------------------------------------------------------------------------

```python
"""Configuration dependency injection for basic-memory.

This module provides configuration-related dependencies.
Note: Long-term goal is to minimize direct ConfigManager access
and inject config from composition roots instead.
"""

from typing import Annotated

from fastapi import Depends

from basic_memory.config import BasicMemoryConfig, ConfigManager


def get_app_config() -> BasicMemoryConfig:  # pragma: no cover
    """Get the application configuration.

    Note: This is a transitional dependency. The goal is for composition roots
    to read ConfigManager and inject config explicitly. During migration,
    this provides the same behavior as before.
    """
    app_config = ConfigManager().config
    return app_config


AppConfigDep = Annotated[BasicMemoryConfig, Depends(get_app_config)]

```

--------------------------------------------------------------------------------
/src/basic_memory/services/exceptions.py:
--------------------------------------------------------------------------------

```python
class FileOperationError(Exception):
    """Raised when file operations fail"""

    pass


class EntityNotFoundError(Exception):
    """Raised when an entity cannot be found"""

    pass


class EntityCreationError(Exception):
    """Raised when an entity cannot be created"""

    pass


class DirectoryOperationError(Exception):
    """Raised when directory operations fail"""

    pass


class SyncFatalError(Exception):
    """Raised when sync encounters a fatal error that prevents continuation.

    Fatal errors include:
    - Project deleted during sync (FOREIGN KEY constraint)
    - Database corruption
    - Critical system failures

    When this exception is raised, the entire sync operation should be terminated
    immediately rather than attempting to continue with remaining files.
    """

    pass

```

--------------------------------------------------------------------------------
/src/basic_memory/api/v2/__init__.py:
--------------------------------------------------------------------------------

```python
"""API v2 module - ID-based entity references.

Version 2 of the Basic Memory API uses integer entity IDs as the primary
identifier for improved performance and stability.

Key changes from v1:
- Entity lookups use integer IDs instead of paths/permalinks
- Direct database queries instead of cascading resolution
- Stable references that don't change with file moves
- Better caching support

All v2 routers are registered with the /v2 prefix.
"""

from basic_memory.api.v2.routers import (
    knowledge_router,
    memory_router,
    project_router,
    resource_router,
    search_router,
    directory_router,
    prompt_router,
    importer_router,
)

__all__ = [
    "knowledge_router",
    "memory_router",
    "project_router",
    "resource_router",
    "search_router",
    "directory_router",
    "prompt_router",
    "importer_router",
]

```

--------------------------------------------------------------------------------
/src/basic_memory/api/v2/routers/__init__.py:
--------------------------------------------------------------------------------

```python
"""V2 API routers."""

from basic_memory.api.v2.routers.knowledge_router import router as knowledge_router
from basic_memory.api.v2.routers.project_router import router as project_router
from basic_memory.api.v2.routers.memory_router import router as memory_router
from basic_memory.api.v2.routers.search_router import router as search_router
from basic_memory.api.v2.routers.resource_router import router as resource_router
from basic_memory.api.v2.routers.directory_router import router as directory_router
from basic_memory.api.v2.routers.prompt_router import router as prompt_router
from basic_memory.api.v2.routers.importer_router import router as importer_router

__all__ = [
    "knowledge_router",
    "project_router",
    "memory_router",
    "search_router",
    "resource_router",
    "directory_router",
    "prompt_router",
    "importer_router",
]

```

--------------------------------------------------------------------------------
/.github/workflows/pr-title.yml:
--------------------------------------------------------------------------------

```yaml
name: "Pull Request Title"

on:
  pull_request:
    types:
      - opened
      - edited
      - synchronize

jobs:
  main:
    runs-on: ubuntu-latest
    steps:
      - uses: amannn/action-semantic-pull-request@v5
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          # Configure allowed types based on what we want in our changelog
          types: |
            feat
            fix
            chore
            docs
            style
            refactor
            perf
            test
            build
            ci
          # Require at least one from scope list (optional)
          scopes: |
            core
            cli
            api
            mcp
            sync
            ui
            deps
            installer
          # Allow breaking changes (needs "!" after type/scope)
          requireScopeForBreakingChange: true
```

--------------------------------------------------------------------------------
/src/basic_memory/mcp/clients/__init__.py:
--------------------------------------------------------------------------------

```python
"""Typed internal API clients for MCP tools.

These clients encapsulate API paths, error handling, and response validation.
MCP tools become thin adapters that call these clients and format results.

Usage:
    from basic_memory.mcp.clients import KnowledgeClient, SearchClient

    async with get_client() as http_client:
        knowledge = KnowledgeClient(http_client, project_id)
        entity = await knowledge.create_entity(entity_data)
"""

from basic_memory.mcp.clients.knowledge import KnowledgeClient
from basic_memory.mcp.clients.search import SearchClient
from basic_memory.mcp.clients.memory import MemoryClient
from basic_memory.mcp.clients.directory import DirectoryClient
from basic_memory.mcp.clients.resource import ResourceClient
from basic_memory.mcp.clients.project import ProjectClient

__all__ = [
    "KnowledgeClient",
    "SearchClient",
    "MemoryClient",
    "DirectoryClient",
    "ResourceClient",
    "ProjectClient",
]

```

--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/bug_report.md:
--------------------------------------------------------------------------------

```markdown
---
name: Bug report
about: Create a report to help us improve Basic Memory
title: '[BUG] '
labels: bug
assignees: ''
---

## Bug Description
A clear and concise description of what the bug is.

## Steps To Reproduce
Steps to reproduce the behavior:
1. Install version '...'
2. Run command '...'
3. Use tool/feature '...'
4. See error

## Expected Behavior
A clear and concise description of what you expected to happen.

## Actual Behavior
What actually happened, including error messages and output.

## Environment
- OS: [e.g. macOS 14.2, Ubuntu 22.04]
- Python version: [e.g. 3.12.1]
- Basic Memory version: [e.g. 0.1.0]
- Installation method: [e.g. pip, uv, source]
- Claude Desktop version (if applicable):

## Additional Context
- Configuration files (if relevant)
- Logs or screenshots
- Any special configuration or environment variables

## Possible Solution
If you have any ideas on what might be causing the issue or how to fix it, please share them here.
```

--------------------------------------------------------------------------------
/src/basic_memory/schemas/directory.py:
--------------------------------------------------------------------------------

```python
"""Schemas for directory tree operations."""

from datetime import datetime
from typing import List, Optional, Literal

from pydantic import BaseModel


class DirectoryNode(BaseModel):
    """Directory node in file system."""

    name: str
    file_path: Optional[str] = None  # Original path without leading slash (matches DB)
    directory_path: str  # Path with leading slash for directory navigation
    type: Literal["directory", "file"]
    children: List["DirectoryNode"] = []  # Default to empty list
    title: Optional[str] = None
    permalink: Optional[str] = None
    external_id: Optional[str] = None  # UUID (primary API identifier for v2)
    entity_id: Optional[int] = None  # Internal numeric ID
    entity_type: Optional[str] = None
    content_type: Optional[str] = None
    updated_at: Optional[datetime] = None

    @property
    def has_children(self) -> bool:
        return bool(self.children)


# Support for recursive model
DirectoryNode.model_rebuild()

```

--------------------------------------------------------------------------------
/src/basic_memory/alembic/versions/e7e1f4367280_add_scan_watermark_tracking_to_project.py:
--------------------------------------------------------------------------------

```python
"""Add scan watermark tracking to Project

Revision ID: e7e1f4367280
Revises: 9d9c1cb7d8f5
Create Date: 2025-10-20 16:42:46.625075

"""

from typing import Sequence, Union

from alembic import op
import sqlalchemy as sa


# revision identifiers, used by Alembic.
revision: str = "e7e1f4367280"
down_revision: Union[str, None] = "9d9c1cb7d8f5"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    # ### commands auto generated by Alembic - please adjust! ###
    with op.batch_alter_table("project", schema=None) as batch_op:
        batch_op.add_column(sa.Column("last_scan_timestamp", sa.Float(), nullable=True))
        batch_op.add_column(sa.Column("last_file_count", sa.Integer(), nullable=True))

    # ### end Alembic commands ###


def downgrade() -> None:
    # ### commands auto generated by Alembic - please adjust! ###
    with op.batch_alter_table("project", schema=None) as batch_op:
        batch_op.drop_column("last_file_count")
        batch_op.drop_column("last_scan_timestamp")

    # ### end Alembic commands ###

```

--------------------------------------------------------------------------------
/src/basic_memory/schemas/delete.py:
--------------------------------------------------------------------------------

```python
"""Delete operation schemas for the knowledge graph.

This module defines the request schemas for removing entities, relations,
and observations from the knowledge graph. Each operation has specific
implications and safety considerations.

Deletion Hierarchy:
1. Entity deletion removes the entity and all its relations
2. Relation deletion only removes the connection between entities
3. Observation deletion preserves entity and relations

Key Considerations:
- All deletions are permanent
- Entity deletions cascade to relations
- Files are removed along with entities
- Operations are atomic - they fully succeed or fail
"""

from typing import List, Annotated

from annotated_types import MinLen
from pydantic import BaseModel

from basic_memory.schemas.base import Permalink


class DeleteEntitiesRequest(BaseModel):
    """Delete one or more entities from the knowledge graph.

    This operation:
    1. Removes the entity from the database
    2. Deletes all observations attached to the entity
    3. Removes all relations where the entity is source or target
    4. Deletes the corresponding markdown file
    """

    permalinks: Annotated[List[Permalink], MinLen(1)]

```

--------------------------------------------------------------------------------
/tests/repository/test_project_info_repository.py:
--------------------------------------------------------------------------------

```python
"""Tests for the ProjectInfoRepository."""

import pytest
from sqlalchemy import text

from basic_memory.repository.project_info_repository import ProjectInfoRepository
from basic_memory.models.project import Project  # Add a model reference


@pytest.mark.asyncio
async def test_project_info_repository_init(session_maker):
    """Test ProjectInfoRepository initialization."""
    # Create a ProjectInfoRepository
    repository = ProjectInfoRepository(session_maker)

    # Verify it was initialized properly
    assert repository is not None
    assert repository.session_maker == session_maker
    # Model is set to a dummy value (Project is used as a reference here)
    assert repository.Model is Project


@pytest.mark.asyncio
async def test_project_info_repository_execute_query(session_maker):
    """Test ProjectInfoRepository execute_query method."""
    # Create a ProjectInfoRepository
    repository = ProjectInfoRepository(session_maker)

    # Execute a simple query
    result = await repository.execute_query(text("SELECT 1 as test"))

    # Verify the result
    assert result is not None
    row = result.fetchone()
    assert row is not None
    assert row[0] == 1

```

--------------------------------------------------------------------------------
/src/basic_memory/api/routers/search_router.py:
--------------------------------------------------------------------------------

```python
"""Router for search operations."""

from fastapi import APIRouter, BackgroundTasks

from basic_memory.api.routers.utils import to_search_results
from basic_memory.schemas.search import SearchQuery, SearchResponse
from basic_memory.deps import SearchServiceDep, EntityServiceDep

router = APIRouter(prefix="/search", tags=["search"])


@router.post("/", response_model=SearchResponse)
async def search(
    query: SearchQuery,
    search_service: SearchServiceDep,
    entity_service: EntityServiceDep,
    page: int = 1,
    page_size: int = 10,
):
    """Search across all knowledge and documents."""
    limit = page_size
    offset = (page - 1) * page_size
    results = await search_service.search(query, limit=limit, offset=offset)
    search_results = await to_search_results(entity_service, results)
    return SearchResponse(
        results=search_results,
        current_page=page,
        page_size=page_size,
    )


@router.post("/reindex")
async def reindex(background_tasks: BackgroundTasks, search_service: SearchServiceDep):
    """Recreate and populate the search index."""
    await search_service.reindex_all(background_tasks=background_tasks)
    return {"status": "ok", "message": "Reindex initiated"}

```

--------------------------------------------------------------------------------
/docker-compose-postgres.yml:
--------------------------------------------------------------------------------

```yaml
# Docker Compose configuration for Basic Memory with PostgreSQL
# Use this for local development and testing with Postgres backend
#
# Usage:
#   docker-compose -f docker-compose-postgres.yml up -d
#   docker-compose -f docker-compose-postgres.yml down

services:
  postgres:
    image: postgres:17
    container_name: basic-memory-postgres
    environment:
      # Local development/test credentials - NOT for production
      # These values are referenced by tests and justfile commands
      POSTGRES_DB: basic_memory
      POSTGRES_USER: basic_memory_user
      POSTGRES_PASSWORD: dev_password  # Simple password for local testing only
    ports:
      - "5433:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U basic_memory_user -d basic_memory"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped

volumes:
  # Named volume for Postgres data
  postgres_data:
    driver: local

  # Named volume for persistent configuration
  # Database will be stored in Postgres, not in this volume
  basic-memory-config:
    driver: local

# Network configuration (optional)
# networks:
#   basic-memory-net:
#     driver: bridge

```

--------------------------------------------------------------------------------
/tests/cli/test_cli_tools.py:
--------------------------------------------------------------------------------

```python
"""Tests for the Basic Memory CLI tools.

These tests verify CLI tool functionality. Some tests that previously used
subprocess have been removed due to a pre-existing CLI architecture issue
where ASGI transport doesn't trigger FastAPI lifespan initialization.

The subprocess-based integration tests are kept in test_cli_integration.py
for future use when the CLI initialization issue is fixed.
"""

import pytest


def test_ensure_migrations_functionality(app_config, monkeypatch):
    """Test the database initialization functionality."""
    import basic_memory.services.initialization as init_mod

    calls = {"count": 0}

    async def fake_initialize_database(*args, **kwargs):
        calls["count"] += 1

    monkeypatch.setattr(init_mod, "initialize_database", fake_initialize_database)
    init_mod.ensure_initialization(app_config)
    assert calls["count"] == 1


def test_ensure_migrations_propagates_errors(app_config, monkeypatch):
    """Test that initialization errors propagate to caller."""
    import basic_memory.services.initialization as init_mod

    async def fake_initialize_database(*args, **kwargs):
        raise Exception("Test error")

    monkeypatch.setattr(init_mod, "initialize_database", fake_initialize_database)

    with pytest.raises(Exception, match="Test error"):
        init_mod.ensure_initialization(app_config)

```

--------------------------------------------------------------------------------
/tests/schemas/test_relation_response_reference_resolution.py:
--------------------------------------------------------------------------------

```python
from basic_memory.schemas.response import RelationResponse


def test_relation_response_resolves_from_to_from_dict_fallbacks():
    data = {
        "permalink": "rel/1",
        "relation_type": "relates_to",
        "context": "ctx",
        "to_name": None,
        "from_entity": {"permalink": None, "file_path": "From.md"},
        "to_entity": {"permalink": None, "file_path": "To.md", "title": "To Title"},
    }

    rel = RelationResponse.model_validate(data)
    assert rel.from_id == "From.md"
    assert rel.to_id == "To.md"
    assert rel.to_name == "To Title"


def test_relation_response_resolves_from_to_from_orm_like_object_fallbacks():
    class EntityLike:
        def __init__(self, permalink, file_path, title=None):
            self.permalink = permalink
            self.file_path = file_path
            self.title = title

    class RelationLike:
        def __init__(self):
            self.permalink = "rel/2"
            self.relation_type = "relates_to"
            self.context = "ctx"
            self.to_name = None
            self.from_entity = EntityLike(permalink=None, file_path="From2.md")
            self.to_entity = EntityLike(permalink=None, file_path="To2.md", title="To2 Title")

    rel = RelationResponse.model_validate(RelationLike())
    assert rel.from_id == "From2.md"
    assert rel.to_id == "To2.md"
    assert rel.to_name == "To2 Title"

```
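
The fallback rules these tests pin down (prefer the permalink, fall back to the file path; `to_name` falling back to the target entity's title) can be summarized in a few lines. A minimal sketch of the resolution idea, not the real `RelationResponse` implementation:

```python
# Illustrative sketch of the reference-fallback rules the tests above verify.
from typing import Any, Optional


def _ref(entity: Optional[dict[str, Any]]) -> Optional[str]:
    """Prefer permalink, fall back to file_path."""
    if not entity:
        return None
    return entity.get("permalink") or entity.get("file_path")


def resolve_relation_refs(data: dict[str, Any]) -> dict[str, Any]:
    resolved = dict(data)
    resolved["from_id"] = _ref(data.get("from_entity"))
    resolved["to_id"] = _ref(data.get("to_entity"))
    to_entity = data.get("to_entity") or {}
    resolved["to_name"] = data.get("to_name") or to_entity.get("title")
    return resolved


resolved = resolve_relation_refs(
    {
        "to_name": None,
        "from_entity": {"permalink": None, "file_path": "From.md"},
        "to_entity": {"permalink": None, "file_path": "To.md", "title": "To Title"},
    }
)
assert resolved["from_id"] == "From.md"
assert resolved["to_id"] == "To.md"
assert resolved["to_name"] == "To Title"
```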

--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------

```dockerfile
FROM python:3.12-slim-bookworm

# Build arguments for user ID and group ID (defaults to 1000)
ARG UID=1000
ARG GID=1000

# Copy uv from official image
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/

# Set environment variables
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1

# Create a group and user with the provided UID/GID
# Check if the GID already exists, if not create appgroup
RUN (getent group ${GID} || groupadd --gid ${GID} appgroup) && \
    useradd --uid ${UID} --gid ${GID} --create-home --shell /bin/bash appuser

# Copy the project into the image
ADD . /app

# Sync the project into a new environment, asserting the lockfile is up to date
WORKDIR /app
RUN uv sync --locked

# Create necessary directories and set ownership
RUN mkdir -p /app/data/basic-memory /app/.basic-memory && \
    chown -R appuser:${GID} /app

# Set default data directory and add venv to PATH
ENV BASIC_MEMORY_HOME=/app/data/basic-memory \
    BASIC_MEMORY_PROJECT_ROOT=/app/data \
    PATH="/app/.venv/bin:$PATH"

# Switch to the non-root user
USER appuser

# Expose port
EXPOSE 8000

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
    CMD basic-memory --version || exit 1

# Use the basic-memory entrypoint to run the MCP server with default SSE transport
CMD ["basic-memory", "mcp", "--transport", "sse", "--host", "0.0.0.0", "--port", "8000"]
```

--------------------------------------------------------------------------------
/tests/api/conftest.py:
--------------------------------------------------------------------------------

```python
"""Tests for knowledge graph API routes."""

from typing import AsyncGenerator

import pytest
import pytest_asyncio
from fastapi import FastAPI
from httpx import AsyncClient, ASGITransport

from basic_memory.deps import get_project_config, get_engine_factory, get_app_config
from basic_memory.models import Project


@pytest_asyncio.fixture
async def app(test_config, engine_factory, app_config) -> FastAPI:
    """Create FastAPI test application."""
    from basic_memory.api.app import app

    app.dependency_overrides[get_app_config] = lambda: app_config
    app.dependency_overrides[get_project_config] = lambda: test_config.project_config
    app.dependency_overrides[get_engine_factory] = lambda: engine_factory
    return app


@pytest_asyncio.fixture
async def client(app: FastAPI) -> AsyncGenerator[AsyncClient, None]:
    """Create client using ASGI transport - same as CLI will use."""
    async with AsyncClient(transport=ASGITransport(app=app), base_url="http://test") as client:
        yield client


@pytest.fixture
def project_url(test_project: Project) -> str:
    """Create a URL prefix for the project routes.

    This helps tests generate the correct URL for project-scoped routes.
    """
    # Make sure this matches what's in tests/conftest.py for test_project creation
    # The permalink should be generated from "Test Project Context"
    return f"/{test_project.permalink}"

```
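
A sketch of how a test might consume these fixtures; the endpoint path matches the project-info route exercised elsewhere in this suite:

```python
# Hypothetical test using the conftest fixtures above.
import pytest


@pytest.mark.asyncio
async def test_project_info_is_reachable(client, project_url):
    response = await client.get(f"{project_url}/project/info")
    assert response.status_code == 200
```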

--------------------------------------------------------------------------------
/.github/workflows/dev-release.yml:
--------------------------------------------------------------------------------

```yaml
name: Dev Release

on:
  push:
    branches: [main]
  workflow_dispatch:  # Allow manual triggering

jobs:
  dev-release:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: write

    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.12"

      - name: Install uv
        run: |
          pip install uv

      - name: Install dependencies and build
        run: |
          uv venv
          uv sync
          uv build

      - name: Check if this is a dev version
        id: check_version
        run: |
          VERSION=$(uv run python -c "import basic_memory; print(basic_memory.__version__)")
          echo "version=$VERSION" >> $GITHUB_OUTPUT
          if [[ "$VERSION" == *"dev"* ]]; then
            echo "is_dev=true" >> $GITHUB_OUTPUT
            echo "Dev version detected: $VERSION"
          else
            echo "is_dev=false" >> $GITHUB_OUTPUT
            echo "Release version detected: $VERSION, skipping dev release"
          fi

      - name: Publish dev version to PyPI
        if: steps.check_version.outputs.is_dev == 'true'
        uses: pypa/gh-action-pypi-publish@release/v1
        with:
          password: ${{ secrets.PYPI_TOKEN }}
          skip-existing: true  # Don't fail if version already exists
```

--------------------------------------------------------------------------------
/tests/cli/test_cli_exit.py:
--------------------------------------------------------------------------------

```python
"""Regression tests for CLI command exit behavior.

These tests verify that CLI commands exit cleanly without hanging,
which was a bug fixed in the database initialization refactor.
"""

import subprocess
from pathlib import Path


def test_bm_version_exits_cleanly():
    """Test that 'bm --version' exits cleanly within timeout."""
    # Use uv run to ensure correct environment
    result = subprocess.run(
        ["uv", "run", "bm", "--version"],
        capture_output=True,
        text=True,
        timeout=10,
        cwd=Path(__file__).parent.parent.parent,  # Project root
    )
    assert result.returncode == 0
    assert "Basic Memory version:" in result.stdout


def test_bm_help_exits_cleanly():
    """Test that 'bm --help' exits cleanly within timeout."""
    result = subprocess.run(
        ["uv", "run", "bm", "--help"],
        capture_output=True,
        text=True,
        timeout=10,
        cwd=Path(__file__).parent.parent.parent,
    )
    assert result.returncode == 0
    assert "Basic Memory" in result.stdout


def test_bm_tool_help_exits_cleanly():
    """Test that 'bm tool --help' exits cleanly within timeout."""
    result = subprocess.run(
        ["uv", "run", "bm", "tool", "--help"],
        capture_output=True,
        text=True,
        timeout=10,
        cwd=Path(__file__).parent.parent.parent,
    )
    assert result.returncode == 0
    assert "tool" in result.stdout.lower()

```

--------------------------------------------------------------------------------
/src/basic_memory/alembic/versions/b3c3938bacdb_relation_to_name_unique_index.py:
--------------------------------------------------------------------------------

```python
"""relation to_name unique index

Revision ID: b3c3938bacdb
Revises: 3dae7c7b1564
Create Date: 2025-02-22 14:59:30.668466

"""

from typing import Sequence, Union

from alembic import op


# revision identifiers, used by Alembic.
revision: str = "b3c3938bacdb"
down_revision: Union[str, None] = "3dae7c7b1564"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    # SQLite doesn't support constraint changes through ALTER
    # Need to recreate table with desired constraints
    with op.batch_alter_table("relation") as batch_op:
        # Drop existing unique constraint
        batch_op.drop_constraint("uix_relation", type_="unique")

        # Add new constraints
        batch_op.create_unique_constraint(
            "uix_relation_from_id_to_id", ["from_id", "to_id", "relation_type"]
        )
        batch_op.create_unique_constraint(
            "uix_relation_from_id_to_name", ["from_id", "to_name", "relation_type"]
        )


def downgrade() -> None:
    with op.batch_alter_table("relation") as batch_op:
        # Drop new constraints
        batch_op.drop_constraint("uix_relation_from_id_to_name", type_="unique")
        batch_op.drop_constraint("uix_relation_from_id_to_id", type_="unique")

        # Restore original constraint
        batch_op.create_unique_constraint("uix_relation", ["from_id", "to_id", "relation_type"])

```

--------------------------------------------------------------------------------
/src/basic_memory/mcp/tools/__init__.py:
--------------------------------------------------------------------------------

```python
"""MCP tools for Basic Memory.

This package provides the complete set of tools for interacting with
Basic Memory through the MCP protocol. Importing this module registers
all tools with the MCP server.
"""

# Import tools to register them with MCP
from basic_memory.mcp.tools.delete_note import delete_note
from basic_memory.mcp.tools.read_content import read_content
from basic_memory.mcp.tools.build_context import build_context
from basic_memory.mcp.tools.recent_activity import recent_activity
from basic_memory.mcp.tools.read_note import read_note
from basic_memory.mcp.tools.view_note import view_note
from basic_memory.mcp.tools.write_note import write_note
from basic_memory.mcp.tools.search import search_notes
from basic_memory.mcp.tools.canvas import canvas
from basic_memory.mcp.tools.list_directory import list_directory
from basic_memory.mcp.tools.edit_note import edit_note
from basic_memory.mcp.tools.move_note import move_note
from basic_memory.mcp.tools.project_management import (
    list_memory_projects,
    create_memory_project,
    delete_project,
)

# ChatGPT-compatible tools
from basic_memory.mcp.tools.chatgpt_tools import search, fetch

__all__ = [
    "build_context",
    "canvas",
    "create_memory_project",
    "delete_note",
    "delete_project",
    "edit_note",
    "fetch",
    "list_directory",
    "list_memory_projects",
    "move_note",
    "read_content",
    "read_note",
    "recent_activity",
    "search",
    "search_notes",
    "view_note",
    "write_note",
]

```

--------------------------------------------------------------------------------
/docs/testing-coverage.md:
--------------------------------------------------------------------------------

```markdown
## Coverage policy (practical 100%)

Basic Memory’s test suite intentionally mixes:
- unit tests (fast, deterministic)
- integration tests (real filesystem + real DB via `test-int/`)

To keep the default CI signal **stable and meaningful**, the default `pytest` coverage report targets **core library logic** and **excludes** a small set of modules that are:
- highly environment-dependent (OS/DB tuning)
- inherently interactive (CLI)
- background-task orchestration (watchers/sync runners)
- external analytics

### What’s excluded (and why)

Coverage excludes are configured in `pyproject.toml` under `[tool.coverage.report].omit`.

Current exclusions include:
- `src/basic_memory/cli/**`: interactive wrappers; behavior is validated via higher-level tests and smoke tests.
- `src/basic_memory/db.py`: platform/backend tuning paths (SQLite/Postgres/Windows), covered by integration tests and targeted runs.
- `src/basic_memory/services/initialization.py`: startup orchestration/background tasks; covered indirectly by app/MCP entrypoints.
- `src/basic_memory/sync/sync_service.py`: heavy filesystem↔DB integration; validated in integration suite (not enforced in unit coverage).
- `src/basic_memory/telemetry.py`: external analytics; exercised lightly but excluded from strict coverage gate.

### Recommended additional runs

If you want extra confidence locally/CI:
- **Postgres backend**: run the suite with `BASIC_MEMORY_TEST_POSTGRES=1` (see the sketch after this file).
- **Strict backend-complete coverage**: run coverage against both SQLite and Postgres and combine the results (recommended).
```
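
A minimal sketch of the Postgres-backed run mentioned above, driven from Python rather than the shell; it assumes a reachable Postgres instance (e.g. the one from `docker-compose-postgres.yml`):

```python
# Run the test suite against the Postgres backend, as described above.
# Assumes Postgres is already running (e.g. via docker-compose-postgres.yml).
import os
import subprocess

env = dict(os.environ, BASIC_MEMORY_TEST_POSTGRES="1")
subprocess.run(["uv", "run", "pytest"], env=env, check=True)
```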

--------------------------------------------------------------------------------
/tests/api/test_relation_background_resolution.py:
--------------------------------------------------------------------------------

```python
"""Test that relation resolution happens in the background."""

import pytest

from basic_memory.api.routers.knowledge_router import resolve_relations_background


@pytest.mark.asyncio
async def test_resolve_relations_background_success():
    """Test that background relation resolution calls sync service correctly."""

    class StubSyncService:
        def __init__(self) -> None:
            self.calls: list[int] = []

        async def resolve_relations(self, *, entity_id: int) -> None:
            self.calls.append(entity_id)

    sync_service = StubSyncService()

    entity_id = 123
    entity_permalink = "test/entity"

    # Call the background function
    await resolve_relations_background(sync_service, entity_id, entity_permalink)

    # Verify sync service was called with the entity_id
    assert sync_service.calls == [entity_id]


@pytest.mark.asyncio
async def test_resolve_relations_background_handles_errors():
    """Test that background relation resolution handles errors gracefully."""

    class StubSyncService:
        def __init__(self) -> None:
            self.calls: list[int] = []

        async def resolve_relations(self, *, entity_id: int) -> None:
            self.calls.append(entity_id)
            raise Exception("Test error")

    sync_service = StubSyncService()

    entity_id = 123
    entity_permalink = "test/entity"

    # Call should not raise - errors are logged
    await resolve_relations_background(sync_service, entity_id, entity_permalink)

    # Verify sync service was called
    assert sync_service.calls == [entity_id]

```
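
Consistent with these tests, the background resolver presumably delegates to the sync service and logs failures instead of raising. A hedged sketch of that shape, not the actual `knowledge_router` implementation:

```python
# Sketch of a background resolver matching the behavior tested above:
# delegate to the sync service, swallow and log errors. Illustrative only.
from loguru import logger


async def resolve_relations_background_sketch(sync_service, entity_id: int, permalink: str) -> None:
    try:
        await sync_service.resolve_relations(entity_id=entity_id)
    except Exception:
        # Background work must not crash the request path; log and move on.
        logger.exception(f"Failed to resolve relations for {permalink} (id={entity_id})")
```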

--------------------------------------------------------------------------------
/src/basic_memory/alembic/versions/9d9c1cb7d8f5_add_mtime_and_size_columns_to_entity_.py:
--------------------------------------------------------------------------------

```python
"""Add mtime and size columns to Entity for sync optimization

Revision ID: 9d9c1cb7d8f5
Revises: a1b2c3d4e5f6
Create Date: 2025-10-20 05:07:55.173849

"""

from typing import Sequence, Union

from alembic import op
import sqlalchemy as sa


# revision identifiers, used by Alembic.
revision: str = "9d9c1cb7d8f5"
down_revision: Union[str, None] = "a1b2c3d4e5f6"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    # ### commands auto generated by Alembic - please adjust! ###
    with op.batch_alter_table("entity", schema=None) as batch_op:
        batch_op.add_column(sa.Column("mtime", sa.Float(), nullable=True))
        batch_op.add_column(sa.Column("size", sa.Integer(), nullable=True))
        batch_op.drop_constraint(batch_op.f("fk_entity_project_id"), type_="foreignkey")
        batch_op.create_foreign_key(
            batch_op.f("fk_entity_project_id"), "project", ["project_id"], ["id"]
        )

    # ### end Alembic commands ###


def downgrade() -> None:
    # ### commands auto generated by Alembic - please adjust! ###
    with op.batch_alter_table("entity", schema=None) as batch_op:
        batch_op.drop_constraint(batch_op.f("fk_entity_project_id"), type_="foreignkey")
        batch_op.create_foreign_key(
            batch_op.f("fk_entity_project_id"),
            "project",
            ["project_id"],
            ["id"],
            ondelete="CASCADE",
        )
        batch_op.drop_column("size")
        batch_op.drop_column("mtime")

    # ### end Alembic commands ###

```

--------------------------------------------------------------------------------
/src/basic_memory/schemas/v2/resource.py:
--------------------------------------------------------------------------------

```python
"""V2 resource schemas for file content operations."""

from pydantic import BaseModel, Field


class CreateResourceRequest(BaseModel):
    """Request to create a new resource file.

    File path is required for new resources since we need to know where
    to create the file.
    """

    file_path: str = Field(
        ...,
        description="Path to create the file, relative to project root",
        min_length=1,
        max_length=500,
    )
    content: str = Field(..., description="File content to write")


class UpdateResourceRequest(BaseModel):
    """Request to update an existing resource by entity ID.

    Only content is required - the file path is already known from the entity.
    Optionally can update the file_path to move the file.
    """

    content: str = Field(..., description="File content to write")
    file_path: str | None = Field(
        None,
        description="Optional new file path to move the resource",
        min_length=1,
        max_length=500,
    )


class ResourceResponse(BaseModel):
    """Response from resource operations."""

    entity_id: int = Field(..., description="Internal entity ID of the resource")
    external_id: str = Field(..., description="External UUID of the resource for API references")
    file_path: str = Field(..., description="File path of the resource")
    checksum: str = Field(..., description="File content checksum")
    size: int = Field(..., description="File size in bytes")
    created_at: float = Field(..., description="Creation timestamp")
    modified_at: float = Field(..., description="Modification timestamp")

```
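
A short usage sketch for these request models; Pydantic enforces the `min_length`/`max_length` constraints at validation time:

```python
# Usage sketch for the v2 resource request models above.
from pydantic import ValidationError

from basic_memory.schemas.v2.resource import CreateResourceRequest, UpdateResourceRequest

create = CreateResourceRequest(file_path="notes/idea.md", content="# Idea\n")
print(create.model_dump())

# Content-only update; file_path stays None unless the file should move.
update = UpdateResourceRequest(content="# Revised\n")

try:
    CreateResourceRequest(file_path="", content="x")  # violates min_length=1
except ValidationError as exc:
    print(exc.error_count(), "validation error(s)")
```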

--------------------------------------------------------------------------------
/.github/workflows/docker.yml:
--------------------------------------------------------------------------------

```yaml
name: Docker Image CI

on:
  push:
    tags:
      - 'v*'  # Trigger on version tags like v1.0.0, v0.13.0, etc.
  workflow_dispatch:  # Allow manual triggering for testing

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: basicmachines-co/basic-memory

jobs:
  docker:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
        with:
          platforms: linux/amd64,linux/arm64

      - name: Log in to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=ref,event=branch
            type=ref,event=pr
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            type=raw,value=latest,enable={{is_default_branch}}

      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          file: ./Dockerfile
          platforms: linux/amd64,linux/arm64
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max


```

--------------------------------------------------------------------------------
/src/basic_memory/alembic/versions/a2b3c4d5e6f7_add_search_index_entity_cascade.py:
--------------------------------------------------------------------------------

```python
"""Add cascade delete FK from search_index to entity

Revision ID: a2b3c4d5e6f7
Revises: f8a9b2c3d4e5
Create Date: 2025-12-02 07:00:00.000000

"""

from typing import Sequence, Union

from alembic import op


# revision identifiers, used by Alembic.
revision: str = "a2b3c4d5e6f7"
down_revision: Union[str, None] = "f8a9b2c3d4e5"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    """Add FK with CASCADE delete from search_index.entity_id to entity.id.

    This migration is Postgres-only because:
    - SQLite uses FTS5 virtual tables which don't support foreign keys
    - The FK enables automatic cleanup of search_index entries when entities are deleted
    """
    connection = op.get_bind()
    dialect = connection.dialect.name

    if dialect == "postgresql":
        # First, clean up any orphaned search_index entries where entity no longer exists
        op.execute("""
            DELETE FROM search_index
            WHERE entity_id IS NOT NULL
            AND entity_id NOT IN (SELECT id FROM entity)
        """)

        # Add FK with CASCADE - nullable FK allows search_index entries without entity_id
        op.create_foreign_key(
            "fk_search_index_entity_id",
            "search_index",
            "entity",
            ["entity_id"],
            ["id"],
            ondelete="CASCADE",
        )


def downgrade() -> None:
    """Remove the FK constraint."""
    connection = op.get_bind()
    dialect = connection.dialect.name

    if dialect == "postgresql":
        op.drop_constraint("fk_search_index_entity_id", "search_index", type_="foreignkey")

```

--------------------------------------------------------------------------------
/tests/sync/test_watch_service_atomic_adds.py:
--------------------------------------------------------------------------------

```python
import pytest
from watchfiles.main import Change

from basic_memory.sync.watch_service import WatchService


@pytest.mark.asyncio
async def test_handle_changes_reclassifies_added_existing_files_as_modified(
    app_config,
    project_repository,
    sync_service,
    test_project,
    project_config,
):
    """Regression: don't mutate `adds` while iterating.

    Some editors perform atomic writes that can show up as "added" events for files
    that already exist and have entities in the DB. We should process these as
    modifications for *all* affected files (not skip half the batch).
    """

    async def sync_service_factory(_project):
        return sync_service

    watch_service = WatchService(
        app_config=app_config,
        project_repository=project_repository,
        quiet=True,
        sync_service_factory=sync_service_factory,
    )

    # Create two files and sync them so they exist in the DB.
    file_a = project_config.home / "atomic-a.md"
    file_b = project_config.home / "atomic-b.md"
    file_a.write_text("# A\n\n- links_to [[B]]\n", encoding="utf-8")
    file_b.write_text("# B\n", encoding="utf-8")

    await sync_service.sync(project_config.home, project_name=test_project.name)

    # Simulate a watcher batch where both existing files show up as "added".
    changes = {
        (Change.added, str(file_a)),
        (Change.added, str(file_b)),
    }

    await watch_service.handle_changes(test_project, changes)

    # Both should have been processed as "modified" (reclassified), not "new".
    actions = [e.action for e in watch_service.state.recent_events]
    assert "new" not in actions
    assert actions.count("modified") >= 2

```
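
The regression this test guards against is the classic mutate-while-iterating bug. A minimal sketch of the safe pattern (illustrative, not the actual `WatchService` code): iterate over the input read-only and build new collections for each classification:

```python
# Illustrative fix pattern for the regression tested above: never mutate the
# collection being iterated; partition into new lists instead.
def reclassify(adds: list[str], known_paths: set[str]) -> tuple[list[str], list[str]]:
    """Split 'added' events into truly-new files and modifications."""
    new_files: list[str] = []
    modified: list[str] = []
    for path in adds:  # safe: `adds` is only read, never mutated
        (modified if path in known_paths else new_files).append(path)
    return new_files, modified


new, changed = reclassify(["a.md", "b.md"], known_paths={"a.md", "b.md"})
assert new == [] and changed == ["a.md", "b.md"]
```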

--------------------------------------------------------------------------------
/src/basic_memory/deps/db.py:
--------------------------------------------------------------------------------

```python
"""Database dependency injection for basic-memory.

This module provides database-related dependencies:
- Engine and session maker factories
- Session dependencies for request handling
"""

from typing import Annotated

from fastapi import Depends, Request
from loguru import logger
from sqlalchemy.ext.asyncio import (
    AsyncEngine,
    AsyncSession,
    async_sessionmaker,
)

from basic_memory import db
from basic_memory.deps.config import get_app_config


async def get_engine_factory(
    request: Request,
) -> tuple[AsyncEngine, async_sessionmaker[AsyncSession]]:  # pragma: no cover
    """Get cached engine and session maker from app state.

    For API requests, returns cached connections from app.state for optimal performance.
    For non-API contexts (CLI), falls back to direct database connection.
    """
    # Try to get cached connections from app state (API context)
    if (
        hasattr(request, "app")
        and hasattr(request.app.state, "engine")
        and hasattr(request.app.state, "session_maker")
    ):
        return request.app.state.engine, request.app.state.session_maker

    # Fallback for non-API contexts (CLI)
    logger.debug("Using fallback database connection for non-API context")
    app_config = get_app_config()
    engine, session_maker = await db.get_or_create_db(app_config.database_path)
    return engine, session_maker


EngineFactoryDep = Annotated[
    tuple[AsyncEngine, async_sessionmaker[AsyncSession]], Depends(get_engine_factory)
]


async def get_session_maker(engine_factory: EngineFactoryDep) -> async_sessionmaker[AsyncSession]:
    """Get session maker."""
    _, session_maker = engine_factory
    return session_maker


SessionMakerDep = Annotated[async_sessionmaker, Depends(get_session_maker)]

```
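
A sketch of how a router might consume these dependencies; the route path and query are illustrative, not an existing endpoint:

```python
# Illustrative route consuming the session-maker dependency above.
from fastapi import APIRouter
from sqlalchemy import text

from basic_memory.deps.db import SessionMakerDep

router = APIRouter()


@router.get("/health/db")
async def db_health(session_maker: SessionMakerDep) -> dict[str, bool]:
    async with session_maker() as session:
        await session.execute(text("SELECT 1"))
    return {"ok": True}
```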

--------------------------------------------------------------------------------
/src/basic_memory/mcp/prompts/search.py:
--------------------------------------------------------------------------------

```python
"""Search prompts for Basic Memory MCP server.

These prompts help users search and explore their knowledge base.
"""

from typing import Annotated, Optional

from loguru import logger
from pydantic import Field

from basic_memory.config import get_project_config
from basic_memory.mcp.async_client import get_client
from basic_memory.mcp.server import mcp
from basic_memory.mcp.tools.utils import call_post
from basic_memory.schemas.base import TimeFrame
from basic_memory.schemas.prompt import SearchPromptRequest


@mcp.prompt(
    name="search_knowledge_base",
    description="Search across all content in basic-memory",
)
async def search_prompt(
    query: str,
    timeframe: Annotated[
        Optional[TimeFrame],
        Field(description="How far back to search (e.g. '1d', '1 week')"),
    ] = None,
) -> str:
    """Search across all content in basic-memory.

    This prompt helps search for content in the knowledge base and
    provides helpful context about the results.

    Args:
        query: The search text to look for
        timeframe: Optional timeframe to limit results (e.g. '1d', '1 week')

    Returns:
        Formatted search results with context
    """
    logger.info(f"Searching knowledge base, query: {query}, timeframe: {timeframe}")

    async with get_client() as client:
        # Create request model
        request = SearchPromptRequest(query=query, timeframe=timeframe)

        project_url = get_project_config().project_url

        # Call the prompt API endpoint
        response = await call_post(
            client, f"{project_url}/prompt/search", json=request.model_dump(exclude_none=True)
        )

        # Extract the rendered prompt from the response
        result = response.json()
        return result["prompt"]

```

--------------------------------------------------------------------------------
/src/basic_memory/schemas/__init__.py:
--------------------------------------------------------------------------------

```python
"""Knowledge graph schema exports.

This module exports all schema classes to simplify imports.
Rather than importing from individual schema files, you can
import everything from basic_memory.schemas.
"""

# Base types and models
from basic_memory.schemas.base import (
    Observation,
    EntityType,
    RelationType,
    Relation,
    Entity,
)

# Delete operation models
from basic_memory.schemas.delete import (
    DeleteEntitiesRequest,
)

# Request models
from basic_memory.schemas.request import (
    SearchNodesRequest,
    GetEntitiesRequest,
    CreateRelationsRequest,
)

# Response models
from basic_memory.schemas.response import (
    SQLAlchemyModel,
    ObservationResponse,
    RelationResponse,
    EntityResponse,
    EntityListResponse,
    SearchNodesResponse,
    DeleteEntitiesResponse,
)

from basic_memory.schemas.project_info import (
    ProjectStatistics,
    ActivityMetrics,
    SystemStatus,
    ProjectInfoResponse,
)

from basic_memory.schemas.directory import (
    DirectoryNode,
)

from basic_memory.schemas.sync_report import (
    SyncReportResponse,
)

# For convenient imports, export all models
__all__ = [
    # Base
    "Observation",
    "EntityType",
    "RelationType",
    "Relation",
    "Entity",
    # Requests
    "SearchNodesRequest",
    "GetEntitiesRequest",
    "CreateRelationsRequest",
    # Responses
    "SQLAlchemyModel",
    "ObservationResponse",
    "RelationResponse",
    "EntityResponse",
    "EntityListResponse",
    "SearchNodesResponse",
    "DeleteEntitiesResponse",
    # Delete Operations
    "DeleteEntitiesRequest",
    # Project Info
    "ProjectStatistics",
    "ActivityMetrics",
    "SystemStatus",
    "ProjectInfoResponse",
    # Directory
    "DirectoryNode",
    # Sync
    "SyncReportResponse",
]

```

--------------------------------------------------------------------------------
/src/basic_memory/runtime.py:
--------------------------------------------------------------------------------

```python
"""Runtime mode resolution for Basic Memory.

This module centralizes runtime mode detection, ensuring cloud/local/test
determination happens in one place rather than scattered across modules.

Composition roots (containers) read ConfigManager and use this module
to resolve the runtime mode, then pass the result downstream.
"""

from enum import Enum, auto


class RuntimeMode(Enum):
    """Runtime modes for Basic Memory."""

    LOCAL = auto()  # Local standalone mode (default)
    CLOUD = auto()  # Cloud mode with remote sync
    TEST = auto()  # Test environment

    @property
    def is_cloud(self) -> bool:
        return self == RuntimeMode.CLOUD

    @property
    def is_local(self) -> bool:
        return self == RuntimeMode.LOCAL

    @property
    def is_test(self) -> bool:
        return self == RuntimeMode.TEST


def resolve_runtime_mode(
    cloud_mode_enabled: bool,
    is_test_env: bool,
) -> RuntimeMode:
    """Resolve the runtime mode from configuration flags.

    This is the single source of truth for mode resolution.
    Composition roots call this with config values they've read.

    Args:
        cloud_mode_enabled: Whether cloud mode is enabled in config
        is_test_env: Whether running in test environment

    Returns:
        The resolved RuntimeMode
    """
    # Trigger: test environment is detected
    # Why: tests need special handling (no file sync, isolated DB)
    # Outcome: returns TEST mode, skipping cloud mode check
    if is_test_env:
        return RuntimeMode.TEST

    # Trigger: cloud mode is enabled in config
    # Why: cloud mode changes auth, sync, and API behavior
    # Outcome: returns CLOUD mode for remote-first behavior
    if cloud_mode_enabled:
        return RuntimeMode.CLOUD

    return RuntimeMode.LOCAL

```
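
A sketch of the intended call pattern from a composition root; the flag values would normally come from `ConfigManager` and are illustrative here:

```python
# Illustrative composition-root usage of resolve_runtime_mode.
from basic_memory.runtime import resolve_runtime_mode

# These flags would normally be read from ConfigManager.
mode = resolve_runtime_mode(cloud_mode_enabled=False, is_test_env=False)

if mode.is_cloud:
    ...  # wire remote sync / auth
elif mode.is_test:
    ...  # isolated DB, no file sync
else:
    ...  # local standalone defaults
```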

--------------------------------------------------------------------------------
/tests/mcp/test_tool_project_management.py:
--------------------------------------------------------------------------------

```python
"""Tests for MCP project management tools."""

import pytest
from sqlalchemy import select

from basic_memory import db
from basic_memory.mcp.tools import list_memory_projects, create_memory_project, delete_project
from basic_memory.models.project import Project


@pytest.mark.asyncio
async def test_list_memory_projects_unconstrained(app, test_project):
    result = await list_memory_projects.fn()
    assert "Available projects:" in result
    assert f"• {test_project.name}" in result


@pytest.mark.asyncio
async def test_list_memory_projects_constrained_env(monkeypatch, app, test_project):
    monkeypatch.setenv("BASIC_MEMORY_MCP_PROJECT", test_project.name)
    result = await list_memory_projects.fn()
    assert f"Project: {test_project.name}" in result
    assert "constrained to a single project" in result


@pytest.mark.asyncio
async def test_create_and_delete_project_and_name_match_branch(
    app, tmp_path_factory, session_maker
):
    # Create a project through the tool (exercises POST + response formatting).
    project_root = tmp_path_factory.mktemp("extra-project-home")
    result = await create_memory_project.fn(
        project_name="My Project",
        project_path=str(project_root),
        set_default=False,
    )
    assert result.startswith("✓")
    assert "My Project" in result

    # Make permalink intentionally not derived from name so delete_project hits the name-match branch.
    async with db.scoped_session(session_maker) as session:
        project = (
            await session.execute(select(Project).where(Project.name == "My Project"))
        ).scalar_one()
        project.permalink = "custom-permalink"
        await session.commit()

    delete_result = await delete_project.fn("My Project")
    assert delete_result.startswith("✓")

```

--------------------------------------------------------------------------------
/tests/api/test_project_router_operations.py:
--------------------------------------------------------------------------------

```python
"""Tests for project router operation endpoints."""

import pytest


@pytest.mark.asyncio
async def test_get_project_info_additional(client, test_graph, project_url):
    """Test additional fields in the project info endpoint."""
    # Call the endpoint
    response = await client.get(f"{project_url}/project/info")

    # Verify response
    assert response.status_code == 200
    data = response.json()

    # Check specific fields we're interested in
    assert "available_projects" in data
    assert isinstance(data["available_projects"], dict)

    # Get a project from the list
    for project_name, project_info in data["available_projects"].items():
        # Verify project structure
        assert "path" in project_info
        assert "active" in project_info
        assert "is_default" in project_info
        break  # Just check the first one for structure


@pytest.mark.asyncio
async def test_project_list_additional(client, project_url):
    """Test additional fields in the project list endpoint."""
    # Call the endpoint
    response = await client.get("/projects/projects")

    # Verify response
    assert response.status_code == 200
    data = response.json()

    # Verify projects list structure in more detail
    assert "projects" in data
    assert len(data["projects"]) > 0

    # Verify the default project is identified
    default_project = data["default_project"]
    assert default_project

    # Verify the default_project appears in the projects list and is marked as default
    default_in_list = False
    for project in data["projects"]:
        if project["name"] == default_project:
            assert project["is_default"] is True
            default_in_list = True
            break

    assert default_in_list, "Default project should appear in the projects list"

```

--------------------------------------------------------------------------------
/src/basic_memory/alembic/versions/a1b2c3d4e5f6_fix_project_foreign_keys.py:
--------------------------------------------------------------------------------

```python
"""fix project foreign keys

Revision ID: a1b2c3d4e5f6
Revises: 647e7a75e2cd
Create Date: 2025-08-19 22:06:00.000000

"""

from typing import Sequence, Union

from alembic import op


# revision identifiers, used by Alembic.
revision: str = "a1b2c3d4e5f6"
down_revision: Union[str, None] = "647e7a75e2cd"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    """Re-establish foreign key constraints that were lost during project table recreation.

    The migration 647e7a75e2cd recreated the project table but did not re-establish
    the foreign key constraint from entity.project_id to project.id, causing
    foreign key constraint failures when trying to delete projects with related entities.
    """
    # SQLite doesn't allow adding foreign key constraints to existing tables easily
    # We need to be careful and handle the case where the constraint might already exist

    with op.batch_alter_table("entity", schema=None) as batch_op:
        # Try to drop existing foreign key constraint (may not exist)
        try:
            batch_op.drop_constraint("fk_entity_project_id", type_="foreignkey")
        except Exception:
            # Constraint may not exist, which is fine - we'll create it next
            pass

        # Add the foreign key constraint with CASCADE DELETE
        # This ensures that when a project is deleted, all related entities are also deleted
        batch_op.create_foreign_key(
            "fk_entity_project_id", "project", ["project_id"], ["id"], ondelete="CASCADE"
        )


def downgrade() -> None:
    """Remove the foreign key constraint."""
    with op.batch_alter_table("entity", schema=None) as batch_op:
        batch_op.drop_constraint("fk_entity_project_id", type_="foreignkey")

```

--------------------------------------------------------------------------------
/src/basic_memory/alembic/versions/502b60eaa905_remove_required_from_entity_permalink.py:
--------------------------------------------------------------------------------

```python
"""remove required from entity.permalink

Revision ID: 502b60eaa905
Revises: b3c3938bacdb
Create Date: 2025-02-24 13:33:09.790951

"""

from typing import Sequence, Union

from alembic import op
import sqlalchemy as sa


# revision identifiers, used by Alembic.
revision: str = "502b60eaa905"
down_revision: Union[str, None] = "b3c3938bacdb"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    # ### commands auto generated by Alembic - please adjust! ###
    with op.batch_alter_table("entity", schema=None) as batch_op:
        batch_op.alter_column("permalink", existing_type=sa.VARCHAR(), nullable=True)
        batch_op.drop_index("ix_entity_permalink")
        batch_op.create_index(batch_op.f("ix_entity_permalink"), ["permalink"], unique=False)
        batch_op.drop_constraint("uix_entity_permalink", type_="unique")
        batch_op.create_index(
            "uix_entity_permalink",
            ["permalink"],
            unique=True,
            sqlite_where=sa.text("content_type = 'text/markdown' AND permalink IS NOT NULL"),
        )

    # ### end Alembic commands ###


def downgrade() -> None:
    # ### commands auto generated by Alembic - please adjust! ###
    with op.batch_alter_table("entity", schema=None) as batch_op:
        batch_op.drop_index(
            "uix_entity_permalink",
            sqlite_where=sa.text("content_type = 'text/markdown' AND permalink IS NOT NULL"),
        )
        batch_op.create_unique_constraint("uix_entity_permalink", ["permalink"])
        batch_op.drop_index(batch_op.f("ix_entity_permalink"))
        batch_op.create_index("ix_entity_permalink", ["permalink"], unique=1)
        batch_op.alter_column("permalink", existing_type=sa.VARCHAR(), nullable=False)

    # ### end Alembic commands ###

```

--------------------------------------------------------------------------------
/src/basic_memory/schemas/cloud.py:
--------------------------------------------------------------------------------

```python
"""Schemas for cloud-related API responses."""

from pydantic import BaseModel, Field


class TenantMountInfo(BaseModel):
    """Response from /tenant/mount/info endpoint."""

    tenant_id: str = Field(..., description="Unique identifier for the tenant")
    bucket_name: str = Field(..., description="S3 bucket name for the tenant")


class MountCredentials(BaseModel):
    """Response from /tenant/mount/credentials endpoint."""

    access_key: str = Field(..., description="S3 access key for mount")
    secret_key: str = Field(..., description="S3 secret key for mount")


class CloudProject(BaseModel):
    """Representation of a cloud project."""

    name: str = Field(..., description="Project name")
    path: str = Field(..., description="Project path on cloud")


class CloudProjectList(BaseModel):
    """Response from /proxy/projects/projects endpoint."""

    projects: list[CloudProject] = Field(default_factory=list, description="List of cloud projects")


class CloudProjectCreateRequest(BaseModel):
    """Request to create a new cloud project."""

    name: str = Field(..., description="Project name")
    path: str = Field(..., description="Project path (permalink)")
    set_default: bool = Field(default=False, description="Set as default project")


class CloudProjectCreateResponse(BaseModel):
    """Response from creating a cloud project."""

    message: str = Field(..., description="Status message about the project creation")
    status: str = Field(..., description="Status of the creation (success or error)")
    default: bool = Field(..., description="True if the project was set as the default")
    old_project: dict | None = Field(None, description="Information about the previous project")
    new_project: dict | None = Field(
        None, description="Information about the newly created project"
    )

```

--------------------------------------------------------------------------------
/tests/mcp/conftest.py:
--------------------------------------------------------------------------------

```python
"""Tests for the MCP server implementation using FastAPI TestClient."""

from typing import AsyncGenerator

import pytest
import pytest_asyncio
from fastapi import FastAPI
from httpx import AsyncClient, ASGITransport
from mcp.server import FastMCP

from basic_memory.api.app import app as fastapi_app
from basic_memory.deps import get_project_config, get_engine_factory, get_app_config
from basic_memory.services.search_service import SearchService
from basic_memory.mcp.server import mcp as mcp_server


@pytest.fixture(scope="function")
def mcp() -> FastMCP:
    return mcp_server  # pyright: ignore [reportReturnType]


@pytest.fixture(scope="function")
def app(app_config, project_config, engine_factory, config_manager) -> FastAPI:
    """Create test FastAPI application."""
    app = fastapi_app
    app.dependency_overrides[get_app_config] = lambda: app_config
    app.dependency_overrides[get_project_config] = lambda: project_config
    app.dependency_overrides[get_engine_factory] = lambda: engine_factory
    return app


@pytest_asyncio.fixture(scope="function")
async def client(app: FastAPI) -> AsyncGenerator[AsyncClient, None]:
    """Create test client that both MCP and tests will use."""
    async with AsyncClient(transport=ASGITransport(app=app), base_url="http://test") as client:
        yield client


@pytest.fixture
def test_entity_data():
    """Sample data for creating a test entity."""
    return {
        "entities": [
            {
                "title": "Test Entity",
                "entity_type": "test",
                "summary": "",  # Empty string instead of None
            }
        ]
    }


@pytest_asyncio.fixture
async def init_search_index(search_service: SearchService):
    """Initialize search index. Request this fixture explicitly in tests that need it."""
    await search_service.init_search_index()

```

--------------------------------------------------------------------------------
/tests/test_runtime.py:
--------------------------------------------------------------------------------

```python
"""Tests for runtime mode resolution."""

from basic_memory.runtime import RuntimeMode, resolve_runtime_mode


class TestRuntimeMode:
    """Tests for RuntimeMode enum."""

    def test_local_mode_properties(self):
        mode = RuntimeMode.LOCAL
        assert mode.is_local is True
        assert mode.is_cloud is False
        assert mode.is_test is False

    def test_cloud_mode_properties(self):
        mode = RuntimeMode.CLOUD
        assert mode.is_local is False
        assert mode.is_cloud is True
        assert mode.is_test is False

    def test_test_mode_properties(self):
        mode = RuntimeMode.TEST
        assert mode.is_local is False
        assert mode.is_cloud is False
        assert mode.is_test is True


class TestResolveRuntimeMode:
    """Tests for resolve_runtime_mode function."""

    def test_resolves_to_test_when_test_env(self):
        """Test environment takes precedence over cloud mode."""
        mode = resolve_runtime_mode(cloud_mode_enabled=True, is_test_env=True)
        assert mode == RuntimeMode.TEST

    def test_resolves_to_cloud_when_enabled(self):
        """Cloud mode is used when enabled and not in test env."""
        mode = resolve_runtime_mode(cloud_mode_enabled=True, is_test_env=False)
        assert mode == RuntimeMode.CLOUD

    def test_resolves_to_local_by_default(self):
        """Local mode is the default when no other modes apply."""
        mode = resolve_runtime_mode(cloud_mode_enabled=False, is_test_env=False)
        assert mode == RuntimeMode.LOCAL

    def test_test_env_overrides_cloud_mode(self):
        """Test environment should override cloud mode."""
        # When both are enabled, test takes precedence
        mode = resolve_runtime_mode(cloud_mode_enabled=True, is_test_env=True)
        assert mode == RuntimeMode.TEST
        assert mode.is_test is True
        assert mode.is_cloud is False

```

--------------------------------------------------------------------------------
/src/basic_memory/markdown/schemas.py:
--------------------------------------------------------------------------------

```python
"""Schema models for entity markdown files."""

from datetime import datetime
from typing import List, Optional

from pydantic import BaseModel


class Observation(BaseModel):
    """An observation about an entity."""

    category: Optional[str] = "Note"
    content: str
    tags: Optional[List[str]] = None
    context: Optional[str] = None

    def __str__(self) -> str:
        obs_string = f"- [{self.category}] {self.content}"
        if self.context:
            obs_string += f" ({self.context})"
        return obs_string


class Relation(BaseModel):
    """A relation between entities."""

    type: str
    target: str
    context: Optional[str] = None

    def __str__(self) -> str:
        rel_string = f"- {self.type} [[{self.target}]]"
        if self.context:
            rel_string += f" ({self.context})"
        return rel_string


class EntityFrontmatter(BaseModel):
    """Required frontmatter fields for an entity."""

    metadata: dict = {}

    @property
    def tags(self) -> List[str]:
        return self.metadata.get("tags") if self.metadata else None  # pyright: ignore

    @property
    def title(self) -> str:
        return self.metadata.get("title") if self.metadata else None  # pyright: ignore

    @property
    def type(self) -> str:
        return self.metadata.get("type", "note") if self.metadata else "note"  # pyright: ignore

    @property
    def permalink(self) -> str:
        return self.metadata.get("permalink") if self.metadata else None  # pyright: ignore


class EntityMarkdown(BaseModel):
    """Complete entity combining frontmatter, content, and metadata."""

    frontmatter: EntityFrontmatter
    content: Optional[str] = None
    observations: List[Observation] = []
    relations: List[Relation] = []

    # created, updated will have values after a read
    created: Optional[datetime] = None
    modified: Optional[datetime] = None

```
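
The `__str__` methods above render the markdown line formats Basic Memory parses; a quick usage example:

```python
# Rendering example for the markdown schema models above.
from basic_memory.markdown.schemas import Observation, Relation

obs = Observation(category="Tech", content="Uses SQLite FTS5", context="search")
rel = Relation(type="implements", target="Search Spec", context="v2")

assert str(obs) == "- [Tech] Uses SQLite FTS5 (search)"
assert str(rel) == "- implements [[Search Spec]] (v2)"
```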

--------------------------------------------------------------------------------
/src/basic_memory/mcp/clients/search.py:
--------------------------------------------------------------------------------

```python
"""Typed client for search API operations.

Encapsulates all /v2/projects/{project_id}/search/* endpoints.
"""

from typing import Any

from httpx import AsyncClient

from basic_memory.mcp.tools.utils import call_post
from basic_memory.schemas.search import SearchResponse


class SearchClient:
    """Typed client for search operations.

    Centralizes:
    - API path construction for /v2/projects/{project_id}/search/*
    - Response validation via Pydantic models
    - Consistent error handling through call_* utilities

    Usage:
        async with get_client() as http_client:
            client = SearchClient(http_client, project_id)
            results = await client.search(search_query.model_dump())
    """

    def __init__(self, http_client: AsyncClient, project_id: str):
        """Initialize the search client.

        Args:
            http_client: HTTPX AsyncClient for making requests
            project_id: Project external_id (UUID) for API calls
        """
        self.http_client = http_client
        self.project_id = project_id
        self._base_path = f"/v2/projects/{project_id}/search"

    async def search(
        self,
        query: dict[str, Any],
        *,
        page: int = 1,
        page_size: int = 10,
    ) -> SearchResponse:
        """Search across all content in the knowledge base.

        Args:
            query: Search query dict (from SearchQuery.model_dump())
            page: Page number (1-indexed)
            page_size: Results per page

        Returns:
            SearchResponse with results and pagination

        Raises:
            ToolError: If the request fails
        """
        response = await call_post(
            self.http_client,
            f"{self._base_path}/",
            json=query,
            params={"page": page, "page_size": page_size},
        )
        return SearchResponse.model_validate(response.json())

```

--------------------------------------------------------------------------------
/src/basic_memory/importers/utils.py:
--------------------------------------------------------------------------------

```python
"""Utility functions for import services."""

import re
from datetime import datetime
from typing import Any


def clean_filename(name: str | None) -> str:  # pragma: no cover
    """Clean a string to be used as a filename.

    Args:
        name: The string to clean (can be None).

    Returns:
        A cleaned string suitable for use as a filename.
    """
    # Handle None or empty input
    if not name:
        return "untitled"
    # Replace common punctuation and whitespace with underscores
    name = re.sub(r"[\s\-,.:/\\\[\]\(\)]+", "_", name)
    # Remove any non-alphanumeric or underscore characters
    name = re.sub(r"[^\w]+", "", name)
    # Ensure the name isn't too long
    if len(name) > 100:  # pragma: no cover
        name = name[:100]
    # Ensure the name isn't empty
    if not name:  # pragma: no cover
        name = "untitled"
    return name


def format_timestamp(timestamp: Any) -> str:  # pragma: no cover
    """Format a timestamp for use in a filename or title.

    Args:
        timestamp: A timestamp in various formats.

    Returns:
        A formatted string representation of the timestamp.
    """
    if isinstance(timestamp, str):
        try:
            # Try ISO format
            timestamp = datetime.fromisoformat(timestamp.replace("Z", "+00:00"))
        except ValueError:
            try:
                # Try unix timestamp as string
                timestamp = datetime.fromtimestamp(float(timestamp)).astimezone()
            except ValueError:
                # Return as is if we can't parse it
                return timestamp
    elif isinstance(timestamp, (int, float)):
        # Unix timestamp
        timestamp = datetime.fromtimestamp(timestamp).astimezone()

    if isinstance(timestamp, datetime):
        return timestamp.strftime("%Y-%m-%d %H:%M:%S")

    # Return as is if we can't format it
    return str(timestamp)  # pragma: no cover

```

--------------------------------------------------------------------------------
/tests/importers/test_importer_utils.py:
--------------------------------------------------------------------------------

```python
"""Tests for importer utility functions."""

from datetime import datetime

from basic_memory.importers.utils import clean_filename, format_timestamp


def test_clean_filename():
    """Test clean_filename utility function."""
    # Test with normal string
    assert clean_filename("Hello World") == "Hello_World"

    # Test with punctuation
    assert clean_filename("Hello, World!") == "Hello_World"

    # Test with special characters
    assert clean_filename("File[1]/with\\special:chars") == "File_1_with_special_chars"

    # Test with long string (over 100 chars)
    long_str = "a" * 120
    assert len(clean_filename(long_str)) == 100

    # Test with empty string
    assert clean_filename("") == "untitled"

    # Test with None (fixes #451 - ChatGPT null titles)
    assert clean_filename(None) == "untitled"

    # Test with only special characters
    # Some implementations may return empty string or underscore
    result = clean_filename("!@#$%^&*()")
    assert result in ["untitled", "_", ""]


def test_format_timestamp():
    """Test format_timestamp utility function."""
    # Test with datetime object
    dt = datetime(2023, 1, 1, 12, 30, 45)
    assert format_timestamp(dt) == "2023-01-01 12:30:45"

    # Test with ISO format string
    iso_str = "2023-01-01T12:30:45Z"
    assert format_timestamp(iso_str) == "2023-01-01 12:30:45"

    # Test with Unix timestamp as int
    unix_ts = 1672577445  # 2023-01-01 12:30:45 UTC
    formatted = format_timestamp(unix_ts)
    # The exact format may vary by timezone, so we just check for the year
    assert "2023" in formatted

    # Test with Unix timestamp as string
    unix_str = "1672577445"
    formatted = format_timestamp(unix_str)
    assert "2023" in formatted

    # Test with unparseable string
    assert format_timestamp("not a timestamp") == "not a timestamp"

    # Test with non-timestamp object
    assert format_timestamp(None) == "None"

```

--------------------------------------------------------------------------------
/src/basic_memory/mcp/clients/directory.py:
--------------------------------------------------------------------------------

```python
"""Typed client for directory API operations.

Encapsulates all /v2/projects/{project_id}/directory/* endpoints.
"""

from typing import Optional, Any

from httpx import AsyncClient

from basic_memory.mcp.tools.utils import call_get


class DirectoryClient:
    """Typed client for directory listing operations.

    Centralizes:
    - API path construction for /v2/projects/{project_id}/directory/*
    - Response validation
    - Consistent error handling through call_* utilities

    Usage:
        async with get_client() as http_client:
            client = DirectoryClient(http_client, project_id)
            nodes = await client.list("/", depth=2)
    """

    def __init__(self, http_client: AsyncClient, project_id: str):
        """Initialize the directory client.

        Args:
            http_client: HTTPX AsyncClient for making requests
            project_id: Project external_id (UUID) for API calls
        """
        self.http_client = http_client
        self.project_id = project_id
        self._base_path = f"/v2/projects/{project_id}/directory"

    async def list(
        self,
        dir_name: str = "/",
        *,
        depth: int = 1,
        file_name_glob: Optional[str] = None,
    ) -> list[dict[str, Any]]:
        """List directory contents.

        Args:
            dir_name: Directory path to list (default: root)
            depth: How deep to traverse (default: 1)
            file_name_glob: Optional glob pattern to filter files

        Returns:
            List of directory nodes with their contents

        Raises:
            ToolError: If the request fails
        """
        params: dict = {
            "dir_name": dir_name,
            "depth": depth,
        }
        if file_name_glob:
            params["file_name_glob"] = file_name_glob

        response = await call_get(
            self.http_client,
            f"{self._base_path}/list",
            params=params,
        )
        return response.json()

```
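
A minimal usage sketch for `DirectoryClient` (not part of the repository; assumes the `get_client()` context manager shown elsewhere in this codebase and a hypothetical project UUID):

```python
import asyncio

from basic_memory.mcp.async_client import get_client
from basic_memory.mcp.clients.directory import DirectoryClient


async def list_markdown_files(project_id: str) -> None:
    """List markdown files two levels deep under the project root."""
    async with get_client() as http_client:
        client = DirectoryClient(http_client, project_id)
        nodes = await client.list("/", depth=2, file_name_glob="*.md")
        for node in nodes:
            print(node)


if __name__ == "__main__":
    # Placeholder UUID; substitute a real project external_id.
    asyncio.run(list_markdown_files("00000000-0000-0000-0000-000000000000"))
```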

--------------------------------------------------------------------------------
/tests/api/test_async_client.py:
--------------------------------------------------------------------------------

```python
"""Tests for async_client configuration."""

from httpx import AsyncClient, ASGITransport, Timeout

from basic_memory.mcp.async_client import create_client


def test_create_client_uses_asgi_when_no_remote_env(config_manager, monkeypatch):
    """Test that create_client uses ASGI transport when cloud mode is disabled."""
    monkeypatch.delenv("BASIC_MEMORY_USE_REMOTE_API", raising=False)
    monkeypatch.delenv("BASIC_MEMORY_CLOUD_MODE", raising=False)

    cfg = config_manager.load_config()
    cfg.cloud_mode = False
    config_manager.save_config(cfg)

    client = create_client()

    assert isinstance(client, AsyncClient)
    assert isinstance(client._transport, ASGITransport)
    assert str(client.base_url) == "http://test"


def test_create_client_uses_http_when_cloud_mode_env_set(config_manager, monkeypatch):
    """Test that create_client uses HTTP transport when BASIC_MEMORY_CLOUD_MODE is set."""
    monkeypatch.setenv("BASIC_MEMORY_CLOUD_MODE", "True")

    config = config_manager.load_config()
    client = create_client()

    assert isinstance(client, AsyncClient)
    assert not isinstance(client._transport, ASGITransport)
    # Cloud mode uses cloud_host/proxy as base_url
    assert str(client.base_url) == f"{config.cloud_host}/proxy/"


def test_create_client_configures_extended_timeouts(config_manager, monkeypatch):
    """Test that create_client configures 30-second timeouts for long operations."""
    monkeypatch.delenv("BASIC_MEMORY_USE_REMOTE_API", raising=False)
    monkeypatch.delenv("BASIC_MEMORY_CLOUD_MODE", raising=False)

    cfg = config_manager.load_config()
    cfg.cloud_mode = False
    config_manager.save_config(cfg)

    client = create_client()

    # Verify timeout configuration
    assert isinstance(client.timeout, Timeout)
    assert client.timeout.connect == 10.0  # 10 seconds for connection
    assert client.timeout.read == 30.0  # 30 seconds for reading
    assert client.timeout.write == 30.0  # 30 seconds for writing
    assert client.timeout.pool == 30.0  # 30 seconds for pool

```

--------------------------------------------------------------------------------
/tests/schemas/test_memory_url.py:
--------------------------------------------------------------------------------

```python
"""Tests for MemoryUrl parsing."""

import pytest

from basic_memory.schemas.memory import memory_url, memory_url_path, normalize_memory_url


def test_basic_permalink():
    """Test basic permalink parsing."""
    url = memory_url.validate_strings("memory://specs/search")
    assert str(url) == "memory://specs/search"
    assert memory_url_path(url) == "specs/search"


def test_glob_pattern():
    """Test pattern matching."""
    url = memory_url.validate_python("memory://specs/search/*")
    assert memory_url_path(url) == "specs/search/*"


def test_related_prefix():
    """Test related content prefix."""
    url = memory_url.validate_python("memory://related/specs/search")
    assert memory_url_path(url) == "related/specs/search"


def test_context_prefix():
    """Test context prefix."""
    url = memory_url.validate_python("memory://context/current")
    assert memory_url_path(url) == "context/current"


def test_complex_pattern():
    """Test multiple glob patterns."""
    url = memory_url.validate_python("memory://specs/*/search/*")
    assert memory_url_path(url) == "specs/*/search/*"


def test_path_with_dashes():
    """Test path with dashes and other chars."""
    url = memory_url.validate_python("memory://file-sync-and-note-updates-implementation")
    assert memory_url_path(url) == "file-sync-and-note-updates-implementation"


def test_str_representation():
    """Test converting back to string."""
    url = memory_url.validate_python("memory://specs/search")
    assert url == "memory://specs/search"


def test_normalize_memory_url():
    """Test converting back to string."""
    url = normalize_memory_url("memory://specs/search")
    assert url == "memory://specs/search"


def test_normalize_memory_url_no_prefix():
    """Test converting back to string."""
    url = normalize_memory_url("specs/search")
    assert url == "memory://specs/search"


def test_normalize_memory_url_empty():
    """Test that empty string raises ValueError."""
    with pytest.raises(ValueError, match="cannot be empty"):
        normalize_memory_url("")

```

--------------------------------------------------------------------------------
/src/basic_memory/mcp/prompts/continue_conversation.py:
--------------------------------------------------------------------------------

```python
"""Session continuation prompts for Basic Memory MCP server.

These prompts help users continue conversations and work across sessions,
providing context from previous interactions to maintain continuity.
"""

from typing import Annotated, Optional

from loguru import logger
from pydantic import Field

from basic_memory.config import get_project_config
from basic_memory.mcp.async_client import get_client
from basic_memory.mcp.server import mcp
from basic_memory.mcp.tools.utils import call_post
from basic_memory.schemas.base import TimeFrame
from basic_memory.schemas.prompt import ContinueConversationRequest


@mcp.prompt(
    name="continue_conversation",
    description="Continue a previous conversation",
)
async def continue_conversation(
    topic: Annotated[Optional[str], Field(description="Topic or keyword to search for")] = None,
    timeframe: Annotated[
        Optional[TimeFrame],
        Field(description="How far back to look for activity (e.g. '1d', '1 week')"),
    ] = None,
) -> str:
    """Continue a previous conversation or work session.

    This prompt helps you pick up where you left off by finding recent context
    about a specific topic or showing general recent activity.

    Args:
        topic: Topic or keyword to search for (optional)
        timeframe: How far back to look for activity

    Returns:
        Context from previous sessions on this topic
    """
    logger.info(f"Continuing session, topic: {topic}, timeframe: {timeframe}")

    async with get_client() as client:
        # Create request model
        request = ContinueConversationRequest(  # pyright: ignore [reportCallIssue]
            topic=topic, timeframe=timeframe
        )

        project_url = get_project_config().project_url

        # Call the prompt API endpoint
        response = await call_post(
            client,
            f"{project_url}/prompt/continue-conversation",
            json=request.model_dump(exclude_none=True),
        )

        # Extract the rendered prompt from the response
        result = response.json()
        return result["prompt"]

```

--------------------------------------------------------------------------------
/tests/cli/conftest.py:
--------------------------------------------------------------------------------

```python
import os
from pathlib import Path
from typing import AsyncGenerator

import pytest
import pytest_asyncio
from fastapi import FastAPI
from httpx import AsyncClient, ASGITransport

from basic_memory.api.app import app as fastapi_app
from basic_memory.deps import get_project_config, get_engine_factory, get_app_config


@pytest.fixture(autouse=True)
def isolated_home(tmp_path, monkeypatch) -> Path:
    """Isolate tests from user's HOME directory.

    This prevents tests from reading/writing to ~/.basic-memory/.bmignore
    or other user-specific configuration.

    Sets BASIC_MEMORY_HOME to tmp_path directly so the default project
    writes files to tmp_path, which is where tests expect to find them.
    """
    # Clear config cache to ensure fresh config for each test
    from basic_memory import config as config_module

    config_module._CONFIG_CACHE = None

    monkeypatch.setenv("HOME", str(tmp_path))
    if os.name == "nt":
        monkeypatch.setenv("USERPROFILE", str(tmp_path))
    # Set to tmp_path directly (not tmp_path/basic-memory) so default project
    # home is tmp_path - tests expect to find imported files there
    monkeypatch.setenv("BASIC_MEMORY_HOME", str(tmp_path))
    return tmp_path


@pytest_asyncio.fixture
async def app(app_config, project_config, engine_factory, test_config, aiolib) -> FastAPI:
    """Create test FastAPI application."""
    app = fastapi_app
    app.dependency_overrides[get_app_config] = lambda: app_config
    app.dependency_overrides[get_project_config] = lambda: project_config
    app.dependency_overrides[get_engine_factory] = lambda: engine_factory
    return app


@pytest_asyncio.fixture
async def client(app: FastAPI, aiolib) -> AsyncGenerator[AsyncClient, None]:
    """Create test client that both MCP and tests will use."""
    async with AsyncClient(transport=ASGITransport(app=app), base_url="http://test") as client:
        yield client


@pytest_asyncio.fixture
async def cli_env(project_config, client, test_config):
    """Set up CLI environment with correct project session."""
    return {"project_config": project_config, "client": client}

```

--------------------------------------------------------------------------------
/src/basic_memory/mcp/clients/resource.py:
--------------------------------------------------------------------------------

```python
"""Typed client for resource API operations.

Encapsulates all /v2/projects/{project_id}/resource/* endpoints.
"""

from typing import Optional

from httpx import AsyncClient, Response

from basic_memory.mcp.tools.utils import call_get


class ResourceClient:
    """Typed client for resource operations.

    Centralizes:
    - API path construction for /v2/projects/{project_id}/resource/*
    - Consistent error handling through call_* utilities

    Note: This client returns raw Response objects for resources since they
    may be text, images, or other binary content that needs special handling.

    Usage:
        async with get_client() as http_client:
            client = ResourceClient(http_client, project_id)
            response = await client.read(entity_id)
            text = response.text
    """

    def __init__(self, http_client: AsyncClient, project_id: str):
        """Initialize the resource client.

        Args:
            http_client: HTTPX AsyncClient for making requests
            project_id: Project external_id (UUID) for API calls
        """
        self.http_client = http_client
        self.project_id = project_id
        self._base_path = f"/v2/projects/{project_id}/resource"

    async def read(
        self,
        entity_id: str,
        *,
        page: Optional[int] = None,
        page_size: Optional[int] = None,
    ) -> Response:
        """Read a resource by entity ID.

        Args:
            entity_id: Entity external_id (UUID)
            page: Optional page number for paginated content
            page_size: Optional page size for paginated content

        Returns:
            Raw HTTP Response (caller handles text/binary content)

        Raises:
            ToolError: If the resource is not found or request fails
        """
        params: dict = {}
        if page is not None:
            params["page"] = page
        if page_size is not None:
            params["page_size"] = page_size

        return await call_get(
            self.http_client,
            f"{self._base_path}/{entity_id}",
            params=params if params else None,
        )

```
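
A minimal usage sketch for `ResourceClient` (not part of the repository; assumes `get_client()` and hypothetical project/entity UUIDs):

```python
from basic_memory.mcp.async_client import get_client
from basic_memory.mcp.clients.resource import ResourceClient


async def read_first_page(project_id: str, entity_id: str) -> str:
    """Read the first page of a paginated text resource."""
    async with get_client() as http_client:
        client = ResourceClient(http_client, project_id)
        response = await client.read(entity_id, page=1, page_size=10)
        # The client returns the raw Response; the caller decides whether
        # the body should be handled as text or binary content.
        return response.text
```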

--------------------------------------------------------------------------------
/src/basic_memory/mcp/server.py:
--------------------------------------------------------------------------------

```python
"""
Basic Memory FastMCP server.
"""

from contextlib import asynccontextmanager

from fastmcp import FastMCP
from loguru import logger

from basic_memory import db
from basic_memory.mcp.container import McpContainer, set_container
from basic_memory.services.initialization import initialize_app
from basic_memory.telemetry import show_notice_if_needed, track_app_started


@asynccontextmanager
async def lifespan(app: FastMCP):
    """Lifecycle manager for the MCP server.

    Handles:
    - Database initialization and migrations
    - Telemetry notice and tracking
    - File sync via SyncCoordinator (if enabled and not in cloud mode)
    - Proper cleanup on shutdown
    """
    # --- Composition Root ---
    # Create container and read config (single point of config access)
    container = McpContainer.create()
    set_container(container)

    logger.info(f"Starting Basic Memory MCP server (mode={container.mode.name})")

    # Show telemetry notice (first run only) and track startup
    show_notice_if_needed()
    track_app_started("mcp")

    # Track if we created the engine (vs test fixtures providing it)
    # This prevents disposing an engine provided by test fixtures when
    # multiple Client connections are made in the same test
    engine_was_none = db._engine is None

    # Initialize app (runs migrations, reconciles projects)
    await initialize_app(container.config)

    # Create and start sync coordinator (lifecycle centralized in coordinator)
    sync_coordinator = container.create_sync_coordinator()
    await sync_coordinator.start()

    try:
        yield
    finally:
        # Shutdown - coordinator handles clean task cancellation
        logger.info("Shutting down Basic Memory MCP server")
        await sync_coordinator.stop()

        # Only shutdown DB if we created it (not if test fixture provided it)
        if engine_was_none:
            await db.shutdown_db()
            logger.info("Database connections closed")
        else:  # pragma: no cover
            logger.debug("Skipping DB shutdown - engine provided externally")


mcp = FastMCP(
    name="Basic Memory",
    lifespan=lifespan,
)

```
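
A minimal launch sketch (not part of the repository; assumes FastMCP's `run()` entrypoint, since in practice the CLI starts the server via `basic-memory mcp`):

```python
from basic_memory.mcp.server import mcp

if __name__ == "__main__":
    # The lifespan above handles DB init, telemetry, and sync on startup.
    mcp.run(transport="stdio")
```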

--------------------------------------------------------------------------------
/llms-install.md:
--------------------------------------------------------------------------------

```markdown
# Basic Memory Installation Guide for LLMs

This guide is specifically designed to help AI assistants like Cline install and configure Basic Memory. Follow these
steps in order.

## Installation Steps

### 1. Install Basic Memory Package

Use one of the following package managers to install:

```bash
# Install with uv (recommended)
uv tool install basic-memory

# Or with pip
pip install basic-memory
```

### 2. Configure MCP Server

Add the following to your MCP client configuration file:

```json
{
  "mcpServers": {
    "basic-memory": {
      "command": "uvx",
      "args": [
        "basic-memory",
        "mcp"
      ]
    }
  }
}
```

For Claude Desktop, this file is located at:

- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`

### 3. Start Synchronization (optional)

To synchronize files in real-time, run:

```bash
basic-memory sync --watch
```

Or for a one-time sync:

```bash
basic-memory sync
```

## Configuration Options

### Custom Directory

To use a directory other than the default `~/basic-memory`:

```bash
basic-memory project add custom-project /path/to/your/directory
basic-memory project default custom-project
```

### Multiple Projects

To manage multiple knowledge bases:

```bash
# List all projects
basic-memory project list

# Add a new project
basic-memory project add work ~/work-basic-memory

# Set default project
basic-memory project default work
```

## Importing Existing Data

### From Claude.ai

```bash
basic-memory import claude conversations path/to/conversations.json
basic-memory import claude projects path/to/projects.json
```

### From ChatGPT

```bash
basic-memory import chatgpt path/to/conversations.json
```

### From MCP Memory Server

```bash
basic-memory import memory-json path/to/memory.json
```

## Troubleshooting

If you encounter issues:

1. Check that Basic Memory is properly installed:
   ```bash
   basic-memory --version
   ```

2. Verify the sync process is running:
   ```bash
   ps aux | grep basic-memory
   ```

3. Check sync output for errors:
   ```bash
   basic-memory sync --verbose
   ```

4. Check log output:
   ```bash
   cat ~/.basic-memory/basic-memory.log
   ```

For more detailed information, refer to the [full documentation](https://memory.basicmachines.co/).
```

--------------------------------------------------------------------------------
/tests/cli/cloud/test_upload_path.py:
--------------------------------------------------------------------------------

```python
from contextlib import asynccontextmanager

import httpx
import pytest

from basic_memory.cli.commands.cloud.upload import upload_path


@pytest.mark.asyncio
async def test_upload_path_dry_run_respects_gitignore_and_bmignore(config_home, tmp_path, capsys):
    root = tmp_path / "proj"
    root.mkdir()

    # Create a .gitignore that ignores one file
    (root / ".gitignore").write_text("ignored.md\n", encoding="utf-8")

    # Create files
    (root / "keep.md").write_text("keep", encoding="utf-8")
    (root / "ignored.md").write_text("ignored", encoding="utf-8")

    ok = await upload_path(root, "proj", verbose=True, use_gitignore=True, dry_run=True)
    assert ok is True

    out = capsys.readouterr().out
    # Verbose mode prints ignored files in the scan phase, but they must not appear
    # in the final "would be uploaded" list.
    assert "[INCLUDE] keep.md" in out or "keep.md" in out
    assert "[IGNORED] ignored.md" in out
    assert "Files that would be uploaded:" in out
    assert "  keep.md (" in out
    assert "  ignored.md (" not in out


@pytest.mark.asyncio
async def test_upload_path_non_dry_puts_files_and_skips_archives(config_home, tmp_path):
    root = tmp_path / "proj"
    root.mkdir()

    (root / "keep.md").write_text("keep", encoding="utf-8")
    (root / "archive.zip").write_bytes(b"zipbytes")

    seen = {"puts": []}

    async def handler(request: httpx.Request) -> httpx.Response:
        # Expect PUT to the webdav path
        assert request.method == "PUT"
        seen["puts"].append(request.url.path)
        # Must have mtime header
        assert request.headers.get("x-oc-mtime")
        return httpx.Response(201, text="Created")

    transport = httpx.MockTransport(handler)

    @asynccontextmanager
    async def client_cm_factory():
        async with httpx.AsyncClient(
            transport=transport, base_url="https://cloud.example.test"
        ) as client:
            yield client

    ok = await upload_path(
        root,
        "proj",
        verbose=False,
        use_gitignore=False,
        dry_run=False,
        client_cm_factory=client_cm_factory,
    )
    assert ok is True

    # Only keep.md uploaded; archive skipped
    assert "/webdav/proj/keep.md" in seen["puts"]
    assert all("archive.zip" not in p for p in seen["puts"])

```

--------------------------------------------------------------------------------
/src/basic_memory/cli/container.py:
--------------------------------------------------------------------------------

```python
"""CLI composition root for Basic Memory.

This container owns reading ConfigManager and environment variables for the
CLI entrypoint. Downstream modules receive config/dependencies explicitly
rather than reading globals.

Design principles:
- Only this module reads ConfigManager directly
- Runtime mode (cloud/local/test) is resolved here
- Different CLI commands may need different initialization
"""

from dataclasses import dataclass

from basic_memory.config import BasicMemoryConfig, ConfigManager
from basic_memory.runtime import RuntimeMode, resolve_runtime_mode


@dataclass
class CliContainer:
    """Composition root for the CLI entrypoint.

    Holds resolved configuration and runtime context.
    Created once at CLI startup, then used by subcommands.
    """

    config: BasicMemoryConfig
    mode: RuntimeMode

    @classmethod
    def create(cls) -> "CliContainer":
        """Create container by reading ConfigManager.

        This is the single point where CLI reads global config.
        """
        config = ConfigManager().config
        mode = resolve_runtime_mode(
            cloud_mode_enabled=config.cloud_mode_enabled,
            is_test_env=config.is_test_env,
        )
        return cls(config=config, mode=mode)

    # --- Runtime Mode Properties ---

    @property
    def is_cloud_mode(self) -> bool:
        """Whether running in cloud mode."""
        return self.mode.is_cloud


# Module-level container instance (set by app callback)
_container: CliContainer | None = None


def get_container() -> CliContainer:
    """Get the current CLI container.

    Returns:
        The CLI container

    Raises:
        RuntimeError: If container hasn't been initialized
    """
    if _container is None:
        raise RuntimeError("CLI container not initialized. Call set_container() first.")
    return _container


def set_container(container: CliContainer) -> None:
    """Set the CLI container (called by app callback)."""
    global _container
    _container = container


def get_or_create_container() -> CliContainer:
    """Get existing container or create new one.

    This is useful for CLI commands that might be called before
    the main app callback runs (e.g., eager options).
    """
    global _container
    if _container is None:
        _container = CliContainer.create()
    return _container

```
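
A minimal subcommand sketch (not part of the repository; the command name is hypothetical) showing how a CLI command reads resolved config through the container rather than touching `ConfigManager` directly:

```python
import typer

from basic_memory.cli.container import get_or_create_container

app = typer.Typer()


@app.command()
def mode() -> None:
    """Print the resolved runtime mode."""
    container = get_or_create_container()
    typer.echo("cloud" if container.is_cloud_mode else "local")
```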

--------------------------------------------------------------------------------
/.claude/commands/spec.md:
--------------------------------------------------------------------------------

```markdown
---
allowed-tools: mcp__basic-memory__write_note, mcp__basic-memory__read_note, mcp__basic-memory__search_notes, mcp__basic-memory__edit_note
argument-hint: [create|status|show|review] [spec-name]
description: Manage specifications in our development process
---

## Context

Specifications are managed in the Basic Memory "specs" project. All specs live in a centralized location accessible across all repositories via MCP tools.

See SPEC-1 and SPEC-2 in the "specs" project for the full specification-driven development process.

Available commands:
- `create [name]` - Create new specification
- `status` - Show all spec statuses
- `show [spec-name]` - Read a specific spec
- `review [spec-name]` - Review implementation against spec

## Your task

Execute the spec command: `/spec $ARGUMENTS`

### If command is "create":
1. Get next SPEC number by searching existing specs in "specs" project
2. Create new spec using template from SPEC-2
3. Use mcp__basic-memory__write_note with project="specs"
4. Include standard sections: Why, What, How, How to Evaluate

### If command is "status":
1. Use mcp__basic-memory__search_notes with project="specs"
2. Display table with spec number, title, and progress
3. Show completion status from checkboxes in content

### If command is "show":
1. Use mcp__basic-memory__read_note with project="specs"
2. Display the full spec content

### If command is "review":
1. Read the specified spec and its "How to Evaluate" section
2. Review current implementation against success criteria with careful evaluation of:
   - **Functional completeness** - All specified features working
   - **Test coverage analysis** - Actual test files and coverage percentage
     - Count existing test files vs required components/APIs/composables
     - Verify unit tests, integration tests, and end-to-end tests
     - Check for missing test categories (component, API, workflow)
   - **Code quality metrics** - TypeScript compilation, linting, performance
   - **Architecture compliance** - Component isolation, state management patterns
   - **Documentation completeness** - Implementation matches specification
3. Provide honest, accurate assessment - do not overstate completeness
4. Document findings and update spec with review results using mcp__basic-memory__edit_note
5. If gaps found, clearly identify what still needs to be implemented/tested

```

--------------------------------------------------------------------------------
/src/basic_memory/cli/commands/telemetry.py:
--------------------------------------------------------------------------------

```python
"""Telemetry commands for basic-memory CLI."""

import typer
from rich.console import Console
from rich.panel import Panel

from basic_memory.cli.app import app
from basic_memory.config import ConfigManager

console = Console()

# Create telemetry subcommand group
telemetry_app = typer.Typer(help="Manage anonymous telemetry settings")
app.add_typer(telemetry_app, name="telemetry")


@telemetry_app.command("enable")
def enable() -> None:
    """Enable anonymous telemetry.

    Telemetry helps improve Basic Memory by collecting anonymous usage data.
    No personal data, note content, or file paths are ever collected.
    """
    config_manager = ConfigManager()
    config = config_manager.config
    config.telemetry_enabled = True
    config_manager.save_config(config)
    console.print("[green]Telemetry enabled[/green]")
    console.print("[dim]Thank you for helping improve Basic Memory![/dim]")


@telemetry_app.command("disable")
def disable() -> None:
    """Disable anonymous telemetry.

    You can re-enable telemetry anytime with: bm telemetry enable
    """
    config_manager = ConfigManager()
    config = config_manager.config
    config.telemetry_enabled = False
    config_manager.save_config(config)
    console.print("[yellow]Telemetry disabled[/yellow]")


@telemetry_app.command("status")
def status() -> None:
    """Show current telemetry status and what's collected."""
    from basic_memory.telemetry import get_install_id, TELEMETRY_DOCS_URL

    config = ConfigManager().config

    status_text = (
        "[green]enabled[/green]" if config.telemetry_enabled else "[yellow]disabled[/yellow]"
    )

    console.print(f"\nTelemetry: {status_text}")
    console.print(f"Install ID: [dim]{get_install_id()}[/dim]")
    console.print()

    what_we_collect = """
[bold]What we collect:[/bold]
  - App version, Python version, OS, architecture
  - Feature usage (which MCP tools and CLI commands)
  - Sync statistics (entity count, duration)
  - Error types (sanitized, no file paths)

[bold]What we NEVER collect:[/bold]
  - Note content, file names, or paths
  - Personal information
  - IP addresses
"""

    console.print(
        Panel(
            what_we_collect.strip(),
            title="Telemetry Details",
            border_style="blue",
            expand=False,
        )
    )
    console.print(f"[dim]Details: {TELEMETRY_DOCS_URL}[/dim]")

```

--------------------------------------------------------------------------------
/tests/api/test_api_container.py:
--------------------------------------------------------------------------------

```python
"""Tests for API container composition root."""

import pytest

from basic_memory.api.container import (
    ApiContainer,
    get_container,
    set_container,
)
from basic_memory.runtime import RuntimeMode


class TestApiContainer:
    """Tests for ApiContainer."""

    def test_create_from_config(self, app_config):
        """Container can be created from config manager."""
        container = ApiContainer(config=app_config, mode=RuntimeMode.LOCAL)
        assert container.config == app_config
        assert container.mode == RuntimeMode.LOCAL

    def test_should_sync_files_when_enabled_and_not_test(self, app_config):
        """Sync should be enabled when config says so and not in test mode."""
        app_config.sync_changes = True
        container = ApiContainer(config=app_config, mode=RuntimeMode.LOCAL)
        assert container.should_sync_files is True

    def test_should_not_sync_files_when_disabled(self, app_config):
        """Sync should be disabled when config says so."""
        app_config.sync_changes = False
        container = ApiContainer(config=app_config, mode=RuntimeMode.LOCAL)
        assert container.should_sync_files is False

    def test_should_not_sync_files_in_test_mode(self, app_config):
        """Sync should be disabled in test mode regardless of config."""
        app_config.sync_changes = True
        container = ApiContainer(config=app_config, mode=RuntimeMode.TEST)
        assert container.should_sync_files is False


class TestContainerAccessors:
    """Tests for container get/set functions."""

    def test_get_container_raises_when_not_set(self, monkeypatch):
        """get_container raises RuntimeError when container not initialized."""
        # Clear any existing container
        import basic_memory.api.container as container_module

        monkeypatch.setattr(container_module, "_container", None)

        with pytest.raises(RuntimeError, match="API container not initialized"):
            get_container()

    def test_set_and_get_container(self, app_config, monkeypatch):
        """set_container allows get_container to return the container."""
        import basic_memory.api.container as container_module

        container = ApiContainer(config=app_config, mode=RuntimeMode.LOCAL)
        monkeypatch.setattr(container_module, "_container", None)

        set_container(container)
        assert get_container() is container

```

--------------------------------------------------------------------------------
/src/basic_memory/mcp/prompts/ai_assistant_guide.py:
--------------------------------------------------------------------------------

```python
from pathlib import Path

from basic_memory.config import ConfigManager
from basic_memory.mcp.server import mcp
from loguru import logger


@mcp.resource(
    uri="memory://ai_assistant_guide",
    name="ai assistant guide",
    description="Give an AI assistant guidance on how to use Basic Memory tools effectively",
)
def ai_assistant_guide() -> str:
    """Return a concise guide on Basic Memory tools and how to use them.

    Dynamically adapts instructions based on configuration:
    - Default project mode: Simplified instructions with automatic project
    - Regular mode: Project discovery and selection guidance
    - CLI constraint mode: Single project constraint information

    Returns:
        A focused guide on Basic Memory usage.
    """
    logger.info("Loading AI assistant guide resource")

    # Load base guide content
    guide_doc = Path(__file__).parent.parent / "resources" / "ai_assistant_guide.md"
    content = guide_doc.read_text(encoding="utf-8")

    # Check configuration for mode-specific instructions
    config = ConfigManager().config

    # Add mode-specific header
    mode_info = ""
    if config.default_project_mode:  # pragma: no cover
        mode_info = f"""
# 🎯 Default Project Mode Active

**Current Configuration**: All operations automatically use project '{config.default_project}'

**Simplified Usage**: You don't need to specify the project parameter in tool calls.
- `write_note(title="Note", content="...", folder="docs")` ✅
- Project parameter is optional and will default to '{config.default_project}'
- To use a different project, explicitly specify: `project="other-project"`

────────────────────────────────────────

"""
    else:  # pragma: no cover
        mode_info = """
# 🔧 Multi-Project Mode Active

**Current Configuration**: Project parameter required for all operations

**Project Discovery Required**: Use these tools to select a project:
- `list_memory_projects()` - See all available projects
- `recent_activity()` - Get project activity and recommendations
- Remember the user's project choice throughout the conversation

────────────────────────────────────────

"""

    # Prepend mode info to the guide
    enhanced_content = mode_info + content

    logger.info(
        f"Loaded AI assistant guide ({len(enhanced_content)} chars) with mode: {'default_project' if config.default_project_mode else 'multi_project'}"
    )
    return enhanced_content

```

--------------------------------------------------------------------------------
/.github/workflows/release.yml:
--------------------------------------------------------------------------------

```yaml
name: Release

on:
  push:
    tags:
      - 'v*'  # Trigger on version tags like v1.0.0, v0.13.0, etc.

jobs:
  release:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: write

    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.12"

      - name: Install uv
        run: |
          pip install uv

      - name: Install dependencies and build
        run: |
          uv venv
          uv sync
          uv build

      - name: Verify build succeeded
        run: |
          # Verify that build artifacts exist
          ls -la dist/
          echo "Build completed successfully"

      - name: Create GitHub Release
        uses: softprops/action-gh-release@v2
        with:
          files: |
            dist/*.whl
            dist/*.tar.gz
          generate_release_notes: true
          tag_name: ${{ github.ref_name }}
          token: ${{ secrets.GITHUB_TOKEN }}

      - name: Publish to PyPI
        uses: pypa/gh-action-pypi-publish@release/v1
        with:
          password: ${{ secrets.PYPI_TOKEN }}

  homebrew:
    name: Update Homebrew Formula
    needs: release
    runs-on: ubuntu-latest
    # Only run for stable releases (not dev, beta, or rc versions)
    if: ${{ !contains(github.ref_name, 'dev') && !contains(github.ref_name, 'b') && !contains(github.ref_name, 'rc') }}
    permissions:
      contents: write
      actions: read
    steps:
      - name: Update Homebrew formula
        uses: mislav/bump-homebrew-formula-action@v3
        with:
          # Formula name in homebrew-basic-memory repo
          formula-name: basic-memory
          # The tap repository
          homebrew-tap: basicmachines-co/homebrew-basic-memory
          # Base branch of the tap repository
          base-branch: main
          # Download URL will be automatically constructed from the tag
          download-url: https://github.com/basicmachines-co/basic-memory/archive/refs/tags/${{ github.ref_name }}.tar.gz
          # Commit message for the formula update
          commit-message: |
            {{formulaName}} {{version}}

            Created by https://github.com/basicmachines-co/basic-memory/actions/runs/${{ github.run_id }}
        env:
          # Personal Access Token with repo scope for homebrew-basic-memory repo
          COMMITTER_TOKEN: ${{ secrets.HOMEBREW_TOKEN }}


```

--------------------------------------------------------------------------------
/src/basic_memory/mcp/tools/view_note.py:
--------------------------------------------------------------------------------

```python
"""View note tool for Basic Memory MCP server."""

from textwrap import dedent
from typing import Optional

from loguru import logger
from fastmcp import Context

from basic_memory.mcp.server import mcp
from basic_memory.mcp.tools.read_note import read_note
from basic_memory.telemetry import track_mcp_tool


@mcp.tool(
    description="View a note as a formatted artifact for better readability.",
)
async def view_note(
    identifier: str,
    project: Optional[str] = None,
    page: int = 1,
    page_size: int = 10,
    context: Context | None = None,
) -> str:
    """View a markdown note as a formatted artifact.

    This tool reads a note using the same logic as read_note but instructs Claude
    to display the content as a markdown artifact in the Claude Desktop app.
    Project parameter optional with server resolution.

    Args:
        identifier: The title or permalink of the note to view
        project: Project name to read from. Optional - server will resolve using hierarchy.
                If unknown, use list_memory_projects() to discover available projects.
        page: Page number for paginated results (default: 1)
        page_size: Number of items per page (default: 10)
        context: Optional FastMCP context for performance caching.

    Returns:
        Instructions for Claude to create a markdown artifact with the note content.

    Examples:
        # View a note by title
        view_note("Meeting Notes")

        # View a note by permalink
        view_note("meetings/weekly-standup")

        # View with pagination
        view_note("large-document", page=2, page_size=5)

        # Explicit project specification
        view_note("Meeting Notes", project="my-project")

    Raises:
        HTTPError: If project doesn't exist or is inaccessible
        SecurityError: If identifier attempts path traversal
    """
    track_mcp_tool("view_note")
    logger.info(f"Viewing note: {identifier} in project: {project}")

    # Call the existing read_note logic
    content = await read_note.fn(identifier, project, page, page_size, context)

    # Check if this is an error message (note not found)
    if "# Note Not Found" in content:
        return content  # Return error message directly

    # Return instructions for Claude to create an artifact
    return dedent(f"""
        Note retrieved: "{identifier}"
        
        Display this note as a markdown artifact for the user.
    
        Content:
        ---
        {content}
        ---
        """).strip()

```

--------------------------------------------------------------------------------
/docker-compose.yml:
--------------------------------------------------------------------------------

```yaml
# Docker Compose configuration for Basic Memory
# See docs/Docker.md for detailed setup instructions

version: '3.8'

services:
  basic-memory:
    # Use pre-built image (recommended for most users)
    image: ghcr.io/basicmachines-co/basic-memory:latest
    
    # Uncomment to build locally instead:
    # build: .
    
    container_name: basic-memory-server
    
    # Volume mounts for knowledge directories and persistent data
    volumes:

      # Persistent storage for configuration and database
      - basic-memory-config:/root/.basic-memory:rw

      # Mount your knowledge directory (required)
      # Change './knowledge' to your actual Obsidian vault or knowledge directory
      - ./knowledge:/app/data:rw

      # OPTIONAL: Mount additional knowledge directories for multiple projects
      # - ./work-notes:/app/data/work:rw
      # - ./personal-notes:/app/data/personal:rw

      # You can edit the project config manually in the mounted config volume
      # The default project will be configured to use /app/data
    environment:
      # Project configuration
      - BASIC_MEMORY_DEFAULT_PROJECT=main
      
      # Enable real-time file synchronization (recommended for Docker)
      - BASIC_MEMORY_SYNC_CHANGES=true
      
      # Logging configuration
      - BASIC_MEMORY_LOG_LEVEL=INFO
      
      # Sync delay in milliseconds (adjust for performance vs responsiveness)
      - BASIC_MEMORY_SYNC_DELAY=1000
    
    # Port exposure for HTTP transport (only needed if not using STDIO)
    ports:
      - "8000:8000"
    
    # Command with SSE transport (configurable via environment variables above)
    # IMPORTANT: The SSE and streamable-http endpoints are not secured
    command: ["basic-memory", "mcp", "--transport", "sse", "--host", "0.0.0.0", "--port", "8000"]
    
    # Container management
    restart: unless-stopped
    
    # Health monitoring
    healthcheck:
      test: ["CMD", "basic-memory", "--version"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s
    
    # Optional: Resource limits
    # deploy:
    #   resources:
    #     limits:
    #       memory: 512M
    #       cpus: '0.5'
    #     reservations:
    #       memory: 256M
    #       cpus: '0.25'

volumes:
  # Named volume for persistent configuration and database
  # This ensures your configuration and knowledge graph persist across container restarts
  basic-memory-config:
    driver: local

# Network configuration (optional)
# networks:
#   basic-memory-net:
#     driver: bridge
```

--------------------------------------------------------------------------------
/src/basic_memory/mcp/resources/project_info.py:
--------------------------------------------------------------------------------

```python
"""Project info tool for Basic Memory MCP server."""

from typing import Optional

from loguru import logger
from fastmcp import Context

from basic_memory.mcp.async_client import get_client
from basic_memory.mcp.project_context import get_active_project
from basic_memory.mcp.server import mcp
from basic_memory.mcp.tools.utils import call_get
from basic_memory.schemas import ProjectInfoResponse


@mcp.resource(
    uri="memory://{project}/info",
    description="Get information and statistics about the current Basic Memory project.",
)
async def project_info(
    project: Optional[str] = None, context: Context | None = None
) -> ProjectInfoResponse:
    """Get comprehensive information about the current Basic Memory project.

    This tool provides detailed statistics and status information about your
    Basic Memory project, including:

    - Project configuration
    - Entity, observation, and relation counts
    - Graph metrics (most connected entities, isolated entities)
    - Recent activity and growth over time
    - System status (database, watch service, version)

    Use this tool to:
    - Verify your Basic Memory installation is working correctly
    - Get insights into your knowledge base structure
    - Monitor growth and activity over time
    - Identify potential issues like unresolved relations

    Args:
        project: Optional project name. If not provided, uses default_project
                (if default_project_mode=true) or CLI constraint. If unknown,
                use list_memory_projects() to discover available projects.
        context: Optional FastMCP context for performance caching.

    Returns:
        Detailed project information and statistics

    Examples:
        # Get information about the current/default project
        info = await project_info()

        # Get information about a specific project
        info = await project_info(project="my-project")

        # Check entity counts
        print(f"Total entities: {info.statistics.total_entities}")

        # Check system status
        print(f"Basic Memory version: {info.system.version}")
    """
    logger.info("Getting project info")

    async with get_client() as client:
        project_config = await get_active_project(client, project, context)
        project_url = project_config.permalink

        # Call the API endpoint
        response = await call_get(client, f"{project_url}/project/info")

        # Convert response to ProjectInfoResponse
        return ProjectInfoResponse.model_validate(response.json())

```

--------------------------------------------------------------------------------
/src/basic_memory/api/v2/routers/search_router.py:
--------------------------------------------------------------------------------

```python
"""V2 router for search operations.

This router uses external_id UUIDs for stable, API-friendly routing.
V1 uses string-based project names which are less efficient and less stable.
"""

from fastapi import APIRouter, BackgroundTasks, Path

from basic_memory.api.routers.utils import to_search_results
from basic_memory.schemas.search import SearchQuery, SearchResponse
from basic_memory.deps import SearchServiceV2ExternalDep, EntityServiceV2ExternalDep

# Note: No prefix here - it's added during registration as /v2/projects/{project_id}/search
router = APIRouter(tags=["search"])


@router.post("/search/", response_model=SearchResponse)
async def search(
    query: SearchQuery,
    search_service: SearchServiceV2ExternalDep,
    entity_service: EntityServiceV2ExternalDep,
    project_id: str = Path(..., description="Project external UUID"),
    page: int = 1,
    page_size: int = 10,
):
    """Search across all knowledge and documents in a project.

    V2 uses external_id UUIDs for stable API references.

    Args:
        project_id: Project external UUID from URL path
        query: Search query parameters (text, filters, etc.)
        search_service: Search service scoped to project
        entity_service: Entity service scoped to project
        page: Page number for pagination
        page_size: Number of results per page

    Returns:
        SearchResponse with paginated search results
    """
    limit = page_size
    offset = (page - 1) * page_size
    results = await search_service.search(query, limit=limit, offset=offset)
    search_results = await to_search_results(entity_service, results)
    return SearchResponse(
        results=search_results,
        current_page=page,
        page_size=page_size,
    )


@router.post("/search/reindex")
async def reindex(
    background_tasks: BackgroundTasks,
    search_service: SearchServiceV2ExternalDep,
    project_id: str = Path(..., description="Project external UUID"),
):
    """Recreate and populate the search index for a project.

    This is a background operation that rebuilds the search index
    from scratch. Useful after bulk updates or if the index becomes
    corrupted.

    Args:
        project_id: Project external UUID from URL path
        background_tasks: FastAPI background tasks handler
        search_service: Search service scoped to project

    Returns:
        Status message indicating reindex has been initiated
    """
    await search_service.reindex_all(background_tasks=background_tasks)
    return {"status": "ok", "message": "Reindex initiated"}

```
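
A client-side sketch of calling this endpoint (not part of the repository; the `/v2/projects/{project_id}` registration prefix and the `text` field on `SearchQuery` are assumptions inferred from the surrounding v2 clients):

```python
import httpx


async def search_project(base_url: str, project_id: str, text: str) -> dict:
    """POST a search query against the v2 search endpoint."""
    async with httpx.AsyncClient(base_url=base_url) as client:
        response = await client.post(
            f"/v2/projects/{project_id}/search/",
            params={"page": 1, "page_size": 10},
            json={"text": text},  # assumed SearchQuery field
        )
        response.raise_for_status()
        return response.json()
```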

--------------------------------------------------------------------------------
/tests/services/test_project_service_operations.py:
--------------------------------------------------------------------------------

```python
"""Additional tests for ProjectService operations."""

import os
import tempfile
from pathlib import Path

import pytest

from basic_memory.services.project_service import ProjectService


@pytest.mark.asyncio
async def test_get_project_from_database(project_service: ProjectService):
    """Test getting projects from the database."""
    # Generate unique project name for testing
    test_project_name = f"test-project-{os.urandom(4).hex()}"
    with tempfile.TemporaryDirectory() as temp_dir:
        test_root = Path(temp_dir)
        test_path = str(test_root / "test-project")

        # Make sure directory exists
        os.makedirs(test_path, exist_ok=True)

        try:
            # Add a project to the database
            project_data = {
                "name": test_project_name,
                "path": test_path,
                "permalink": test_project_name.lower().replace(" ", "-"),
                "is_active": True,
                "is_default": False,
            }
            await project_service.repository.create(project_data)

            # Verify we can get the project
            project = await project_service.repository.get_by_name(test_project_name)
            assert project is not None
            assert project.name == test_project_name
            assert project.path == test_path

        finally:
            # Clean up
            project = await project_service.repository.get_by_name(test_project_name)
            if project:
                await project_service.repository.delete(project.id)


@pytest.mark.asyncio
async def test_add_project_to_config(project_service: ProjectService, config_manager):
    """Test adding a project to the config manager."""
    # Generate unique project name for testing
    test_project_name = f"config-project-{os.urandom(4).hex()}"
    with tempfile.TemporaryDirectory() as temp_dir:
        test_root = Path(temp_dir)
        test_path = test_root / "config-project"

        # Make sure directory exists
        test_path.mkdir(parents=True, exist_ok=True)

        try:
            # Add a project to config only (using ConfigManager directly)
            config_manager.add_project(test_project_name, str(test_path))

            # Verify it's in the config
            assert test_project_name in project_service.projects
            assert Path(project_service.projects[test_project_name]) == test_path

        finally:
            # Clean up
            if test_project_name in project_service.projects:
                config_manager.remove_project(test_project_name)

```

--------------------------------------------------------------------------------
/src/basic_memory/mcp/clients/project.py:
--------------------------------------------------------------------------------

```python
"""Typed client for project API operations.

Encapsulates project-level endpoints.
"""

from typing import Any

from httpx import AsyncClient

from basic_memory.mcp.tools.utils import call_get, call_post, call_delete
from basic_memory.schemas.project_info import ProjectList, ProjectStatusResponse


class ProjectClient:
    """Typed client for project management operations.

    Centralizes:
    - API path construction for project endpoints
    - Response validation via Pydantic models
    - Consistent error handling through call_* utilities

    Note: This client does not require a project_id since it operates
    across projects.

    Usage:
        async with get_client() as http_client:
            client = ProjectClient(http_client)
            projects = await client.list_projects()
    """

    def __init__(self, http_client: AsyncClient):
        """Initialize the project client.

        Args:
            http_client: HTTPX AsyncClient for making requests
        """
        self.http_client = http_client

    async def list_projects(self) -> ProjectList:
        """List all available projects.

        Returns:
            ProjectList with all projects and default project name

        Raises:
            ToolError: If the request fails
        """
        response = await call_get(
            self.http_client,
            "/projects/projects",
        )
        return ProjectList.model_validate(response.json())

    async def create_project(self, project_data: dict[str, Any]) -> ProjectStatusResponse:
        """Create a new project.

        Args:
            project_data: Project creation data (name, path, set_default)

        Returns:
            ProjectStatusResponse with creation result

        Raises:
            ToolError: If the request fails
        """
        response = await call_post(
            self.http_client,
            "/projects/projects",
            json=project_data,
        )
        return ProjectStatusResponse.model_validate(response.json())

    async def delete_project(self, project_external_id: str) -> ProjectStatusResponse:
        """Delete a project by its external ID.

        Args:
            project_external_id: Project external ID (UUID)

        Returns:
            ProjectStatusResponse with deletion result

        Raises:
            ToolError: If the request fails
        """
        response = await call_delete(
            self.http_client,
            f"/v2/projects/{project_external_id}",
        )
        return ProjectStatusResponse.model_validate(response.json())

```
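
A minimal usage sketch for `ProjectClient` (not part of the repository; the `project_data` keys come from the `create_project` docstring above, and the path value is a placeholder):

```python
from basic_memory.mcp.async_client import get_client
from basic_memory.mcp.clients.project import ProjectClient


async def create_scratch_project() -> None:
    """List existing projects, then create a throwaway one."""
    async with get_client() as http_client:
        client = ProjectClient(http_client)
        projects = await client.list_projects()
        print(projects)
        status = await client.create_project(
            {"name": "scratch", "path": "~/scratch", "set_default": False}
        )
        print(status)
```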

--------------------------------------------------------------------------------
/src/basic_memory/schemas/sync_report.py:
--------------------------------------------------------------------------------

```python
"""Pydantic schemas for sync report responses."""

from datetime import datetime
from typing import TYPE_CHECKING, Dict, List, Set

from pydantic import BaseModel, Field

# avoid circular imports
if TYPE_CHECKING:  # pragma: no cover
    from basic_memory.sync.sync_service import SyncReport


class SkippedFileResponse(BaseModel):
    """Information about a file that was skipped due to repeated failures."""

    path: str = Field(description="File path relative to project root")
    reason: str = Field(description="Error message from last failure")
    failure_count: int = Field(description="Number of consecutive failures")
    first_failed: datetime = Field(description="Timestamp of first failure")

    model_config = {"from_attributes": True}


class SyncReportResponse(BaseModel):
    """Report of file changes found compared to database state.

    Used for API responses when scanning or syncing files.
    """

    new: Set[str] = Field(default_factory=set, description="Files on disk but not in database")
    modified: Set[str] = Field(default_factory=set, description="Files with different checksums")
    deleted: Set[str] = Field(default_factory=set, description="Files in database but not on disk")
    moves: Dict[str, str] = Field(
        default_factory=dict, description="Files moved (old_path -> new_path)"
    )
    checksums: Dict[str, str] = Field(
        default_factory=dict, description="Current file checksums (path -> checksum)"
    )
    skipped_files: List[SkippedFileResponse] = Field(
        default_factory=list, description="Files skipped due to repeated failures"
    )
    total: int = Field(description="Total number of changes")

    @classmethod
    def from_sync_report(cls, report: "SyncReport") -> "SyncReportResponse":
        """Convert SyncReport dataclass to Pydantic model.

        Args:
            report: SyncReport dataclass from sync service

        Returns:
            SyncReportResponse with same data
        """
        return cls(
            new=report.new,
            modified=report.modified,
            deleted=report.deleted,
            moves=report.moves,
            checksums=report.checksums,
            skipped_files=[
                SkippedFileResponse(
                    path=skipped.path,
                    reason=skipped.reason,
                    failure_count=skipped.failure_count,
                    first_failed=skipped.first_failed,
                )
                for skipped in report.skipped_files
            ],
            total=report.total,
        )

    model_config = {"from_attributes": True}

```
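
A construction sketch for `SyncReportResponse` (not part of the repository; the field values are invented placeholders showing the shape of an API response):

```python
from basic_memory.schemas.sync_report import SyncReportResponse

report = SyncReportResponse(
    new={"notes/new-idea.md"},
    modified={"notes/plan.md"},
    deleted=set(),
    moves={"drafts/old.md": "notes/old.md"},
    checksums={"notes/plan.md": "abc123"},
    total=3,
)
print(report.model_dump_json(indent=2))
```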

--------------------------------------------------------------------------------
/tests/cli/cloud/test_rclone_config_and_bmignore_filters.py:
--------------------------------------------------------------------------------

```python
import time

from basic_memory.cli.commands.cloud.bisync_commands import convert_bmignore_to_rclone_filters
from basic_memory.cli.commands.cloud.rclone_config import (
    configure_rclone_remote,
    get_rclone_config_path,
)
from basic_memory.ignore_utils import get_bmignore_path


def test_convert_bmignore_to_rclone_filters_creates_and_converts(config_home):
    bmignore = get_bmignore_path()
    bmignore.parent.mkdir(parents=True, exist_ok=True)
    bmignore.write_text(
        "\n".join(
            [
                "# comment",
                "",
                "node_modules",
                "*.pyc",
                ".git",
            ]
        )
        + "\n",
        encoding="utf-8",
    )

    rclone_filter = convert_bmignore_to_rclone_filters()
    assert rclone_filter.exists()
    content = rclone_filter.read_text(encoding="utf-8").splitlines()

    # Comments/empties preserved
    assert "# comment" in content
    assert "" in content
    # Directory pattern becomes recursive exclude
    assert "- node_modules/**" in content
    # Wildcard pattern becomes simple exclude
    assert "- *.pyc" in content
    assert "- .git/**" in content


def test_convert_bmignore_to_rclone_filters_is_cached_when_up_to_date(config_home):
    bmignore = get_bmignore_path()
    bmignore.parent.mkdir(parents=True, exist_ok=True)
    bmignore.write_text("node_modules\n", encoding="utf-8")

    first = convert_bmignore_to_rclone_filters()
    first_mtime = first.stat().st_mtime

    # Sleep briefly, then rewrite the filter file so its mtime is strictly
    # newer than bmignore's; the converter should then treat it as up to date
    time.sleep(0.01)
    first.write_text(first.read_text(encoding="utf-8"), encoding="utf-8")

    # Cached path: the same file is returned rather than regenerated
    second = convert_bmignore_to_rclone_filters()
    assert second == first
    assert second.stat().st_mtime >= first_mtime


def test_configure_rclone_remote_writes_config_and_backs_up_existing(config_home):
    cfg_path = get_rclone_config_path()
    cfg_path.parent.mkdir(parents=True, exist_ok=True)
    cfg_path.write_text("[other]\ntype = local\n", encoding="utf-8")

    remote = configure_rclone_remote(access_key="ak", secret_key="sk")
    assert remote == "basic-memory-cloud"

    # Config file updated
    text = cfg_path.read_text(encoding="utf-8")
    assert "[basic-memory-cloud]" in text
    assert "type = s3" in text
    assert "access_key_id = ak" in text
    assert "secret_access_key = sk" in text
    assert "encoding = Slash,InvalidUtf8" in text

    # Backup exists
    backups = list(cfg_path.parent.glob("rclone.conf.backup-*"))
    assert backups, "expected a backup of rclone.conf to be created"

```
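
The per-line conversion these tests pin down is small enough to sketch. This is not the shipped `convert_bmignore_to_rclone_filters` body, only the mapping the assertions imply: comments and blank lines pass through untouched, patterns containing wildcard characters become plain `- pattern` excludes, and bare names become recursive `- name/**` directory excludes.

```python
def to_rclone_filter_line(line: str) -> str:
    """Hypothetical per-line mapping inferred from the tests above."""
    stripped = line.strip()
    if not stripped or stripped.startswith("#"):
        return line  # comments and blank lines pass through unchanged
    if any(ch in stripped for ch in "*?["):
        return f"- {stripped}"  # wildcard patterns become simple excludes
    return f"- {stripped}/**"  # bare names become recursive directory excludes


assert to_rclone_filter_line("node_modules") == "- node_modules/**"
assert to_rclone_filter_line("*.pyc") == "- *.pyc"
assert to_rclone_filter_line(".git") == "- .git/**"
```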

--------------------------------------------------------------------------------
/tests/utils/test_parse_tags.py:
--------------------------------------------------------------------------------

```python
"""Tests for parse_tags utility function."""

from typing import List, Union

import pytest

from basic_memory.utils import parse_tags


@pytest.mark.parametrize(
    "input_tags,expected",
    [
        # None input
        (None, []),
        # List inputs
        ([], []),
        (["tag1", "tag2"], ["tag1", "tag2"]),
        (["tag1", "", "tag2"], ["tag1", "tag2"]),  # Empty tags are filtered
        ([" tag1 ", " tag2 "], ["tag1", "tag2"]),  # Whitespace is stripped
        # String inputs
        ("", []),
        ("tag1", ["tag1"]),
        ("tag1,tag2", ["tag1", "tag2"]),
        ("tag1, tag2", ["tag1", "tag2"]),  # Whitespace after comma is stripped
        ("tag1,,tag2", ["tag1", "tag2"]),  # Empty tags are filtered
        # Tags with leading '#' characters - these should be stripped
        (["#tag1", "##tag2"], ["tag1", "tag2"]),
        ("#tag1,##tag2", ["tag1", "tag2"]),
        (["tag1", "#tag2", "##tag3"], ["tag1", "tag2", "tag3"]),
        # Mixed whitespace and '#' characters
        ([" #tag1 ", " ##tag2 "], ["tag1", "tag2"]),
        (" #tag1 , ##tag2 ", ["tag1", "tag2"]),
        # JSON stringified arrays (common AI assistant issue)
        ('["tag1", "tag2", "tag3"]', ["tag1", "tag2", "tag3"]),
        ('["system", "overview", "reference"]', ["system", "overview", "reference"]),
        ('["#tag1", "##tag2"]', ["tag1", "tag2"]),  # JSON array with hash prefixes
        ('[ "tag1" , "tag2" ]', ["tag1", "tag2"]),  # JSON array with extra spaces
    ],
)
def test_parse_tags(input_tags: Union[List[str], str, None], expected: List[str]) -> None:
    """Test tag parsing with various input formats."""
    result = parse_tags(input_tags)
    assert result == expected


def test_parse_tags_special_case() -> None:
    """Test parsing from non-string, non-list types."""

    # Test with custom object that has __str__ method
    class TagObject:
        def __str__(self) -> str:
            return "tag1,tag2"

    result = parse_tags(TagObject())  # pyright: ignore [reportArgumentType]
    assert result == ["tag1", "tag2"]


def test_parse_tags_invalid_json() -> None:
    """Test that invalid JSON strings fall back to comma-separated parsing."""
    # Invalid JSON should fall back to comma-separated parsing
    result = parse_tags("[invalid json")
    assert result == ["[invalid json"]  # Treated as single tag

    result = parse_tags("[tag1, tag2]")  # Valid bracket format but not JSON
    assert result == ["[tag1", "tag2]"]  # Split by comma

    result = parse_tags('["tag1", "tag2"')  # Incomplete JSON
    assert result == ['["tag1"', '"tag2"']  # Fall back to comma separation

```
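
As a cross-check of the cases above, here is a hypothetical reimplementation (`parse_tags_sketch`, not the shipped `parse_tags`) that satisfies every parametrized expectation: `None` yields an empty list, list items are stripped of whitespace and leading `#` with empties dropped, and strings are first tried as a JSON array before falling back to comma splitting.

```python
import json
from typing import List, Union


def parse_tags_sketch(tags: Union[List[str], str, None]) -> List[str]:
    """Hypothetical reimplementation consistent with the test cases above."""
    if tags is None:
        return []
    if isinstance(tags, list):
        # Strip whitespace and leading '#'; drop empty entries
        return [t.strip().lstrip("#") for t in tags if t.strip()]
    text = str(tags)  # non-string, non-list inputs are stringified
    if text.strip().startswith("["):
        try:
            parsed = json.loads(text)
        except json.JSONDecodeError:
            pass  # invalid JSON falls through to comma-separated parsing
        else:
            if isinstance(parsed, list):
                return [str(t).strip().lstrip("#") for t in parsed if str(t).strip()]
    return [p.strip().lstrip("#") for p in text.split(",") if p.strip()]


assert parse_tags_sketch('["#tag1", "##tag2"]') == ["tag1", "tag2"]
assert parse_tags_sketch("[invalid json") == ["[invalid json"]
```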

--------------------------------------------------------------------------------
/.github/workflows/claude-issue-triage.yml:
--------------------------------------------------------------------------------

```yaml
name: Claude Issue Triage

on:
  issues:
    types: [opened]

jobs:
  triage:
    runs-on: ubuntu-latest
    permissions:
      issues: write
      id-token: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 1

      - name: Run Claude Issue Triage
        uses: anthropics/claude-code-action@v1
        with:
          claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
          track_progress: true  # Show triage progress
          prompt: |
            Analyze this new Basic Memory issue and perform triage:

            **Issue Analysis:**
            1. **Type Classification:**
               - Bug report (code defect)
               - Feature request (new functionality)
               - Enhancement (improvement to existing feature)
               - Documentation (docs improvement)
               - Question/Support (user help)
               - MCP tool issue (specific to MCP functionality)

            2. **Priority Assessment:**
               - Critical: Security issues, data loss, complete breakage
               - High: Major functionality broken, affects many users
               - Medium: Minor bugs, usability issues
               - Low: Nice-to-have improvements, cosmetic issues

            3. **Component Classification:**
               - CLI commands
               - MCP tools
               - Database/sync
               - Cloud functionality
               - Documentation
               - Testing

            4. **Complexity Estimate:**
               - Simple: Quick fix, documentation update
               - Medium: Requires some investigation/testing
               - Complex: Major feature work, architectural changes

            **Actions to Take:**
            1. Add appropriate labels using: `gh issue edit ${{ github.event.issue.number }} --add-label "label1,label2"`
            2. Check for duplicates using: `gh search issues`
            3. If duplicate found, comment mentioning the original issue
            4. For feature requests, ask clarifying questions if needed
            5. For bugs, request reproduction steps if missing

            **Available Labels:**
            - Type: bug, enhancement, feature, documentation, question, mcp-tool
            - Priority: critical, high, medium, low
            - Component: cli, mcp, database, cloud, docs, testing
            - Complexity: simple, medium, complex
            - Status: needs-reproduction, needs-clarification, duplicate

            Read the issue carefully and provide helpful triage with appropriate labels.

          claude_args: '--allowed-tools "Bash(gh issue:*),Bash(gh search:*),Read"'
```
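
Since the prompt embeds its own label taxonomy, here is a hedged sketch of how candidate labels might be validated before shelling out to `gh issue edit`. `ALLOWED_LABELS` is transcribed from the prompt above; the helper itself is hypothetical and not part of the workflow.

```python
# Hypothetical helper: filter candidate labels against the allow-list the
# triage prompt defines before building the `gh issue edit` argument.
ALLOWED_LABELS = {
    "bug", "enhancement", "feature", "documentation", "question", "mcp-tool",
    "critical", "high", "medium", "low",
    "cli", "mcp", "database", "cloud", "docs", "testing",
    "simple", "complex",
    "needs-reproduction", "needs-clarification", "duplicate",
}


def add_label_args(candidates: list) -> str:
    labels = [c for c in candidates if c in ALLOWED_LABELS]
    return f"--add-label {','.join(labels)}" if labels else ""


assert add_label_args(["bug", "cli", "made-up"]) == "--add-label bug,cli"
```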