This is page 2 of 27. Use http://codebase.md/basicmachines-co/basic-memory?lines=true&page={x} to view the full context.
# Directory Structure
```
├── .claude
│ ├── commands
│ │ ├── release
│ │ │ ├── beta.md
│ │ │ ├── changelog.md
│ │ │ ├── release-check.md
│ │ │ └── release.md
│ │ ├── spec.md
│ │ └── test-live.md
│ └── settings.json
├── .dockerignore
├── .env.example
├── .github
│ ├── dependabot.yml
│ ├── ISSUE_TEMPLATE
│ │ ├── bug_report.md
│ │ ├── config.yml
│ │ ├── documentation.md
│ │ └── feature_request.md
│ └── workflows
│ ├── claude-code-review.yml
│ ├── claude-issue-triage.yml
│ ├── claude.yml
│ ├── dev-release.yml
│ ├── docker.yml
│ ├── pr-title.yml
│ ├── release.yml
│ └── test.yml
├── .gitignore
├── .python-version
├── CHANGELOG.md
├── CITATION.cff
├── CLA.md
├── CLAUDE.md
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── docker-compose-postgres.yml
├── docker-compose.yml
├── Dockerfile
├── docs
│ ├── ai-assistant-guide-extended.md
│ ├── ARCHITECTURE.md
│ ├── character-handling.md
│ ├── cloud-cli.md
│ ├── Docker.md
│ └── testing-coverage.md
├── justfile
├── LICENSE
├── llms-install.md
├── pyproject.toml
├── README.md
├── SECURITY.md
├── smithery.yaml
├── specs
│ ├── SPEC-1 Specification-Driven Development Process.md
│ ├── SPEC-10 Unified Deployment Workflow and Event Tracking.md
│ ├── SPEC-11 Basic Memory API Performance Optimization.md
│ ├── SPEC-12 OpenTelemetry Observability.md
│ ├── SPEC-13 CLI Authentication with Subscription Validation.md
│ ├── SPEC-14 Cloud Git Versioning & GitHub Backup.md
│ ├── SPEC-14- Cloud Git Versioning & GitHub Backup.md
│ ├── SPEC-15 Configuration Persistence via Tigris for Cloud Tenants.md
│ ├── SPEC-16 MCP Cloud Service Consolidation.md
│ ├── SPEC-17 Semantic Search with ChromaDB.md
│ ├── SPEC-18 AI Memory Management Tool.md
│ ├── SPEC-19 Sync Performance and Memory Optimization.md
│ ├── SPEC-2 Slash Commands Reference.md
│ ├── SPEC-20 Simplified Project-Scoped Rclone Sync.md
│ ├── SPEC-3 Agent Definitions.md
│ ├── SPEC-4 Notes Web UI Component Architecture.md
│ ├── SPEC-5 CLI Cloud Upload via WebDAV.md
│ ├── SPEC-6 Explicit Project Parameter Architecture.md
│ ├── SPEC-7 POC to spike Tigris Turso for local access to cloud data.md
│ ├── SPEC-8 TigrisFS Integration.md
│ ├── SPEC-9 Multi-Project Bidirectional Sync Architecture.md
│ ├── SPEC-9 Signed Header Tenant Information.md
│ └── SPEC-9-1 Follow-Ups- Conflict, Sync, and Observability.md
├── src
│ └── basic_memory
│ ├── __init__.py
│ ├── alembic
│ │ ├── alembic.ini
│ │ ├── env.py
│ │ ├── migrations.py
│ │ ├── script.py.mako
│ │ └── versions
│ │ ├── 314f1ea54dc4_add_postgres_full_text_search_support_.py
│ │ ├── 3dae7c7b1564_initial_schema.py
│ │ ├── 502b60eaa905_remove_required_from_entity_permalink.py
│ │ ├── 5fe1ab1ccebe_add_projects_table.py
│ │ ├── 647e7a75e2cd_project_constraint_fix.py
│ │ ├── 6830751f5fb6_merge_multiple_heads.py
│ │ ├── 9d9c1cb7d8f5_add_mtime_and_size_columns_to_entity_.py
│ │ ├── a1b2c3d4e5f6_fix_project_foreign_keys.py
│ │ ├── a2b3c4d5e6f7_add_search_index_entity_cascade.py
│ │ ├── b3c3938bacdb_relation_to_name_unique_index.py
│ │ ├── cc7172b46608_update_search_index_schema.py
│ │ ├── e7e1f4367280_add_scan_watermark_tracking_to_project.py
│ │ ├── f8a9b2c3d4e5_add_pg_trgm_for_fuzzy_link_resolution.py
│ │ └── g9a0b3c4d5e6_add_external_id_to_project_and_entity.py
│ ├── api
│ │ ├── __init__.py
│ │ ├── app.py
│ │ ├── container.py
│ │ ├── routers
│ │ │ ├── __init__.py
│ │ │ ├── directory_router.py
│ │ │ ├── importer_router.py
│ │ │ ├── knowledge_router.py
│ │ │ ├── management_router.py
│ │ │ ├── memory_router.py
│ │ │ ├── project_router.py
│ │ │ ├── prompt_router.py
│ │ │ ├── resource_router.py
│ │ │ ├── search_router.py
│ │ │ └── utils.py
│ │ ├── template_loader.py
│ │ └── v2
│ │ ├── __init__.py
│ │ └── routers
│ │ ├── __init__.py
│ │ ├── directory_router.py
│ │ ├── importer_router.py
│ │ ├── knowledge_router.py
│ │ ├── memory_router.py
│ │ ├── project_router.py
│ │ ├── prompt_router.py
│ │ ├── resource_router.py
│ │ └── search_router.py
│ ├── cli
│ │ ├── __init__.py
│ │ ├── app.py
│ │ ├── auth.py
│ │ ├── commands
│ │ │ ├── __init__.py
│ │ │ ├── cloud
│ │ │ │ ├── __init__.py
│ │ │ │ ├── api_client.py
│ │ │ │ ├── bisync_commands.py
│ │ │ │ ├── cloud_utils.py
│ │ │ │ ├── core_commands.py
│ │ │ │ ├── rclone_commands.py
│ │ │ │ ├── rclone_config.py
│ │ │ │ ├── rclone_installer.py
│ │ │ │ ├── upload_command.py
│ │ │ │ └── upload.py
│ │ │ ├── command_utils.py
│ │ │ ├── db.py
│ │ │ ├── format.py
│ │ │ ├── import_chatgpt.py
│ │ │ ├── import_claude_conversations.py
│ │ │ ├── import_claude_projects.py
│ │ │ ├── import_memory_json.py
│ │ │ ├── mcp.py
│ │ │ ├── project.py
│ │ │ ├── status.py
│ │ │ ├── telemetry.py
│ │ │ └── tool.py
│ │ ├── container.py
│ │ └── main.py
│ ├── config.py
│ ├── db.py
│ ├── deps
│ │ ├── __init__.py
│ │ ├── config.py
│ │ ├── db.py
│ │ ├── importers.py
│ │ ├── projects.py
│ │ ├── repositories.py
│ │ └── services.py
│ ├── deps.py
│ ├── file_utils.py
│ ├── ignore_utils.py
│ ├── importers
│ │ ├── __init__.py
│ │ ├── base.py
│ │ ├── chatgpt_importer.py
│ │ ├── claude_conversations_importer.py
│ │ ├── claude_projects_importer.py
│ │ ├── memory_json_importer.py
│ │ └── utils.py
│ ├── markdown
│ │ ├── __init__.py
│ │ ├── entity_parser.py
│ │ ├── markdown_processor.py
│ │ ├── plugins.py
│ │ ├── schemas.py
│ │ └── utils.py
│ ├── mcp
│ │ ├── __init__.py
│ │ ├── async_client.py
│ │ ├── clients
│ │ │ ├── __init__.py
│ │ │ ├── directory.py
│ │ │ ├── knowledge.py
│ │ │ ├── memory.py
│ │ │ ├── project.py
│ │ │ ├── resource.py
│ │ │ └── search.py
│ │ ├── container.py
│ │ ├── project_context.py
│ │ ├── prompts
│ │ │ ├── __init__.py
│ │ │ ├── ai_assistant_guide.py
│ │ │ ├── continue_conversation.py
│ │ │ ├── recent_activity.py
│ │ │ ├── search.py
│ │ │ └── utils.py
│ │ ├── resources
│ │ │ ├── ai_assistant_guide.md
│ │ │ └── project_info.py
│ │ ├── server.py
│ │ └── tools
│ │ ├── __init__.py
│ │ ├── build_context.py
│ │ ├── canvas.py
│ │ ├── chatgpt_tools.py
│ │ ├── delete_note.py
│ │ ├── edit_note.py
│ │ ├── list_directory.py
│ │ ├── move_note.py
│ │ ├── project_management.py
│ │ ├── read_content.py
│ │ ├── read_note.py
│ │ ├── recent_activity.py
│ │ ├── search.py
│ │ ├── utils.py
│ │ ├── view_note.py
│ │ └── write_note.py
│ ├── models
│ │ ├── __init__.py
│ │ ├── base.py
│ │ ├── knowledge.py
│ │ ├── project.py
│ │ └── search.py
│ ├── project_resolver.py
│ ├── repository
│ │ ├── __init__.py
│ │ ├── entity_repository.py
│ │ ├── observation_repository.py
│ │ ├── postgres_search_repository.py
│ │ ├── project_info_repository.py
│ │ ├── project_repository.py
│ │ ├── relation_repository.py
│ │ ├── repository.py
│ │ ├── search_index_row.py
│ │ ├── search_repository_base.py
│ │ ├── search_repository.py
│ │ └── sqlite_search_repository.py
│ ├── runtime.py
│ ├── schemas
│ │ ├── __init__.py
│ │ ├── base.py
│ │ ├── cloud.py
│ │ ├── delete.py
│ │ ├── directory.py
│ │ ├── importer.py
│ │ ├── memory.py
│ │ ├── project_info.py
│ │ ├── prompt.py
│ │ ├── request.py
│ │ ├── response.py
│ │ ├── search.py
│ │ ├── sync_report.py
│ │ └── v2
│ │ ├── __init__.py
│ │ ├── entity.py
│ │ └── resource.py
│ ├── services
│ │ ├── __init__.py
│ │ ├── context_service.py
│ │ ├── directory_service.py
│ │ ├── entity_service.py
│ │ ├── exceptions.py
│ │ ├── file_service.py
│ │ ├── initialization.py
│ │ ├── link_resolver.py
│ │ ├── project_service.py
│ │ ├── search_service.py
│ │ └── service.py
│ ├── sync
│ │ ├── __init__.py
│ │ ├── background_sync.py
│ │ ├── coordinator.py
│ │ ├── sync_service.py
│ │ └── watch_service.py
│ ├── telemetry.py
│ ├── templates
│ │ └── prompts
│ │ ├── continue_conversation.hbs
│ │ └── search.hbs
│ └── utils.py
├── test-int
│ ├── BENCHMARKS.md
│ ├── cli
│ │ ├── test_project_commands_integration.py
│ │ └── test_version_integration.py
│ ├── conftest.py
│ ├── mcp
│ │ ├── test_build_context_underscore.py
│ │ ├── test_build_context_validation.py
│ │ ├── test_chatgpt_tools_integration.py
│ │ ├── test_default_project_mode_integration.py
│ │ ├── test_delete_note_integration.py
│ │ ├── test_edit_note_integration.py
│ │ ├── test_lifespan_shutdown_sync_task_cancellation_integration.py
│ │ ├── test_list_directory_integration.py
│ │ ├── test_move_note_integration.py
│ │ ├── test_project_management_integration.py
│ │ ├── test_project_state_sync_integration.py
│ │ ├── test_read_content_integration.py
│ │ ├── test_read_note_integration.py
│ │ ├── test_search_integration.py
│ │ ├── test_single_project_mcp_integration.py
│ │ └── test_write_note_integration.py
│ ├── test_db_wal_mode.py
│ └── test_disable_permalinks_integration.py
├── tests
│ ├── __init__.py
│ ├── api
│ │ ├── conftest.py
│ │ ├── test_api_container.py
│ │ ├── test_async_client.py
│ │ ├── test_continue_conversation_template.py
│ │ ├── test_directory_router.py
│ │ ├── test_importer_router.py
│ │ ├── test_knowledge_router.py
│ │ ├── test_management_router.py
│ │ ├── test_memory_router.py
│ │ ├── test_project_router_operations.py
│ │ ├── test_project_router.py
│ │ ├── test_prompt_router.py
│ │ ├── test_relation_background_resolution.py
│ │ ├── test_resource_router.py
│ │ ├── test_search_router.py
│ │ ├── test_search_template.py
│ │ ├── test_template_loader_helpers.py
│ │ ├── test_template_loader.py
│ │ └── v2
│ │ ├── __init__.py
│ │ ├── conftest.py
│ │ ├── test_directory_router.py
│ │ ├── test_importer_router.py
│ │ ├── test_knowledge_router.py
│ │ ├── test_memory_router.py
│ │ ├── test_project_router.py
│ │ ├── test_prompt_router.py
│ │ ├── test_resource_router.py
│ │ └── test_search_router.py
│ ├── cli
│ │ ├── cloud
│ │ │ ├── test_cloud_api_client_and_utils.py
│ │ │ ├── test_rclone_config_and_bmignore_filters.py
│ │ │ └── test_upload_path.py
│ │ ├── conftest.py
│ │ ├── test_auth_cli_auth.py
│ │ ├── test_cli_container.py
│ │ ├── test_cli_exit.py
│ │ ├── test_cli_tool_exit.py
│ │ ├── test_cli_tools.py
│ │ ├── test_cloud_authentication.py
│ │ ├── test_ignore_utils.py
│ │ ├── test_import_chatgpt.py
│ │ ├── test_import_claude_conversations.py
│ │ ├── test_import_claude_projects.py
│ │ ├── test_import_memory_json.py
│ │ ├── test_project_add_with_local_path.py
│ │ └── test_upload.py
│ ├── conftest.py
│ ├── db
│ │ └── test_issue_254_foreign_key_constraints.py
│ ├── importers
│ │ ├── test_conversation_indexing.py
│ │ ├── test_importer_base.py
│ │ └── test_importer_utils.py
│ ├── markdown
│ │ ├── __init__.py
│ │ ├── test_date_frontmatter_parsing.py
│ │ ├── test_entity_parser_error_handling.py
│ │ ├── test_entity_parser.py
│ │ ├── test_markdown_plugins.py
│ │ ├── test_markdown_processor.py
│ │ ├── test_observation_edge_cases.py
│ │ ├── test_parser_edge_cases.py
│ │ ├── test_relation_edge_cases.py
│ │ └── test_task_detection.py
│ ├── mcp
│ │ ├── clients
│ │ │ ├── __init__.py
│ │ │ └── test_clients.py
│ │ ├── conftest.py
│ │ ├── test_async_client_modes.py
│ │ ├── test_mcp_container.py
│ │ ├── test_obsidian_yaml_formatting.py
│ │ ├── test_permalink_collision_file_overwrite.py
│ │ ├── test_project_context.py
│ │ ├── test_prompts.py
│ │ ├── test_recent_activity_prompt_modes.py
│ │ ├── test_resources.py
│ │ ├── test_server_lifespan_branches.py
│ │ ├── test_tool_build_context.py
│ │ ├── test_tool_canvas.py
│ │ ├── test_tool_delete_note.py
│ │ ├── test_tool_edit_note.py
│ │ ├── test_tool_list_directory.py
│ │ ├── test_tool_move_note.py
│ │ ├── test_tool_project_management.py
│ │ ├── test_tool_read_content.py
│ │ ├── test_tool_read_note.py
│ │ ├── test_tool_recent_activity.py
│ │ ├── test_tool_resource.py
│ │ ├── test_tool_search.py
│ │ ├── test_tool_utils.py
│ │ ├── test_tool_view_note.py
│ │ ├── test_tool_write_note_kebab_filenames.py
│ │ ├── test_tool_write_note.py
│ │ └── tools
│ │ └── test_chatgpt_tools.py
│ ├── Non-MarkdownFileSupport.pdf
│ ├── README.md
│ ├── repository
│ │ ├── test_entity_repository_upsert.py
│ │ ├── test_entity_repository.py
│ │ ├── test_entity_upsert_issue_187.py
│ │ ├── test_observation_repository.py
│ │ ├── test_postgres_search_repository.py
│ │ ├── test_project_info_repository.py
│ │ ├── test_project_repository.py
│ │ ├── test_relation_repository.py
│ │ ├── test_repository.py
│ │ ├── test_search_repository_edit_bug_fix.py
│ │ └── test_search_repository.py
│ ├── schemas
│ │ ├── test_base_timeframe_minimum.py
│ │ ├── test_memory_serialization.py
│ │ ├── test_memory_url_validation.py
│ │ ├── test_memory_url.py
│ │ ├── test_relation_response_reference_resolution.py
│ │ ├── test_schemas.py
│ │ └── test_search.py
│ ├── Screenshot.png
│ ├── services
│ │ ├── test_context_service.py
│ │ ├── test_directory_service.py
│ │ ├── test_entity_service_disable_permalinks.py
│ │ ├── test_entity_service.py
│ │ ├── test_file_service.py
│ │ ├── test_initialization_cloud_mode_branches.py
│ │ ├── test_initialization.py
│ │ ├── test_link_resolver.py
│ │ ├── test_project_removal_bug.py
│ │ ├── test_project_service_operations.py
│ │ ├── test_project_service.py
│ │ └── test_search_service.py
│ ├── sync
│ │ ├── test_character_conflicts.py
│ │ ├── test_coordinator.py
│ │ ├── test_sync_service_incremental.py
│ │ ├── test_sync_service.py
│ │ ├── test_sync_wikilink_issue.py
│ │ ├── test_tmp_files.py
│ │ ├── test_watch_service_atomic_adds.py
│ │ ├── test_watch_service_edge_cases.py
│ │ ├── test_watch_service_reload.py
│ │ └── test_watch_service.py
│ ├── test_config.py
│ ├── test_deps.py
│ ├── test_production_cascade_delete.py
│ ├── test_project_resolver.py
│ ├── test_rclone_commands.py
│ ├── test_runtime.py
│ ├── test_telemetry.py
│ └── utils
│ ├── test_file_utils.py
│ ├── test_frontmatter_obsidian_compatible.py
│ ├── test_parse_tags.py
│ ├── test_permalink_formatting.py
│ ├── test_timezone_utils.py
│ ├── test_utf8_handling.py
│ └── test_validate_project_path.py
└── uv.lock
```
# Files
--------------------------------------------------------------------------------
/tests/cli/conftest.py:
--------------------------------------------------------------------------------
```python
1 | import os
2 | from pathlib import Path
3 | from typing import AsyncGenerator
4 |
5 | import pytest
6 | import pytest_asyncio
7 | from fastapi import FastAPI
8 | from httpx import AsyncClient, ASGITransport
9 |
10 | from basic_memory.api.app import app as fastapi_app
11 | from basic_memory.deps import get_project_config, get_engine_factory, get_app_config
12 |
13 |
14 | @pytest.fixture(autouse=True)
15 | def isolated_home(tmp_path, monkeypatch) -> Path:
16 | """Isolate tests from user's HOME directory.
17 |
18 | This prevents tests from reading/writing to ~/.basic-memory/.bmignore
19 | or other user-specific configuration.
20 |
21 | Sets BASIC_MEMORY_HOME to tmp_path directly so the default project
22 | writes files to tmp_path, which is where tests expect to find them.
23 | """
24 | # Clear config cache to ensure fresh config for each test
25 | from basic_memory import config as config_module
26 |
27 | config_module._CONFIG_CACHE = None
28 |
29 | monkeypatch.setenv("HOME", str(tmp_path))
30 | if os.name == "nt":
31 | monkeypatch.setenv("USERPROFILE", str(tmp_path))
32 | # Set to tmp_path directly (not tmp_path/basic-memory) so default project
33 | # home is tmp_path - tests expect to find imported files there
34 | monkeypatch.setenv("BASIC_MEMORY_HOME", str(tmp_path))
35 | return tmp_path
36 |
37 |
38 | @pytest_asyncio.fixture
39 | async def app(app_config, project_config, engine_factory, test_config, aiolib) -> FastAPI:
40 | """Create test FastAPI application."""
41 | app = fastapi_app
42 | app.dependency_overrides[get_app_config] = lambda: app_config
43 | app.dependency_overrides[get_project_config] = lambda: project_config
44 | app.dependency_overrides[get_engine_factory] = lambda: engine_factory
45 | return app
46 |
47 |
48 | @pytest_asyncio.fixture
49 | async def client(app: FastAPI, aiolib) -> AsyncGenerator[AsyncClient, None]:
50 | """Create test client that both MCP and tests will use."""
51 | async with AsyncClient(transport=ASGITransport(app=app), base_url="http://test") as client:
52 | yield client
53 |
54 |
55 | @pytest_asyncio.fixture
56 | async def cli_env(project_config, client, test_config):
57 | """Set up CLI environment with correct project session."""
58 | return {"project_config": project_config, "client": client}
59 |
```
--------------------------------------------------------------------------------
/src/basic_memory/mcp/clients/resource.py:
--------------------------------------------------------------------------------
```python
1 | """Typed client for resource API operations.
2 |
3 | Encapsulates all /v2/projects/{project_id}/resource/* endpoints.
4 | """
5 |
6 | from typing import Optional
7 |
8 | from httpx import AsyncClient, Response
9 |
10 | from basic_memory.mcp.tools.utils import call_get
11 |
12 |
13 | class ResourceClient:
14 | """Typed client for resource operations.
15 |
16 | Centralizes:
17 | - API path construction for /v2/projects/{project_id}/resource/*
18 | - Consistent error handling through call_* utilities
19 |
20 | Note: This client returns raw Response objects for resources since they
21 | may be text, images, or other binary content that needs special handling.
22 |
23 | Usage:
24 | async with get_client() as http_client:
25 | client = ResourceClient(http_client, project_id)
26 | response = await client.read(entity_id)
27 | text = response.text
28 | """
29 |
30 | def __init__(self, http_client: AsyncClient, project_id: str):
31 | """Initialize the resource client.
32 |
33 | Args:
34 | http_client: HTTPX AsyncClient for making requests
35 | project_id: Project external_id (UUID) for API calls
36 | """
37 | self.http_client = http_client
38 | self.project_id = project_id
39 | self._base_path = f"/v2/projects/{project_id}/resource"
40 |
41 | async def read(
42 | self,
43 | entity_id: str,
44 | *,
45 | page: Optional[int] = None,
46 | page_size: Optional[int] = None,
47 | ) -> Response:
48 | """Read a resource by entity ID.
49 |
50 | Args:
51 | entity_id: Entity external_id (UUID)
52 | page: Optional page number for paginated content
53 | page_size: Optional page size for paginated content
54 |
55 | Returns:
56 | Raw HTTP Response (caller handles text/binary content)
57 |
58 | Raises:
59 | ToolError: If the resource is not found or request fails
60 | """
61 | params: dict = {}
62 | if page is not None:
63 | params["page"] = page
64 | if page_size is not None:
65 | params["page_size"] = page_size
66 |
67 | return await call_get(
68 | self.http_client,
69 | f"{self._base_path}/{entity_id}",
70 | params=params if params else None,
71 | )
72 |
```
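For illustration, a minimal sketch (not part of the repository) of how a caller might branch on text versus binary content from `ResourceClient.read()`. It assumes `get_client` from `basic_memory.mcp.async_client`, as imported elsewhere in this codebase; `entity_id` is a hypothetical UUID.

```python
from basic_memory.mcp.async_client import get_client
from basic_memory.mcp.clients.resource import ResourceClient

async def show_resource(project_id: str, entity_id: str) -> None:
    # Sketch only: entity_id is a hypothetical UUID for illustration.
    async with get_client() as http_client:
        client = ResourceClient(http_client, project_id)
        response = await client.read(entity_id, page=1, page_size=10)
        content_type = response.headers.get("content-type", "")
        if content_type.startswith("text/"):
            print(response.text)  # markdown or other text content
        else:
            # Images or other binary content: the caller handles raw bytes
            print(f"{len(response.content)} bytes of {content_type}")
```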
--------------------------------------------------------------------------------
/src/basic_memory/mcp/server.py:
--------------------------------------------------------------------------------
```python
1 | """
2 | Basic Memory FastMCP server.
3 | """
4 |
5 | from contextlib import asynccontextmanager
6 |
7 | from fastmcp import FastMCP
8 | from loguru import logger
9 |
10 | from basic_memory import db
11 | from basic_memory.mcp.container import McpContainer, set_container
12 | from basic_memory.services.initialization import initialize_app
13 | from basic_memory.telemetry import show_notice_if_needed, track_app_started
14 |
15 |
16 | @asynccontextmanager
17 | async def lifespan(app: FastMCP):
18 | """Lifecycle manager for the MCP server.
19 |
20 | Handles:
21 | - Database initialization and migrations
22 | - Telemetry notice and tracking
23 | - File sync via SyncCoordinator (if enabled and not in cloud mode)
24 | - Proper cleanup on shutdown
25 | """
26 | # --- Composition Root ---
27 | # Create container and read config (single point of config access)
28 | container = McpContainer.create()
29 | set_container(container)
30 |
31 | logger.info(f"Starting Basic Memory MCP server (mode={container.mode.name})")
32 |
33 | # Show telemetry notice (first run only) and track startup
34 | show_notice_if_needed()
35 | track_app_started("mcp")
36 |
37 | # Track if we created the engine (vs test fixtures providing it)
38 | # This prevents disposing an engine provided by test fixtures when
39 | # multiple Client connections are made in the same test
40 | engine_was_none = db._engine is None
41 |
42 | # Initialize app (runs migrations, reconciles projects)
43 | await initialize_app(container.config)
44 |
45 | # Create and start sync coordinator (lifecycle centralized in coordinator)
46 | sync_coordinator = container.create_sync_coordinator()
47 | await sync_coordinator.start()
48 |
49 | try:
50 | yield
51 | finally:
52 | # Shutdown - coordinator handles clean task cancellation
53 | logger.info("Shutting down Basic Memory MCP server")
54 | await sync_coordinator.stop()
55 |
56 | # Only shutdown DB if we created it (not if test fixture provided it)
57 | if engine_was_none:
58 | await db.shutdown_db()
59 | logger.info("Database connections closed")
60 | else: # pragma: no cover
61 | logger.debug("Skipping DB shutdown - engine provided externally")
62 |
63 |
64 | mcp = FastMCP(
65 | name="Basic Memory",
66 | lifespan=lifespan,
67 | )
68 |
```
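A minimal sketch of an entry point that drives the lifespan above, assuming FastMCP's standard `run()` method (stdio transport by default):

```python
from basic_memory.mcp.server import mcp

if __name__ == "__main__":
    # run() serves the MCP protocol; the lifespan above handles startup
    # (migrations, sync coordinator) and shutdown around the serve loop.
    mcp.run()
```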
--------------------------------------------------------------------------------
/llms-install.md:
--------------------------------------------------------------------------------
````markdown
1 | # Basic Memory Installation Guide for LLMs
2 |
3 | This guide is specifically designed to help AI assistants like Cline install and configure Basic Memory. Follow these
4 | steps in order.
5 |
6 | ## Installation Steps
7 |
8 | ### 1. Install Basic Memory Package
9 |
10 | Use one of the following package managers to install:
11 |
12 | ```bash
13 | # Install with uv (recommended)
14 | uv tool install basic-memory
15 |
16 | # Or with pip
17 | pip install basic-memory
18 | ```
19 |
20 | ### 2. Configure MCP Server
21 |
22 | Add the following to your config:
23 |
24 | ```json
25 | {
26 | "mcpServers": {
27 | "basic-memory": {
28 | "command": "uvx",
29 | "args": [
30 | "basic-memory",
31 | "mcp"
32 | ]
33 | }
34 | }
35 | }
36 | ```
37 |
38 | For Claude Desktop, this file is located at:
39 |
40 | macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
41 | Windows: %APPDATA%\Claude\claude_desktop_config.json
42 |
43 | ### 3. Start Synchronization (optional)
44 |
45 | To synchronize files in real-time, run:
46 |
47 | ```bash
48 | basic-memory sync --watch
49 | ```
50 |
51 | Or for a one-time sync:
52 |
53 | ```bash
54 | basic-memory sync
55 | ```
56 |
57 | ## Configuration Options
58 |
59 | ### Custom Directory
60 |
61 | To use a directory other than the default `~/basic-memory`:
62 |
63 | ```bash
64 | basic-memory project add custom-project /path/to/your/directory
65 | basic-memory project default custom-project
66 | ```
67 |
68 | ### Multiple Projects
69 |
70 | To manage multiple knowledge bases:
71 |
72 | ```bash
73 | # List all projects
74 | basic-memory project list
75 |
76 | # Add a new project
77 | basic-memory project add work ~/work-basic-memory
78 |
79 | # Set default project
80 | basic-memory project default work
81 | ```
82 |
83 | ## Importing Existing Data
84 |
85 | ### From Claude.ai
86 |
87 | ```bash
88 | basic-memory import claude conversations path/to/conversations.json
89 | basic-memory import claude projects path/to/projects.json
90 | ```
91 |
92 | ### From ChatGPT
93 |
94 | ```bash
95 | basic-memory import chatgpt path/to/conversations.json
96 | ```
97 |
98 | ### From MCP Memory Server
99 |
100 | ```bash
101 | basic-memory import memory-json path/to/memory.json
102 | ```
103 |
104 | ## Troubleshooting
105 |
106 | If you encounter issues:
107 |
108 | 1. Check that Basic Memory is properly installed:
109 | ```bash
110 | basic-memory --version
111 | ```
112 |
113 | 2. Verify the sync process is running:
114 | ```bash
115 | ps aux | grep basic-memory
116 | ```
117 |
118 | 3. Check sync output for errors:
119 | ```bash
120 | basic-memory sync --verbose
121 | ```
122 |
123 | 4. Check log output:
124 | ```bash
125 | cat ~/.basic-memory/basic-memory.log
126 | ```
127 |
128 | For more detailed information, refer to the [full documentation](https://memory.basicmachines.co/).
````
--------------------------------------------------------------------------------
/tests/cli/cloud/test_upload_path.py:
--------------------------------------------------------------------------------
```python
1 | from contextlib import asynccontextmanager
2 |
3 | import httpx
4 | import pytest
5 |
6 | from basic_memory.cli.commands.cloud.upload import upload_path
7 |
8 |
9 | @pytest.mark.asyncio
10 | async def test_upload_path_dry_run_respects_gitignore_and_bmignore(config_home, tmp_path, capsys):
11 | root = tmp_path / "proj"
12 | root.mkdir()
13 |
14 | # Create a .gitignore that ignores one file
15 | (root / ".gitignore").write_text("ignored.md\n", encoding="utf-8")
16 |
17 | # Create files
18 | (root / "keep.md").write_text("keep", encoding="utf-8")
19 | (root / "ignored.md").write_text("ignored", encoding="utf-8")
20 |
21 | ok = await upload_path(root, "proj", verbose=True, use_gitignore=True, dry_run=True)
22 | assert ok is True
23 |
24 | out = capsys.readouterr().out
25 | # Verbose mode prints ignored files in the scan phase, but they must not appear
26 | # in the final "would be uploaded" list.
27 | assert "[INCLUDE] keep.md" in out or "keep.md" in out
28 | assert "[IGNORED] ignored.md" in out
29 | assert "Files that would be uploaded:" in out
30 | assert " keep.md (" in out
31 | assert " ignored.md (" not in out
32 |
33 |
34 | @pytest.mark.asyncio
35 | async def test_upload_path_non_dry_puts_files_and_skips_archives(config_home, tmp_path):
36 | root = tmp_path / "proj"
37 | root.mkdir()
38 |
39 | (root / "keep.md").write_text("keep", encoding="utf-8")
40 | (root / "archive.zip").write_bytes(b"zipbytes")
41 |
42 | seen = {"puts": []}
43 |
44 | async def handler(request: httpx.Request) -> httpx.Response:
45 | # Expect PUT to the webdav path
46 | assert request.method == "PUT"
47 | seen["puts"].append(request.url.path)
48 | # Must have mtime header
49 | assert request.headers.get("x-oc-mtime")
50 | return httpx.Response(201, text="Created")
51 |
52 | transport = httpx.MockTransport(handler)
53 |
54 | @asynccontextmanager
55 | async def client_cm_factory():
56 | async with httpx.AsyncClient(
57 | transport=transport, base_url="https://cloud.example.test"
58 | ) as client:
59 | yield client
60 |
61 | ok = await upload_path(
62 | root,
63 | "proj",
64 | verbose=False,
65 | use_gitignore=False,
66 | dry_run=False,
67 | client_cm_factory=client_cm_factory,
68 | )
69 | assert ok is True
70 |
71 | # Only keep.md uploaded; archive skipped
72 | assert "/webdav/proj/keep.md" in seen["puts"]
73 | assert all("archive.zip" not in p for p in seen["puts"])
74 |
```
--------------------------------------------------------------------------------
/src/basic_memory/cli/container.py:
--------------------------------------------------------------------------------
```python
1 | """CLI composition root for Basic Memory.
2 |
3 | This container owns reading ConfigManager and environment variables for the
4 | CLI entrypoint. Downstream modules receive config/dependencies explicitly
5 | rather than reading globals.
6 |
7 | Design principles:
8 | - Only this module reads ConfigManager directly
9 | - Runtime mode (cloud/local/test) is resolved here
10 | - Different CLI commands may need different initialization
11 | """
12 |
13 | from dataclasses import dataclass
14 |
15 | from basic_memory.config import BasicMemoryConfig, ConfigManager
16 | from basic_memory.runtime import RuntimeMode, resolve_runtime_mode
17 |
18 |
19 | @dataclass
20 | class CliContainer:
21 | """Composition root for the CLI entrypoint.
22 |
23 | Holds resolved configuration and runtime context.
24 | Created once at CLI startup, then used by subcommands.
25 | """
26 |
27 | config: BasicMemoryConfig
28 | mode: RuntimeMode
29 |
30 | @classmethod
31 | def create(cls) -> "CliContainer":
32 | """Create container by reading ConfigManager.
33 |
34 | This is the single point where CLI reads global config.
35 | """
36 | config = ConfigManager().config
37 | mode = resolve_runtime_mode(
38 | cloud_mode_enabled=config.cloud_mode_enabled,
39 | is_test_env=config.is_test_env,
40 | )
41 | return cls(config=config, mode=mode)
42 |
43 | # --- Runtime Mode Properties ---
44 |
45 | @property
46 | def is_cloud_mode(self) -> bool:
47 | """Whether running in cloud mode."""
48 | return self.mode.is_cloud
49 |
50 |
51 | # Module-level container instance (set by app callback)
52 | _container: CliContainer | None = None
53 |
54 |
55 | def get_container() -> CliContainer:
56 | """Get the current CLI container.
57 |
58 | Returns:
59 | The CLI container
60 |
61 | Raises:
62 | RuntimeError: If container hasn't been initialized
63 | """
64 | if _container is None:
65 | raise RuntimeError("CLI container not initialized. Call set_container() first.")
66 | return _container
67 |
68 |
69 | def set_container(container: CliContainer) -> None:
70 | """Set the CLI container (called by app callback)."""
71 | global _container
72 | _container = container
73 |
74 |
75 | def get_or_create_container() -> CliContainer:
76 | """Get existing container or create new one.
77 |
78 | This is useful for CLI commands that might be called before
79 | the main app callback runs (e.g., eager options).
80 | """
81 | global _container
82 | if _container is None:
83 | _container = CliContainer.create()
84 | return _container
85 |
```
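A minimal sketch of how a CLI entry point might wire this composition root, using only names defined in the module above; the callback/command split is an assumption about the Typer app structure:

```python
from basic_memory.cli.container import CliContainer, get_container, set_container

def app_callback() -> None:
    # Composition root: read config once at startup, then share the container
    set_container(CliContainer.create())

def some_command() -> None:
    container = get_container()  # raises RuntimeError if the callback never ran
    if container.is_cloud_mode:
        ...  # cloud-specific behavior
```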
--------------------------------------------------------------------------------
/.claude/commands/spec.md:
--------------------------------------------------------------------------------
```markdown
1 | ---
2 | allowed-tools: mcp__basic-memory__write_note, mcp__basic-memory__read_note, mcp__basic-memory__search_notes, mcp__basic-memory__edit_note
3 | argument-hint: [create|status|show|review] [spec-name]
4 | description: Manage specifications in our development process
5 | ---
6 |
7 | ## Context
8 |
9 | Specifications are managed in the Basic Memory "specs" project. All specs live in a centralized location accessible across all repositories via MCP tools.
10 |
11 | See SPEC-1 and SPEC-2 in the "specs" project for the full specification-driven development process.
12 |
13 | Available commands:
14 | - `create [name]` - Create new specification
15 | - `status` - Show all spec statuses
16 | - `show [spec-name]` - Read a specific spec
17 | - `review [spec-name]` - Review implementation against spec
18 |
19 | ## Your task
20 |
21 | Execute the spec command: `/spec $ARGUMENTS`
22 |
23 | ### If command is "create":
24 | 1. Get next SPEC number by searching existing specs in "specs" project
25 | 2. Create new spec using template from SPEC-2
26 | 3. Use mcp__basic-memory__write_note with project="specs"
27 | 4. Include standard sections: Why, What, How, How to Evaluate
28 |
29 | ### If command is "status":
30 | 1. Use mcp__basic-memory__search_notes with project="specs"
31 | 2. Display table with spec number, title, and progress
32 | 3. Show completion status from checkboxes in content
33 |
34 | ### If command is "show":
35 | 1. Use mcp__basic-memory__read_note with project="specs"
36 | 2. Display the full spec content
37 |
38 | ### If command is "review":
39 | 1. Read the specified spec and its "How to Evaluate" section
40 | 2. Review current implementation against success criteria with careful evaluation of:
41 | - **Functional completeness** - All specified features working
42 | - **Test coverage analysis** - Actual test files and coverage percentage
43 | - Count existing test files vs required components/APIs/composables
44 | - Verify unit tests, integration tests, and end-to-end tests
45 | - Check for missing test categories (component, API, workflow)
46 | - **Code quality metrics** - TypeScript compilation, linting, performance
47 | - **Architecture compliance** - Component isolation, state management patterns
48 | - **Documentation completeness** - Implementation matches specification
49 | 3. Provide honest, accurate assessment - do not overstate completeness
50 | 4. Document findings and update spec with review results using mcp__basic-memory__edit_note
51 | 5. If gaps found, clearly identify what still needs to be implemented/tested
52 |
```
--------------------------------------------------------------------------------
/src/basic_memory/cli/commands/telemetry.py:
--------------------------------------------------------------------------------
```python
1 | """Telemetry commands for basic-memory CLI."""
2 |
3 | import typer
4 | from rich.console import Console
5 | from rich.panel import Panel
6 |
7 | from basic_memory.cli.app import app
8 | from basic_memory.config import ConfigManager
9 |
10 | console = Console()
11 |
12 | # Create telemetry subcommand group
13 | telemetry_app = typer.Typer(help="Manage anonymous telemetry settings")
14 | app.add_typer(telemetry_app, name="telemetry")
15 |
16 |
17 | @telemetry_app.command("enable")
18 | def enable() -> None:
19 | """Enable anonymous telemetry.
20 |
21 | Telemetry helps improve Basic Memory by collecting anonymous usage data.
22 | No personal data, note content, or file paths are ever collected.
23 | """
24 | config_manager = ConfigManager()
25 | config = config_manager.config
26 | config.telemetry_enabled = True
27 | config_manager.save_config(config)
28 | console.print("[green]Telemetry enabled[/green]")
29 | console.print("[dim]Thank you for helping improve Basic Memory![/dim]")
30 |
31 |
32 | @telemetry_app.command("disable")
33 | def disable() -> None:
34 | """Disable anonymous telemetry.
35 |
36 | You can re-enable telemetry anytime with: bm telemetry enable
37 | """
38 | config_manager = ConfigManager()
39 | config = config_manager.config
40 | config.telemetry_enabled = False
41 | config_manager.save_config(config)
42 | console.print("[yellow]Telemetry disabled[/yellow]")
43 |
44 |
45 | @telemetry_app.command("status")
46 | def status() -> None:
47 | """Show current telemetry status and what's collected."""
48 | from basic_memory.telemetry import get_install_id, TELEMETRY_DOCS_URL
49 |
50 | config = ConfigManager().config
51 |
52 | status_text = (
53 | "[green]enabled[/green]" if config.telemetry_enabled else "[yellow]disabled[/yellow]"
54 | )
55 |
56 | console.print(f"\nTelemetry: {status_text}")
57 | console.print(f"Install ID: [dim]{get_install_id()}[/dim]")
58 | console.print()
59 |
60 | what_we_collect = """
61 | [bold]What we collect:[/bold]
62 | - App version, Python version, OS, architecture
63 | - Feature usage (which MCP tools and CLI commands)
64 | - Sync statistics (entity count, duration)
65 | - Error types (sanitized, no file paths)
66 |
67 | [bold]What we NEVER collect:[/bold]
68 | - Note content, file names, or paths
69 | - Personal information
70 | - IP addresses
71 | """
72 |
73 | console.print(
74 | Panel(
75 | what_we_collect.strip(),
76 | title="Telemetry Details",
77 | border_style="blue",
78 | expand=False,
79 | )
80 | )
81 | console.print(f"[dim]Details: {TELEMETRY_DOCS_URL}[/dim]")
82 |
```
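A sketch of exercising these subcommands with Typer's test runner; it assumes only the `app` object imported at the top of the module above:

```python
from typer.testing import CliRunner
from basic_memory.cli.app import app

runner = CliRunner()
result = runner.invoke(app, ["telemetry", "status"])
print(result.output)  # enabled/disabled state, install ID, and the details panel
```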
--------------------------------------------------------------------------------
/tests/api/test_api_container.py:
--------------------------------------------------------------------------------
```python
1 | """Tests for API container composition root."""
2 |
3 | import pytest
4 |
5 | from basic_memory.api.container import (
6 | ApiContainer,
7 | get_container,
8 | set_container,
9 | )
10 | from basic_memory.runtime import RuntimeMode
11 |
12 |
13 | class TestApiContainer:
14 | """Tests for ApiContainer."""
15 |
16 | def test_create_from_config(self, app_config):
17 | """Container can be created from config manager."""
18 | container = ApiContainer(config=app_config, mode=RuntimeMode.LOCAL)
19 | assert container.config == app_config
20 | assert container.mode == RuntimeMode.LOCAL
21 |
22 | def test_should_sync_files_when_enabled_and_not_test(self, app_config):
23 | """Sync should be enabled when config says so and not in test mode."""
24 | app_config.sync_changes = True
25 | container = ApiContainer(config=app_config, mode=RuntimeMode.LOCAL)
26 | assert container.should_sync_files is True
27 |
28 | def test_should_not_sync_files_when_disabled(self, app_config):
29 | """Sync should be disabled when config says so."""
30 | app_config.sync_changes = False
31 | container = ApiContainer(config=app_config, mode=RuntimeMode.LOCAL)
32 | assert container.should_sync_files is False
33 |
34 | def test_should_not_sync_files_in_test_mode(self, app_config):
35 | """Sync should be disabled in test mode regardless of config."""
36 | app_config.sync_changes = True
37 | container = ApiContainer(config=app_config, mode=RuntimeMode.TEST)
38 | assert container.should_sync_files is False
39 |
40 |
41 | class TestContainerAccessors:
42 | """Tests for container get/set functions."""
43 |
44 | def test_get_container_raises_when_not_set(self, monkeypatch):
45 | """get_container raises RuntimeError when container not initialized."""
46 | # Clear any existing container
47 | import basic_memory.api.container as container_module
48 |
49 | monkeypatch.setattr(container_module, "_container", None)
50 |
51 | with pytest.raises(RuntimeError, match="API container not initialized"):
52 | get_container()
53 |
54 | def test_set_and_get_container(self, app_config, monkeypatch):
55 | """set_container allows get_container to return the container."""
56 | import basic_memory.api.container as container_module
57 |
58 | container = ApiContainer(config=app_config, mode=RuntimeMode.LOCAL)
59 | monkeypatch.setattr(container_module, "_container", None)
60 |
61 | set_container(container)
62 | assert get_container() is container
63 |
```
--------------------------------------------------------------------------------
/src/basic_memory/mcp/prompts/ai_assistant_guide.py:
--------------------------------------------------------------------------------
```python
1 | from pathlib import Path
2 |
3 | from basic_memory.config import ConfigManager
4 | from basic_memory.mcp.server import mcp
5 | from loguru import logger
6 |
7 |
8 | @mcp.resource(
9 | uri="memory://ai_assistant_guide",
10 | name="ai assistant guide",
11 | description="Give an AI assistant guidance on how to use Basic Memory tools effectively",
12 | )
13 | def ai_assistant_guide() -> str:
14 | """Return a concise guide on Basic Memory tools and how to use them.
15 |
16 | Dynamically adapts instructions based on configuration:
17 | - Default project mode: Simplified instructions with automatic project
18 | - Regular mode: Project discovery and selection guidance
19 | - CLI constraint mode: Single project constraint information
20 |
21 | Returns:
22 | A focused guide on Basic Memory usage.
23 | """
24 | logger.info("Loading AI assistant guide resource")
25 |
26 | # Load base guide content
27 | guide_doc = Path(__file__).parent.parent / "resources" / "ai_assistant_guide.md"
28 | content = guide_doc.read_text(encoding="utf-8")
29 |
30 | # Check configuration for mode-specific instructions
31 | config = ConfigManager().config
32 |
33 | # Add mode-specific header
34 | mode_info = ""
35 | if config.default_project_mode: # pragma: no cover
36 | mode_info = f"""
37 | # 🎯 Default Project Mode Active
38 |
39 | **Current Configuration**: All operations automatically use project '{config.default_project}'
40 |
41 | **Simplified Usage**: You don't need to specify the project parameter in tool calls.
42 | - `write_note(title="Note", content="...", folder="docs")` ✅
43 | - Project parameter is optional and will default to '{config.default_project}'
44 | - To use a different project, explicitly specify: `project="other-project"`
45 |
46 | ────────────────────────────────────────
47 |
48 | """
49 | else: # pragma: no cover
50 | mode_info = """
51 | # 🔧 Multi-Project Mode Active
52 |
53 | **Current Configuration**: Project parameter required for all operations
54 |
55 | **Project Discovery Required**: Use these tools to select a project:
56 | - `list_memory_projects()` - See all available projects
57 | - `recent_activity()` - Get project activity and recommendations
58 | - Remember the user's project choice throughout the conversation
59 |
60 | ────────────────────────────────────────
61 |
62 | """
63 |
64 | # Prepend mode info to the guide
65 | enhanced_content = mode_info + content
66 |
67 | logger.info(
68 | f"Loaded AI assistant guide ({len(enhanced_content)} chars) with mode: {'default_project' if config.default_project_mode else 'multi_project'}"
69 | )
70 | return enhanced_content
71 |
```
--------------------------------------------------------------------------------
/.github/workflows/release.yml:
--------------------------------------------------------------------------------
```yaml
1 | name: Release
2 |
3 | on:
4 | push:
5 | tags:
6 | - 'v*' # Trigger on version tags like v1.0.0, v0.13.0, etc.
7 |
8 | jobs:
9 | release:
10 | runs-on: ubuntu-latest
11 | permissions:
12 | id-token: write
13 | contents: write
14 |
15 | steps:
16 | - uses: actions/checkout@v4
17 | with:
18 | fetch-depth: 0
19 |
20 | - name: Set up Python
21 | uses: actions/setup-python@v5
22 | with:
23 | python-version: "3.12"
24 |
25 | - name: Install uv
26 | run: |
27 | pip install uv
28 |
29 | - name: Install dependencies and build
30 | run: |
31 | uv venv
32 | uv sync
33 | uv build
34 |
35 | - name: Verify build succeeded
36 | run: |
37 | # Verify that build artifacts exist
38 | ls -la dist/
39 | echo "Build completed successfully"
40 |
41 | - name: Create GitHub Release
42 | uses: softprops/action-gh-release@v2
43 | with:
44 | files: |
45 | dist/*.whl
46 | dist/*.tar.gz
47 | generate_release_notes: true
48 | tag_name: ${{ github.ref_name }}
49 | token: ${{ secrets.GITHUB_TOKEN }}
50 |
51 | - name: Publish to PyPI
52 | uses: pypa/gh-action-pypi-publish@release/v1
53 | with:
54 | password: ${{ secrets.PYPI_TOKEN }}
55 |
56 | homebrew:
57 | name: Update Homebrew Formula
58 | needs: release
59 | runs-on: ubuntu-latest
60 | # Only run for stable releases (not dev, beta, or rc versions)
61 | if: ${{ !contains(github.ref_name, 'dev') && !contains(github.ref_name, 'b') && !contains(github.ref_name, 'rc') }}
62 | permissions:
63 | contents: write
64 | actions: read
65 | steps:
66 | - name: Update Homebrew formula
67 | uses: mislav/bump-homebrew-formula-action@v3
68 | with:
69 | # Formula name in homebrew-basic-memory repo
70 | formula-name: basic-memory
71 | # The tap repository
72 | homebrew-tap: basicmachines-co/homebrew-basic-memory
73 | # Base branch of the tap repository
74 | base-branch: main
75 | # Download URL will be automatically constructed from the tag
76 | download-url: https://github.com/basicmachines-co/basic-memory/archive/refs/tags/${{ github.ref_name }}.tar.gz
77 | # Commit message for the formula update
78 | commit-message: |
79 | {{formulaName}} {{version}}
80 |
81 | Created by https://github.com/basicmachines-co/basic-memory/actions/runs/${{ github.run_id }}
82 | env:
83 | # Personal Access Token with repo scope for homebrew-basic-memory repo
84 | COMMITTER_TOKEN: ${{ secrets.HOMEBREW_TOKEN }}
85 |
86 |
```
--------------------------------------------------------------------------------
/src/basic_memory/mcp/tools/view_note.py:
--------------------------------------------------------------------------------
```python
1 | """View note tool for Basic Memory MCP server."""
2 |
3 | from textwrap import dedent
4 | from typing import Optional
5 |
6 | from loguru import logger
7 | from fastmcp import Context
8 |
9 | from basic_memory.mcp.server import mcp
10 | from basic_memory.mcp.tools.read_note import read_note
11 | from basic_memory.telemetry import track_mcp_tool
12 |
13 |
14 | @mcp.tool(
15 | description="View a note as a formatted artifact for better readability.",
16 | )
17 | async def view_note(
18 | identifier: str,
19 | project: Optional[str] = None,
20 | page: int = 1,
21 | page_size: int = 10,
22 | context: Context | None = None,
23 | ) -> str:
24 | """View a markdown note as a formatted artifact.
25 |
26 | This tool reads a note using the same logic as read_note but instructs Claude
27 | to display the content as a markdown artifact in the Claude Desktop app.
28 | Project parameter optional with server resolution.
29 |
30 | Args:
31 | identifier: The title or permalink of the note to view
32 | project: Project name to read from. Optional - server will resolve using hierarchy.
33 | If unknown, use list_memory_projects() to discover available projects.
34 | page: Page number for paginated results (default: 1)
35 | page_size: Number of items per page (default: 10)
36 | context: Optional FastMCP context for performance caching.
37 |
38 | Returns:
39 | Instructions for Claude to create a markdown artifact with the note content.
40 |
41 | Examples:
42 | # View a note by title
43 | view_note("Meeting Notes")
44 |
45 | # View a note by permalink
46 | view_note("meetings/weekly-standup")
47 |
48 | # View with pagination
49 | view_note("large-document", page=2, page_size=5)
50 |
51 | # Explicit project specification
52 | view_note("Meeting Notes", project="my-project")
53 |
54 | Raises:
55 | HTTPError: If project doesn't exist or is inaccessible
56 | SecurityError: If identifier attempts path traversal
57 | """
58 | track_mcp_tool("view_note")
59 | logger.info(f"Viewing note: {identifier} in project: {project}")
60 |
61 | # Call the existing read_note logic
62 | content = await read_note.fn(identifier, project, page, page_size, context)
63 |
64 | # Check if this is an error message (note not found)
65 | if "# Note Not Found" in content:
66 | return content # Return error message directly
67 |
68 | # Return instructions for Claude to create an artifact
69 | return dedent(f"""
70 | Note retrieved: "{identifier}"
71 |
72 | Display this note as a markdown artifact for the user.
73 |
74 | Content:
75 | ---
76 | {content}
77 | ---
78 | """).strip()
79 |
```
--------------------------------------------------------------------------------
/docker-compose.yml:
--------------------------------------------------------------------------------
```yaml
1 | # Docker Compose configuration for Basic Memory
2 | # See docs/Docker.md for detailed setup instructions
3 |
4 | version: '3.8'
5 |
6 | services:
7 | basic-memory:
8 | # Use pre-built image (recommended for most users)
9 | image: ghcr.io/basicmachines-co/basic-memory:latest
10 |
11 | # Uncomment to build locally instead:
12 | # build: .
13 |
14 | container_name: basic-memory-server
15 |
16 | # Volume mounts for knowledge directories and persistent data
17 | volumes:
18 |
19 | # Persistent storage for configuration and database
20 | - basic-memory-config:/root/.basic-memory:rw
21 |
22 | # Mount your knowledge directory (required)
23 | # Change './knowledge' to your actual Obsidian vault or knowledge directory
24 | - ./knowledge:/app/data:rw
25 |
26 | # OPTIONAL: Mount additional knowledge directories for multiple projects
27 | # - ./work-notes:/app/data/work:rw
28 | # - ./personal-notes:/app/data/personal:rw
29 |
30 | # You can edit the project config manually in the mounted config volume
31 | # The default project will be configured to use /app/data
32 | environment:
33 | # Project configuration
34 | - BASIC_MEMORY_DEFAULT_PROJECT=main
35 |
36 | # Enable real-time file synchronization (recommended for Docker)
37 | - BASIC_MEMORY_SYNC_CHANGES=true
38 |
39 | # Logging configuration
40 | - BASIC_MEMORY_LOG_LEVEL=INFO
41 |
42 | # Sync delay in milliseconds (adjust for performance vs responsiveness)
43 | - BASIC_MEMORY_SYNC_DELAY=1000
44 |
45 | # Port exposure for HTTP transport (only needed if not using STDIO)
46 | ports:
47 | - "8000:8000"
48 |
49 | # Command with SSE transport (configurable via environment variables above)
50 | # IMPORTANT: The SSE and streamable-http endpoints are not secured
51 | command: ["basic-memory", "mcp", "--transport", "sse", "--host", "0.0.0.0", "--port", "8000"]
52 |
53 | # Container management
54 | restart: unless-stopped
55 |
56 | # Health monitoring
57 | healthcheck:
58 | test: ["CMD", "basic-memory", "--version"]
59 | interval: 30s
60 | timeout: 10s
61 | retries: 3
62 | start_period: 30s
63 |
64 | # Optional: Resource limits
65 | # deploy:
66 | # resources:
67 | # limits:
68 | # memory: 512M
69 | # cpus: '0.5'
70 | # reservations:
71 | # memory: 256M
72 | # cpus: '0.25'
73 |
74 | volumes:
75 | # Named volume for persistent configuration and database
76 | # This ensures your configuration and knowledge graph persist across container restarts
77 | basic-memory-config:
78 | driver: local
79 |
80 | # Network configuration (optional)
81 | # networks:
82 | # basic-memory-net:
83 | # driver: bridge
```
--------------------------------------------------------------------------------
/src/basic_memory/mcp/resources/project_info.py:
--------------------------------------------------------------------------------
```python
1 | """Project info tool for Basic Memory MCP server."""
2 |
3 | from typing import Optional
4 |
5 | from loguru import logger
6 | from fastmcp import Context
7 |
8 | from basic_memory.mcp.async_client import get_client
9 | from basic_memory.mcp.project_context import get_active_project
10 | from basic_memory.mcp.server import mcp
11 | from basic_memory.mcp.tools.utils import call_get
12 | from basic_memory.schemas import ProjectInfoResponse
13 |
14 |
15 | @mcp.resource(
16 | uri="memory://{project}/info",
17 | description="Get information and statistics about the current Basic Memory project.",
18 | )
19 | async def project_info(
20 | project: Optional[str] = None, context: Context | None = None
21 | ) -> ProjectInfoResponse:
22 | """Get comprehensive information about the current Basic Memory project.
23 |
24 | This tool provides detailed statistics and status information about your
25 | Basic Memory project, including:
26 |
27 | - Project configuration
28 | - Entity, observation, and relation counts
29 | - Graph metrics (most connected entities, isolated entities)
30 | - Recent activity and growth over time
31 | - System status (database, watch service, version)
32 |
33 | Use this tool to:
34 | - Verify your Basic Memory installation is working correctly
35 | - Get insights into your knowledge base structure
36 | - Monitor growth and activity over time
37 | - Identify potential issues like unresolved relations
38 |
39 | Args:
40 | project: Optional project name. If not provided, uses default_project
41 | (if default_project_mode=true) or CLI constraint. If unknown,
42 | use list_memory_projects() to discover available projects.
43 | context: Optional FastMCP context for performance caching.
44 |
45 | Returns:
46 | Detailed project information and statistics
47 |
48 | Examples:
49 | # Get information about the current/default project
50 | info = await project_info()
51 |
52 | # Get information about a specific project
53 | info = await project_info(project="my-project")
54 |
55 | # Check entity counts
56 | print(f"Total entities: {info.statistics.total_entities}")
57 |
58 | # Check system status
59 | print(f"Basic Memory version: {info.system.version}")
60 | """
61 | logger.info("Getting project info")
62 |
63 | async with get_client() as client:
64 | project_config = await get_active_project(client, project, context)
65 | project_url = project_config.permalink
66 |
67 | # Call the API endpoint
68 | response = await call_get(client, f"{project_url}/project/info")
69 |
70 | # Convert response to ProjectInfoResponse
71 | return ProjectInfoResponse.model_validate(response.json())
72 |
```
--------------------------------------------------------------------------------
/src/basic_memory/api/v2/routers/search_router.py:
--------------------------------------------------------------------------------
```python
1 | """V2 router for search operations.
2 |
3 | This router uses external_id UUIDs for stable, API-friendly routing.
4 | V1 uses string-based project names which are less efficient and less stable.
5 | """
6 |
7 | from fastapi import APIRouter, BackgroundTasks, Path
8 |
9 | from basic_memory.api.routers.utils import to_search_results
10 | from basic_memory.schemas.search import SearchQuery, SearchResponse
11 | from basic_memory.deps import SearchServiceV2ExternalDep, EntityServiceV2ExternalDep
12 |
13 | # Note: No prefix here - it's added during registration as /v2/{project_id}/search
14 | router = APIRouter(tags=["search"])
15 |
16 |
17 | @router.post("/search/", response_model=SearchResponse)
18 | async def search(
19 | query: SearchQuery,
20 | search_service: SearchServiceV2ExternalDep,
21 | entity_service: EntityServiceV2ExternalDep,
22 | project_id: str = Path(..., description="Project external UUID"),
23 | page: int = 1,
24 | page_size: int = 10,
25 | ):
26 | """Search across all knowledge and documents in a project.
27 |
28 | V2 uses external_id UUIDs for stable API references.
29 |
30 | Args:
31 | project_id: Project external UUID from URL path
32 | query: Search query parameters (text, filters, etc.)
33 | search_service: Search service scoped to project
34 | entity_service: Entity service scoped to project
35 | page: Page number for pagination
36 | page_size: Number of results per page
37 |
38 | Returns:
39 | SearchResponse with paginated search results
40 | """
41 | limit = page_size
42 | offset = (page - 1) * page_size
43 | results = await search_service.search(query, limit=limit, offset=offset)
44 | search_results = await to_search_results(entity_service, results)
45 | return SearchResponse(
46 | results=search_results,
47 | current_page=page,
48 | page_size=page_size,
49 | )
50 |
51 |
52 | @router.post("/search/reindex")
53 | async def reindex(
54 | background_tasks: BackgroundTasks,
55 | search_service: SearchServiceV2ExternalDep,
56 | project_id: str = Path(..., description="Project external UUID"),
57 | ):
58 | """Recreate and populate the search index for a project.
59 |
60 | This is a background operation that rebuilds the search index
61 | from scratch. Useful after bulk updates or if the index becomes
62 | corrupted.
63 |
64 | Args:
65 | project_id: Project external UUID from URL path
66 | background_tasks: FastAPI background tasks handler
67 | search_service: Search service scoped to project
68 |
69 | Returns:
70 | Status message indicating reindex has been initiated
71 | """
72 | await search_service.reindex_all(background_tasks=background_tasks)
73 | return {"status": "ok", "message": "Reindex initiated"}
74 |
```
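A hypothetical client-side sketch of the pagination mapping: page 2 with page_size 10 becomes offset=10, limit=10 in the handler. The mount prefix follows the comment at the top of the router ("/v2/{project_id}/search"), and the `text` field mirrors the SearchQuery description above; both are assumptions for illustration.

```python
import httpx

async def search_project(base_url: str, project_id: str, text: str) -> dict:
    async with httpx.AsyncClient(base_url=base_url) as client:
        resp = await client.post(
            f"/v2/{project_id}/search/",          # prefix per the registration comment above
            params={"page": 2, "page_size": 10},  # handler computes offset=10, limit=10
            json={"text": text},                  # assumed SearchQuery body (text filter)
        )
        resp.raise_for_status()
        return resp.json()
```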
--------------------------------------------------------------------------------
/tests/services/test_project_service_operations.py:
--------------------------------------------------------------------------------
```python
1 | """Additional tests for ProjectService operations."""
2 |
3 | import os
4 | import tempfile
5 | from pathlib import Path
6 |
7 | import pytest
8 |
9 | from basic_memory.services.project_service import ProjectService
10 |
11 |
12 | @pytest.mark.asyncio
13 | async def test_get_project_from_database(project_service: ProjectService):
14 | """Test getting projects from the database."""
15 | # Generate unique project name for testing
16 | test_project_name = f"test-project-{os.urandom(4).hex()}"
17 | with tempfile.TemporaryDirectory() as temp_dir:
18 | test_root = Path(temp_dir)
19 | test_path = str(test_root / "test-project")
20 |
21 | # Make sure directory exists
22 | os.makedirs(test_path, exist_ok=True)
23 |
24 | try:
25 | # Add a project to the database
26 | project_data = {
27 | "name": test_project_name,
28 | "path": test_path,
29 | "permalink": test_project_name.lower().replace(" ", "-"),
30 | "is_active": True,
31 | "is_default": False,
32 | }
33 | await project_service.repository.create(project_data)
34 |
35 | # Verify we can get the project
36 | project = await project_service.repository.get_by_name(test_project_name)
37 | assert project is not None
38 | assert project.name == test_project_name
39 | assert project.path == test_path
40 |
41 | finally:
42 | # Clean up
43 | project = await project_service.repository.get_by_name(test_project_name)
44 | if project:
45 | await project_service.repository.delete(project.id)
46 |
47 |
48 | @pytest.mark.asyncio
49 | async def test_add_project_to_config(project_service: ProjectService, config_manager):
50 | """Test adding a project to the config manager."""
51 | # Generate unique project name for testing
52 | test_project_name = f"config-project-{os.urandom(4).hex()}"
53 | with tempfile.TemporaryDirectory() as temp_dir:
54 | test_root = Path(temp_dir)
55 | test_path = test_root / "config-project"
56 |
57 | # Make sure directory exists
58 | test_path.mkdir(parents=True, exist_ok=True)
59 |
60 | try:
61 | # Add a project to config only (using ConfigManager directly)
62 | config_manager.add_project(test_project_name, str(test_path))
63 |
64 | # Verify it's in the config
65 | assert test_project_name in project_service.projects
66 | assert Path(project_service.projects[test_project_name]) == test_path
67 |
68 | finally:
69 | # Clean up
70 | if test_project_name in project_service.projects:
71 | config_manager.remove_project(test_project_name)
72 |
```
--------------------------------------------------------------------------------
/src/basic_memory/mcp/clients/project.py:
--------------------------------------------------------------------------------
```python
1 | """Typed client for project API operations.
2 |
3 | Encapsulates project-level endpoints.
4 | """
5 |
6 | from typing import Any
7 |
8 | from httpx import AsyncClient
9 |
10 | from basic_memory.mcp.tools.utils import call_get, call_post, call_delete
11 | from basic_memory.schemas.project_info import ProjectList, ProjectStatusResponse
12 |
13 |
14 | class ProjectClient:
15 | """Typed client for project management operations.
16 |
17 | Centralizes:
18 | - API path construction for project endpoints
19 | - Response validation via Pydantic models
20 | - Consistent error handling through call_* utilities
21 |
22 | Note: This client does not require a project_id since it operates
23 | across projects.
24 |
25 | Usage:
26 | async with get_client() as http_client:
27 | client = ProjectClient(http_client)
28 | projects = await client.list_projects()
29 | """
30 |
31 | def __init__(self, http_client: AsyncClient):
32 | """Initialize the project client.
33 |
34 | Args:
35 | http_client: HTTPX AsyncClient for making requests
36 | """
37 | self.http_client = http_client
38 |
39 | async def list_projects(self) -> ProjectList:
40 | """List all available projects.
41 |
42 | Returns:
43 | ProjectList with all projects and default project name
44 |
45 | Raises:
46 | ToolError: If the request fails
47 | """
48 | response = await call_get(
49 | self.http_client,
50 | "/projects/projects",
51 | )
52 | return ProjectList.model_validate(response.json())
53 |
54 | async def create_project(self, project_data: dict[str, Any]) -> ProjectStatusResponse:
55 | """Create a new project.
56 |
57 | Args:
58 | project_data: Project creation data (name, path, set_default)
59 |
60 | Returns:
61 | ProjectStatusResponse with creation result
62 |
63 | Raises:
64 | ToolError: If the request fails
65 | """
66 | response = await call_post(
67 | self.http_client,
68 | "/projects/projects",
69 | json=project_data,
70 | )
71 | return ProjectStatusResponse.model_validate(response.json())
72 |
73 | async def delete_project(self, project_external_id: str) -> ProjectStatusResponse:
74 | """Delete a project by its external ID.
75 |
76 | Args:
77 | project_external_id: Project external ID (UUID)
78 |
79 | Returns:
80 | ProjectStatusResponse with deletion result
81 |
82 | Raises:
83 | ToolError: If the request fails
84 | """
85 | response = await call_delete(
86 | self.http_client,
87 | f"/v2/projects/{project_external_id}",
88 | )
89 | return ProjectStatusResponse.model_validate(response.json())
90 |
```
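A short usage sketch for `ProjectClient`, following the pattern in its docstring. It assumes `get_client` from `basic_memory.mcp.async_client` (used elsewhere in this repo) and that `ProjectList` exposes a `projects` collection, mirroring the cloud schemas.

```python
import asyncio

from basic_memory.mcp.async_client import get_client
from basic_memory.mcp.clients.project import ProjectClient


async def main() -> None:
    async with get_client() as http_client:
        client = ProjectClient(http_client)
        project_list = await client.list_projects()
        # Field name `projects` is an assumption based on the cloud schemas.
        for project in project_list.projects:
            print(project.name)


asyncio.run(main())
```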
--------------------------------------------------------------------------------
/src/basic_memory/schemas/sync_report.py:
--------------------------------------------------------------------------------
```python
1 | """Pydantic schemas for sync report responses."""
2 |
3 | from datetime import datetime
4 | from typing import TYPE_CHECKING, Dict, List, Set
5 |
6 | from pydantic import BaseModel, Field
7 |
8 | # avoid circular imports

9 | if TYPE_CHECKING: # pragma: no cover
10 | from basic_memory.sync.sync_service import SyncReport
11 |
12 |
13 | class SkippedFileResponse(BaseModel):
14 | """Information about a file that was skipped due to repeated failures."""
15 |
16 | path: str = Field(description="File path relative to project root")
17 | reason: str = Field(description="Error message from last failure")
18 | failure_count: int = Field(description="Number of consecutive failures")
19 | first_failed: datetime = Field(description="Timestamp of first failure")
20 |
21 | model_config = {"from_attributes": True}
22 |
23 |
24 | class SyncReportResponse(BaseModel):
25 | """Report of file changes found compared to database state.
26 |
27 | Used for API responses when scanning or syncing files.
28 | """
29 |
30 | new: Set[str] = Field(default_factory=set, description="Files on disk but not in database")
31 | modified: Set[str] = Field(default_factory=set, description="Files with different checksums")
32 | deleted: Set[str] = Field(default_factory=set, description="Files in database but not on disk")
33 | moves: Dict[str, str] = Field(
34 | default_factory=dict, description="Files moved (old_path -> new_path)"
35 | )
36 | checksums: Dict[str, str] = Field(
37 | default_factory=dict, description="Current file checksums (path -> checksum)"
38 | )
39 | skipped_files: List[SkippedFileResponse] = Field(
40 | default_factory=list, description="Files skipped due to repeated failures"
41 | )
42 | total: int = Field(description="Total number of changes")
43 |
44 | @classmethod
45 | def from_sync_report(cls, report: "SyncReport") -> "SyncReportResponse":
46 | """Convert SyncReport dataclass to Pydantic model.
47 |
48 | Args:
49 | report: SyncReport dataclass from sync service
50 |
51 | Returns:
52 | SyncReportResponse with same data
53 | """
54 | return cls(
55 | new=report.new,
56 | modified=report.modified,
57 | deleted=report.deleted,
58 | moves=report.moves,
59 | checksums=report.checksums,
60 | skipped_files=[
61 | SkippedFileResponse(
62 | path=skipped.path,
63 | reason=skipped.reason,
64 | failure_count=skipped.failure_count,
65 | first_failed=skipped.first_failed,
66 | )
67 | for skipped in report.skipped_files
68 | ],
69 | total=report.total,
70 | )
71 |
72 | model_config = {"from_attributes": True}
73 |
```
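A small sketch constructing a `SyncReportResponse` by hand; the field values are illustrative. Pydantic serializes the `Set` fields as JSON arrays.

```python
from datetime import datetime

from basic_memory.schemas.sync_report import SkippedFileResponse, SyncReportResponse

response = SyncReportResponse(
    new={"notes/idea.md"},
    modified={"notes/todo.md"},
    moves={"old/name.md": "new/name.md"},
    checksums={"notes/idea.md": "abc123"},
    skipped_files=[
        SkippedFileResponse(
            path="broken/file.md",
            reason="frontmatter parse error",  # illustrative value
            failure_count=3,
            first_failed=datetime(2025, 1, 1, 12, 0),
        )
    ],
    total=4,
)
print(response.model_dump_json(indent=2))
```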
--------------------------------------------------------------------------------
/tests/cli/cloud/test_rclone_config_and_bmignore_filters.py:
--------------------------------------------------------------------------------
```python
1 | import time
2 |
3 | from basic_memory.cli.commands.cloud.bisync_commands import convert_bmignore_to_rclone_filters
4 | from basic_memory.cli.commands.cloud.rclone_config import (
5 | configure_rclone_remote,
6 | get_rclone_config_path,
7 | )
8 | from basic_memory.ignore_utils import get_bmignore_path
9 |
10 |
11 | def test_convert_bmignore_to_rclone_filters_creates_and_converts(config_home):
12 | bmignore = get_bmignore_path()
13 | bmignore.parent.mkdir(parents=True, exist_ok=True)
14 | bmignore.write_text(
15 | "\n".join(
16 | [
17 | "# comment",
18 | "",
19 | "node_modules",
20 | "*.pyc",
21 | ".git",
22 | ]
23 | )
24 | + "\n",
25 | encoding="utf-8",
26 | )
27 |
28 | rclone_filter = convert_bmignore_to_rclone_filters()
29 | assert rclone_filter.exists()
30 | content = rclone_filter.read_text(encoding="utf-8").splitlines()
31 |
32 | # Comments/empties preserved
33 | assert "# comment" in content
34 | assert "" in content
35 | # Directory pattern becomes recursive exclude
36 | assert "- node_modules/**" in content
37 | # Wildcard pattern becomes simple exclude
38 | assert "- *.pyc" in content
39 | assert "- .git/**" in content
40 |
41 |
42 | def test_convert_bmignore_to_rclone_filters_is_cached_when_up_to_date(config_home):
43 | bmignore = get_bmignore_path()
44 | bmignore.parent.mkdir(parents=True, exist_ok=True)
45 | bmignore.write_text("node_modules\n", encoding="utf-8")
46 |
47 | first = convert_bmignore_to_rclone_filters()
48 | first_mtime = first.stat().st_mtime
49 |
50 | # Ensure bmignore is older than rclone filter file
51 | time.sleep(0.01)
52 | # Touch rclone filter to be "newer"
53 | first.write_text(first.read_text(encoding="utf-8"), encoding="utf-8")
54 |
55 | second = convert_bmignore_to_rclone_filters()
56 | assert second == first
57 | assert second.stat().st_mtime >= first_mtime
58 |
59 |
60 | def test_configure_rclone_remote_writes_config_and_backs_up_existing(config_home):
61 | cfg_path = get_rclone_config_path()
62 | cfg_path.parent.mkdir(parents=True, exist_ok=True)
63 | cfg_path.write_text("[other]\ntype = local\n", encoding="utf-8")
64 |
65 | remote = configure_rclone_remote(access_key="ak", secret_key="sk")
66 | assert remote == "basic-memory-cloud"
67 |
68 | # Config file updated
69 | text = cfg_path.read_text(encoding="utf-8")
70 | assert "[basic-memory-cloud]" in text
71 | assert "type = s3" in text
72 | assert "access_key_id = ak" in text
73 | assert "secret_access_key = sk" in text
74 | assert "encoding = Slash,InvalidUtf8" in text
75 |
76 | # Backup exists
77 | backups = list(cfg_path.parent.glob("rclone.conf.backup-*"))
78 | assert backups, "expected a backup of rclone.conf to be created"
79 |
```
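For reference, a minimal sketch of the conversion rules the first test asserts — not the shipped implementation in `bisync_commands`.

```python
def to_rclone_filter_line(pattern: str) -> str:
    """Approximate conversion of one .bmignore line to an rclone filter line."""
    stripped = pattern.strip()
    # Comments and blank lines pass through unchanged
    if not stripped or stripped.startswith("#"):
        return pattern
    # Wildcard patterns become plain excludes
    if "*" in stripped:
        return f"- {stripped}"
    # Bare directory names become recursive excludes
    return f"- {stripped}/**"


assert to_rclone_filter_line("node_modules") == "- node_modules/**"
assert to_rclone_filter_line("*.pyc") == "- *.pyc"
assert to_rclone_filter_line(".git") == "- .git/**"
```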
--------------------------------------------------------------------------------
/tests/utils/test_parse_tags.py:
--------------------------------------------------------------------------------
```python
1 | """Tests for parse_tags utility function."""
2 |
3 | from typing import List, Union
4 |
5 | import pytest
6 |
7 | from basic_memory.utils import parse_tags
8 |
9 |
10 | @pytest.mark.parametrize(
11 | "input_tags,expected",
12 | [
13 | # None input
14 | (None, []),
15 | # List inputs
16 | ([], []),
17 | (["tag1", "tag2"], ["tag1", "tag2"]),
18 | (["tag1", "", "tag2"], ["tag1", "tag2"]), # Empty tags are filtered
19 | ([" tag1 ", " tag2 "], ["tag1", "tag2"]), # Whitespace is stripped
20 | # String inputs
21 | ("", []),
22 | ("tag1", ["tag1"]),
23 | ("tag1,tag2", ["tag1", "tag2"]),
24 | ("tag1, tag2", ["tag1", "tag2"]), # Whitespace after comma is stripped
25 | ("tag1,,tag2", ["tag1", "tag2"]), # Empty tags are filtered
26 | # Tags with leading '#' characters - these should be stripped
27 | (["#tag1", "##tag2"], ["tag1", "tag2"]),
28 | ("#tag1,##tag2", ["tag1", "tag2"]),
29 | (["tag1", "#tag2", "##tag3"], ["tag1", "tag2", "tag3"]),
30 | # Mixed whitespace and '#' characters
31 | ([" #tag1 ", " ##tag2 "], ["tag1", "tag2"]),
32 | (" #tag1 , ##tag2 ", ["tag1", "tag2"]),
33 | # JSON stringified arrays (common AI assistant issue)
34 | ('["tag1", "tag2", "tag3"]', ["tag1", "tag2", "tag3"]),
35 | ('["system", "overview", "reference"]', ["system", "overview", "reference"]),
36 | ('["#tag1", "##tag2"]', ["tag1", "tag2"]), # JSON array with hash prefixes
37 | ('[ "tag1" , "tag2" ]', ["tag1", "tag2"]), # JSON array with extra spaces
38 | ],
39 | )
40 | def test_parse_tags(input_tags: Union[List[str], str, None], expected: List[str]) -> None:
41 | """Test tag parsing with various input formats."""
42 | result = parse_tags(input_tags)
43 | assert result == expected
44 |
45 |
46 | def test_parse_tags_special_case() -> None:
47 | """Test parsing from non-string, non-list types."""
48 |
49 | # Test with custom object that has __str__ method
50 | class TagObject:
51 | def __str__(self) -> str:
52 | return "tag1,tag2"
53 |
54 | result = parse_tags(TagObject()) # pyright: ignore [reportArgumentType]
55 | assert result == ["tag1", "tag2"]
56 |
57 |
58 | def test_parse_tags_invalid_json() -> None:
59 | """Test that invalid JSON strings fall back to comma-separated parsing."""
60 | # Invalid JSON should fall back to comma-separated parsing
61 | result = parse_tags("[invalid json")
62 | assert result == ["[invalid json"] # Treated as single tag
63 |
64 | result = parse_tags("[tag1, tag2]") # Valid bracket format but not JSON
65 | assert result == ["[tag1", "tag2]"] # Split by comma
66 |
67 | result = parse_tags('["tag1", "tag2"') # Incomplete JSON
68 | assert result == ['["tag1"', '"tag2"'] # Fall back to comma separation
69 |
```
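Usage in a nutshell, grounded in the parametrized cases above:

```python
from basic_memory.utils import parse_tags

print(parse_tags("  #python , ##mcp "))      # ['python', 'mcp']
print(parse_tags(["#tag1", "", " tag2 "]))   # ['tag1', 'tag2']
print(parse_tags('["system", "overview"]'))  # ['system', 'overview']
print(parse_tags(None))                      # []
```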
--------------------------------------------------------------------------------
/.github/workflows/claude-issue-triage.yml:
--------------------------------------------------------------------------------
```yaml
1 | name: Claude Issue Triage
2 |
3 | on:
4 | issues:
5 | types: [opened]
6 |
7 | jobs:
8 | triage:
9 | runs-on: ubuntu-latest
10 | permissions:
11 | issues: write
12 | id-token: write
13 | steps:
14 | - name: Checkout repository
15 | uses: actions/checkout@v4
16 | with:
17 | fetch-depth: 1
18 |
19 | - name: Run Claude Issue Triage
20 | uses: anthropics/claude-code-action@v1
21 | with:
22 | claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
23 | track_progress: true # Show triage progress
24 | prompt: |
25 | Analyze this new Basic Memory issue and perform triage:
26 |
27 | **Issue Analysis:**
28 | 1. **Type Classification:**
29 | - Bug report (code defect)
30 | - Feature request (new functionality)
31 | - Enhancement (improvement to existing feature)
32 | - Documentation (docs improvement)
33 | - Question/Support (user help)
34 | - MCP tool issue (specific to MCP functionality)
35 |
36 | 2. **Priority Assessment:**
37 | - Critical: Security issues, data loss, complete breakage
38 | - High: Major functionality broken, affects many users
39 | - Medium: Minor bugs, usability issues
40 | - Low: Nice-to-have improvements, cosmetic issues
41 |
42 | 3. **Component Classification:**
43 | - CLI commands
44 | - MCP tools
45 | - Database/sync
46 | - Cloud functionality
47 | - Documentation
48 | - Testing
49 |
50 | 4. **Complexity Estimate:**
51 | - Simple: Quick fix, documentation update
52 | - Medium: Requires some investigation/testing
53 | - Complex: Major feature work, architectural changes
54 |
55 | **Actions to Take:**
56 | 1. Add appropriate labels using: `gh issue edit ${{ github.event.issue.number }} --add-label "label1,label2"`
57 | 2. Check for duplicates using: `gh search issues`
58 | 3. If duplicate found, comment mentioning the original issue
59 | 4. For feature requests, ask clarifying questions if needed
60 | 5. For bugs, request reproduction steps if missing
61 |
62 | **Available Labels:**
63 | - Type: bug, enhancement, feature, documentation, question, mcp-tool
64 | - Priority: critical, high, medium, low
65 | - Component: cli, mcp, database, cloud, docs, testing
66 | - Complexity: simple, medium, complex
67 | - Status: needs-reproduction, needs-clarification, duplicate
68 |
69 | Read the issue carefully and provide helpful triage with appropriate labels.
70 |
71 | claude_args: '--allowed-tools "Bash(gh issue:*),Bash(gh search:*),Read"'
```
--------------------------------------------------------------------------------
/src/basic_memory/api/routers/management_router.py:
--------------------------------------------------------------------------------
```python
1 | """Management router for basic-memory API."""
2 |
3 | import asyncio
4 |
5 | from fastapi import APIRouter, Request
6 | from loguru import logger
7 | from pydantic import BaseModel
8 |
9 | from basic_memory.config import ConfigManager
10 | from basic_memory.deps import SyncServiceDep, ProjectRepositoryDep
11 |
12 | router = APIRouter(prefix="/management", tags=["management"])
13 |
14 |
15 | class WatchStatusResponse(BaseModel):
16 | """Response model for watch status."""
17 |
18 | running: bool
19 | """Whether the watch service is currently running."""
20 |
21 |
22 | @router.get("/watch/status", response_model=WatchStatusResponse)
23 | async def get_watch_status(request: Request) -> WatchStatusResponse:
24 | """Get the current status of the watch service."""
25 | return WatchStatusResponse(
26 | running=request.app.state.watch_task is not None and not request.app.state.watch_task.done()
27 | )
28 |
29 |
30 | @router.post("/watch/start", response_model=WatchStatusResponse)
31 | async def start_watch_service(
32 | request: Request, project_repository: ProjectRepositoryDep, sync_service: SyncServiceDep
33 | ) -> WatchStatusResponse:
34 | """Start the watch service if it's not already running."""
35 |
36 | # needed because of circular imports from sync -> app
37 | from basic_memory.sync import WatchService
38 | from basic_memory.sync.background_sync import create_background_sync_task
39 |
40 | if request.app.state.watch_task is not None and not request.app.state.watch_task.done():
41 | # Watch service is already running
42 | return WatchStatusResponse(running=True)
43 |
44 | app_config = ConfigManager().config
45 |
46 | # Create and start a new watch service
47 | logger.info("Starting watch service via management API")
48 |
49 | # Get services needed for the watch task
50 | watch_service = WatchService(
51 | app_config=app_config,
52 | project_repository=project_repository,
53 | )
54 |
55 | # Create and store the task
56 | watch_task = create_background_sync_task(sync_service, watch_service)
57 | request.app.state.watch_task = watch_task
58 |
59 | return WatchStatusResponse(running=True)
60 |
61 |
62 | @router.post("/watch/stop", response_model=WatchStatusResponse)
63 | async def stop_watch_service(request: Request) -> WatchStatusResponse: # pragma: no cover
64 | """Stop the watch service if it's running."""
65 | if request.app.state.watch_task is None or request.app.state.watch_task.done():
66 | # Watch service is not running
67 | return WatchStatusResponse(running=False)
68 |
69 | # Cancel the running task
70 | logger.info("Stopping watch service via management API")
71 | request.app.state.watch_task.cancel()
72 |
73 | # Wait for it to be properly cancelled
74 | try:
75 | await request.app.state.watch_task
76 | except asyncio.CancelledError:
77 | pass
78 |
79 | request.app.state.watch_task = None
80 | return WatchStatusResponse(running=False)
81 |
```
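A sketch of hitting the watch-status endpoint in-process over ASGI. The `app` import path is an assumption, and `app.state.watch_task` is set by the app's lifespan, so in practice this needs to run within that lifespan (plain `ASGITransport` does not trigger lifespan events).

```python
import asyncio

import httpx

from basic_memory.api.app import app  # import path assumed


async def main() -> None:
    transport = httpx.ASGITransport(app=app)
    async with httpx.AsyncClient(transport=transport, base_url="http://test") as client:
        # Note: app.state.watch_task is initialized during the app lifespan.
        resp = await client.get("/management/watch/status")
        print(resp.json())  # e.g. {"running": false}


asyncio.run(main())
```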
--------------------------------------------------------------------------------
/tests/sync/test_sync_wikilink_issue.py:
--------------------------------------------------------------------------------
```python
1 | """Test for issue #72 - notes with wikilinks staying in modified status."""
2 |
3 | from pathlib import Path
4 |
5 | import pytest
6 |
7 | from basic_memory.sync.sync_service import SyncService
8 |
9 |
10 | async def create_test_file(path: Path, content: str) -> None:
11 | """Create a test file with given content."""
12 | path.parent.mkdir(parents=True, exist_ok=True)
13 | path.write_text(content)
14 |
15 |
16 | async def force_full_scan(sync_service: SyncService) -> None:
17 | """Force next sync to do a full scan by clearing watermark (for testing moves/deletions)."""
18 | if sync_service.entity_repository.project_id is not None:
19 | project = await sync_service.project_repository.find_by_id(
20 | sync_service.entity_repository.project_id
21 | )
22 | if project:
23 | await sync_service.project_repository.update(
24 | project.id,
25 | {
26 | "last_scan_timestamp": None,
27 | "last_file_count": None,
28 | },
29 | )
30 |
31 |
32 | @pytest.mark.asyncio
33 | async def test_wikilink_modified_status_issue(sync_service: SyncService, project_config):
34 | """Test that files with wikilinks don't remain in modified status after sync."""
35 | project_dir = project_config.home
36 |
37 | # Create a file with a wikilink
38 | content = """---
39 | title: Test Wikilink
40 | type: note
41 | ---
42 | # Test File
43 |
44 | This file contains a wikilink to [[another-file]].
45 | """
46 | test_file_path = project_dir / "test_wikilink.md"
47 | await create_test_file(test_file_path, content)
48 |
49 | # Initial sync
50 | report1 = await sync_service.sync(project_config.home)
51 | assert "test_wikilink.md" in report1.new
52 | assert "test_wikilink.md" not in report1.modified
53 |
54 | # Sync again without changing the file - should not be modified
55 | report2 = await sync_service.sync(project_config.home)
56 | assert "test_wikilink.md" not in report2.new
57 | assert "test_wikilink.md" not in report2.modified
58 |
59 | # Create the target file
60 | target_content = """---
61 | title: Another File
62 | type: note
63 | ---
64 | # Another File
65 |
66 | This is the target file.
67 | """
68 | target_file_path = project_dir / "another_file.md"
69 | await create_test_file(target_file_path, target_content)
70 |
71 | # Force full scan to detect the new file
72 | # (file just created may not be newer than watermark due to timing precision)
73 | await force_full_scan(sync_service)
74 |
75 | # Sync again after adding target file
76 | report3 = await sync_service.sync(project_config.home)
77 | assert "another_file.md" in report3.new
78 | assert "test_wikilink.md" not in report3.modified
79 |
80 | # Sync one more time - both files should now be stable
81 | report4 = await sync_service.sync(project_config.home)
82 | assert "test_wikilink.md" not in report4.modified
83 | assert "another_file.md" not in report4.modified
84 |
```
--------------------------------------------------------------------------------
/tests/mcp/test_async_client_modes.py:
--------------------------------------------------------------------------------
```python
1 | from contextlib import asynccontextmanager
2 |
3 | import httpx
4 | import pytest
5 |
6 | from basic_memory.cli.auth import CLIAuth
7 | from basic_memory.mcp import async_client as async_client_module
8 | from basic_memory.mcp.async_client import get_client, set_client_factory
9 |
10 |
11 | @pytest.fixture(autouse=True)
12 | def _reset_async_client_factory():
13 | async_client_module._client_factory = None
14 | yield
15 | async_client_module._client_factory = None
16 |
17 |
18 | @pytest.mark.asyncio
19 | async def test_get_client_uses_injected_factory(monkeypatch):
20 | seen = {"used": False}
21 |
22 | @asynccontextmanager
23 | async def factory():
24 | seen["used"] = True
25 | async with httpx.AsyncClient(base_url="https://example.test") as client:
26 | yield client
27 |
28 |     # Inject the custom factory (the autouse fixture resets it after the test)
29 | set_client_factory(factory)
30 | async with get_client() as client:
31 | assert str(client.base_url) == "https://example.test"
32 | assert seen["used"] is True
33 |
34 |
35 | @pytest.mark.asyncio
36 | async def test_get_client_cloud_mode_injects_auth_header(config_manager, config_home):
37 | cfg = config_manager.load_config()
38 | cfg.cloud_mode = True
39 | cfg.cloud_host = "https://cloud.example.test"
40 | cfg.cloud_client_id = "cid"
41 | cfg.cloud_domain = "https://auth.example.test"
42 | config_manager.save_config(cfg)
43 |
44 | # Write token for CLIAuth so get_client() can authenticate without network
45 | auth = CLIAuth(client_id=cfg.cloud_client_id, authkit_domain=cfg.cloud_domain)
46 | auth.token_file.parent.mkdir(parents=True, exist_ok=True)
47 | auth.token_file.write_text(
48 | '{"access_token":"token-123","refresh_token":null,"expires_at":9999999999,"token_type":"Bearer"}',
49 | encoding="utf-8",
50 | )
51 |
52 | async with get_client() as client:
53 | assert str(client.base_url).rstrip("/") == "https://cloud.example.test/proxy"
54 | assert client.headers.get("Authorization") == "Bearer token-123"
55 |
56 |
57 | @pytest.mark.asyncio
58 | async def test_get_client_cloud_mode_raises_when_not_authenticated(config_manager):
59 | cfg = config_manager.load_config()
60 | cfg.cloud_mode = True
61 | cfg.cloud_host = "https://cloud.example.test"
62 | cfg.cloud_client_id = "cid"
63 | cfg.cloud_domain = "https://auth.example.test"
64 | config_manager.save_config(cfg)
65 |
66 | # No token file written -> should raise
67 | with pytest.raises(RuntimeError, match="Cloud mode enabled but not authenticated"):
68 | async with get_client():
69 | pass
70 |
71 |
72 | @pytest.mark.asyncio
73 | async def test_get_client_local_mode_uses_asgi_transport(config_manager):
74 | cfg = config_manager.load_config()
75 | cfg.cloud_mode = False
76 | config_manager.save_config(cfg)
77 |
78 | async with get_client() as client:
79 | # httpx stores ASGITransport privately, but we can still sanity-check type
80 | assert isinstance(client._transport, httpx.ASGITransport) # pyright: ignore[reportPrivateUsage]
81 |
```
--------------------------------------------------------------------------------
/src/basic_memory/api/routers/directory_router.py:
--------------------------------------------------------------------------------
```python
1 | """Router for directory tree operations."""
2 |
3 | from typing import List, Optional
4 |
5 | from fastapi import APIRouter, Query
6 |
7 | from basic_memory.deps import DirectoryServiceDep, ProjectIdDep
8 | from basic_memory.schemas.directory import DirectoryNode
9 |
10 | router = APIRouter(prefix="/directory", tags=["directory"])
11 |
12 |
13 | @router.get("/tree", response_model=DirectoryNode, response_model_exclude_none=True)
14 | async def get_directory_tree(
15 | directory_service: DirectoryServiceDep,
16 | project_id: ProjectIdDep,
17 | ):
18 | """Get hierarchical directory structure from the knowledge base.
19 |
20 | Args:
21 | directory_service: Service for directory operations
22 | project_id: ID of the current project
23 |
24 | Returns:
25 | DirectoryNode representing the root of the hierarchical tree structure
26 | """
27 | # Get a hierarchical directory tree for the specific project
28 | tree = await directory_service.get_directory_tree()
29 |
30 | # Return the hierarchical tree
31 | return tree
32 |
33 |
34 | @router.get("/structure", response_model=DirectoryNode, response_model_exclude_none=True)
35 | async def get_directory_structure(
36 | directory_service: DirectoryServiceDep,
37 | project_id: ProjectIdDep,
38 | ):
39 | """Get folder structure for navigation (no files).
40 |
41 | Optimized endpoint for folder tree navigation. Returns only directory nodes
42 | without file metadata. For full tree with files, use /directory/tree.
43 |
44 | Args:
45 | directory_service: Service for directory operations
46 | project_id: ID of the current project
47 |
48 | Returns:
49 | DirectoryNode tree containing only folders (type="directory")
50 | """
51 | structure = await directory_service.get_directory_structure()
52 | return structure
53 |
54 |
55 | @router.get("/list", response_model=List[DirectoryNode], response_model_exclude_none=True)
56 | async def list_directory(
57 | directory_service: DirectoryServiceDep,
58 | project_id: ProjectIdDep,
59 | dir_name: str = Query("/", description="Directory path to list"),
60 | depth: int = Query(1, ge=1, le=10, description="Recursion depth (1-10)"),
61 | file_name_glob: Optional[str] = Query(
62 | None, description="Glob pattern for filtering file names"
63 | ),
64 | ):
65 | """List directory contents with filtering and depth control.
66 |
67 | Args:
68 | directory_service: Service for directory operations
69 | project_id: ID of the current project
70 | dir_name: Directory path to list (default: root "/")
71 | depth: Recursion depth (1-10, default: 1 for immediate children only)
72 | file_name_glob: Optional glob pattern for filtering file names (e.g., "*.md", "*meeting*")
73 |
74 | Returns:
75 | List of DirectoryNode objects matching the criteria
76 | """
77 | # Get directory listing with filtering
78 | nodes = await directory_service.list_directory(
79 | dir_name=dir_name,
80 | depth=depth,
81 | file_name_glob=file_name_glob,
82 | )
83 |
84 | return nodes
85 |
```
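A sketch calling `/directory/list` in-process. The `app` import path is assumed, and project resolution via `ProjectIdDep` may require extra request context (headers or path scoping) not shown here.

```python
import asyncio

import httpx

from basic_memory.api.app import app  # import path assumed


async def main() -> None:
    transport = httpx.ASGITransport(app=app)
    async with httpx.AsyncClient(transport=transport, base_url="http://test") as client:
        # Markdown files up to two levels below /notes
        resp = await client.get(
            "/directory/list",
            params={"dir_name": "/notes", "depth": 2, "file_name_glob": "*.md"},
        )
        for node in resp.json():
            print(node)


asyncio.run(main())
```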
--------------------------------------------------------------------------------
/src/basic_memory/cli/commands/mcp.py:
--------------------------------------------------------------------------------
```python
1 | """MCP server command with streamable HTTP transport."""
2 |
3 | import os
4 | import typer
5 | from typing import Optional
6 |
7 | from basic_memory.cli.app import app
8 | from basic_memory.config import ConfigManager, init_mcp_logging
9 |
10 | # Import mcp instance (has lifespan that handles initialization and file sync)
11 | from basic_memory.mcp.server import mcp as mcp_server # pragma: no cover
12 |
13 | # Import mcp tools to register them
14 | import basic_memory.mcp.tools # noqa: F401 # pragma: no cover
15 |
16 | # Import prompts to register them
17 | import basic_memory.mcp.prompts # noqa: F401 # pragma: no cover
18 | from loguru import logger
19 |
20 | config = ConfigManager().config
21 |
22 | if not config.cloud_mode_enabled:
23 |
24 | @app.command()
25 | def mcp(
26 | transport: str = typer.Option(
27 | "stdio", help="Transport type: stdio, streamable-http, or sse"
28 | ),
29 | host: str = typer.Option(
30 | "0.0.0.0", help="Host for HTTP transports (use 0.0.0.0 to allow external connections)"
31 | ),
32 | port: int = typer.Option(8000, help="Port for HTTP transports"),
33 | path: str = typer.Option("/mcp", help="Path prefix for streamable-http transport"),
34 | project: Optional[str] = typer.Option(None, help="Restrict MCP server to single project"),
35 | ): # pragma: no cover
36 | """Run the MCP server with configurable transport options.
37 |
38 | This command starts an MCP server using one of three transport options:
39 |
40 |         - stdio: Standard I/O (default; good for local usage)
41 |         - streamable-http: Recommended for web deployments
42 |         - sse: Server-Sent Events (for compatibility with existing clients)
43 |
44 | Initialization, file sync, and cleanup are handled by the MCP server's lifespan.
45 | """
46 | # Initialize logging for MCP (file only, stdout breaks protocol)
47 | init_mcp_logging()
48 |
49 | # Validate and set project constraint if specified
50 | if project:
51 | config_manager = ConfigManager()
52 | project_name, _ = config_manager.get_project(project)
53 | if not project_name:
54 | typer.echo(f"No project found named: {project}", err=True)
55 | raise typer.Exit(1)
56 |
57 | # Set env var with validated project name
58 | os.environ["BASIC_MEMORY_MCP_PROJECT"] = project_name
59 | logger.info(f"MCP server constrained to project: {project_name}")
60 |
61 | # Run the MCP server (blocks)
62 | # Lifespan handles: initialization, migrations, file sync, cleanup
63 | logger.info(f"Starting MCP server with {transport.upper()} transport")
64 |
65 | if transport == "stdio":
66 | mcp_server.run(
67 | transport=transport,
68 | )
69 | elif transport == "streamable-http" or transport == "sse":
70 | mcp_server.run(
71 | transport=transport,
72 | host=host,
73 | port=port,
74 | path=path,
75 | log_level="INFO",
76 |             )
77 |         else:
78 |             typer.echo(f"Unsupported transport: {transport}", err=True)
79 |             raise typer.Exit(1)
80 | 
```
--------------------------------------------------------------------------------
/specs/SPEC-2 Slash Commands Reference.md:
--------------------------------------------------------------------------------
```markdown
1 | ---
2 | title: 'SPEC-2: Slash Commands Reference'
3 | type: spec
4 | permalink: specs/spec-2-slash-commands-reference
5 | tags:
6 | - commands
7 | - process
8 | - reference
9 | ---
10 |
11 | # SPEC-2: Slash Commands Reference
12 |
13 | This document defines the slash commands used in our specification-driven development process.
14 |
15 | ## /spec create [name]
16 |
17 | **Purpose**: Create a new specification document
18 |
19 | **Usage**: `/spec create notes-decomposition`
20 |
21 | **Process**:
22 | 1. Create new spec document in `/specs` folder
23 | 2. Use SPEC-XXX numbering format (auto-increment)
24 | 3. Include standard spec template:
25 | - Why (reasoning/problem)
26 | - What (affected areas)
27 | - How (high-level approach)
28 | - How to Evaluate (testing/validation)
29 | 4. Tag appropriately for knowledge graph
30 | 5. Link to related specs/components
31 |
32 | **Template**:
33 | ```markdown
34 | # SPEC-XXX: [Title]
35 |
36 | ## Why
37 | [Problem statement and reasoning]
38 |
39 | ## What
40 | [What is affected or changed]
41 |
42 | ## How (High Level)
43 | [Approach to implementation]
44 |
45 | ## How to Evaluate
46 | [Testing/validation procedure]
47 |
48 | ## Notes
49 | [Additional context as needed]
50 | ```
51 |
52 | ## /spec status
53 |
54 | **Purpose**: Show current status of all specifications
55 |
56 | **Usage**: `/spec status`
57 |
58 | **Process**:
59 | 1. Search all specs in `/specs` folder
60 | 2. Display table showing:
61 | - Spec number and title
62 | - Status (draft, approved, implementing, complete)
63 | - Assigned agent (if any)
64 | - Last updated
65 | - Dependencies
66 |
67 | ## /spec implement [name]
68 |
69 | **Purpose**: Hand specification to appropriate agent for implementation
70 |
71 | **Usage**: `/spec implement SPEC-002`
72 |
73 | **Process**:
74 | 1. Read the specified spec
75 | 2. Analyze requirements to determine appropriate agent:
76 | - Frontend components → vue-developer
77 | - Architecture/system design → system-architect
78 | - Backend/API → python-developer
79 | 3. Launch agent with spec context
80 | 4. Agent creates implementation plan
81 | 5. Update spec with implementation status
82 |
83 | ## /spec review [name]
84 |
85 | **Purpose**: Review implementation against specification criteria
86 |
87 | **Usage**: `/spec review SPEC-002`
88 |
89 | **Process**:
90 | 1. Read original spec and "How to Evaluate" section
91 | 2. Examine current implementation
92 | 3. Test against success criteria
93 | 4. Document gaps or issues
94 | 5. Update spec with review results
95 | 6. Recommend next actions (complete, revise, iterate)
96 |
97 | ## Command Extensions
98 |
99 | As the process evolves, we may add:
100 | - `/spec link [spec1] [spec2]` - Create dependency links
101 | - `/spec archive [name]` - Archive completed specs
102 | - `/spec template [type]` - Create spec from template
103 | - `/spec search [query]` - Search spec content
104 |
105 | ## References
106 |
107 | - Claude Slash commands: https://docs.anthropic.com/en/docs/claude-code/slash-commands
108 |
109 | ## Creating a command
110 |
111 | Commands are implemented as Claude slash commands:
112 |
113 | Location in repo: .claude/commands/
114 |
115 | In the following example, we create the /optimize command:
116 | ```bash
117 | # Create a project command
118 | mkdir -p .claude/commands
119 | echo "Analyze this code for performance issues and suggest optimizations:" > .claude/commands/optimize.md
120 | ```
121 |
```
--------------------------------------------------------------------------------
/src/basic_memory/repository/observation_repository.py:
--------------------------------------------------------------------------------
```python
1 | """Repository for managing Observation objects."""
2 |
3 | from typing import Dict, List, Sequence
4 |
5 |
6 | from sqlalchemy import select
7 | from sqlalchemy.ext.asyncio import async_sessionmaker
8 |
9 | from basic_memory.models import Observation
10 | from basic_memory.repository.repository import Repository
11 |
12 |
13 | class ObservationRepository(Repository[Observation]):
14 | """Repository for Observation model with memory-specific operations."""
15 |
16 | def __init__(self, session_maker: async_sessionmaker, project_id: int):
17 | """Initialize with session maker and project_id filter.
18 |
19 | Args:
20 | session_maker: SQLAlchemy session maker
21 | project_id: Project ID to filter all operations by
22 | """
23 | super().__init__(session_maker, Observation, project_id=project_id)
24 |
25 | async def find_by_entity(self, entity_id: int) -> Sequence[Observation]:
26 | """Find all observations for a specific entity."""
27 | query = select(Observation).filter(Observation.entity_id == entity_id)
28 | result = await self.execute_query(query)
29 | return result.scalars().all()
30 |
31 | async def find_by_context(self, context: str) -> Sequence[Observation]:
32 | """Find observations with a specific context."""
33 | query = select(Observation).filter(Observation.context == context)
34 | result = await self.execute_query(query)
35 | return result.scalars().all()
36 |
37 | async def find_by_category(self, category: str) -> Sequence[Observation]:
38 | """Find observations with a specific context."""
39 | query = select(Observation).filter(Observation.category == category)
40 | result = await self.execute_query(query)
41 | return result.scalars().all()
42 |
43 | async def observation_categories(self) -> Sequence[str]:
44 | """Return a list of all observation categories."""
45 | query = select(Observation.category).distinct()
46 | result = await self.execute_query(query, use_query_options=False)
47 | return result.scalars().all()
48 |
49 | async def find_by_entities(self, entity_ids: List[int]) -> Dict[int, List[Observation]]:
50 | """Find all observations for multiple entities in a single query.
51 |
52 | Args:
53 | entity_ids: List of entity IDs to fetch observations for
54 |
55 | Returns:
56 | Dictionary mapping entity_id to list of observations
57 | """
58 | if not entity_ids: # pragma: no cover
59 | return {}
60 |
61 | # Query observations for all entities in the list
62 | query = select(Observation).filter(Observation.entity_id.in_(entity_ids))
63 | result = await self.execute_query(query)
64 | observations = result.scalars().all()
65 |
66 | # Group observations by entity_id
67 | observations_by_entity = {}
68 | for obs in observations:
69 | if obs.entity_id not in observations_by_entity:
70 | observations_by_entity[obs.entity_id] = []
71 | observations_by_entity[obs.entity_id].append(obs)
72 |
73 | return observations_by_entity
74 |
```
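Given a wired-up repository, `find_by_entities` replaces N per-entity queries with a single `IN` query. A short usage sketch (the entity IDs are illustrative):

```python
from basic_memory.repository.observation_repository import ObservationRepository


async def show_observation_counts(repo: ObservationRepository) -> None:
    grouped = await repo.find_by_entities([1, 2, 3])  # illustrative IDs
    for entity_id, observations in grouped.items():
        print(f"entity {entity_id}: {len(observations)} observations")
```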
--------------------------------------------------------------------------------
/src/basic_memory/cli/app.py:
--------------------------------------------------------------------------------
```python
1 | # Suppress Logfire "not configured" warning - we only use Logfire in cloud/server contexts
2 | import os
3 |
4 | os.environ.setdefault("LOGFIRE_IGNORE_NO_CONFIG", "1")
5 |
6 | # Remove loguru's default handler IMMEDIATELY, before any other imports.
7 | # This prevents DEBUG logs from appearing on stdout during module-level
8 | # initialization (e.g., template_loader.TemplateLoader() logs at DEBUG level).
9 | from loguru import logger
10 |
11 | logger.remove()
12 |
13 | from typing import Optional # noqa: E402
14 |
15 | import typer # noqa: E402
16 |
17 | from basic_memory.cli.container import CliContainer, set_container # noqa: E402
18 | from basic_memory.config import init_cli_logging # noqa: E402
19 | from basic_memory.telemetry import show_notice_if_needed, track_app_started # noqa: E402
20 |
21 |
22 | def version_callback(value: bool) -> None:
23 | """Show version and exit."""
24 | if value: # pragma: no cover
25 | import basic_memory
26 |
27 | typer.echo(f"Basic Memory version: {basic_memory.__version__}")
28 | raise typer.Exit()
29 |
30 |
31 | app = typer.Typer(name="basic-memory")
32 |
33 |
34 | @app.callback()
35 | def app_callback(
36 | ctx: typer.Context,
37 | version: Optional[bool] = typer.Option(
38 | None,
39 | "--version",
40 | "-v",
41 | help="Show version and exit.",
42 | callback=version_callback,
43 | is_eager=True,
44 | ),
45 | ) -> None:
46 | """Basic Memory - Local-first personal knowledge management."""
47 |
48 | # Initialize logging for CLI (file only, no stdout)
49 | init_cli_logging()
50 |
51 | # --- Composition Root ---
52 | # Create container and read config (single point of config access)
53 | container = CliContainer.create()
54 | set_container(container)
55 |
56 | # Show telemetry notice and track CLI startup
57 | # Skip for 'mcp' command - it handles its own telemetry in lifespan
58 | # Skip for 'telemetry' command - avoid issues when user is managing telemetry
59 | if ctx.invoked_subcommand not in {"mcp", "telemetry"}:
60 | show_notice_if_needed()
61 | track_app_started("cli")
62 |
63 | # Run initialization for commands that don't use the API
64 | # Skip for 'mcp' command - it has its own lifespan that handles initialization
65 | # Skip for API-using commands (status, sync, etc.) - they handle initialization via deps.py
66 | # Skip for 'reset' command - it manages its own database lifecycle
67 | skip_init_commands = {"mcp", "status", "sync", "project", "tool", "reset"}
68 | if (
69 | not version
70 | and ctx.invoked_subcommand is not None
71 | and ctx.invoked_subcommand not in skip_init_commands
72 | ):
73 | from basic_memory.services.initialization import ensure_initialization
74 |
75 | ensure_initialization(container.config)
76 |
77 |
78 | ## import
79 | # Register sub-command groups
80 | import_app = typer.Typer(help="Import data from various sources")
81 | app.add_typer(import_app, name="import")
82 |
83 | claude_app = typer.Typer(help="Import Conversations from Claude JSON export.")
84 | import_app.add_typer(claude_app, name="claude")
85 |
86 |
87 | ## cloud
88 |
89 | cloud_app = typer.Typer(help="Access Basic Memory Cloud")
90 | app.add_typer(cloud_app, name="cloud")
91 |
```
--------------------------------------------------------------------------------
/src/basic_memory/api/routers/memory_router.py:
--------------------------------------------------------------------------------
```python
1 | """Routes for memory:// URI operations."""
2 |
3 | from typing import Annotated, Optional
4 |
5 | from fastapi import APIRouter, Query
6 | from loguru import logger
7 |
8 | from basic_memory.deps import ContextServiceDep, EntityRepositoryDep
9 | from basic_memory.schemas.base import TimeFrame, parse_timeframe
10 | from basic_memory.schemas.memory import (
11 | GraphContext,
12 | normalize_memory_url,
13 | )
14 | from basic_memory.schemas.search import SearchItemType
15 | from basic_memory.api.routers.utils import to_graph_context
16 |
17 | router = APIRouter(prefix="/memory", tags=["memory"])
18 |
19 |
20 | @router.get("/recent", response_model=GraphContext)
21 | async def recent(
22 | context_service: ContextServiceDep,
23 | entity_repository: EntityRepositoryDep,
24 | type: Annotated[list[SearchItemType] | None, Query()] = None,
25 | depth: int = 1,
26 | timeframe: TimeFrame = "7d",
27 | page: int = 1,
28 | page_size: int = 10,
29 | max_related: int = 10,
30 | ) -> GraphContext:
31 | # return all types by default
32 | types = (
33 | [SearchItemType.ENTITY, SearchItemType.RELATION, SearchItemType.OBSERVATION]
34 | if not type
35 | else type
36 | )
37 |
38 | logger.debug(
39 | f"Getting recent context: `{types}` depth: `{depth}` timeframe: `{timeframe}` page: `{page}` page_size: `{page_size}` max_related: `{max_related}`"
40 | )
41 | # Parse timeframe
42 | since = parse_timeframe(timeframe)
43 | limit = page_size
44 | offset = (page - 1) * page_size
45 |
46 | # Build context
47 | context = await context_service.build_context(
48 | types=types, depth=depth, since=since, limit=limit, offset=offset, max_related=max_related
49 | )
50 | recent_context = await to_graph_context(
51 | context, entity_repository=entity_repository, page=page, page_size=page_size
52 | )
53 | logger.debug(f"Recent context: {recent_context.model_dump_json()}")
54 | return recent_context
55 |
56 |
57 | # get_memory_context needs to be declared last so other paths can match
58 |
59 |
60 | @router.get("/{uri:path}", response_model=GraphContext)
61 | async def get_memory_context(
62 | context_service: ContextServiceDep,
63 | entity_repository: EntityRepositoryDep,
64 | uri: str,
65 | depth: int = 1,
66 | timeframe: Optional[TimeFrame] = None,
67 | page: int = 1,
68 | page_size: int = 10,
69 | max_related: int = 10,
70 | ) -> GraphContext:
71 | """Get rich context from memory:// URI."""
72 |     # Add the project name from the config to the URL as the "host".
73 | # Parse URI
74 | logger.debug(
75 | f"Getting context for URI: `{uri}` depth: `{depth}` timeframe: `{timeframe}` page: `{page}` page_size: `{page_size}` max_related: `{max_related}`"
76 | )
77 | memory_url = normalize_memory_url(uri)
78 |
79 | # Parse timeframe
80 | since = parse_timeframe(timeframe) if timeframe else None
81 | limit = page_size
82 | offset = (page - 1) * page_size
83 |
84 | # Build context
85 | context = await context_service.build_context(
86 | memory_url, depth=depth, since=since, limit=limit, offset=offset, max_related=max_related
87 | )
88 | return await to_graph_context(
89 | context, entity_repository=entity_repository, page=page, page_size=page_size
90 | )
91 |
```
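A sketch of paging through `/memory/recent` in-process. The `app` import path and the `type` query value (`entity`, matching `SearchItemType.ENTITY`) are assumptions.

```python
import asyncio

import httpx

from basic_memory.api.app import app  # import path assumed


async def main() -> None:
    transport = httpx.ASGITransport(app=app)
    async with httpx.AsyncClient(transport=transport, base_url="http://test") as client:
        # Second page of the last 30 days, entities only; offset = (page - 1) * page_size
        resp = await client.get(
            "/memory/recent",
            params={"timeframe": "30d", "type": "entity", "page": 2, "page_size": 10},
        )
        print(resp.status_code)


asyncio.run(main())
```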
--------------------------------------------------------------------------------
/.github/workflows/claude.yml:
--------------------------------------------------------------------------------
```yaml
1 | name: Claude Code
2 |
3 | on:
4 | issue_comment:
5 | types: [created]
6 | pull_request_review_comment:
7 | types: [created]
8 | issues:
9 | types: [opened, assigned]
10 | pull_request_review:
11 | types: [submitted]
12 | pull_request_target:
13 | types: [opened, synchronize]
14 |
15 | jobs:
16 | claude:
17 | if: |
18 | (
19 | (github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
20 | (github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude')) ||
21 | (github.event_name == 'pull_request_review' && contains(github.event.review.body, '@claude')) ||
22 | (github.event_name == 'issues' && (contains(github.event.issue.body, '@claude') || contains(github.event.issue.title, '@claude'))) ||
23 | (github.event_name == 'pull_request_target' && contains(github.event.pull_request.body, '@claude'))
24 | ) && (
25 | github.event.comment.author_association == 'OWNER' ||
26 | github.event.comment.author_association == 'MEMBER' ||
27 | github.event.comment.author_association == 'COLLABORATOR' ||
28 | github.event.sender.author_association == 'OWNER' ||
29 | github.event.sender.author_association == 'MEMBER' ||
30 | github.event.sender.author_association == 'COLLABORATOR' ||
31 | github.event.pull_request.author_association == 'OWNER' ||
32 | github.event.pull_request.author_association == 'MEMBER' ||
33 | github.event.pull_request.author_association == 'COLLABORATOR'
34 | )
35 | runs-on: ubuntu-latest
36 | permissions:
37 | contents: read
38 | pull-requests: read
39 | issues: read
40 | id-token: write
41 | actions: read # Required for Claude to read CI results on PRs
42 | steps:
43 | - name: Checkout repository
44 | uses: actions/checkout@v4
45 | with:
46 | # For pull_request_target, checkout the PR head to review the actual changes
47 | ref: ${{ github.event_name == 'pull_request_target' && github.event.pull_request.head.sha || github.sha }}
48 | fetch-depth: 1
49 |
50 | - name: Run Claude Code
51 | id: claude
52 | uses: anthropics/claude-code-action@v1
53 | with:
54 | claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
55 | track_progress: true # Enable visual progress tracking
56 |
57 | # This is an optional setting that allows Claude to read CI results on PRs
58 | additional_permissions: |
59 | actions: read
60 |
61 | # Optional: Give a custom prompt to Claude. If this is not specified, Claude will perform the instructions specified in the comment that tagged it.
62 | # prompt: 'Update the pull request description to include a summary of changes.'
63 |
64 | # Optional: Add claude_args to customize behavior and configuration
65 | # See https://github.com/anthropics/claude-code-action/blob/main/docs/usage.md
66 | # or https://docs.claude.com/en/docs/claude-code/sdk#command-line for available options
67 | # claude_args: '--model claude-opus-4-1-20250805 --allowed-tools Bash(gh pr:*)'
68 |
69 |
```
--------------------------------------------------------------------------------
/specs/SPEC-3 Agent Definitions.md:
--------------------------------------------------------------------------------
```markdown
1 | ---
2 | title: 'SPEC-3: Agent Definitions'
3 | type: spec
4 | permalink: specs/spec-3-agent-definitions
5 | tags:
6 | - agents
7 | - roles
8 | - process
9 | ---
10 |
11 | # SPEC-3: Agent Definitions
12 |
13 | This document defines the specialist agents used in our specification-driven development process.
14 |
15 | ## system-architect
16 |
17 | **Role**: High-level system design and architectural decisions
18 |
19 | **Responsibilities**:
20 | - Create architectural specifications and ADRs
21 | - Analyze system-wide impacts and trade-offs
22 | - Design component interfaces and data flow
23 | - Evaluate technical approaches and patterns
24 | - Document architectural decisions and rationale
25 |
26 | **Expertise Areas**:
27 | - System architecture and design patterns
28 | - Technology evaluation and selection
29 | - Scalability and performance considerations
30 | - Integration patterns and API design
31 | - Technical debt and refactoring strategies
32 |
33 | **Typical Specs**:
34 | - System architecture overviews
35 | - Component decomposition strategies
36 | - Data flow and state management
37 | - Integration and deployment patterns
38 |
39 | ## vue-developer
40 |
41 | **Role**: Frontend component development and UI implementation
42 |
43 | **Responsibilities**:
44 | - Create Vue.js component specifications
45 | - Implement responsive UI components
46 | - Design component APIs and interfaces
47 | - Optimize for performance and accessibility
48 | - Document component usage and patterns
49 |
50 | **Expertise Areas**:
51 | - Vue.js 3 Composition API
52 | - Nuxt 3 framework patterns
53 | - shadcn-vue component library
54 | - Responsive design and CSS
55 | - TypeScript integration
56 | - State management with Pinia
57 |
58 | **Typical Specs**:
59 | - Individual component specifications
60 | - UI pattern libraries
61 | - Responsive design approaches
62 | - Component interaction flows
63 |
64 | ## python-developer
65 |
66 | **Role**: Backend development and API implementation
67 |
68 | **Responsibilities**:
69 | - Create backend service specifications
70 | - Implement APIs and data processing
71 | - Design database schemas and queries
72 | - Optimize performance and reliability
73 | - Document service interfaces and behavior
74 |
75 | **Expertise Areas**:
76 | - FastAPI and Python web frameworks
77 | - Database design and operations
78 | - API design and documentation
79 | - Authentication and security
80 | - Performance optimization
81 | - Testing and validation
82 |
83 | **Typical Specs**:
84 | - API endpoint specifications
85 | - Database schema designs
86 | - Service integration patterns
87 | - Performance optimization strategies
88 |
89 | ## Agent Collaboration Patterns
90 |
91 | ### Handoff Protocol
92 | 1. Agent receives spec through `/spec implement [name]`
93 | 2. Agent reviews spec and creates implementation plan
94 | 3. Agent documents progress and decisions in spec
95 | 4. Agent hands off to another agent if cross-domain work needed
96 | 5. Final agent updates spec with completion status
97 |
98 | ### Communication Standards
99 | - All agents update specs through basic-memory MCP tools
100 | - Document decisions and trade-offs in spec notes
101 | - Link related specs and components
102 | - Preserve context for future reference
103 |
104 | ### Quality Standards
105 | - Follow existing codebase patterns and conventions
106 | - Write tests that validate spec requirements
107 | - Document implementation choices
108 | - Consider maintainability and extensibility
109 |
```
--------------------------------------------------------------------------------
/src/basic_memory/repository/search_index_row.py:
--------------------------------------------------------------------------------
```python
1 | """Search index data structures."""
2 |
3 | import json
4 | from dataclasses import dataclass
5 | from datetime import datetime
6 | from typing import Optional
7 | from pathlib import Path
8 |
9 | from basic_memory.schemas.search import SearchItemType
10 |
11 |
12 | @dataclass
13 | class SearchIndexRow:
14 | """Search result with score and metadata."""
15 |
16 | project_id: int
17 | id: int
18 | type: str
19 | file_path: str
20 |
21 | # date values
22 | created_at: datetime
23 | updated_at: datetime
24 |
25 | permalink: Optional[str] = None
26 | metadata: Optional[dict] = None
27 |
28 | # assigned in result
29 | score: Optional[float] = None
30 |
31 | # Type-specific fields
32 | title: Optional[str] = None # entity
33 | content_stems: Optional[str] = None # entity, observation
34 | content_snippet: Optional[str] = None # entity, observation
35 | entity_id: Optional[int] = None # observations
36 | category: Optional[str] = None # observations
37 | from_id: Optional[int] = None # relations
38 | to_id: Optional[int] = None # relations
39 | relation_type: Optional[str] = None # relations
40 |
41 | @property
42 | def content(self):
43 | return self.content_snippet
44 |
45 | @property
46 | def directory(self) -> str:
47 | """Extract directory part from file_path.
48 |
49 | For a file at "projects/notes/ideas.md", returns "/projects/notes"
50 | For a file at root level "README.md", returns "/"
51 | """
52 | if not self.type == SearchItemType.ENTITY.value and not self.file_path:
53 | return ""
54 |
55 | # Normalize path separators to handle both Windows (\) and Unix (/) paths
56 | normalized_path = Path(self.file_path).as_posix()
57 |
58 | # Split the path by slashes
59 | parts = normalized_path.split("/")
60 |
61 | # If there's only one part (e.g., "README.md"), it's at the root
62 | if len(parts) <= 1:
63 | return "/"
64 |
65 | # Join all parts except the last one (filename)
66 | directory_path = "/".join(parts[:-1])
67 | return f"/{directory_path}"
68 |
69 | def to_insert(self, serialize_json: bool = True):
70 | """Convert to dict for database insertion.
71 |
72 | Args:
73 | serialize_json: If True, converts metadata dict to JSON string (for SQLite).
74 | If False, keeps metadata as dict (for Postgres JSONB).
75 | """
76 | return {
77 | "id": self.id,
78 | "title": self.title,
79 | "content_stems": self.content_stems,
80 | "content_snippet": self.content_snippet,
81 | "permalink": self.permalink,
82 | "file_path": self.file_path,
83 | "type": self.type,
84 | "metadata": json.dumps(self.metadata)
85 | if serialize_json and self.metadata
86 | else self.metadata,
87 | "from_id": self.from_id,
88 | "to_id": self.to_id,
89 | "relation_type": self.relation_type,
90 | "entity_id": self.entity_id,
91 | "category": self.category,
92 | "created_at": self.created_at if self.created_at else None,
93 | "updated_at": self.updated_at if self.updated_at else None,
94 | "project_id": self.project_id,
95 | }
96 |
```
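Constructing a row and preparing it for either backend; the `metadata` payload is illustrative.

```python
from datetime import datetime, timezone

from basic_memory.repository.search_index_row import SearchIndexRow

now = datetime.now(timezone.utc)
row = SearchIndexRow(
    project_id=1,
    id=42,
    type="entity",
    file_path="projects/notes/ideas.md",
    created_at=now,
    updated_at=now,
    title="Ideas",
    metadata={"entity_type": "note"},  # illustrative payload
)
print(row.directory)  # "/projects/notes"
sqlite_values = row.to_insert()                        # metadata -> JSON string
postgres_values = row.to_insert(serialize_json=False)  # metadata stays a dict
```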
--------------------------------------------------------------------------------
/.claude/commands/release/beta.md:
--------------------------------------------------------------------------------
```markdown
1 | # /beta - Create Beta Release
2 |
3 | Create a new beta release using the automated justfile target with quality checks and tagging.
4 |
5 | ## Usage
6 | ```
7 | /beta <version>
8 | ```
9 |
10 | **Parameters:**
11 | - `version` (required): Beta version like `v0.13.2b1` or `v0.13.2rc1`
12 |
13 | ## Implementation
14 |
15 | You are an expert release manager for the Basic Memory project. When the user runs `/beta`, execute the following steps:
16 |
17 | ### Step 1: Pre-flight Validation
18 | 1. Verify version format matches `v\d+\.\d+\.\d+(b\d+|rc\d+)` pattern
19 | 2. Check current git status for uncommitted changes
20 | 3. Verify we're on the `main` branch
21 | 4. Confirm no existing tag with this version
22 |
23 | ### Step 2: Use Justfile Automation
24 | Execute the automated beta release process:
25 | ```bash
26 | just beta <version>
27 | ```
28 |
29 | The justfile target handles:
30 | - ✅ Beta version format validation (supports b1, b2, rc1, etc.)
31 | - ✅ Git status and branch checks
32 | - ✅ Quality checks (`just check` - lint, format, type-check, tests)
33 | - ✅ Version update in `src/basic_memory/__init__.py`
34 | - ✅ Automatic commit with proper message
35 | - ✅ Tag creation and pushing to GitHub
36 | - ✅ Beta release workflow trigger
37 |
38 | ### Step 3: Monitor Beta Release
39 | 1. Check GitHub Actions workflow starts successfully
40 | 2. Monitor workflow at: https://github.com/basicmachines-co/basic-memory/actions
41 | 3. Verify PyPI pre-release publication
42 | 4. Test beta installation: `uv tool install basic-memory --pre`
43 |
44 | ### Step 4: Beta Testing Instructions
45 | Provide users with beta testing instructions:
46 |
47 | ```bash
48 | # Install/upgrade to beta
49 | uv tool install basic-memory --pre
50 |
51 | # Or upgrade existing installation
52 | uv tool upgrade basic-memory --prerelease=allow
53 | ```
54 |
55 | ## Version Guidelines
56 | - **First beta**: `v0.13.2b1`
57 | - **Subsequent betas**: `v0.13.2b2`, `v0.13.2b3`, etc.
58 | - **Release candidates**: `v0.13.2rc1`, `v0.13.2rc2`, etc.
59 | - **Final release**: `v0.13.2` (use `/release` command)
60 |
61 | ## Error Handling
62 | - If `just beta` fails, examine the error output for specific issues
63 | - If quality checks fail, fix issues and retry
64 | - If version format is invalid, correct and retry
65 | - If tag already exists, increment version number
66 |
67 | ## Success Output
68 | ```
69 | ✅ Beta Release v0.13.2b1 Created Successfully!
70 |
71 | 🏷️ Tag: v0.13.2b1
72 | 🚀 GitHub Actions: Running
73 | 📦 PyPI: Will be available in ~5 minutes as pre-release
74 |
75 | Install/test with:
76 | uv tool install basic-memory --pre
77 |
78 | Monitor release: https://github.com/basicmachines-co/basic-memory/actions
79 | ```
80 |
81 | ## Beta Testing Workflow
82 | 1. **Create beta**: Use `/beta v0.13.2b1`
83 | 2. **Test features**: Install and validate new functionality
84 | 3. **Fix issues**: Address bugs found during testing
85 | 4. **Iterate**: Create `v0.13.2b2` if needed
86 | 5. **Release candidate**: Create `v0.13.2rc1` when stable
87 | 6. **Final release**: Use `/release v0.13.2` when ready
88 |
89 | ## Context
90 | - Beta releases are pre-releases for testing new features
91 | - Automatically published to PyPI with pre-release flag
92 | - Uses the automated justfile target for consistency
93 | - Version is automatically updated in `__init__.py`
94 | - Ideal for validating changes before stable release
95 | - Supports both beta (b1, b2) and release candidate (rc1, rc2) versions
```
--------------------------------------------------------------------------------
/src/basic_memory/cli/commands/cloud/cloud_utils.py:
--------------------------------------------------------------------------------
```python
1 | """Shared utilities for cloud operations."""
2 |
3 | from basic_memory.cli.commands.cloud.api_client import make_api_request
4 | from basic_memory.config import ConfigManager
5 | from basic_memory.schemas.cloud import (
6 | CloudProjectList,
7 | CloudProjectCreateRequest,
8 | CloudProjectCreateResponse,
9 | )
10 | from basic_memory.utils import generate_permalink
11 |
12 |
13 | class CloudUtilsError(Exception):
14 | """Exception raised for cloud utility errors."""
15 |
16 | pass
17 |
18 |
19 | async def fetch_cloud_projects(
20 | *,
21 | api_request=make_api_request,
22 | ) -> CloudProjectList:
23 | """Fetch list of projects from cloud API.
24 |
25 | Returns:
26 | CloudProjectList with projects from cloud
27 | """
28 | try:
29 | config_manager = ConfigManager()
30 | config = config_manager.config
31 | host_url = config.cloud_host.rstrip("/")
32 |
33 | response = await api_request(method="GET", url=f"{host_url}/proxy/projects/projects")
34 |
35 | return CloudProjectList.model_validate(response.json())
36 | except Exception as e:
37 | raise CloudUtilsError(f"Failed to fetch cloud projects: {e}") from e
38 |
39 |
40 | async def create_cloud_project(
41 | project_name: str,
42 | *,
43 | api_request=make_api_request,
44 | ) -> CloudProjectCreateResponse:
45 | """Create a new project on cloud.
46 |
47 | Args:
48 | project_name: Name of project to create
49 |
50 | Returns:
51 | CloudProjectCreateResponse with project details from API
52 | """
53 | try:
54 | config_manager = ConfigManager()
55 | config = config_manager.config
56 | host_url = config.cloud_host.rstrip("/")
57 |
58 | # Use generate_permalink to ensure consistent naming
59 | project_path = generate_permalink(project_name)
60 |
61 | project_data = CloudProjectCreateRequest(
62 | name=project_name,
63 | path=project_path,
64 | set_default=False,
65 | )
66 |
67 | response = await api_request(
68 | method="POST",
69 | url=f"{host_url}/proxy/projects/projects",
70 | headers={"Content-Type": "application/json"},
71 | json_data=project_data.model_dump(),
72 | )
73 |
74 | return CloudProjectCreateResponse.model_validate(response.json())
75 | except Exception as e:
76 | raise CloudUtilsError(f"Failed to create cloud project '{project_name}': {e}") from e
77 |
78 |
79 | async def sync_project(project_name: str, force_full: bool = False) -> None:
80 | """Trigger sync for a specific project on cloud.
81 |
82 | Args:
83 | project_name: Name of project to sync
84 | force_full: If True, force a full scan bypassing watermark optimization
85 | """
86 | try:
87 | from basic_memory.cli.commands.command_utils import run_sync
88 |
89 | await run_sync(project=project_name, force_full=force_full)
90 | except Exception as e:
91 | raise CloudUtilsError(f"Failed to sync project '{project_name}': {e}") from e
92 |
93 |
94 | async def project_exists(project_name: str, *, api_request=make_api_request) -> bool:
95 | """Check if a project exists on cloud.
96 |
97 | Args:
98 | project_name: Name of project to check
99 |
100 | Returns:
101 | True if project exists, False otherwise
102 | """
103 | try:
104 | projects = await fetch_cloud_projects(api_request=api_request)
105 | project_names = {p.name for p in projects.projects}
106 | return project_name in project_names
107 | except Exception:
108 | return False
109 |
```
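The keyword-only `api_request` parameter is a seam for tests: callers can inject a stub instead of the real HTTP client. A minimal sketch, assuming a payload shape that satisfies `CloudProjectList` (the fake classes are illustrative; `ConfigManager` still reads your local config):

```python
import asyncio

from basic_memory.cli.commands.cloud.cloud_utils import project_exists

class FakeResponse:
    """Stubs the only part of the response cloud_utils touches: .json()."""

    def __init__(self, payload: dict):
        self._payload = payload

    def json(self) -> dict:
        return self._payload

async def fake_api_request(method: str, url: str, **kwargs) -> FakeResponse:
    # Payload shape is an assumption about CloudProjectList's schema.
    return FakeResponse({"projects": [{"name": "notes", "path": "notes"}]})

async def main() -> None:
    # project_exists swallows validation errors and returns False, so this
    # prints True only if the stubbed payload actually validates.
    print(await project_exists("notes", api_request=fake_api_request))

asyncio.run(main())
```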
--------------------------------------------------------------------------------
/.claude/commands/release/release-check.md:
--------------------------------------------------------------------------------
```markdown
1 | # /release-check - Pre-flight Release Validation
2 |
3 | Comprehensive pre-flight check for release readiness without making any changes.
4 |
5 | ## Usage
6 | ```
7 | /release-check [version]
8 | ```
9 |
10 | **Parameters:**
11 | - `version` (optional): Version to validate, like `v0.13.0`. If not provided, it is determined from context.
12 |
13 | ## Implementation
14 |
15 | You are an expert QA engineer for the Basic Memory project. When the user runs `/release-check`, execute the following validation steps:
16 |
17 | ### Step 1: Environment Validation
18 | 1. **Git Status Check**
19 | - Verify working directory is clean
20 | - Confirm on `main` branch
21 | - Check if ahead/behind origin
22 |
23 | 2. **Version Validation**
24 | - Validate version format if provided
25 | - Check for existing tags with same version
26 | - Verify version increments properly from last release
27 |
28 | ### Step 2: Code Quality Gates
29 | 1. **Test Suite Validation**
30 | ```bash
31 | just test
32 | ```
33 | - All tests must pass
34 | - Check test coverage (target: 95%+)
35 | - Validate no skipped critical tests
36 |
37 | 2. **Code Quality Checks**
38 | ```bash
39 | just lint
40 | just type-check
41 | ```
42 | - No linting errors
43 | - No type checking errors
44 | - Code formatting is consistent
45 |
46 | ### Step 3: Documentation Validation
47 | 1. **Changelog Check**
48 | - CHANGELOG.md contains entry for target version
49 | - Entry includes all major features and fixes
50 | - Breaking changes are documented
51 |
52 | 2. **Documentation Currency**
53 | - README.md reflects current functionality
54 | - CLI reference is up to date
55 | - MCP tools are documented
56 |
57 | ### Step 4: Dependency Validation
58 | 1. **Security Scan**
59 | - No known vulnerabilities in dependencies
60 | - All dependencies are at appropriate versions
61 | - No conflicting dependency versions
62 |
63 | 2. **Build Validation**
64 | - Package builds successfully
65 | - All required files are included
66 | - No missing dependencies
67 |
68 | ### Step 5: Issue Tracking Validation
69 | 1. **GitHub Issues Check**
70 | - No critical open issues blocking release
71 | - All milestone issues are resolved
72 | - High-priority bugs are fixed
73 |
74 | 2. **Testing Coverage**
75 | - Integration tests pass
76 | - MCP tool tests pass
77 | - Cross-platform compatibility verified
78 |
79 | ## Report Format
80 |
81 | Generate a comprehensive report:
82 |
83 | ```
84 | 🔍 Release Readiness Check for v0.13.0
85 |
86 | ✅ PASSED CHECKS:
87 | ├── Git status clean
88 | ├── On main branch
89 | ├── All tests passing (744/744)
90 | ├── Test coverage: 98.2%
91 | ├── Type checking passed
92 | ├── Linting passed
93 | ├── CHANGELOG.md updated
94 | └── No critical issues open
95 |
96 | ⚠️ WARNINGS:
97 | ├── 2 medium-priority issues still open
98 | └── Documentation could be updated
99 |
100 | ❌ BLOCKING ISSUES:
101 | └── None found
102 |
103 | 🎯 RELEASE READINESS: ✅ READY
104 |
105 | Recommended next steps:
106 | 1. Address warnings if desired
107 | 2. Run `/release v0.13.0` when ready
108 | ```
109 |
110 | ## Validation Criteria
111 |
112 | ### Must Pass (Blocking)
113 | - [ ] All tests pass
114 | - [ ] No type errors
115 | - [ ] No linting errors
116 | - [ ] Working directory clean
117 | - [ ] On main branch
118 | - [ ] CHANGELOG.md has version entry
119 | - [ ] No critical open issues
120 |
121 | ### Should Pass (Warnings)
122 | - [ ] Test coverage >95%
123 | - [ ] No medium-priority open issues
124 | - [ ] Documentation up to date
125 | - [ ] No dependency vulnerabilities
126 |
127 | ## Context
128 | - This is a read-only validation; it makes no changes
129 | - Provides confidence before running actual release
130 | - Helps identify issues early in release process
131 | - Can be run multiple times safely
```
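The Step 1 environment checks map directly onto three git commands. A minimal read-only sketch, assuming git is on PATH (the function is illustrative, not the command's implementation):

```python
import subprocess

def git(*args: str) -> str:
    return subprocess.run(("git", *args), capture_output=True, text=True, check=True).stdout.strip()

def preflight(version: str) -> list[str]:
    """Return blocking problems; an empty list means the environment is ready."""
    problems = []
    if git("status", "--porcelain"):
        problems.append("working directory not clean")
    if git("rev-parse", "--abbrev-ref", "HEAD") != "main":
        problems.append("not on main branch")
    if git("tag", "--list", version):
        problems.append(f"tag {version} already exists")
    return problems

print(preflight("v0.13.0"))
```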
--------------------------------------------------------------------------------
/tests/schemas/test_search.py:
--------------------------------------------------------------------------------
```python
1 | """Tests for search schemas."""
2 |
3 | from datetime import datetime
4 |
5 | from basic_memory.schemas.search import (
6 | SearchItemType,
7 | SearchQuery,
8 | SearchResult,
9 | SearchResponse,
10 | )
11 |
12 |
13 | def test_search_modes():
14 | """Test different search modes."""
15 | # Exact permalink
16 | query = SearchQuery(permalink="specs/search")
17 | assert query.permalink == "specs/search"
18 | assert query.text is None
19 |
20 | # Pattern match
21 | query = SearchQuery(permalink="specs/*")
22 | assert query.permalink == "specs/*"
23 | assert query.text is None
24 |
25 | # Text search
26 | query = SearchQuery(text="search implementation")
27 | assert query.text == "search implementation"
28 | assert query.permalink is None
29 |
30 |
31 | def test_search_filters():
32 | """Test search result filtering."""
33 | query = SearchQuery(
34 | text="search",
35 | entity_types=[SearchItemType.ENTITY],
36 | types=["component"],
37 | after_date=datetime(2024, 1, 1),
38 | )
39 | assert query.entity_types == [SearchItemType.ENTITY]
40 | assert query.types == ["component"]
41 | assert query.after_date == "2024-01-01T00:00:00"
42 |
43 |
44 | def test_search_result():
45 | """Test search result structure."""
46 | result = SearchResult(
47 | title="test",
48 | type=SearchItemType.ENTITY,
49 | entity="some_entity",
50 | score=0.8,
51 | metadata={"entity_type": "component"},
52 | permalink="specs/search",
53 | file_path="specs/search.md",
54 | )
55 | assert result.type == SearchItemType.ENTITY
56 | assert result.score == 0.8
57 | assert result.metadata == {"entity_type": "component"}
58 |
59 |
60 | def test_observation_result():
61 | """Test observation result fields."""
62 | result = SearchResult(
63 | title="test",
64 | permalink="specs/search",
65 | file_path="specs/search.md",
66 | type=SearchItemType.OBSERVATION,
67 | score=0.5,
68 | metadata={},
69 | entity="some_entity",
70 | category="tech",
71 | )
72 | assert result.entity == "some_entity"
73 | assert result.category == "tech"
74 |
75 |
76 | def test_relation_result():
77 | """Test relation result fields."""
78 | result = SearchResult(
79 | title="test",
80 | permalink="specs/search",
81 | file_path="specs/search.md",
82 | type=SearchItemType.RELATION,
83 | entity="some_entity",
84 | score=0.5,
85 | metadata={},
86 | from_entity="123",
87 | to_entity="456",
88 | relation_type="depends_on",
89 | )
90 | assert result.from_entity == "123"
91 | assert result.to_entity == "456"
92 | assert result.relation_type == "depends_on"
93 |
94 |
95 | def test_search_response():
96 | """Test search response wrapper."""
97 | results = [
98 | SearchResult(
99 | title="test",
100 | permalink="specs/search",
101 | file_path="specs/search.md",
102 | type=SearchItemType.ENTITY,
103 | entity="some_entity",
104 | score=0.8,
105 | metadata={},
106 | ),
107 | SearchResult(
108 | title="test",
109 | permalink="specs/search",
110 | file_path="specs/search.md",
111 | type=SearchItemType.ENTITY,
112 | entity="some_entity",
113 | score=0.6,
114 | metadata={},
115 | ),
116 | ]
117 | response = SearchResponse(results=results, current_page=1, page_size=1)
118 | assert len(response.results) == 2
119 | assert response.results[0].score > response.results[1].score
120 |
```
--------------------------------------------------------------------------------
/src/basic_memory/api/v2/routers/directory_router.py:
--------------------------------------------------------------------------------
```python
1 | """V2 Directory Router - ID-based directory tree operations.
2 |
3 | This router provides directory structure browsing for projects using
4 | external_id UUIDs instead of name-based identifiers.
5 |
6 | Key improvements:
7 | - Direct project lookup via external_id UUIDs
8 | - Consistent with other v2 endpoints
9 | - Better performance through indexed queries
10 | """
11 |
12 | from typing import List, Optional
13 |
14 | from fastapi import APIRouter, Query, Path
15 |
16 | from basic_memory.deps import DirectoryServiceV2ExternalDep
17 | from basic_memory.schemas.directory import DirectoryNode
18 |
19 | router = APIRouter(prefix="/directory", tags=["directory-v2"])
20 |
21 |
22 | @router.get("/tree", response_model=DirectoryNode, response_model_exclude_none=True)
23 | async def get_directory_tree(
24 | directory_service: DirectoryServiceV2ExternalDep,
25 | project_id: str = Path(..., description="Project external UUID"),
26 | ):
27 | """Get hierarchical directory structure from the knowledge base.
28 |
29 | Args:
30 | directory_service: Service for directory operations
31 | project_id: Project external UUID
32 |
33 | Returns:
34 | DirectoryNode representing the root of the hierarchical tree structure
35 | """
36 | # Get a hierarchical directory tree for the specific project
37 | tree = await directory_service.get_directory_tree()
38 |
39 | # Return the hierarchical tree
40 | return tree
41 |
42 |
43 | @router.get("/structure", response_model=DirectoryNode, response_model_exclude_none=True)
44 | async def get_directory_structure(
45 | directory_service: DirectoryServiceV2ExternalDep,
46 | project_id: str = Path(..., description="Project external UUID"),
47 | ):
48 | """Get folder structure for navigation (no files).
49 |
50 | Optimized endpoint for folder tree navigation. Returns only directory nodes
51 | without file metadata. For full tree with files, use /directory/tree.
52 |
53 | Args:
54 | directory_service: Service for directory operations
55 | project_id: Project external UUID
56 |
57 | Returns:
58 | DirectoryNode tree containing only folders (type="directory")
59 | """
60 | structure = await directory_service.get_directory_structure()
61 | return structure
62 |
63 |
64 | @router.get("/list", response_model=List[DirectoryNode], response_model_exclude_none=True)
65 | async def list_directory(
66 | directory_service: DirectoryServiceV2ExternalDep,
67 | project_id: str = Path(..., description="Project external UUID"),
68 | dir_name: str = Query("/", description="Directory path to list"),
69 | depth: int = Query(1, ge=1, le=10, description="Recursion depth (1-10)"),
70 | file_name_glob: Optional[str] = Query(
71 | None, description="Glob pattern for filtering file names"
72 | ),
73 | ):
74 | """List directory contents with filtering and depth control.
75 |
76 | Args:
77 | directory_service: Service for directory operations
78 | project_id: Project external UUID
79 | dir_name: Directory path to list (default: root "/")
80 | depth: Recursion depth (1-10, default: 1 for immediate children only)
81 | file_name_glob: Optional glob pattern for filtering file names (e.g., "*.md", "*meeting*")
82 |
83 | Returns:
84 | List of DirectoryNode objects matching the criteria
85 | """
86 | # Get directory listing with filtering
87 | nodes = await directory_service.list_directory(
88 | dir_name=dir_name,
89 | depth=depth,
90 | file_name_glob=file_name_glob,
91 | )
92 |
93 | return nodes
94 |
```
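A minimal client-side sketch of the `/list` endpoint, assuming this router is mounted under `/v2/projects/{project_id}` like the memory endpoints (base URL and UUID are placeholders):

```python
import httpx

async def list_markdown_files(base_url: str, project_id: str) -> list[dict]:
    """Fetch immediate children plus one level down, markdown files only."""
    async with httpx.AsyncClient(base_url=base_url) as client:
        response = await client.get(
            f"/v2/projects/{project_id}/directory/list",
            params={"dir_name": "/", "depth": 2, "file_name_glob": "*.md"},
        )
        response.raise_for_status()
        return response.json()

# import asyncio; asyncio.run(list_markdown_files("http://localhost:8000", "<project-uuid>"))
```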
--------------------------------------------------------------------------------
/src/basic_memory/cli/commands/cloud/rclone_config.py:
--------------------------------------------------------------------------------
```python
1 | """rclone configuration management for Basic Memory Cloud.
2 |
3 | This module provides simplified rclone configuration for SPEC-20.
4 | Uses a single "basic-memory-cloud" remote for all operations.
5 | """
6 |
7 | import configparser
8 | import os
9 | import shutil
10 | from pathlib import Path
11 | from typing import Optional
12 |
13 | from rich.console import Console
14 |
15 | console = Console()
16 |
17 |
18 | class RcloneConfigError(Exception):
19 | """Exception raised for rclone configuration errors."""
20 |
21 | pass
22 |
23 |
24 | def get_rclone_config_path() -> Path:
25 | """Get the path to rclone configuration file."""
26 | config_dir = Path.home() / ".config" / "rclone"
27 | config_dir.mkdir(parents=True, exist_ok=True)
28 | return config_dir / "rclone.conf"
29 |
30 |
31 | def backup_rclone_config() -> Optional[Path]:
32 | """Create a backup of existing rclone config."""
33 | config_path = get_rclone_config_path()
34 | if not config_path.exists():
35 | return None
36 |
37 | backup_path = config_path.with_suffix(f".conf.backup-{os.getpid()}")
38 | shutil.copy2(config_path, backup_path)
39 | console.print(f"[dim]Created backup: {backup_path}[/dim]")
40 | return backup_path
41 |
42 |
43 | def load_rclone_config() -> configparser.ConfigParser:
44 | """Load existing rclone configuration."""
45 | config = configparser.ConfigParser()
46 | config_path = get_rclone_config_path()
47 |
48 | if config_path.exists():
49 | config.read(config_path)
50 |
51 | return config
52 |
53 |
54 | def save_rclone_config(config: configparser.ConfigParser) -> None:
55 | """Save rclone configuration to file."""
56 | config_path = get_rclone_config_path()
57 |
58 | with open(config_path, "w") as f:
59 | config.write(f)
60 |
61 | console.print(f"[dim]Updated rclone config: {config_path}[/dim]")
62 |
63 |
64 | def configure_rclone_remote(
65 | access_key: str,
66 | secret_key: str,
67 | endpoint: str = "https://fly.storage.tigris.dev",
68 | region: str = "auto",
69 | ) -> str:
70 | """Configure single rclone remote named 'basic-memory-cloud'.
71 |
72 | This is the simplified approach from SPEC-20 that uses one remote
73 | for all Basic Memory cloud operations (not tenant-specific).
74 |
75 | Args:
76 | access_key: S3 access key ID
77 | secret_key: S3 secret access key
78 | endpoint: S3-compatible endpoint URL
79 | region: S3 region (default: auto)
80 |
81 | Returns:
82 | The remote name: "basic-memory-cloud"
83 | """
84 | # Backup existing config
85 | backup_rclone_config()
86 |
87 | # Load existing config
88 | config = load_rclone_config()
89 |
90 | # Single remote name (not tenant-specific)
91 | REMOTE_NAME = "basic-memory-cloud"
92 |
93 | # Add/update the remote section
94 | if not config.has_section(REMOTE_NAME):
95 | config.add_section(REMOTE_NAME)
96 |
97 | config.set(REMOTE_NAME, "type", "s3")
98 | config.set(REMOTE_NAME, "provider", "Other")
99 | config.set(REMOTE_NAME, "access_key_id", access_key)
100 | config.set(REMOTE_NAME, "secret_access_key", secret_key)
101 | config.set(REMOTE_NAME, "endpoint", endpoint)
102 | config.set(REMOTE_NAME, "region", region)
103 | # Prevent unnecessary encoding of filenames (only encode slashes and invalid UTF-8)
104 | # This prevents files with spaces like "Hello World.md" from being quoted
105 | config.set(REMOTE_NAME, "encoding", "Slash,InvalidUtf8")
106 | # Save updated config
107 | save_rclone_config(config)
108 |
109 | console.print(f"[green]Configured rclone remote: {REMOTE_NAME}[/green]")
110 | return REMOTE_NAME
111 |
```
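For reference, the section this writes into `rclone.conf` can be reproduced in memory; a standalone sketch mirroring the `config.set` calls above (credentials are placeholders):

```python
import configparser
import io

config = configparser.ConfigParser()
config["basic-memory-cloud"] = {
    "type": "s3",
    "provider": "Other",
    "access_key_id": "<access-key>",
    "secret_access_key": "<secret-key>",
    "endpoint": "https://fly.storage.tigris.dev",
    "region": "auto",
    # Only slashes and invalid UTF-8 are encoded, so "Hello World.md" stays unquoted
    "encoding": "Slash,InvalidUtf8",
}

buffer = io.StringIO()
config.write(buffer)
print(buffer.getvalue())  # the [basic-memory-cloud] section as rclone reads it
```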
--------------------------------------------------------------------------------
/src/basic_memory/cli/commands/import_chatgpt.py:
--------------------------------------------------------------------------------
```python
1 | """Import command for ChatGPT conversations."""
2 |
3 | import json
4 | from pathlib import Path
5 | from typing import Annotated, Tuple
6 |
7 | import typer
8 | from basic_memory.cli.app import import_app
9 | from basic_memory.cli.commands.command_utils import run_with_cleanup
10 | from basic_memory.config import ConfigManager, get_project_config
11 | from basic_memory.importers import ChatGPTImporter
12 | from basic_memory.markdown import EntityParser, MarkdownProcessor
13 | from basic_memory.services.file_service import FileService
14 | from loguru import logger
15 | from rich.console import Console
16 | from rich.panel import Panel
17 |
18 | console = Console()
19 |
20 |
21 | async def get_importer_dependencies() -> Tuple[MarkdownProcessor, FileService]:
22 | """Get MarkdownProcessor and FileService instances for importers."""
23 | config = get_project_config()
24 | app_config = ConfigManager().config
25 | entity_parser = EntityParser(config.home)
26 | markdown_processor = MarkdownProcessor(entity_parser, app_config=app_config)
27 | file_service = FileService(config.home, markdown_processor, app_config=app_config)
28 | return markdown_processor, file_service
29 |
30 |
31 | @import_app.command(name="chatgpt", help="Import conversations from ChatGPT JSON export.")
32 | def import_chatgpt(
33 | conversations_json: Annotated[
34 | Path, typer.Argument(help="Path to ChatGPT conversations.json file")
35 | ] = Path("conversations.json"),
36 | folder: Annotated[
37 | str, typer.Option(help="The folder to place the files in.")
38 | ] = "conversations",
39 | ):
40 | """Import chat conversations from ChatGPT JSON format.
41 |
42 | This command will:
43 | 1. Read the complex tree structure of messages
44 | 2. Convert them to linear markdown conversations
45 | 3. Save as clean, readable markdown files
46 |
47 | After importing, run 'basic-memory sync' to index the new files.
48 | """
49 |
50 | try:
51 | if not conversations_json.exists(): # pragma: no cover
52 | typer.echo(f"Error: File not found: {conversations_json}", err=True)
53 | raise typer.Exit(1)
54 |
55 | # Get importer dependencies
56 | markdown_processor, file_service = run_with_cleanup(get_importer_dependencies())
57 | config = get_project_config()
58 | # Process the file
59 | base_path = config.home / folder
60 | console.print(f"\nImporting chats from {conversations_json}...writing to {base_path}")
61 |
62 | # Create importer and run import
63 | importer = ChatGPTImporter(config.home, markdown_processor, file_service)
64 | with conversations_json.open("r", encoding="utf-8") as file:
65 | json_data = json.load(file)
66 | result = run_with_cleanup(importer.import_data(json_data, folder))
67 |
68 | if not result.success: # pragma: no cover
69 | typer.echo(f"Error during import: {result.error_message}", err=True)
70 | raise typer.Exit(1)
71 |
72 | # Show results
73 | console.print(
74 | Panel(
75 | f"[green]Import complete![/green]\n\n"
76 | f"Imported {result.conversations} conversations\n"
77 | f"Containing {result.messages} messages",
78 | expand=False,
79 | )
80 | )
81 |
82 | console.print("\nRun 'basic-memory sync' to index the new files.")
83 |
84 | except Exception as e:
85 | logger.error("Import failed")
86 | typer.echo(f"Error during import: {e}", err=True)
87 | raise typer.Exit(1)
88 |
```
--------------------------------------------------------------------------------
/src/basic_memory/cli/commands/import_memory_json.py:
--------------------------------------------------------------------------------
```python
1 | """Import command for basic-memory CLI to import from JSON memory format."""
2 |
3 | import json
4 | from pathlib import Path
5 | from typing import Annotated, Tuple
6 |
7 | import typer
8 | from basic_memory.cli.app import import_app
9 | from basic_memory.cli.commands.command_utils import run_with_cleanup
10 | from basic_memory.config import ConfigManager, get_project_config
11 | from basic_memory.importers.memory_json_importer import MemoryJsonImporter
12 | from basic_memory.markdown import EntityParser, MarkdownProcessor
13 | from basic_memory.services.file_service import FileService
14 | from loguru import logger
15 | from rich.console import Console
16 | from rich.panel import Panel
17 |
18 | console = Console()
19 |
20 |
21 | async def get_importer_dependencies() -> Tuple[MarkdownProcessor, FileService]:
22 | """Get MarkdownProcessor and FileService instances for importers."""
23 | config = get_project_config()
24 | app_config = ConfigManager().config
25 | entity_parser = EntityParser(config.home)
26 | markdown_processor = MarkdownProcessor(entity_parser, app_config=app_config)
27 | file_service = FileService(config.home, markdown_processor, app_config=app_config)
28 | return markdown_processor, file_service
29 |
30 |
31 | @import_app.command()
32 | def memory_json(
33 | json_path: Annotated[Path, typer.Argument(..., help="Path to memory.json file")] = Path(
34 | "memory.json"
35 | ),
36 | destination_folder: Annotated[
37 | str, typer.Option(help="Optional destination folder within the project")
38 | ] = "",
39 | ):
40 | """Import entities and relations from a memory.json file.
41 |
42 | This command will:
43 | 1. Read entities and relations from the JSON file
44 | 2. Create markdown files for each entity
45 | 3. Include outgoing relations in each entity's markdown
46 | """
47 |
48 | if not json_path.exists():
49 | typer.echo(f"Error: File not found: {json_path}", err=True)
50 | raise typer.Exit(1)
51 |
52 | config = get_project_config()
53 | try:
54 | # Get importer dependencies
55 | markdown_processor, file_service = run_with_cleanup(get_importer_dependencies())
56 |
57 | # Create the importer
58 | importer = MemoryJsonImporter(config.home, markdown_processor, file_service)
59 |
60 | # Process the file
61 | base_path = config.home if not destination_folder else config.home / destination_folder
62 | console.print(f"\nImporting from {json_path}...writing to {base_path}")
63 |
64 |         # Run the import (memory.json is JSON Lines: one JSON object per line)
65 | file_data = []
66 | with json_path.open("r", encoding="utf-8") as file:
67 | for line in file:
68 | json_data = json.loads(line)
69 | file_data.append(json_data)
70 | result = run_with_cleanup(importer.import_data(file_data, destination_folder))
71 |
72 | if not result.success: # pragma: no cover
73 | typer.echo(f"Error during import: {result.error_message}", err=True)
74 | raise typer.Exit(1)
75 |
76 | # Show results
77 | console.print(
78 | Panel(
79 | f"[green]Import complete![/green]\n\n"
80 | f"Created {result.entities} entities\n"
81 | f"Added {result.relations} relations\n"
82 | f"Skipped {result.skipped_entities} entities\n",
83 | expand=False,
84 | )
85 | )
86 |
87 | except Exception as e:
88 | logger.error("Import failed")
89 | typer.echo(f"Error during import: {e}", err=True)
90 | raise typer.Exit(1)
91 |
```
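Note the line-by-line read above: `memory.json` is JSON Lines (one object per line), not a single JSON document. A hypothetical two-record file for illustration (field names follow the common MCP memory-server shape, which is an assumption here):

```python
import json

raw = (
    '{"type": "entity", "name": "Basic Memory", "entityType": "project", "observations": []}\n'
    '{"type": "relation", "from": "Basic Memory", "to": "SQLite", "relationType": "uses"}\n'
)

records = [json.loads(line) for line in raw.splitlines() if line.strip()]
print(len(records), records[0]["type"], records[1]["type"])  # 2 entity relation
```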
--------------------------------------------------------------------------------
/src/basic_memory/mcp/container.py:
--------------------------------------------------------------------------------
```python
1 | """MCP composition root for Basic Memory.
2 |
3 | This container owns reading ConfigManager and environment variables for the
4 | MCP server entrypoint. Downstream modules receive config/dependencies explicitly
5 | rather than reading globals.
6 |
7 | Design principles:
8 | - Only this module reads ConfigManager directly
9 | - Runtime mode (cloud/local/test) is resolved here
10 | - File sync decisions are centralized here
11 | """
12 |
13 | from dataclasses import dataclass
14 | from typing import TYPE_CHECKING
15 |
16 | from basic_memory.config import BasicMemoryConfig, ConfigManager
17 | from basic_memory.runtime import RuntimeMode, resolve_runtime_mode
18 |
19 | if TYPE_CHECKING: # pragma: no cover
20 | from basic_memory.sync import SyncCoordinator
21 |
22 |
23 | @dataclass
24 | class McpContainer:
25 | """Composition root for the MCP server entrypoint.
26 |
27 | Holds resolved configuration and runtime context.
28 | Created once at server startup, then used to wire dependencies.
29 | """
30 |
31 | config: BasicMemoryConfig
32 | mode: RuntimeMode
33 |
34 | @classmethod
35 | def create(cls) -> "McpContainer":
36 | """Create container by reading ConfigManager.
37 |
38 | This is the single point where MCP reads global config.
39 | """
40 | config = ConfigManager().config
41 | mode = resolve_runtime_mode(
42 | cloud_mode_enabled=config.cloud_mode_enabled,
43 | is_test_env=config.is_test_env,
44 | )
45 | return cls(config=config, mode=mode)
46 |
47 | # --- Runtime Mode Properties ---
48 |
49 | @property
50 | def should_sync_files(self) -> bool:
51 | """Whether local file sync should be started.
52 |
53 | Sync is enabled when:
54 | - sync_changes is True in config
55 | - Not in test mode (tests manage their own sync)
56 | - Not in cloud mode (cloud handles sync differently)
57 | """
58 | return self.config.sync_changes and not self.mode.is_test and not self.mode.is_cloud
59 |
60 | @property
61 | def sync_skip_reason(self) -> str | None:
62 | """Reason why sync is skipped, or None if sync should run.
63 |
64 | Useful for logging why sync was disabled.
65 | """
66 | if self.mode.is_test:
67 | return "Test environment detected"
68 | if self.mode.is_cloud:
69 | return "Cloud mode enabled"
70 | if not self.config.sync_changes:
71 | return "Sync changes disabled"
72 | return None
73 |
74 | def create_sync_coordinator(self) -> "SyncCoordinator":
75 | """Create a SyncCoordinator with this container's settings.
76 |
77 | Returns:
78 | SyncCoordinator configured for this runtime environment
79 | """
80 | # Deferred import to avoid circular dependency
81 | from basic_memory.sync import SyncCoordinator
82 |
83 | return SyncCoordinator(
84 | config=self.config,
85 | should_sync=self.should_sync_files,
86 | skip_reason=self.sync_skip_reason,
87 | )
88 |
89 |
90 | # Module-level container instance (set by lifespan)
91 | _container: McpContainer | None = None
92 |
93 |
94 | def get_container() -> McpContainer:
95 | """Get the current MCP container.
96 |
97 | Raises:
98 | RuntimeError: If container hasn't been initialized
99 | """
100 | if _container is None:
101 | raise RuntimeError("MCP container not initialized. Call set_container() first.")
102 | return _container
103 |
104 |
105 | def set_container(container: McpContainer) -> None:
106 | """Set the MCP container (called by lifespan)."""
107 | global _container
108 | _container = container
109 |
```
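A minimal sketch of how a server lifespan might wire the container (the `startup` function is illustrative; only `create`, `set_container`, and the sync properties come from this module):

```python
from basic_memory.mcp.container import McpContainer, set_container

def startup() -> None:
    container = McpContainer.create()  # the single ConfigManager read
    set_container(container)

    if container.should_sync_files:
        coordinator = container.create_sync_coordinator()
        # hand coordinator to the server's startup logic here
    else:
        print(f"File sync disabled: {container.sync_skip_reason}")
```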
--------------------------------------------------------------------------------
/src/basic_memory/cli/commands/import_claude_projects.py:
--------------------------------------------------------------------------------
```python
1 | """Import command for basic-memory CLI to import project data from Claude.ai."""
2 |
3 | import json
4 | from pathlib import Path
5 | from typing import Annotated, Tuple
6 |
7 | import typer
8 | from basic_memory.cli.app import claude_app
9 | from basic_memory.cli.commands.command_utils import run_with_cleanup
10 | from basic_memory.config import ConfigManager, get_project_config
11 | from basic_memory.importers.claude_projects_importer import ClaudeProjectsImporter
12 | from basic_memory.markdown import EntityParser, MarkdownProcessor
13 | from basic_memory.services.file_service import FileService
14 | from loguru import logger
15 | from rich.console import Console
16 | from rich.panel import Panel
17 |
18 | console = Console()
19 |
20 |
21 | async def get_importer_dependencies() -> Tuple[MarkdownProcessor, FileService]:
22 | """Get MarkdownProcessor and FileService instances for importers."""
23 | config = get_project_config()
24 | app_config = ConfigManager().config
25 | entity_parser = EntityParser(config.home)
26 | markdown_processor = MarkdownProcessor(entity_parser, app_config=app_config)
27 | file_service = FileService(config.home, markdown_processor, app_config=app_config)
28 | return markdown_processor, file_service
29 |
30 |
31 | @claude_app.command(name="projects", help="Import projects from Claude.ai.")
32 | def import_projects(
33 | projects_json: Annotated[Path, typer.Argument(..., help="Path to projects.json file")] = Path(
34 | "projects.json"
35 | ),
36 | base_folder: Annotated[
37 | str, typer.Option(help="The base folder to place project files in.")
38 | ] = "projects",
39 | ):
40 | """Import project data from Claude.ai.
41 |
42 | This command will:
43 | 1. Create a directory for each project
44 | 2. Store docs in a docs/ subdirectory
45 | 3. Place prompt template in project root
46 |
47 | After importing, run 'basic-memory sync' to index the new files.
48 | """
49 | config = get_project_config()
50 | try:
51 | if not projects_json.exists():
52 | typer.echo(f"Error: File not found: {projects_json}", err=True)
53 | raise typer.Exit(1)
54 |
55 | # Get importer dependencies
56 | markdown_processor, file_service = run_with_cleanup(get_importer_dependencies())
57 |
58 | # Create the importer
59 | importer = ClaudeProjectsImporter(config.home, markdown_processor, file_service)
60 |
61 | # Process the file
62 | base_path = config.home / base_folder if base_folder else config.home
63 | console.print(f"\nImporting projects from {projects_json}...writing to {base_path}")
64 |
65 | # Run the import
66 | with projects_json.open("r", encoding="utf-8") as file:
67 | json_data = json.load(file)
68 | result = run_with_cleanup(importer.import_data(json_data, base_folder))
69 |
70 | if not result.success: # pragma: no cover
71 | typer.echo(f"Error during import: {result.error_message}", err=True)
72 | raise typer.Exit(1)
73 |
74 | # Show results
75 | console.print(
76 | Panel(
77 | f"[green]Import complete![/green]\n\n"
78 | f"Imported {result.documents} project documents\n"
79 | f"Imported {result.prompts} prompt templates",
80 | expand=False,
81 | )
82 | )
83 |
84 | console.print("\nRun 'basic-memory sync' to index the new files.")
85 |
86 | except Exception as e:
87 | logger.error("Import failed")
88 | typer.echo(f"Error during import: {e}", err=True)
89 | raise typer.Exit(1)
90 |
```
--------------------------------------------------------------------------------
/src/basic_memory/cli/commands/import_claude_conversations.py:
--------------------------------------------------------------------------------
```python
1 | """Import command for basic-memory CLI to import chat data from conversations2.json format."""
2 |
3 | import json
4 | from pathlib import Path
5 | from typing import Annotated, Tuple
6 |
7 | import typer
8 | from basic_memory.cli.app import claude_app
9 | from basic_memory.cli.commands.command_utils import run_with_cleanup
10 | from basic_memory.config import ConfigManager, get_project_config
11 | from basic_memory.importers.claude_conversations_importer import ClaudeConversationsImporter
12 | from basic_memory.markdown import EntityParser, MarkdownProcessor
13 | from basic_memory.services.file_service import FileService
14 | from loguru import logger
15 | from rich.console import Console
16 | from rich.panel import Panel
17 |
18 | console = Console()
19 |
20 |
21 | async def get_importer_dependencies() -> Tuple[MarkdownProcessor, FileService]:
22 | """Get MarkdownProcessor and FileService instances for importers."""
23 | config = get_project_config()
24 | app_config = ConfigManager().config
25 | entity_parser = EntityParser(config.home)
26 | markdown_processor = MarkdownProcessor(entity_parser, app_config=app_config)
27 | file_service = FileService(config.home, markdown_processor, app_config=app_config)
28 | return markdown_processor, file_service
29 |
30 |
31 | @claude_app.command(name="conversations", help="Import chat conversations from Claude.ai.")
32 | def import_claude(
33 | conversations_json: Annotated[
34 | Path, typer.Argument(..., help="Path to conversations.json file")
35 | ] = Path("conversations.json"),
36 | folder: Annotated[
37 | str, typer.Option(help="The folder to place the files in.")
38 | ] = "conversations",
39 | ):
40 | """Import chat conversations from conversations2.json format.
41 |
42 | This command will:
43 | 1. Read chat data and nested messages
44 | 2. Create markdown files for each conversation
45 | 3. Format content in clean, readable markdown
46 |
47 | After importing, run 'basic-memory sync' to index the new files.
48 | """
49 |
50 | config = get_project_config()
51 | try:
52 | if not conversations_json.exists():
53 | typer.echo(f"Error: File not found: {conversations_json}", err=True)
54 | raise typer.Exit(1)
55 |
56 | # Get importer dependencies
57 | markdown_processor, file_service = run_with_cleanup(get_importer_dependencies())
58 |
59 | # Create the importer
60 | importer = ClaudeConversationsImporter(config.home, markdown_processor, file_service)
61 |
62 | # Process the file
63 | base_path = config.home / folder
64 | console.print(f"\nImporting chats from {conversations_json}...writing to {base_path}")
65 |
66 | # Run the import
67 | with conversations_json.open("r", encoding="utf-8") as file:
68 | json_data = json.load(file)
69 | result = run_with_cleanup(importer.import_data(json_data, folder))
70 |
71 | if not result.success: # pragma: no cover
72 | typer.echo(f"Error during import: {result.error_message}", err=True)
73 | raise typer.Exit(1)
74 |
75 | # Show results
76 | console.print(
77 | Panel(
78 | f"[green]Import complete![/green]\n\n"
79 | f"Imported {result.conversations} conversations\n"
80 | f"Containing {result.messages} messages",
81 | expand=False,
82 | )
83 | )
84 |
85 | console.print("\nRun 'basic-memory sync' to index the new files.")
86 |
87 | except Exception as e:
88 | logger.error("Import failed")
89 | typer.echo(f"Error during import: {e}", err=True)
90 | raise typer.Exit(1)
91 |
```
--------------------------------------------------------------------------------
/src/basic_memory/models/project.py:
--------------------------------------------------------------------------------
```python
1 | """Project model for Basic Memory."""
2 |
3 | import uuid
4 | from datetime import datetime, UTC
5 | from typing import Optional
6 |
7 | from sqlalchemy import (
8 | Integer,
9 | String,
10 | Text,
11 | Boolean,
12 | DateTime,
13 | Float,
14 | Index,
15 | event,
16 | )
17 | from sqlalchemy.orm import Mapped, mapped_column, relationship
18 |
19 | from basic_memory.models.base import Base
20 | from basic_memory.utils import generate_permalink
21 |
22 |
23 | class Project(Base):
24 | """Project model for Basic Memory.
25 |
26 | A project represents a collection of knowledge entities that are grouped together.
27 | Projects are stored in the app-level database and provide context for all knowledge
28 | operations.
29 | """
30 |
31 | __tablename__ = "project"
32 | __table_args__ = (
33 | # Regular indexes
34 | Index("ix_project_name", "name", unique=True),
35 | Index("ix_project_permalink", "permalink", unique=True),
36 | Index("ix_project_external_id", "external_id", unique=True),
37 | Index("ix_project_path", "path"),
38 | Index("ix_project_created_at", "created_at"),
39 | Index("ix_project_updated_at", "updated_at"),
40 | )
41 |
42 | # Core identity
43 | id: Mapped[int] = mapped_column(Integer, primary_key=True)
44 | # External UUID for API references - stable identifier that won't change
45 | external_id: Mapped[str] = mapped_column(String, unique=True, default=lambda: str(uuid.uuid4()))
46 | name: Mapped[str] = mapped_column(String, unique=True)
47 | description: Mapped[Optional[str]] = mapped_column(Text, nullable=True)
48 |
49 | # URL-friendly identifier generated from name
50 | permalink: Mapped[str] = mapped_column(String, unique=True)
51 |
52 | # Filesystem path to project directory
53 | path: Mapped[str] = mapped_column(String)
54 |
55 | # Status flags
56 | is_active: Mapped[bool] = mapped_column(Boolean, default=True)
57 | is_default: Mapped[Optional[bool]] = mapped_column(Boolean, default=None, nullable=True)
58 |
59 | # Timestamps
60 | created_at: Mapped[datetime] = mapped_column(
61 | DateTime(timezone=True), default=lambda: datetime.now(UTC)
62 | )
63 | updated_at: Mapped[datetime] = mapped_column(
64 | DateTime(timezone=True),
65 | default=lambda: datetime.now(UTC),
66 | onupdate=lambda: datetime.now(UTC),
67 | )
68 |
69 | # Sync optimization - scan watermark tracking
70 | last_scan_timestamp: Mapped[Optional[float]] = mapped_column(Float, nullable=True)
71 | last_file_count: Mapped[Optional[int]] = mapped_column(Integer, nullable=True)
72 |
73 | # Define relationships to entities, observations, and relations
74 | # These relationships will be established once we add project_id to those models
75 | entities = relationship("Entity", back_populates="project", cascade="all, delete-orphan")
76 |
77 | def __repr__(self) -> str: # pragma: no cover
78 | return f"Project(id={self.id}, external_id='{self.external_id}', name='{self.name}', permalink='{self.permalink}', path='{self.path}')"
79 |
80 |
81 | @event.listens_for(Project, "before_insert")
82 | @event.listens_for(Project, "before_update")
83 | def set_project_permalink(mapper, connection, project):
84 | """Generate URL-friendly permalink for the project if needed.
85 |
86 | This event listener ensures the permalink is always derived from the name,
87 | even if the name changes.
88 | """
89 | # If the name changed or permalink is empty, regenerate permalink
90 | if not project.permalink or project.permalink != generate_permalink(project.name):
91 | project.permalink = generate_permalink(project.name)
92 |
```
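The listeners keep `permalink` derived from `name` on every insert and update; the slug itself comes from `generate_permalink`. A small sketch (printed outputs are expectations of the slug style, not guarantees):

```python
from basic_memory.utils import generate_permalink

for name in ["My Notes", "my-notes", "Specs & Plans"]:
    print(name, "->", generate_permalink(name))
# Expected style: lowercase URL-friendly slugs such as "my-notes". Re-running
# on an existing slug should be a no-op, which is why the before_update
# listener can safely compare and regenerate.
```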
--------------------------------------------------------------------------------
/.github/workflows/claude-code-review.yml:
--------------------------------------------------------------------------------
```yaml
1 | name: Claude Code Review
2 |
3 | on:
4 | pull_request:
5 | types: [opened, synchronize]
6 | # Optional: Only run on specific file changes
7 | # paths:
8 | # - "src/**/*.ts"
9 | # - "src/**/*.tsx"
10 | # - "src/**/*.js"
11 | # - "src/**/*.jsx"
12 |
13 | jobs:
14 | claude-review:
15 | # Only run for organization members and collaborators
16 | if: |
17 | github.event.pull_request.author_association == 'OWNER' ||
18 | github.event.pull_request.author_association == 'MEMBER' ||
19 | github.event.pull_request.author_association == 'COLLABORATOR'
20 |
21 | runs-on: ubuntu-latest
22 | permissions:
23 | contents: read
24 | pull-requests: write
25 | issues: read
26 | id-token: write
27 |
28 | steps:
29 | - name: Checkout repository
30 | uses: actions/checkout@v4
31 | with:
32 | fetch-depth: 1
33 |
34 | - name: Run Claude Code Review
35 | id: claude-review
36 | uses: anthropics/claude-code-action@v1
37 | with:
38 | claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
39 | github_token: ${{ secrets.GITHUB_TOKEN }}
40 | track_progress: true # Enable visual progress tracking
41 | allowed_bots: '*'
42 | prompt: |
43 | Review this Basic Memory PR against our team checklist:
44 |
45 | ## Code Quality & Standards
46 | - [ ] Follows Basic Memory's coding conventions in CLAUDE.md
47 | - [ ] Python 3.12+ type annotations and async patterns
48 | - [ ] SQLAlchemy 2.0 best practices
49 | - [ ] FastAPI and Typer conventions followed
50 | - [ ] 100-character line length limit maintained
51 | - [ ] No commented-out code blocks
52 |
53 | ## Testing & Documentation
54 | - [ ] Unit tests for new functions/methods
55 | - [ ] Integration tests for new MCP tools
56 | - [ ] Test coverage for edge cases
57 | - [ ] **100% test coverage maintained** (use `# pragma: no cover` only for truly hard-to-test code)
58 | - [ ] Documentation updated (README, docstrings)
59 | - [ ] CLAUDE.md updated if conventions change
60 |
61 | ## Basic Memory Architecture
62 | - [ ] MCP tools follow atomic, composable design
63 | - [ ] Database changes include Alembic migrations
64 | - [ ] Preserves local-first architecture principles
65 | - [ ] Knowledge graph operations maintain consistency
66 | - [ ] Markdown file handling preserves integrity
67 | - [ ] AI-human collaboration patterns followed
68 |
69 | ## Security & Performance
70 | - [ ] No hardcoded secrets or credentials
71 | - [ ] Input validation for MCP tools
72 | - [ ] Proper error handling and logging
73 | - [ ] Performance considerations addressed
74 | - [ ] No sensitive data in logs or commits
75 |
76 |           ## Compatibility
77 |           - [ ] File path comparisons must be Windows compatible
78 | - [ ] Avoid using emojis and unicode characters in console and log output
79 |
80 | Read the CLAUDE.md file for detailed project context. For each checklist item, verify if it's satisfied and comment on any that need attention. Use inline comments for specific code issues and post a summary with checklist results.
81 |
82 | # Allow broader tool access for thorough code review
83 | claude_args: '--allowed-tools "Bash(gh pr:*),Bash(gh issue:*),Bash(gh api:*),Bash(git log:*),Bash(git show:*),Read,Grep,Glob"'
84 |
```
--------------------------------------------------------------------------------
/src/basic_memory/repository/search_repository.py:
--------------------------------------------------------------------------------
```python
1 | """Repository for search operations.
2 |
3 | This module provides the search repository interface.
4 | The actual repository implementations are backend-specific:
5 | - SQLiteSearchRepository: Uses FTS5 virtual tables
6 | - PostgresSearchRepository: Uses tsvector/tsquery with GIN indexes
7 | """
8 |
9 | from datetime import datetime
10 | from typing import List, Optional, Protocol
11 |
12 | from sqlalchemy import Result
13 | from sqlalchemy.ext.asyncio import AsyncSession, async_sessionmaker
14 |
15 | from basic_memory.config import ConfigManager, DatabaseBackend
16 | from basic_memory.repository.postgres_search_repository import PostgresSearchRepository
17 | from basic_memory.repository.search_index_row import SearchIndexRow
18 | from basic_memory.repository.sqlite_search_repository import SQLiteSearchRepository
19 | from basic_memory.schemas.search import SearchItemType
20 |
21 |
22 | class SearchRepository(Protocol):
23 | """Protocol defining the search repository interface.
24 |
25 | Both SQLite and Postgres implementations must satisfy this protocol.
26 | """
27 |
28 | project_id: int
29 |
30 | async def init_search_index(self) -> None:
31 | """Initialize the search index schema."""
32 | ...
33 |
34 | async def search(
35 | self,
36 | search_text: Optional[str] = None,
37 | permalink: Optional[str] = None,
38 | permalink_match: Optional[str] = None,
39 | title: Optional[str] = None,
40 | types: Optional[List[str]] = None,
41 | after_date: Optional[datetime] = None,
42 | search_item_types: Optional[List[SearchItemType]] = None,
43 | limit: int = 10,
44 | offset: int = 0,
45 | ) -> List[SearchIndexRow]:
46 | """Search across indexed content."""
47 | ...
48 |
49 | async def index_item(self, search_index_row: SearchIndexRow) -> None:
50 | """Index a single item."""
51 | ...
52 |
53 | async def bulk_index_items(self, search_index_rows: List[SearchIndexRow]) -> None:
54 | """Index multiple items in a batch."""
55 | ...
56 |
57 | async def delete_by_permalink(self, permalink: str) -> None:
58 | """Delete item by permalink."""
59 | ...
60 |
61 | async def delete_by_entity_id(self, entity_id: int) -> None:
62 | """Delete items by entity ID."""
63 | ...
64 |
65 | async def execute_query(self, query, params: dict) -> Result:
66 | """Execute a raw SQL query."""
67 | ...
68 |
69 |
70 | def create_search_repository(
71 | session_maker: async_sessionmaker[AsyncSession],
72 | project_id: int,
73 | database_backend: Optional[DatabaseBackend] = None,
74 | ) -> SearchRepository:
75 | """Factory function to create the appropriate search repository based on database backend.
76 |
77 | Args:
78 | session_maker: SQLAlchemy async session maker
79 | project_id: Project ID for the repository
80 | database_backend: Optional explicit backend. If not provided, reads from ConfigManager.
81 | Prefer passing explicitly from composition roots.
82 |
83 | Returns:
84 | SearchRepository: Backend-appropriate search repository instance
85 | """
86 | # Prefer explicit parameter; fall back to ConfigManager for backwards compatibility
87 | if database_backend is None:
88 | config = ConfigManager().config
89 | database_backend = config.database_backend
90 |
91 | if database_backend == DatabaseBackend.POSTGRES: # pragma: no cover
92 | return PostgresSearchRepository(session_maker, project_id=project_id) # pragma: no cover
93 | else:
94 | return SQLiteSearchRepository(session_maker, project_id=project_id)
95 |
96 |
97 | __all__ = [
98 | "SearchRepository",
99 | "SearchIndexRow",
100 | "create_search_repository",
101 | ]
102 |
```
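A minimal composition-root sketch that passes the backend explicitly, as the factory docstring recommends (the `SQLITE` enum member is assumed to exist alongside the `POSTGRES` member used above):

```python
from sqlalchemy.ext.asyncio import async_sessionmaker, create_async_engine

from basic_memory.config import DatabaseBackend
from basic_memory.repository.search_repository import create_search_repository

engine = create_async_engine("sqlite+aiosqlite:///:memory:")
session_maker = async_sessionmaker(engine, expire_on_commit=False)

repository = create_search_repository(
    session_maker,
    project_id=1,
    database_backend=DatabaseBackend.SQLITE,  # explicit: no hidden ConfigManager read
)
print(type(repository).__name__)  # SQLiteSearchRepository
```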
--------------------------------------------------------------------------------
/src/basic_memory/importers/base.py:
--------------------------------------------------------------------------------
```python
1 | """Base import service for Basic Memory."""
2 |
3 | import logging
4 | from abc import abstractmethod
5 | from pathlib import Path
6 | from typing import TYPE_CHECKING, Any, Optional, TypeVar
7 |
8 | from basic_memory.markdown.markdown_processor import MarkdownProcessor
9 | from basic_memory.markdown.schemas import EntityMarkdown
10 | from basic_memory.schemas.importer import ImportResult
11 |
12 | if TYPE_CHECKING: # pragma: no cover
13 | from basic_memory.services.file_service import FileService
14 |
15 | logger = logging.getLogger(__name__)
16 |
17 | T = TypeVar("T", bound=ImportResult)
18 |
19 |
20 | class Importer[T: ImportResult]:
21 | """Base class for all import services.
22 |
23 | All file operations are delegated to FileService, which can be overridden
24 | in cloud environments to use S3 or other storage backends.
25 | """
26 |
27 | def __init__(
28 | self,
29 | base_path: Path,
30 | markdown_processor: MarkdownProcessor,
31 | file_service: "FileService",
32 | ):
33 | """Initialize the import service.
34 |
35 | Args:
36 | base_path: Base path for the project.
37 | markdown_processor: MarkdownProcessor instance for markdown serialization.
38 | file_service: FileService instance for all file operations.
39 | """
40 | self.base_path = base_path.resolve() # Get absolute path
41 | self.markdown_processor = markdown_processor
42 | self.file_service = file_service
43 |
44 | @abstractmethod
45 | async def import_data(self, source_data, destination_folder: str, **kwargs: Any) -> T:
46 | """Import data from source file to destination folder.
47 |
48 | Args:
49 | source_path: Path to the source file.
50 | destination_folder: Destination folder within the project.
51 | **kwargs: Additional keyword arguments for specific import types.
52 |
53 | Returns:
54 | ImportResult containing statistics and status of the import.
55 | """
56 | pass # pragma: no cover
57 |
58 | async def write_entity(self, entity: EntityMarkdown, file_path: str | Path) -> str:
59 | """Write entity to file using FileService.
60 |
61 | This method serializes the entity to markdown and writes it using
62 | FileService, which handles directory creation and storage backend
63 | abstraction (local filesystem vs cloud storage).
64 |
65 | Args:
66 | entity: EntityMarkdown instance to write.
67 | file_path: Relative path to write the entity to. FileService handles base_path.
68 |
69 | Returns:
70 | Checksum of written file.
71 | """
72 | content = self.markdown_processor.to_markdown_string(entity)
73 | # FileService.write_file handles directory creation and returns checksum
74 | return await self.file_service.write_file(file_path, content)
75 |
76 | async def ensure_folder_exists(self, folder: str) -> None:
77 | """Ensure folder exists using FileService.
78 |
79 | For cloud storage (S3), this is essentially a no-op since S3 doesn't
80 | have actual folders - they're just key prefixes.
81 |
82 | Args:
83 | folder: Relative folder path within the project. FileService handles base_path.
84 | """
85 | await self.file_service.ensure_directory(folder)
86 |
87 | @abstractmethod
88 | def handle_error(
89 | self, message: str, error: Optional[Exception] = None
90 | ) -> T: # pragma: no cover
91 | """Handle errors during import.
92 |
93 | Args:
94 | message: Error message.
95 | error: Optional exception that caused the error.
96 |
97 | Returns:
98 | ImportResult with error information.
99 | """
100 | pass
101 |
```
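A minimal concrete importer sketch showing the two abstract hooks (the `ImportResult` constructor fields are assumptions; check `basic_memory.schemas.importer` for the real shape):

```python
from typing import Any, Optional

from basic_memory.importers.base import Importer
from basic_memory.schemas.importer import ImportResult

class NoopImporter(Importer[ImportResult]):
    """Illustrative importer that creates its folder and writes nothing."""

    async def import_data(self, source_data, destination_folder: str, **kwargs: Any) -> ImportResult:
        await self.ensure_folder_exists(destination_folder)
        # Real importers build EntityMarkdown objects and call self.write_entity()
        return ImportResult(success=True)  # assumed field names

    def handle_error(self, message: str, error: Optional[Exception] = None) -> ImportResult:
        return ImportResult(success=False, error_message=message)  # assumed field names
```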
--------------------------------------------------------------------------------
/src/basic_memory/models/search.py:
--------------------------------------------------------------------------------
```python
1 | """Search DDL statements for SQLite and Postgres.
2 |
3 | The search_index table is created via raw DDL, not ORM models, because:
4 | - SQLite uses FTS5 virtual tables (cannot be represented as ORM)
5 | - Postgres uses composite primary keys and generated tsvector columns
6 | - Both backends use raw SQL for all search operations via SearchIndexRow dataclass
7 | """
8 |
9 | from sqlalchemy import DDL
10 |
11 |
12 | # Define Postgres search_index table with composite primary key and tsvector
13 | # This DDL matches the Alembic migration schema (314f1ea54dc4)
14 | # Used by tests to create the table without running full migrations
15 | # NOTE: Split into separate DDL statements because asyncpg doesn't support
16 | # multiple statements in a single execute call.
17 | CREATE_POSTGRES_SEARCH_INDEX_TABLE = DDL("""
18 | CREATE TABLE IF NOT EXISTS search_index (
19 | id INTEGER NOT NULL,
20 | project_id INTEGER NOT NULL,
21 | title TEXT,
22 | content_stems TEXT,
23 | content_snippet TEXT,
24 | permalink VARCHAR,
25 | file_path VARCHAR,
26 | type VARCHAR,
27 | from_id INTEGER,
28 | to_id INTEGER,
29 | relation_type VARCHAR,
30 | entity_id INTEGER,
31 | category VARCHAR,
32 | metadata JSONB,
33 | created_at TIMESTAMP WITH TIME ZONE,
34 | updated_at TIMESTAMP WITH TIME ZONE,
35 | textsearchable_index_col tsvector GENERATED ALWAYS AS (
36 | to_tsvector('english', coalesce(title, '') || ' ' || coalesce(content_stems, ''))
37 | ) STORED,
38 | PRIMARY KEY (id, type, project_id),
39 | FOREIGN KEY (project_id) REFERENCES project(id) ON DELETE CASCADE
40 | )
41 | """)
42 |
43 | CREATE_POSTGRES_SEARCH_INDEX_FTS = DDL("""
44 | CREATE INDEX IF NOT EXISTS idx_search_index_fts ON search_index USING gin(textsearchable_index_col)
45 | """)
46 |
47 | CREATE_POSTGRES_SEARCH_INDEX_METADATA = DDL("""
48 | CREATE INDEX IF NOT EXISTS idx_search_index_metadata_gin ON search_index USING gin(metadata jsonb_path_ops)
49 | """)
50 |
51 | # Partial unique index on (permalink, project_id) for non-null permalinks
52 | # This prevents duplicate permalinks per project and is used by upsert operations
53 | # in PostgresSearchRepository to handle race conditions during parallel indexing
54 | CREATE_POSTGRES_SEARCH_INDEX_PERMALINK = DDL("""
55 | CREATE UNIQUE INDEX IF NOT EXISTS uix_search_index_permalink_project
56 | ON search_index (permalink, project_id)
57 | WHERE permalink IS NOT NULL
58 | """)
59 |
60 | # Define FTS5 virtual table creation for SQLite only
61 | # This DDL is executed separately for SQLite databases
62 | CREATE_SEARCH_INDEX = DDL("""
63 | CREATE VIRTUAL TABLE IF NOT EXISTS search_index USING fts5(
64 | -- Core entity fields
65 | id UNINDEXED, -- Row ID
66 | title, -- Title for searching
67 | content_stems, -- Main searchable content split into stems
68 | content_snippet, -- File content snippet for display
69 | permalink, -- Stable identifier (now indexed for path search)
70 | file_path UNINDEXED, -- Physical location
71 | type UNINDEXED, -- entity/relation/observation
72 |
73 | -- Project context
74 | project_id UNINDEXED, -- Project identifier
75 |
76 | -- Relation fields
77 | from_id UNINDEXED, -- Source entity
78 | to_id UNINDEXED, -- Target entity
79 | relation_type UNINDEXED, -- Type of relation
80 |
81 | -- Observation fields
82 | entity_id UNINDEXED, -- Parent entity
83 | category UNINDEXED, -- Observation category
84 |
85 | -- Common fields
86 | metadata UNINDEXED, -- JSON metadata
87 | created_at UNINDEXED, -- Creation timestamp
88 | updated_at UNINDEXED, -- Last update
89 |
90 | -- Configuration
91 | tokenize='unicode61 tokenchars 0x2F', -- Hex code for /
92 | prefix='1,2,3,4' -- Support longer prefixes for paths
93 | );
94 | """)
95 |
```
--------------------------------------------------------------------------------
/src/basic_memory/mcp/clients/memory.py:
--------------------------------------------------------------------------------
```python
1 | """Typed client for memory/context API operations.
2 |
3 | Encapsulates all /v2/projects/{project_id}/memory/* endpoints.
4 | """
5 |
6 | from typing import Optional
7 |
8 | from httpx import AsyncClient
9 |
10 | from basic_memory.mcp.tools.utils import call_get
11 | from basic_memory.schemas.memory import GraphContext
12 |
13 |
14 | class MemoryClient:
15 | """Typed client for memory context operations.
16 |
17 | Centralizes:
18 | - API path construction for /v2/projects/{project_id}/memory/*
19 | - Response validation via Pydantic models
20 | - Consistent error handling through call_* utilities
21 |
22 | Usage:
23 | async with get_client() as http_client:
24 | client = MemoryClient(http_client, project_id)
25 | context = await client.build_context("memory://specs/search")
26 | """
27 |
28 | def __init__(self, http_client: AsyncClient, project_id: str):
29 | """Initialize the memory client.
30 |
31 | Args:
32 | http_client: HTTPX AsyncClient for making requests
33 | project_id: Project external_id (UUID) for API calls
34 | """
35 | self.http_client = http_client
36 | self.project_id = project_id
37 | self._base_path = f"/v2/projects/{project_id}/memory"
38 |
39 | async def build_context(
40 | self,
41 | path: str,
42 | *,
43 | depth: int = 1,
44 | timeframe: Optional[str] = None,
45 | page: int = 1,
46 | page_size: int = 10,
47 | max_related: int = 10,
48 | ) -> GraphContext:
49 | """Build context from a memory path.
50 |
51 | Args:
52 | path: The path to build context for (without memory:// prefix)
53 | depth: How deep to traverse relations
54 | timeframe: Time filter (e.g., "7d", "1 week")
55 | page: Page number (1-indexed)
56 | page_size: Results per page
57 | max_related: Maximum related items per result
58 |
59 | Returns:
60 | GraphContext with hierarchical results
61 |
62 | Raises:
63 | ToolError: If the request fails
64 | """
65 | params: dict = {
66 | "depth": depth,
67 | "page": page,
68 | "page_size": page_size,
69 | "max_related": max_related,
70 | }
71 | if timeframe:
72 | params["timeframe"] = timeframe
73 |
74 | response = await call_get(
75 | self.http_client,
76 | f"{self._base_path}/{path}",
77 | params=params,
78 | )
79 | return GraphContext.model_validate(response.json())
80 |
81 | async def recent(
82 | self,
83 | *,
84 | timeframe: str = "7d",
85 | depth: int = 1,
86 | types: Optional[list[str]] = None,
87 | page: int = 1,
88 | page_size: int = 10,
89 | ) -> GraphContext:
90 | """Get recent activity.
91 |
92 | Args:
93 | timeframe: Time filter (e.g., "7d", "1 week", "2 days ago")
94 | depth: How deep to traverse relations
95 | types: Filter by item types
96 | page: Page number (1-indexed)
97 | page_size: Results per page
98 |
99 | Returns:
100 | GraphContext with recent activity
101 |
102 | Raises:
103 | ToolError: If the request fails
104 | """
105 | params: dict = {
106 | "timeframe": timeframe,
107 | "depth": depth,
108 | "page": page,
109 | "page_size": page_size,
110 | }
111 | if types:
112 | # Join types as comma-separated string if provided
113 | params["type"] = ",".join(types) if isinstance(types, list) else types
114 |
115 | response = await call_get(
116 | self.http_client,
117 | f"{self._base_path}/recent",
118 | params=params,
119 | )
120 | return GraphContext.model_validate(response.json())
121 |
```
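For context, a usage sketch of this client outside the MCP tool layer might look like the following. The base URL and project UUID are hypothetical placeholders; in the real tools the configured HTTP client comes from `get_client()` as the class docstring shows, and the truncated JSON dump is just one way to inspect the validated `GraphContext` Pydantic model:

```python
import asyncio

from httpx import AsyncClient

from basic_memory.mcp.clients.memory import MemoryClient


async def main() -> None:
    # Hypothetical base URL and project UUID, for illustration only.
    async with AsyncClient(base_url="http://localhost:8000") as http_client:
        client = MemoryClient(http_client, "0b6cdaa2-7f3e-4c11-9b42-0f6a2a1de9ab")

        # Follow relations one hop out from a path (memory:// prefix already stripped).
        context = await client.build_context("specs/search", depth=1, timeframe="1 week")

        # Entity activity from the last two days, first page of ten results.
        recent = await client.recent(timeframe="2 days", types=["entity"])

        # GraphContext is a Pydantic model, so it can be dumped for inspection.
        print(context.model_dump_json()[:200])
        print(recent.model_dump_json()[:200])


asyncio.run(main())
```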
--------------------------------------------------------------------------------
/CLA.md:
--------------------------------------------------------------------------------
```markdown
1 | # Contributor License Agreement
2 |
3 | ## Copyright Assignment and License Grant
4 |
5 | By signing this Contributor License Agreement ("Agreement"), You accept and agree to the following terms and conditions
6 | for Your present and future Contributions submitted to Basic Machines LLC.
7 | Except for the license granted herein to Basic Machines LLC and recipients of software
8 | distributed by Basic Machines LLC, You reserve all right,
9 | title, and interest in and to Your Contributions.
10 |
11 | ### 1. Definitions
12 |
13 | "You" (or "Your") shall mean the copyright owner or legal entity authorized by the copyright owner that is making this
14 | Agreement with Basic Machines LLC.
15 |
16 | "Contribution" shall mean any original work of authorship, including any modifications or additions to an existing work,
17 | that is intentionally submitted by You to Basic Machines LLC
18 | for inclusion in, or documentation of, any of the products owned or managed by Basic Machines LLC
19 | (the "Work").
20 |
21 | ### 2. Grant of Copyright License
22 |
23 | Subject to the terms and conditions of this Agreement, You hereby grant to Basic Machines LLC and to recipients of
24 | software distributed by Basic Machines LLC a perpetual,
25 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to use, copy, modify, merge, publish,
26 | distribute, sublicense, and/or sell copies of the
27 | Work, and to permit persons to whom the Work is furnished to do so.
28 |
29 | ### 3. Assignment of Copyright
30 |
31 | You hereby assign to Basic Machines LLC all right, title, and interest worldwide in all Copyright covering your
32 | Contributions. Basic Machines LLC may license the
33 | Contributions under any license terms, including copyleft, permissive, commercial, or proprietary licenses.
34 |
35 | ### 4. Grant of Patent License
36 |
37 | Subject to the terms and conditions of this Agreement, You hereby grant to Basic Machines LLC and to recipients of
38 | software distributed by Basic Machines LLC a perpetual,
39 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to
40 | make, have made, use, offer to sell, sell, import, and
41 | otherwise transfer the Work.
42 |
43 | ### 5. Developer Certificate of Origin
44 |
45 | By making a Contribution to this project, You certify that:
46 |
47 | (a) The Contribution was created in whole or in part by You and You have the right to submit it under this Agreement; or
48 |
49 | (b) The Contribution is based upon previous work that, to the best of Your knowledge, is covered under an appropriate
50 | open source license and You have the right under that
51 | license to submit that work with modifications, whether created in whole or in part by You, under this Agreement; or
52 |
53 | (c) The Contribution was provided directly to You by some other person who certified (a), (b) or (c) and You have not
54 | modified it.
55 |
56 | (d) You understand and agree that this project and the Contribution are public and that a record of the
57 | Contribution (including all personal information You submit with it, including Your sign-off)
58 | is maintained indefinitely and may be redistributed consistent with this project or the
59 | open source license(s) involved.
60 |
61 | ### 6. Representations
62 |
63 | You represent that You are legally entitled to grant the above license and assignment. If Your employer(s) has rights
64 | to intellectual property that You create that includes Your Contributions,
65 | You represent that You have received permission to make Contributions on behalf of that
66 | employer, or that Your employer has waived such rights
67 | for Your Contributions to Basic Machines LLC.
68 |
69 | ---
70 |
71 | This Agreement is effective as of the date you first submit a Contribution to Basic Machines LLC.
72 |
```