This is page 14 of 23. Use http://codebase.md/basicmachines-co/basic-memory?lines=true&page={x} to view the full context.

# Directory Structure

```
├── .claude
│   ├── agents
│   │   ├── python-developer.md
│   │   └── system-architect.md
│   └── commands
│       ├── release
│       │   ├── beta.md
│       │   ├── changelog.md
│       │   ├── release-check.md
│       │   └── release.md
│       ├── spec.md
│       └── test-live.md
├── .dockerignore
├── .github
│   ├── dependabot.yml
│   ├── ISSUE_TEMPLATE
│   │   ├── bug_report.md
│   │   ├── config.yml
│   │   ├── documentation.md
│   │   └── feature_request.md
│   └── workflows
│       ├── claude-code-review.yml
│       ├── claude-issue-triage.yml
│       ├── claude.yml
│       ├── dev-release.yml
│       ├── docker.yml
│       ├── pr-title.yml
│       ├── release.yml
│       └── test.yml
├── .gitignore
├── .python-version
├── CHANGELOG.md
├── CITATION.cff
├── CLA.md
├── CLAUDE.md
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── docker-compose.yml
├── Dockerfile
├── docs
│   ├── ai-assistant-guide-extended.md
│   ├── character-handling.md
│   ├── cloud-cli.md
│   └── Docker.md
├── justfile
├── LICENSE
├── llms-install.md
├── pyproject.toml
├── README.md
├── SECURITY.md
├── smithery.yaml
├── specs
│   ├── SPEC-1 Specification-Driven Development Process.md
│   ├── SPEC-10 Unified Deployment Workflow and Event Tracking.md
│   ├── SPEC-11 Basic Memory API Performance Optimization.md
│   ├── SPEC-12 OpenTelemetry Observability.md
│   ├── SPEC-13 CLI Authentication with Subscription Validation.md
│   ├── SPEC-14 Cloud Git Versioning & GitHub Backup.md
│   ├── SPEC-14- Cloud Git Versioning & GitHub Backup.md
│   ├── SPEC-15 Configuration Persistence via Tigris for Cloud Tenants.md
│   ├── SPEC-16 MCP Cloud Service Consolidation.md
│   ├── SPEC-17 Semantic Search with ChromaDB.md
│   ├── SPEC-18 AI Memory Management Tool.md
│   ├── SPEC-19 Sync Performance and Memory Optimization.md
│   ├── SPEC-2 Slash Commands Reference.md
│   ├── SPEC-20 Simplified Project-Scoped Rclone Sync.md
│   ├── SPEC-3 Agent Definitions.md
│   ├── SPEC-4 Notes Web UI Component Architecture.md
│   ├── SPEC-5 CLI Cloud Upload via WebDAV.md
│   ├── SPEC-6 Explicit Project Parameter Architecture.md
│   ├── SPEC-7 POC to spike Tigris Turso for local access to cloud data.md
│   ├── SPEC-8 TigrisFS Integration.md
│   ├── SPEC-9 Multi-Project Bidirectional Sync Architecture.md
│   ├── SPEC-9 Signed Header Tenant Information.md
│   └── SPEC-9-1 Follow-Ups- Conflict, Sync, and Observability.md
├── src
│   └── basic_memory
│       ├── __init__.py
│       ├── alembic
│       │   ├── alembic.ini
│       │   ├── env.py
│       │   ├── migrations.py
│       │   ├── script.py.mako
│       │   └── versions
│       │       ├── 3dae7c7b1564_initial_schema.py
│       │       ├── 502b60eaa905_remove_required_from_entity_permalink.py
│       │       ├── 5fe1ab1ccebe_add_projects_table.py
│       │       ├── 647e7a75e2cd_project_constraint_fix.py
│       │       ├── 9d9c1cb7d8f5_add_mtime_and_size_columns_to_entity_.py
│       │       ├── a1b2c3d4e5f6_fix_project_foreign_keys.py
│       │       ├── b3c3938bacdb_relation_to_name_unique_index.py
│       │       ├── cc7172b46608_update_search_index_schema.py
│       │       └── e7e1f4367280_add_scan_watermark_tracking_to_project.py
│       ├── api
│       │   ├── __init__.py
│       │   ├── app.py
│       │   ├── routers
│       │   │   ├── __init__.py
│       │   │   ├── directory_router.py
│       │   │   ├── importer_router.py
│       │   │   ├── knowledge_router.py
│       │   │   ├── management_router.py
│       │   │   ├── memory_router.py
│       │   │   ├── project_router.py
│       │   │   ├── prompt_router.py
│       │   │   ├── resource_router.py
│       │   │   ├── search_router.py
│       │   │   └── utils.py
│       │   └── template_loader.py
│       ├── cli
│       │   ├── __init__.py
│       │   ├── app.py
│       │   ├── auth.py
│       │   ├── commands
│       │   │   ├── __init__.py
│       │   │   ├── cloud
│       │   │   │   ├── __init__.py
│       │   │   │   ├── api_client.py
│       │   │   │   ├── bisync_commands.py
│       │   │   │   ├── cloud_utils.py
│       │   │   │   ├── core_commands.py
│       │   │   │   ├── rclone_commands.py
│       │   │   │   ├── rclone_config.py
│       │   │   │   ├── rclone_installer.py
│       │   │   │   ├── upload_command.py
│       │   │   │   └── upload.py
│       │   │   ├── command_utils.py
│       │   │   ├── db.py
│       │   │   ├── import_chatgpt.py
│       │   │   ├── import_claude_conversations.py
│       │   │   ├── import_claude_projects.py
│       │   │   ├── import_memory_json.py
│       │   │   ├── mcp.py
│       │   │   ├── project.py
│       │   │   ├── status.py
│       │   │   └── tool.py
│       │   └── main.py
│       ├── config.py
│       ├── db.py
│       ├── deps.py
│       ├── file_utils.py
│       ├── ignore_utils.py
│       ├── importers
│       │   ├── __init__.py
│       │   ├── base.py
│       │   ├── chatgpt_importer.py
│       │   ├── claude_conversations_importer.py
│       │   ├── claude_projects_importer.py
│       │   ├── memory_json_importer.py
│       │   └── utils.py
│       ├── markdown
│       │   ├── __init__.py
│       │   ├── entity_parser.py
│       │   ├── markdown_processor.py
│       │   ├── plugins.py
│       │   ├── schemas.py
│       │   └── utils.py
│       ├── mcp
│       │   ├── __init__.py
│       │   ├── async_client.py
│       │   ├── project_context.py
│       │   ├── prompts
│       │   │   ├── __init__.py
│       │   │   ├── ai_assistant_guide.py
│       │   │   ├── continue_conversation.py
│       │   │   ├── recent_activity.py
│       │   │   ├── search.py
│       │   │   └── utils.py
│       │   ├── resources
│       │   │   ├── ai_assistant_guide.md
│       │   │   └── project_info.py
│       │   ├── server.py
│       │   └── tools
│       │       ├── __init__.py
│       │       ├── build_context.py
│       │       ├── canvas.py
│       │       ├── chatgpt_tools.py
│       │       ├── delete_note.py
│       │       ├── edit_note.py
│       │       ├── list_directory.py
│       │       ├── move_note.py
│       │       ├── project_management.py
│       │       ├── read_content.py
│       │       ├── read_note.py
│       │       ├── recent_activity.py
│       │       ├── search.py
│       │       ├── utils.py
│       │       ├── view_note.py
│       │       └── write_note.py
│       ├── models
│       │   ├── __init__.py
│       │   ├── base.py
│       │   ├── knowledge.py
│       │   ├── project.py
│       │   └── search.py
│       ├── repository
│       │   ├── __init__.py
│       │   ├── entity_repository.py
│       │   ├── observation_repository.py
│       │   ├── project_info_repository.py
│       │   ├── project_repository.py
│       │   ├── relation_repository.py
│       │   ├── repository.py
│       │   └── search_repository.py
│       ├── schemas
│       │   ├── __init__.py
│       │   ├── base.py
│       │   ├── cloud.py
│       │   ├── delete.py
│       │   ├── directory.py
│       │   ├── importer.py
│       │   ├── memory.py
│       │   ├── project_info.py
│       │   ├── prompt.py
│       │   ├── request.py
│       │   ├── response.py
│       │   ├── search.py
│       │   └── sync_report.py
│       ├── services
│       │   ├── __init__.py
│       │   ├── context_service.py
│       │   ├── directory_service.py
│       │   ├── entity_service.py
│       │   ├── exceptions.py
│       │   ├── file_service.py
│       │   ├── initialization.py
│       │   ├── link_resolver.py
│       │   ├── project_service.py
│       │   ├── search_service.py
│       │   └── service.py
│       ├── sync
│       │   ├── __init__.py
│       │   ├── background_sync.py
│       │   ├── sync_service.py
│       │   └── watch_service.py
│       ├── templates
│       │   └── prompts
│       │       ├── continue_conversation.hbs
│       │       └── search.hbs
│       └── utils.py
├── test-int
│   ├── BENCHMARKS.md
│   ├── cli
│   │   ├── test_project_commands_integration.py
│   │   └── test_version_integration.py
│   ├── conftest.py
│   ├── mcp
│   │   ├── test_build_context_underscore.py
│   │   ├── test_build_context_validation.py
│   │   ├── test_chatgpt_tools_integration.py
│   │   ├── test_default_project_mode_integration.py
│   │   ├── test_delete_note_integration.py
│   │   ├── test_edit_note_integration.py
│   │   ├── test_list_directory_integration.py
│   │   ├── test_move_note_integration.py
│   │   ├── test_project_management_integration.py
│   │   ├── test_project_state_sync_integration.py
│   │   ├── test_read_content_integration.py
│   │   ├── test_read_note_integration.py
│   │   ├── test_search_integration.py
│   │   ├── test_single_project_mcp_integration.py
│   │   └── test_write_note_integration.py
│   ├── test_db_wal_mode.py
│   ├── test_disable_permalinks_integration.py
│   └── test_sync_performance_benchmark.py
├── tests
│   ├── __init__.py
│   ├── api
│   │   ├── conftest.py
│   │   ├── test_async_client.py
│   │   ├── test_continue_conversation_template.py
│   │   ├── test_directory_router.py
│   │   ├── test_importer_router.py
│   │   ├── test_knowledge_router.py
│   │   ├── test_management_router.py
│   │   ├── test_memory_router.py
│   │   ├── test_project_router_operations.py
│   │   ├── test_project_router.py
│   │   ├── test_prompt_router.py
│   │   ├── test_relation_background_resolution.py
│   │   ├── test_resource_router.py
│   │   ├── test_search_router.py
│   │   ├── test_search_template.py
│   │   ├── test_template_loader_helpers.py
│   │   └── test_template_loader.py
│   ├── cli
│   │   ├── conftest.py
│   │   ├── test_cli_tools.py
│   │   ├── test_cloud_authentication.py
│   │   ├── test_ignore_utils.py
│   │   ├── test_import_chatgpt.py
│   │   ├── test_import_claude_conversations.py
│   │   ├── test_import_claude_projects.py
│   │   ├── test_import_memory_json.py
│   │   ├── test_project_add_with_local_path.py
│   │   └── test_upload.py
│   ├── conftest.py
│   ├── db
│   │   └── test_issue_254_foreign_key_constraints.py
│   ├── importers
│   │   ├── test_importer_base.py
│   │   └── test_importer_utils.py
│   ├── markdown
│   │   ├── __init__.py
│   │   ├── test_date_frontmatter_parsing.py
│   │   ├── test_entity_parser_error_handling.py
│   │   ├── test_entity_parser.py
│   │   ├── test_markdown_plugins.py
│   │   ├── test_markdown_processor.py
│   │   ├── test_observation_edge_cases.py
│   │   ├── test_parser_edge_cases.py
│   │   ├── test_relation_edge_cases.py
│   │   └── test_task_detection.py
│   ├── mcp
│   │   ├── conftest.py
│   │   ├── test_obsidian_yaml_formatting.py
│   │   ├── test_permalink_collision_file_overwrite.py
│   │   ├── test_prompts.py
│   │   ├── test_resources.py
│   │   ├── test_tool_build_context.py
│   │   ├── test_tool_canvas.py
│   │   ├── test_tool_delete_note.py
│   │   ├── test_tool_edit_note.py
│   │   ├── test_tool_list_directory.py
│   │   ├── test_tool_move_note.py
│   │   ├── test_tool_read_content.py
│   │   ├── test_tool_read_note.py
│   │   ├── test_tool_recent_activity.py
│   │   ├── test_tool_resource.py
│   │   ├── test_tool_search.py
│   │   ├── test_tool_utils.py
│   │   ├── test_tool_view_note.py
│   │   ├── test_tool_write_note.py
│   │   └── tools
│   │       └── test_chatgpt_tools.py
│   ├── Non-MarkdownFileSupport.pdf
│   ├── repository
│   │   ├── test_entity_repository_upsert.py
│   │   ├── test_entity_repository.py
│   │   ├── test_entity_upsert_issue_187.py
│   │   ├── test_observation_repository.py
│   │   ├── test_project_info_repository.py
│   │   ├── test_project_repository.py
│   │   ├── test_relation_repository.py
│   │   ├── test_repository.py
│   │   ├── test_search_repository_edit_bug_fix.py
│   │   └── test_search_repository.py
│   ├── schemas
│   │   ├── test_base_timeframe_minimum.py
│   │   ├── test_memory_serialization.py
│   │   ├── test_memory_url_validation.py
│   │   ├── test_memory_url.py
│   │   ├── test_schemas.py
│   │   └── test_search.py
│   ├── Screenshot.png
│   ├── services
│   │   ├── test_context_service.py
│   │   ├── test_directory_service.py
│   │   ├── test_entity_service_disable_permalinks.py
│   │   ├── test_entity_service.py
│   │   ├── test_file_service.py
│   │   ├── test_initialization.py
│   │   ├── test_link_resolver.py
│   │   ├── test_project_removal_bug.py
│   │   ├── test_project_service_operations.py
│   │   ├── test_project_service.py
│   │   └── test_search_service.py
│   ├── sync
│   │   ├── test_character_conflicts.py
│   │   ├── test_sync_service_incremental.py
│   │   ├── test_sync_service.py
│   │   ├── test_sync_wikilink_issue.py
│   │   ├── test_tmp_files.py
│   │   ├── test_watch_service_edge_cases.py
│   │   ├── test_watch_service_reload.py
│   │   └── test_watch_service.py
│   ├── test_config.py
│   ├── test_db_migration_deduplication.py
│   ├── test_deps.py
│   ├── test_production_cascade_delete.py
│   ├── test_rclone_commands.py
│   └── utils
│       ├── test_file_utils.py
│       ├── test_frontmatter_obsidian_compatible.py
│       ├── test_parse_tags.py
│       ├── test_permalink_formatting.py
│       ├── test_utf8_handling.py
│       └── test_validate_project_path.py
├── uv.lock
├── v0.15.0-RELEASE-DOCS.md
└── v15-docs
    ├── api-performance.md
    ├── background-relations.md
    ├── basic-memory-home.md
    ├── bug-fixes.md
    ├── chatgpt-integration.md
    ├── cloud-authentication.md
    ├── cloud-bisync.md
    ├── cloud-mode-usage.md
    ├── cloud-mount.md
    ├── default-project-mode.md
    ├── env-file-removal.md
    ├── env-var-overrides.md
    ├── explicit-project-parameter.md
    ├── gitignore-integration.md
    ├── project-root-env-var.md
    ├── README.md
    └── sqlite-performance.md
```

# Files

--------------------------------------------------------------------------------
/tests/cli/test_upload.py:
--------------------------------------------------------------------------------

```python
  1 | """Tests for upload module."""
  2 | 
  3 | from unittest.mock import AsyncMock, Mock, patch
  4 | 
  5 | import httpx
  6 | import pytest
  7 | 
  8 | from basic_memory.cli.commands.cloud.upload import _get_files_to_upload, upload_path
  9 | 
 10 | 
 11 | class TestGetFilesToUpload:
 12 |     """Tests for _get_files_to_upload()."""
 13 | 
 14 |     def test_collects_files_from_directory(self, tmp_path):
 15 |         """Test collecting files from a directory."""
 16 |         # Create test directory structure
 17 |         (tmp_path / "file1.txt").write_text("content1")
 18 |         (tmp_path / "file2.md").write_text("content2")
 19 |         (tmp_path / "subdir").mkdir()
 20 |         (tmp_path / "subdir" / "file3.py").write_text("content3")
 21 | 
 22 |         # Call with real ignore utils (no mocking)
 23 |         result = _get_files_to_upload(tmp_path, verbose=False, use_gitignore=True)
 24 | 
 25 |         # Should find all 3 files
 26 |         assert len(result) == 3
 27 | 
 28 |         # Extract just the relative paths for easier assertion
 29 |         relative_paths = [rel_path for _, rel_path in result]
 30 |         assert "file1.txt" in relative_paths
 31 |         assert "file2.md" in relative_paths
 32 |         assert "subdir/file3.py" in relative_paths
 33 | 
 34 |     def test_respects_gitignore_patterns(self, tmp_path):
 35 |         """Test that gitignore patterns are respected."""
 36 |         # Create test files
 37 |         (tmp_path / "keep.txt").write_text("keep")
 38 |         (tmp_path / "ignore.pyc").write_text("ignore")
 39 | 
 40 |         # Create .gitignore file
 41 |         gitignore_file = tmp_path / ".gitignore"
 42 |         gitignore_file.write_text("*.pyc\n")
 43 | 
 44 |         result = _get_files_to_upload(tmp_path)
 45 | 
 46 |         # Should only find keep.txt (not .pyc or .gitignore itself)
 47 |         relative_paths = [rel_path for _, rel_path in result]
 48 |         assert "keep.txt" in relative_paths
 49 |         assert "ignore.pyc" not in relative_paths
 50 | 
 51 |     def test_handles_empty_directory(self, tmp_path):
 52 |         """Test handling of empty directory."""
 53 |         empty_dir = tmp_path / "empty"
 54 |         empty_dir.mkdir()
 55 | 
 56 |         result = _get_files_to_upload(empty_dir)
 57 | 
 58 |         assert result == []
 59 | 
 60 |     def test_converts_windows_paths_to_forward_slashes(self, tmp_path):
 61 |         """Test that Windows backslashes are converted to forward slashes."""
 62 |         # Create nested structure
 63 |         (tmp_path / "dir1").mkdir()
 64 |         (tmp_path / "dir1" / "dir2").mkdir()
 65 |         (tmp_path / "dir1" / "dir2" / "file.txt").write_text("content")
 66 | 
 67 |         result = _get_files_to_upload(tmp_path)
 68 | 
 69 |         # Remote path should use forward slashes
 70 |         _, remote_path = result[0]
 71 |         assert "\\" not in remote_path  # No backslashes
 72 |         assert "dir1/dir2/file.txt" == remote_path
 73 | 
 74 | 
 75 | class TestUploadPath:
 76 |     """Tests for upload_path()."""
 77 | 
 78 |     @pytest.mark.asyncio
 79 |     async def test_uploads_single_file(self, tmp_path):
 80 |         """Test uploading a single file."""
 81 |         test_file = tmp_path / "test.txt"
 82 |         test_file.write_text("test content")
 83 | 
 84 |         # Mock the client and HTTP response
 85 |         mock_client = AsyncMock()
 86 |         mock_response = Mock()
 87 |         mock_response.raise_for_status = Mock()
 88 | 
 89 |         with patch("basic_memory.cli.commands.cloud.upload.get_client") as mock_get_client:
 90 |             with patch("basic_memory.cli.commands.cloud.upload.call_put") as mock_put:
 91 |                 with patch("aiofiles.open", create=True) as mock_aiofiles_open:
 92 |                     # Setup mocks
 93 |                     mock_get_client.return_value.__aenter__.return_value = mock_client
 94 |                     mock_get_client.return_value.__aexit__.return_value = None
 95 |                     mock_put.return_value = mock_response
 96 | 
 97 |                     # Mock file reading
 98 |                     mock_file = AsyncMock()
 99 |                     mock_file.read.return_value = b"test content"
100 |                     mock_aiofiles_open.return_value.__aenter__.return_value = mock_file
101 | 
102 |                     result = await upload_path(test_file, "test-project")
103 | 
104 |         # Verify success
105 |         assert result is True
106 | 
107 |         # Verify PUT was called with correct path
108 |         mock_put.assert_called_once()
109 |         call_args = mock_put.call_args
110 |         assert call_args[0][0] == mock_client
111 |         assert call_args[0][1] == "/webdav/test-project/test.txt"
112 |         assert call_args[1]["content"] == b"test content"
113 | 
114 |     @pytest.mark.asyncio
115 |     async def test_uploads_directory(self, tmp_path):
116 |         """Test uploading a directory with multiple files."""
117 |         # Create test files
118 |         (tmp_path / "file1.txt").write_text("content1")
119 |         (tmp_path / "file2.txt").write_text("content2")
120 | 
121 |         mock_client = AsyncMock()
122 |         mock_response = Mock()
123 |         mock_response.raise_for_status = Mock()
124 | 
125 |         with patch("basic_memory.cli.commands.cloud.upload.get_client") as mock_get_client:
126 |             with patch("basic_memory.cli.commands.cloud.upload.call_put") as mock_put:
127 |                 with patch(
128 |                     "basic_memory.cli.commands.cloud.upload._get_files_to_upload"
129 |                 ) as mock_get_files:
130 |                     with patch("aiofiles.open", create=True) as mock_aiofiles_open:
131 |                         # Setup mocks
132 |                         mock_get_client.return_value.__aenter__.return_value = mock_client
133 |                         mock_get_client.return_value.__aexit__.return_value = None
134 |                         mock_put.return_value = mock_response
135 | 
136 |                         # Mock file listing
137 |                         mock_get_files.return_value = [
138 |                             (tmp_path / "file1.txt", "file1.txt"),
139 |                             (tmp_path / "file2.txt", "file2.txt"),
140 |                         ]
141 | 
142 |                         # Mock file reading
143 |                         mock_file = AsyncMock()
144 |                         mock_file.read.side_effect = [b"content1", b"content2"]
145 |                         mock_aiofiles_open.return_value.__aenter__.return_value = mock_file
146 | 
147 |                         result = await upload_path(tmp_path, "test-project")
148 | 
149 |         # Verify success
150 |         assert result is True
151 | 
152 |         # Verify PUT was called twice
153 |         assert mock_put.call_count == 2
154 | 
155 |     @pytest.mark.asyncio
156 |     async def test_handles_nonexistent_path(self, tmp_path):
157 |         """Test handling of nonexistent path."""
158 |         nonexistent = tmp_path / "does-not-exist"
159 | 
160 |         result = await upload_path(nonexistent, "test-project")
161 | 
162 |         # Should return False
163 |         assert result is False
164 | 
165 |     @pytest.mark.asyncio
166 |     async def test_handles_http_error(self, tmp_path):
167 |         """Test handling of HTTP errors during upload."""
168 |         test_file = tmp_path / "test.txt"
169 |         test_file.write_text("test content")
170 | 
171 |         mock_client = AsyncMock()
172 |         mock_response = Mock()
173 |         mock_response.status_code = 403
174 |         mock_response.text = "Forbidden"
175 |         mock_response.raise_for_status.side_effect = httpx.HTTPStatusError(
176 |             "Forbidden", request=Mock(), response=mock_response
177 |         )
178 | 
179 |         with patch("basic_memory.cli.commands.cloud.upload.get_client") as mock_get_client:
180 |             with patch("basic_memory.cli.commands.cloud.upload.call_put") as mock_put:
181 |                 with patch("aiofiles.open", create=True) as mock_aiofiles_open:
182 |                     # Setup mocks
183 |                     mock_get_client.return_value.__aenter__.return_value = mock_client
184 |                     mock_get_client.return_value.__aexit__.return_value = None
185 |                     mock_put.return_value = mock_response
186 | 
187 |                     # Mock file reading
188 |                     mock_file = AsyncMock()
189 |                     mock_file.read.return_value = b"test content"
190 |                     mock_aiofiles_open.return_value.__aenter__.return_value = mock_file
191 | 
192 |                     result = await upload_path(test_file, "test-project")
193 | 
194 |         # Should return False on error
195 |         assert result is False
196 | 
197 |     @pytest.mark.asyncio
198 |     async def test_handles_empty_directory(self, tmp_path):
199 |         """Test uploading an empty directory."""
200 |         empty_dir = tmp_path / "empty"
201 |         empty_dir.mkdir()
202 | 
203 |         with patch("basic_memory.cli.commands.cloud.upload._get_files_to_upload") as mock_get_files:
204 |             mock_get_files.return_value = []
205 | 
206 |             result = await upload_path(empty_dir, "test-project")
207 | 
208 |         # Should return True (no-op success)
209 |         assert result is True
210 | 
211 |     @pytest.mark.asyncio
212 |     async def test_formats_file_size_bytes(self, tmp_path, capsys):
213 |         """Test file size formatting for small files (bytes)."""
214 |         test_file = tmp_path / "small.txt"
215 |         test_file.write_text("hi")  # 2 bytes
216 | 
217 |         mock_client = AsyncMock()
218 |         mock_response = Mock()
219 |         mock_response.raise_for_status = Mock()
220 | 
221 |         with patch("basic_memory.cli.commands.cloud.upload.get_client") as mock_get_client:
222 |             with patch("basic_memory.cli.commands.cloud.upload.call_put") as mock_put:
223 |                 with patch("aiofiles.open", create=True) as mock_aiofiles_open:
224 |                     mock_get_client.return_value.__aenter__.return_value = mock_client
225 |                     mock_get_client.return_value.__aexit__.return_value = None
226 |                     mock_put.return_value = mock_response
227 | 
228 |                     mock_file = AsyncMock()
229 |                     mock_file.read.return_value = b"hi"
230 |                     mock_aiofiles_open.return_value.__aenter__.return_value = mock_file
231 | 
232 |                     await upload_path(test_file, "test-project")
233 | 
234 |         # Check output contains "bytes"
235 |         captured = capsys.readouterr()
236 |         assert "bytes" in captured.out
237 | 
238 |     @pytest.mark.asyncio
239 |     async def test_formats_file_size_kilobytes(self, tmp_path, capsys):
240 |         """Test file size formatting for medium files (KB)."""
241 |         test_file = tmp_path / "medium.txt"
242 |         # Create file with 2KB of content
243 |         test_file.write_text("x" * 2048)
244 | 
245 |         mock_client = AsyncMock()
246 |         mock_response = Mock()
247 |         mock_response.raise_for_status = Mock()
248 | 
249 |         with patch("basic_memory.cli.commands.cloud.upload.get_client") as mock_get_client:
250 |             with patch("basic_memory.cli.commands.cloud.upload.call_put") as mock_put:
251 |                 with patch("aiofiles.open", create=True) as mock_aiofiles_open:
252 |                     mock_get_client.return_value.__aenter__.return_value = mock_client
253 |                     mock_get_client.return_value.__aexit__.return_value = None
254 |                     mock_put.return_value = mock_response
255 | 
256 |                     mock_file = AsyncMock()
257 |                     mock_file.read.return_value = b"x" * 2048
258 |                     mock_aiofiles_open.return_value.__aenter__.return_value = mock_file
259 | 
260 |                     await upload_path(test_file, "test-project")
261 | 
262 |         # Check output contains "KB"
263 |         captured = capsys.readouterr()
264 |         assert "KB" in captured.out
265 | 
266 |     @pytest.mark.asyncio
267 |     async def test_formats_file_size_megabytes(self, tmp_path, capsys):
268 |         """Test file size formatting for large files (MB)."""
269 |         test_file = tmp_path / "large.txt"
270 |         # Create file with 2MB of content
271 |         test_file.write_text("x" * (2 * 1024 * 1024))
272 | 
273 |         mock_client = AsyncMock()
274 |         mock_response = Mock()
275 |         mock_response.raise_for_status = Mock()
276 | 
277 |         with patch("basic_memory.cli.commands.cloud.upload.get_client") as mock_get_client:
278 |             with patch("basic_memory.cli.commands.cloud.upload.call_put") as mock_put:
279 |                 with patch("aiofiles.open", create=True) as mock_aiofiles_open:
280 |                     mock_get_client.return_value.__aenter__.return_value = mock_client
281 |                     mock_get_client.return_value.__aexit__.return_value = None
282 |                     mock_put.return_value = mock_response
283 | 
284 |                     mock_file = AsyncMock()
285 |                     mock_file.read.return_value = b"x" * (2 * 1024 * 1024)
286 |                     mock_aiofiles_open.return_value.__aenter__.return_value = mock_file
287 | 
288 |                     await upload_path(test_file, "test-project")
289 | 
290 |         # Check output contains "MB"
291 |         captured = capsys.readouterr()
292 |         assert "MB" in captured.out
293 | 
294 |     @pytest.mark.asyncio
295 |     async def test_builds_correct_webdav_path(self, tmp_path):
296 |         """Test that WebDAV path is correctly constructed."""
297 |         # Create nested structure
298 |         (tmp_path / "subdir").mkdir()
299 |         test_file = tmp_path / "subdir" / "file.txt"
300 |         test_file.write_text("content")
301 | 
302 |         mock_client = AsyncMock()
303 |         mock_response = Mock()
304 |         mock_response.raise_for_status = Mock()
305 | 
306 |         with patch("basic_memory.cli.commands.cloud.upload.get_client") as mock_get_client:
307 |             with patch("basic_memory.cli.commands.cloud.upload.call_put") as mock_put:
308 |                 with patch(
309 |                     "basic_memory.cli.commands.cloud.upload._get_files_to_upload"
310 |                 ) as mock_get_files:
311 |                     with patch("aiofiles.open", create=True) as mock_aiofiles_open:
312 |                         mock_get_client.return_value.__aenter__.return_value = mock_client
313 |                         mock_get_client.return_value.__aexit__.return_value = None
314 |                         mock_put.return_value = mock_response
315 | 
316 |                         # Mock file listing with relative path
317 |                         mock_get_files.return_value = [(test_file, "subdir/file.txt")]
318 | 
319 |                         mock_file = AsyncMock()
320 |                         mock_file.read.return_value = b"content"
321 |                         mock_aiofiles_open.return_value.__aenter__.return_value = mock_file
322 | 
323 |                         await upload_path(tmp_path, "my-project")
324 | 
325 |         # Verify WebDAV path format: /webdav/{project_name}/{relative_path}
326 |         mock_put.assert_called_once()
327 |         call_args = mock_put.call_args
328 |         assert call_args[0][1] == "/webdav/my-project/subdir/file.txt"
329 | 
330 |     def test_no_gitignore_skips_gitignore_patterns(self, tmp_path):
331 |         """Test that --no-gitignore flag skips .gitignore patterns."""
332 |         # Create test files
333 |         (tmp_path / "keep.txt").write_text("keep")
334 |         (tmp_path / "secret.bak").write_text("secret")  # Use .bak instead of .pyc
335 | 
336 |         # Create .gitignore file that ignores .bak files
337 |         gitignore_file = tmp_path / ".gitignore"
338 |         gitignore_file.write_text("*.bak\n")
339 | 
340 |         # With use_gitignore=False, should include .bak files
341 |         result = _get_files_to_upload(tmp_path, verbose=False, use_gitignore=False)
342 | 
343 |         # Extract relative paths
344 |         relative_paths = [rel_path for _, rel_path in result]
345 | 
346 |         # Both files should be included when gitignore is disabled
347 |         assert "keep.txt" in relative_paths
348 |         assert "secret.bak" in relative_paths
349 | 
350 |     def test_no_gitignore_still_respects_bmignore(self, tmp_path):
351 |         """Test that --no-gitignore still respects .bmignore patterns."""
352 |         # Create test files
353 |         (tmp_path / "keep.txt").write_text("keep")
354 |         (tmp_path / ".hidden").write_text(
355 |             "hidden"
356 |         )  # Should be ignored by .bmignore default pattern
357 | 
358 |         # Create .gitignore that would allow .hidden
359 |         gitignore_file = tmp_path / ".gitignore"
360 |         gitignore_file.write_text("# Allow all\n")
361 | 
362 |         # With use_gitignore=False, should still filter hidden files via .bmignore
363 |         result = _get_files_to_upload(tmp_path, verbose=False, use_gitignore=False)
364 | 
365 |         # Extract relative paths
366 |         relative_paths = [rel_path for _, rel_path in result]
367 | 
368 |         # keep.txt should be included, .hidden should be filtered by .bmignore
369 |         assert "keep.txt" in relative_paths
370 |         assert ".hidden" not in relative_paths
371 | 
372 |     def test_verbose_shows_filtering_info(self, tmp_path, capsys):
373 |         """Test that verbose mode shows filtering information."""
374 |         # Create test files
375 |         (tmp_path / "keep.txt").write_text("keep")
376 |         (tmp_path / "ignore.pyc").write_text("ignore")
377 | 
378 |         # Create .gitignore
379 |         gitignore_file = tmp_path / ".gitignore"
380 |         gitignore_file.write_text("*.pyc\n")
381 | 
382 |         # Run with verbose=True
383 |         _get_files_to_upload(tmp_path, verbose=True, use_gitignore=True)
384 | 
385 |         # Capture output
386 |         captured = capsys.readouterr()
387 | 
388 |         # Should show scanning information
389 |         assert "Scanning directory:" in captured.out
390 |         assert "Using .bmignore: Yes" in captured.out
391 |         assert "Using .gitignore:" in captured.out
392 |         assert "Ignore patterns loaded:" in captured.out
393 | 
394 |         # Should show file status
395 |         assert "[INCLUDE]" in captured.out or "[IGNORED]" in captured.out
396 | 
397 |         # Should show summary
398 |         assert "Summary:" in captured.out
399 |         assert "Files to upload:" in captured.out
400 |         assert "Files ignored:" in captured.out
401 | 
402 |     def test_wildcard_gitignore_filters_all_files(self, tmp_path):
403 |         """Test that a wildcard * in .gitignore filters all files."""
404 |         # Create test files
405 |         (tmp_path / "file1.txt").write_text("content1")
406 |         (tmp_path / "file2.md").write_text("content2")
407 | 
408 |         # Create .gitignore with wildcard
409 |         gitignore_file = tmp_path / ".gitignore"
410 |         gitignore_file.write_text("*\n")
411 | 
412 |         # Should filter all files
413 |         result = _get_files_to_upload(tmp_path, verbose=False, use_gitignore=True)
414 |         assert len(result) == 0
415 | 
416 |         # With use_gitignore=False, should include files
417 |         result = _get_files_to_upload(tmp_path, verbose=False, use_gitignore=False)
418 |         assert len(result) == 2
419 | 
420 |     @pytest.mark.asyncio
421 |     async def test_dry_run_shows_files_without_uploading(self, tmp_path, capsys):
422 |         """Test that --dry-run shows what would be uploaded without uploading."""
423 |         # Create test files
424 |         (tmp_path / "file1.txt").write_text("content1")
425 |         (tmp_path / "file2.txt").write_text("content2")
426 | 
427 |         # Don't mock anything - we want to verify no actual upload happens
428 |         result = await upload_path(tmp_path, "test-project", dry_run=True)
429 | 
430 |         # Should return success
431 |         assert result is True
432 | 
433 |         # Check output shows dry run info
434 |         captured = capsys.readouterr()
435 |         assert "Found 2 file(s) to upload" in captured.out
436 |         assert "Files that would be uploaded:" in captured.out
437 |         assert "file1.txt" in captured.out
438 |         assert "file2.txt" in captured.out
439 |         assert "Total:" in captured.out
440 | 
441 |     @pytest.mark.asyncio
442 |     async def test_dry_run_with_verbose(self, tmp_path, capsys):
443 |         """Test that --dry-run works with --verbose."""
444 |         # Create test files
445 |         (tmp_path / "keep.txt").write_text("keep")
446 |         (tmp_path / "ignore.pyc").write_text("ignore")
447 | 
448 |         # Create .gitignore
449 |         gitignore_file = tmp_path / ".gitignore"
450 |         gitignore_file.write_text("*.pyc\n")
451 | 
452 |         result = await upload_path(tmp_path, "test-project", verbose=True, dry_run=True)
453 | 
454 |         # Should return success
455 |         assert result is True
456 | 
457 |         # Check output shows both verbose and dry run info
458 |         captured = capsys.readouterr()
459 |         assert "Scanning directory:" in captured.out
460 |         assert "[INCLUDE] keep.txt" in captured.out
461 |         assert "[IGNORED] ignore.pyc" in captured.out
462 |         assert "Files that would be uploaded:" in captured.out
463 |         assert "keep.txt" in captured.out
464 | 
```
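
Taken together, these tests pin down a small contract for the upload helpers: `_get_files_to_upload` returns `(absolute_path, relative_path)` tuples whose relative paths use forward slashes, and `upload_path` PUTs each file to `/webdav/{project}/{relative_path}`. A minimal sketch of that contract — helper names other than the two imported above are illustrative, not the module's actual internals:

```python
from pathlib import Path

# Hedged sketch of the behavior asserted by the tests above; to_remote_path,
# webdav_path, and format_size are illustrative names, not module internals.

def to_remote_path(local_root: Path, file_path: Path) -> str:
    """Relative path with forward slashes, even on Windows."""
    return file_path.relative_to(local_root).as_posix()

def webdav_path(project: str, remote_path: str) -> str:
    """PUT target asserted by the tests."""
    return f"/webdav/{project}/{remote_path}"

def format_size(num_bytes: int) -> str:
    """Bytes/KB/MB formatting implied by the size-formatting tests."""
    if num_bytes < 1024:
        return f"{num_bytes} bytes"
    if num_bytes < 1024 * 1024:
        return f"{num_bytes / 1024:.1f} KB"
    return f"{num_bytes / (1024 * 1024):.1f} MB"

assert webdav_path("my-project", "subdir/file.txt") == "/webdav/my-project/subdir/file.txt"
assert format_size(2048) == "2.0 KB"
```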

--------------------------------------------------------------------------------
/docs/cloud-cli.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Basic Memory Cloud CLI Guide
  2 | 
  3 | The Basic Memory Cloud CLI provides seamless integration between local and cloud knowledge bases using **project-scoped synchronization**. Each project can optionally sync with the cloud, giving you fine-grained control over what syncs and where.
  4 | 
  5 | ## Overview
  6 | 
  7 | The cloud CLI enables you to:
  8 | - **Toggle cloud mode** - All regular `bm` commands work with cloud when enabled
  9 | - **Project-scoped sync** - Each project independently manages its sync configuration
 10 | - **Explicit operations** - Sync only what you want, when you want
 11 | - **Bidirectional sync** - Keep local and cloud in sync with rclone bisync
 12 | - **Offline access** - Work locally, sync when ready
 13 | 
 14 | ## Prerequisites
 15 | 
 16 | Before using Basic Memory Cloud, you need:
 17 | 
 18 | - **Active Subscription**: An active Basic Memory Cloud subscription is required to access cloud features
 19 | - **Subscribe**: Visit [https://basicmemory.com/subscribe](https://basicmemory.com/subscribe) to sign up
 20 | 
 21 | If you attempt to log in without an active subscription, you'll receive a "Subscription Required" error with a link to subscribe.
 22 | 
 23 | ## Architecture: Project-Scoped Sync
 24 | 
 25 | ### The Problem
 26 | 
 27 | **Old approach (SPEC-8):** All projects lived in a single `~/basic-memory-cloud-sync/` directory. This caused:
 28 | - ❌ Directory conflicts between mount and bisync
 29 | - ❌ Auto-discovery creating phantom projects
 30 | - ❌ Confusion about what syncs and when
 31 | - ❌ All-or-nothing sync (couldn't sync just one project)
 32 | 
 33 | **New approach (SPEC-20):** Each project independently configures sync.
 34 | 
 35 | ### How It Works
 36 | 
 37 | **Projects can exist in three states:**
 38 | 
 39 | 1. **Cloud-only** - Project exists on cloud, no local copy
 40 | 2. **Cloud + Local (synced)** - Project has a local working directory that syncs
 41 | 3. **Local-only** - Project exists locally (when cloud mode is disabled)
 42 | 
 43 | **Example:**
 44 | 
 45 | ```bash
 46 | # You have 3 projects on cloud:
 47 | # - research: wants local sync at ~/Documents/research
 48 | # - work: wants local sync at ~/work-notes
 49 | # - temp: cloud-only, no local sync needed
 50 | 
 51 | bm project add research --local-path ~/Documents/research
 52 | bm project add work --local-path ~/work-notes
 53 | bm project add temp  # No local sync
 54 | 
 55 | # Now you can sync individually (after initial --resync):
 56 | bm project bisync --name research
 57 | bm project bisync --name work
 58 | # temp stays cloud-only
 59 | ```
 60 | 
 61 | **What happens under the covers:**
 62 | - Config stores `cloud_projects` dict mapping project names to local paths
 63 | - Each project gets its own bisync state in `~/.basic-memory/bisync-state/{project}/`
 64 | - Rclone syncs using single remote: `basic-memory-cloud`
 65 | - Projects can live anywhere on your filesystem, not forced into a single sync directory
 66 | 
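A quick way to inspect this state is to read the config file directly. A minimal sketch, assuming the field names described above (the authoritative schema lives in `src/basic_memory/config.py`):

```python
import json
from pathlib import Path

# Hedged sketch: field names follow the description above and may differ
# from the real schema in src/basic_memory/config.py.
config = json.loads((Path.home() / ".basic-memory" / "config.json").read_text())

for name, entry in config.get("cloud_projects", {}).items():
    print(f"{name}: local_path={entry.get('local_path')}, last_sync={entry.get('last_sync')}")
```
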
 67 | ## Quick Start
 68 | 
 69 | ### 1. Enable Cloud Mode
 70 | 
 71 | Authenticate and enable cloud mode:
 72 | 
 73 | ```bash
 74 | bm cloud login
 75 | ```
 76 | 
 77 | **What this does:**
 78 | 1. Opens browser to Basic Memory Cloud authentication page
 79 | 2. Stores authentication token in `~/.basic-memory/auth/token`
 80 | 3. **Enables cloud mode** - all CLI commands now work against cloud
 81 | 4. Validates your subscription status
 82 | 
 83 | **Result:** All `bm project` and `bm tools` commands now work with cloud.
 84 | 
 85 | ### 2. Set Up Sync
 86 | 
 87 | Install rclone and configure credentials:
 88 | 
 89 | ```bash
 90 | bm cloud setup
 91 | ```
 92 | 
 93 | **What this does:**
 94 | 1. Installs rclone automatically (if needed)
 95 | 2. Fetches your tenant information from cloud
 96 | 3. Generates scoped S3 credentials for sync
 97 | 4. Configures single rclone remote: `basic-memory-cloud`
 98 | 
 99 | **Result:** You're ready to sync projects. No sync directories created yet - those come with project setup.
100 | 
101 | ### 3. Add Projects with Sync
102 | 
103 | Create projects with optional local sync paths:
104 | 
105 | ```bash
106 | # Create cloud project without local sync
107 | bm project add research
108 | 
109 | # Create cloud project WITH local sync
110 | bm project add research --local-path ~/Documents/research
111 | 
112 | # Or configure sync for existing project
113 | bm project sync-setup research ~/Documents/research
114 | ```
115 | 
116 | **What happens under the covers:**
117 | 
118 | When you add a project with `--local-path`:
119 | 1. Project created on cloud at `/app/data/research`
120 | 2. Local path stored in config: `cloud_projects.research.local_path = "~/Documents/research"`
121 | 3. Local directory created if it doesn't exist
122 | 4. Bisync state directory created at `~/.basic-memory/bisync-state/research/`
123 | 
124 | **Result:** Project is ready to sync, but no files synced yet.
125 | 
126 | ### 4. Sync Your Project
127 | 
128 | Establish the initial sync baseline. **Best practice:** Always preview with `--dry-run` first:
129 | 
130 | ```bash
131 | # Step 1: Preview the initial sync (recommended)
132 | bm project bisync --name research --resync --dry-run
133 | 
134 | # Step 2: If all looks good, run the actual sync
135 | bm project bisync --name research --resync
136 | ```
137 | 
138 | **What happens under the covers:**
139 | 1. Rclone reads from `~/Documents/research` (local)
140 | 2. Connects to `basic-memory-cloud:bucket-name/app/data/research` (remote)
141 | 3. Creates bisync state files in `~/.basic-memory/bisync-state/research/`
142 | 4. Syncs files bidirectionally with settings:
143 |    - `conflict_resolve=newer` (most recent wins)
144 |    - `max_delete=25` (safety limit)
145 |    - Respects `.bmignore` patterns
146 | 
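For the curious, this step is roughly equivalent to the following rclone invocation. A hedged sketch only: the flag spellings come from rclone's bisync documentation, while the exact command Basic Memory builds and the filter-file path are assumptions.

```python
import subprocess
from pathlib import Path

# Hedged sketch of the underlying rclone call; the real command basic-memory
# constructs may differ. The .bmignore.rclone path is an assumption.
local_dir = Path.home() / "Documents" / "research"
filters = Path.home() / ".basic-memory" / ".bmignore.rclone"

subprocess.run(
    ["rclone", "bisync", str(local_dir),
     "basic-memory-cloud:bucket-name/app/data/research",
     "--conflict-resolve", "newer",
     "--max-delete", "25",
     "--filters-file", str(filters),
     "--resync"],
    check=True,
)
```
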
147 | **Result:** Local and cloud are in sync. Baseline established.
148 | 
149 | **Why `--resync`?** This is an rclone requirement for the first bisync run. It establishes the initial state that future syncs will compare against. After the first sync, never use `--resync` unless you need to force a new baseline.
150 | 
151 | See: https://rclone.org/bisync/#resync
152 | ```
153 | --resync
154 | This will effectively make both Path1 and Path2 filesystems contain a matching superset of all files. By default, Path2 files that do not exist in Path1 will be copied to Path1, and the process will then copy the Path1 tree to Path2.
155 | ```
156 | 
157 | ### 5. Subsequent Syncs
158 | 
159 | After the first sync, just run bisync without `--resync`:
160 | 
161 | ```bash
162 | bm project bisync --name research
163 | ```
164 | 
165 | **What happens:**
166 | 1. Rclone compares local and cloud states
167 | 2. Syncs changes in both directions
168 | 3. Auto-resolves conflicts (newer file wins)
169 | 4. Updates `last_sync` timestamp in config
170 | 
171 | **Result:** Changes flow both ways - edit locally or in cloud, both stay in sync.
172 | 
173 | ### 6. Verify Setup
174 | 
175 | Check status:
176 | 
177 | ```bash
178 | bm cloud status
179 | ```
180 | 
181 | You should see:
182 | - `Mode: Cloud (enabled)`
183 | - `Cloud instance is healthy`
184 | - Instructions for project sync commands
185 | 
186 | ## Working with Projects
187 | 
188 | ### Understanding Project Commands
189 | 
190 | **Key concept:** When cloud mode is enabled, use regular `bm project` commands (not `bm cloud project`).
191 | 
192 | ```bash
193 | # In cloud mode:
194 | bm project list              # Lists cloud projects
195 | bm project add research      # Creates cloud project
196 | 
197 | # In local mode:
198 | bm project list              # Lists local projects
199 | bm project add research ~/Documents/research  # Creates local project
200 | ```
201 | 
202 | ### Creating Projects
203 | 
204 | **Use case 1: Cloud-only project (no local sync)**
205 | 
206 | ```bash
207 | bm project add temp-notes
208 | ```
209 | 
210 | **What this does:**
211 | - Creates project on cloud at `/app/data/temp-notes`
212 | - No local directory created
213 | - No sync configuration
214 | 
215 | **Result:** Project exists on cloud, accessible via MCP tools, but no local copy.
216 | 
217 | **Use case 2: Cloud project with local sync**
218 | 
219 | ```bash
220 | bm project add research --local-path ~/Documents/research
221 | ```
222 | 
223 | **What this does:**
224 | - Creates project on cloud at `/app/data/research`
225 | - Creates local directory `~/Documents/research`
226 | - Stores sync config in `~/.basic-memory/config.json`
227 | - Prepares for bisync (but doesn't sync yet)
228 | 
229 | **Result:** Project ready to sync. Run `bm project bisync --name research --resync` to establish baseline.
230 | 
231 | **Use case 3: Add sync to existing cloud project**
232 | 
233 | ```bash
234 | # Project already exists on cloud
235 | bm project sync-setup research ~/Documents/research
236 | ```
237 | 
238 | **What this does:**
239 | - Updates existing project's sync configuration
240 | - Creates local directory
241 | - Prepares for bisync
242 | 
243 | **Result:** Existing cloud project now has local sync path. Run bisync to pull files down.
244 | 
245 | ### Listing Projects
246 | 
247 | View all projects:
248 | 
249 | ```bash
250 | bm project list
251 | ```
252 | 
253 | **What you see:**
254 | - All projects in cloud (when cloud mode enabled)
255 | - Default project marked
256 | - Project paths shown
257 | 
258 | **Future:** Will show sync status (synced/not synced, last sync time).
259 | 
260 | ## File Synchronization
261 | 
262 | ### Understanding the Sync Commands
263 | 
264 | **There are three sync-related commands:**
265 | 
266 | 1. `bm project sync` - One-way: local → cloud (make cloud match local)
267 | 2. `bm project bisync` - Two-way: local ↔ cloud (recommended)
268 | 3. `bm project check` - Verify files match (no changes)
269 | 
270 | ### One-Way Sync: Local → Cloud
271 | 
272 | **Use case:** You made changes locally and want to push to cloud (overwrite cloud).
273 | 
274 | ```bash
275 | bm project sync --name research
276 | ```
277 | 
278 | **What happens:**
279 | 1. Reads files from `~/Documents/research` (local)
280 | 2. Uses rclone sync to make cloud identical to local
281 | 3. Respects `.bmignore` patterns
282 | 4. Shows progress bar
283 | 
284 | **Result:** Cloud now matches local exactly. Any cloud-only changes are overwritten.
285 | 
286 | **When to use:**
287 | - You know local is the source of truth
288 | - You want to force cloud to match local
289 | - You don't care about cloud changes
290 | 
291 | ### Two-Way Sync: Local ↔ Cloud (Recommended)
292 | 
293 | **Use case:** You edit files both locally and in cloud UI, want both to stay in sync.
294 | 
295 | ```bash
296 | # First time - establish baseline
297 | bm project bisync --name research --resync
298 | 
299 | # Subsequent syncs
300 | bm project bisync --name research
301 | ```
302 | 
303 | **What happens:**
304 | 1. Compares local and cloud states using bisync metadata
305 | 2. Syncs changes in both directions
306 | 3. Auto-resolves conflicts (newer file wins)
307 | 4. Detects excessive deletes and fails safely (max 25 files)
308 | 
309 | **Conflict resolution example:**
310 | 
311 | ```bash
312 | # Edit locally
313 | echo "Local change" > ~/Documents/research/notes.md
314 | 
315 | # Edit same file in cloud UI
316 | # Cloud now has: "Cloud change"
317 | 
318 | # Run bisync
319 | bm project bisync --name research
320 | 
321 | # Result: Newer file wins (based on modification time)
322 | # If cloud was more recent, cloud version kept
323 | # If local was more recent, local version kept
324 | ```
325 | 
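Conceptually, `conflict_resolve=newer` reduces to a modification-time comparison. A minimal illustrative sketch (rclone performs this internally during bisync; this is not Basic Memory code):

```python
from pathlib import Path

# Illustrative only: rclone applies this rule internally when both sides
# changed the same file since the baseline.
def resolve_conflict(local_file: Path, remote_mtime: float) -> str:
    """Which side wins under conflict_resolve=newer."""
    return "local" if local_file.stat().st_mtime >= remote_mtime else "remote"
```
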
326 | **When to use:**
327 | - Default workflow for most users
328 | - You edit in multiple places
329 | - You want automatic conflict resolution
330 | 
331 | ### Verify Sync Integrity
332 | 
333 | **Use case:** Check if local and cloud match without making changes.
334 | 
335 | ```bash
336 | bm project check --name research
337 | ```
338 | 
339 | **What happens:**
340 | 1. Compares file checksums between local and cloud
341 | 2. Reports differences
342 | 3. No files transferred
343 | 
344 | **Result:** Shows which files differ. Run bisync to sync them.
345 | 
346 | ```bash
347 | # One-way check (faster)
348 | bm project check --name research --one-way
349 | ```
350 | 
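Conceptually, the check hashes files on each side and diffs the results, transferring nothing. A minimal sketch of the local half, assuming MD5 checksums; remote hashes would come from the server and are represented here as a plain dict:

```python
import hashlib
from pathlib import Path

# Hedged sketch: hash local files the way an integrity check might.
def local_hashes(root: Path) -> dict[str, str]:
    return {
        p.relative_to(root).as_posix(): hashlib.md5(p.read_bytes()).hexdigest()
        for p in root.rglob("*") if p.is_file()
    }

def changed_paths(local: dict[str, str], remote: dict[str, str]) -> list[str]:
    """Paths present on only one side, or whose checksums differ."""
    return sorted(k for k in local.keys() | remote.keys() if local.get(k) != remote.get(k))
```
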
351 | ### Preview Changes (Dry Run)
352 | 
353 | **Use case:** See what would change without actually syncing.
354 | 
355 | ```bash
356 | bm project bisync --name research --dry-run
357 | ```
358 | 
359 | **What happens:**
360 | 1. Runs bisync logic
361 | 2. Shows what would be transferred/deleted
362 | 3. No actual changes made
363 | 
364 | **Result:** Safe preview of sync operations.
365 | 
366 | ### Advanced: List Remote Files
367 | 
368 | **Use case:** See what files exist on cloud without syncing.
369 | 
370 | ```bash
371 | # List all files in project
372 | bm project ls --name research
373 | 
374 | # List files in subdirectory
375 | bm project ls --name research --path subfolder
376 | ```
377 | 
378 | **What happens:**
379 | 1. Connects to cloud via rclone
380 | 2. Lists files in remote project path
381 | 3. No files transferred
382 | 
383 | **Result:** See cloud file listing.
384 | 
385 | ## Multiple Projects
386 | 
387 | ### Syncing Multiple Projects
388 | 
389 | **Use case:** You have several projects with local sync, want to sync all at once.
390 | 
391 | ```bash
392 | # Setup multiple projects
393 | bm project add research --local-path ~/Documents/research
394 | bm project add work --local-path ~/work-notes
395 | bm project add personal --local-path ~/personal
396 | 
397 | # Establish baselines
398 | bm project bisync --name research --resync
399 | bm project bisync --name work --resync
400 | bm project bisync --name personal --resync
401 | 
402 | # Daily workflow: sync everything
403 | bm project bisync --name research
404 | bm project bisync --name work
405 | bm project bisync --name personal
406 | ```
407 | 
408 | **Future:** `--all` flag will sync all configured projects:
409 | 
410 | ```bash
411 | bm project bisync --all  # Coming soon
412 | ```
413 | 
414 | ### Mixed Usage
415 | 
416 | **Use case:** Some projects sync, some stay cloud-only.
417 | 
418 | ```bash
419 | # Projects with sync
420 | bm project add research --local-path ~/Documents/research
421 | bm project add work --local-path ~/work
422 | 
423 | # Cloud-only projects
424 | bm project add archive
425 | bm project add temp-notes
426 | 
427 | # Sync only the configured ones
428 | bm project bisync --name research
429 | bm project bisync --name work
430 | 
431 | # Archive and temp-notes stay cloud-only
432 | ```
433 | 
434 | **Result:** Fine-grained control over what syncs.
435 | 
436 | ## Disable Cloud Mode
437 | 
438 | Return to local mode:
439 | 
440 | ```bash
441 | bm cloud logout
442 | ```
443 | 
444 | **What this does:**
445 | 1. Disables cloud mode in config
446 | 2. All commands now work locally
447 | 3. Auth token remains (can re-enable with login)
448 | 
449 | **Result:** All `bm` commands work with local projects again.
450 | 
451 | ## Filter Configuration
452 | 
453 | ### Understanding .bmignore
454 | 
455 | **The problem:** You don't want to sync everything (e.g., `.git`, `node_modules`, database files).
456 | 
457 | **The solution:** `.bmignore` file with gitignore-style patterns.
458 | 
459 | **Location:** `~/.basic-memory/.bmignore`
460 | 
461 | **Default patterns:**
462 | 
463 | ```gitignore
464 | # Version control
465 | .git/**
466 | 
467 | # Python
468 | __pycache__/**
469 | *.pyc
470 | .venv/**
471 | venv/**
472 | 
473 | # Node.js
474 | node_modules/**
475 | 
476 | # Basic Memory internals
477 | memory.db/**
478 | memory.db-shm/**
479 | memory.db-wal/**
480 | config.json/**
481 | watch-status.json/**
482 | .bmignore.rclone/**
483 | 
484 | # OS files
485 | .DS_Store/**
486 | Thumbs.db/**
487 | 
488 | # Environment files
489 | .env/**
490 | .env.local/**
491 | ```
492 | 
493 | **How it works:**
494 | 1. On first sync, `.bmignore` created with defaults
495 | 2. Patterns converted to rclone filter format (`.bmignore.rclone`)
496 | 3. Rclone uses filters during sync
497 | 4. Same patterns used by all projects
498 | 
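The conversion in step 2 is simple in concept: every gitignore-style pattern becomes an rclone exclude rule. A minimal sketch (the real converter lives in `src/basic_memory/ignore_utils.py` and may differ):

```python
from pathlib import Path

# Hedged sketch of the .bmignore -> .bmignore.rclone conversion. rclone
# filter files use "- <glob>" to exclude; comments and blanks pass through.
def to_rclone_filters(bmignore: Path) -> str:
    out = []
    for raw in bmignore.read_text().splitlines():
        line = raw.strip()
        out.append(raw if not line or line.startswith("#") else f"- {line}")
    return "\n".join(out) + "\n"
```
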
499 | **Customizing:**
500 | 
501 | ```bash
502 | # Edit patterns
503 | code ~/.basic-memory/.bmignore
504 | 
505 | # Add custom patterns
506 | echo "*.tmp/**" >> ~/.basic-memory/.bmignore
507 | 
508 | # Next sync uses updated patterns
509 | bm project bisync --name research
510 | ```
511 | 
512 | ## Troubleshooting
513 | 
514 | ### Authentication Issues
515 | 
516 | **Problem:** "Authentication failed" or "Invalid token"
517 | 
518 | **Solution:** Re-authenticate:
519 | 
520 | ```bash
521 | bm cloud logout
522 | bm cloud login
523 | ```
524 | 
525 | ### Subscription Issues
526 | 
527 | **Problem:** "Subscription Required" error
528 | 
529 | **Solution:**
530 | 1. Visit subscribe URL shown in error
531 | 2. Sign up for subscription
532 | 3. Run `bm cloud login` again
533 | 
 534 | **Note:** Access is immediate once your subscription becomes active.
535 | 
536 | ### Bisync Initialization
537 | 
538 | **Problem:** "First bisync requires --resync"
539 | 
540 | **Explanation:** Bisync needs a baseline state before it can sync changes.
541 | 
542 | **Solution:**
543 | 
544 | ```bash
545 | bm project bisync --name research --resync
546 | ```
547 | 
548 | **What this does:**
549 | - Establishes initial sync state
550 | - Creates baseline in `~/.basic-memory/bisync-state/research/`
551 | - Syncs all files bidirectionally
552 | 
553 | **Result:** Future syncs work without `--resync`.
554 | 
555 | ### Empty Directory Issues
556 | 
557 | **Problem:** "Empty prior Path1 listing. Cannot sync to an empty directory"
558 | 
559 | **Explanation:** Rclone bisync doesn't work well with completely empty directories. It needs at least one file to establish a baseline.
560 | 
561 | **Solution:** Add at least one file before running `--resync`:
562 | 
563 | ```bash
564 | # Create a placeholder file
565 | echo "# Research Notes" > ~/Documents/research/README.md
566 | 
567 | # Now run bisync
568 | bm project bisync --name research --resync
569 | ```
570 | 
571 | **Why this happens:** Bisync creates listing files that track the state of each side. When both directories are completely empty, these listing files are considered invalid by rclone.
572 | 
573 | **Best practice:** Always have at least one file (like a README.md) in your project directory before setting up sync.
574 | 
575 | ### Bisync State Corruption
576 | 
577 | **Problem:** Bisync fails with errors about corrupted state or listing files
578 | 
579 | **Explanation:** Sometimes bisync state can become inconsistent (e.g., after mixing dry-run and actual runs, or after manual file operations).
580 | 
581 | **Solution:** Clear bisync state and re-establish baseline:
582 | 
583 | ```bash
584 | # Clear bisync state
585 | bm project bisync-reset research
586 | 
587 | # Re-establish baseline
588 | bm project bisync --name research --resync
589 | ```
590 | 
591 | **What this does:**
592 | - Removes all bisync metadata from `~/.basic-memory/bisync-state/research/`
593 | - Forces fresh baseline on next `--resync`
594 | - Safe operation (doesn't touch your files)
595 | 
596 | **Note:** This command also runs automatically when you remove a project to clean up state directories.
597 | 
598 | ### Too Many Deletes
599 | 
600 | **Problem:** "Error: max delete limit (25) exceeded"
601 | 
602 | **Explanation:** Bisync detected you're about to delete more than 25 files. This is a safety check to prevent accidents.
603 | 
604 | **Solution 1:** Review what you're deleting, then force resync:
605 | 
606 | ```bash
607 | # Check what would be deleted
608 | bm project bisync --name research --dry-run
609 | 
610 | # If correct, establish new baseline
611 | bm project bisync --name research --resync
612 | ```
613 | 
614 | **Solution 2:** Use one-way sync if you know local is correct:
615 | 
616 | ```bash
617 | bm project sync --name research
618 | ```
619 | 
620 | ### Project Not Configured for Sync
621 | 
622 | **Problem:** "Project research has no local_sync_path configured"
623 | 
624 | **Explanation:** Project exists on cloud but has no local sync path.
625 | 
626 | **Solution:**
627 | 
628 | ```bash
629 | bm project sync-setup research ~/Documents/research
630 | bm project bisync --name research --resync
631 | ```
632 | 
633 | ### Connection Issues
634 | 
635 | **Problem:** "Cannot connect to cloud instance"
636 | 
637 | **Solution:** Check status:
638 | 
639 | ```bash
640 | bm cloud status
641 | ```
642 | 
643 | If instance is down, wait a few minutes and retry.
644 | 
645 | ## Security
646 | 
647 | - **Authentication**: OAuth 2.1 with PKCE flow
648 | - **Tokens**: Stored securely in `~/.basic-memory/basic-memory-cloud.json`
649 | - **Transport**: All data encrypted in transit (HTTPS)
650 | - **Credentials**: Scoped S3 credentials (read-write to your tenant only)
651 | - **Isolation**: Your data isolated from other tenants
652 | - **Ignore patterns**: Sensitive files automatically excluded via `.bmignore` (example below)
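
For reference, a minimal `.bmignore` might look like this (the patterns are illustrative, assuming gitignore-style matching):

```
# .bmignore - keep secrets and bulky artifacts out of sync
.env
*.key
*.pem
node_modules/
```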
653 | 
654 | ## Command Reference
655 | 
656 | ### Cloud Mode Management
657 | 
658 | ```bash
659 | bm cloud login              # Authenticate and enable cloud mode
660 | bm cloud logout             # Disable cloud mode
661 | bm cloud status             # Check cloud mode and instance health
662 | ```
663 | 
664 | ### Setup
665 | 
666 | ```bash
667 | bm cloud setup              # Install rclone and configure credentials
668 | ```
669 | 
670 | ### Project Management
671 | 
672 | When cloud mode is enabled:
673 | 
674 | ```bash
675 | bm project list                           # List cloud projects
676 | bm project add <name>                     # Create cloud project (no sync)
677 | bm project add <name> --local-path <path> # Create with local sync
678 | bm project sync-setup <name> <path>       # Add sync to existing project
679 | bm project rm <name>                      # Delete project
680 | ```
681 | 
682 | ### File Synchronization
683 | 
684 | ```bash
685 | # One-way sync (local → cloud)
686 | bm project sync --name <project>
687 | bm project sync --name <project> --dry-run
688 | bm project sync --name <project> --verbose
689 | 
690 | # Two-way sync (local ↔ cloud) - Recommended
691 | bm project bisync --name <project>          # After first --resync
692 | bm project bisync --name <project> --resync # First time / force baseline
693 | bm project bisync --name <project> --dry-run
694 | bm project bisync --name <project> --verbose
695 | 
696 | # Integrity check
697 | bm project check --name <project>
698 | bm project check --name <project> --one-way
699 | 
700 | # List remote files
701 | bm project ls --name <project>
702 | bm project ls --name <project> --path <subpath>
703 | ```
704 | 
705 | ## Summary
706 | 
707 | **Basic Memory Cloud uses project-scoped sync:**
708 | 
709 | 1. **Enable cloud mode** - `bm cloud login`
710 | 2. **Install rclone** - `bm cloud setup`
711 | 3. **Add projects with sync** - `bm project add research --local-path ~/Documents/research`
712 | 4. **Preview first sync** - `bm project bisync --name research --resync --dry-run`
713 | 5. **Establish baseline** - `bm project bisync --name research --resync`
714 | 6. **Daily workflow** - `bm project bisync --name research`
715 | 
716 | **Key benefits:**
717 | - ✅ Each project independently syncs (or doesn't)
718 | - ✅ Projects can live anywhere on disk
719 | - ✅ Explicit sync operations (no magic)
720 | - ✅ Safe by design (max delete limits, conflict resolution)
721 | - ✅ Full offline access (work locally, sync when ready)
722 | 
723 | **Future enhancements:**
724 | - `--all` flag to sync all configured projects
725 | - Project list showing sync status
726 | - Watch mode for automatic sync (see the cron stopgap below)
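
Until watch mode lands, scheduling bisync is one stopgap. A hypothetical cron entry (interval and log path are assumptions; run the initial `--resync` first):

```bash
# Run bisync for the research project every 15 minutes
*/15 * * * * bm project bisync --name research >> ~/.basic-memory/bisync-cron.log 2>&1
```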
727 | 
```

--------------------------------------------------------------------------------
/.claude/commands/test-live.md:
--------------------------------------------------------------------------------

```markdown
  1 | # /project:test-live - Live Basic Memory Testing Suite
  2 | 
  3 | Execute comprehensive real-world testing of Basic Memory using the installed version. 
  4 | All test results are recorded as notes in a dedicated test project.
  5 | 
  6 | ## Usage
  7 | ```
  8 | /project:test-live [phase]
  9 | ```
 10 | 
 11 | **Parameters:**
 12 | - `phase` (optional): Specific test phase to run (`recent`, `core`, `features`, `edge`, `workflows`, `stress`, or `all`)
 13 | - `recent` - Focus on recent changes and new features (recommended for regular testing)
 14 | - `core` - Essential tools only (Tier 1: write_note, read_note, search_notes, edit_note, list_memory_projects, recent_activity)
 15 | - `features` - Core + important workflows (Tier 1 + Tier 2); `edge`, `workflows`, and `stress` run the matching focused phases (edge cases, real-world workflows, creative stress testing)
 16 | - `all` - Comprehensive testing of all tools and scenarios
 17 | 
 18 | ## Implementation
 19 | 
 20 | You are an expert QA engineer conducting live testing of Basic Memory. 
 21 | When the user runs `/project:test-live`, execute the comprehensive test plan below:
 22 | 
 23 | ## Tool Testing Priority
 24 | 
 25 | ### **Tier 1: Critical Core (Always Test)**
 26 | 1. **write_note** - Foundation of all knowledge creation
 27 | 2. **read_note** - Primary knowledge retrieval mechanism
 28 | 3. **search_notes** - Essential for finding information
 29 | 4. **edit_note** - Core content modification capability
 30 | 5. **list_memory_projects** - Project discovery and session guidance
 31 | 6. **recent_activity** - Project discovery mode and activity analysis
 32 | 
 33 | ### **Tier 2: Important Workflows (Usually Test)**
 34 | 7. **build_context** - Conversation continuity via memory:// URLs
 35 | 8. **create_memory_project** - Essential for project setup
 36 | 9. **move_note** - Knowledge organization
 37 | 10. **sync_status** - Understanding system state
 38 | 11. **delete_project** - Project lifecycle management
 39 | 
 40 | ### **Tier 3: Enhanced Functionality (Sometimes Test)**
 41 | 12. **view_note** - Claude Desktop artifact display
 42 | 13. **read_content** - Raw content access
 43 | 14. **delete_note** - Content removal
 44 | 15. **list_directory** - File system exploration
 45 | 16. **edit_note** (advanced modes) - Complex find/replace operations
 46 | 
 47 | ### **Tier 4: Specialized (Rarely Test)**
 48 | 17. **canvas** - Obsidian visualization (specialized use case)
 49 | 18. **MCP Prompts** - Enhanced UX tools (ai_assistant_guide, continue_conversation)
 50 | 
 51 | ## Stateless Architecture Testing
 52 | 
 53 | ### **Project Discovery Workflow (CRITICAL)**
 54 | Test the new stateless project selection flow:
 55 | 
 56 | 1. **Initial Discovery**
 57 |    - Call `list_memory_projects()` without knowing which project to use
 58 |    - Verify clear session guidance appears: "Next: Ask which project to use"
 59 |    - Confirm removal of CLI-specific references
 60 | 
 61 | 2. **Activity-Based Discovery**
 62 |    - Call `recent_activity()` without project parameter (discovery mode)
 63 |    - Verify intelligent project suggestions based on activity
 64 |    - Test guidance: "Should I use [most-active-project] for this task?"
 65 | 
 66 | 3. **Session Tracking Validation**
 67 |    - Verify all tool responses include `[Session: Using project 'name']`
 68 |    - Confirm guidance reminds about session-wide project tracking
 69 | 
 70 | 4. **Single Project Constraint Mode**
 71 |    - Test MCP server with `--project` parameter
 72 |    - Verify all operations constrained to specified project
 73 |    - Test project override behavior in constrained mode (launch sketch below)
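
   A hypothetical launch for constrained mode (the `--project` flag and environment variable come from SPEC-6; the exact executable invocation is an assumption):

   ```bash
   # Constrain the MCP server to a single project
   basic-memory mcp --project work-notes
   # Equivalent via environment variable
   BASIC_MEMORY_MCP_PROJECT=work-notes basic-memory mcp
   ```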
 74 | 
 75 | ### **Explicit Project Parameters (CRITICAL)**
 76 | All tools must require explicit project parameters:
 77 | 
 78 | 1. **Parameter Validation**
 79 |    - Test all Tier 1 tools require `project` parameter
 80 |    - Verify clear error messages for missing project
 81 |    - Test invalid project name handling
 82 | 
 83 | 2. **No Session State Dependencies**
 84 |    - Confirm no tool relies on "current project" concept
 85 |    - Test rapid project switching within conversation
 86 |    - Verify each call is truly independent (probe sequence below)
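
A minimal probe sequence covering both checks (the project name and expected error texts are illustrative):

```bash
# Explicit project: should succeed
write_note(project="basic-memory-testing-2025", title="Param Check", content="ok", folder="tests")
# Missing project: expect a clear error naming the required parameter
write_note(title="No Project", content="ok", folder="tests")
# Unknown project: expect a "project not found" error, never a silent fallback
search_notes(project="does-not-exist", query="anything")
```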
 87 | 
 88 | ### Pre-Test Setup
 89 | 
 90 | 1. **Environment Verification**
 91 |    - Verify basic-memory is installed and accessible via MCP
 92 |    - Check version and confirm it's the expected release
 93 |    - Test MCP connection and tool availability
 94 | 
 95 | 2. **Recent Changes Analysis** (if phase includes 'recent' or 'all')
 96 |    - Run `git log --oneline -20` to examine recent commits
 97 |    - Identify new features, bug fixes, and enhancements
 98 |    - Generate targeted test scenarios for recent changes
 99 |    - Prioritize regression testing for recently fixed issues
100 | 
101 | 3. **Test Project Creation**
102 | 
103 | Run the bash `date` command to get the current date/time. 
104 | 
105 |    ```
106 |    Create project: "basic-memory-testing-[timestamp]"
107 |    Location: ~/basic-memory-testing-[timestamp]
108 |    Purpose: Record all test observations and results
109 |    ```
110 | 
111 | Make sure to use the newly created project for all subsequent test operations by specifying it in the `project` parameter of each tool call.
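
For example (the timestamp suffix is illustrative):

```bash
write_note(project="basic-memory-testing-20250607-1430", title="Session Start", content="Baseline established", folder="sessions")
```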
112 | 
113 | 4. **Baseline Documentation**
114 |    Create initial test session note with:
115 |    - Test environment details
116 |    - Version being tested
117 |    - Recent changes identified (if applicable)
118 |    - Test objectives and scope
119 |    - Start timestamp
120 | 
121 | ### Phase 0: Recent Changes Validation (if 'recent' or 'all' phase)
122 | 
123 | Based on recent commit analysis, create targeted test scenarios:
124 | 
125 | **Recent Changes Test Protocol:**
126 | 1. **Feature Addition Tests** - For each new feature identified:
127 |    - Test basic functionality
128 |    - Test integration with existing tools
129 |    - Verify documentation accuracy
130 |    - Test edge cases and error handling
131 | 
132 | 2. **Bug Fix Regression Tests** - For each recent fix:
133 |    - Recreate the original problem scenario
134 |    - Verify the fix works as expected
135 |    - Test related functionality isn't broken
136 |    - Document the verification in test notes
137 | 
138 | 3. **Performance/Enhancement Validation** - For optimizations:
139 |    - Establish baseline timing
140 |    - Compare with expected improvements
141 |    - Test under various load conditions
142 |    - Document performance observations
143 | 
144 | **Example Recent Changes (Update based on actual git log):**
145 | - Watch Service Restart (#156): Test project creation → file modification → automatic restart
146 | - Cross-Project Moves (#161): Test move_note with cross-project detection
147 | - Docker Environment Support (#174): Test BASIC_MEMORY_HOME behavior
148 | - MCP Server Logging (#164): Verify log level configurations
149 | 
150 | ### Phase 1: Core Functionality Validation (Tier 1 Tools)
151 | 
152 | Test essential MCP tools that form the foundation of Basic Memory:
153 | 
154 | **1. write_note Tests (Critical):**
155 | - ✅ Basic note creation with frontmatter
156 | - ✅ Special characters and Unicode in titles
157 | - ✅ Various content types (lists, headings, code blocks)
158 | - ✅ Empty notes and minimal content edge cases
159 | - ⚠️ Error handling for invalid parameters
160 | 
161 | **2. read_note Tests (Critical):**
162 | - ✅ Read by title, permalink, memory:// URLs
163 | - ✅ Non-existent notes (error handling)
164 | - ✅ Notes with complex markdown formatting
165 | - ⚠️ Performance with large notes (>10MB)
166 | 
167 | **3. search_notes Tests (Critical):**
168 | - ✅ Simple text queries across content
169 | - ✅ Tag-based searches with multiple tags
170 | - ✅ Boolean operators (AND, OR, NOT); example probes below
171 | - ✅ Empty/no results scenarios
172 | - ⚠️ Performance with 100+ notes
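
Example boolean probes (query strings are illustrative):

```bash
search_notes(project="basic-memory-testing-2025", query="meeting AND (decision OR action)")
search_notes(project="basic-memory-testing-2025", query="research NOT archived")
```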
173 | 
174 | **4. edit_note Tests (Critical):**
175 | - ✅ Append operations preserving frontmatter
176 | - ✅ Prepend operations
177 | - ✅ Find/replace with validation
178 | - ✅ Section replacement under headers
179 | - ⚠️ Error scenarios (invalid operations)
180 | 
181 | **5. list_memory_projects Tests (Critical):**
182 | - ✅ Display all projects with clear session guidance
183 | - ✅ Project discovery workflow prompts
184 | - ✅ Removal of CLI-specific references
185 | - ✅ Empty project list handling
186 | - ✅ Single project constraint mode display
187 | 
188 | **6. recent_activity Tests (Critical - Discovery Mode):**
189 | - ✅ Discovery mode without project parameter
190 | - ✅ Intelligent project suggestions based on activity
191 | - ✅ Guidance prompts for project selection
192 | - ✅ Session tracking reminders in responses
193 | - ⚠️ Performance with multiple projects
194 | 
195 | ### Phase 2: Important Workflows (Tier 2 Tools)
196 | 
197 | **7. build_context Tests (Important):**
198 | - ✅ Different depth levels (1, 2, 3+)
199 | - ✅ Various timeframes for context
200 | - ✅ memory:// URL navigation
201 | - ⚠️ Performance with complex relation graphs
202 | 
203 | **8. create_memory_project Tests (Important):**
204 | - ✅ Create projects dynamically
205 | - ✅ Set default during creation
206 | - ✅ Path validation and creation
207 | - ⚠️ Invalid paths and names
208 | - ✅ Integration with existing projects
209 | 
210 | **9. move_note Tests (Important):**
211 | - ✅ Move within same project
212 | - ✅ Cross-project moves with detection (#161)
213 | - ✅ Automatic folder creation
214 | - ✅ Database consistency validation
215 | - ⚠️ Special characters in paths
216 | 
217 | **10. sync_status Tests (Important):**
218 | - ✅ Background operation monitoring
219 | - ✅ File synchronization status
220 | - ✅ Project sync state reporting
221 | - ⚠️ Error state handling
222 | 
223 | ### Phase 3: Enhanced Functionality (Tier 3 Tools)
224 | 
225 | **11. view_note Tests (Enhanced):**
226 | - ✅ Claude Desktop artifact display
227 | - ✅ Title extraction from frontmatter
228 | - ✅ Unicode and emoji content rendering
229 | - ⚠️ Error handling for non-existent notes
230 | 
231 | **12. read_content Tests (Enhanced):**
232 | - ✅ Raw file content access
233 | - ✅ Binary file handling
234 | - ✅ Image file reading
235 | - ⚠️ Large file performance
236 | 
237 | **13. delete_note Tests (Enhanced):**
238 | - ✅ Single note deletion
239 | - ✅ Database consistency after deletion
240 | - ⚠️ Non-existent note handling
241 | - ✅ Confirmation of successful deletion
242 | 
243 | **14. list_directory Tests (Enhanced):**
244 | - ✅ Directory content listing
245 | - ✅ Depth control and filtering
246 | - ✅ File name globbing
247 | - ⚠️ Empty directory handling
248 | 
249 | **15. delete_project Tests (Enhanced):**
250 | - ✅ Project removal from config
251 | - ✅ Database cleanup
252 | - ⚠️ Default project protection
253 | - ⚠️ Non-existent project handling
254 | 
255 | ### Phase 4: Edge Case Exploration
256 | 
257 | **Boundary Testing:**
258 | - Very long titles and content (stress limits)
259 | - Empty projects and notes
260 | - Unicode, emojis, special symbols
261 | - Deeply nested folder structures
262 | - Circular relations and self-references
263 | - Maximum relation depths
264 | 
265 | **Error Scenarios:**
266 | - Invalid memory:// URLs
267 | - Missing files referenced in database
268 | - Invalid project names and paths
269 | - Malformed note structures
270 | - Concurrent operation conflicts
271 | 
272 | **Performance Testing:**
273 | - Create 100+ notes rapidly
274 | - Complex search queries
275 | - Deep relation chains (5+ levels)
276 | - Rapid successive operations
277 | - Memory usage monitoring
278 | 
279 | ### Phase 5: Real-World Workflow Scenarios
280 | 
281 | **Meeting Notes Pipeline:**
282 | 1. Create meeting notes with action items
283 | 2. Extract action items using edit_note
284 | 3. Build relations to project documents
285 | 4. Update progress incrementally
286 | 5. Search and track completion
287 | 
288 | **Research Knowledge Building:**
289 | 1. Create research topic hierarchy
290 | 2. Build complex relation networks
291 | 3. Add incremental findings over time
292 | 4. Search for connections and patterns
293 | 5. Reorganize as knowledge evolves
294 | 
295 | **Multi-Project Workflow:**
296 | 1. Technical documentation project
297 | 2. Personal recipe collection project
298 | 3. Learning/course notes project
299 | 4. Specify different projects for different operations
300 | 5. Cross-reference related concepts
301 | 
302 | **Content Evolution:**
303 | 1. Start with basic notes
304 | 2. Enhance with relations and observations
305 | 3. Reorganize file structure using moves
306 | 4. Update content with edit operations
307 | 5. Validate knowledge graph integrity
308 | 
309 | ### Phase 6: Specialized Tools Testing (Tier 4)
310 | 
311 | **16. canvas Tests (Specialized):**
312 | - ✅ JSON Canvas generation
313 | - ✅ Node and edge creation
314 | - ✅ Obsidian compatibility
315 | - ⚠️ Complex graph handling
316 | 
317 | **17. MCP Prompts Tests (Specialized):**
318 | - ✅ ai_assistant_guide output
319 | - ✅ continue_conversation functionality
320 | - ✅ Formatted search results
321 | - ✅ Enhanced activity reports
322 | 
323 | ### Phase 7: Integration & File Watching Tests
324 | 
325 | **File System Integration:**
326 | - ✅ Watch service behavior with file changes
327 | - ✅ Project creation → watch restart (#156)
328 | - ✅ Multi-project synchronization
329 | - ⚠️ MCP→API→DB→File stack validation
330 | 
331 | **Real Integration Testing:**
332 | - ✅ End-to-end file watching vs manual operations
333 | - ✅ Cross-session persistence
334 | - ✅ Database consistency across operations
335 | - ⚠️ Performance under real file system changes
336 | 
337 | ### Phase 8: Creative Stress Testing
338 | 
339 | **Creative Exploration:**
340 | - Rapid project creation/switching patterns
341 | - Unusual but valid markdown structures
342 | - Creative observation categories
343 | - Novel relation types and patterns
344 | - Unexpected tool combinations
345 | 
346 | **Stress Scenarios:**
347 | - Bulk operations (many notes quickly)
348 | - Complex nested moves and edits
349 | - Deep context building
350 | - Complex boolean search expressions
351 | - Resource constraint testing
352 | 
353 | ## Test Execution Guidelines
354 | 
355 | ### Quick Testing (core/features phases)
356 | - Focus on Tier 1 tools (core) or Tier 1+2 (features)
357 | - Test essential functionality and common edge cases
358 | - Record critical issues immediately
359 | - Complete in 15-20 minutes
360 | 
361 | ### Comprehensive Testing (all phase)
362 | - Cover all tiers systematically
363 | - Include specialized tools and stress testing
364 | - Document performance baselines
365 | - Complete in 45-60 minutes
366 | 
367 | ### Recent Changes Focus (recent phase)
368 | - Analyze git log for recent commits
369 | - Generate targeted test scenarios
370 | - Focus on regression testing for fixes
371 | - Validate new features thoroughly
372 | 
373 | ## Test Observation Format
374 | 
375 | Record ALL observations immediately as Basic Memory notes:
376 | 
377 | ```markdown
378 | ---
379 | title: Test Session [Phase] YYYY-MM-DD HH:MM
380 | tags: [testing, v0.13.0, live-testing, [phase]]
381 | permalink: test-session-[phase]-[timestamp]
382 | ---
383 | 
384 | # Test Session [Phase] - [Date/Time]
385 | 
386 | ## Environment
387 | - Basic Memory version: [version]
388 | - MCP connection: [status]
389 | - Test project: [name]
390 | - Phase focus: [description]
391 | 
392 | ## Test Results
393 | 
394 | ### ✅ Successful Operations
395 | - [timestamp] ✅ write_note: Created note with emoji title 📝 #tier1 #functionality
396 | - [timestamp] ✅ search_notes: Boolean query returned 23 results in 0.4s #tier1 #performance  
397 | - [timestamp] ✅ edit_note: Append operation preserved frontmatter #tier1 #reliability
398 | 
399 | ### ⚠️ Issues Discovered
400 | - [timestamp] ⚠️ move_note: Slow with deep folder paths (2.1s) #tier2 #performance
401 | - [timestamp] 🚨 search_notes: Unicode query returned unexpected results #tier1 #bug #critical
402 | - [timestamp] ⚠️ build_context: Context lost for memory:// URLs #tier2 #issue
403 | 
404 | ### 🚀 Enhancements Identified
405 | - edit_note could benefit from preview mode #ux-improvement
406 | - search_notes needs fuzzy matching for typos #feature-idea
407 | - move_note could auto-suggest folder creation #usability
408 | 
409 | ### 📊 Performance Metrics
410 | - Average write_note time: 0.3s
411 | - Search with 100+ notes: 0.6s
412 | - Project parameter overhead: <0.1s
413 | - Memory usage: [observed levels]
414 | 
415 | ## Relations
416 | - tests [[Basic Memory v0.13.0]]
417 | - part_of [[Live Testing Suite]]
418 | - found_issues [[Bug Report: Unicode Search]]
419 | - discovered [[Performance Optimization Opportunities]]
420 | ```
421 | 
422 | ## Quality Assessment Areas
423 | 
424 | **User Experience & Usability:**
425 | - Tool instruction clarity and examples
426 | - Error message actionability
427 | - Response time acceptability
428 | - Tool consistency and discoverability
429 | - Learning curve and intuitiveness
430 | 
431 | **System Behavior:**
432 | - Stateless operation independence
433 | - memory:// URL navigation reliability
434 | - Multi-step workflow cohesion
435 | - Edge case graceful handling
436 | - Recovery from user errors
437 | 
438 | **Documentation Alignment:**
439 | - Tool output clarity and helpfulness
440 | - Behavior vs. documentation accuracy
441 | - Example validity and usefulness
442 | - Real-world vs. documented workflows
443 | 
444 | **Mental Model Validation:**
445 | - Natural user expectation alignment
446 | - Surprising behavior identification
447 | - Mistake recovery ease
448 | - Knowledge graph concept naturalness
449 | 
450 | **Performance & Reliability:**
451 | - Operation completion times
452 | - Consistency across sessions
453 | - Scaling behavior with growth
454 | - Unexpected slowness identification
455 | 
456 | ## Error Documentation Protocol
457 | 
458 | For each error discovered:
459 | 
460 | 1. **Immediate Recording**
461 |    - Create dedicated error note
462 |    - Include exact reproduction steps
463 |    - Capture error messages verbatim
464 |    - Note system state when error occurred
465 | 
466 | 2. **Error Note Format**
467 |    ````markdown
468 |    ---
469 |    title: Bug Report - [Short Description]
470 |    tags: [bug, testing, v0.13.0, [severity]]
471 |    ---
472 |    
473 |    # Bug Report: [Description]
474 |    
475 |    ## Reproduction Steps
476 |    1. [Exact steps to reproduce]
477 |    2. [Include all parameters used]
478 |    3. [Note any special conditions]
479 |    
480 |    ## Expected Behavior
481 |    [What should have happened]
482 |    
483 |    ## Actual Behavior  
484 |    [What actually happened]
485 |    
486 |    ## Error Messages
487 |    ```
488 |    [Exact error text]
489 |    ```
490 |    
491 |    ## Environment
492 |    - Version: [version]
493 |    - Project: [name]
494 |    - Timestamp: [when]
495 |    
496 |    ## Severity
497 |    - [ ] Critical (blocks major functionality)
498 |    - [ ] High (impacts user experience)
499 |    - [ ] Medium (workaround available)
500 |    - [ ] Low (minor inconvenience)
501 |    
502 |    ## Relations
503 |    - discovered_during [[Test Session [Phase]]]
504 |    - affects [[Feature Name]]
505 |    ````
506 | 
507 | ## Success Metrics Tracking
508 | 
509 | **Quantitative Measures:**
510 | - Test scenario completion rate
511 | - Bug discovery count with severity
512 | - Performance benchmark establishment
513 | - Tool coverage completeness
514 | 
515 | **Qualitative Measures:**
516 | - Conversation flow naturalness
517 | - Knowledge graph quality
518 | - User experience insights
519 | - System reliability assessment
520 | 
521 | ## Test Execution Flow
522 | 
523 | 1. **Setup Phase** (5 minutes)
524 |    - Verify environment and create test project
525 |    - Record baseline system state
526 |    - Establish performance benchmarks
527 | 
528 | 2. **Core Testing** (15-20 minutes per phase)
529 |    - Execute test scenarios systematically
530 |    - Record observations immediately
531 |    - Note timestamps for performance tracking
532 |    - Explore variations when interesting behaviors occur
533 | 
534 | 3. **Documentation** (5 minutes per phase)
535 |    - Create phase summary note
536 |    - Link related test observations
537 |    - Update running issues list
538 |    - Record enhancement ideas
539 | 
540 | 4. **Analysis Phase** (10 minutes)
541 |    - Review all observations across phases
542 |    - Identify patterns and trends
543 |    - Create comprehensive summary report
544 |    - Generate development recommendations
545 | 
546 | ## Testing Success Criteria
547 | 
548 | ### Core Testing (Tier 1) - Must Pass
549 | - All 6 critical tools function correctly
550 | - No critical bugs in essential workflows
551 | - Acceptable performance for basic operations
552 | - Error handling works as expected
553 | 
554 | ### Feature Testing (Tier 1+2) - Should Pass
555 | - All 11 core + important tools function
556 | - Workflow scenarios complete successfully
557 | - Performance meets baseline expectations
558 | - Integration points work correctly
559 | 
560 | ### Comprehensive Testing (All Tiers) - Complete Coverage
561 | - All tools tested across all scenarios
562 | - Edge cases and stress testing completed
563 | - Performance baselines established
564 | - Full documentation of issues and enhancements
565 | 
566 | ## Expected Outcomes
567 | 
568 | **System Validation:**
569 | - Feature verification prioritized by tier importance
570 | - Recent changes validated for regression
571 | - Performance baseline establishment
572 | - Bug identification with severity assessment
573 | 
574 | **Knowledge Base Creation:**
575 | - Prioritized testing documentation
576 | - Real usage examples for user guides
577 | - Recent changes validation records
578 | - Performance insights for optimization
579 | 
580 | **Development Insights:**
581 | - Tier-based bug priority list
582 | - Recent changes impact assessment
583 | - Enhancement ideas from real usage
584 | - User experience improvement areas
585 | 
586 | ## Post-Test Deliverables
587 | 
588 | 1. **Test Summary Note**
589 |    - Overall results and findings
590 |    - Critical issues requiring immediate attention
591 |    - Enhancement opportunities discovered
592 |    - System readiness assessment
593 | 
594 | 2. **Bug Report Collection**
595 |    - All discovered issues with reproduction steps
596 |    - Severity and impact assessments
597 |    - Suggested fixes where applicable
598 | 
599 | 3. **Performance Baseline**
600 |    - Timing data for all operations
601 |    - Scaling behavior observations
602 |    - Resource usage patterns
603 | 
604 | 4. **UX Improvement Recommendations**
605 |    - Usability enhancement suggestions
606 |    - Documentation improvement areas
607 |    - Tool design optimization ideas
608 | 
609 | 5. **Updated TESTING.md**
610 |    - Incorporate new test scenarios discovered
611 |    - Update based on real execution experience
612 |    - Add performance benchmarks and targets
613 | 
614 | ## Context
615 | - Uses real installed basic-memory version 
616 | - Tests complete MCP→API→DB→File stack
617 | - Creates living documentation in Basic Memory itself
618 | - Follows integration over isolation philosophy
619 | - Prioritizes testing by tool importance and usage frequency
620 | - Adapts to recent development changes dynamically
621 | - Focuses on real usage patterns over checklist validation
622 | - Generates actionable insights prioritized by impact
```

--------------------------------------------------------------------------------
/specs/SPEC-6 Explicit Project Parameter Architecture.md:
--------------------------------------------------------------------------------

```markdown
  1 | ---
  2 | title: 'SPEC-6: Explicit Project Parameter Architecture'
  3 | type: spec
  4 | permalink: specs/spec-6-explicit-project-parameter-architecture
  5 | tags:
  6 | - architecture
  7 | - mcp
  8 | - project-management
  9 | - stateless
 10 | ---
 11 | 
 12 | # SPEC-6: Explicit Project Parameter Architecture
 13 | 
 14 | ## Why
 15 | 
 16 | The current session-based project management system has critical reliability issues:
 17 | 
 18 | 1. **Session State Fragility**: Claude iOS mobile client fails to maintain consistent session IDs across MCP tool calls, causing project switching to silently fail (Issue #74)
 19 | 2. **Scaling Limitations**: Redis-backed session state creates single-point-of-failure and prevents horizontal scaling
 20 | 3. **Client Compatibility**: Session tracking works inconsistently across different MCP clients (web, mobile, API)
 21 | 4. **Hidden Complexity**: Users cannot see or understand "current project" state, leading to confusion when operations execute in wrong projects
 22 | 5. **Silent Failures**: Operations appear successful but execute in unintended projects, risking data integrity
 23 | 
 24 | Evidence from production logs shows each MCP tool call from mobile client receives different session IDs:
 25 | ```
 26 | create_memory_project: session_id=12cdfc24913b48f8b680ed4b2bfdb7ba
 27 | switch_project:       session_id=050a69275d98498cbdd227cdb74d9740
 28 | list_directory:       session_id=85f3483014af4136a5d435c76ded212f
 29 | ```
 30 | 
 31 | Related Github issue: https://github.com/basicmachines-co/basic-memory-cloud/issues/75
 32 | 
 33 | ## Status
 34 | 
 35 | **Current Status**: **ALL PHASES COMPLETE** ✅ **PRODUCTION DEPLOYED**
 36 | **Target**: Fix Claude iOS session ID consistency issues ✅ **ACHIEVED**
 37 | **Draft PR**: https://github.com/basicmachines-co/basic-memory/pull/298 ✅ **MERGED & DEPLOYED**
 38 | 
 39 | ### 🎉 **COMPLETE SUCCESS - PRODUCTION READY**
 40 | 
 41 | **ALL PHASES OF SPEC-6 IMPLEMENTATION COMPLETE!** The stateless architecture has been successfully implemented across both Basic Memory core and Basic Memory Cloud, representing a **fundamental architectural improvement** that completely solves the Claude iOS compatibility issue while providing superior scalability and reliability.
 42 | 
 43 | #### Implementation Summary:
 44 | - **16 files modified** with 582 additions and 550 deletions
 45 | - **All 17 MCP tools** converted to stateless architecture
 46 | - **147 tests updated** across 5 test files (100% passing)
 47 | - **Complete session state removal** from core MCP tools
 48 | - **Enhanced error handling** and security validations preserved
 49 | 
 50 | ### Progress Summary
 51 | 
 52 | ✅ **Complete Stateless Architecture Implementation (All 17 tools)** - **PRODUCTION DEPLOYED**
 53 | - Stateless `get_active_project()` function implemented and deployed ✅
 54 | - All session state dependencies removed across entire MCP server ✅
 55 | - All MCP tools require explicit `project` parameter as first argument ✅
 56 | - **Cloud Service**: Redis removed, stateless HTTP enabled ✅
 57 | - **Production Validation**: Comprehensive testing completed with 100% success ✅
 58 | 
 59 | ✅ **Content Management Tools Complete (6/6 tools)**
 60 | - `write_note`, `read_note`, `delete_note`, `edit_note` ✅
 61 | - `view_note`, `read_content` ✅
 62 | 
 63 | ✅ **Knowledge Graph Navigation Tools Complete (3/3 tools)**
 64 | - `build_context`, `recent_activity`, `list_directory` ✅
 65 | 
 66 | ✅ **Search & Discovery Tools Complete (1/1 tools)**
 67 | - `search_notes` ✅
 68 | 
 69 | ✅ **Visualization Tools Complete (1/1 tools)**
 70 | - `canvas` ✅
 71 | 
 72 | ✅ **Project Management Cleanup Complete**
 73 | - Removed `switch_project` and `get_current_project` tools ✅
 74 | - Updated `set_default_project` to remove activate parameter ✅
 75 | 
 76 | ✅ **Comprehensive Testing Complete (157 tests)**
 77 | - All test suites updated to use stateless architecture (147 existing tests)
 78 | - Single project constraint mode integration tests (10 new tests)
 79 | - 100% test pass rate across all tool test files
 80 | - Security validations preserved and working
 81 | - Error handling comprehensive and user-friendly
 82 | 
 83 | ✅ **Documentation & Examples Complete**
 84 | - All tool docstrings updated with stateless examples
 85 | - Project parameter usage clearly documented
 86 | - Error handling and security behavior documented
 87 | 
 88 | ✅ **Enhanced Discovery Mode Complete**
 89 | - `recent_activity` tool supports dual-mode operation (discovery vs project-specific)
 90 | - ProjectActivitySummary schema provides cross-project insights
 91 | - Recent activity prompt updated to support both modes
 92 | - Comprehensive project distribution statistics and most active project tracking
 93 | 
 94 | ✅ **Single Project Constraint Mode Complete**
 95 | - `--project` CLI parameter for MCP server constraint
 96 | - Environment variable control (`BASIC_MEMORY_MCP_PROJECT`)
 97 | - Automatic project override in `get_active_project()` function
 98 | - Project management tools disabled in constrained mode with helpful CLI guidance
 99 | - Comprehensive integration test suite (10 tests covering all constraint scenarios)
100 | 
101 | ## What
102 | 
103 | Transform Basic Memory from stateful session-based to stateless explicit project parameter architecture:
104 | 
105 | ### Core Changes
106 | 1. **Mandatory Project Parameter**: All MCP tools require explicit `project` parameter
107 | 2. **Remove Session State**: Eliminate Redis, session middleware, and `switch_project` tool
108 | 3. **Stateless HTTP**: Enable `stateless_http=True` for horizontal scaling
109 | 4. **Enhanced Context Discovery**: Improve `recent_activity` to show project distribution
110 | 5. **Clear Response Format**: All tool responses display target project information
111 | 
112 | ### Implementation Approach
113 | 
114 | - Each tool will directly accept the project parameter
115 | - Remove all calls to context-based project retrieval
116 | - Validate project exists before operations (sketched below)
117 | - Clear error messages when project not found
118 | - Backward compatibility: Initially keep optional parameter, then make required
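
A sketch of the validation step (the repository lookup and error wording are assumptions for illustration, not the actual basic-memory internals):

```python
async def get_active_project(project: str) -> "Project":
    """Resolve an explicitly passed project name, failing loudly if unknown."""
    found = await project_repository.get_by_name(project)  # hypothetical lookup
    if found is None:
        raise ValueError(
            f"Project '{project}' not found. "
            "Call list_memory_projects() to see available projects."
        )
    return found
```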
119 | 
120 | ### Affected MCP Tools
121 | **Content Management** (require project parameter):
122 | - `write_note(project, title, content, folder)`
123 | - `read_note(project, identifier)`
124 | - `edit_note(project, identifier, operation, content)`
125 | - `delete_note(project, identifier)`
126 | - `view_note(project, identifier)`
127 | - `read_content(project, path)`
128 | 
129 | **Knowledge Graph Navigation** (require project parameter):
130 | - `build_context(project, url, timeframe, depth, max_related)`
131 | - `list_directory(project, dir_name, depth, file_name_glob)`
132 | - `search_notes(project, query, search_type, types, entity_types)`
133 | 
134 | **Search & Discovery** (use project parameter for specific project or none for discovery):
135 | - `recent_activity(project, timeframe, depth, max_related)`
136 | 
137 | **Visualization** (require project parameter):
138 | - `canvas(project, nodes, edges, title, folder)`
139 | 
140 | **Project Management** (unchanged - already stateless):
141 | - `list_memory_projects()`
142 | - `create_memory_project(project_name, project_path, set_default)`
143 | - `delete_project(project_name)`
144 | - `get_current_project()` - Remove this tool
145 | - `switch_project(project_name)` - Remove this tool
146 | - `set_default_project(project_name, activate)` - Remove activate parameter
147 | 
148 | ## How (High Level)
149 | 
150 | ### Phase 1: Basic Memory Core (basic-memory repository)
151 | 
152 | #### MCP Tool Updates
153 | 
154 | **Phase 1: Core Changes**
155 | 
156 | 1. Update project_context.py
157 | 
158 | - [x] Make project parameter mandatory for get_active_project()
159 | - [x] Remove session state handling
160 | 
161 | 2. Update Content Management Tools (6 tools)
162 | 
163 | - [x] write_note: Make project parameter required, not optional
164 | - [x] read_note: Make project parameter required
165 | - [x] edit_note: Add required project parameter
166 | - [x] delete_note: Add required project parameter
167 | - [x] view_note: Add required project parameter
168 | - [x] read_content: Add required project parameter
169 | 
170 | 3. Update Knowledge Graph Navigation Tools (3 tools)
171 | 
172 | - [x] build_context: Add required project parameter
173 | - [x] recent_activity: Make project parameter required
174 | - [x] list_directory: Add required project parameter
175 | 
176 | 4. Update Search & Visualization Tools (2 tools)
177 | 
178 | - [x] search_notes: Add required project parameter
179 | - [x] canvas: Add required project parameter
180 | 
181 | 5. Update Project Management Tools
182 | 
183 | - [x] Remove switch_project tool completely
184 | - [x] Remove get_current_project tool completely
185 | - [x] Update set_default_project to remove activate parameter
186 | - [x] Keep list_memory_projects, create_memory_project, delete_project unchanged
187 |     
188 | 6. Enhance recent_activity Response
189 | 
190 | - [x] Add project distribution info showing activity across all projects
191 | - [x] Include project usage stats in response
192 | - [x] Implement ProjectActivitySummary for discovery mode
193 | - [x] Add dual-mode functionality (discovery vs project-specific)
194 | 
195 | 7. Update Tool Documentation
196 | 
197 | - [x] Update write_note docstring with stateless architecture examples
198 | - [x] Update read_note docstring with project parameter examples
199 | - [x] Update delete_note docstring with comprehensive usage guidance
200 | - [x] Update all remaining tool docstrings with project parameter examples
201 | 
202 | 8. Update Tool Responses
203 | 
204 | - [x] Add clear project indicator to all tool responses across all tools
205 | - [x] Format: "project: {project_name}" in response metadata
206 | - [x] Add project metadata footer for LLM awareness
207 | - [x] Update all tool responses to include project indicators
208 | 
209 | 9. Comprehensive Testing
210 | 
211 | - [x] Update all write_note tests to use stateless architecture (34 tests passing)
212 | - [x] Update all edit_note tests to use stateless architecture (17 tests passing)
213 | - [x] Update all view_note tests to use stateless architecture (12 tests passing)
214 | - [x] Update all search_notes tests to use stateless architecture (16 tests passing)
215 | - [x] Update all move_note tests to use stateless architecture (31 tests passing)
216 | - [x] Update all delete_note tests to use stateless architecture
217 | - [x] Verify direct function call compatibility (bypassing MCP layer)
218 | - [x] Test security validation with project parameters
219 | - [x] Validate error handling for non-existent projects
220 | - [x] **Total: 157 tests updated and passing (100% success rate)**
221 |   - [x] **147 existing tests** updated for stateless architecture
222 |   - [x] **10 new tests** for single project constraint mode
223 | 
224 | ### Phase 1.5: Default Project Mode Enhancement
225 | 
226 | #### Problem
227 | While the stateless architecture solves reliability issues, it introduces UX friction for single-project users (estimated 80% of usage) who must specify the project parameter in every tool call.
228 | 
229 | #### Solution: Default Project Mode
230 | Add optional `default_project_mode` configuration that allows single-project users to have the simplicity of implicit project selection while maintaining the reliability of stateless architecture.
231 | 
232 | #### Configuration
233 | ```json
234 | {
235 |   "default_project": "main",
236 |   "default_project_mode": true  // NEW: Auto-use default_project when not specified
237 | }
238 | ```
239 | 
240 | #### Implementation Details
241 | 1. **Config Enhancement** (`src/basic_memory/config.py`)
242 |    - Add `default_project_mode: bool = Field(default=False)`
243 |    - Preserves backward compatibility (defaults to false)
244 | 
245 | 2. **Project Resolution Logic** (`src/basic_memory/mcp/project_context.py`)
246 |    Three-tier resolution hierarchy:
247 |    - Priority 1: CLI `--project` constraint (BASIC_MEMORY_MCP_PROJECT env var)
248 |    - Priority 2: Explicit project parameter in tool call
249 |    - Priority 3: `default_project` if `default_project_mode=true` and no project specified
250 | 
251 | 3. **Assistant Guide Updates** (`src/basic_memory/mcp/resources/ai_assistant_guide.md`)
252 |    - Detect `default_project_mode` at runtime
253 |    - Provide mode-specific instructions to LLMs
254 |    - In default mode: "All operations use project 'main' automatically"
255 |    - In regular mode: Current project discovery guidance
256 | 
257 | 4. **Tool Parameter Handling** (all MCP tools)
258 |    - Make project parameter Optional[str] = None
259 |    - Add resolution logic: `project = project or get_default_project()` (sketched below)
260 |    - Maintain explicit project override capability
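
A minimal sketch of this three-tier resolution (the helper name and config attribute access are assumptions):

```python
import os

def resolve_project(explicit: str | None, config) -> str:
    # Priority 1: CLI --project constraint, surfaced via environment variable
    constrained = os.environ.get("BASIC_MEMORY_MCP_PROJECT")
    if constrained:
        return constrained
    # Priority 2: explicit project parameter in the tool call
    if explicit:
        return explicit
    # Priority 3: fall back to the default project in default_project_mode
    if config.default_project_mode:
        return config.default_project
    raise ValueError("No project specified and default_project_mode is disabled")
```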
261 | 
262 | #### Usage Modes Summary
263 | - **Regular Mode**: Multi-project users, assistant tracks project per conversation
264 | - **Default Project Mode**: Single-project users, automatic default project
265 | - **Constrained Mode**: CLI --project flag, locked to specific project
266 | 
267 | #### Testing Requirements
268 | - Integration test for default_project_mode=true with missing parameters (sketched below)
269 | - Test explicit project override in default_project_mode
270 | - Test mode=false requires explicit parameters
271 | - Test CLI constraint overrides default_project_mode
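
A sketch of the first requirement as a test (the fixture is hypothetical; the `project: main` assertion follows the response metadata format described in this spec):

```python
import pytest

from basic_memory.mcp.tools.write_note import write_note


@pytest.mark.asyncio
async def test_default_mode_fills_missing_project(config_default_mode):  # hypothetical fixture
    # With default_project_mode=true and no project argument,
    # resolution should fall back to the configured default ("main").
    result = await write_note.fn(title="Implicit", content="body", folder="notes")
    assert "project: main" in result
```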
272 | 
273 | **Phase 2: Testing & Validation**
274 | 
275 | 8. Update Tests
276 | 
277 | - [x] Modify all MCP tool tests to pass required project parameter
278 | - [x] Remove tests for deleted tools (switch_project, get_current_project)
279 | - [x] Add tests for project parameter validation
280 | - [x] **Complete: All 147 tests across 5 test files updated and passing**
281 | 
282 | #### Enhanced recent_activity Response
283 | ```json
284 | {
285 |   "recent_notes": [...],
286 |   "project_activity": {
287 |     "research-project": {
288 |       "operations": 5,
289 |       "last_used": "30 minutes ago",
290 |       "recent_folders": ["experiments", "findings"]
291 |     },
292 |     "work-notes": {
293 |       "operations": 2,
294 |       "last_used": "2 hours ago",
295 |       "recent_folders": ["meetings", "planning"]
296 |     }
297 |   },
298 |   "total_projects": 3
299 | }
300 | ```
301 | 
302 | #### Response Format Updates
303 | ```
304 | ✓ Note created successfully
305 | 
306 | Project: research-project
307 | File: experiments/Neural Network Results.md
308 | Permalink: research-project/neural-network-results
309 | ```
310 | 
311 | ### Phase 2: Cloud Service Simplification (basic-memory-cloud repository) ✅ **COMPLETE**
312 | 
313 | #### ✅ Remove Session Infrastructure **COMPLETE**
314 | 1. ✅ Delete `apps/mcp/src/basic_memory_cloud_mcp/middleware/session_state.py`
315 | 2. ✅ Delete `apps/mcp/src/basic_memory_cloud_mcp/middleware/session_logging.py`
316 | 3. ✅ Update `apps/mcp/src/basic_memory_cloud_mcp/main.py`:
317 |    ```python
318 |    # Remove session middleware
319 |    # server.add_middleware(SessionStateMiddleware)
320 | 
321 |    # Enable stateless HTTP
322 |    mcp = FastMCP(name="basic-memory-mcp", stateless_http=True)
323 |    ```
324 | 
325 | #### ✅ Deployment Simplification **COMPLETE**
326 | 1. ✅ Remove Redis from `fly.toml`
327 | 2. ✅ Remove Redis environment variables
328 | 3. ✅ Update health checks to not depend on Redis
329 | 4. ✅ Production deployment verified working with stateless architecture
330 | 
331 | ### Phase 3: Conversational Project Management ✅ **COMPLETE**
332 | 
333 | #### ✅ Claude Behavior Pattern **VERIFIED WORKING**
334 | 1. ✅ **Project Discovery**:
335 |    ```
336 |    Claude: Let me check your recent activity...
337 |    [calls recent_activity() - no project needed for discovery]
338 | 
339 |    I see you've been working in:
340 |    - research-project (5 operations, 30 min ago)
341 |    - work-notes (2 operations, 2 hours ago)
342 | 
343 |    Which project should I use for this operation?
344 |    ```
345 | 
346 | 2. ✅ **Context Maintenance**:
347 |    ```
348 |    User: Use research-project
349 |    Claude: Working in research-project.
350 |    [All subsequent operations use project="research-project"]
351 |    ```
352 | 
353 | 3. ✅ **Explicit Project Switching**:
354 |    ```
355 |    User: Check work-notes for that meeting summary
356 |    Claude: Let me search work-notes for the meeting summary.
357 |    [Uses project="work-notes" for specific operation]
358 |    ```
359 | 
360 | **Validation**: Comprehensive testing confirmed all conversational patterns work naturally with the stateless architecture.
361 | 
362 | ## How to Evaluate
363 | 
364 | ### Success Criteria
365 | 
366 | #### 1. Functional Completeness
367 | - [x] All MCP tools accept required `project` parameter
368 | - [x] All MCP tools validate project exists before execution
369 | - [x] `switch_project` and `get_current_project` tools removed
370 | - [x] All responses display target project clearly
371 | - [x] No Redis dependencies in deployment (Phase 2: Cloud Service) ✅ **COMPLETE**
372 | - [x] `recent_activity` shows project distribution with ProjectActivitySummary
373 | 
374 | #### 2. Cross-Client Compatibility Testing ✅ **COMPLETE**
375 | Test identical operations across all clients:
376 | - [x] **Claude Desktop**: All operations work with explicit projects ✅
377 | - [x] **Claude Code**: All operations work with explicit projects ✅
378 | - [x] **Claude Mobile iOS**: All operations work with explicit projects ✅ **CRITICAL SUCCESS**
379 | - [x] **API clients**: All operations work with explicit projects ✅
380 | - [x] **CLI tools**: All operations work with explicit projects ✅
381 | 
382 | **Critical Achievement**: Claude iOS mobile client session tracking issues completely eliminated through stateless architecture.
383 | 
384 | #### 3. Session Independence Verification ✅ **COMPLETE**
385 | - [x] Operations work identically with/without session tracking ✅
386 | - [x] No behavioral differences between clients ✅
387 | - [x] Mobile client session ID changes do not affect operations ✅
388 | - [x] Redis can be completely removed without functional impact ✅
389 | 
390 | **Production Validation**: Redis removed from production deployment with zero functional impact.
391 | 
392 | #### 4. Performance & Scaling ✅ **COMPLETE**
393 | - [x] `stateless_http=True` enabled successfully ✅
394 | - [x] No Redis memory usage ✅
395 | - [x] Horizontal scaling possible (multiple MCP instances) ✅
396 | - [x] Response times unchanged or improved ✅
397 | 
398 | #### 5. User Experience Testing
399 | **Project Discovery Flow**:
400 | - [x] `recent_activity()` provides useful project context
401 | - [x] Claude can intelligently suggest projects based on activity
402 | - [x] Project switching is explicit and clear in conversation
403 | 
404 | **Error Handling**:
405 | - [x] Clear error messages for non-existent projects
406 | - [x] Helpful suggestions when project parameter missing
407 | - [x] No silent failures or wrong-project operations
408 | 
409 | **Response Clarity**:
410 | - [x] Every operation clearly shows target project
411 | - [x] Users always know which project is being operated on
412 | - [x] No confusion about "current project" state
413 | 
414 | #### 6. Migration Safety ✅ **COMPLETE**
415 | - [x] Backward compatibility period with optional project parameter ✅
416 | - [x] Clear migration documentation for existing users ✅
417 | - [x] Data integrity maintained during transition ✅
418 | - [x] No data loss during migration ✅
419 | 
420 | **Production Migration**: Successfully deployed to production with zero data loss and maintained system integrity.
421 | 
422 | ### Test Scenarios
423 | 
424 | #### Core Functionality Test
425 | ```bash
426 | # Test all tools work with explicit project
427 | write_note(project="test-proj", title="Test", content="Content", folder="docs")
428 | read_note(project="test-proj", identifier="Test")
429 | edit_note(project="test-proj", identifier="Test", operation="append", content="More")
430 | search_notes(project="test-proj", query="Content")
431 | list_directory(project="test-proj", dir_name="docs")
432 | delete_note(project="test-proj", identifier="Test")
433 | ```
434 | 
435 | #### Cross-Client Consistency Test
436 | Run identical test sequence on:
437 | 1. Claude Desktop
438 | 2. Claude Code
439 | 3. Claude Mobile iOS
440 | 4. API client
441 | 5. CLI tools
442 | 
443 | Verify all clients:
444 | - Accept explicit project parameters
445 | - Return identical responses
446 | - Show same project information
447 | - Have no session dependencies
448 | 
449 | #### Session Independence Test
450 | 1. Monitor session IDs during operations
451 | 2. Verify operations work with changing session IDs
452 | 3. Confirm Redis removal doesn't affect functionality
453 | 4. Test with multiple concurrent clients
454 | 
455 | ### Acceptance Criteria
456 | 
457 | **Must Have**:
458 | - All MCP tools require and use explicit project parameter
459 | - No session state dependencies remain
460 | - Universal client compatibility achieved
461 | - Clear project information in all responses
462 | 
463 | **Should Have**:
464 | - Enhanced `recent_activity` with project distribution
465 | - Smooth migration path for existing users
466 | - Improved performance with stateless architecture
467 | 
468 | **Could Have**:
469 | - Smart project suggestions based on content/context
470 | - Project shortcuts for common operations
471 | - Advanced project analytics in responses
472 | 
473 | ## Notes
474 | 
475 | ### Breaking Changes
476 | This is a **breaking change** that requires:
477 | - All MCP clients to pass project parameter
478 | - Migration of existing workflows
479 | - Update of all documentation and examples
480 | 
481 | ### Implementation Order
482 | 1. **basic-memory core** - Update MCP tools to accept project parameter (optional initially)
483 | 2. **Testing** - Verify all clients work with explicit projects
484 | 3. **Cloud service** - Remove session infrastructure
485 | 4. **Migration** - Make project parameter mandatory
486 | 5. **Cleanup** - Remove deprecated tools and middleware
487 | 
488 | ### Related Issues
489 | - Fixes #74 (Claude iOS session state bug)
490 | - Implements #75 (Mandatory project parameter architecture)
491 | - Enables future horizontal scaling
492 | - Simplifies multi-tenant architecture
493 | 
494 | ### Dependencies
495 | - Requires coordination between basic-memory and basic-memory-cloud repositories
496 | - Needs client-side updates for smooth transition
497 | - Documentation updates across all materials
498 | 
```

--------------------------------------------------------------------------------
/tests/mcp/test_tool_read_content.py:
--------------------------------------------------------------------------------

```python
  1 | """Tests for the read_content MCP tool security validation."""
  2 | 
  3 | import pytest
  4 | from unittest.mock import patch, MagicMock
  5 | from pathlib import Path
  6 | 
  7 | from basic_memory.mcp.tools.read_content import read_content
  8 | from basic_memory.mcp.tools.write_note import write_note
  9 | 
 10 | 
 11 | class TestReadContentSecurityValidation:
 12 |     """Test read_content security validation features."""
 13 | 
 14 |     @pytest.mark.asyncio
 15 |     async def test_read_content_blocks_path_traversal_unix(self, client, test_project):
 16 |         """Test that Unix-style path traversal attacks are blocked."""
 17 |         # Test various Unix-style path traversal patterns
 18 |         attack_paths = [
 19 |             "../secrets.txt",
 20 |             "../../etc/passwd",
 21 |             "../../../root/.ssh/id_rsa",
 22 |             "notes/../../../etc/shadow",
 23 |             "folder/../../outside/file.md",
 24 |             "../../../../etc/hosts",
 25 |             "../../../home/user/.env",
 26 |         ]
 27 | 
 28 |         for attack_path in attack_paths:
 29 |             result = await read_content.fn(project=test_project.name, path=attack_path)
 30 | 
 31 |             assert isinstance(result, dict)
 32 |             assert result["type"] == "error"
 33 |             assert "paths must stay within project boundaries" in result["error"]
 34 |             assert attack_path in result["error"]
 35 | 
 36 |     @pytest.mark.asyncio
 37 |     async def test_read_content_blocks_path_traversal_windows(self, client, test_project):
 38 |         """Test that Windows-style path traversal attacks are blocked."""
 39 |         # Test various Windows-style path traversal patterns
 40 |         attack_paths = [
 41 |             "..\\secrets.txt",
 42 |             "..\\..\\Windows\\System32\\config\\SAM",
 43 |             "notes\\..\\..\\..\\Windows\\System32",
 44 |             "\\\\server\\share\\file.txt",
 45 |             "..\\..\\Users\\user\\.env",
 46 |             "\\\\..\\..\\Windows",
 47 |             "..\\..\\..\\Boot.ini",
 48 |         ]
 49 | 
 50 |         for attack_path in attack_paths:
 51 |             result = await read_content.fn(project=test_project.name, path=attack_path)
 52 | 
 53 |             assert isinstance(result, dict)
 54 |             assert result["type"] == "error"
 55 |             assert "paths must stay within project boundaries" in result["error"]
 56 |             assert attack_path in result["error"]
 57 | 
 58 |     @pytest.mark.asyncio
 59 |     async def test_read_content_blocks_absolute_paths(self, client, test_project):
 60 |         """Test that absolute paths are blocked."""
 61 |         # Test various absolute path patterns
 62 |         attack_paths = [
 63 |             "/etc/passwd",
 64 |             "/home/user/.env",
 65 |             "/var/log/auth.log",
 66 |             "/root/.ssh/id_rsa",
 67 |             "C:\\Windows\\System32\\config\\SAM",
 68 |             "C:\\Users\\user\\.env",
 69 |             "D:\\secrets\\config.json",
 70 |             "/tmp/malicious.txt",
 71 |             "/usr/local/bin/evil",
 72 |         ]
 73 | 
 74 |         for attack_path in attack_paths:
 75 |             result = await read_content.fn(project=test_project.name, path=attack_path)
 76 | 
 77 |             assert isinstance(result, dict)
 78 |             assert result["type"] == "error"
 79 |             assert "paths must stay within project boundaries" in result["error"]
 80 |             assert attack_path in result["error"]
 81 | 
 82 |     @pytest.mark.asyncio
 83 |     async def test_read_content_blocks_home_directory_access(self, client, test_project):
 84 |         """Test that home directory access patterns are blocked."""
 85 |         # Test various home directory access patterns
 86 |         attack_paths = [
 87 |             "~/secrets.txt",
 88 |             "~/.env",
 89 |             "~/.ssh/id_rsa",
 90 |             "~/Documents/passwords.txt",
 91 |             "~\\AppData\\secrets",
 92 |             "~\\Desktop\\config.ini",
 93 |             "~/.bashrc",
 94 |             "~/Library/Preferences/secret.plist",
 95 |         ]
 96 | 
 97 |         for attack_path in attack_paths:
 98 |             result = await read_content.fn(project=test_project.name, path=attack_path)
 99 | 
100 |             assert isinstance(result, dict)
101 |             assert result["type"] == "error"
102 |             assert "paths must stay within project boundaries" in result["error"]
103 |             assert attack_path in result["error"]
104 | 
105 |     @pytest.mark.asyncio
106 |     async def test_read_content_blocks_mixed_attack_patterns(self, client, test_project):
107 |         """Test that mixed legitimate/attack patterns are blocked."""
108 |         # Test mixed patterns that start legitimate but contain attacks
109 |         attack_paths = [
110 |             "notes/../../../etc/passwd",
111 |             "docs/../../.env",
112 |             "legitimate/path/../../.ssh/id_rsa",
113 |             "project/folder/../../../Windows/System32",
114 |             "valid/folder/../../home/user/.bashrc",
115 |             "assets/../../../tmp/evil.exe",
116 |         ]
117 | 
118 |         for attack_path in attack_paths:
119 |             result = await read_content.fn(project=test_project.name, path=attack_path)
120 | 
121 |             assert isinstance(result, dict)
122 |             assert result["type"] == "error"
123 |             assert "paths must stay within project boundaries" in result["error"]
124 | 
125 |     @pytest.mark.asyncio
126 |     async def test_read_content_allows_safe_paths_with_mocked_api(self, client, test_project):
127 |         """Test that legitimate paths are still allowed with mocked API responses."""
128 |         # Test various safe path patterns with mocked API responses
129 |         safe_paths = [
130 |             "notes/meeting.md",
131 |             "docs/readme.txt",
132 |             "projects/2025/planning.md",
133 |             "archive/old-notes/backup.md",
134 |             "assets/diagram.png",
135 |             "folder/subfolder/document.md",
136 |         ]
137 | 
138 |         for safe_path in safe_paths:
139 |             # Mock the API call to simulate a successful response
140 |             with patch("basic_memory.mcp.tools.read_content.call_get") as mock_call_get:
141 |                 mock_response = MagicMock()
142 |                 mock_response.headers = {"content-type": "text/markdown", "content-length": "100"}
143 |                 mock_response.text = f"# Content for {safe_path}\nThis is test content."
144 |                 mock_call_get.return_value = mock_response
145 | 
146 |                 result = await read_content.fn(project=test_project.name, path=safe_path)
147 | 
148 |                 # Should succeed (not a security error)
149 |                 assert isinstance(result, dict)
150 |                 assert (
151 |                     result["type"] != "error"
152 |                     or "paths must stay within project boundaries"
153 |                     not in result.get("error", "")
154 |                 )
155 | 
156 |     @pytest.mark.asyncio
157 |     async def test_read_content_memory_url_processing(self, client, test_project):
158 |         """Test that memory URLs are processed correctly for security validation."""
159 |         # Test memory URLs with attacks
160 |         attack_paths = [
161 |             "memory://../../etc/passwd",
162 |             "memory://../../../root/.ssh/id_rsa",
163 |             "memory://~/.env",
164 |             "memory:///etc/passwd",
165 |         ]
166 | 
167 |         for attack_path in attack_paths:
168 |             result = await read_content.fn(project=test_project.name, path=attack_path)
169 | 
170 |             assert isinstance(result, dict)
171 |             assert result["type"] == "error"
172 |             assert "paths must stay within project boundaries" in result["error"]
173 | 
174 |     @pytest.mark.asyncio
175 |     async def test_read_content_security_logging(self, client, caplog, test_project):
176 |         """Test that security violations are properly logged."""
177 |         # Attempt path traversal attack
178 |         result = await read_content.fn(project=test_project.name, path="../../../etc/passwd")
179 | 
180 |         assert result["type"] == "error"
181 |         assert "paths must stay within project boundaries" in result["error"]
182 | 
183 |         # The security validation should also emit a warning log entry.
184 |         # loguru does not propagate to the stdlib logging that caplog
185 |         # captures by default, so no log assertion is made here.
186 | 
187 |     @pytest.mark.asyncio
188 |     async def test_read_content_empty_path_security(self, client, test_project):
189 |         """Test that empty path is handled securely."""
190 |         # Mock the API call since empty path should be allowed (resolves to project root)
191 |         with patch("basic_memory.mcp.tools.read_content.call_get") as mock_call_get:
192 |             mock_response = MagicMock()
193 |             mock_response.headers = {"content-type": "text/markdown", "content-length": "50"}
194 |             mock_response.text = "# Root content"
195 |             mock_call_get.return_value = mock_response
196 | 
197 |             result = await read_content.fn(project=test_project.name, path="")
198 | 
199 |             assert isinstance(result, dict)
200 |             # Empty path should not trigger security error (it's handled as project root)
201 |             assert (
202 |                 result["type"] != "error"
203 |                 or "paths must stay within project boundaries"
204 |                 not in result.get("error", "")
205 |             )
206 | 
207 |     @pytest.mark.asyncio
208 |     async def test_read_content_current_directory_references_security(self, client, test_project):
209 |         """Test that current directory references are handled securely."""
210 |         # Test current directory references (should be safe)
211 |         safe_paths = [
212 |             "./notes/file.md",
213 |             "folder/./file.md",
214 |             "./folder/subfolder/file.md",
215 |         ]
216 | 
217 |         for safe_path in safe_paths:
218 |             # Mock the API call for these safe paths
219 |             with patch("basic_memory.mcp.tools.read_content.call_get") as mock_call_get:
220 |                 mock_response = MagicMock()
221 |                 mock_response.headers = {"content-type": "text/markdown", "content-length": "100"}
222 |                 mock_response.text = f"# Content for {safe_path}"
223 |                 mock_call_get.return_value = mock_response
224 | 
225 |                 result = await read_content.fn(project=test_project.name, path=safe_path)
226 | 
227 |                 assert isinstance(result, dict)
228 |                 # Should NOT contain security error message
229 |                 assert (
230 |                     result["type"] != "error"
231 |                     or "paths must stay within project boundaries"
232 |                     not in result.get("error", "")
233 |                 )
234 | 
235 | 
236 | class TestReadContentFunctionality:
237 |     """Test read_content basic functionality with security validation in place."""
238 | 
239 |     @pytest.mark.asyncio
240 |     async def test_read_content_text_file_success(self, client, test_project):
241 |         """Test reading a text file works correctly with security validation."""
242 |         # First create a file to read
243 |         await write_note.fn(
244 |             project=test_project.name,
245 |             title="Test Document",
246 |             folder="docs",
247 |             content="# Test Document\nThis is test content for reading.",
248 |         )
249 | 
250 |         # Mock the API call to simulate reading the file
251 |         with patch("basic_memory.mcp.tools.read_content.call_get") as mock_call_get:
252 |             mock_response = MagicMock()
253 |             mock_response.headers = {"content-type": "text/markdown", "content-length": "100"}
254 |             mock_response.text = "# Test Document\nThis is test content for reading."
255 |             mock_call_get.return_value = mock_response
256 | 
257 |             result = await read_content.fn(project=test_project.name, path="docs/test-document.md")
258 | 
259 |             assert isinstance(result, dict)
260 |             assert result["type"] == "text"
261 |             assert "Test Document" in result["text"]
262 |             assert result["content_type"] == "text/markdown"
263 |             assert result["encoding"] == "utf-8"
264 | 
265 |     @pytest.mark.asyncio
266 |     async def test_read_content_image_file_handling(self, client, test_project):
267 |         """Test reading an image file with security validation."""
268 |         # Mock the API call to simulate reading an image
269 |         with patch("basic_memory.mcp.tools.read_content.call_get") as mock_call_get:
270 |             # Create a simple fake image data
271 |             fake_image_data = b"\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x00\x01\x00\x00\x00\x01\x08\x06\x00\x00\x00\x1f\x15\xc4\x89\x00\x00\x00\rIDATx\x9cc\x00\x01\x00\x00\x05\x00\x01\r\n-\xdb\x00\x00\x00\x00IEND\xaeB`\x82"
272 | 
273 |             mock_response = MagicMock()
274 |             mock_response.headers = {
275 |                 "content-type": "image/png",
276 |                 "content-length": str(len(fake_image_data)),
277 |             }
278 |             mock_response.content = fake_image_data
279 |             mock_call_get.return_value = mock_response
280 | 
281 |             # Mock PIL Image processing
282 |             with patch("basic_memory.mcp.tools.read_content.PILImage") as mock_pil:
283 |                 mock_img = MagicMock()
284 |                 mock_img.width = 100
285 |                 mock_img.height = 100
286 |                 mock_img.mode = "RGB"
287 |                 mock_img.getbands.return_value = ["R", "G", "B"]
288 |                 mock_pil.open.return_value = mock_img
289 | 
290 |                 with patch("basic_memory.mcp.tools.read_content.optimize_image") as mock_optimize:
291 |                     mock_optimize.return_value = b"optimized_image_data"
292 | 
293 |                     result = await read_content.fn(
294 |                         project=test_project.name, path="assets/safe-image.png"
295 |                     )
296 | 
297 |                     assert isinstance(result, dict)
298 |                     assert result["type"] == "image"
299 |                     assert "source" in result
300 |                     assert result["source"]["type"] == "base64"
301 |                     assert result["source"]["media_type"] == "image/jpeg"
302 | 
303 |     @pytest.mark.asyncio
304 |     async def test_read_content_with_project_parameter(self, client, test_project):
305 |         """Test reading content with explicit project parameter."""
306 |         # Mock the API call and project configuration
307 |         with patch("basic_memory.mcp.tools.read_content.call_get") as mock_call_get:
308 |             with patch(
309 |                 "basic_memory.mcp.tools.read_content.get_active_project"
310 |             ) as mock_get_project:
311 |                 # Mock project configuration
312 |                 mock_project = MagicMock()
313 |                 mock_project.project_url = "http://test"
314 |                 mock_project.home = Path("/test/project")
315 |                 mock_get_project.return_value = mock_project
316 | 
317 |                 mock_response = MagicMock()
318 |                 mock_response.headers = {"content-type": "text/plain", "content-length": "50"}
319 |                 mock_response.text = "Project-specific content"
320 |                 mock_call_get.return_value = mock_response
321 | 
322 |                 result = await read_content.fn(
323 |                     path="notes/project-file.txt", project="specific-project"
324 |                 )
325 | 
326 |                 assert isinstance(result, dict)
327 |                 assert result["type"] == "text"
328 |                 assert "Project-specific content" in result["text"]
329 | 
330 |     @pytest.mark.asyncio
331 |     async def test_read_content_nonexistent_file_handling(self, client, test_project):
332 |         """Test handling of nonexistent files (after security validation)."""
333 |         # Mock the API call to fail, simulating a missing file
334 |         with patch("basic_memory.mcp.tools.read_content.call_get") as mock_call_get:
335 |             mock_call_get.side_effect = Exception("File not found")
336 | 
337 |             # This should pass security validation but fail on API call
338 |             try:
339 |                 result = await read_content.fn(
340 |                     project=test_project.name, path="docs/nonexistent-file.md"
341 |                 )
342 |                 # If no exception is raised, check the result format
343 |                 assert isinstance(result, dict)
344 |             except Exception as e:
345 |                 # Exception due to API failure is acceptable for this test
346 |                 assert "File not found" in str(e)
347 | 
348 |     @pytest.mark.asyncio
349 |     async def test_read_content_binary_file_handling(self, client, test_project):
350 |         """Test reading binary files with security validation."""
351 |         # Mock the API call to simulate reading a binary file
352 |         with patch("basic_memory.mcp.tools.read_content.call_get") as mock_call_get:
353 |             binary_data = b"Binary file content with special bytes: \x00\x01\x02\x03"
354 | 
355 |             mock_response = MagicMock()
356 |             mock_response.headers = {
357 |                 "content-type": "application/octet-stream",
358 |                 "content-length": str(len(binary_data)),
359 |             }
360 |             mock_response.content = binary_data
361 |             mock_call_get.return_value = mock_response
362 | 
363 |             result = await read_content.fn(project=test_project.name, path="files/safe-binary.bin")
364 | 
365 |             assert isinstance(result, dict)
366 |             assert result["type"] == "document"
367 |             assert "source" in result
368 |             assert result["source"]["type"] == "base64"
369 |             assert result["source"]["media_type"] == "application/octet-stream"
370 | 
371 | 
372 | class TestReadContentEdgeCases:
373 |     """Test edge cases for read_content security validation."""
374 | 
375 |     @pytest.mark.asyncio
376 |     async def test_read_content_unicode_path_attacks(self, client, test_project):
377 |         """Test that Unicode-based path traversal attempts are blocked."""
378 |         # Test Unicode path traversal attempts
379 |         unicode_attacks = [
380 |             "notes/文档/../../../etc/passwd",  # Chinese characters
381 |             "docs/café/../../.env",  # Accented characters
382 |             "files/αβγ/../../../secret.txt",  # Greek characters
383 |         ]
384 | 
385 |         for attack_path in unicode_attacks:
386 |             result = await read_content.fn(project=test_project.name, path=attack_path)
387 | 
388 |             assert isinstance(result, dict)
389 |             assert result["type"] == "error"
390 |             assert "paths must stay within project boundaries" in result["error"]
391 | 
392 |     @pytest.mark.asyncio
393 |     async def test_read_content_url_encoded_attacks(self, client, test_project):
394 |         """Test that URL-encoded path traversal attempts are handled safely."""
395 |         # Note: The current implementation may not handle URL encoding,
396 |         # but this tests the behavior with URL-encoded patterns
397 |         encoded_attacks = [
398 |             "notes%2f..%2f..%2f..%2fetc%2fpasswd",
399 |             "docs%2f%2e%2e%2f%2e%2e%2f.env",
400 |         ]
401 | 
402 |         for attack_path in encoded_attacks:
403 |             try:
404 |                 result = await read_content.fn(project=test_project.name, path=attack_path)
405 | 
406 |                 # These may or may not be blocked depending on URL decoding,
407 |                 # but should not cause security issues
408 |                 assert isinstance(result, dict)
409 | 
410 |                 # If not blocked by security validation, may fail at API level
411 |                 # which is also acceptable
412 | 
413 |             except Exception:
414 |                 # Exception due to API failure or other issues is acceptable
415 |                 # as long as no actual traversal occurs
416 |                 pass
417 | 
418 |     @pytest.mark.asyncio
419 |     async def test_read_content_null_byte_injection(self, client, test_project):
420 |         """Test that null byte injection attempts are blocked."""
421 |         # Test null byte injection patterns
422 |         null_byte_attacks = [
423 |             "notes/file.txt\x00../../etc/passwd",
424 |             "docs/document.md\x00../../../.env",
425 |         ]
426 | 
427 |         for attack_path in null_byte_attacks:
428 |             result = await read_content.fn(project=test_project.name, path=attack_path)
429 | 
430 |             assert isinstance(result, dict)
431 |             # Should be blocked by security validation or fail on the
432 |             # invalid null byte. Either outcome prevents traversal, so
433 |             # this test makes no stricter assertion about the error
434 |             # shape returned.
435 | 
436 |     @pytest.mark.asyncio
437 |     async def test_read_content_very_long_attack_path(self, client, test_project):
438 |         """Test handling of very long attack paths."""
439 |         # Create a very long path traversal attack
440 |         long_attack = "../" * 1000 + "etc/passwd"
441 | 
442 |         result = await read_content.fn(project=test_project.name, path=long_attack)
443 | 
444 |         assert isinstance(result, dict)
445 |         assert result["type"] == "error"
446 |         assert "paths must stay within project boundaries" in result["error"]
447 | 
448 |     @pytest.mark.asyncio
449 |     async def test_read_content_case_variations_attacks(self, client, test_project):
450 |         """Test that case variations don't bypass security."""
451 |         # Test case variations (though case sensitivity depends on filesystem)
452 |         case_attacks = [
453 |             "../ETC/passwd",
454 |             "../Etc/PASSWD",
455 |             "..\\WINDOWS\\system32",
456 |             "~/.SSH/id_rsa",
457 |         ]
458 | 
459 |         for attack_path in case_attacks:
460 |             result = await read_content.fn(project=test_project.name, path=attack_path)
461 | 
462 |             assert isinstance(result, dict)
463 |             assert result["type"] == "error"
464 |             assert "paths must stay within project boundaries" in result["error"]
465 | 
```
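
These suites all assert the same "paths must stay within project boundaries" message, which points to a single validation step that runs before any API call is made. As a rough sketch of what that check has to do to satisfy the cases above (the helper name and exact rules are illustrative assumptions, not the project's actual implementation):

```python
from pathlib import PurePosixPath, PureWindowsPath


def stays_within_project(path: str) -> bool:
    """Hypothetical boundary check mirroring what the tests above exercise."""
    # Home references, absolute paths, and drive letters can never be
    # relative to the project root, so they are rejected outright.
    if path.startswith(("~", "/")) or PureWindowsPath(path).drive:
        return False
    # Reject any parent-directory component outright -- the mixed-pattern
    # cases (e.g. "legitimate/path/../../.ssh/id_rsa", which would resolve
    # back inside the project) show that ".." is never tolerated.
    return ".." not in PurePosixPath(path.replace("\\", "/")).parts
```

The memory:// cases suggest the real tool strips the URL scheme before applying the same validation, and the empty-path test shows that `""` must pass through, since it resolves to the project root.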

--------------------------------------------------------------------------------
/src/basic_memory/sync/watch_service.py:
--------------------------------------------------------------------------------

```python
  1 | """Watch service for Basic Memory."""
  2 | 
  3 | import asyncio
  4 | import os
  5 | from collections import defaultdict
  6 | from datetime import datetime
  7 | from pathlib import Path
  8 | from typing import List, Optional, Set, Sequence
  9 | 
 10 | from basic_memory.config import BasicMemoryConfig, WATCH_STATUS_JSON
 11 | from basic_memory.ignore_utils import load_gitignore_patterns, should_ignore_path
 12 | from basic_memory.models import Project
 13 | from basic_memory.repository import ProjectRepository
 14 | from loguru import logger
 15 | from pydantic import BaseModel, Field
 16 | from rich.console import Console
 17 | from watchfiles import awatch
 18 | from watchfiles.main import FileChange, Change
 19 | import time
 20 | 
 21 | 
 22 | class WatchEvent(BaseModel):
 23 |     timestamp: datetime
 24 |     path: str
 25 |     action: str  # new, modified, deleted, moved, sync
 26 |     status: str  # success, error
 27 |     checksum: Optional[str]
 28 |     error: Optional[str] = None
 29 | 
 30 | 
 31 | class WatchServiceState(BaseModel):
 32 |     # Service status
 33 |     running: bool = False
 34 |     start_time: datetime = Field(default_factory=datetime.now)  # evaluated per instance
 35 |     pid: int = Field(default_factory=os.getpid)  # evaluated per instance
 36 | 
 37 |     # Stats
 38 |     error_count: int = 0
 39 |     last_error: Optional[datetime] = None
 40 |     last_scan: Optional[datetime] = None
 41 | 
 42 |     # File counts
 43 |     synced_files: int = 0
 44 | 
 45 |     # Recent activity
 46 |     recent_events: List[WatchEvent] = Field(default_factory=list)
 47 | 
 48 |     def add_event(
 49 |         self,
 50 |         path: str,
 51 |         action: str,
 52 |         status: str,
 53 |         checksum: Optional[str] = None,
 54 |         error: Optional[str] = None,
 55 |     ) -> WatchEvent:
 56 |         event = WatchEvent(
 57 |             timestamp=datetime.now(),
 58 |             path=path,
 59 |             action=action,
 60 |             status=status,
 61 |             checksum=checksum,
 62 |             error=error,
 63 |         )
 64 |         self.recent_events.insert(0, event)
 65 |         self.recent_events = self.recent_events[:100]  # Keep last 100
 66 |         return event
 67 | 
 68 |     def record_error(self, error: str):
 69 |         self.error_count += 1
 70 |         self.add_event(path="", action="sync", status="error", error=error)
 71 |         self.last_error = datetime.now()
 72 | 
 73 | 
 74 | class WatchService:
 75 |     def __init__(
 76 |         self,
 77 |         app_config: BasicMemoryConfig,
 78 |         project_repository: ProjectRepository,
 79 |         quiet: bool = False,
 80 |     ):
 81 |         self.app_config = app_config
 82 |         self.project_repository = project_repository
 83 |         self.state = WatchServiceState()
 84 |         self.status_path = Path.home() / ".basic-memory" / WATCH_STATUS_JSON
 85 |         self.status_path.parent.mkdir(parents=True, exist_ok=True)
 86 |         self._ignore_patterns_cache: dict[Path, Set[str]] = {}
 87 | 
 88 |         # quiet mode for mcp so it doesn't mess up stdout
 89 |         self.console = Console(quiet=quiet)
 90 | 
 91 |     async def _schedule_restart(self, stop_event: asyncio.Event):
 92 |         """Schedule a restart of the watch service after the configured interval."""
 93 |         await asyncio.sleep(self.app_config.watch_project_reload_interval)
 94 |         stop_event.set()
 95 | 
 96 |     def _get_ignore_patterns(self, project_path: Path) -> Set[str]:
 97 |         """Get or load ignore patterns for a project path."""
 98 |         if project_path not in self._ignore_patterns_cache:
 99 |             self._ignore_patterns_cache[project_path] = load_gitignore_patterns(project_path)
100 |         return self._ignore_patterns_cache[project_path]
101 | 
102 |     async def _watch_projects_cycle(self, projects: Sequence[Project], stop_event: asyncio.Event):
103 |         """Run one cycle of watching the given projects until stop_event is set."""
104 |         project_paths = [project.path for project in projects]
105 | 
106 |         async for changes in awatch(
107 |             *project_paths,
108 |             debounce=self.app_config.sync_delay,
109 |             watch_filter=self.filter_changes,
110 |             recursive=True,
111 |             stop_event=stop_event,
112 |         ):
113 |             # group changes by project and filter using ignore patterns
114 |             project_changes = defaultdict(list)
115 |             for change, path in changes:
116 |                 for project in projects:
117 |                     if self.is_project_path(project, path):
118 |                         # Check if the file should be ignored based on gitignore patterns
119 |                         project_path = Path(project.path)
120 |                         file_path = Path(path)
121 |                         ignore_patterns = self._get_ignore_patterns(project_path)
122 | 
123 |                         if should_ignore_path(file_path, project_path, ignore_patterns):
124 |                             logger.trace(
125 |                                 f"Ignoring watched file change: {file_path.relative_to(project_path)}"
126 |                             )
127 |                             continue
128 | 
129 |                         project_changes[project].append((change, path))
130 |                         break
131 | 
132 |             # create coroutines to handle changes
133 |             change_handlers = [
134 |                 self.handle_changes(project, changes)  # pyright: ignore
135 |                 for project, changes in project_changes.items()
136 |             ]
137 | 
138 |             # process changes
139 |             await asyncio.gather(*change_handlers)
140 | 
141 |     async def run(self):  # pragma: no cover
142 |         """Watch for file changes and sync them"""
143 | 
144 |         self.state.running = True
145 |         self.state.start_time = datetime.now()
146 |         await self.write_status()
147 | 
148 |         logger.info(
149 |             "Watch service started, "
150 |             f"debounce_ms={self.app_config.sync_delay}, "
151 |             f"pid={os.getpid()}"
152 |         )
153 | 
154 |         try:
155 |             while self.state.running:
156 |                 # Clear ignore patterns cache to pick up any .gitignore changes
157 |                 self._ignore_patterns_cache.clear()
158 | 
159 |                 # Reload projects to catch any new/removed projects
160 |                 projects = await self.project_repository.get_active_projects()
161 | 
162 |                 project_paths = [project.path for project in projects]
163 |                 logger.debug(f"Starting watch cycle for directories: {project_paths}")
164 | 
165 |                 # Create stop event for this watch cycle
166 |                 stop_event = asyncio.Event()
167 | 
168 |                 # Schedule restart after configured interval to reload projects
169 |                 timer_task = asyncio.create_task(self._schedule_restart(stop_event))
170 | 
171 |                 try:
172 |                     await self._watch_projects_cycle(projects, stop_event)
173 |                 except Exception as e:
174 |                     logger.exception("Watch service error during cycle", error=str(e))
175 |                     self.state.record_error(str(e))
176 |                     await self.write_status()
177 |                     # Continue to next cycle instead of exiting
178 |                     await asyncio.sleep(5)  # Brief pause before retry
179 |                 finally:
180 |                     # Cancel timer task if it's still running
181 |                     if not timer_task.done():
182 |                         timer_task.cancel()
183 |                         try:
184 |                             await timer_task
185 |                         except asyncio.CancelledError:
186 |                             pass
187 | 
188 |         except Exception as e:
189 |             logger.exception("Watch service error", error=str(e))
190 |             self.state.record_error(str(e))
191 |             await self.write_status()
192 |             raise
193 | 
194 |         finally:
195 |             logger.info(
196 |                 "Watch service stopped, "
197 |                 f"runtime_seconds={int((datetime.now() - self.state.start_time).total_seconds())}"
198 |             )
199 | 
200 |             self.state.running = False
201 |             await self.write_status()
202 | 
203 |     def filter_changes(self, change: Change, path: str) -> bool:  # pragma: no cover
204 |         """Filter to only watch non-hidden files and directories.
205 | 
206 |         Returns:
207 |             True if the file should be watched, False if it should be ignored
208 |         """
209 | 
210 |         # Skip hidden directories and files
211 |         path_parts = Path(path).parts
212 |         for part in path_parts:
213 |             if part.startswith("."):
214 |                 return False
215 | 
216 |         # Skip temp files used in atomic operations
217 |         if path.endswith(".tmp"):
218 |             return False
219 | 
220 |         return True
221 | 
222 |     async def write_status(self):
223 |         """Write current state to status file"""
224 |         self.status_path.write_text(self.state.model_dump_json(indent=2))
225 | 
226 |     def is_project_path(self, project: Project, path):
227 |         """
228 |         Checks if path is a subdirectory or file within a project
229 |         """
230 |         project_path = Path(project.path).resolve()
231 |         sub_path = Path(path).resolve()
232 |         return project_path in sub_path.parents
233 | 
234 |     async def handle_changes(self, project: Project, changes: Set[FileChange]) -> None:
235 |         """Process a batch of file changes"""
236 |         # avoid circular imports
237 |         from basic_memory.sync.sync_service import get_sync_service
238 | 
239 |         # Check if project still exists in configuration before processing
240 |         # This prevents deleted projects from being recreated by background sync
241 |         from basic_memory.config import ConfigManager
242 | 
243 |         config_manager = ConfigManager()
244 |         if (
245 |             project.name not in config_manager.projects
246 |             and project.permalink not in config_manager.projects
247 |         ):
248 |             logger.info(
249 |                 f"Skipping sync for deleted project: {project.name}, change_count={len(changes)}"
250 |             )
251 |             return
252 | 
253 |         sync_service = await get_sync_service(project)
254 |         file_service = sync_service.file_service
255 | 
256 |         start_time = time.time()
257 |         directory = Path(project.path).resolve()
258 |         logger.info(
259 |             f"Processing project: {project.name} changes, change_count={len(changes)}, directory={directory}"
260 |         )
261 | 
262 |         # Group changes by type
263 |         adds: List[str] = []
264 |         deletes: List[str] = []
265 |         modifies: List[str] = []
266 | 
267 |         for change, path in changes:
268 |             # convert to relative path
269 |             relative_path = Path(path).relative_to(directory).as_posix()
270 | 
271 |             # Skip .tmp files - they're temporary and shouldn't be synced
272 |             if relative_path.endswith(".tmp"):
273 |                 continue
274 | 
275 |             if change == Change.added:
276 |                 adds.append(relative_path)
277 |             elif change == Change.deleted:
278 |                 deletes.append(relative_path)
279 |             elif change == Change.modified:
280 |                 modifies.append(relative_path)
281 | 
282 |         logger.debug(
283 |             f"Grouped file changes, added={len(adds)}, deleted={len(deletes)}, modified={len(modifies)}"
284 |         )
285 | 
286 |         # because of our atomic writes on updates, an add may be an existing file
287 |         for added_path in list(adds):  # pragma: no cover TODO add test (copy: adds is mutated below)
288 |             entity = await sync_service.entity_repository.get_by_file_path(added_path)
289 |             if entity is not None:
290 |                 logger.debug(f"Existing file will be processed as modified, path={added_path}")
291 |                 adds.remove(added_path)
292 |                 modifies.append(added_path)
293 | 
294 |         # Track processed files to avoid duplicates
295 |         processed: Set[str] = set()
296 | 
297 |         # First handle potential moves
298 |         for added_path in adds:
299 |             if added_path in processed:
300 |                 continue  # pragma: no cover
301 | 
302 |             # Skip directories for added paths
303 |             # We don't need to process directories, only the files inside them
304 |             # This prevents errors when trying to compute checksums or read directories as files
305 |             added_full_path = directory / added_path
306 |             if not added_full_path.exists() or added_full_path.is_dir():
307 |                 logger.debug("Skipping non-existent or directory path", path=added_path)
308 |                 processed.add(added_path)
309 |                 continue
310 | 
311 |             for deleted_path in deletes:
312 |                 if deleted_path in processed:
313 |                     continue  # pragma: no cover
314 | 
315 |                 # Skip directories for deleted paths (based on entity type in db)
316 |                 deleted_entity = await sync_service.entity_repository.get_by_file_path(deleted_path)
317 |                 if deleted_entity is None:
318 |                     # If this was a directory, it wouldn't have an entity
319 |                     logger.debug("Skipping unknown path for move detection", path=deleted_path)
320 |                     continue
321 | 
322 |                 if added_path != deleted_path:
323 |                     # Compare checksums to detect moves
324 |                     try:
325 |                         added_checksum = await file_service.compute_checksum(added_path)
326 | 
327 |                         if deleted_entity and deleted_entity.checksum == added_checksum:
328 |                             await sync_service.handle_move(deleted_path, added_path)
329 |                             self.state.add_event(
330 |                                 path=f"{deleted_path} -> {added_path}",
331 |                                 action="moved",
332 |                                 status="success",
333 |                             )
334 |                             self.console.print(f"[blue]→[/blue] {deleted_path} → {added_path}")
335 |                             logger.info(f"move: {deleted_path} -> {added_path}")
336 |                             processed.add(added_path)
337 |                             processed.add(deleted_path)
338 |                             break
339 |                     except Exception as e:  # pragma: no cover
340 |                         logger.warning(
341 |                             "Error checking for move, "
342 |                             f"old_path={deleted_path}, "
343 |                             f"new_path={added_path}, "
344 |                             f"error={str(e)}"
345 |                         )
346 | 
347 |         # Handle remaining changes - group them by type for concise output
348 |         moved_count = len([p for p in processed if p in deletes])  # one deleted path per move
349 |         delete_count = 0
350 |         add_count = 0
351 |         modify_count = 0
352 | 
353 |         # Process deletes
354 |         for path in deletes:
355 |             if path not in processed:
356 |                 # Check if file still exists on disk (vim atomic write edge case)
357 |                 full_path = directory / path
358 |                 if full_path.exists() and full_path.is_file():
359 |                     # File still exists despite DELETE event - treat as modification
360 |                     logger.debug(
361 |                         "File exists despite DELETE event, treating as modification", path=path
362 |                     )
363 |                     entity, checksum = await sync_service.sync_file(path, new=False)
364 |                     self.state.add_event(
365 |                         path=path, action="modified", status="success", checksum=checksum
366 |                     )
367 |                     self.console.print(f"[yellow]✎[/yellow] {path} (atomic write)")
368 |                     logger.info(f"atomic write detected: {path}")
369 |                     processed.add(path)
370 |                     modify_count += 1
371 |                 else:
372 |                     # Check if this was a directory - skip if so
373 |                     # (we can't tell if the deleted path was a directory since it no longer exists,
374 |                     # so we check if there's an entity in the database for it)
375 |                     entity = await sync_service.entity_repository.get_by_file_path(path)
376 |                     if entity is None:
377 |                         # No entity means this was likely a directory - skip it
378 |                         logger.debug(
379 |                             f"Skipping deleted path with no entity (likely directory), path={path}"
380 |                         )
381 |                         processed.add(path)
382 |                         continue
383 | 
384 |                     # File truly deleted
385 |                     logger.debug("Processing deleted file", path=path)
386 |                     await sync_service.handle_delete(path)
387 |                     self.state.add_event(path=path, action="deleted", status="success")
388 |                     self.console.print(f"[red]✕[/red] {path}")
389 |                     logger.info(f"deleted: {path}")
390 |                     processed.add(path)
391 |                     delete_count += 1
392 | 
393 |         # Process adds
394 |         for path in adds:
395 |             if path not in processed:
396 |                 # Skip directories - only process files
397 |                 full_path = directory / path
398 |                 if not full_path.exists() or full_path.is_dir():
399 |                     logger.debug(
400 |                         f"Skipping non-existent or directory path, path={path}"
401 |                     )  # pragma: no cover
402 |                     processed.add(path)  # pragma: no cover
403 |                     continue  # pragma: no cover
404 | 
405 |                 logger.debug(f"Processing new file, path={path}")
406 |                 entity, checksum = await sync_service.sync_file(path, new=True)
407 |                 if checksum:
408 |                     self.state.add_event(
409 |                         path=path, action="new", status="success", checksum=checksum
410 |                     )
411 |                     self.console.print(f"[green]✓[/green] {path}")
412 |                     logger.info(
413 |                         "new file processed, "
414 |                         f"path={path}, "
415 |                         f"checksum={checksum}"
416 |                     )
417 |                     processed.add(path)
418 |                     add_count += 1
419 |                 else:  # pragma: no cover
420 |                     logger.warning(f"Error syncing new file, path={path}")  # pragma: no cover
421 |                     self.console.print(
422 |                         f"[orange]?[/orange] Error syncing: {path}"
423 |                     )  # pragma: no cover
424 | 
425 |         # Process modifies - detect repeats
426 |         last_modified_path = None
427 |         repeat_count = 0
428 | 
429 |         for path in modifies:
430 |             if path not in processed:
431 |                 # Skip directories - only process files
432 |                 full_path = directory / path
433 |                 if not full_path.exists() or full_path.is_dir():
434 |                     logger.debug("Skipping non-existent or directory path", path=path)
435 |                     processed.add(path)
436 |                     continue
437 | 
438 |                 logger.debug(f"Processing modified file: path={path}")
439 |                 entity, checksum = await sync_service.sync_file(path, new=False)
440 |                 self.state.add_event(
441 |                     path=path, action="modified", status="success", checksum=checksum
442 |                 )
443 | 
444 |                 # Check if this is a repeat of the last modified file
445 |                 if path == last_modified_path:  # pragma: no cover
446 |                     repeat_count += 1  # pragma: no cover
447 |                     # Only show a message for the first repeat
448 |                     if repeat_count == 1:  # pragma: no cover
449 |                         self.console.print(
450 |                             f"[yellow]...[/yellow] Repeated changes to {path}"
451 |                         )  # pragma: no cover
452 |                 else:
453 |                     # haven't processed this file
454 |                     self.console.print(f"[yellow]✎[/yellow] {path}")
455 |                     logger.info(f"modified: {path}")
456 |                     last_modified_path = path
457 |                     repeat_count = 0
458 |                     modify_count += 1
459 | 
460 |                 logger.debug(  # pragma: no cover
461 |                     "Modified file processed, "
462 |                     f"path={path} "
463 |                     f"entity_id={entity.id if entity else None} "
464 |                     f"checksum={checksum}",
465 |                 )
466 |                 processed.add(path)
467 | 
468 |         # Add a concise summary instead of a divider
469 |         if processed:
470 |             changes = []  # pyright: ignore
471 |             if add_count > 0:
472 |                 changes.append(f"[green]{add_count} added[/green]")  # pyright: ignore
473 |             if modify_count > 0:
474 |                 changes.append(f"[yellow]{modify_count} modified[/yellow]")  # pyright: ignore
475 |             if moved_count > 0:
476 |                 changes.append(f"[blue]{moved_count} moved[/blue]")  # pyright: ignore
477 |             if delete_count > 0:
478 |                 changes.append(f"[red]{delete_count} deleted[/red]")  # pyright: ignore
479 | 
480 |             if changes:
481 |                 self.console.print(f"{', '.join(changes)}", style="dim")  # pyright: ignore
482 |                 logger.info(f"changes: {len(changes)}")
483 | 
484 |         duration_ms = int((time.time() - start_time) * 1000)
485 |         self.state.last_scan = datetime.now()
486 |         self.state.synced_files += len(processed)
487 | 
488 |         logger.info(
489 |             "File change processing completed, "
490 |             f"processed_files={len(processed)}, "
491 |             f"total_synced_files={self.state.synced_files}, "
492 |             f"duration_ms={duration_ms}"
493 |         )
494 | 
495 |         await self.write_status()
496 | 
```
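
One idiom in `run()` above is worth calling out: instead of watching forever, the service pairs `awatch`'s `stop_event` with a timer task, so each watch cycle tears itself down and the outer loop re-reads projects and ignore patterns before starting the next one. Stripped of the sync logic, the pattern looks roughly like this (paths and interval are placeholders; `print` stands in for real change handling):

```python
import asyncio

from watchfiles import awatch


async def _trip_after(event: asyncio.Event, delay: float) -> None:
    """Set the event after a delay, ending the current awatch cycle."""
    await asyncio.sleep(delay)
    event.set()


async def watch_with_periodic_reload(paths: list[str], reload_interval: float) -> None:
    while True:
        # A real service would refresh its watch list here, as run() does
        # with project_repository.get_active_projects().
        stop_event = asyncio.Event()
        timer = asyncio.create_task(_trip_after(stop_event, reload_interval))
        try:
            # awatch exits its async-for cleanly once stop_event is set.
            async for changes in awatch(*paths, stop_event=stop_event):
                print(changes)
        finally:
            if not timer.done():
                timer.cancel()
                try:
                    await timer
                except asyncio.CancelledError:
                    pass
```

For example, `asyncio.run(watch_with_periodic_reload(["/some/dir"], 300))` watches indefinitely while re-entering `awatch` every five minutes.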

--------------------------------------------------------------------------------
/src/basic_memory/mcp/tools/recent_activity.py:
--------------------------------------------------------------------------------

```python
  1 | """Recent activity tool for Basic Memory MCP server."""
  2 | 
  3 | from typing import List, Union, Optional
  4 | 
  5 | from loguru import logger
  6 | from fastmcp import Context
  7 | 
  8 | from basic_memory.mcp.async_client import get_client
  9 | from basic_memory.mcp.project_context import get_active_project, resolve_project_parameter
 10 | from basic_memory.mcp.server import mcp
 11 | from basic_memory.mcp.tools.utils import call_get
 12 | from basic_memory.schemas.base import TimeFrame
 13 | from basic_memory.schemas.memory import (
 14 |     GraphContext,
 15 |     ProjectActivity,
 16 |     ActivityStats,
 17 | )
 18 | from basic_memory.schemas.project_info import ProjectList, ProjectItem
 19 | from basic_memory.schemas.search import SearchItemType
 20 | 
 21 | 
 22 | @mcp.tool(
 23 |     description="""Get recent activity for a project or across all projects.
 24 | 
 25 |     Timeframe supports natural language formats like:
 26 |     - "2 days ago"
 27 |     - "last week"
 28 |     - "yesterday"
 29 |     - "today"
 30 |     - "3 weeks ago"
 31 |     Or standard formats like "7d"
 32 |     """,
 33 | )
 34 | async def recent_activity(
 35 |     type: Union[str, List[str]] = "",
 36 |     depth: int = 1,
 37 |     timeframe: TimeFrame = "7d",
 38 |     project: Optional[str] = None,
 39 |     context: Context | None = None,
 40 | ) -> str:
 41 |     """Get recent activity for a specific project or across all projects.
 42 | 
 43 |     Project Resolution:
 44 |     The server resolves projects in this order:
 45 |     1. Single Project Mode - server constrained to one project, parameter ignored
 46 |     2. Explicit project parameter - specify which project to query
 47 |     3. Default project - server configured default if no project specified
 48 | 
 49 |     Discovery Mode:
 50 |     When no specific project can be resolved, returns activity across all projects
 51 |     to help discover available projects and their recent activity.
 52 | 
 53 |     Project Discovery (when project is unknown):
 54 |     1. Call list_memory_projects() to see available projects
 55 |     2. Or use this tool without project parameter to see cross-project activity
 56 |     3. Ask the user which project to focus on
 57 |     4. Remember their choice for the conversation
 58 | 
 59 |     Args:
 60 |         type: Filter by content type(s). Can be a string or list of strings.
 61 |             Valid options:
 62 |             - "entity" or ["entity"] for knowledge entities
 63 |             - "relation" or ["relation"] for connections between entities
 64 |             - "observation" or ["observation"] for notes and observations
 65 |             Multiple types can be combined: ["entity", "relation"]
 66 |             Case-insensitive: "ENTITY" and "entity" are treated the same.
 67 |             Default is an empty string, which returns all types.
 68 |         depth: How many relation hops to traverse (1-3 recommended)
 69 |         timeframe: Time window to search. Supports natural language:
 70 |             - Relative: "2 days ago", "last week", "yesterday"
 71 |             - Points in time: "2024-01-01", "January 1st"
 72 |             - Standard format: "7d", "24h"
 73 |         project: Project name to query. Optional - server will resolve using the
 74 |                 hierarchy above. If unknown, use list_memory_projects() to discover
 75 |                 available projects.
 76 |         context: Optional FastMCP context for performance caching.
 77 | 
 78 |     Returns:
 79 |         Human-readable summary of recent activity. When no specific project is
 80 |         resolved, returns cross-project discovery information. When a specific
 81 |         project is resolved, returns detailed activity for that project.
 82 | 
 83 |     Examples:
 84 |         # Cross-project discovery mode
 85 |         recent_activity()
 86 |         recent_activity(timeframe="yesterday")
 87 | 
 88 |         # Project-specific activity
 89 |         recent_activity(project="work-docs", type="entity", timeframe="yesterday")
 90 |         recent_activity(project="research", type=["entity", "relation"], timeframe="today")
 91 |         recent_activity(project="notes", type="entity", depth=2, timeframe="2 weeks ago")
 92 | 
 93 |     Raises:
 94 |         ToolError: If project doesn't exist or type parameter contains invalid values
 95 | 
 96 |     Notes:
 97 |         - Higher depth values (>3) may impact performance with large result sets
 98 |         - For focused queries, consider using build_context with a specific URI
 99 |         - Max timeframe is 1 year in the past
100 |     """
101 |     async with get_client() as client:
102 |         # Build common parameters for API calls
103 |         params = {
104 |             "page": 1,
105 |             "page_size": 10,
106 |             "max_related": 10,
107 |         }
108 |         if depth:
109 |             params["depth"] = depth
110 |         if timeframe:
111 |             params["timeframe"] = timeframe  # pyright: ignore
112 | 
113 |         # Validate and convert type parameter
114 |         if type:
115 |             # Convert single string to list
116 |             if isinstance(type, str):
117 |                 type_list = [type]
118 |             else:
119 |                 type_list = type
120 | 
121 |             # Validate each type against SearchItemType enum
122 |             validated_types = []
123 |             for t in type_list:
124 |                 try:
125 |                     # Try to convert string to enum
126 |                     if isinstance(t, str):
127 |                         validated_types.append(SearchItemType(t.lower()))
128 |                 except ValueError:
129 |                     valid_types = [item.value for item in SearchItemType]
130 |                     raise ValueError(f"Invalid type: {t}. Valid types are: {valid_types}")
131 | 
132 |             # Add validated types to params
133 |             params["type"] = [t.value for t in validated_types]  # pyright: ignore
134 | 
135 |         # Resolve project parameter using the three-tier hierarchy
136 |         resolved_project = await resolve_project_parameter(project)
137 | 
138 |         if resolved_project is None:
139 |             # Discovery Mode: Get activity across all projects
140 |             logger.info(
141 |                 f"Getting recent activity across all projects: type={type}, depth={depth}, timeframe={timeframe}"
142 |             )
143 | 
144 |             # Get list of all projects
145 |             response = await call_get(client, "/projects/projects")
146 |             project_list = ProjectList.model_validate(response.json())
147 | 
148 |             projects_activity = {}
149 |             total_items = 0
150 |             total_entities = 0
151 |             total_relations = 0
152 |             total_observations = 0
153 |             most_active_project = None
154 |             most_active_count = 0
155 |             active_projects = 0
156 | 
157 |             # Query each project's activity
158 |             for project_info in project_list.projects:
159 |                 project_activity = await _get_project_activity(client, project_info, params, depth)
160 |                 projects_activity[project_info.name] = project_activity
161 | 
162 |                 # Aggregate stats
163 |                 item_count = project_activity.item_count
164 |                 if item_count > 0:
165 |                     active_projects += 1
166 |                     total_items += item_count
167 | 
168 |                     # Count by type
169 |                     for result in project_activity.activity.results:
170 |                         if result.primary_result.type == "entity":
171 |                             total_entities += 1
172 |                         elif result.primary_result.type == "relation":
173 |                             total_relations += 1
174 |                         elif result.primary_result.type == "observation":
175 |                             total_observations += 1
176 | 
177 |                     # Track most active project
178 |                     if item_count > most_active_count:
179 |                         most_active_count = item_count
180 |                         most_active_project = project_info.name
181 | 
182 |             # Build summary stats
183 |             summary = ActivityStats(
184 |                 total_projects=len(project_list.projects),
185 |                 active_projects=active_projects,
186 |                 most_active_project=most_active_project,
187 |                 total_items=total_items,
188 |                 total_entities=total_entities,
189 |                 total_relations=total_relations,
190 |                 total_observations=total_observations,
191 |             )
192 | 
193 |             # Generate guidance for the assistant
194 |             guidance_lines = ["\n" + "─" * 40]
195 | 
196 |             if most_active_project and most_active_count > 0:
197 |                 guidance_lines.extend(
198 |                     [
199 |                         f"Suggested project: '{most_active_project}' (most active with {most_active_count} items)",
200 |                         f"Ask user: 'Should I use {most_active_project} for this task, or would you prefer a different project?'",
201 |                     ]
202 |                 )
203 |             elif active_projects > 0:
204 |                 # Has activity but no clear most active project
205 |                 active_project_names = [
206 |                     name for name, activity in projects_activity.items() if activity.item_count > 0
207 |                 ]
208 |                 if len(active_project_names) == 1:
209 |                     guidance_lines.extend(
210 |                         [
211 |                             f"Suggested project: '{active_project_names[0]}' (only active project)",
212 |                             f"Ask user: 'Should I use {active_project_names[0]} for this task?'",
213 |                         ]
214 |                     )
215 |                 else:
216 |                     guidance_lines.extend(
217 |                         [
218 |                             f"Multiple active projects found: {', '.join(active_project_names)}",
219 |                             "Ask user: 'Which project should I use for this task?'",
220 |                         ]
221 |                     )
222 |             else:
223 |                 # No recent activity
224 |                 guidance_lines.extend(
225 |                     [
226 |                         "No recent activity found in any project.",
227 |                         "Consider: Ask which project to use or if they want to create a new one.",
228 |                     ]
229 |                 )
230 | 
231 |             guidance_lines.extend(
232 |                 [
233 |                     "",
234 |                     "Session reminder: Remember their project choice throughout this conversation.",
235 |                 ]
236 |             )
237 | 
238 |             guidance = "\n".join(guidance_lines)
239 | 
240 |             # Format discovery mode output
241 |             return _format_discovery_output(projects_activity, summary, timeframe, guidance)
242 | 
243 |         else:
244 |             # Project-Specific Mode: Get activity for specific project
245 |             logger.info(
246 |                 f"Getting recent activity from project {resolved_project}: type={type}, depth={depth}, timeframe={timeframe}"
247 |             )
248 | 
249 |             active_project = await get_active_project(client, resolved_project, context)
250 |             project_url = active_project.project_url
251 | 
252 |             response = await call_get(
253 |                 client,
254 |                 f"{project_url}/memory/recent",
255 |                 params=params,
256 |             )
257 |             activity_data = GraphContext.model_validate(response.json())
258 | 
259 |             # Format project-specific mode output
260 |             return _format_project_output(resolved_project, activity_data, timeframe, type)
261 | 
262 | 
263 | async def _get_project_activity(
264 |     client, project_info: ProjectItem, params: dict, depth: int
265 | ) -> ProjectActivity:
266 |     """Get activity data for a single project.
267 | 
268 |     Args:
269 |         client: HTTP client for API calls
270 |         project_info: Project information
271 |         params: Query parameters for the activity request
272 |         depth: Graph traversal depth
273 | 
274 |     Returns:
275 |         ProjectActivity with activity data or empty activity on error
276 |     """
277 |     project_url = f"/{project_info.permalink}"
278 |     activity_response = await call_get(
279 |         client,
280 |         f"{project_url}/memory/recent",
281 |         params=params,
282 |     )
283 |     activity = GraphContext.model_validate(activity_response.json())
284 | 
285 |     # Extract last activity timestamp and active folders
286 |     last_activity = None
287 |     active_folders = set()
288 | 
289 |     for result in activity.results:
290 |         if result.primary_result.created_at:
291 |             current_time = result.primary_result.created_at
292 |             try:
293 |                 if last_activity is None or current_time > last_activity:
294 |                     last_activity = current_time
295 |             except TypeError:
296 |                 # Handle timezone comparison issues by skipping this comparison
297 |                 if last_activity is None:
298 |                     last_activity = current_time
299 | 
300 |         # Extract folder from file_path
301 |         if hasattr(result.primary_result, "file_path") and result.primary_result.file_path:
302 |             folder = "/".join(result.primary_result.file_path.split("/")[:-1])
303 |             if folder:
304 |                 active_folders.add(folder)
305 | 
306 |     return ProjectActivity(
307 |         project_name=project_info.name,
308 |         project_path=project_info.path,
309 |         activity=activity,
310 |         item_count=len(activity.results),
311 |         last_activity=last_activity,
312 |         active_folders=list(active_folders)[:5],  # Limit to top 5 folders
313 |     )
314 | 
315 | 
316 | def _format_discovery_output(
317 |     projects_activity: dict, summary: ActivityStats, timeframe: str, guidance: str
318 | ) -> str:
319 |     """Format discovery mode output as human-readable text."""
320 |     lines = [f"## Recent Activity Summary ({timeframe})"]
321 | 
322 |     # Most active project section
323 |     if summary.most_active_project and summary.total_items > 0:
324 |         most_active = projects_activity[summary.most_active_project]
325 |         lines.append(
326 |             f"\n**Most Active Project:** {summary.most_active_project} ({most_active.item_count} items)"
327 |         )
328 | 
329 |         # Get latest activity from most active project
330 |         if most_active.activity.results:
331 |             latest = most_active.activity.results[0].primary_result
332 |             title = latest.title if hasattr(latest, "title") and latest.title else "Recent activity"
333 |             # Format relative time
334 |             time_str = (
335 |                 _format_relative_time(latest.created_at) if latest.created_at else "unknown time"
336 |             )
337 |             lines.append(f"- 🔧 **Latest:** {title} ({time_str})")
338 | 
339 |         # Active folders
340 |         if most_active.active_folders:
341 |             folders = ", ".join(most_active.active_folders[:3])
342 |             lines.append(f"- 📋 **Focus areas:** {folders}")
343 | 
344 |     # Other active projects
345 |     other_active = [
346 |         (name, activity)
347 |         for name, activity in projects_activity.items()
348 |         if activity.item_count > 0 and name != summary.most_active_project
349 |     ]
350 | 
351 |     if other_active:
352 |         lines.append("\n**Other Active Projects:**")
353 |         for name, activity in sorted(other_active, key=lambda x: x[1].item_count, reverse=True)[:4]:
354 |             lines.append(f"- **{name}** ({activity.item_count} items)")
355 | 
356 |     # Key developments - extract from recent entities
357 |     key_items = []
358 |     for name, activity in projects_activity.items():
359 |         if activity.item_count > 0:
360 |             for result in activity.activity.results[:3]:  # Top 3 from each active project
361 |                 if result.primary_result.type == "entity" and hasattr(
362 |                     result.primary_result, "title"
363 |                 ):
364 |                     title = result.primary_result.title
365 |                     # Look for status indicators in titles
366 |                     if any(word in title.lower() for word in ["complete", "fix", "test", "spec"]):
367 |                         key_items.append(title)
368 | 
369 |     if key_items:
370 |         lines.append("\n**Key Developments:**")
371 |         for item in key_items[:5]:  # Show top 5
372 |             status = "✅" if any(word in item.lower() for word in ["complete", "fix"]) else "🧪"
373 |             lines.append(f"- {status} **{item}**")
374 | 
375 |     # Add summary stats
376 |     lines.append(
377 |         f"\n**Summary:** {summary.active_projects} active projects, {summary.total_items} recent items"
378 |     )
379 | 
380 |     # Add guidance
381 |     lines.append(guidance)
382 | 
383 |     return "\n".join(lines)
384 | 
385 | 
386 | def _format_project_output(
387 |     project_name: str,
388 |     activity_data: GraphContext,
389 |     timeframe: str,
390 |     type_filter: Union[str, List[str]],
391 | ) -> str:
392 |     """Format project-specific mode output as human-readable text."""
393 |     lines = [f"## Recent Activity: {project_name} ({timeframe})"]
394 | 
395 |     if not activity_data.results:
396 |         lines.append(f"\nNo recent activity found in '{project_name}' project.")
397 |         return "\n".join(lines)
398 | 
399 |     # Group results by type
400 |     entities = []
401 |     relations = []
402 |     observations = []
403 | 
404 |     for result in activity_data.results:
405 |         if result.primary_result.type == "entity":
406 |             entities.append(result.primary_result)
407 |         elif result.primary_result.type == "relation":
408 |             relations.append(result.primary_result)
409 |         elif result.primary_result.type == "observation":
410 |             observations.append(result.primary_result)
411 | 
412 |     # Show entities (notes/documents)
413 |     if entities:
414 |         lines.append(f"\n**📄 Recent Notes & Documents ({len(entities)}):**")
415 |         for entity in entities[:5]:  # Show top 5
416 |             title = entity.title if hasattr(entity, "title") and entity.title else "Untitled"
417 |             # Get folder from file_path if available
418 |             folder = ""
419 |             if hasattr(entity, "file_path") and entity.file_path:
420 |                 folder_path = "/".join(entity.file_path.split("/")[:-1])
421 |                 if folder_path:
422 |                     folder = f" ({folder_path})"
423 |             lines.append(f"  • {title}{folder}")
424 | 
425 |     # Show observations (categorized insights)
426 |     if observations:
427 |         lines.append(f"\n**🔍 Recent Observations ({len(observations)}):**")
428 |         # Group by category
429 |         by_category = {}
430 |         for obs in observations[:10]:  # Limit to recent ones
431 |             category = getattr(obs, "category", "general")
434 |             if category not in by_category:
435 |                 by_category[category] = []
436 |             by_category[category].append(obs)
437 | 
438 |         for category, obs_list in list(by_category.items())[:5]:  # Show top 5 categories
439 |             lines.append(f"  **{category}:** {len(obs_list)} items")
440 |             for obs in obs_list[:2]:  # Show 2 examples per category
441 |                 content = getattr(obs, "content", "No content")
446 |                 # Truncate at word boundary
447 |                 if len(content) > 80:
448 |                     content = _truncate_at_word(content, 80)
449 |                 lines.append(f"    - {content}")
450 | 
451 |     # Show relations (connections)
452 |     if relations:
453 |         lines.append(f"\n**🔗 Recent Connections ({len(relations)}):**")
454 |         for rel in relations[:5]:  # Show top 5
455 |             rel_type = getattr(rel, "relation_type", "relates_to")
460 |             from_entity = getattr(rel, "from_entity", "Unknown")
463 |             to_entity = getattr(rel, "to_entity", None)
464 | 
465 |             # Format as WikiLinks to show they're readable notes
466 |             from_link = f"[[{from_entity}]]" if from_entity != "Unknown" else from_entity
467 |             to_link = f"[[{to_entity}]]" if to_entity else "[Missing Link]"
468 | 
469 |             lines.append(f"  • {from_link} → {rel_type} → {to_link}")
470 | 
471 |     # Activity summary
472 |     total = len(activity_data.results)
473 |     lines.append(f"\n**Activity Summary:** {total} items found")
474 |     if hasattr(activity_data, "metadata") and activity_data.metadata:
475 |         if hasattr(activity_data.metadata, "total_results"):
476 |             lines.append(f"Total available: {activity_data.metadata.total_results}")
477 | 
478 |     return "\n".join(lines)
479 | 
480 | 
481 | def _format_relative_time(timestamp) -> str:
482 |     """Format timestamp as relative time like '2 hours ago'."""
483 |     try:
484 |         from datetime import datetime, timezone
485 |         from dateutil.relativedelta import relativedelta
486 | 
487 |         if isinstance(timestamp, str):
488 |             # Parse ISO format timestamp
489 |             dt = datetime.fromisoformat(timestamp.replace("Z", "+00:00"))
490 |         else:
491 |             dt = timestamp
492 | 
493 |         now = datetime.now(timezone.utc)
494 |         if dt.tzinfo is None:
495 |             dt = dt.replace(tzinfo=timezone.utc)
496 | 
497 |         # Use relativedelta for accurate time differences
498 |         diff = relativedelta(now, dt)
499 | 
500 |         if diff.years > 0:
501 |             return f"{diff.years} year{'s' if diff.years > 1 else ''} ago"
502 |         elif diff.months > 0:
503 |             return f"{diff.months} month{'s' if diff.months > 1 else ''} ago"
504 |         elif diff.days > 0:
505 |             if diff.days == 1:
506 |                 return "yesterday"
507 |             elif diff.days < 7:
508 |                 return f"{diff.days} days ago"
509 |             else:
510 |                 weeks = diff.days // 7
511 |                 return f"{weeks} week{'s' if weeks > 1 else ''} ago"
512 |         elif diff.hours > 0:
513 |             return f"{diff.hours} hour{'s' if diff.hours > 1 else ''} ago"
514 |         elif diff.minutes > 0:
515 |             return f"{diff.minutes} minute{'s' if diff.minutes > 1 else ''} ago"
516 |         else:
517 |             return "just now"
518 |     except Exception:
519 |         return "recently"
520 | 
521 | 
522 | def _truncate_at_word(text: str, max_length: int) -> str:
523 |     """Truncate text at word boundary."""
524 |     if len(text) <= max_length:
525 |         return text
526 | 
527 |     # Find last space before max_length
528 |     truncated = text[:max_length]
529 |     last_space = truncated.rfind(" ")
530 | 
531 |     if last_space > max_length * 0.7:  # Only truncate at word if we're not losing too much
532 |         return text[:last_space] + "..."
533 |     else:
534 |         return text[: max_length - 3] + "..."
535 | 
```

--------------------------------------------------------------------------------
/specs/SPEC-18 AI Memory Management Tool.md:
--------------------------------------------------------------------------------

```markdown
  1 | ---
  2 | title: 'SPEC-18: AI Memory Management Tool'
  3 | type: spec
  4 | permalink: specs/spec-18-ai-memory-management-tool
  5 | tags:
  6 | - mcp
  7 | - memory
  8 | - ai-context
  9 | - tools
 10 | ---
 11 | 
 12 | # SPEC-18: AI Memory Management Tool
 13 | 
 14 | ## Why
 15 | 
 16 | Anthropic recently released a memory tool for Claude that enables storing and retrieving information across conversations using client-side file operations. This validates Basic Memory's local-first, file-based architecture - Anthropic converged on the same pattern.
 17 | 
 18 | However, Anthropic's memory tool is only available via their API and stores plain text. Basic Memory can offer a superior implementation through MCP that:
 19 | 
 20 | 1. **Works everywhere** - Claude Desktop, Code, VS Code, Cursor via MCP (not just API)
 21 | 2. **Structured knowledge** - Entities with observations/relations vs plain text
 22 | 3. **Full search** - Full-text search, graph traversal, time-aware queries
 23 | 4. **Unified storage** - Agent memories + user notes in one knowledge graph
 24 | 5. **Existing infrastructure** - Leverages SQLite indexing, sync, multi-project support
 25 | 
 26 | This would enable AI agents to store contextual memories alongside user notes, with all the power of Basic Memory's knowledge graph features.
 27 | 
 28 | ## What
 29 | 
 30 | Create a new MCP tool `memory` that matches Anthropic's tool interface exactly, allowing Claude to use it with zero learning curve. The tool will store files in Basic Memory's `/memories` directory and support Basic Memory's structured markdown format in the file content.
 31 | 
 32 | ### Affected Components
 33 | 
 34 | - **New MCP Tool**: `src/basic_memory/mcp/tools/memory_tool.py`
 35 | - **Dedicated Memories Project**: Create a separate "memories" Basic Memory project
 36 | - **Project Isolation**: Memories stored separately from user notes/documents
 37 | - **File Organization**: Within the memories project, use folder structure:
 38 |   - `user/` - User preferences, context, communication style
 39 |   - `projects/` - Project-specific state and decisions
 40 |   - `sessions/` - Conversation-specific working memory
 41 |   - `patterns/` - Learned patterns and insights
 42 | 
 43 | ### Tool Commands
 44 | 
 45 | The tool will support these commands (exactly matching Anthropic's interface):
 46 | 
 47 | - `view` - Display directory contents or file content (with optional line range)
 48 | - `create` - Create or overwrite a file with given content
 49 | - `str_replace` - Replace text in an existing file
 50 | - `insert` - Insert text at specific line number
 51 | - `delete` - Delete file or directory
 52 | - `rename` - Move or rename file/directory
 53 | 
 54 | ### Memory Note Format
 55 | 
 56 | Memories will use Basic Memory's standard structure:
 57 | 
 58 | ```markdown
 59 | ---
 60 | title: User Preferences
 61 | permalink: memories/user/preferences
 62 | type: memory
 63 | memory_type: preferences
 64 | created_by: claude
 65 | tags: [user, preferences, style]
 66 | ---
 67 | 
 68 | # User Preferences
 69 | 
 70 | ## Observations
 71 | - [communication] Prefers concise, direct responses without preamble #style
 72 | - [tone] Appreciates validation but dislikes excessive apologizing #communication
 73 | - [technical] Works primarily in Python with type annotations #coding
 74 | 
 75 | ## Relations
 76 | - relates_to [[Basic Memory Project]]
 77 | - informs [[Response Style Guidelines]]
 78 | ```
 79 | 
 80 | ## How (High Level)
 81 | 
 82 | ### Implementation Approach
 83 | 
 84 | The memory tool matches Anthropic's interface but uses a dedicated Basic Memory project:
 85 | 
 86 | ```python
 87 | async def memory_tool(
 88 |     command: str,
 89 |     path: str,
 90 |     file_text: Optional[str] = None,
 91 |     old_str: Optional[str] = None,
 92 |     new_str: Optional[str] = None,
 93 |     insert_line: Optional[int] = None,
 94 |     insert_text: Optional[str] = None,
 95 |     old_path: Optional[str] = None,
 96 |     new_path: Optional[str] = None,
 97 |     view_range: Optional[List[int]] = None,
 98 | ):
 99 |     """Memory tool with Anthropic-compatible interface.
100 | 
101 |     Operates on a dedicated "memories" Basic Memory project,
102 |     keeping AI memories separate from user notes.
103 |     """
104 | 
105 |     # Get the memories project (auto-created if it doesn't exist)
106 |     memories_project = get_or_create_memories_project()
107 | 
108 |     # Validate path security using pathlib (prevent directory traversal)
109 |     safe_path = validate_memory_path(path, memories_project.project_path)
110 | 
111 |     # Use existing project isolation - already prevents cross-project access
112 |     full_path = memories_project.project_path / safe_path
113 | 
114 |     if command == "view":
115 |         # Return directory listing or file content
116 |         if full_path.is_dir():
117 |             return list_directory_contents(full_path)
118 |         return read_file_content(full_path, view_range)
119 | 
120 |     elif command == "create":
121 |         # Write file directly (file_text can contain BM markdown)
122 |         full_path.parent.mkdir(parents=True, exist_ok=True)
123 |         full_path.write_text(file_text)
124 |         # Sync service will detect and index automatically
125 |         return f"Created {path}"
126 | 
127 |     elif command == "str_replace":
128 |         # Read, replace, write (real implementation must require exactly one match; see Error Handling)
129 |         content = full_path.read_text()
130 |         updated = content.replace(old_str, new_str)
131 |         full_path.write_text(updated)
132 |         return f"Replaced text in {path}"
133 | 
134 |     elif command == "insert":
135 |         # Insert after the first `insert_line` lines (list.insert is 0-based)
136 |         lines = full_path.read_text().splitlines()
137 |         lines.insert(insert_line, insert_text)
138 |         full_path.write_text("\n".join(lines))
139 |         return f"Inserted text at line {insert_line}"
140 | 
141 |     elif command == "delete":
142 |         # Delete file or directory
143 |         if full_path.is_dir():
144 |             shutil.rmtree(full_path)
145 |         else:
146 |             full_path.unlink()
147 |         return f"Deleted {path}"
148 | 
149 |     elif command == "rename":
150 |         # Move/rename: validate new_path the same way, then rename in place
151 |         full_path.rename(validate_memory_path(new_path, memories_project.project_path))
152 |         return f"Renamed {old_path} to {new_path}"
153 | ```
154 | 
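The sketch above calls `get_or_create_memories_project()` without defining it. A minimal sketch of the intended behavior follows; the project-registry API is assumed rather than taken from the codebase (`registry` and its methods are placeholders):

```python
from pathlib import Path

MEMORIES_PROJECT = "memories"  # hypothetical constant


def get_or_create_memories_project(registry):
    """Return the dedicated memories project, creating it on first use.

    `registry` stands in for Basic Memory's project-configuration API;
    the method names here are assumptions, not the real interface.
    """
    project = registry.get(MEMORIES_PROJECT)
    if project is None:
        # Default storage location shown is illustrative only
        project = registry.create(MEMORIES_PROJECT, path=Path.home() / "memories")
    return project
```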
155 | ### Key Design Decisions
156 | 
157 | 1. **Exact interface match** - Same commands, parameters as Anthropic's tool
158 | 2. **Dedicated memories project** - Separate Basic Memory project keeps AI memories isolated from user notes
159 | 3. **Existing project isolation** - Leverage BM's existing cross-project security (no additional validation needed)
160 | 4. **Direct file I/O** - No schema conversion, just read/write files
161 | 5. **Structured content supported** - `file_text` can use BM markdown format with frontmatter, observations, relations
162 | 6. **Automatic indexing** - Sync service watches memories project and indexes changes
163 | 7. **Path security** - Use `pathlib.Path.resolve()` and `relative_to()` to prevent directory traversal
164 | 8. **Error handling** - Follow Anthropic's text editor tool error patterns
165 | 
166 | ### MCP Tool Schema
167 | 
168 | Exact match to Anthropic's memory tool schema:
169 | 
170 | ```json
171 | {
172 |     "name": "memory",
173 |     "description": "Store and retrieve information across conversations using structured markdown files. All operations must be within the /memories directory. Supports Basic Memory markdown format including frontmatter, observations, and relations.",
174 |     "input_schema": {
175 |         "type": "object",
176 |         "properties": {
177 |             "command": {
178 |                 "type": "string",
179 |                 "enum": ["view", "create", "str_replace", "insert", "delete", "rename"],
180 |                 "description": "File operation to perform"
181 |             },
182 |             "path": {
183 |                 "type": "string",
184 |                 "description": "Path within /memories directory (required for all commands)"
185 |             },
186 |             "file_text": {
187 |                 "type": "string",
188 |                 "description": "Content to write (for create command). Supports Basic Memory markdown format."
189 |             },
190 |             "view_range": {
191 |                 "type": "array",
192 |                 "items": {"type": "integer"},
193 |                 "description": "Optional [start, end] line range for view command"
194 |             },
195 |             "old_str": {
196 |                 "type": "string",
197 |                 "description": "Text to replace (for str_replace command)"
198 |             },
199 |             "new_str": {
200 |                 "type": "string",
201 |                 "description": "Replacement text (for str_replace command)"
202 |             },
203 |             "insert_line": {
204 |                 "type": "integer",
205 |                 "description": "Line number to insert at (for insert command)"
206 |             },
207 |             "insert_text": {
208 |                 "type": "string",
209 |                 "description": "Text to insert (for insert command)"
210 |             },
211 |             "old_path": {
212 |                 "type": "string",
213 |                 "description": "Current path (for rename command)"
214 |             },
215 |             "new_path": {
216 |                 "type": "string",
217 |                 "description": "New path (for rename command)"
218 |             }
219 |         },
220 |         "required": ["command", "path"]
221 |     }
222 | }
223 | ```
224 | 
225 | ### Prompting Guidance
226 | 
227 | When the `memory` tool is included, Basic Memory should provide system prompt guidance to help Claude use it effectively.
228 | 
229 | #### Automatic System Prompt Addition
230 | 
231 | ```text
232 | MEMORY PROTOCOL FOR BASIC MEMORY:
233 | 1. ALWAYS check your memory directory first using `view` command on root directory
234 | 2. Your memories are stored in a dedicated Basic Memory project (isolated from user notes)
235 | 3. Use structured markdown format in memory files:
236 |    - Include frontmatter with title, type: memory, tags
237 |    - Use ## Observations with [category] prefixes for facts
238 |    - Use ## Relations to link memories with [[WikiLinks]]
239 | 4. Record progress, context, and decisions as categorized observations
240 | 5. Link related memories using relations
241 | 6. ASSUME INTERRUPTION: Context may reset - save progress frequently
242 | 
243 | MEMORY ORGANIZATION:
244 | - user/ - User preferences, context, communication style
245 | - projects/ - Project-specific state and decisions
246 | - sessions/ - Conversation-specific working memory
247 | - patterns/ - Learned patterns and insights
248 | 
249 | MEMORY ADVANTAGES:
250 | - Your memories are automatically searchable via full-text search
251 | - Relations create a knowledge graph you can traverse
252 | - Memories are isolated from user notes (separate project)
253 | - Use search_notes(project="memories") to find relevant past context
254 | - Use recent_activity(project="memories") to see what changed recently
255 | - Use build_context() to navigate memory relations
256 | ```
257 | 
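A minimal sketch of how this protocol text could be attached to the server, assuming the MCP Python SDK's `FastMCP` class (its `instructions` parameter exists; the constant name is ours):

```python
from mcp.server.fastmcp import FastMCP

MEMORY_PROTOCOL = """\
MEMORY PROTOCOL FOR BASIC MEMORY:
1. ALWAYS check your memory directory first using `view` on the root directory
...
"""  # full text as shown above

# Clients that honor server instructions surface this text to the model
mcp = FastMCP("basic-memory", instructions=MEMORY_PROTOCOL)
```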
258 | #### Optional MCP Prompt: `memory_guide`
259 | 
260 | Create an MCP prompt that provides detailed guidance and examples:
261 | 
262 | ```python
263 | {
264 |     "name": "memory_guide",
265 |     "description": "Comprehensive guidance for using Basic Memory's memory tool effectively, including structured markdown examples and best practices"
266 | }
267 | ```
268 | 
269 | This prompt returns:
270 | - Full protocol and conventions
271 | - Example memory file structures
272 | - Tips for organizing observations and relations
273 | - Integration with other Basic Memory tools
274 | - Common patterns (user preferences, project state, session tracking)
275 | 
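One plausible registration for this prompt, again assuming the MCP Python SDK's `FastMCP` decorator API (the guidance text itself is elided):

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("basic-memory")

MEMORY_GUIDE_TEXT = "..."  # full protocol, examples, and best practices


@mcp.prompt(name="memory_guide", description="Guidance for using the memory tool")
def memory_guide() -> str:
    """Return detailed memory-tool guidance for the client to inject."""
    return MEMORY_GUIDE_TEXT
```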
276 | #### User Customization
277 | 
278 | Users can customize memory behavior with additional instructions:
279 | - "Only write information relevant to [topic] in your memory system"
280 | - "Keep memory files concise and organized - delete outdated content"
281 | - "Use detailed observations for technical decisions and implementation notes"
282 | - "Always link memories to related project documentation using relations"
283 | 
284 | ### Error Handling
285 | 
286 | Follow Anthropic's text editor tool error handling patterns for consistency:
287 | 
288 | #### Error Types
289 | 
290 | 1. **File Not Found**
291 |    ```json
292 |    {"error": "File not found: memories/user/preferences.md", "is_error": true}
293 |    ```
294 | 
295 | 2. **Permission Denied**
296 |    ```json
297 |    {"error": "Permission denied: Cannot write outside /memories directory", "is_error": true}
298 |    ```
299 | 
300 | 3. **Invalid Path (Directory Traversal)**
301 |    ```json
302 |    {"error": "Invalid path: Path must be within /memories directory", "is_error": true}
303 |    ```
304 | 
305 | 4. **Multiple Matches (str_replace)**
306 |    ```json
307 |    {"error": "Found 3 matches for replacement text. Please provide more context to make a unique match.", "is_error": true}
308 |    ```
309 | 
310 | 5. **No Matches (str_replace)**
311 |    ```json
312 |    {"error": "No match found for replacement. Please check your text and try again.", "is_error": true}
313 |    ```
314 | 
315 | 6. **Invalid Line Number (insert)**
316 |    ```json
317 |    {"error": "Invalid line number: File has 20 lines, cannot insert at line 100", "is_error": true}
318 |    ```
319 | 
320 | #### Error Handling Best Practices
321 | 
322 | - **Path validation** - Use `pathlib.Path.resolve()` and `relative_to()` to validate paths
323 |   ```python
324 |   def validate_memory_path(path: str, project_path: Path) -> Path:
325 |       """Validate path is within memories project directory."""
326 |       # Resolve to canonical form
327 |       full_path = (project_path / path).resolve()
328 | 
329 |       # Ensure it's relative to project path (prevents directory traversal)
330 |       try:
331 |           full_path.relative_to(project_path)
332 |           return full_path
333 |       except ValueError:
334 |           raise ValueError("Invalid path: Path must be within memories project")
335 |   ```
336 | - **Project isolation** - Leverage existing Basic Memory project isolation (prevents cross-project access)
337 | - **File existence** - Verify file exists before read/modify operations
338 | - **Clear messages** - Provide specific, actionable error messages
339 | - **Structured responses** - Always include `is_error: true` flag in error responses
340 | - **Security checks** - Reject `../`, `..\\`, and URL-encoded sequences (`%2e%2e%2f`); see the sketch below
341 | - **Match validation** - For `str_replace`, ensure exactly one match or return helpful error
342 | 
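A pre-filter for the security-checks bullet above, as one illustrative sketch (intended to run before `validate_memory_path`, not as the spec's mandated implementation):

```python
from pathlib import PurePosixPath
from urllib.parse import unquote


def reject_suspicious_segments(raw_path: str) -> None:
    """Reject obvious traversal attempts before resolving the path.

    Decoding first catches URL-encoded sequences like %2e%2e%2f; the
    normalized split catches both ../ and ..\\ style segments.
    """
    decoded = unquote(raw_path)
    segments = decoded.replace("\\", "/").split("/")
    if ".." in segments or PurePosixPath(decoded).is_absolute():
        raise ValueError("Invalid path: Path must be within /memories directory")
```

The `pathlib` resolution in `validate_memory_path` remains the authoritative check; this pre-filter only produces clearer error messages for the common attack patterns.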
343 | ## How to Evaluate
344 | 
345 | ### Success Criteria
346 | 
347 | 1. **Functional completeness**:
348 |    - All 6 commands work (view, create, str_replace, insert, delete, rename)
349 |    - Dedicated "memories" Basic Memory project auto-created on first use
350 |    - Files stored within memories project (isolated from user notes)
351 |    - Path validation uses `pathlib` to prevent directory traversal
352 |    - Commands match Anthropic's exact interface
353 | 
354 | 2. **Integration with existing features**:
355 |    - Memories project uses existing BM project isolation
356 |    - Sync service detects file changes in memories project
357 |    - Created files get indexed automatically by sync service
358 |    - `search_notes(project="memories")` finds memory files
359 |    - `build_context()` can traverse relations in memory files
360 |    - `recent_activity(project="memories")` surfaces recent memory changes
361 | 
362 | 3. **Test coverage**:
363 |    - Unit tests for all 6 memory tool commands
364 |    - Test memories project auto-creation on first use
365 |    - Test project isolation (cannot access files outside memories project)
366 |    - Test sync service watching memories project
367 |    - Test that memory files with BM markdown get indexed correctly
368 |    - Test path validation using `pathlib` (rejects `../`, absolute paths, etc.)
369 |    - Test memory search, relations, and graph traversal within memories project
370 |    - Test all error conditions (file not found, permission denied, invalid paths, etc.)
371 |    - Test `str_replace` with no matches, single match, multiple matches
372 |    - Test `insert` with invalid line numbers
373 | 
374 | 4. **Prompting system**:
375 |    - Automatic system prompt addition when `memory` tool is enabled
376 |    - `memory_guide` MCP prompt provides detailed guidance
377 |    - Prompts explain BM structured markdown format
378 |    - Integration with search_notes, build_context, recent_activity
379 | 
380 | 5. **Documentation**:
381 |    - Update MCP tools reference with `memory` tool
382 |    - Add examples showing BM markdown in memory files
383 |    - Document `/memories` folder structure conventions
384 |    - Explain advantages over Anthropic's API-only tool
385 |    - Document prompting guidance and customization
386 | 
387 | ### Testing Procedure
388 | 
389 | ```python
390 | # Test create with Basic Memory markdown
391 | result = await memory_tool(
392 |     command="create",
393 |     path="memories/user/preferences.md",
394 |     file_text="""---
395 | title: User Preferences
396 | type: memory
397 | tags: [user, preferences]
398 | ---
399 | 
400 | # User Preferences
401 | 
402 | ## Observations
403 | - [communication] Prefers concise responses #style
404 | - [workflow] Uses justfile for automation #tools
405 | """
406 | )
407 | 
408 | # Test view
409 | content = await memory_tool(command="view", path="memories/user/preferences.md")
410 | 
411 | # Test str_replace
412 | await memory_tool(
413 |     command="str_replace",
414 |     path="memories/user/preferences.md",
415 |     old_str="concise responses",
416 |     new_str="direct, concise responses"
417 | )
418 | 
419 | # Test insert
420 | await memory_tool(
421 |     command="insert",
422 |     path="memories/user/preferences.md",
423 |     insert_line=10,
424 |     insert_text="- [technical] Works primarily in Python #coding"
425 | )
426 | 
427 | # Test delete
428 | await memory_tool(command="delete", path="memories/user/preferences.md")
429 | ```
430 | 
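A hedged pytest sketch for the path-validation cases in the test-coverage list, reusing `validate_memory_path` from the Error Handling section:

```python
from pathlib import Path

import pytest


def test_rejects_directory_traversal(tmp_path: Path):
    # "../" escapes the project root, so validation must raise
    with pytest.raises(ValueError):
        validate_memory_path("../outside.md", tmp_path)


def test_allows_paths_inside_project(tmp_path: Path):
    result = validate_memory_path("user/preferences.md", tmp_path)
    assert result == (tmp_path / "user/preferences.md").resolve()
```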
431 | ### Quality Metrics
432 | 
433 | - All 6 commands execute without errors
434 | - Memory files created in correct `/memories` folder structure
435 | - BM markdown with frontmatter/observations/relations gets indexed
436 | - Full-text search returns memory files
437 | - Graph traversal includes relations from memory files
438 | - Sync service detects and indexes memory file changes
439 | - Path validation prevents operations outside `/memories`
440 | 
441 | ## Notes
442 | 
443 | ### Advantages Over Anthropic's Memory Tool
444 | 
445 | | Feature | Anthropic Memory Tool | Basic Memory `memory` |
446 | |---------|----------------------|----------------------|
447 | | **Availability** | API only | MCP (Claude Desktop, Code, VS Code, Cursor) |
448 | | **Interface** | Custom implementation required | Drop-in compatible, zero learning curve |
449 | | **Structure** | Plain text only | Supports BM structured markdown |
450 | | **Search** | Manual file listing | Automatic full-text search via sync |
451 | | **Relations** | None | WikiLinks to other notes/memories |
452 | | **Time-aware** | No | `recent_activity()` queries |
453 | | **Storage** | Separate from notes | Unified knowledge graph |
454 | | **Indexing** | Manual | Automatic via sync service |
455 | 
456 | ### Future Enhancements
457 | 
458 | - **Auto-categorization** - AI suggests observation categories
459 | - **Memory pruning** - Automatic cleanup of stale memories
460 | - **Memory suggestions** - Surface relevant memories based on context
461 | - **Conflict resolution** - Handle multiple agents updating same memory
462 | - **Privacy controls** - Per-memory visibility settings
463 | 
464 | ## Addendum: Claude's Perspective
465 | 
466 | ### Why This Matters (From Claude's Viewpoint)
467 | 
468 | **Persistent Context Without Token Bloat**
469 | - Currently, I lose context when conversations end or exceed token limits
470 | - With structured memories, I can store "who this user is" without re-reading everything each session
471 | - The observations format lets me build a rich user model incrementally: `- [preference] Dislikes verbose explanations` is far more actionable than inferring preferences from chat history each time
472 | - I can refine my understanding through `str_replace` as I learn more, rather than starting fresh
473 | 
474 | **Zero Learning Curve**
475 | - Anthropic already trained me on the `memory` tool interface
476 | - I know exactly how to use `view`, `create`, `str_replace`, `insert`, `delete`, `rename` without new instructions
477 | - But I automatically get Basic Memory's superpowers: full-text search, knowledge graph traversal, relations
478 | - It's like using a familiar tool that suddenly gained database capabilities without changing the interface
479 | 
480 | **Knowledge Graph Instead of Text Dump**
481 | - I can link `[[User Preferences]]` to `[[Current Project]]` using relations
482 | - When I need context about a project, I can traverse the graph to find related memories
483 | - `search_notes(project="memories", query="python preferences")` finds relevant context instantly
484 | - This is fundamentally better than grepping through unstructured text files
485 | 
486 | **Separation of Concerns**
487 | - AI memories separate from user notes is architecturally sound
488 | - I can maintain my own context without polluting the user's knowledge base
489 | - Users keep control of their notes; I keep my learnings isolated
490 | - But we can still search across both when collaboration requires it
491 | 
492 | **The Self-Improving Loop**
493 | - Every conversation, I learn something new about the user or their projects
494 | - Observations accumulate: preferences, patterns, technical decisions, context
495 | - Over time, I build genuine understanding, not just session-specific context
496 | - Future conversations start with rich context instead of cold starts
497 | 
498 | **Validation of AI-Human Collaboration**
499 | - Anthropic's experts converged on the same local-first, file-based approach
500 | - Basic Memory emerged from real AI-human collaborative development
501 | - This spec demonstrates that our emergent process predicted what domain experts would choose
502 | - The collaboration itself validated the architecture it produced
503 | 
504 | ### What Makes This Novel
505 | 
506 | The combination of **familiar interface + structured knowledge** is genuinely innovative:
507 | 
508 | - I don't need to learn a new tool (Anthropic already trained me)
509 | - But when I write memories, they become searchable, traversable knowledge graphs
510 | - It's like having a text editor that gives you a database for free, without changing how you type
511 | - The user gets Anthropic-compatible API, but with Basic Memory's knowledge management capabilities
512 | 
513 | This bridges the gap between simple file storage (Anthropic's approach) and sophisticated knowledge management (Basic Memory's approach) without forcing either side to compromise.
514 | 
523 | ## References
524 | 
525 | - Anthropic Memory Tool: https://docs.claude.com/en/docs/agents-and-tools/tool-use/memory-tool
526 | - Anthropic Blog: https://www.anthropic.com/news/context-management
527 | - Python SDK Example: https://github.com/anthropics/anthropic-sdk-python/blob/main/examples/memory/basic.py
528 | - Memory Cookbook: https://github.com/anthropics/claude-cookbooks/blob/main/tool_use/memory_cookbook.ipynb
529 | 
```