This is page 6 of 23. Use http://codebase.md/basicmachines-co/basic-memory?lines=true&page={x} to view the full context.

# Directory Structure

```
├── .claude
│   ├── agents
│   │   ├── python-developer.md
│   │   └── system-architect.md
│   └── commands
│       ├── release
│       │   ├── beta.md
│       │   ├── changelog.md
│       │   ├── release-check.md
│       │   └── release.md
│       ├── spec.md
│       └── test-live.md
├── .dockerignore
├── .github
│   ├── dependabot.yml
│   ├── ISSUE_TEMPLATE
│   │   ├── bug_report.md
│   │   ├── config.yml
│   │   ├── documentation.md
│   │   └── feature_request.md
│   └── workflows
│       ├── claude-code-review.yml
│       ├── claude-issue-triage.yml
│       ├── claude.yml
│       ├── dev-release.yml
│       ├── docker.yml
│       ├── pr-title.yml
│       ├── release.yml
│       └── test.yml
├── .gitignore
├── .python-version
├── CHANGELOG.md
├── CITATION.cff
├── CLA.md
├── CLAUDE.md
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── docker-compose.yml
├── Dockerfile
├── docs
│   ├── ai-assistant-guide-extended.md
│   ├── character-handling.md
│   ├── cloud-cli.md
│   └── Docker.md
├── justfile
├── LICENSE
├── llms-install.md
├── pyproject.toml
├── README.md
├── SECURITY.md
├── smithery.yaml
├── specs
│   ├── SPEC-1 Specification-Driven Development Process.md
│   ├── SPEC-10 Unified Deployment Workflow and Event Tracking.md
│   ├── SPEC-11 Basic Memory API Performance Optimization.md
│   ├── SPEC-12 OpenTelemetry Observability.md
│   ├── SPEC-13 CLI Authentication with Subscription Validation.md
│   ├── SPEC-14 Cloud Git Versioning & GitHub Backup.md
│   ├── SPEC-14- Cloud Git Versioning & GitHub Backup.md
│   ├── SPEC-15 Configuration Persistence via Tigris for Cloud Tenants.md
│   ├── SPEC-16 MCP Cloud Service Consolidation.md
│   ├── SPEC-17 Semantic Search with ChromaDB.md
│   ├── SPEC-18 AI Memory Management Tool.md
│   ├── SPEC-19 Sync Performance and Memory Optimization.md
│   ├── SPEC-2 Slash Commands Reference.md
│   ├── SPEC-3 Agent Definitions.md
│   ├── SPEC-4 Notes Web UI Component Architecture.md
│   ├── SPEC-5 CLI Cloud Upload via WebDAV.md
│   ├── SPEC-6 Explicit Project Parameter Architecture.md
│   ├── SPEC-7 POC to spike Tigris Turso for local access to cloud data.md
│   ├── SPEC-8 TigrisFS Integration.md
│   ├── SPEC-9 Multi-Project Bidirectional Sync Architecture.md
│   ├── SPEC-9 Signed Header Tenant Information.md
│   └── SPEC-9-1 Follow-Ups- Conflict, Sync, and Observability.md
├── src
│   └── basic_memory
│       ├── __init__.py
│       ├── alembic
│       │   ├── alembic.ini
│       │   ├── env.py
│       │   ├── migrations.py
│       │   ├── script.py.mako
│       │   └── versions
│       │       ├── 3dae7c7b1564_initial_schema.py
│       │       ├── 502b60eaa905_remove_required_from_entity_permalink.py
│       │       ├── 5fe1ab1ccebe_add_projects_table.py
│       │       ├── 647e7a75e2cd_project_constraint_fix.py
│       │       ├── 9d9c1cb7d8f5_add_mtime_and_size_columns_to_entity_.py
│       │       ├── a1b2c3d4e5f6_fix_project_foreign_keys.py
│       │       ├── b3c3938bacdb_relation_to_name_unique_index.py
│       │       ├── cc7172b46608_update_search_index_schema.py
│       │       └── e7e1f4367280_add_scan_watermark_tracking_to_project.py
│       ├── api
│       │   ├── __init__.py
│       │   ├── app.py
│       │   ├── routers
│       │   │   ├── __init__.py
│       │   │   ├── directory_router.py
│       │   │   ├── importer_router.py
│       │   │   ├── knowledge_router.py
│       │   │   ├── management_router.py
│       │   │   ├── memory_router.py
│       │   │   ├── project_router.py
│       │   │   ├── prompt_router.py
│       │   │   ├── resource_router.py
│       │   │   ├── search_router.py
│       │   │   └── utils.py
│       │   └── template_loader.py
│       ├── cli
│       │   ├── __init__.py
│       │   ├── app.py
│       │   ├── auth.py
│       │   ├── commands
│       │   │   ├── __init__.py
│       │   │   ├── cloud
│       │   │   │   ├── __init__.py
│       │   │   │   ├── api_client.py
│       │   │   │   ├── bisync_commands.py
│       │   │   │   ├── cloud_utils.py
│       │   │   │   ├── core_commands.py
│       │   │   │   ├── mount_commands.py
│       │   │   │   ├── rclone_config.py
│       │   │   │   ├── rclone_installer.py
│       │   │   │   ├── upload_command.py
│       │   │   │   └── upload.py
│       │   │   ├── command_utils.py
│       │   │   ├── db.py
│       │   │   ├── import_chatgpt.py
│       │   │   ├── import_claude_conversations.py
│       │   │   ├── import_claude_projects.py
│       │   │   ├── import_memory_json.py
│       │   │   ├── mcp.py
│       │   │   ├── project.py
│       │   │   ├── status.py
│       │   │   ├── sync.py
│       │   │   └── tool.py
│       │   └── main.py
│       ├── config.py
│       ├── db.py
│       ├── deps.py
│       ├── file_utils.py
│       ├── ignore_utils.py
│       ├── importers
│       │   ├── __init__.py
│       │   ├── base.py
│       │   ├── chatgpt_importer.py
│       │   ├── claude_conversations_importer.py
│       │   ├── claude_projects_importer.py
│       │   ├── memory_json_importer.py
│       │   └── utils.py
│       ├── markdown
│       │   ├── __init__.py
│       │   ├── entity_parser.py
│       │   ├── markdown_processor.py
│       │   ├── plugins.py
│       │   ├── schemas.py
│       │   └── utils.py
│       ├── mcp
│       │   ├── __init__.py
│       │   ├── async_client.py
│       │   ├── project_context.py
│       │   ├── prompts
│       │   │   ├── __init__.py
│       │   │   ├── ai_assistant_guide.py
│       │   │   ├── continue_conversation.py
│       │   │   ├── recent_activity.py
│       │   │   ├── search.py
│       │   │   └── utils.py
│       │   ├── resources
│       │   │   ├── ai_assistant_guide.md
│       │   │   └── project_info.py
│       │   ├── server.py
│       │   └── tools
│       │       ├── __init__.py
│       │       ├── build_context.py
│       │       ├── canvas.py
│       │       ├── chatgpt_tools.py
│       │       ├── delete_note.py
│       │       ├── edit_note.py
│       │       ├── list_directory.py
│       │       ├── move_note.py
│       │       ├── project_management.py
│       │       ├── read_content.py
│       │       ├── read_note.py
│       │       ├── recent_activity.py
│       │       ├── search.py
│       │       ├── utils.py
│       │       ├── view_note.py
│       │       └── write_note.py
│       ├── models
│       │   ├── __init__.py
│       │   ├── base.py
│       │   ├── knowledge.py
│       │   ├── project.py
│       │   └── search.py
│       ├── repository
│       │   ├── __init__.py
│       │   ├── entity_repository.py
│       │   ├── observation_repository.py
│       │   ├── project_info_repository.py
│       │   ├── project_repository.py
│       │   ├── relation_repository.py
│       │   ├── repository.py
│       │   └── search_repository.py
│       ├── schemas
│       │   ├── __init__.py
│       │   ├── base.py
│       │   ├── cloud.py
│       │   ├── delete.py
│       │   ├── directory.py
│       │   ├── importer.py
│       │   ├── memory.py
│       │   ├── project_info.py
│       │   ├── prompt.py
│       │   ├── request.py
│       │   ├── response.py
│       │   ├── search.py
│       │   └── sync_report.py
│       ├── services
│       │   ├── __init__.py
│       │   ├── context_service.py
│       │   ├── directory_service.py
│       │   ├── entity_service.py
│       │   ├── exceptions.py
│       │   ├── file_service.py
│       │   ├── initialization.py
│       │   ├── link_resolver.py
│       │   ├── project_service.py
│       │   ├── search_service.py
│       │   └── service.py
│       ├── sync
│       │   ├── __init__.py
│       │   ├── background_sync.py
│       │   ├── sync_service.py
│       │   └── watch_service.py
│       ├── templates
│       │   └── prompts
│       │       ├── continue_conversation.hbs
│       │       └── search.hbs
│       └── utils.py
├── test-int
│   ├── BENCHMARKS.md
│   ├── cli
│   │   ├── test_project_commands_integration.py
│   │   ├── test_sync_commands_integration.py
│   │   └── test_version_integration.py
│   ├── conftest.py
│   ├── mcp
│   │   ├── test_build_context_underscore.py
│   │   ├── test_build_context_validation.py
│   │   ├── test_chatgpt_tools_integration.py
│   │   ├── test_default_project_mode_integration.py
│   │   ├── test_delete_note_integration.py
│   │   ├── test_edit_note_integration.py
│   │   ├── test_list_directory_integration.py
│   │   ├── test_move_note_integration.py
│   │   ├── test_project_management_integration.py
│   │   ├── test_project_state_sync_integration.py
│   │   ├── test_read_content_integration.py
│   │   ├── test_read_note_integration.py
│   │   ├── test_search_integration.py
│   │   ├── test_single_project_mcp_integration.py
│   │   └── test_write_note_integration.py
│   ├── test_db_wal_mode.py
│   ├── test_disable_permalinks_integration.py
│   └── test_sync_performance_benchmark.py
├── tests
│   ├── __init__.py
│   ├── api
│   │   ├── conftest.py
│   │   ├── test_async_client.py
│   │   ├── test_continue_conversation_template.py
│   │   ├── test_directory_router.py
│   │   ├── test_importer_router.py
│   │   ├── test_knowledge_router.py
│   │   ├── test_management_router.py
│   │   ├── test_memory_router.py
│   │   ├── test_project_router_operations.py
│   │   ├── test_project_router.py
│   │   ├── test_prompt_router.py
│   │   ├── test_relation_background_resolution.py
│   │   ├── test_resource_router.py
│   │   ├── test_search_router.py
│   │   ├── test_search_template.py
│   │   ├── test_template_loader_helpers.py
│   │   └── test_template_loader.py
│   ├── cli
│   │   ├── conftest.py
│   │   ├── test_bisync_commands.py
│   │   ├── test_cli_tools.py
│   │   ├── test_cloud_authentication.py
│   │   ├── test_cloud_utils.py
│   │   ├── test_ignore_utils.py
│   │   ├── test_import_chatgpt.py
│   │   ├── test_import_claude_conversations.py
│   │   ├── test_import_claude_projects.py
│   │   ├── test_import_memory_json.py
│   │   └── test_upload.py
│   ├── conftest.py
│   ├── db
│   │   └── test_issue_254_foreign_key_constraints.py
│   ├── importers
│   │   ├── test_importer_base.py
│   │   └── test_importer_utils.py
│   ├── markdown
│   │   ├── __init__.py
│   │   ├── test_date_frontmatter_parsing.py
│   │   ├── test_entity_parser_error_handling.py
│   │   ├── test_entity_parser.py
│   │   ├── test_markdown_plugins.py
│   │   ├── test_markdown_processor.py
│   │   ├── test_observation_edge_cases.py
│   │   ├── test_parser_edge_cases.py
│   │   ├── test_relation_edge_cases.py
│   │   └── test_task_detection.py
│   ├── mcp
│   │   ├── conftest.py
│   │   ├── test_obsidian_yaml_formatting.py
│   │   ├── test_permalink_collision_file_overwrite.py
│   │   ├── test_prompts.py
│   │   ├── test_resources.py
│   │   ├── test_tool_build_context.py
│   │   ├── test_tool_canvas.py
│   │   ├── test_tool_delete_note.py
│   │   ├── test_tool_edit_note.py
│   │   ├── test_tool_list_directory.py
│   │   ├── test_tool_move_note.py
│   │   ├── test_tool_read_content.py
│   │   ├── test_tool_read_note.py
│   │   ├── test_tool_recent_activity.py
│   │   ├── test_tool_resource.py
│   │   ├── test_tool_search.py
│   │   ├── test_tool_utils.py
│   │   ├── test_tool_view_note.py
│   │   ├── test_tool_write_note.py
│   │   └── tools
│   │       └── test_chatgpt_tools.py
│   ├── Non-MarkdownFileSupport.pdf
│   ├── repository
│   │   ├── test_entity_repository_upsert.py
│   │   ├── test_entity_repository.py
│   │   ├── test_entity_upsert_issue_187.py
│   │   ├── test_observation_repository.py
│   │   ├── test_project_info_repository.py
│   │   ├── test_project_repository.py
│   │   ├── test_relation_repository.py
│   │   ├── test_repository.py
│   │   ├── test_search_repository_edit_bug_fix.py
│   │   └── test_search_repository.py
│   ├── schemas
│   │   ├── test_base_timeframe_minimum.py
│   │   ├── test_memory_serialization.py
│   │   ├── test_memory_url_validation.py
│   │   ├── test_memory_url.py
│   │   ├── test_schemas.py
│   │   └── test_search.py
│   ├── Screenshot.png
│   ├── services
│   │   ├── test_context_service.py
│   │   ├── test_directory_service.py
│   │   ├── test_entity_service_disable_permalinks.py
│   │   ├── test_entity_service.py
│   │   ├── test_file_service.py
│   │   ├── test_initialization.py
│   │   ├── test_link_resolver.py
│   │   ├── test_project_removal_bug.py
│   │   ├── test_project_service_operations.py
│   │   ├── test_project_service.py
│   │   └── test_search_service.py
│   ├── sync
│   │   ├── test_character_conflicts.py
│   │   ├── test_sync_service_incremental.py
│   │   ├── test_sync_service.py
│   │   ├── test_sync_wikilink_issue.py
│   │   ├── test_tmp_files.py
│   │   ├── test_watch_service_edge_cases.py
│   │   ├── test_watch_service_reload.py
│   │   └── test_watch_service.py
│   ├── test_config.py
│   ├── test_db_migration_deduplication.py
│   ├── test_deps.py
│   ├── test_production_cascade_delete.py
│   └── utils
│       ├── test_file_utils.py
│       ├── test_frontmatter_obsidian_compatible.py
│       ├── test_parse_tags.py
│       ├── test_permalink_formatting.py
│       ├── test_utf8_handling.py
│       └── test_validate_project_path.py
├── uv.lock
├── v0.15.0-RELEASE-DOCS.md
└── v15-docs
    ├── api-performance.md
    ├── background-relations.md
    ├── basic-memory-home.md
    ├── bug-fixes.md
    ├── chatgpt-integration.md
    ├── cloud-authentication.md
    ├── cloud-bisync.md
    ├── cloud-mode-usage.md
    ├── cloud-mount.md
    ├── default-project-mode.md
    ├── env-file-removal.md
    ├── env-var-overrides.md
    ├── explicit-project-parameter.md
    ├── gitignore-integration.md
    ├── project-root-env-var.md
    ├── README.md
    └── sqlite-performance.md
```

# Files

--------------------------------------------------------------------------------
/src/basic_memory/ignore_utils.py:
--------------------------------------------------------------------------------

```python
  1 | """Utilities for handling .gitignore patterns and file filtering."""
  2 | 
  3 | import fnmatch
  4 | from pathlib import Path
  5 | from typing import Set
  6 | 
  7 | 
  8 | # Common directories and patterns to ignore by default
  9 | # These are used as fallback if .bmignore doesn't exist
 10 | DEFAULT_IGNORE_PATTERNS = {
 11 |     # Hidden files (files starting with dot)
 12 |     ".*",
 13 |     # Basic Memory internal files
 14 |     "*.db",
 15 |     "*.db-shm",
 16 |     "*.db-wal",
 17 |     "config.json",
 18 |     # Version control
 19 |     ".git",
 20 |     ".svn",
 21 |     # Python
 22 |     "__pycache__",
 23 |     "*.pyc",
 24 |     "*.pyo",
 25 |     "*.pyd",
 26 |     ".pytest_cache",
 27 |     ".coverage",
 28 |     "*.egg-info",
 29 |     ".tox",
 30 |     ".mypy_cache",
 31 |     ".ruff_cache",
 32 |     # Virtual environments
 33 |     ".venv",
 34 |     "venv",
 35 |     "env",
 36 |     ".env",
 37 |     # Node.js
 38 |     "node_modules",
 39 |     # Build artifacts
 40 |     "build",
 41 |     "dist",
 42 |     ".cache",
 43 |     # IDE
 44 |     ".idea",
 45 |     ".vscode",
 46 |     # OS files
 47 |     ".DS_Store",
 48 |     "Thumbs.db",
 49 |     "desktop.ini",
 50 |     # Obsidian
 51 |     ".obsidian",
 52 |     # Temporary files
 53 |     "*.tmp",
 54 |     "*.swp",
 55 |     "*.swo",
 56 |     "*~",
 57 | }
 58 | 
 59 | 
 60 | def get_bmignore_path() -> Path:
 61 |     """Get path to .bmignore file.
 62 | 
 63 |     Returns:
 64 |         Path to ~/.basic-memory/.bmignore
 65 |     """
 66 |     return Path.home() / ".basic-memory" / ".bmignore"
 67 | 
 68 | 
 69 | def create_default_bmignore() -> None:
 70 |     """Create default .bmignore file if it doesn't exist.
 71 | 
 72 |     This ensures users have a file they can customize for all Basic Memory operations.
 73 |     """
 74 |     bmignore_path = get_bmignore_path()
 75 | 
 76 |     if bmignore_path.exists():
 77 |         return
 78 | 
 79 |     bmignore_path.parent.mkdir(parents=True, exist_ok=True)
 80 |     bmignore_path.write_text("""# Basic Memory Ignore Patterns
 81 | # This file is used by 'bm cloud upload', 'bm cloud bisync', and file sync
 82 | # Patterns use standard gitignore-style syntax
 83 | 
 84 | # Hidden files (files starting with dot)
 85 | .*
 86 | 
 87 | # Basic Memory internal files (includes test databases)
 88 | *.db
 89 | *.db-shm
 90 | *.db-wal
 91 | config.json
 92 | 
 93 | # Version control
 94 | .git
 95 | .svn
 96 | 
 97 | # Python
 98 | __pycache__
 99 | *.pyc
100 | *.pyo
101 | *.pyd
102 | .pytest_cache
103 | .coverage
104 | *.egg-info
105 | .tox
106 | .mypy_cache
107 | .ruff_cache
108 | 
109 | # Virtual environments
110 | .venv
111 | venv
112 | env
113 | .env
114 | 
115 | # Node.js
116 | node_modules
117 | 
118 | # Build artifacts
119 | build
120 | dist
121 | .cache
122 | 
123 | # IDE
124 | .idea
125 | .vscode
126 | 
127 | # OS files
128 | .DS_Store
129 | Thumbs.db
130 | desktop.ini
131 | 
132 | # Obsidian
133 | .obsidian
134 | 
135 | # Temporary files
136 | *.tmp
137 | *.swp
138 | *.swo
139 | *~
140 | """)
141 | 
142 | 
143 | def load_bmignore_patterns() -> Set[str]:
144 |     """Load patterns from .bmignore file.
145 | 
146 |     Returns:
147 |         Set of patterns from .bmignore; falls back to DEFAULT_IGNORE_PATTERNS if the file can't be read or is empty
148 |     """
149 |     bmignore_path = get_bmignore_path()
150 | 
151 |     # Create default file if it doesn't exist
152 |     if not bmignore_path.exists():
153 |         create_default_bmignore()
154 | 
155 |     patterns = set()
156 | 
157 |     try:
158 |         with bmignore_path.open("r", encoding="utf-8") as f:
159 |             for line in f:
160 |                 line = line.strip()
161 |                 # Skip empty lines and comments
162 |                 if line and not line.startswith("#"):
163 |                     patterns.add(line)
164 |     except Exception:
165 |         # If we can't read .bmignore, fall back to defaults
166 |         return set(DEFAULT_IGNORE_PATTERNS)
167 | 
168 |     # If no patterns were loaded, use defaults
169 |     if not patterns:
170 |         return set(DEFAULT_IGNORE_PATTERNS)
171 | 
172 |     return patterns
173 | 
174 | 
175 | def load_gitignore_patterns(base_path: Path, use_gitignore: bool = True) -> Set[str]:
176 |     """Load gitignore patterns from .gitignore file and .bmignore.
177 | 
178 |     Combines patterns from:
179 |     1. ~/.basic-memory/.bmignore (user's global ignore patterns)
180 |     2. {base_path}/.gitignore (project-specific patterns, if use_gitignore=True)
181 | 
182 |     Args:
183 |         base_path: The base directory to search for .gitignore file
184 |         use_gitignore: If False, only load patterns from .bmignore (default: True)
185 | 
186 |     Returns:
187 |         Set of patterns to ignore
188 |     """
189 |     # Start with patterns from .bmignore
190 |     patterns = load_bmignore_patterns()
191 | 
192 |     if use_gitignore:
193 |         gitignore_file = base_path / ".gitignore"
194 |         if gitignore_file.exists():
195 |             try:
196 |                 with gitignore_file.open("r", encoding="utf-8") as f:
197 |                     for line in f:
198 |                         line = line.strip()
199 |                         # Skip empty lines and comments
200 |                         if line and not line.startswith("#"):
201 |                             patterns.add(line)
202 |             except Exception:
203 |                 # If we can't read .gitignore, keep the patterns loaded so far
204 |                 pass
205 | 
206 |     return patterns
207 | 
208 | 
209 | def should_ignore_path(file_path: Path, base_path: Path, ignore_patterns: Set[str]) -> bool:
210 |     """Check if a file path should be ignored based on gitignore patterns.
211 | 
212 |     Args:
213 |         file_path: The file path to check
214 |         base_path: The base directory for relative path calculation
215 |         ignore_patterns: Set of patterns to match against
216 | 
217 |     Returns:
218 |         True if the path should be ignored, False otherwise
219 |     """
220 |     # Get the relative path from base
221 |     try:
222 |         relative_path = file_path.relative_to(base_path)
223 |         relative_str = str(relative_path)
224 |         relative_posix = relative_path.as_posix()  # Use forward slashes for matching
225 | 
226 |         # Check each pattern
227 |         for pattern in ignore_patterns:
228 |             # Handle patterns starting with / (root relative)
229 |             if pattern.startswith("/"):
230 |                 root_pattern = pattern[1:]  # Remove leading /
231 | 
232 |                 # For directory patterns ending with /
233 |                 if root_pattern.endswith("/"):
234 |                     dir_name = root_pattern[:-1]  # Remove trailing /
235 |                     # Check if the first part of the path matches the directory name
236 |                     if len(relative_path.parts) > 0 and relative_path.parts[0] == dir_name:
237 |                         return True
238 |                 else:
239 |                     # Regular root-relative pattern
240 |                     if fnmatch.fnmatch(relative_posix, root_pattern):
241 |                         return True
242 |                 continue
243 | 
244 |             # Handle directory patterns (ending with /)
245 |             if pattern.endswith("/"):
246 |                 dir_name = pattern[:-1]  # Remove trailing /
247 |                 # Check if any path part matches the directory name
248 |                 if dir_name in relative_path.parts:
249 |                     return True
250 |                 continue
251 | 
252 |             # Direct name match (e.g., ".git", "node_modules")
253 |             if pattern in relative_path.parts:
254 |                 return True
255 | 
256 |             # Check if any individual path part matches the glob pattern
257 |             # This handles cases like ".*" matching ".hidden.md" in "concept/.hidden.md"
258 |             for part in relative_path.parts:
259 |                 if fnmatch.fnmatch(part, pattern):
260 |                     return True
261 | 
262 |             # Glob pattern match on full path
263 |             if fnmatch.fnmatch(relative_posix, pattern) or fnmatch.fnmatch(relative_str, pattern):
264 |                 return True
265 | 
266 |         return False
267 |     except ValueError:
268 |         # If we can't get relative path, don't ignore
269 |         return False
270 | 
271 | 
272 | def filter_files(
273 |     files: list[Path], base_path: Path, ignore_patterns: Set[str] | None = None
274 | ) -> tuple[list[Path], int]:
275 |     """Filter a list of files based on gitignore patterns.
276 | 
277 |     Args:
278 |         files: List of file paths to filter
279 |         base_path: The base directory for relative path calculation
280 |         ignore_patterns: Set of patterns to ignore. If None, loads from .gitignore
281 | 
282 |     Returns:
283 |         Tuple of (filtered_files, ignored_count)
284 |     """
285 |     if ignore_patterns is None:
286 |         ignore_patterns = load_gitignore_patterns(base_path)
287 | 
288 |     filtered_files = []
289 |     ignored_count = 0
290 | 
291 |     for file_path in files:
292 |         if should_ignore_path(file_path, base_path, ignore_patterns):
293 |             ignored_count += 1
294 |         else:
295 |             filtered_files.append(file_path)
296 | 
297 |     return filtered_files, ignored_count
298 | 
```
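
A minimal usage sketch for the module above (the directory path is hypothetical): load the combined ignore patterns once, then filter a candidate file list with the two public helpers.

```python
from pathlib import Path

from basic_memory.ignore_utils import filter_files, load_gitignore_patterns

# Hypothetical project directory; any folder (optionally with a .gitignore) works.
base = Path("/tmp/example-project")

# Combines ~/.basic-memory/.bmignore with the project's .gitignore, if present.
patterns = load_gitignore_patterns(base)

candidates = [p for p in base.rglob("*") if p.is_file()]
kept, ignored_count = filter_files(candidates, base, patterns)
print(f"kept {len(kept)} files, ignored {ignored_count}")
```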

--------------------------------------------------------------------------------
/v15-docs/cloud-authentication.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Cloud Authentication (SPEC-13)
  2 | 
  3 | **Status**: New Feature
  4 | **PR**: #327
  5 | **Requires**: Active Basic Memory subscription
  6 | 
  7 | ## What's New
  8 | 
  9 | v0.15.0 introduces **JWT-based cloud authentication** with automatic subscription validation. This enables secure access to Basic Memory Cloud features including bidirectional sync, cloud storage, and multi-device access.
 10 | 
 11 | ## Quick Start
 12 | 
 13 | ### Login to Cloud
 14 | 
 15 | ```bash
 16 | # Authenticate with Basic Memory Cloud
 17 | bm cloud login
 18 | 
 19 | # Opens browser for OAuth flow
 20 | # Validates subscription status
 21 | # Stores JWT token locally
 22 | ```
 23 | 
 24 | ### Check Authentication Status
 25 | 
 26 | ```bash
 27 | # View current authentication status
 28 | bm cloud status
 29 | ```
 30 | 
 31 | ### Logout
 32 | 
 33 | ```bash
 34 | # Clear authentication session
 35 | bm cloud logout
 36 | ```
 37 | 
 38 | ## How It Works
 39 | 
 40 | ### Authentication Flow
 41 | 
 42 | 1. **Initiate Login**: `bm cloud login`
 43 | 2. **Browser Opens**: OAuth 2.1 flow with PKCE
 44 | 3. **Authorize**: Login with your Basic Memory account
 45 | 4. **Subscription Check**: Validates active subscription
 46 | 5. **Token Storage**: JWT stored in `~/.basic-memory/cloud-auth.json`
 47 | 6. **Auto-Refresh**: Token automatically refreshed when needed
 48 | 
 49 | ### Subscription Validation
 50 | 
 51 | All cloud commands validate your subscription status:
 52 | 
 53 | **Active Subscription:**
 54 | ```bash
 55 | $ bm cloud sync
 56 | ✓ Syncing with cloud...
 57 | ```
 58 | 
 59 | **No Active Subscription:**
 60 | ```bash
 61 | $ bm cloud sync
 62 | ✗ Active subscription required
 63 | Subscribe at: https://basicmemory.com/subscribe
 64 | ```
 65 | 
 66 | ## Authentication Commands
 67 | 
 68 | ### bm cloud login
 69 | 
 70 | Authenticate with Basic Memory Cloud.
 71 | 
 72 | ```bash
 73 | # Basic login
 74 | bm cloud login
 75 | 
 76 | # Login opens browser automatically
 77 | # Redirects to: https://eloquent-lotus-05.authkit.app/...
 78 | ```
 79 | 
 80 | **What happens:**
 81 | - Opens OAuth authorization in browser
 82 | - Handles PKCE challenge/response
 83 | - Validates subscription
 84 | - Stores JWT token
 85 | - Displays success message
 86 | 
 87 | **Error cases:**
 88 | - No subscription: Shows subscribe URL
 89 | - Network error: Retries with exponential backoff
 90 | - Invalid credentials: Prompts to try again
 91 | 
 92 | ### bm cloud logout
 93 | 
 94 | Clear authentication session.
 95 | 
 96 | ```bash
 97 | bm cloud logout
 98 | ```
 99 | 
100 | **What happens:**
101 | - Removes `~/.basic-memory/cloud-auth.json`
102 | - Clears cached credentials
103 | - Requires re-authentication for cloud commands
104 | 
105 | ### bm cloud status
106 | 
107 | View authentication and sync status.
108 | 
109 | ```bash
110 | bm cloud status
111 | ```
112 | 
113 | **Shows:**
114 | - Authentication status (logged in/out)
115 | - Subscription status (active/expired)
116 | - Last sync time
117 | - Cloud project count
118 | - Tenant information
119 | 
120 | ## Token Management
121 | 
122 | ### Automatic Token Refresh
123 | 
124 | The CLI automatically handles token refresh:
125 | 
126 | ```python
127 | # Internal - happens automatically
128 | async def get_authenticated_headers():
129 |     # Checks token expiration
130 |     # Refreshes if needed
131 |     # Returns valid Bearer token
132 |     return {"Authorization": f"Bearer {token}"}
133 | ```
134 | 
135 | ### Token Storage
136 | 
137 | Location: `~/.basic-memory/cloud-auth.json`
138 | 
139 | ```json
140 | {
141 |   "access_token": "eyJ0eXAiOiJKV1QiLCJhbGc...",
142 |   "refresh_token": "eyJ0eXAiOiJKV1QiLCJhbGc...",
143 |   "expires_at": 1234567890,
144 |   "tenant_id": "org_abc123"
145 | }
146 | ```
147 | 
148 | **Security:**
149 | - File permissions: 600 (user read/write only)
150 | - Tokens expire after 1 hour
151 | - Refresh tokens valid for 30 days
152 | - Never commit this file to git
153 | 
154 | ### Manual Token Revocation
155 | 
156 | To revoke access:
157 | 1. `bm cloud logout` (clears local token)
158 | 2. Visit account settings to revoke all sessions
159 | 
160 | ## Subscription Management
161 | 
162 | ### Check Subscription Status
163 | 
164 | ```bash
165 | # View current subscription
166 | bm cloud status
167 | 
168 | # Shows:
169 | # - Subscription tier
170 | # - Expiration date
171 | # - Features enabled
172 | ```
173 | 
174 | ### Subscribe
175 | 
176 | If you don't have a subscription:
177 | 
178 | ```bash
179 | # Displays subscribe URL
180 | bm cloud login
181 | # > Active subscription required
182 | # > Subscribe at: https://basicmemory.com/subscribe
183 | ```
184 | 
185 | ### Subscription Tiers
186 | 
187 | | Feature | Free | Pro | Team |
188 | |---------|------|-----|------|
189 | | Cloud Authentication | ✓ | ✓ | ✓ |
190 | | Cloud Sync | - | ✓ | ✓ |
191 | | Cloud Storage | - | 10GB | 100GB |
192 | | Multi-device | - | ✓ | ✓ |
193 | | API Access | - | ✓ | ✓ |
194 | 
195 | ## Using Authenticated APIs
196 | 
197 | ### In CLI Commands
198 | 
199 | Authentication is automatic for all cloud commands:
200 | 
201 | ```bash
202 | # These all use stored JWT automatically
203 | bm cloud sync
204 | bm cloud mount
205 | bm cloud check
206 | bm cloud bisync
207 | ```
208 | 
209 | ### In Custom Scripts
210 | 
211 | ```python
212 | from basic_memory.cli.auth import CLIAuth
213 | 
214 | # Get authenticated headers (get_cloud_config comes from the cloud CLI helpers)
215 | client_id, domain, _ = get_cloud_config()
216 | auth = CLIAuth(client_id=client_id, authkit_domain=domain)
217 | token = await auth.get_valid_token()
218 | 
219 | headers = {"Authorization": f"Bearer {token}"}
220 | 
221 | # Use with httpx or requests
222 | import httpx
223 | async with httpx.AsyncClient() as client:
224 |     response = await client.get(
225 |         "https://api.basicmemory.cloud/tenant/projects",
226 |         headers=headers
227 |     )
228 | ```
229 | 
230 | ### Error Handling
231 | 
232 | ```python
233 | from basic_memory.cli.commands.cloud.api_client import (
234 |     CloudAPIError,
235 |     SubscriptionRequiredError
236 | )
237 | 
238 | try:
239 |     response = await make_api_request("GET", url)
240 | except SubscriptionRequiredError as e:
241 |     print(f"Subscription required: {e.message}")
242 |     print(f"Subscribe at: {e.subscribe_url}")
243 | except CloudAPIError as e:
244 |     print(f"API error: {e.status_code} - {e.detail}")
245 | ```
246 | 
247 | ## OAuth Configuration
248 | 
249 | ### Default Settings
250 | 
251 | ```python
252 | # From config.py
253 | cloud_client_id = "client_01K6KWQPW6J1M8VV7R3TZP5A6M"
254 | cloud_domain = "https://eloquent-lotus-05.authkit.app"
255 | cloud_host = "https://api.basicmemory.cloud"
256 | ```
257 | 
258 | ### Custom Configuration
259 | 
260 | Override via environment variables:
261 | 
262 | ```bash
263 | export BASIC_MEMORY_CLOUD_CLIENT_ID="your_client_id"
264 | export BASIC_MEMORY_CLOUD_DOMAIN="https://your-authkit.app"
265 | export BASIC_MEMORY_CLOUD_HOST="https://your-api.example.com"
266 | 
267 | bm cloud login
268 | ```
269 | 
270 | Or in `~/.basic-memory/config.json`:
271 | 
272 | ```json
273 | {
274 |   "cloud_client_id": "your_client_id",
275 |   "cloud_domain": "https://your-authkit.app",
276 |   "cloud_host": "https://your-api.example.com"
277 | }
278 | ```
279 | 
280 | ## Troubleshooting
281 | 
282 | ### "Not authenticated" Error
283 | 
284 | ```bash
285 | $ bm cloud sync
286 | Not authenticated. Please run 'bm cloud login' first.
287 | ```
288 | 
289 | **Solution**: Run `bm cloud login`
290 | 
291 | ### Token Expired
292 | 
293 | ```bash
294 | $ bm cloud status
295 | Token expired, refreshing...
296 | ✓ Authenticated
297 | ```
298 | 
299 | **Automatic**: Token refresh happens automatically
300 | 
301 | ### Subscription Expired
302 | 
303 | ```bash
304 | $ bm cloud sync
305 | Active subscription required
306 | Subscribe at: https://basicmemory.com/subscribe
307 | ```
308 | 
309 | **Solution**: Renew subscription at provided URL
310 | 
311 | ### Browser Not Opening
312 | 
313 | ```bash
314 | $ bm cloud login
315 | # If browser doesn't open automatically:
316 | # Visit this URL: https://eloquent-lotus-05.authkit.app/...
317 | ```
318 | 
319 | **Manual**: Copy/paste URL into browser
320 | 
321 | ### Network Issues
322 | 
323 | ```bash
324 | $ bm cloud login
325 | Connection error, retrying in 2s...
326 | Connection error, retrying in 4s...
327 | ```
328 | 
329 | **Automatic**: Exponential backoff with retries
330 | 
331 | ## Security Best Practices
332 | 
333 | 1. **Never share tokens**: Keep `cloud-auth.json` private
334 | 2. **Use logout**: Always logout on shared machines
335 | 3. **Monitor sessions**: Check `bm cloud status` regularly
336 | 4. **Revoke access**: Use account settings to revoke compromised tokens
337 | 5. **Use HTTPS only**: Cloud commands enforce HTTPS
338 | 
339 | ## Related Commands
340 | 
341 | - `bm cloud sync` - Bidirectional cloud sync (see `cloud-bisync.md`)
342 | - `bm cloud mount` - Mount cloud storage (see `cloud-mount.md`)
343 | - `bm cloud check` - Verify cloud integrity
344 | - `bm cloud status` - View authentication and sync status
345 | 
346 | ## Technical Details
347 | 
348 | ### JWT Claims
349 | 
350 | ```json
351 | {
352 |   "sub": "user_abc123",
353 |   "org_id": "org_xyz789",
354 |   "tenant_id": "org_xyz789",
355 |   "subscription_status": "active",
356 |   "subscription_tier": "pro",
357 |   "exp": 1234567890,
358 |   "iat": 1234564290
359 | }
360 | ```
361 | 
362 | ### API Integration
363 | 
364 | The cloud API validates JWT on every request:
365 | 
366 | ```python
367 | # Middleware validates JWT and extracts tenant context
368 | @app.middleware("http")
369 | async def tenant_middleware(request: Request, call_next):
370 |     token = request.headers.get("Authorization")
371 |     claims = verify_jwt(token)
372 |     request.state.tenant_id = claims["tenant_id"]
373 |     request.state.subscription = claims["subscription_status"]
374 |     # ...
375 | ```
376 | 
377 | ## See Also
378 | 
379 | - SPEC-13: CLI Authentication with Subscription Validation
380 | - `cloud-bisync.md` - Using authenticated sync
381 | - `cloud-mode-usage.md` - Working with cloud APIs
382 | 
```
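
As a sketch of the token-storage layout documented above: a script can check whether the stored token is still valid before calling cloud APIs. The field names are taken from the example JSON in the doc and should be treated as assumptions.

```python
import json
import time
from pathlib import Path

# Field names ("expires_at", "tenant_id") follow the example JSON above;
# the real file layout may differ.
auth_file = Path.home() / ".basic-memory" / "cloud-auth.json"
data = json.loads(auth_file.read_text())

if data["expires_at"] <= time.time():
    print("Token expired; the CLI refreshes it on the next cloud command")
else:
    print(f"Token valid for tenant {data['tenant_id']}")
```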

--------------------------------------------------------------------------------
/test-int/conftest.py:
--------------------------------------------------------------------------------

```python
  1 | """
  2 | Shared fixtures for integration tests.
  3 | 
  4 | Integration tests verify the complete flow: MCP Client → MCP Server → FastAPI → Database.
  5 | Unlike unit tests which use in-memory databases and mocks, integration tests use real SQLite
  6 | files and test the full application stack to ensure all components work together correctly.
  7 | 
  8 | ## Architecture
  9 | 
 10 | The integration test setup creates this flow:
 11 | 
 12 | ```
 13 | Test → MCP Client → MCP Server → HTTP Request (ASGITransport) → FastAPI App → Database
 14 |                                                                       ↑
 15 |                                                                Dependency overrides
 16 |                                                                point to test database
 17 | ```
 18 | 
 19 | ## Key Components
 20 | 
 21 | 1. **Real SQLite Database**: Uses `DatabaseType.FILESYSTEM` with actual SQLite files
 22 |    in temporary directories instead of in-memory databases.
 23 | 
 24 | 2. **Shared Database Connection**: Both MCP server and FastAPI app use the same
 25 |    database via dependency injection overrides.
 26 | 
 27 | 3. **Project Session Management**: Initializes the MCP project session with test
 28 |    project configuration so tools know which project to operate on.
 29 | 
 30 | 4. **Search Index Initialization**: Creates the FTS5 search index tables that
 31 |    the application requires for search functionality.
 32 | 
 33 | 5. **Global Configuration Override**: Modifies the global `basic_memory_app_config`
 34 |    so MCP tools use test project settings instead of user configuration.
 35 | 
 36 | ## Usage
 37 | 
 38 | Integration tests should include both `mcp_server` and `app` fixtures to ensure
 39 | the complete stack is wired correctly:
 40 | 
 41 | ```python
 42 | @pytest.mark.asyncio
 43 | async def test_my_mcp_tool(mcp_server, app):
 44 |     async with Client(mcp_server) as client:
 45 |         result = await client.call_tool("tool_name", {"param": "value"})
 46 |         # Assert on results...
 47 | ```
 48 | 
 49 | The `app` fixture ensures FastAPI dependency overrides are active, and
 50 | `mcp_server` provides the MCP server with proper project session initialization.
 51 | """
 52 | 
 53 | from typing import AsyncGenerator
 54 | 
 55 | import pytest
 56 | import pytest_asyncio
 57 | from pathlib import Path
 58 | 
 59 | from httpx import AsyncClient, ASGITransport
 60 | 
 61 | from basic_memory.config import BasicMemoryConfig, ProjectConfig, ConfigManager
 62 | from basic_memory.db import engine_session_factory, DatabaseType
 63 | from basic_memory.models import Project
 64 | from basic_memory.repository.project_repository import ProjectRepository
 65 | from fastapi import FastAPI
 66 | 
 67 | from basic_memory.deps import get_project_config, get_engine_factory, get_app_config
 68 | 
 69 | 
 70 | # Import MCP tools so they're available for testing
 71 | from basic_memory.mcp import tools  # noqa: F401
 72 | 
 73 | 
 74 | @pytest_asyncio.fixture(scope="function")
 75 | async def engine_factory(tmp_path):
 76 |     """Create a SQLite file engine factory for integration testing."""
 77 |     db_path = tmp_path / "test.db"
 78 |     async with engine_session_factory(db_path, DatabaseType.FILESYSTEM) as (
 79 |         engine,
 80 |         session_maker,
 81 |     ):
 82 |         # Initialize database schema
 83 |         from basic_memory.models.base import Base
 84 | 
 85 |         async with engine.begin() as conn:
 86 |             await conn.run_sync(Base.metadata.create_all)
 87 | 
 88 |         yield engine, session_maker
 89 | 
 90 | 
 91 | @pytest_asyncio.fixture(scope="function")
 92 | async def test_project(config_home, engine_factory) -> Project:
 93 |     """Create a test project."""
 94 |     project_data = {
 95 |         "name": "test-project",
 96 |         "description": "Project used for integration tests",
 97 |         "path": str(config_home),
 98 |         "is_active": True,
 99 |         "is_default": True,
100 |     }
101 | 
102 |     engine, session_maker = engine_factory
103 |     project_repository = ProjectRepository(session_maker)
104 |     project = await project_repository.create(project_data)
105 |     return project
106 | 
107 | 
108 | @pytest.fixture
109 | def config_home(tmp_path, monkeypatch) -> Path:
110 |     monkeypatch.setenv("HOME", str(tmp_path))
111 |     # Set BASIC_MEMORY_HOME to the test directory
112 |     monkeypatch.setenv("BASIC_MEMORY_HOME", str(tmp_path / "basic-memory"))
113 |     return tmp_path
114 | 
115 | 
116 | @pytest.fixture(scope="function", autouse=True)
117 | def app_config(config_home, tmp_path, monkeypatch) -> BasicMemoryConfig:
118 |     """Create test app configuration."""
119 |     # Disable cloud mode for CLI tests
120 |     monkeypatch.setenv("BASIC_MEMORY_CLOUD_MODE", "false")
121 | 
122 |     # Create a basic config with test-project like unit tests do
123 |     projects = {"test-project": str(config_home)}
124 |     app_config = BasicMemoryConfig(
125 |         env="test",
126 |         projects=projects,
127 |         default_project="test-project",
128 |         default_project_mode=False,  # Match real-world usage - tools must pass explicit project
129 |         update_permalinks_on_move=True,
130 |         cloud_mode=False,  # Explicitly disable cloud mode
131 |     )
132 |     return app_config
133 | 
134 | 
135 | @pytest.fixture(scope="function", autouse=True)
136 | def config_manager(app_config: BasicMemoryConfig, config_home) -> ConfigManager:
137 |     config_manager = ConfigManager()
138 |     # Update its paths to use the test directory
139 |     config_manager.config_dir = config_home / ".basic-memory"
140 |     config_manager.config_file = config_manager.config_dir / "config.json"
141 |     config_manager.config_dir.mkdir(parents=True, exist_ok=True)
142 | 
143 |     # Ensure the config file is written to disk
144 |     config_manager.save_config(app_config)
145 |     return config_manager
146 | 
147 | 
148 | @pytest.fixture(scope="function", autouse=True)
149 | def project_config(test_project):
150 |     """Create test project configuration."""
151 | 
152 |     project_config = ProjectConfig(
153 |         name=test_project.name,
154 |         home=Path(test_project.path),
155 |     )
156 | 
157 |     return project_config
158 | 
159 | 
160 | @pytest.fixture(scope="function")
161 | def app(app_config, project_config, engine_factory, test_project, config_manager) -> FastAPI:
162 |     """Create test FastAPI application with single project."""
163 | 
164 |     # Import the FastAPI app AFTER the config_manager has written the test config to disk
165 |     # This ensures that when the app's lifespan manager runs, it reads the correct test config
166 |     from basic_memory.api.app import app as fastapi_app
167 | 
168 |     app = fastapi_app
169 |     app.dependency_overrides[get_project_config] = lambda: project_config
170 |     app.dependency_overrides[get_engine_factory] = lambda: engine_factory
171 |     app.dependency_overrides[get_app_config] = lambda: app_config
172 |     return app
173 | 
174 | 
175 | @pytest_asyncio.fixture(scope="function")
176 | async def search_service(engine_factory, test_project):
177 |     """Create and initialize search service for integration tests."""
178 |     from basic_memory.repository.search_repository import SearchRepository
179 |     from basic_memory.repository.entity_repository import EntityRepository
180 |     from basic_memory.services.file_service import FileService
181 |     from basic_memory.services.search_service import SearchService
182 |     from basic_memory.markdown.markdown_processor import MarkdownProcessor
183 |     from basic_memory.markdown import EntityParser
184 | 
185 |     engine, session_maker = engine_factory
186 | 
187 |     # Create repositories
188 |     search_repository = SearchRepository(session_maker, project_id=test_project.id)
189 |     entity_repository = EntityRepository(session_maker, project_id=test_project.id)
190 | 
191 |     # Create file service
192 |     entity_parser = EntityParser(Path(test_project.path))
193 |     markdown_processor = MarkdownProcessor(entity_parser)
194 |     file_service = FileService(Path(test_project.path), markdown_processor)
195 | 
196 |     # Create and initialize search service
197 |     service = SearchService(search_repository, entity_repository, file_service)
198 |     await service.init_search_index()
199 |     return service
200 | 
201 | 
202 | @pytest.fixture(scope="function")
203 | def mcp_server(config_manager, search_service):
204 |     # Import mcp instance
205 |     from basic_memory.mcp.server import mcp as server
206 | 
207 |     # Import mcp tools to register them
208 |     import basic_memory.mcp.tools  # noqa: F401
209 | 
210 |     # Import prompts to register them
211 |     import basic_memory.mcp.prompts  # noqa: F401
212 | 
213 |     return server
214 | 
215 | 
216 | @pytest_asyncio.fixture(scope="function")
217 | async def client(app: FastAPI) -> AsyncGenerator[AsyncClient, None]:
218 |     """Create test client that both MCP and tests will use."""
219 |     async with AsyncClient(transport=ASGITransport(app=app), base_url="http://test") as client:
220 |         yield client
221 | 
```
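
A sketch of an integration test built on these fixtures, following the usage pattern from the module docstring. The `Client` import and the `write_note` parameters are assumptions (the conftest does not show them); adjust to the actual MCP client in use.

```python
import pytest
from fastmcp import Client  # assumed import; the docstring above uses Client(mcp_server)


@pytest.mark.asyncio
async def test_write_note_roundtrip(mcp_server, app):
    """Exercise the full stack: MCP client -> MCP server -> FastAPI -> SQLite file."""
    async with Client(mcp_server) as client:
        result = await client.call_tool(
            "write_note",
            {
                "title": "Demo Note",
                "folder": "notes",
                "content": "# Demo Note\nHello from an integration test.",
                "project": "test-project",
            },
        )
        assert result is not None
```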

--------------------------------------------------------------------------------
/tests/test_deps.py:
--------------------------------------------------------------------------------

```python
  1 | """Tests for dependency injection functions in deps.py."""
  2 | 
  3 | from datetime import datetime, timezone
  4 | from pathlib import Path
  5 | 
  6 | import pytest
  7 | import pytest_asyncio
  8 | from fastapi import HTTPException
  9 | 
 10 | from basic_memory.deps import get_project_config, get_project_id
 11 | from basic_memory.models.project import Project
 12 | from basic_memory.repository.project_repository import ProjectRepository
 13 | 
 14 | 
 15 | @pytest_asyncio.fixture
 16 | async def project_with_spaces(project_repository: ProjectRepository) -> Project:
 17 |     """Create a project with spaces in the name for testing permalink normalization."""
 18 |     project_data = {
 19 |         "name": "My Test Project",
 20 |         "description": "A project with spaces in the name",
 21 |         "path": "/my/test/project",
 22 |         "is_active": True,
 23 |         "is_default": False,
 24 |         "created_at": datetime.now(timezone.utc),
 25 |         "updated_at": datetime.now(timezone.utc),
 26 |     }
 27 |     return await project_repository.create(project_data)
 28 | 
 29 | 
 30 | @pytest_asyncio.fixture
 31 | async def project_with_special_chars(project_repository: ProjectRepository) -> Project:
 32 |     """Create a project with special characters for testing permalink normalization."""
 33 |     project_data = {
 34 |         "name": "Project: Test & Development!",
 35 |         "description": "A project with special characters",
 36 |         "path": "/project/test/dev",
 37 |         "is_active": True,
 38 |         "is_default": False,
 39 |         "created_at": datetime.now(timezone.utc),
 40 |         "updated_at": datetime.now(timezone.utc),
 41 |     }
 42 |     return await project_repository.create(project_data)
 43 | 
 44 | 
 45 | @pytest.mark.asyncio
 46 | async def test_get_project_config_with_spaces(
 47 |     project_repository: ProjectRepository, project_with_spaces: Project
 48 | ):
 49 |     """Test that get_project_config normalizes project names with spaces."""
 50 |     # The project name has spaces: "My Test Project"
 51 |     # The permalink should be: "my-test-project"
 52 |     assert project_with_spaces.name == "My Test Project"
 53 |     assert project_with_spaces.permalink == "my-test-project"
 54 | 
 55 |     # Call get_project_config with the project name (not permalink)
 56 |     # This simulates what happens when the project name comes from URL path
 57 |     config = await get_project_config(
 58 |         project="My Test Project", project_repository=project_repository
 59 |     )
 60 | 
 61 |     # Verify we got the correct project config
 62 |     assert config.name == "My Test Project"
 63 |     assert config.home == Path("/my/test/project")
 64 | 
 65 | 
 66 | @pytest.mark.asyncio
 67 | async def test_get_project_config_with_permalink(
 68 |     project_repository: ProjectRepository, project_with_spaces: Project
 69 | ):
 70 |     """Test that get_project_config works when already given a permalink."""
 71 |     # Call with the permalink directly
 72 |     config = await get_project_config(
 73 |         project="my-test-project", project_repository=project_repository
 74 |     )
 75 | 
 76 |     # Verify we got the correct project config
 77 |     assert config.name == "My Test Project"
 78 |     assert config.home == Path("/my/test/project")
 79 | 
 80 | 
 81 | @pytest.mark.asyncio
 82 | async def test_get_project_config_with_special_chars(
 83 |     project_repository: ProjectRepository, project_with_special_chars: Project
 84 | ):
 85 |     """Test that get_project_config normalizes project names with special characters."""
 86 |     # The project name has special chars: "Project: Test & Development!"
 87 |     # The permalink should be: "project-test-development"
 88 |     assert project_with_special_chars.name == "Project: Test & Development!"
 89 |     assert project_with_special_chars.permalink == "project-test-development"
 90 | 
 91 |     # Call get_project_config with the project name
 92 |     config = await get_project_config(
 93 |         project="Project: Test & Development!", project_repository=project_repository
 94 |     )
 95 | 
 96 |     # Verify we got the correct project config
 97 |     assert config.name == "Project: Test & Development!"
 98 |     assert config.home == Path("/project/test/dev")
 99 | 
100 | 
101 | @pytest.mark.asyncio
102 | async def test_get_project_config_not_found(project_repository: ProjectRepository):
103 |     """Test that get_project_config raises HTTPException when project not found."""
104 |     with pytest.raises(HTTPException) as exc_info:
105 |         await get_project_config(
106 |             project="Nonexistent Project", project_repository=project_repository
107 |         )
108 | 
109 |     assert exc_info.value.status_code == 404
110 |     assert "Project 'Nonexistent Project' not found" in exc_info.value.detail
111 | 
112 | 
113 | @pytest.mark.asyncio
114 | async def test_get_project_id_with_spaces(
115 |     project_repository: ProjectRepository, project_with_spaces: Project
116 | ):
117 |     """Test that get_project_id normalizes project names with spaces."""
118 |     # Call get_project_id with the project name (not permalink)
119 |     project_id = await get_project_id(
120 |         project_repository=project_repository, project="My Test Project"
121 |     )
122 | 
123 |     # Verify we got the correct project ID
124 |     assert project_id == project_with_spaces.id
125 | 
126 | 
127 | @pytest.mark.asyncio
128 | async def test_get_project_id_with_permalink(
129 |     project_repository: ProjectRepository, project_with_spaces: Project
130 | ):
131 |     """Test that get_project_id works when already given a permalink."""
132 |     # Call with the permalink directly
133 |     project_id = await get_project_id(
134 |         project_repository=project_repository, project="my-test-project"
135 |     )
136 | 
137 |     # Verify we got the correct project ID
138 |     assert project_id == project_with_spaces.id
139 | 
140 | 
141 | @pytest.mark.asyncio
142 | async def test_get_project_id_with_special_chars(
143 |     project_repository: ProjectRepository, project_with_special_chars: Project
144 | ):
145 |     """Test that get_project_id normalizes project names with special characters."""
146 |     # Call get_project_id with the project name
147 |     project_id = await get_project_id(
148 |         project_repository=project_repository, project="Project: Test & Development!"
149 |     )
150 | 
151 |     # Verify we got the correct project ID
152 |     assert project_id == project_with_special_chars.id
153 | 
154 | 
155 | @pytest.mark.asyncio
156 | async def test_get_project_id_not_found(project_repository: ProjectRepository):
157 |     """Test that get_project_id raises HTTPException when project not found."""
158 |     with pytest.raises(HTTPException) as exc_info:
159 |         await get_project_id(project_repository=project_repository, project="Nonexistent Project")
160 | 
161 |     assert exc_info.value.status_code == 404
162 |     assert "Project 'Nonexistent Project' not found" in exc_info.value.detail
163 | 
164 | 
165 | @pytest.mark.asyncio
166 | async def test_get_project_id_fallback_to_name(
167 |     project_repository: ProjectRepository, test_project: Project
168 | ):
169 |     """Test that get_project_id falls back to name lookup if permalink lookup fails.
170 | 
171 |     This test verifies the fallback behavior in get_project_id where it tries
172 |     get_by_name if get_by_permalink returns None.
173 |     """
174 |     # The test_project fixture has name "test-project" and permalink "test-project"
175 |     # Since both are the same, we can't easily test the fallback with existing fixtures
176 |     # So this test just verifies the normal path works with test_project
177 |     project_id = await get_project_id(project_repository=project_repository, project="test-project")
178 | 
179 |     assert project_id == test_project.id
180 | 
181 | 
182 | @pytest.mark.asyncio
183 | async def test_get_project_config_case_sensitivity(
184 |     project_repository: ProjectRepository, project_with_spaces: Project
185 | ):
186 |     """Test that get_project_config handles case variations correctly.
187 | 
188 |     Permalink normalization should convert to lowercase, so different case
189 |     variations of the same name should resolve to the same project.
190 |     """
191 |     # Create project with mixed case: "My Test Project" -> permalink "my-test-project"
192 | 
193 |     # Try with different case variations
194 |     config1 = await get_project_config(
195 |         project="My Test Project", project_repository=project_repository
196 |     )
197 |     config2 = await get_project_config(
198 |         project="my test project", project_repository=project_repository
199 |     )
200 |     config3 = await get_project_config(
201 |         project="MY TEST PROJECT", project_repository=project_repository
202 |     )
203 | 
204 |     # All should resolve to the same project
205 |     assert config1.name == config2.name == config3.name == "My Test Project"
206 |     assert config1.home == config2.home == config3.home == Path("/my/test/project")
207 | 
```
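
For reference, an illustrative re-implementation of the permalink normalization these tests rely on. The real logic lives inside basic_memory; this sketch only mirrors the behavior asserted above (lowercase, collapse special characters and spaces to hyphens, trim).

```python
import re


def normalize_permalink(name: str) -> str:
    """Lowercase, collapse runs of non-alphanumerics to '-', trim stray hyphens."""
    permalink = re.sub(r"[^a-z0-9]+", "-", name.lower())
    return permalink.strip("-")


assert normalize_permalink("My Test Project") == "my-test-project"
assert normalize_permalink("Project: Test & Development!") == "project-test-development"
```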

--------------------------------------------------------------------------------
/tests/api/test_template_loader.py:
--------------------------------------------------------------------------------

```python
  1 | """Tests for the template loader functionality."""
  2 | 
  3 | import datetime
  4 | import pytest
  5 | from pathlib import Path
  6 | 
  7 | from basic_memory.api.template_loader import TemplateLoader
  8 | 
  9 | 
 10 | @pytest.fixture
 11 | def temp_template_dir(tmpdir):
 12 |     """Create a temporary directory for test templates."""
 13 |     template_dir = tmpdir.mkdir("templates").mkdir("prompts")
 14 |     return template_dir
 15 | 
 16 | 
 17 | @pytest.fixture
 18 | def custom_template_loader(temp_template_dir):
 19 |     """Return a TemplateLoader instance with a custom template directory."""
 20 |     return TemplateLoader(str(temp_template_dir))
 21 | 
 22 | 
 23 | @pytest.fixture
 24 | def simple_template(temp_template_dir):
 25 |     """Create a simple test template."""
 26 |     template_path = temp_template_dir / "simple.hbs"
 27 |     template_path.write_text("Hello, {{name}}!", encoding="utf-8")
 28 |     return "simple.hbs"
 29 | 
 30 | 
 31 | @pytest.mark.asyncio
 32 | async def test_render_simple_template(custom_template_loader, simple_template):
 33 |     """Test rendering a simple template."""
 34 |     context = {"name": "World"}
 35 |     result = await custom_template_loader.render(simple_template, context)
 36 |     assert result == "Hello, World!"
 37 | 
 38 | 
 39 | @pytest.mark.asyncio
 40 | async def test_template_cache(custom_template_loader, simple_template):
 41 |     """Test that templates are cached."""
 42 |     context = {"name": "World"}
 43 | 
 44 |     # First render, should load template
 45 |     await custom_template_loader.render(simple_template, context)
 46 | 
 47 |     # Check that template is in cache
 48 |     assert simple_template in custom_template_loader.template_cache
 49 | 
 50 |     # Modify the template file - shouldn't affect the cached version
 51 |     template_path = Path(custom_template_loader.template_dir) / simple_template
 52 |     template_path.write_text("Goodbye, {{name}}!", encoding="utf-8")
 53 | 
 54 |     # Second render, should use cached template
 55 |     result = await custom_template_loader.render(simple_template, context)
 56 |     assert result == "Hello, World!"
 57 | 
 58 |     # Clear cache and render again - should use updated template
 59 |     custom_template_loader.clear_cache()
 60 |     assert simple_template not in custom_template_loader.template_cache
 61 | 
 62 |     result = await custom_template_loader.render(simple_template, context)
 63 |     assert result == "Goodbye, World!"
 64 | 
 65 | 
 66 | @pytest.mark.asyncio
 67 | async def test_date_helper(custom_template_loader, temp_template_dir):
 68 |     # Test date helper
 69 |     date_path = temp_template_dir / "date.hbs"
 70 |     date_path.write_text("{{date timestamp}}", encoding="utf-8")
 71 |     date_result = await custom_template_loader.render(
 72 |         "date.hbs", {"timestamp": datetime.datetime(2023, 1, 1, 12, 30)}
 73 |     )
 74 |     assert "2023-01-01" in date_result
 75 | 
 76 | 
 77 | @pytest.mark.asyncio
 78 | async def test_default_helper(custom_template_loader, temp_template_dir):
 79 |     # Test default helper
 80 |     default_path = temp_template_dir / "default.hbs"
 81 |     default_path.write_text("{{default null 'default-value'}}", encoding="utf-8")
 82 |     default_result = await custom_template_loader.render("default.hbs", {"null": None})
 83 |     assert default_result == "default-value"
 84 | 
 85 | 
 86 | @pytest.mark.asyncio
 87 | async def test_capitalize_helper(custom_template_loader, temp_template_dir):
 88 |     """Test the capitalize helper."""
 89 |     capitalize_path = temp_template_dir / "capitalize.hbs"
 90 |     capitalize_path.write_text("{{capitalize 'test'}}", encoding="utf-8")
 91 |     capitalize_result = await custom_template_loader.render("capitalize.hbs", {})
 92 |     assert capitalize_result == "Test"
 93 | 
 94 | 
 95 | @pytest.mark.asyncio
 96 | async def test_size_helper(custom_template_loader, temp_template_dir):
 97 |     """Test the size helper."""
 98 |     size_path = temp_template_dir / "size.hbs"
 99 |     size_path.write_text("{{size collection}}", encoding="utf-8")
100 |     size_result = await custom_template_loader.render("size.hbs", {"collection": [1, 2, 3]})
101 |     assert size_result == "3"
102 | 
103 | 
104 | @pytest.mark.asyncio
105 | async def test_json_helper(custom_template_loader, temp_template_dir):
106 |     """Test the json helper."""
107 |     json_path = temp_template_dir / "json.hbs"
108 |     json_path.write_text("{{json data}}", encoding="utf-8")
109 |     json_result = await custom_template_loader.render("json.hbs", {"data": {"key": "value"}})
110 |     assert json_result == '{"key": "value"}'
111 | 
112 | 
113 | @pytest.mark.asyncio
114 | async def test_less_than_helper(custom_template_loader, temp_template_dir):
115 |     """Test the lt (less than) helper."""
116 |     lt_path = temp_template_dir / "lt.hbs"
117 |     lt_path.write_text("{{#if_cond (lt 2 3)}}true{{else}}false{{/if_cond}}", encoding="utf-8")
118 |     lt_result = await custom_template_loader.render("lt.hbs", {})
119 |     assert lt_result == "true"
120 | 
121 | 
122 | @pytest.mark.asyncio
123 | async def test_file_not_found(custom_template_loader):
124 |     """Test that FileNotFoundError is raised when a template doesn't exist."""
125 |     with pytest.raises(FileNotFoundError):
126 |         await custom_template_loader.render("non_existent_template.hbs", {})
127 | 
128 | 
129 | @pytest.mark.asyncio
130 | async def test_extension_handling(custom_template_loader, temp_template_dir):
131 |     """Test that template extensions are handled correctly."""
132 |     # Create template with .hbs extension
133 |     template_path = temp_template_dir / "test_extension.hbs"
134 |     template_path.write_text("Template with extension: {{value}}", encoding="utf-8")
135 | 
136 |     # Test accessing with full extension
137 |     result = await custom_template_loader.render("test_extension.hbs", {"value": "works"})
138 |     assert result == "Template with extension: works"
139 | 
140 |     # Test accessing without extension
141 |     result = await custom_template_loader.render("test_extension", {"value": "also works"})
142 |     assert result == "Template with extension: also works"
143 | 
144 |     # Test that a wrong extension is converted to .hbs
145 |     template_path = temp_template_dir / "liquid_template.hbs"
146 |     template_path.write_text("Liquid template: {{value}}", encoding="utf-8")
147 | 
148 |     result = await custom_template_loader.render("liquid_template.liquid", {"value": "converted"})
149 |     assert result == "Liquid template: converted"
150 | 
151 | 
152 | @pytest.mark.asyncio
153 | async def test_dedent_helper(custom_template_loader, temp_template_dir):
154 |     """Test the dedent helper for text blocks."""
155 |     dedent_path = temp_template_dir / "dedent.hbs"
156 | 
157 |     # Create a template with indented text blocks
158 |     template_content = """Before
159 |     {{#dedent}}
160 |         This is indented text
161 |             with nested indentation
162 |         that should be dedented
163 |         while preserving relative indentation
164 |     {{/dedent}}
165 | After"""
166 | 
167 |     dedent_path.write_text(template_content, encoding="utf-8")
168 | 
169 |     # Render the template
170 |     result = await custom_template_loader.render("dedent.hbs", {})
171 | 
172 |     # Print the actual output for debugging
173 |     print(f"Dedent helper result: {repr(result)}")
174 | 
175 |     # Check that the indentation is properly removed
176 |     assert "This is indented text" in result
177 |     assert "with nested indentation" in result
178 |     assert "that should be dedented" in result
179 |     assert "while preserving relative indentation" in result
180 |     assert "Before" in result
181 |     assert "After" in result
182 | 
183 |     # Check that relative indentation is preserved
184 |     assert result.find("with nested indentation") > result.find("This is indented text")
185 | 
186 | 
187 | @pytest.mark.asyncio
188 | async def test_nested_dedent_helper(custom_template_loader, temp_template_dir):
189 |     """Test the dedent helper with nested content."""
190 |     dedent_path = temp_template_dir / "nested_dedent.hbs"
191 | 
192 |     # Create a template with nested indented blocks
193 |     template_content = """
194 | {{#each items}}
195 |     {{#dedent}}
196 |         --- Item {{this}}
197 |         
198 |         Details for item {{this}}
199 |           - Indented detail 1
200 |           - Indented detail 2
201 |     {{/dedent}}
202 | {{/each}}"""
203 | 
204 |     dedent_path.write_text(template_content, encoding="utf-8")
205 | 
206 |     # Render the template
207 |     result = await custom_template_loader.render("nested_dedent.hbs", {"items": [1, 2]})
208 | 
209 |     # Print the actual output for debugging
210 |     print(f"Actual result: {repr(result)}")
211 | 
212 |     # Use a more flexible assertion that checks individual components
213 |     # instead of exact string matching
214 |     assert "--- Item 1" in result
215 |     assert "Details for item 1" in result
216 |     assert "- Indented detail 1" in result
217 |     assert "--- Item 2" in result
218 |     assert "Details for item 2" in result
219 |     assert "- Indented detail 2" in result
220 | 
```
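
The caching tests above pin down a specific contract: the first render reads the template from disk, later renders reuse the cached source, and `clear_cache()` forces a reload. Below is a minimal sketch of that contract; `SketchLoader` and the regex substitution are illustrative stand-ins, not the real `TemplateLoader`.

```python
import re
from pathlib import Path


class SketchLoader:
    """Minimal cache-on-first-render loader, mirroring the tested contract."""

    def __init__(self, template_dir: str):
        self.template_dir = template_dir
        self.template_cache: dict[str, str] = {}

    async def render(self, name: str, context: dict) -> str:
        # First render loads from disk; later renders reuse the cached source,
        # so edits on disk are invisible until clear_cache() is called.
        if name not in self.template_cache:
            path = Path(self.template_dir) / name
            self.template_cache[name] = path.read_text(encoding="utf-8")
        source = self.template_cache[name]
        return re.sub(r"\{\{(\w+)\}\}", lambda m: str(context.get(m.group(1), "")), source)

    def clear_cache(self) -> None:
        self.template_cache.clear()
```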

--------------------------------------------------------------------------------
/src/basic_memory/models/knowledge.py:
--------------------------------------------------------------------------------

```python
  1 | """Knowledge graph models."""
  2 | 
  3 | from datetime import datetime
  4 | from basic_memory.utils import ensure_timezone_aware
  5 | from typing import Optional
  6 | 
  7 | from sqlalchemy import (
  8 |     Integer,
  9 |     String,
 10 |     Text,
 11 |     ForeignKey,
 12 |     UniqueConstraint,
 13 |     DateTime,
 14 |     Index,
 15 |     JSON,
 16 |     Float,
 17 |     text,
 18 | )
 19 | from sqlalchemy.orm import Mapped, mapped_column, relationship
 20 | 
 21 | from basic_memory.models.base import Base
 22 | from basic_memory.utils import generate_permalink
 23 | 
 24 | 
 25 | class Entity(Base):
 26 |     """Core entity in the knowledge graph.
 27 | 
 28 |     Entities represent semantic nodes maintained by the AI layer. Each entity:
 29 |     - Has a unique numeric ID (database-generated)
 30 |     - Maps to a file on disk
 31 |     - Maintains a checksum for change detection
 32 |     - Tracks both source file and semantic properties
 33 |     - Belongs to a specific project
 34 |     """
 35 | 
 36 |     __tablename__ = "entity"
 37 |     __table_args__ = (
 38 |         # Regular indexes
 39 |         Index("ix_entity_type", "entity_type"),
 40 |         Index("ix_entity_title", "title"),
 41 |         Index("ix_entity_created_at", "created_at"),  # For timeline queries
 42 |         Index("ix_entity_updated_at", "updated_at"),  # For timeline queries
 43 |         Index("ix_entity_project_id", "project_id"),  # For project filtering
 44 |         # Project-specific uniqueness constraints
 45 |         Index(
 46 |             "uix_entity_permalink_project",
 47 |             "permalink",
 48 |             "project_id",
 49 |             unique=True,
 50 |             sqlite_where=text("content_type = 'text/markdown' AND permalink IS NOT NULL"),
 51 |         ),
 52 |         Index(
 53 |             "uix_entity_file_path_project",
 54 |             "file_path",
 55 |             "project_id",
 56 |             unique=True,
 57 |         ),
 58 |     )
 59 | 
 60 |     # Core identity
 61 |     id: Mapped[int] = mapped_column(Integer, primary_key=True)
 62 |     title: Mapped[str] = mapped_column(String)
 63 |     entity_type: Mapped[str] = mapped_column(String)
 64 |     entity_metadata: Mapped[Optional[dict]] = mapped_column(JSON, nullable=True)
 65 |     content_type: Mapped[str] = mapped_column(String)
 66 | 
 67 |     # Project reference
 68 |     project_id: Mapped[int] = mapped_column(Integer, ForeignKey("project.id"), nullable=False)
 69 | 
 70 |     # Normalized path for URIs - required for markdown files only
 71 |     permalink: Mapped[Optional[str]] = mapped_column(String, nullable=True, index=True)
 72 |     # Actual filesystem relative path
 73 |     file_path: Mapped[str] = mapped_column(String, index=True)
 74 |     # checksum of file
 75 |     checksum: Mapped[Optional[str]] = mapped_column(String, nullable=True)
 76 | 
 77 |     # File metadata for sync
 78 |     # mtime: file modification timestamp (Unix epoch float) for change detection
 79 |     mtime: Mapped[Optional[float]] = mapped_column(Float, nullable=True)
 80 |     # size: file size in bytes for quick change detection
 81 |     size: Mapped[Optional[int]] = mapped_column(Integer, nullable=True)
 82 | 
 83 |     # Metadata and tracking
 84 |     created_at: Mapped[datetime] = mapped_column(
 85 |         DateTime(timezone=True), default=lambda: datetime.now().astimezone()
 86 |     )
 87 |     updated_at: Mapped[datetime] = mapped_column(
 88 |         DateTime(timezone=True),
 89 |         default=lambda: datetime.now().astimezone(),
 90 |         onupdate=lambda: datetime.now().astimezone(),
 91 |     )
 92 | 
 93 |     # Relationships
 94 |     project = relationship("Project", back_populates="entities")
 95 |     observations = relationship(
 96 |         "Observation", back_populates="entity", cascade="all, delete-orphan"
 97 |     )
 98 |     outgoing_relations = relationship(
 99 |         "Relation",
100 |         back_populates="from_entity",
101 |         foreign_keys="[Relation.from_id]",
102 |         cascade="all, delete-orphan",
103 |     )
104 |     incoming_relations = relationship(
105 |         "Relation",
106 |         back_populates="to_entity",
107 |         foreign_keys="[Relation.to_id]",
108 |         cascade="all, delete-orphan",
109 |     )
110 | 
111 |     @property
112 |     def relations(self):
113 |         """Get all relations (incoming and outgoing) for this entity."""
114 |         return self.incoming_relations + self.outgoing_relations
115 | 
116 |     @property
117 |     def is_markdown(self):
118 |         """Check if the entity is a markdown file."""
119 |         return self.content_type == "text/markdown"
120 | 
121 |     def __getattribute__(self, name):
122 |         """Override attribute access to ensure datetime fields are timezone-aware."""
123 |         value = super().__getattribute__(name)
124 | 
125 |         # Ensure datetime fields are timezone-aware
126 |         if name in ("created_at", "updated_at") and isinstance(value, datetime):
127 |             return ensure_timezone_aware(value)
128 | 
129 |         return value
130 | 
131 |     def __repr__(self) -> str:
132 |         return f"Entity(id={self.id}, name='{self.title}', type='{self.entity_type}')"
133 | 
134 | 
135 | class Observation(Base):
136 |     """An observation about an entity.
137 | 
138 |     Observations are atomic facts or notes about an entity.
139 |     """
140 | 
141 |     __tablename__ = "observation"
142 |     __table_args__ = (
143 |         Index("ix_observation_entity_id", "entity_id"),  # Add FK index
144 |         Index("ix_observation_category", "category"),  # Add category index
145 |     )
146 | 
147 |     id: Mapped[int] = mapped_column(Integer, primary_key=True)
148 |     entity_id: Mapped[int] = mapped_column(Integer, ForeignKey("entity.id", ondelete="CASCADE"))
149 |     content: Mapped[str] = mapped_column(Text)
150 |     category: Mapped[str] = mapped_column(String, nullable=False, default="note")
151 |     context: Mapped[Optional[str]] = mapped_column(Text, nullable=True)
152 |     tags: Mapped[Optional[list[str]]] = mapped_column(
153 |         JSON, nullable=True, default=list, server_default="[]"
154 |     )
155 | 
156 |     # Relationships
157 |     entity = relationship("Entity", back_populates="observations")
158 | 
159 |     @property
160 |     def permalink(self) -> str:
161 |         """Create synthetic permalink for the observation.
162 | 
163 |         We can construct these because observations are always defined in
164 |         and owned by a single entity.
165 |         """
166 |         return generate_permalink(
167 |             f"{self.entity.permalink}/observations/{self.category}/{self.content}"
168 |         )
169 | 
170 |     def __repr__(self) -> str:  # pragma: no cover
171 |         return f"Observation(id={self.id}, entity_id={self.entity_id}, content='{self.content}')"
172 | 
173 | 
174 | class Relation(Base):
175 |     """A directed relation between two entities."""
176 | 
177 |     __tablename__ = "relation"
178 |     __table_args__ = (
179 |         UniqueConstraint("from_id", "to_id", "relation_type", name="uix_relation_from_id_to_id"),
180 |         UniqueConstraint(
181 |             "from_id", "to_name", "relation_type", name="uix_relation_from_id_to_name"
182 |         ),
183 |         Index("ix_relation_type", "relation_type"),
184 |         Index("ix_relation_from_id", "from_id"),  # Add FK indexes
185 |         Index("ix_relation_to_id", "to_id"),
186 |     )
187 | 
188 |     id: Mapped[int] = mapped_column(Integer, primary_key=True)
189 |     from_id: Mapped[int] = mapped_column(Integer, ForeignKey("entity.id", ondelete="CASCADE"))
190 |     to_id: Mapped[Optional[int]] = mapped_column(
191 |         Integer, ForeignKey("entity.id", ondelete="CASCADE"), nullable=True
192 |     )
193 |     to_name: Mapped[str] = mapped_column(String)
194 |     relation_type: Mapped[str] = mapped_column(String)
195 |     context: Mapped[Optional[str]] = mapped_column(Text, nullable=True)
196 | 
197 |     # Relationships
198 |     from_entity = relationship(
199 |         "Entity", foreign_keys=[from_id], back_populates="outgoing_relations"
200 |     )
201 |     to_entity = relationship("Entity", foreign_keys=[to_id], back_populates="incoming_relations")
202 | 
203 |     @property
204 |     def permalink(self) -> str:
205 |         """Create relation permalink showing the semantic connection.
206 | 
207 |         Format: source/relation_type/target
208 |         Example: "specs/search/implements/features/search-ui"
209 |         """
210 |         # Prefer permalinks; fall back to file paths when one is missing
211 |         from_permalink = self.from_entity.permalink or self.from_entity.file_path
212 | 
213 |         if self.to_entity:
214 |             to_permalink = self.to_entity.permalink or self.to_entity.file_path
215 |             return generate_permalink(f"{from_permalink}/{self.relation_type}/{to_permalink}")
216 |         return generate_permalink(f"{from_permalink}/{self.relation_type}/{self.to_name}")
217 | 
218 |     def __repr__(self) -> str:
219 |         return f"Relation(id={self.id}, from_id={self.from_id}, to_id={self.to_id}, to_name={self.to_name}, type='{self.relation_type}')"  # pragma: no cover
220 | 
```
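
`Entity` routes `created_at`/`updated_at` reads through `__getattribute__` so callers always see timezone-aware values, even if a row was stored naive. A stripped-down sketch of the pattern, with `ensure_tz` as a hypothetical stand-in for `basic_memory.utils.ensure_timezone_aware`:

```python
from datetime import datetime


def ensure_tz(value: datetime) -> datetime:
    # Assume naive datetimes are local time; attach tzinfo on read.
    return value if value.tzinfo else value.astimezone()


class TimestampedSketch:
    def __init__(self) -> None:
        self.created_at = datetime(2023, 1, 1, 12, 0)  # naive on purpose

    def __getattribute__(self, name):
        value = super().__getattribute__(name)
        if name == "created_at" and isinstance(value, datetime):
            return ensure_tz(value)
        return value


obj = TimestampedSketch()
assert obj.created_at.tzinfo is not None  # reads are always timezone-aware
```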

--------------------------------------------------------------------------------
/src/basic_memory/mcp/tools/project_management.py:
--------------------------------------------------------------------------------

```python
  1 | """Project management tools for Basic Memory MCP server.
  2 | 
  3 | These tools allow users to switch between projects, list available projects,
  4 | and manage project context during conversations.
  5 | """
  6 | 
  7 | import os
  8 | from fastmcp import Context
  9 | 
 10 | from basic_memory.mcp.async_client import get_client
 11 | from basic_memory.mcp.server import mcp
 12 | from basic_memory.mcp.tools.utils import call_get, call_post, call_delete
 13 | from basic_memory.schemas.project_info import (
 14 |     ProjectList,
 15 |     ProjectStatusResponse,
 16 |     ProjectInfoRequest,
 17 | )
 18 | from basic_memory.utils import generate_permalink
 19 | 
 20 | 
 21 | @mcp.tool("list_memory_projects")
 22 | async def list_memory_projects(context: Context | None = None) -> str:
 23 |     """List all available projects with their status.
 24 | 
 25 |     Shows all Basic Memory projects that are available for MCP operations.
 26 |     Use this tool to discover projects when you need to know which project to use.
 27 | 
 28 |     Use this tool:
 29 |     - At conversation start when project is unknown
 30 |     - When user asks about available projects
 31 |     - Before any operation requiring a project
 32 | 
 33 |     After calling:
 34 |     - Ask user which project to use
 35 |     - Remember their choice for the session
 36 | 
 37 |     Returns:
 38 |         Formatted list of projects with session management guidance
 39 | 
 40 |     Example:
 41 |         list_memory_projects()
 42 |     """
 43 |     async with get_client() as client:
 44 |         if context:  # pragma: no cover
 45 |             await context.info("Listing all available projects")
 46 | 
 47 |         # Check if server is constrained to a specific project
 48 |         constrained_project = os.environ.get("BASIC_MEMORY_MCP_PROJECT")
 49 | 
 50 |         # Get projects from API
 51 |         response = await call_get(client, "/projects/projects")
 52 |         project_list = ProjectList.model_validate(response.json())
 53 | 
 54 |         if constrained_project:
 55 |             result = f"Project: {constrained_project}\n\n"
 56 |             result += "Note: This MCP server is constrained to a single project.\n"
 57 |             result += "All operations will automatically use this project."
 58 |         else:
 59 |             # Show all projects with session guidance
 60 |             result = "Available projects:\n"
 61 | 
 62 |             for project in project_list.projects:
 63 |                 result += f"• {project.name}\n"
 64 | 
 65 |             result += "\n" + "─" * 40 + "\n"
 66 |             result += "Next: Ask which project to use for this session.\n"
 67 |             result += "Example: 'Which project should I use for this task?'\n\n"
 68 |             result += "Session reminder: Track the selected project for all subsequent operations in this conversation.\n"
 69 |             result += "The user can say 'switch to [project]' to change projects."
 70 | 
 71 |         return result
 72 | 
 73 | 
 74 | @mcp.tool("create_memory_project")
 75 | async def create_memory_project(
 76 |     project_name: str, project_path: str, set_default: bool = False, context: Context | None = None
 77 | ) -> str:
 78 |     """Create a new Basic Memory project.
 79 | 
 80 |     Creates a new project with the specified name and path. The project directory
 81 |     will be created if it doesn't exist. Optionally sets the new project as default.
 82 | 
 83 |     Args:
 84 |         project_name: Name for the new project (must be unique)
 85 |         project_path: File system path where the project will be stored
 86 |         set_default: Whether to set this project as the default (optional, defaults to False)
 87 | 
 88 |     Returns:
 89 |         Confirmation message with project details
 90 | 
 91 |     Example:
 92 |         create_memory_project("my-research", "~/Documents/research")
 93 |         create_memory_project("work-notes", "/home/user/work", set_default=True)
 94 |     """
 95 |     async with get_client() as client:
 96 |         # Check if server is constrained to a specific project
 97 |         constrained_project = os.environ.get("BASIC_MEMORY_MCP_PROJECT")
 98 |         if constrained_project:
 99 |             return f'# Error\n\nProject creation disabled - MCP server is constrained to project \'{constrained_project}\'.\nUse the CLI to create projects: `basic-memory project add "{project_name}" "{project_path}"`'
100 | 
101 |         if context:  # pragma: no cover
102 |             await context.info(f"Creating project: {project_name} at {project_path}")
103 | 
104 |         # Create the project request
105 |         project_request = ProjectInfoRequest(
106 |             name=project_name, path=project_path, set_default=set_default
107 |         )
108 | 
109 |         # Call API to create project
110 |         response = await call_post(client, "/projects/projects", json=project_request.model_dump())
111 |         status_response = ProjectStatusResponse.model_validate(response.json())
112 | 
113 |         result = f"✓ {status_response.message}\n\n"
114 | 
115 |         if status_response.new_project:
116 |             result += "Project Details:\n"
117 |             result += f"• Name: {status_response.new_project.name}\n"
118 |             result += f"• Path: {status_response.new_project.path}\n"
119 | 
120 |             if set_default:
121 |                 result += "• Set as default project\n"
122 | 
123 |         result += "\nProject is now available for use in tool calls.\n"
124 |         result += f"Use '{project_name}' as the project parameter in MCP tool calls.\n"
125 | 
126 |         return result
127 | 
128 | 
129 | @mcp.tool()
130 | async def delete_project(project_name: str, context: Context | None = None) -> str:
131 |     """Delete a Basic Memory project.
132 | 
133 |     Removes a project from the configuration and database. This does NOT delete
134 |     the actual files on disk - only removes the project from Basic Memory's
135 |     configuration and database records.
136 | 
137 |     Args:
138 |         project_name: Name of the project to delete
139 | 
140 |     Returns:
141 |         Confirmation message about project deletion
142 | 
143 |     Example:
144 |         delete_project("old-project")
145 | 
146 |     Warning:
147 |         This action cannot be undone. The project will need to be re-added
148 |         to access its content through Basic Memory again.
149 |     """
150 |     async with get_client() as client:
151 |         # Check if server is constrained to a specific project
152 |         constrained_project = os.environ.get("BASIC_MEMORY_MCP_PROJECT")
153 |         if constrained_project:
154 |             return f"# Error\n\nProject deletion disabled - MCP server is constrained to project '{constrained_project}'.\nUse the CLI to delete projects: `basic-memory project remove \"{project_name}\"`"
155 | 
156 |         if context:  # pragma: no cover
157 |             await context.info(f"Deleting project: {project_name}")
158 | 
159 |         # Get project info before deletion to validate it exists
160 |         response = await call_get(client, "/projects/projects")
161 |         project_list = ProjectList.model_validate(response.json())
162 | 
163 |         # Find the project by name (case-insensitive) or permalink - same logic as switch_project
164 |         project_permalink = generate_permalink(project_name)
165 |         target_project = None
166 |         for p in project_list.projects:
167 |             # Match by permalink (handles case-insensitive input)
168 |             if p.permalink == project_permalink:
169 |                 target_project = p
170 |                 break
171 |             # Also match by name comparison (case-insensitive)
172 |             if p.name.lower() == project_name.lower():
173 |                 target_project = p
174 |                 break
175 | 
176 |         if not target_project:
177 |             available_projects = [p.name for p in project_list.projects]
178 |             raise ValueError(
179 |                 f"Project '{project_name}' not found. Available projects: {', '.join(available_projects)}"
180 |             )
181 | 
182 |         # Call API to delete project using URL encoding for special characters
183 |         from urllib.parse import quote
184 | 
185 |         encoded_name = quote(target_project.name, safe="")
186 |         response = await call_delete(client, f"/projects/{encoded_name}")
187 |         status_response = ProjectStatusResponse.model_validate(response.json())
188 | 
189 |         result = f"✓ {status_response.message}\n\n"
190 | 
191 |         if status_response.old_project:
192 |             result += "Removed project details:\n"
193 |             result += f"• Name: {status_response.old_project.name}\n"
194 |             if hasattr(status_response.old_project, "path"):
195 |                 result += f"• Path: {status_response.old_project.path}\n"
196 | 
197 |         result += "Files remain on disk but project is no longer tracked by Basic Memory.\n"
198 |         result += "Re-add the project to access its content again.\n"
199 | 
200 |         return result
201 | 
```
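
`delete_project` resolves the user's input in two steps: match the generated permalink first, then fall back to a case-insensitive name comparison. A self-contained sketch of that lookup, where `slugify` is a simplified stand-in for `generate_permalink`:

```python
from dataclasses import dataclass


def slugify(name: str) -> str:
    # Simplified permalink generation for illustration only.
    return name.strip().lower().replace(" ", "-")


@dataclass
class Project:
    name: str
    permalink: str


def find_project(projects: list[Project], user_input: str) -> Project | None:
    wanted = slugify(user_input)
    for p in projects:
        # Permalink match handles case and spacing; name match is a fallback.
        if p.permalink == wanted or p.name.lower() == user_input.lower():
            return p
    return None


projects = [Project("My Research", "my-research")]
assert find_project(projects, "my research").name == "My Research"
assert find_project(projects, "MY-RESEARCH") is not None
```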

--------------------------------------------------------------------------------
/src/basic_memory/api/routers/resource_router.py:
--------------------------------------------------------------------------------

```python
  1 | """Routes for getting entity content."""
  2 | 
  3 | import tempfile
  4 | from pathlib import Path
  5 | from typing import Annotated
  6 | 
  7 | from fastapi import APIRouter, HTTPException, BackgroundTasks, Body
  8 | from fastapi.responses import FileResponse, JSONResponse
  9 | from loguru import logger
 10 | 
 11 | from basic_memory.deps import (
 12 |     ProjectConfigDep,
 13 |     LinkResolverDep,
 14 |     SearchServiceDep,
 15 |     EntityServiceDep,
 16 |     FileServiceDep,
 17 |     EntityRepositoryDep,
 18 | )
 19 | from basic_memory.repository.search_repository import SearchIndexRow
 20 | from basic_memory.schemas.memory import normalize_memory_url
 21 | from basic_memory.schemas.search import SearchQuery, SearchItemType
 22 | from basic_memory.models.knowledge import Entity as EntityModel
 23 | from datetime import datetime
 24 | 
 25 | router = APIRouter(prefix="/resource", tags=["resources"])
 26 | 
 27 | 
 28 | def get_entity_ids(item: SearchIndexRow) -> set[int]:
 29 |     match item.type:
 30 |         case SearchItemType.ENTITY:
 31 |             return {item.id}
 32 |         case SearchItemType.OBSERVATION:
 33 |             return {item.entity_id}  # pyright: ignore [reportReturnType]
 34 |         case SearchItemType.RELATION:
 35 |             from_entity = item.from_id
 36 |             to_entity = item.to_id  # pyright: ignore [reportReturnType]
 37 |             return {from_entity, to_entity} if to_entity else {from_entity}  # pyright: ignore [reportReturnType]
 38 |         case _:  # pragma: no cover
 39 |             raise ValueError(f"Unexpected type: {item.type}")
 40 | 
 41 | 
 42 | @router.get("/{identifier:path}")
 43 | async def get_resource_content(
 44 |     config: ProjectConfigDep,
 45 |     link_resolver: LinkResolverDep,
 46 |     search_service: SearchServiceDep,
 47 |     entity_service: EntityServiceDep,
 48 |     file_service: FileServiceDep,
 49 |     background_tasks: BackgroundTasks,
 50 |     identifier: str,
 51 |     page: int = 1,
 52 |     page_size: int = 10,
 53 | ) -> FileResponse:
 54 |     """Get resource content by identifier: name or permalink."""
 55 |     logger.debug(f"Getting content for: {identifier}")
 56 | 
 57 |     # Find single entity by permalink
 58 |     entity = await link_resolver.resolve_link(identifier)
 59 |     results = [entity] if entity else []
 60 | 
 61 |     # pagination for multiple results
 62 |     limit = page_size
 63 |     offset = (page - 1) * page_size
 64 | 
 65 |     # search using the identifier as a permalink
 66 |     if not results:
 67 |         # if the identifier contains a wildcard, use GLOB search
 68 |         query = (
 69 |             SearchQuery(permalink_match=identifier)
 70 |             if "*" in identifier
 71 |             else SearchQuery(permalink=identifier)
 72 |         )
 73 |         search_results = await search_service.search(query, limit, offset)
 74 |         if not search_results:
 75 |             raise HTTPException(status_code=404, detail=f"Resource not found: {identifier}")
 76 | 
 77 |         # get the deduplicated entities related to the search results
 78 |         entity_ids = {id for result in search_results for id in get_entity_ids(result)}
 79 |         results = await entity_service.get_entities_by_id(list(entity_ids))
 80 | 
 81 |     # return single response
 82 |     if len(results) == 1:
 83 |         entity = results[0]
 84 |         file_path = Path(f"{config.home}/{entity.file_path}")
 85 |         if not file_path.exists():
 86 |             raise HTTPException(
 87 |                 status_code=404,
 88 |                 detail=f"File not found: {file_path}",
 89 |             )
 90 |         return FileResponse(path=file_path)
 91 | 
 92 |     # for multiple files, initialize a temporary file for writing the results
 93 |     with tempfile.NamedTemporaryFile(delete=False, mode="w", suffix=".md") as tmp_file:
 94 |         temp_file_path = tmp_file.name
 95 | 
 96 |         for result in results:
 97 |             # Read content for each entity
 98 |             content = await file_service.read_entity_content(result)
 99 |             memory_url = normalize_memory_url(result.permalink)
100 |             modified_date = result.updated_at.isoformat()
101 |             checksum = result.checksum[:8] if result.checksum else ""
102 | 
103 |             # Prepare the delimited content
104 |             response_content = f"--- {memory_url} {modified_date} {checksum}\n"
105 |             response_content += f"\n{content}\n"
106 |             response_content += "\n"
107 | 
108 |             # Write content directly to the temporary file in append mode
109 |             tmp_file.write(response_content)
110 | 
111 |         # Ensure all content is written to disk
112 |         tmp_file.flush()
113 | 
114 |     # Schedule the temporary file to be deleted after the response
115 |     background_tasks.add_task(cleanup_temp_file, temp_file_path)
116 | 
117 |     # Return the file response
118 |     return FileResponse(path=temp_file_path)
119 | 
120 | 
121 | def cleanup_temp_file(file_path: str):
122 |     """Delete the temporary file."""
123 |     try:
124 |         Path(file_path).unlink()  # Deletes the file
125 |         logger.debug(f"Temporary file deleted: {file_path}")
126 |     except Exception as e:  # pragma: no cover
127 |         logger.error(f"Error deleting temporary file {file_path}: {e}")
128 | 
129 | 
130 | @router.put("/{file_path:path}")
131 | async def write_resource(
132 |     config: ProjectConfigDep,
133 |     file_service: FileServiceDep,
134 |     entity_repository: EntityRepositoryDep,
135 |     search_service: SearchServiceDep,
136 |     file_path: str,
137 |     content: Annotated[str, Body()],
138 | ) -> JSONResponse:
139 |     """Write content to a file in the project.
140 | 
141 |     This endpoint allows writing content directly to a file in the project.
142 |     Also creates an entity record and indexes the file for search.
143 | 
144 |     Args:
145 |         file_path: Path to write to, relative to project root
146 |         content: The content to write (request body)
147 | 
148 |     Returns:
149 |         JSON response with file information
150 |     """
151 |     try:
152 |         # Content arrives as the raw request body (Body() parameter)
153 | 
154 |         # Ensure it's a UTF-8 string before writing
155 |         if isinstance(content, bytes):  # pragma: no cover
156 |             content_str = content.decode("utf-8")
157 |         else:
158 |             content_str = str(content)
159 | 
160 |         # Get full file path
161 |         full_path = Path(f"{config.home}/{file_path}")
162 | 
163 |         # Ensure parent directory exists
164 |         full_path.parent.mkdir(parents=True, exist_ok=True)
165 | 
166 |         # Write content to file
167 |         checksum = await file_service.write_file(full_path, content_str)
168 | 
169 |         # Get file info
170 |         file_stats = file_service.file_stats(full_path)
171 | 
172 |         # Determine file details
173 |         file_name = Path(file_path).name
174 |         content_type = file_service.content_type(full_path)
175 | 
176 |         entity_type = "canvas" if file_path.endswith(".canvas") else "file"
177 | 
178 |         # Check if entity already exists
179 |         existing_entity = await entity_repository.get_by_file_path(file_path)
180 | 
181 |         if existing_entity:
182 |             # Update existing entity
183 |             entity = await entity_repository.update(
184 |                 existing_entity.id,
185 |                 {
186 |                     "title": file_name,
187 |                     "entity_type": entity_type,
188 |                     "content_type": content_type,
189 |                     "file_path": file_path,
190 |                     "checksum": checksum,
191 |                     "updated_at": datetime.fromtimestamp(file_stats.st_mtime).astimezone(),
192 |                 },
193 |             )
194 |             status_code = 200
195 |         else:
196 |             # Create a new entity model
197 |             entity = EntityModel(
198 |                 title=file_name,
199 |                 entity_type=entity_type,
200 |                 content_type=content_type,
201 |                 file_path=file_path,
202 |                 checksum=checksum,
203 |                 created_at=datetime.fromtimestamp(file_stats.st_ctime).astimezone(),
204 |                 updated_at=datetime.fromtimestamp(file_stats.st_mtime).astimezone(),
205 |             )
206 |             entity = await entity_repository.add(entity)
207 |             status_code = 201
208 | 
209 |         # Index the file for search
210 |         await search_service.index_entity(entity)  # pyright: ignore
211 | 
212 |         # Return success response
213 |         return JSONResponse(
214 |             status_code=status_code,
215 |             content={
216 |                 "file_path": file_path,
217 |                 "checksum": checksum,
218 |                 "size": file_stats.st_size,
219 |                 "created_at": file_stats.st_ctime,
220 |                 "modified_at": file_stats.st_mtime,
221 |             },
222 |         )
223 |     except Exception as e:  # pragma: no cover
224 |         logger.error(f"Error writing resource {file_path}: {e}")
225 |         raise HTTPException(status_code=500, detail=f"Failed to write resource: {str(e)}")
226 | 
```
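
When `get_resource_content` matches multiple entities, it streams them back as one markdown file, each document introduced by a `--- <memory_url> <modified_date> <checksum>` header line followed by the body. A rough client-side parser for that format (a sketch only; it assumes document bodies never start a line with `--- `):

```python
def parse_multi_response(text: str) -> list[dict]:
    """Split the delimited multi-document response into url/metadata/content dicts."""
    docs: list[dict] = []
    current: dict | None = None
    body: list[str] = []
    for line in text.splitlines():
        if line.startswith("--- "):
            if current is not None:
                current["content"] = "\n".join(body).strip("\n")
                docs.append(current)
            parts = line[4:].split(" ", 2)
            current = {
                "url": parts[0],
                "modified": parts[1],
                "checksum": parts[2] if len(parts) > 2 else "",
            }
            body = []
        elif current is not None:
            body.append(line)
    if current is not None:
        current["content"] = "\n".join(body).strip("\n")
        docs.append(current)
    return docs


text = (
    "--- memory://notes/a 2025-01-01T00:00:00 abcd1234\n\n# A\n\n"
    "--- memory://notes/b 2025-01-02T00:00:00 ef567890\n\n# B\n"
)
assert [d["url"] for d in parse_multi_response(text)] == [
    "memory://notes/a",
    "memory://notes/b",
]
```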

--------------------------------------------------------------------------------
/src/basic_memory/cli/commands/cloud/rclone_config.py:
--------------------------------------------------------------------------------

```python
  1 | """rclone configuration management for Basic Memory Cloud."""
  2 | 
  3 | import configparser
  4 | import os
  5 | import shutil
  6 | import subprocess
  7 | from pathlib import Path
  8 | from typing import Dict, List, Optional
  9 | 
 10 | from rich.console import Console
 11 | 
 12 | console = Console()
 13 | 
 14 | 
 15 | class RcloneConfigError(Exception):
 16 |     """Exception raised for rclone configuration errors."""
 17 | 
 18 |     pass
 19 | 
 20 | 
 21 | class RcloneMountProfile:
 22 |     """Mount profile with optimized settings."""
 23 | 
 24 |     def __init__(
 25 |         self,
 26 |         name: str,
 27 |         cache_time: str,
 28 |         poll_interval: str,
 29 |         attr_timeout: str,
 30 |         write_back: str,
 31 |         description: str,
 32 |         extra_args: Optional[List[str]] = None,
 33 |     ):
 34 |         self.name = name
 35 |         self.cache_time = cache_time
 36 |         self.poll_interval = poll_interval
 37 |         self.attr_timeout = attr_timeout
 38 |         self.write_back = write_back
 39 |         self.description = description
 40 |         self.extra_args = extra_args or []
 41 | 
 42 | 
 43 | # Mount profiles based on SPEC-7 Phase 4 testing
 44 | MOUNT_PROFILES = {
 45 |     "fast": RcloneMountProfile(
 46 |         name="fast",
 47 |         cache_time="5s",
 48 |         poll_interval="3s",
 49 |         attr_timeout="3s",
 50 |         write_back="1s",
 51 |         description="Ultra-fast development (5s sync, higher bandwidth)",
 52 |     ),
 53 |     "balanced": RcloneMountProfile(
 54 |         name="balanced",
 55 |         cache_time="10s",
 56 |         poll_interval="5s",
 57 |         attr_timeout="5s",
 58 |         write_back="2s",
 59 |         description="Fast development (10-15s sync, recommended)",
 60 |     ),
 61 |     "safe": RcloneMountProfile(
 62 |         name="safe",
 63 |         cache_time="15s",
 64 |         poll_interval="10s",
 65 |         attr_timeout="10s",
 66 |         write_back="5s",
 67 |         description="Conflict-aware mount with backup",
 68 |         extra_args=[
 69 |             "--conflict-suffix",
 70 |             ".conflict-{DateTimeExt}",
 71 |             "--backup-dir",
 72 |             "~/.basic-memory/conflicts",
 73 |             "--track-renames",
 74 |         ],
 75 |     ),
 76 | }
 77 | 
 78 | 
 79 | def get_rclone_config_path() -> Path:
 80 |     """Get the path to rclone configuration file."""
 81 |     config_dir = Path.home() / ".config" / "rclone"
 82 |     config_dir.mkdir(parents=True, exist_ok=True)
 83 |     return config_dir / "rclone.conf"
 84 | 
 85 | 
 86 | def backup_rclone_config() -> Optional[Path]:
 87 |     """Create a backup of existing rclone config."""
 88 |     config_path = get_rclone_config_path()
 89 |     if not config_path.exists():
 90 |         return None
 91 | 
 92 |     backup_path = config_path.with_suffix(f".conf.backup-{os.getpid()}")
 93 |     shutil.copy2(config_path, backup_path)
 94 |     console.print(f"[dim]Created backup: {backup_path}[/dim]")
 95 |     return backup_path
 96 | 
 97 | 
 98 | def load_rclone_config() -> configparser.ConfigParser:
 99 |     """Load existing rclone configuration."""
100 |     config = configparser.ConfigParser()
101 |     config_path = get_rclone_config_path()
102 | 
103 |     if config_path.exists():
104 |         config.read(config_path)
105 | 
106 |     return config
107 | 
108 | 
109 | def save_rclone_config(config: configparser.ConfigParser) -> None:
110 |     """Save rclone configuration to file."""
111 |     config_path = get_rclone_config_path()
112 | 
113 |     with open(config_path, "w") as f:
114 |         config.write(f)
115 | 
116 |     console.print(f"[dim]Updated rclone config: {config_path}[/dim]")
117 | 
118 | 
119 | def add_tenant_to_rclone_config(
120 |     tenant_id: str,
121 |     bucket_name: str,
122 |     access_key: str,
123 |     secret_key: str,
124 |     endpoint: str = "https://fly.storage.tigris.dev",
125 |     region: str = "auto",
126 | ) -> str:
127 |     """Add tenant configuration to rclone config file."""
128 | 
129 |     # Backup existing config
130 |     backup_rclone_config()
131 | 
132 |     # Load existing config
133 |     config = load_rclone_config()
134 | 
135 |     # Create section name
136 |     section_name = f"basic-memory-{tenant_id}"
137 | 
138 |     # Add/update the tenant section
139 |     if not config.has_section(section_name):
140 |         config.add_section(section_name)
141 | 
142 |     config.set(section_name, "type", "s3")
143 |     config.set(section_name, "provider", "Other")
144 |     config.set(section_name, "access_key_id", access_key)
145 |     config.set(section_name, "secret_access_key", secret_key)
146 |     config.set(section_name, "endpoint", endpoint)
147 |     config.set(section_name, "region", region)
148 | 
149 |     # Save updated config
150 |     save_rclone_config(config)
151 | 
152 |     console.print(f"[green]✓ Added tenant {tenant_id} to rclone config[/green]")
153 |     return section_name
154 | 
155 | 
156 | def remove_tenant_from_rclone_config(tenant_id: str) -> bool:
157 |     """Remove tenant configuration from rclone config."""
158 |     config = load_rclone_config()
159 |     section_name = f"basic-memory-{tenant_id}"
160 | 
161 |     if config.has_section(section_name):
162 |         backup_rclone_config()
163 |         config.remove_section(section_name)
164 |         save_rclone_config(config)
165 |         console.print(f"[green]✓ Removed tenant {tenant_id} from rclone config[/green]")
166 |         return True
167 | 
168 |     return False
169 | 
170 | 
171 | def get_default_mount_path() -> Path:
172 |     """Get default mount path (fixed location per SPEC-9).
173 | 
174 |     Returns:
175 |         Fixed mount path: ~/basic-memory-cloud/
176 |     """
177 |     return Path.home() / "basic-memory-cloud"
178 | 
179 | 
180 | def build_mount_command(
181 |     tenant_id: str, bucket_name: str, mount_path: Path, profile: RcloneMountProfile
182 | ) -> List[str]:
183 |     """Build rclone mount command with optimized settings."""
184 | 
185 |     rclone_remote = f"basic-memory-{tenant_id}:{bucket_name}"
186 | 
187 |     cmd = [
188 |         "rclone",
189 |         "nfsmount",
190 |         rclone_remote,
191 |         str(mount_path),
192 |         "--vfs-cache-mode",
193 |         "writes",
194 |         "--dir-cache-time",
195 |         profile.cache_time,
196 |         "--vfs-cache-poll-interval",
197 |         profile.poll_interval,
198 |         "--attr-timeout",
199 |         profile.attr_timeout,
200 |         "--vfs-write-back",
201 |         profile.write_back,
202 |         "--daemon",
203 |     ]
204 | 
205 |     # Add profile-specific extra arguments
206 |     cmd.extend(profile.extra_args)
207 | 
208 |     return cmd
209 | 
210 | 
211 | def is_path_mounted(mount_path: Path) -> bool:
212 |     """Check if a path is currently mounted."""
213 |     if not mount_path.exists():
214 |         return False
215 | 
216 |     try:
217 |         # Check if mount point is actually mounted by looking for mount table entry
218 |         result = subprocess.run(["mount"], capture_output=True, text=True, check=False)
219 | 
220 |         if result.returncode == 0:
221 |             # Look for our mount path in mount output
222 |             mount_str = str(mount_path.resolve())
223 |             return mount_str in result.stdout
224 | 
225 |         return False
226 |     except Exception:
227 |         return False
228 | 
229 | 
230 | def get_rclone_processes() -> List[Dict[str, str]]:
231 |     """Get list of running rclone processes."""
232 |     try:
233 |         # Use ps to find rclone processes
234 |         result = subprocess.run(
235 |             ["ps", "-eo", "pid,args"], capture_output=True, text=True, check=False
236 |         )
237 | 
238 |         processes = []
239 |         if result.returncode == 0:
240 |             for line in result.stdout.split("\n"):
241 |                 if "rclone" in line and "basic-memory" in line:
242 |                     parts = line.strip().split(None, 1)
243 |                     if len(parts) >= 2:
244 |                         processes.append({"pid": parts[0], "command": parts[1]})
245 | 
246 |         return processes
247 |     except Exception:
248 |         return []
249 | 
250 | 
251 | def kill_rclone_process(pid: str) -> bool:
252 |     """Kill a specific rclone process."""
253 |     try:
254 |         subprocess.run(["kill", pid], check=True)
255 |         console.print(f"[green]✓ Killed rclone process {pid}[/green]")
256 |         return True
257 |     except subprocess.CalledProcessError:
258 |         console.print(f"[red]✗ Failed to kill rclone process {pid}[/red]")
259 |         return False
260 | 
261 | 
262 | def unmount_path(mount_path: Path) -> bool:
263 |     """Unmount a mounted path."""
264 |     if not is_path_mounted(mount_path):
265 |         return True
266 | 
267 |     try:
268 |         subprocess.run(["umount", str(mount_path)], check=True)
269 |         console.print(f"[green]✓ Unmounted {mount_path}[/green]")
270 |         return True
271 |     except subprocess.CalledProcessError as e:
272 |         console.print(f"[red]✗ Failed to unmount {mount_path}: {e}[/red]")
273 |         return False
274 | 
275 | 
276 | def cleanup_orphaned_rclone_processes() -> int:
277 |     """Clean up orphaned rclone processes for basic-memory."""
278 |     processes = get_rclone_processes()
279 |     killed_count = 0
280 | 
281 |     for proc in processes:
282 |         console.print(
283 |             f"[yellow]Found rclone process: {proc['pid']} - {proc['command'][:80]}...[/yellow]"
284 |         )
285 |         if kill_rclone_process(proc["pid"]):
286 |             killed_count += 1
287 | 
288 |     return killed_count
289 | 
```
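
For reference, `build_mount_command` with the balanced profile expands to the rclone invocation shown in the trailing comment. This usage sketch assumes the basic_memory package is importable; the tenant id and bucket name are made up:

```python
from basic_memory.cli.commands.cloud.rclone_config import (
    MOUNT_PROFILES,
    build_mount_command,
    get_default_mount_path,
)

cmd = build_mount_command(
    tenant_id="tenant-123",
    bucket_name="tenant-123-bucket",
    mount_path=get_default_mount_path(),  # ~/basic-memory-cloud
    profile=MOUNT_PROFILES["balanced"],
)
print(" ".join(cmd))
# rclone nfsmount basic-memory-tenant-123:tenant-123-bucket <home>/basic-memory-cloud
#   --vfs-cache-mode writes --dir-cache-time 10s --vfs-cache-poll-interval 5s
#   --attr-timeout 5s --vfs-write-back 2s --daemon
```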

--------------------------------------------------------------------------------
/tests/markdown/test_date_frontmatter_parsing.py:
--------------------------------------------------------------------------------

```python
  1 | """Test that YAML date parsing doesn't break frontmatter processing.
  2 | 
  3 | This test reproduces GitHub issue #236 from basic-memory-cloud where date fields
  4 | in YAML frontmatter are automatically parsed as datetime.date objects by PyYAML,
  5 | but later code expects strings and calls .strip() on them, causing AttributeError.
  6 | """
  7 | 
  8 | import pytest
  9 | from pathlib import Path
 10 | from basic_memory.markdown.entity_parser import EntityParser
 11 | 
 12 | 
 13 | @pytest.fixture
 14 | def test_file_with_date(tmp_path):
 15 |     """Create a test file with date fields in frontmatter."""
 16 |     test_file = tmp_path / "test_note.md"
 17 |     content = """---
 18 | title: Test Note
 19 | date: 2025-10-24
 20 | created: 2025-10-24
 21 | tags:
 22 |   - python
 23 |   - testing
 24 | ---
 25 | 
 26 | # Test Content
 27 | 
 28 | This file has date fields in frontmatter that PyYAML will parse as datetime.date objects.
 29 | """
 30 |     test_file.write_text(content)
 31 |     return test_file
 32 | 
 33 | 
 34 | @pytest.fixture
 35 | def test_file_with_date_in_tags(tmp_path):
 36 |     """Create a test file with a date value in tags (edge case)."""
 37 |     test_file = tmp_path / "test_note_date_tags.md"
 38 |     content = """---
 39 | title: Test Note with Date Tags
 40 | tags: 2025-10-24
 41 | ---
 42 | 
 43 | # Test Content
 44 | 
 45 | This file has a date value as tags, which will be parsed as datetime.date.
 46 | """
 47 |     test_file.write_text(content)
 48 |     return test_file
 49 | 
 50 | 
 51 | @pytest.fixture
 52 | def test_file_with_dates_in_tag_list(tmp_path):
 53 |     """Create a test file with dates in a tag list (edge case)."""
 54 |     test_file = tmp_path / "test_note_dates_in_list.md"
 55 |     content = """---
 56 | title: Test Note with Dates in Tags List
 57 | tags:
 58 |   - valid-tag
 59 |   - 2025-10-24
 60 |   - another-tag
 61 | ---
 62 | 
 63 | # Test Content
 64 | 
 65 | This file has date values mixed into tags list.
 66 | """
 67 |     test_file.write_text(content)
 68 |     return test_file
 69 | 
 70 | 
 71 | @pytest.mark.asyncio
 72 | async def test_parse_file_with_date_fields(test_file_with_date, tmp_path):
 73 |     """Test that files with date fields in frontmatter can be parsed without errors."""
 74 |     parser = EntityParser(tmp_path)
 75 | 
 76 |     # This should not raise AttributeError about .strip()
 77 |     entity_markdown = await parser.parse_file(test_file_with_date)
 78 | 
 79 |     # Verify basic parsing worked
 80 |     assert entity_markdown.frontmatter.title == "Test Note"
 81 | 
 82 |     # Date fields should be converted to ISO format strings
 83 |     date_field = entity_markdown.frontmatter.metadata.get("date")
 84 |     assert date_field is not None
 85 |     assert isinstance(date_field, str), "Date should be converted to string"
 86 |     assert date_field == "2025-10-24", "Date should be in ISO format"
 87 | 
 88 |     created_field = entity_markdown.frontmatter.metadata.get("created")
 89 |     assert created_field is not None
 90 |     assert isinstance(created_field, str), "Created date should be converted to string"
 91 |     assert created_field == "2025-10-24", "Created date should be in ISO format"
 92 | 
 93 | 
 94 | @pytest.mark.asyncio
 95 | async def test_parse_file_with_date_as_tags(test_file_with_date_in_tags, tmp_path):
 96 |     """Test that date values in tags field don't cause errors."""
 97 |     parser = EntityParser(tmp_path)
 98 | 
 99 |     # This should not raise AttributeError - date should be converted to string
100 |     entity_markdown = await parser.parse_file(test_file_with_date_in_tags)
101 |     assert entity_markdown.frontmatter.title == "Test Note with Date Tags"
102 | 
103 |     # The date should be converted to ISO format string before parse_tags processes it
104 |     tags = entity_markdown.frontmatter.tags
105 |     assert tags is not None
106 |     assert isinstance(tags, list)
107 |     # The date value should be converted to string
108 |     assert "2025-10-24" in tags
109 | 
110 | 
111 | @pytest.mark.asyncio
112 | async def test_parse_file_with_dates_in_tag_list(test_file_with_dates_in_tag_list, tmp_path):
113 |     """Test that date values in a tags list don't cause errors."""
114 |     parser = EntityParser(tmp_path)
115 | 
116 |     # This should not raise AttributeError - dates should be converted to strings
117 |     entity_markdown = await parser.parse_file(test_file_with_dates_in_tag_list)
118 |     assert entity_markdown.frontmatter.title == "Test Note with Dates in Tags List"
119 | 
120 |     # Tags should be parsed
121 |     tags = entity_markdown.frontmatter.tags
122 |     assert tags is not None
123 |     assert isinstance(tags, list)
124 | 
125 |     # Should have 3 tags (2 valid + 1 date converted to ISO string)
126 |     assert len(tags) == 3
127 |     assert "valid-tag" in tags
128 |     assert "another-tag" in tags
129 |     # Date should be converted to ISO format string
130 |     assert "2025-10-24" in tags
131 | 
132 | 
133 | @pytest.mark.asyncio
134 | async def test_parse_file_with_various_yaml_types(tmp_path):
135 |     """Test that various YAML types in frontmatter don't cause errors.
136 | 
137 |     This reproduces the broader issue from GitHub #236 where ANY non-string
138 |     YAML type (dates, lists, numbers, booleans) can cause AttributeError
139 |     when code expects strings and calls .strip().
140 |     """
141 |     test_file = tmp_path / "test_yaml_types.md"
142 |     content = """---
143 | title: Test YAML Types
144 | date: 2025-10-24
145 | priority: 1
146 | completed: true
147 | tags:
148 |   - python
149 |   - testing
150 | metadata:
151 |   author: Test User
152 |   version: 1.0
153 | ---
154 | 
155 | # Test Content
156 | 
157 | This file has various YAML types that need to be normalized.
158 | """
159 |     test_file.write_text(content)
160 | 
161 |     parser = EntityParser(tmp_path)
162 |     entity_markdown = await parser.parse_file(test_file)
163 | 
164 |     # All values should be accessible without AttributeError
165 |     assert entity_markdown.frontmatter.title == "Test YAML Types"
166 | 
167 |     # Date should be converted to ISO string
168 |     date_field = entity_markdown.frontmatter.metadata.get("date")
169 |     assert isinstance(date_field, str)
170 |     assert date_field == "2025-10-24"
171 | 
172 |     # Number should be converted to string
173 |     priority = entity_markdown.frontmatter.metadata.get("priority")
174 |     assert isinstance(priority, str)
175 |     assert priority == "1"
176 | 
177 |     # Boolean should be converted to string
178 |     completed = entity_markdown.frontmatter.metadata.get("completed")
179 |     assert isinstance(completed, str)
180 |     assert completed == "True"  # Python's str(True) always returns "True"
181 | 
182 |     # List should be preserved as list, but items should be strings
183 |     tags = entity_markdown.frontmatter.tags
184 |     assert isinstance(tags, list)
185 |     assert all(isinstance(tag, str) for tag in tags)
186 |     assert "python" in tags
187 |     assert "testing" in tags
188 | 
189 |     # Dict should be preserved as dict, but nested values should be strings
190 |     metadata = entity_markdown.frontmatter.metadata.get("metadata")
191 |     assert isinstance(metadata, dict)
192 |     assert isinstance(metadata.get("author"), str)
193 |     assert metadata.get("author") == "Test User"
194 |     assert isinstance(metadata.get("version"), str)
195 |     assert metadata.get("version") in ("1.0", "1")
196 | 
197 | 
198 | @pytest.mark.asyncio
199 | async def test_parse_file_with_datetime_objects(tmp_path):
200 |     """Test that datetime objects (not just date objects) are properly normalized.
201 | 
202 |     This tests the edge case where frontmatter might contain datetime values
203 |     with time components (as parsed by PyYAML), ensuring they're converted to ISO format strings.
204 |     """
205 |     test_file = tmp_path / "test_datetime.md"
206 | 
207 |     # YAML datetime strings that PyYAML will parse as datetime objects
208 |     # Format: YYYY-MM-DD HH:MM:SS or YYYY-MM-DDTHH:MM:SS
209 |     content = """---
210 | title: Test Datetime
211 | created_at: 2025-10-24 14:30:00
212 | updated_at: 2025-10-24T00:00:00
213 | ---
214 | 
215 | # Test Content
216 | 
217 | This file has datetime values in frontmatter that PyYAML will parse as datetime objects.
218 | """
219 |     test_file.write_text(content)
220 | 
221 |     parser = EntityParser(tmp_path)
222 |     entity_markdown = await parser.parse_file(test_file)
223 | 
224 |     # Verify datetime objects are converted to ISO format strings
225 |     created_at = entity_markdown.frontmatter.metadata.get("created_at")
226 |     assert isinstance(created_at, str), "Datetime should be converted to string"
227 |     # PyYAML parses "2025-10-24 14:30:00" as datetime, which we normalize to ISO
228 |     assert "2025-10-24" in created_at and "14:30:00" in created_at, \
229 |         f"Datetime with time should be normalized to ISO format, got: {created_at}"
230 | 
231 |     updated_at = entity_markdown.frontmatter.metadata.get("updated_at")
232 |     assert isinstance(updated_at, str), "Datetime should be converted to string"
233 |     # PyYAML parses "2025-10-24T00:00:00" as datetime, which we normalize to ISO
234 |     assert "2025-10-24" in updated_at and "00:00:00" in updated_at, \
235 |         f"Datetime at midnight should be normalized to ISO format, got: {updated_at}"
```
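
The behavior these tests pin down amounts to a recursive normalization of PyYAML's typed values before tag and metadata handling. A minimal sketch of such a normalizer (hypothetical; the real logic lives in the entity parser):

```python
import datetime


def normalize_frontmatter_value(value):
    """Coerce PyYAML's typed scalars to strings, recursing into containers."""
    if isinstance(value, (datetime.date, datetime.datetime)):
        return value.isoformat()  # dates/datetimes become ISO-format strings
    if isinstance(value, list):
        return [normalize_frontmatter_value(v) for v in value]
    if isinstance(value, dict):
        return {k: normalize_frontmatter_value(v) for k, v in value.items()}
    if isinstance(value, (bool, int, float)):
        return str(value)  # str(True) == "True", matching the test above
    return value


assert normalize_frontmatter_value(datetime.date(2025, 10, 24)) == "2025-10-24"
assert normalize_frontmatter_value(True) == "True"
assert normalize_frontmatter_value(["a", datetime.date(2025, 10, 24)]) == ["a", "2025-10-24"]
```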

--------------------------------------------------------------------------------
/tests/mcp/test_tool_canvas.py:
--------------------------------------------------------------------------------

```python
  1 | """Tests for canvas tool that exercise the full stack with SQLite."""
  2 | 
  3 | import json
  4 | from pathlib import Path
  5 | 
  6 | import pytest
  7 | 
  8 | from basic_memory.mcp.tools import canvas
  9 | 
 10 | 
 11 | @pytest.mark.asyncio
 12 | async def test_create_canvas(app, project_config, test_project):
 13 |     """Test creating a new canvas file.
 14 | 
 15 |     Should:
 16 |     - Create canvas file with correct content
 17 |     - Create entity in database
 18 |     - Return successful status
 19 |     """
 20 |     # Test data
 21 |     nodes = [
 22 |         {
 23 |             "id": "node1",
 24 |             "type": "text",
 25 |             "text": "Test Node",
 26 |             "x": 100,
 27 |             "y": 200,
 28 |             "width": 400,
 29 |             "height": 300,
 30 |         }
 31 |     ]
 32 |     edges = [{"id": "edge1", "fromNode": "node1", "toNode": "node2", "label": "connects to"}]
 33 |     title = "test-canvas"
 34 |     folder = "visualizations"
 35 | 
 36 |     # Execute
 37 |     result = await canvas.fn(
 38 |         project=test_project.name, nodes=nodes, edges=edges, title=title, folder=folder
 39 |     )
 40 | 
 41 |     # Verify result message
 42 |     assert result
 43 |     assert "Created: visualizations/test-canvas" in result
 44 |     assert "The canvas is ready to open in Obsidian" in result
 45 | 
 46 |     # Verify file was created
 47 |     file_path = Path(project_config.home) / folder / f"{title}.canvas"
 48 |     assert file_path.exists()
 49 | 
 50 |     # Verify content is correct
 51 |     content = json.loads(file_path.read_text(encoding="utf-8"))
 52 |     assert content["nodes"] == nodes
 53 |     assert content["edges"] == edges
 54 | 
 55 | 
 56 | @pytest.mark.asyncio
 57 | async def test_create_canvas_with_extension(app, project_config, test_project):
 58 |     """Test creating a canvas file with .canvas extension already in the title."""
 59 |     # Test data
 60 |     nodes = [
 61 |         {
 62 |             "id": "node1",
 63 |             "type": "text",
 64 |             "text": "Extension Test",
 65 |             "x": 100,
 66 |             "y": 200,
 67 |             "width": 400,
 68 |             "height": 300,
 69 |         }
 70 |     ]
 71 |     edges = []
 72 |     title = "extension-test.canvas"  # Already has extension
 73 |     folder = "visualizations"
 74 | 
 75 |     # Execute
 76 |     result = await canvas.fn(
 77 |         project=test_project.name, nodes=nodes, edges=edges, title=title, folder=folder
 78 |     )
 79 | 
 80 |     # Verify
 81 |     assert "Created: visualizations/extension-test.canvas" in result
 82 | 
 83 |     # Verify file exists with correct name (shouldn't have double extension)
 84 |     file_path = Path(project_config.home) / folder / title
 85 |     assert file_path.exists()
 86 | 
 87 |     # Verify content
 88 |     content = json.loads(file_path.read_text(encoding="utf-8"))
 89 |     assert content["nodes"] == nodes
 90 | 
 91 | 
 92 | @pytest.mark.asyncio
 93 | async def test_update_existing_canvas(app, project_config, test_project):
 94 |     """Test updating an existing canvas file."""
 95 |     # First create a canvas
 96 |     nodes = [
 97 |         {
 98 |             "id": "initial",
 99 |             "type": "text",
100 |             "text": "Initial content",
101 |             "x": 0,
102 |             "y": 0,
103 |             "width": 200,
104 |             "height": 100,
105 |         }
106 |     ]
107 |     edges = []
108 |     title = "update-test"
109 |     folder = "visualizations"
110 | 
111 |     # Create initial canvas
112 |     await canvas.fn(project=test_project.name, nodes=nodes, edges=edges, title=title, folder=folder)
113 | 
114 |     # Verify file exists
115 |     file_path = Path(project_config.home) / folder / f"{title}.canvas"
116 |     assert file_path.exists()
117 | 
118 |     # Now update with new content
119 |     updated_nodes = [
120 |         {
121 |             "id": "updated",
122 |             "type": "text",
123 |             "text": "Updated content",
124 |             "x": 100,
125 |             "y": 100,
126 |             "width": 300,
127 |             "height": 200,
128 |         }
129 |     ]
130 |     updated_edges = [
131 |         {"id": "new-edge", "fromNode": "updated", "toNode": "other", "label": "new connection"}
132 |     ]
133 | 
134 |     # Execute update
135 |     result = await canvas.fn(
136 |         project=test_project.name,
137 |         nodes=updated_nodes,
138 |         edges=updated_edges,
139 |         title=title,
140 |         folder=folder,
141 |     )
142 | 
143 |     # Verify result indicates update
144 |     assert "Updated: visualizations/update-test.canvas" in result
145 | 
146 |     # Verify content was updated
147 |     content = json.loads(file_path.read_text(encoding="utf-8"))
148 |     assert content["nodes"] == updated_nodes
149 |     assert content["edges"] == updated_edges
150 | 
151 | 
152 | @pytest.mark.asyncio
153 | async def test_create_canvas_with_nested_folders(app, project_config, test_project):
154 |     """Test creating a canvas in nested folders that don't exist yet."""
155 |     # Test data
156 |     nodes = [
157 |         {
158 |             "id": "test",
159 |             "type": "text",
160 |             "text": "Nested folder test",
161 |             "x": 0,
162 |             "y": 0,
163 |             "width": 200,
164 |             "height": 100,
165 |         }
166 |     ]
167 |     edges = []
168 |     title = "nested-test"
169 |     folder = "visualizations/nested/folders"  # Deep path
170 | 
171 |     # Execute
172 |     result = await canvas.fn(
173 |         project=test_project.name, nodes=nodes, edges=edges, title=title, folder=folder
174 |     )
175 | 
176 |     # Verify
177 |     assert "Created: visualizations/nested/folders/nested-test.canvas" in result
178 | 
179 |     # Verify folders and file were created
180 |     file_path = Path(project_config.home) / folder / f"{title}.canvas"
181 |     assert file_path.exists()
182 |     assert file_path.parent.exists()
183 | 
184 | 
185 | @pytest.mark.asyncio
186 | async def test_create_canvas_complex_content(app, project_config, test_project):
187 |     """Test creating a canvas with complex content structures."""
188 |     # Test data - more complex structure with all node types
189 |     nodes = [
190 |         {
191 |             "id": "text-node",
192 |             "type": "text",
193 |             "text": "# Heading\n\nThis is a test with *markdown* formatting",
194 |             "x": 100,
195 |             "y": 100,
196 |             "width": 400,
197 |             "height": 300,
198 |             "color": "4",  # Using a preset color
199 |         },
200 |         {
201 |             "id": "file-node",
202 |             "type": "file",
203 |             "file": "test/test-file.md",  # Reference a file
204 |             "x": 600,
205 |             "y": 100,
206 |             "width": 400,
207 |             "height": 300,
208 |             "color": "#FF5500",  # Using hex color
209 |         },
210 |         {
211 |             "id": "link-node",
212 |             "type": "link",
213 |             "url": "https://example.com",
214 |             "x": 100,
215 |             "y": 500,
216 |             "width": 400,
217 |             "height": 200,
218 |         },
219 |         {
220 |             "id": "group-node",
221 |             "type": "group",
222 |             "label": "Group Label",
223 |             "x": 600,
224 |             "y": 500,
225 |             "width": 600,
226 |             "height": 400,
227 |         },
228 |     ]
229 | 
230 |     edges = [
231 |         {
232 |             "id": "edge1",
233 |             "fromNode": "text-node",
234 |             "toNode": "file-node",
235 |             "label": "references",
236 |             "fromSide": "right",
237 |             "toSide": "left",
238 |         },
239 |         {
240 |             "id": "edge2",
241 |             "fromNode": "link-node",
242 |             "toNode": "group-node",
243 |             "label": "belongs to",
244 |             "color": "6",
245 |         },
246 |     ]
247 | 
248 |     title = "complex-test"
249 |     folder = "visualizations"
250 | 
251 |     # Create a test file that we're referencing
252 |     test_file_path = Path(project_config.home) / "test/test-file.md"
253 |     test_file_path.parent.mkdir(parents=True, exist_ok=True)
254 |     test_file_path.write_text("# Test File\nThis is referenced by the canvas")
255 | 
256 |     # Execute
257 |     result = await canvas.fn(
258 |         project=test_project.name, nodes=nodes, edges=edges, title=title, folder=folder
259 |     )
260 | 
261 |     # Verify
262 |     assert "Created: visualizations/complex-test.canvas" in result
263 | 
264 |     # Verify file was created
265 |     file_path = Path(project_config.home) / folder / f"{title}.canvas"
266 |     assert file_path.exists()
267 | 
268 |     # Verify content is correct with all complex structures
269 |     content = json.loads(file_path.read_text(encoding="utf-8"))
270 |     assert len(content["nodes"]) == 4
271 |     assert len(content["edges"]) == 2
272 | 
273 |     # Verify specific content elements are preserved
274 |     assert any(node["type"] == "text" and "#" in node["text"] for node in content["nodes"])
275 |     assert any(
276 |         node["type"] == "file" and "test-file.md" in node["file"] for node in content["nodes"]
277 |     )
278 |     assert any(node["type"] == "link" and "example.com" in node["url"] for node in content["nodes"])
279 |     assert any(
280 |         node["type"] == "group" and "Group Label" == node["label"] for node in content["nodes"]
281 |     )
282 | 
283 |     # Verify edge properties
284 |     assert any(
285 |         edge["fromSide"] == "right" and edge["toSide"] == "left" for edge in content["edges"]
286 |     )
287 |     assert any(edge["label"] == "belongs to" and edge["color"] == "6" for edge in content["edges"])
288 | 
```

--------------------------------------------------------------------------------
/tests/mcp/test_tool_list_directory.py:
--------------------------------------------------------------------------------

```python
  1 | """Tests for the list_directory MCP tool."""
  2 | 
  3 | import pytest
  4 | 
  5 | from basic_memory.mcp.tools.list_directory import list_directory
  6 | from basic_memory.mcp.tools.write_note import write_note
  7 | 
  8 | 
  9 | @pytest.mark.asyncio
 10 | async def test_list_directory_empty(client, test_project):
 11 |     """Test listing directory when no entities exist."""
 12 |     result = await list_directory.fn(project=test_project.name)
 13 | 
 14 |     assert isinstance(result, str)
 15 |     assert "No files found in directory '/'" in result
 16 | 
 17 | 
 18 | @pytest.mark.asyncio
 19 | async def test_list_directory_with_test_graph(client, test_graph, test_project):
 20 |     """Test listing directory with test_graph fixture."""
 21 |     # test_graph provides:
 22 |     # /test/Connected Entity 1.md
 23 |     # /test/Connected Entity 2.md
 24 |     # /test/Deep Entity.md
 25 |     # /test/Deeper Entity.md
 26 |     # /test/Root.md
 27 | 
 28 |     # List root directory
 29 |     result = await list_directory.fn(project=test_project.name)
 30 | 
 31 |     assert isinstance(result, str)
 32 |     assert "Contents of '/' (depth 1):" in result
 33 |     assert "📁 test" in result
 34 |     assert "Total: 1 items (1 directory)" in result
 35 | 
 36 | 
 37 | @pytest.mark.asyncio
 38 | async def test_list_directory_specific_path(client, test_graph, test_project):
 39 |     """Test listing specific directory path."""
 40 |     # List the test directory
 41 |     result = await list_directory.fn(project=test_project.name, dir_name="/test")
 42 | 
 43 |     assert isinstance(result, str)
 44 |     assert "Contents of '/test' (depth 1):" in result
 45 |     assert "📄 Connected Entity 1.md" in result
 46 |     assert "📄 Connected Entity 2.md" in result
 47 |     assert "📄 Deep Entity.md" in result
 48 |     assert "📄 Deeper Entity.md" in result
 49 |     assert "📄 Root.md" in result
 50 |     assert "Total: 5 items (5 files)" in result
 51 | 
 52 | 
 53 | @pytest.mark.asyncio
 54 | async def test_list_directory_with_glob_filter(client, test_graph, test_project):
 55 |     """Test listing directory with glob filtering."""
 56 |     # Filter for files containing "Connected"
 57 |     result = await list_directory.fn(
 58 |         project=test_project.name, dir_name="/test", file_name_glob="*Connected*"
 59 |     )
 60 | 
 61 |     assert isinstance(result, str)
 62 |     assert "Files in '/test' matching '*Connected*' (depth 1):" in result
 63 |     assert "📄 Connected Entity 1.md" in result
 64 |     assert "📄 Connected Entity 2.md" in result
 65 |     # Should not contain other files
 66 |     assert "Deep Entity.md" not in result
 67 |     assert "Deeper Entity.md" not in result
 68 |     assert "Root.md" not in result
 69 |     assert "Total: 2 items (2 files)" in result
 70 | 
 71 | 
 72 | @pytest.mark.asyncio
 73 | async def test_list_directory_with_markdown_filter(client, test_graph, test_project):
 74 |     """Test listing directory with markdown file filter."""
 75 |     result = await list_directory.fn(
 76 |         project=test_project.name, dir_name="/test", file_name_glob="*.md"
 77 |     )
 78 | 
 79 |     assert isinstance(result, str)
 80 |     assert "Files in '/test' matching '*.md' (depth 1):" in result
 81 |     # All files in test_graph are markdown files
 82 |     assert "📄 Connected Entity 1.md" in result
 83 |     assert "📄 Connected Entity 2.md" in result
 84 |     assert "📄 Deep Entity.md" in result
 85 |     assert "📄 Deeper Entity.md" in result
 86 |     assert "📄 Root.md" in result
 87 |     assert "Total: 5 items (5 files)" in result
 88 | 
 89 | 
 90 | @pytest.mark.asyncio
 91 | async def test_list_directory_with_depth_control(client, test_graph, test_project):
 92 |     """Test listing directory with depth control."""
 93 |     # Depth 1: should return only the test directory
 94 |     result_depth_1 = await list_directory.fn(project=test_project.name, dir_name="/", depth=1)
 95 | 
 96 |     assert isinstance(result_depth_1, str)
 97 |     assert "Contents of '/' (depth 1):" in result_depth_1
 98 |     assert "📁 test" in result_depth_1
 99 |     assert "Total: 1 items (1 directory)" in result_depth_1
100 | 
101 |     # Depth 2: should return directory + its files
102 |     result_depth_2 = await list_directory.fn(project=test_project.name, dir_name="/", depth=2)
103 | 
104 |     assert isinstance(result_depth_2, str)
105 |     assert "Contents of '/' (depth 2):" in result_depth_2
106 |     assert "📁 test" in result_depth_2
107 |     assert "📄 Connected Entity 1.md" in result_depth_2
108 |     assert "📄 Connected Entity 2.md" in result_depth_2
109 |     assert "📄 Deep Entity.md" in result_depth_2
110 |     assert "📄 Deeper Entity.md" in result_depth_2
111 |     assert "📄 Root.md" in result_depth_2
112 |     assert "Total: 6 items (1 directory, 5 files)" in result_depth_2
113 | 
114 | 
115 | @pytest.mark.asyncio
116 | async def test_list_directory_nonexistent_path(client, test_graph, test_project):
117 |     """Test listing nonexistent directory."""
118 |     result = await list_directory.fn(project=test_project.name, dir_name="/nonexistent")
119 | 
120 |     assert isinstance(result, str)
121 |     assert "No files found in directory '/nonexistent'" in result
122 | 
123 | 
124 | @pytest.mark.asyncio
125 | async def test_list_directory_glob_no_matches(client, test_graph, test_project):
126 |     """Test listing directory with glob that matches nothing."""
127 |     result = await list_directory.fn(
128 |         project=test_project.name, dir_name="/test", file_name_glob="*.xyz"
129 |     )
130 | 
131 |     assert isinstance(result, str)
132 |     assert "No files found in directory '/test' matching '*.xyz'" in result
133 | 
134 | 
135 | @pytest.mark.asyncio
136 | async def test_list_directory_with_created_notes(client, test_project):
137 |     """Test listing directory with dynamically created notes."""
138 |     # Create some test notes
139 |     await write_note.fn(
140 |         project=test_project.name,
141 |         title="Project Planning",
142 |         folder="projects",
143 |         content="# Project Planning\nThis is about planning projects.",
144 |         tags=["planning", "project"],
145 |     )
146 | 
147 |     await write_note.fn(
148 |         project=test_project.name,
149 |         title="Meeting Notes",
150 |         folder="projects",
151 |         content="# Meeting Notes\nNotes from the meeting.",
152 |         tags=["meeting", "notes"],
153 |     )
154 | 
155 |     await write_note.fn(
156 |         project=test_project.name,
157 |         title="Research Document",
158 |         folder="research",
159 |         content="# Research\nSome research findings.",
160 |         tags=["research"],
161 |     )
162 | 
163 |     # List root directory
164 |     result_root = await list_directory.fn(project=test_project.name)
165 | 
166 |     assert isinstance(result_root, str)
167 |     assert "Contents of '/' (depth 1):" in result_root
168 |     assert "📁 projects" in result_root
169 |     assert "📁 research" in result_root
170 |     assert "Total: 2 items (2 directories)" in result_root
171 | 
172 |     # List projects directory
173 |     result_projects = await list_directory.fn(project=test_project.name, dir_name="/projects")
174 | 
175 |     assert isinstance(result_projects, str)
176 |     assert "Contents of '/projects' (depth 1):" in result_projects
177 |     assert "📄 Project Planning.md" in result_projects
178 |     assert "📄 Meeting Notes.md" in result_projects
179 |     assert "Total: 2 items (2 files)" in result_projects
180 | 
181 |     # Test glob filter for "Meeting"
182 |     result_meeting = await list_directory.fn(
183 |         project=test_project.name, dir_name="/projects", file_name_glob="*Meeting*"
184 |     )
185 | 
186 |     assert isinstance(result_meeting, str)
187 |     assert "Files in '/projects' matching '*Meeting*' (depth 1):" in result_meeting
188 |     assert "📄 Meeting Notes.md" in result_meeting
189 |     assert "Project Planning.md" not in result_meeting
190 |     assert "Total: 1 items (1 file)" in result_meeting
191 | 
192 | 
193 | @pytest.mark.asyncio
194 | async def test_list_directory_path_normalization(client, test_graph, test_project):
195 |     """Test that various path formats work correctly."""
196 |     # Test various equivalent path formats
197 |     paths_to_test = ["/test", "test", "/test/", "test/"]
198 | 
199 |     for path in paths_to_test:
200 |         result = await list_directory.fn(project=test_project.name, dir_name=path)
201 |         # All should return the same number of items
202 |         assert "Total: 5 items (5 files)" in result
203 |         assert "📄 Connected Entity 1.md" in result
204 | 
205 | 
206 | @pytest.mark.asyncio
207 | async def test_list_directory_shows_file_metadata(client, test_graph, test_project):
208 |     """Test that file metadata is displayed correctly."""
209 |     result = await list_directory.fn(project=test_project.name, dir_name="/test")
210 | 
211 |     assert isinstance(result, str)
212 |     # Should show file names
213 |     assert "📄 Connected Entity 1.md" in result
214 |     assert "📄 Connected Entity 2.md" in result
215 | 
216 |     # Should show directory paths
217 |     assert "test/Connected Entity 1.md" in result
218 |     assert "test/Connected Entity 2.md" in result
219 | 
220 |     # Files should be listed after directories (but no directories in this case)
221 |     lines = result.split("\n")
222 |     file_lines = [line for line in lines if "📄" in line]
223 |     assert len(file_lines) == 5  # All 5 files from test_graph
224 | 
```

--------------------------------------------------------------------------------
/specs/SPEC-12 OpenTelemetry Observability.md:
--------------------------------------------------------------------------------

```markdown
  1 | # SPEC-12: OpenTelemetry Observability
  2 | 
  3 | ## Why
  4 | 
  5 | We need comprehensive observability for basic-memory-cloud to:
  6 | - Track request flows across our multi-tenant architecture (MCP → Cloud → API services)
  7 | - Debug performance issues and errors in production
  8 | - Understand user behavior and system usage patterns
  9 | - Correlate issues to specific tenants for targeted debugging
 10 | - Monitor service health and latency across the distributed system
 11 | 
 12 | Currently, we only have basic logging without request correlation or distributed tracing capabilities.
 13 | 
 14 | ## What
 15 | 
 16 | Implement OpenTelemetry instrumentation across all basic-memory-cloud services with:
 17 | 
 18 | ### Core Requirements
 19 | 1. **Distributed Tracing**: End-to-end request tracing from MCP gateway through to tenant API instances
 20 | 2. **Tenant Correlation**: All traces tagged with tenant_id, user_id, and workos_user_id
 21 | 3. **Service Identification**: Clear service naming and namespace separation
 22 | 4. **Auto-instrumentation**: Automatic tracing for FastAPI, SQLAlchemy, HTTP clients
 23 | 5. **Grafana Cloud Integration**: Direct OTLP export to Grafana Cloud Tempo
 24 | 
 25 | ### Services to Instrument
 26 | - **MCP Gateway** (basic-memory-mcp): Entry point with JWT extraction
 27 | - **Cloud Service** (basic-memory-cloud): Provisioning and management operations
 28 | - **API Service** (basic-memory-api): Tenant-specific instances
 29 | - **Worker Processes** (ARQ workers): Background job processing
 30 | 
 31 | ### Key Trace Attributes
 32 | - `tenant.id`: UUID from UserProfile.tenant_id
 33 | - `user.id`: WorkOS user identifier
 34 | - `user.email`: User email for debugging
 35 | - `service.name`: Specific service identifier
 36 | - `service.namespace`: Environment (development/production)
 37 | - `operation.type`: Business operation (provision/update/delete)
 38 | - `tenant.app_name`: Fly.io app name for tenant instances
 39 | 
 40 | ## How
 41 | 
 42 | ### Phase 1: Setup OpenTelemetry SDK
 43 | 1. Add OpenTelemetry dependencies to each service's pyproject.toml:
 44 |    ```python
 45 |    "opentelemetry-distro[otlp]>=1.29.0",
 46 |    "opentelemetry-instrumentation-fastapi>=0.50b0",
 47 |    "opentelemetry-instrumentation-httpx>=0.50b0",
 48 |    "opentelemetry-instrumentation-sqlalchemy>=0.50b0",
 49 |    "opentelemetry-instrumentation-logging>=0.50b0",
 50 |    ```
 51 | 
 52 | 2. Create shared telemetry initialization module (`apps/shared/telemetry.py`)
 53 | 
 54 | 3. Configure Grafana Cloud OTLP endpoint via environment variables:
 55 |    ```bash
 56 |    OTEL_EXPORTER_OTLP_ENDPOINT=https://otlp-gateway-prod-us-east-2.grafana.net/otlp
 57 |    OTEL_EXPORTER_OTLP_HEADERS=Authorization=Basic%20[token]
 58 |    OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
 59 |    ```
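
A minimal sketch of the shared module from step 2, assuming only the standard OpenTelemetry SDK entry points (the exporter reads the `OTEL_EXPORTER_OTLP_*` variables above on its own; `setup_telemetry` and the argument names are illustrative, not the actual module):

```python
# apps/shared/telemetry.py (sketch; setup_telemetry is a hypothetical helper)
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor


def setup_telemetry(service_name: str, namespace: str, app=None) -> None:
    """Configure tracing for one service; endpoint and auth come from OTEL_* env vars."""
    resource = Resource.create(
        {"service.name": service_name, "service.namespace": namespace}
    )
    provider = TracerProvider(resource=resource)
    provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
    trace.set_tracer_provider(provider)
    if app is not None:
        # Auto-instrument FastAPI routes so each request gets a server span
        FastAPIInstrumentor.instrument_app(app)
```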
 60 | 
 61 | ### Phase 2: Instrument MCP Gateway
 62 | 1. Extract tenant context from AuthKit JWT in middleware
 63 | 2. Create root span with tenant attributes
 64 | 3. Propagate trace context to downstream services via headers
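
Steps 1–3 could look roughly like the sketch below; the claim names and the `request.state.jwt_claims` attribute are assumptions about the gateway's auth middleware, not confirmed API:

```python
# Sketch of gateway-side tenant tagging and context propagation (claim names assumed)
from opentelemetry import trace
from opentelemetry.propagate import inject


async def tenant_context_middleware(request, call_next):
    claims = request.state.jwt_claims  # assumed: populated by AuthKit JWT middleware
    span = trace.get_current_span()  # server span created by FastAPI instrumentation
    span.set_attribute("tenant.id", claims["tenant_id"])
    span.set_attribute("user.id", claims["sub"])
    span.set_attribute("user.email", claims.get("email", ""))
    return await call_next(request)


def traced_headers() -> dict:
    """Build headers for downstream calls so the trace continues across services."""
    headers: dict[str, str] = {}
    inject(headers)  # writes W3C traceparent/tracestate into the dict
    return headers
```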
 65 | 
 66 | ### Phase 3: Instrument Cloud Service
 67 | 1. Continue trace from MCP gateway
 68 | 2. Add operation-specific attributes (provisioning events)
 69 | 3. Instrument ARQ worker jobs for async operations
 70 | 4. Track Fly.io API calls and latency
 71 | 
 72 | ### Phase 4: Instrument API Service
 73 | 1. Extract tenant context from JWT
 74 | 2. Add machine-specific metadata (instance ID, region)
 75 | 3. Instrument database operations with SQLAlchemy
 76 | 4. Track MCP protocol operations
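
For step 3, the SQLAlchemy instrumentation is a single call against the service's engine (a sketch; the SQLite URL is a stand-in for the real tenant database):

```python
# Sketch: emit a span per SQL query (placeholder engine URL)
from sqlalchemy import create_engine
from opentelemetry.instrumentation.sqlalchemy import SQLAlchemyInstrumentor

engine = create_engine("sqlite:///example.db")  # stand-in for the tenant database
SQLAlchemyInstrumentor().instrument(engine=engine)
```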
 77 | 
 78 | ### Phase 5: Configure and Deploy
 79 | 1. Add OTLP configuration to `.env.example` and `.env.example.secrets`
 80 | 2. Set Fly.io secrets for production deployment
 81 | 3. Update Dockerfiles to use `opentelemetry-instrument` wrapper
 82 | 4. Deploy to development environment first for testing
 83 | 
 84 | ## How to Evaluate
 85 | 
 86 | ### Success Criteria
 87 | 1. **End-to-end traces visible in Grafana Cloud** showing complete request flow
 88 | 2. **Tenant filtering works** - Can filter traces by tenant_id to see all requests for a user
 89 | 3. **Service maps accurate** - Grafana shows correct service dependencies
 90 | 4. **Performance overhead < 5%** - Minimal latency impact from instrumentation
 91 | 5. **Error correlation** - Can trace errors back to specific tenant and operation
 92 | 
 93 | ### Testing Checklist
 94 | - [x] Single request creates connected trace across all services
 95 | - [x] Tenant attributes present on all spans
 96 | - [x] Background jobs (ARQ) appear in traces
 97 | - [x] Database queries show in trace timeline
 98 | - [x] HTTP calls to Fly.io API tracked
 99 | - [x] Traces exported successfully to Grafana Cloud
100 | - [x] Can search traces by tenant_id in Grafana
101 | - [x] Service dependency graph shows correct flow
102 | 
103 | ### Monitoring Success
104 | - All services reporting traces to Grafana Cloud
105 | - No OTLP export errors in logs
106 | - Trace sampling working correctly (if implemented)
107 | - Resource usage acceptable (CPU/memory)
108 | 
109 | ## Dependencies
110 | - Grafana Cloud account with OTLP endpoint configured
111 | - OpenTelemetry Python SDK v1.29.0+
112 | - FastAPI instrumentation compatibility
113 | - Network access from Fly.io to Grafana Cloud
114 | 
115 | ## Implementation Assignment
116 | **Recommended Agent**: python-developer
117 | - Requires Python/FastAPI expertise
118 | - Needs understanding of distributed systems
119 | - Must implement middleware and context propagation
120 | - Should understand OpenTelemetry SDK and instrumentation
121 | 
122 | ## Follow-up Tasks
123 | 
124 | ### Enhanced Log Correlation
125 | While basic trace-to-log correlation works automatically via OpenTelemetry logging instrumentation, consider adding structured logging for improved log filtering:
126 | 
127 | 1. **Structured Logging Context**: Add `logger.bind()` calls to inject tenant/user context directly into log records
128 | 2. **Custom Loguru Formatter**: Extract OpenTelemetry span attributes for better log readability
129 | 3. **Direct Log Filtering**: Enable searching logs directly by tenant_id, workflow_id without going through traces
130 | 
131 | This would complement the existing automatic trace correlation and provide better log search capabilities.
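
As a sketch of item 1, binding context once per request makes every subsequent record carry the fields (the IDs shown are placeholders):

```python
# Sketch: loguru context binding; bound fields land in record["extra"]
from loguru import logger


def request_logger(tenant_id: str, workflow_id: str):
    """Return a logger whose records carry tenant context for direct filtering."""
    return logger.bind(tenant_id=tenant_id, workflow_id=workflow_id)


log = request_logger("tenant-123", "wf-42")  # placeholder IDs
log.info("provisioning tenant app")  # filterable by tenant_id once a sink emits extras
```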
132 | 
133 | ## Alternative Solution: Logfire
134 | 
135 | After implementing OpenTelemetry with Grafana Cloud, we discovered limitations in the observability experience:
136 | - Traces work but lack useful context without correlated logs
137 | - Setting up log correlation with Grafana is complex and requires additional infrastructure
138 | - The developer experience for Python observability is suboptimal
139 | 
140 | ### Logfire Evaluation
141 | 
142 | **Pydantic Logfire** offers a compelling alternative that addresses our specific requirements:
143 | 
144 | #### Core Requirements Match
145 | - ✅ **User Activity Tracking**: Automatic request tracing with business context
146 | - ✅ **Error Monitoring**: Built-in exception tracking with full context
147 | - ✅ **Performance Metrics**: Automatic latency and performance monitoring
148 | - ✅ **Request Tracing**: Native distributed tracing across services
149 | - ✅ **Log Correlation**: Seamless trace-to-log correlation without setup
150 | 
151 | #### Key Advantages
152 | 1. **Python-First Design**: Built specifically for Python/FastAPI applications by the Pydantic team
153 | 2. **Simple Integration**: `pip install logfire` + `logfire.configure()` vs complex OTLP setup
154 | 3. **Automatic Correlation**: Logs automatically include trace context without manual configuration
155 | 4. **Real-time SQL Interface**: Query spans and logs using SQL with auto-completion
156 | 5. **Better Developer UX**: Purpose-built observability UI vs generic Grafana dashboards
157 | 6. **Loguru Integration**: `logger.configure(handlers=[logfire.loguru_handler()])` maintains existing logging
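
Taken together, points 2 and 6 amount to a couple of lines at startup (a sketch based on the calls named above; the write token is read from the environment):

```python
import logfire
from loguru import logger

logfire.configure()  # picks up the Logfire write token from the environment
logger.configure(handlers=[logfire.loguru_handler()])  # route existing loguru logs
```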
158 | 
159 | #### Pricing Assessment
160 | - **Free Tier**: 10M spans/month (suitable for development and small production workloads)
161 | - **Transparent Pricing**: $1 per million spans/metrics after free tier
162 | - **No Hidden Costs**: No per-host fees, only usage-based metering
163 | - **Production Ready**: Recently exited beta, enterprise features available
164 | 
165 | #### Migration Path
166 | The existing OpenTelemetry instrumentation is compatible - Logfire uses OpenTelemetry under the hood, so the current spans and attributes would work unchanged.
167 | 
168 | ### Recommendation
169 | 
170 | **Consider migrating to Logfire** for the following reasons:
171 | 1. It directly addresses the limitation above (traces lacking useful context) by providing integrated logs
172 | 2. Dramatically simpler setup and maintenance compared to Grafana Cloud + custom log correlation
173 | 3. Better ROI on observability investment with purpose-built Python tooling
174 | 4. Free tier sufficient for current development needs with clear scaling path
175 | 
176 | The current Grafana Cloud implementation provides a solid foundation and could remain as a backup/export target, while Logfire becomes the primary observability platform.
177 | 
178 | ## Status
179 | **Created**: 2024-01-28
180 | **Status**: Completed (OpenTelemetry + Grafana Cloud)
181 | **Next Phase**: Evaluate Logfire migration
182 | **Priority**: High - Critical for production observability
183 | 
```

--------------------------------------------------------------------------------
/tests/mcp/test_prompts.py:
--------------------------------------------------------------------------------

```python
  1 | """Tests for MCP prompts."""
  2 | 
  3 | from datetime import timezone, datetime
  4 | 
  5 | import pytest
  6 | 
  7 | from basic_memory.mcp.prompts.continue_conversation import continue_conversation
  8 | from basic_memory.mcp.prompts.search import search_prompt
  9 | from basic_memory.mcp.prompts.recent_activity import recent_activity_prompt
 10 | 
 11 | 
 12 | @pytest.mark.asyncio
 13 | async def test_continue_conversation_with_topic(client, test_graph):
 14 |     """Test continue_conversation with a topic."""
 15 |     # We can use the test_graph fixture which already has relevant content
 16 | 
 17 |     # Call the function with a topic that should match existing content
 18 |     result = await continue_conversation.fn(topic="Root", timeframe="1w")  # pyright: ignore [reportGeneralTypeIssues]
 19 | 
 20 |     # Check that the result contains expected content
 21 |     assert "Continuing conversation on: Root" in result  # pyright: ignore [reportOperatorIssue]
 22 |     assert "This is a memory retrieval session" in result  # pyright: ignore [reportOperatorIssue]
 23 |     assert "Start by executing one of the suggested commands" in result  # pyright: ignore [reportOperatorIssue]
 24 | 
 25 | 
 26 | @pytest.mark.asyncio
 27 | async def test_continue_conversation_with_recent_activity(client, test_graph):
 28 |     """Test continue_conversation with no topic, using recent activity."""
 29 |     # Call the function without a topic
 30 |     result = await continue_conversation.fn(timeframe="1w")  # pyright: ignore [reportGeneralTypeIssues]
 31 | 
 32 |     # Check that the result contains expected content for recent activity
 33 |     assert "Continuing conversation on: Recent Activity" in result  # pyright: ignore [reportOperatorIssue]
 34 |     assert "This is a memory retrieval session" in result  # pyright: ignore [reportOperatorIssue]
 35 |     assert "Please use the available basic-memory tools" in result  # pyright: ignore [reportOperatorIssue]
 36 |     assert "Next Steps" in result  # pyright: ignore [reportOperatorIssue]
 37 | 
 38 | 
 39 | @pytest.mark.asyncio
 40 | async def test_continue_conversation_no_results(client):
 41 |     """Test continue_conversation when no results are found."""
 42 |     # Call with a non-existent topic
 43 |     result = await continue_conversation.fn(topic="NonExistentTopic", timeframe="1w")  # pyright: ignore [reportGeneralTypeIssues]
 44 | 
 45 |     # Check the response indicates no results found
 46 |     assert "Continuing conversation on: NonExistentTopic" in result  # pyright: ignore [reportOperatorIssue]
 47 |     assert "The supplied query did not return any information" in result  # pyright: ignore [reportOperatorIssue]
 48 | 
 49 | 
 50 | @pytest.mark.asyncio
 51 | async def test_continue_conversation_creates_structured_suggestions(client, test_graph):
 52 |     """Test that continue_conversation generates structured tool usage suggestions."""
 53 |     # Call the function with a topic that should match existing content
 54 |     result = await continue_conversation.fn(topic="Root", timeframe="1w")  # pyright: ignore [reportGeneralTypeIssues]
 55 | 
 56 |     # Verify the response includes clear tool usage instructions
 57 |     assert "start by executing one of the suggested commands" in result.lower()  # pyright: ignore [reportAttributeAccessIssue]
 58 | 
 59 |     # Check that the response contains tool call examples
 60 |     assert "read_note" in result  # pyright: ignore [reportOperatorIssue]
 61 |     assert "search" in result  # pyright: ignore [reportOperatorIssue]
 62 |     assert "recent_activity" in result  # pyright: ignore [reportOperatorIssue]
 63 | 
 64 | 
 65 | # Search prompt tests
 66 | 
 67 | 
 68 | @pytest.mark.asyncio
 69 | async def test_search_prompt_with_results(client, test_graph):
 70 |     """Test search_prompt with a query that returns results."""
 71 |     # Call the function with a query that should match existing content
 72 |     result = await search_prompt.fn("Root")  # pyright: ignore [reportGeneralTypeIssues]
 73 | 
 74 |     # Check the response contains expected content
 75 |     assert 'Search Results for: "Root"' in result  # pyright: ignore [reportOperatorIssue]
 76 |     assert "I found " in result  # pyright: ignore [reportOperatorIssue]
 77 |     assert "You can view this content with: `read_note" in result  # pyright: ignore [reportOperatorIssue]
 78 |     assert "Synthesize and Capture Knowledge" in result  # pyright: ignore [reportOperatorIssue]
 79 | 
 80 | 
 81 | @pytest.mark.asyncio
 82 | async def test_search_prompt_with_timeframe(client, test_graph):
 83 |     """Test search_prompt with a timeframe."""
 84 |     # Call the function with a query and timeframe
 85 |     result = await search_prompt.fn("Root", timeframe="1w")  # pyright: ignore [reportGeneralTypeIssues]
 86 | 
 87 |     # Check the response includes timeframe information
 88 |     assert 'Search Results for: "Root" (after 7d)' in result  # pyright: ignore [reportOperatorIssue]
 89 |     assert "I found " in result  # pyright: ignore [reportOperatorIssue]
 90 | 
 91 | 
 92 | @pytest.mark.asyncio
 93 | async def test_search_prompt_no_results(client):
 94 |     """Test search_prompt when no results are found."""
 95 |     # Call with a query that won't match anything
 96 |     result = await search_prompt.fn("XYZ123NonExistentQuery")  # pyright: ignore [reportGeneralTypeIssues]
 97 | 
 98 |     # Check the response indicates no results found
 99 |     assert 'Search Results for: "XYZ123NonExistentQuery"' in result  # pyright: ignore [reportOperatorIssue]
100 |     assert "I couldn't find any results for this query" in result  # pyright: ignore [reportOperatorIssue]
101 |     assert "Opportunity to Capture Knowledge" in result  # pyright: ignore [reportOperatorIssue]
102 |     assert "write_note" in result  # pyright: ignore [reportOperatorIssue]
103 | 
104 | 
105 | # Test utils
106 | 
107 | 
108 | def test_prompt_context_with_file_path_no_permalink():
109 |     """Test format_prompt_context with items that have file_path but no permalink."""
110 |     from basic_memory.mcp.prompts.utils import (
111 |         format_prompt_context,
112 |         PromptContext,
113 |         PromptContextItem,
114 |     )
115 |     from basic_memory.schemas.memory import EntitySummary
116 | 
117 |     # Create a mock context with a file that has no permalink (like a binary file)
118 |     test_entity = EntitySummary(
119 |         type="entity",
120 |         title="Test File",
121 |         permalink=None,  # No permalink
122 |         file_path="test_file.pdf",
123 |         created_at=datetime.now(timezone.utc),
124 |     )
125 | 
126 |     context = PromptContext(
127 |         topic="Test Topic",
128 |         timeframe="1d",
129 |         results=[
130 |             PromptContextItem(
131 |                 primary_results=[test_entity],
132 |                 related_results=[test_entity],  # Also use as related
133 |             )
134 |         ],
135 |     )
136 | 
137 |     # Format the context
138 |     result = format_prompt_context(context)
139 | 
140 |     # Check that file_path is used when permalink is missing
141 |     assert "test_file.pdf" in result
142 |     assert "read_file" in result
143 | 
144 | 
145 | # Recent activity prompt tests
146 | 
147 | 
148 | @pytest.mark.asyncio
149 | async def test_recent_activity_prompt_discovery_mode(client, test_project, test_graph):
150 |     """Test recent_activity_prompt in discovery mode (no project)."""
151 |     # Call the function in discovery mode
152 |     result = await recent_activity_prompt.fn(timeframe="1w")  # pyright: ignore [reportGeneralTypeIssues]
153 | 
154 |     # Check the response contains expected discovery mode content
155 |     assert "Recent Activity Across All Projects" in result  # pyright: ignore [reportOperatorIssue]
156 |     assert "Cross-Project Activity Discovery" in result  # pyright: ignore [reportOperatorIssue]
157 |     assert "write_note" in result  # pyright: ignore [reportOperatorIssue]
158 | 
159 | 
160 | @pytest.mark.asyncio
161 | async def test_recent_activity_prompt_project_specific(client, test_project, test_graph):
162 |     """Test recent_activity_prompt in project-specific mode."""
163 |     # Call the function with a specific project
164 |     result = await recent_activity_prompt.fn(timeframe="1w", project=test_project.name)  # pyright: ignore [reportGeneralTypeIssues]
165 | 
166 |     # Check the response contains expected project-specific content
167 |     assert f"Recent Activity in {test_project.name}" in result  # pyright: ignore [reportOperatorIssue]
168 |     assert "Opportunity to Capture Activity Summary" in result  # pyright: ignore [reportOperatorIssue]
169 |     assert f"recent activity in {test_project.name}" in result  # pyright: ignore [reportOperatorIssue]
170 |     assert "write_note" in result  # pyright: ignore [reportOperatorIssue]
171 | 
172 | 
173 | @pytest.mark.asyncio
174 | async def test_recent_activity_prompt_with_custom_timeframe(client, test_project, test_graph):
175 |     """Test recent_activity_prompt with custom timeframe."""
176 |     # Call the function with a custom timeframe in discovery mode
177 |     result = await recent_activity_prompt.fn(timeframe="1d")  # pyright: ignore [reportGeneralTypeIssues]
178 | 
179 |     # Check the response includes the custom timeframe
180 |     assert "Recent Activity Across All Projects (1d)" in result  # pyright: ignore [reportOperatorIssue]
181 | 
```

--------------------------------------------------------------------------------
/tests/sync/test_watch_service_reload.py:
--------------------------------------------------------------------------------

```python
  1 | """Tests for watch service project reloading functionality."""
  2 | 
  3 | import asyncio
  4 | from unittest.mock import AsyncMock, patch
  5 | import pytest
  6 | 
  7 | from basic_memory.config import BasicMemoryConfig
  8 | from basic_memory.models.project import Project
  9 | from basic_memory.sync.watch_service import WatchService
 10 | 
 11 | 
 12 | @pytest.mark.asyncio
 13 | async def test_schedule_restart_uses_config_interval():
 14 |     """Test that _schedule_restart uses the configured interval."""
 15 |     config = BasicMemoryConfig(watch_project_reload_interval=2)
 16 |     repo = AsyncMock()
 17 |     watch_service = WatchService(config, repo, quiet=True)
 18 | 
 19 |     stop_event = asyncio.Event()
 20 | 
 21 |     # Mock sleep to capture the interval
 22 |     with patch("asyncio.sleep") as mock_sleep:
 23 |         mock_sleep.return_value = None  # Make it return immediately
 24 | 
 25 |         await watch_service._schedule_restart(stop_event)
 26 | 
 27 |         # Verify sleep was called with config interval
 28 |         mock_sleep.assert_called_once_with(2)
 29 | 
 30 |         # Verify stop event was set
 31 |         assert stop_event.is_set()
 32 | 
 33 | 
 34 | @pytest.mark.asyncio
 35 | async def test_watch_projects_cycle_handles_empty_project_list():
 36 |     """Test that _watch_projects_cycle handles empty project list."""
 37 |     config = BasicMemoryConfig()
 38 |     repo = AsyncMock()
 39 |     watch_service = WatchService(config, repo, quiet=True)
 40 | 
 41 |     stop_event = asyncio.Event()
 42 |     stop_event.set()  # Set immediately to exit quickly
 43 | 
 44 |     # Mock awatch to track calls
 45 |     with patch("basic_memory.sync.watch_service.awatch") as mock_awatch:
 46 |         # Create an async iterator that yields nothing
 47 |         async def empty_iterator():
 48 |             return
 49 |             yield  # unreachable, just for async generator
 50 | 
 51 |         mock_awatch.return_value = empty_iterator()
 52 | 
 53 |         # Should not raise error with empty project list
 54 |         await watch_service._watch_projects_cycle([], stop_event)
 55 | 
 56 |         # awatch should be called with no paths
 57 |         mock_awatch.assert_called_once_with(
 58 |             debounce=config.sync_delay,
 59 |             watch_filter=watch_service.filter_changes,
 60 |             recursive=True,
 61 |             stop_event=stop_event,
 62 |         )
 63 | 
 64 | 
 65 | @pytest.mark.asyncio
 66 | async def test_run_handles_no_projects():
 67 |     """Test that run method handles no active projects gracefully."""
 68 |     config = BasicMemoryConfig()
 69 |     repo = AsyncMock()
 70 |     repo.get_active_projects.return_value = []  # No projects
 71 | 
 72 |     watch_service = WatchService(config, repo, quiet=True)
 73 | 
 74 |     call_count = 0
 75 | 
 76 |     def stop_after_one_call(*args):
 77 |         nonlocal call_count
 78 |         call_count += 1
 79 |         if call_count >= 1:
 80 |             watch_service.state.running = False
 81 |         return AsyncMock()
 82 | 
 83 |     # Mock sleep and write_status to track behavior
 84 |     with patch("asyncio.sleep", side_effect=stop_after_one_call) as mock_sleep:
 85 |         with patch.object(watch_service, "write_status", return_value=None):
 86 |             await watch_service.run()
 87 | 
 88 |     # Should have slept for 30 seconds when no projects found
 89 |     mock_sleep.assert_called_with(30)
 90 | 
 91 | 
 92 | @pytest.mark.asyncio
 93 | async def test_run_reloads_projects_each_cycle():
 94 |     """Test that run method reloads projects in each cycle."""
 95 |     config = BasicMemoryConfig()
 96 |     repo = AsyncMock()
 97 | 
 98 |     # Return different projects on each call
 99 |     projects_call_1 = [Project(id=1, name="project1", path="/tmp/project1", permalink="project1")]
100 |     projects_call_2 = [
101 |         Project(id=1, name="project1", path="/tmp/project1", permalink="project1"),
102 |         Project(id=2, name="project2", path="/tmp/project2", permalink="project2"),
103 |     ]
104 | 
105 |     repo.get_active_projects.side_effect = [projects_call_1, projects_call_2]
106 | 
107 |     watch_service = WatchService(config, repo, quiet=True)
108 | 
109 |     cycle_count = 0
110 | 
111 |     async def mock_watch_cycle(projects, stop_event):
112 |         nonlocal cycle_count
113 |         cycle_count += 1
114 |         if cycle_count >= 2:
115 |             watch_service.state.running = False
116 | 
117 |     with patch.object(watch_service, "_watch_projects_cycle", side_effect=mock_watch_cycle):
118 |         with patch.object(watch_service, "write_status", return_value=None):
119 |             await watch_service.run()
120 | 
121 |     # Should have reloaded projects twice
122 |     assert repo.get_active_projects.call_count == 2
123 | 
124 |     # Should have completed two cycles
125 |     assert cycle_count == 2
126 | 
127 | 
128 | @pytest.mark.asyncio
129 | async def test_run_continues_after_cycle_error():
130 |     """Test that run continues to next cycle after error in watch cycle."""
131 |     config = BasicMemoryConfig()
132 |     repo = AsyncMock()
133 |     repo.get_active_projects.return_value = [
134 |         Project(id=1, name="test", path="/tmp/test", permalink="test")
135 |     ]
136 | 
137 |     watch_service = WatchService(config, repo, quiet=True)
138 | 
139 |     call_count = 0
140 | 
141 |     async def failing_watch_cycle(projects, stop_event):
142 |         nonlocal call_count
143 |         call_count += 1
144 |         if call_count == 1:
145 |             raise Exception("Simulated error")
146 |         else:
147 |             # Stop after second call
148 |             watch_service.state.running = False
149 | 
150 |     with patch.object(watch_service, "_watch_projects_cycle", side_effect=failing_watch_cycle):
151 |         with patch("asyncio.sleep") as mock_sleep:
152 |             with patch.object(watch_service, "write_status", return_value=None):
153 |                 await watch_service.run()
154 | 
155 |     # Should have tried both cycles
156 |     assert call_count == 2
157 | 
158 |     # Should have slept for error retry
159 |     mock_sleep.assert_called_with(5)
160 | 
161 | 
162 | @pytest.mark.asyncio
163 | async def test_timer_task_cancelled_properly():
164 |     """Test that timer task is cancelled when cycle completes."""
165 |     config = BasicMemoryConfig()
166 |     repo = AsyncMock()
167 |     repo.get_active_projects.return_value = [
168 |         Project(id=1, name="test", path="/tmp/test", permalink="test")
169 |     ]
170 | 
171 |     watch_service = WatchService(config, repo, quiet=True)
172 | 
173 |     # Track created timer tasks
174 |     created_tasks = []
175 |     original_create_task = asyncio.create_task
176 | 
177 |     def track_create_task(coro):
178 |         task = original_create_task(coro)
179 |         created_tasks.append(task)
180 |         return task
181 | 
182 |     async def quick_watch_cycle(projects, stop_event):
183 |         # Complete immediately
184 |         watch_service.state.running = False
185 | 
186 |     with patch("asyncio.create_task", side_effect=track_create_task):
187 |         with patch.object(watch_service, "_watch_projects_cycle", side_effect=quick_watch_cycle):
188 |             with patch.object(watch_service, "write_status", return_value=None):
189 |                 await watch_service.run()
190 | 
191 |     # Should have created one timer task
192 |     assert len(created_tasks) == 1
193 | 
194 |     # Timer task should be cancelled or done
195 |     timer_task = created_tasks[0]
196 |     assert timer_task.cancelled() or timer_task.done()
197 | 
198 | 
199 | @pytest.mark.asyncio
200 | async def test_new_project_addition_scenario():
201 |     """Test the main scenario: new project is detected when added while watching."""
202 |     config = BasicMemoryConfig()
203 |     repo = AsyncMock()
204 | 
205 |     # Initially one project
206 |     initial_projects = [Project(id=1, name="existing", path="/tmp/existing", permalink="existing")]
207 | 
208 |     # After some time, new project is added
209 |     updated_projects = [
210 |         Project(id=1, name="existing", path="/tmp/existing", permalink="existing"),
211 |         Project(id=2, name="new", path="/tmp/new", permalink="new"),
212 |     ]
213 | 
214 |     # Track which project lists were used
215 |     project_lists_used = []
216 | 
217 |     def mock_get_projects():
218 |         if len(project_lists_used) < 2:
219 |             project_lists_used.append(initial_projects)
220 |             return initial_projects
221 |         else:
222 |             project_lists_used.append(updated_projects)
223 |             return updated_projects
224 | 
225 |     repo.get_active_projects.side_effect = mock_get_projects
226 | 
227 |     watch_service = WatchService(config, repo, quiet=True)
228 | 
229 |     cycle_count = 0
230 | 
231 |     async def counting_watch_cycle(projects, stop_event):
232 |         nonlocal cycle_count
233 |         cycle_count += 1
234 | 
235 |         # Stop after enough cycles to test project reload
236 |         if cycle_count >= 3:
237 |             watch_service.state.running = False
238 | 
239 |     with patch.object(watch_service, "_watch_projects_cycle", side_effect=counting_watch_cycle):
240 |         with patch.object(watch_service, "write_status", return_value=None):
241 |             await watch_service.run()
242 | 
243 |     # Should have reloaded projects multiple times
244 |     assert repo.get_active_projects.call_count >= 3
245 | 
246 |     # Should have completed multiple cycles
247 |     assert cycle_count == 3
248 | 
249 |     # Should have seen both project configurations
250 |     assert len(project_lists_used) >= 3
251 |     assert any(len(projects) == 1 for projects in project_lists_used)  # Initial state
252 |     assert any(len(projects) == 2 for projects in project_lists_used)  # After addition
253 | 
```

--------------------------------------------------------------------------------
/src/basic_memory/markdown/entity_parser.py:
--------------------------------------------------------------------------------

```python
  1 | """Parser for markdown files into Entity objects.
  2 | 
  3 | Uses markdown-it with plugins to parse structured data from markdown content.
  4 | """
  5 | 
  6 | from dataclasses import dataclass, field
  7 | from datetime import date, datetime
  8 | from pathlib import Path
  9 | from typing import Any, Optional
 10 | 
 11 | import dateparser
 12 | import frontmatter
 13 | import yaml
 14 | from loguru import logger
 15 | from markdown_it import MarkdownIt
 16 | 
 17 | from basic_memory.markdown.plugins import observation_plugin, relation_plugin
 18 | from basic_memory.markdown.schemas import (
 19 |     EntityFrontmatter,
 20 |     EntityMarkdown,
 21 |     Observation,
 22 |     Relation,
 23 | )
 24 | from basic_memory.utils import parse_tags
 25 | 
 26 | md = MarkdownIt().use(observation_plugin).use(relation_plugin)
 27 | 
 28 | 
 29 | def normalize_frontmatter_value(value: Any) -> Any:
 30 |     """Normalize frontmatter values to safe types for processing.
 31 | 
 32 |     PyYAML automatically converts various string-like values into native Python types:
 33 |     - Date strings ("2025-10-24") → datetime.date objects
 34 |     - Numbers ("1.0") → int or float
 35 |     - Booleans ("true") → bool
 36 |     - Lists → list objects
 37 | 
 38 |     This can cause AttributeError when code expects strings and calls string methods
 39 |     like .strip() on these values (see GitHub issue #236).
 40 | 
 41 |     This function normalizes all frontmatter values to safe types:
 42 |     - Dates/datetimes → ISO format strings
 43 |     - Numbers (int/float) → strings
 44 |     - Booleans → strings ("True"/"False")
 45 |     - Lists → preserved as lists, but items are recursively normalized
 46 |     - Dicts → preserved as dicts, but values are recursively normalized
 47 |     - Strings → kept as-is
 48 |     - None → kept as None
 49 | 
 50 |     Args:
 51 |         value: The frontmatter value to normalize
 52 | 
 53 |     Returns:
 54 |         The normalized value safe for string operations
 55 | 
 56 |     Example:
 57 |         >>> normalize_frontmatter_value(date(2025, 10, 24))
 58 |         '2025-10-24'
 59 |         >>> normalize_frontmatter_value([date(2025, 10, 24), "tag", 123])
 60 |         ['2025-10-24', 'tag', '123']
 61 |         >>> normalize_frontmatter_value(True)
 62 |         'True'
 63 |     """
 64 |     # Convert date/datetime objects to ISO format strings
 65 |     if isinstance(value, datetime):
 66 |         return value.isoformat()
 67 |     if isinstance(value, date):
 68 |         return value.isoformat()
 69 | 
 70 |     # Convert boolean to string (must come before int check since bool is subclass of int)
 71 |     if isinstance(value, bool):
 72 |         return str(value)
 73 | 
 74 |     # Convert numbers to strings
 75 |     if isinstance(value, (int, float)):
 76 |         return str(value)
 77 | 
 78 |     # Recursively process lists (preserve as list, normalize items)
 79 |     if isinstance(value, list):
 80 |         return [normalize_frontmatter_value(item) for item in value]
 81 | 
 82 |     # Recursively process dicts (preserve as dict, normalize values)
 83 |     if isinstance(value, dict):
 84 |         return {key: normalize_frontmatter_value(val) for key, val in value.items()}
 85 | 
 86 |     # Keep strings and None as-is
 87 |     return value
 88 | 
 89 | 
 90 | def normalize_frontmatter_metadata(metadata: dict) -> dict:
 91 |     """Normalize all values in frontmatter metadata dict.
 92 | 
 93 |     Converts date/datetime objects to ISO format strings to prevent
 94 |     AttributeError when code expects strings (GitHub issue #236).
 95 | 
 96 |     Args:
 97 |         metadata: The frontmatter metadata dictionary
 98 | 
 99 |     Returns:
100 |         A new dictionary with all values normalized
101 |     """
102 |     return {key: normalize_frontmatter_value(value) for key, value in metadata.items()}
103 | 
104 | 
105 | @dataclass
106 | class EntityContent:
107 |     content: str
108 |     observations: list[Observation] = field(default_factory=list)
109 |     relations: list[Relation] = field(default_factory=list)
110 | 
111 | 
112 | def parse(content: str) -> EntityContent:
113 |     """Parse markdown content into EntityMarkdown."""
114 | 
115 |     # Parse content for observations and relations using markdown-it
116 |     observations = []
117 |     relations = []
118 | 
119 |     if content:
120 |         for token in md.parse(content):
121 |             # check for observations and relations
122 |             if token.meta:
123 |                 if "observation" in token.meta:
124 |                     obs = token.meta["observation"]
125 |                     observation = Observation.model_validate(obs)
126 |                     observations.append(observation)
127 |                 if "relations" in token.meta:
128 |                     rels = token.meta["relations"]
129 |                     relations.extend([Relation.model_validate(r) for r in rels])
130 | 
131 |     return EntityContent(
132 |         content=content,
133 |         observations=observations,
134 |         relations=relations,
135 |     )
136 | 
137 | 
138 | # def parse_tags(tags: Any) -> list[str]:
139 | #     """Parse tags into list of strings."""
140 | #     if isinstance(tags, (list, tuple)):
141 | #         return [str(t).strip() for t in tags if str(t).strip()]
142 | #     return [t.strip() for t in tags.split(",") if t.strip()]
143 | 
144 | 
145 | class EntityParser:
146 |     """Parser for markdown files into Entity objects."""
147 | 
148 |     def __init__(self, base_path: Path):
149 |         """Initialize parser with base path for relative permalink generation."""
150 |         self.base_path = base_path.resolve()
151 | 
152 |     def parse_date(self, value: Any) -> Optional[datetime]:
153 |         """Parse date strings using dateparser for maximum flexibility.
154 | 
155 |         Supports human friendly formats like:
156 |         - 2024-01-15
157 |         - Jan 15, 2024
158 |         - 2024-01-15 10:00 AM
159 |         - yesterday
160 |         - 2 days ago
161 |         """
162 |         if isinstance(value, datetime):
163 |             return value
164 |         if isinstance(value, str):
165 |             parsed = dateparser.parse(value)
166 |             if parsed:
167 |                 return parsed
168 |         return None
169 | 
170 |     async def parse_file(self, path: Path | str) -> EntityMarkdown:
171 |         """Parse markdown file into EntityMarkdown."""
172 | 
173 |         # Check if the path is already absolute
174 |         if (
175 |             isinstance(path, Path)
176 |             and path.is_absolute()
177 |             or (isinstance(path, str) and Path(path).is_absolute())
178 |         ):
179 |             absolute_path = Path(path)
180 |         else:
181 |             absolute_path = self.get_file_path(path)
182 | 
183 |         # Parse frontmatter and content using python-frontmatter
184 |         file_content = absolute_path.read_text(encoding="utf-8")
185 |         return await self.parse_file_content(absolute_path, file_content)
186 | 
187 |     def get_file_path(self, path):
188 |         """Get absolute path for a file using the base path for the project."""
189 |         return self.base_path / path
190 | 
191 |     async def parse_file_content(self, absolute_path, file_content):
192 |         # Parse frontmatter with proper error handling for malformed YAML (issue #185)
193 |         try:
194 |             post = frontmatter.loads(file_content)
195 |         except yaml.YAMLError as e:
196 |             # Log the YAML parsing error with file context
197 |             logger.warning(
198 |                 f"Failed to parse YAML frontmatter in {absolute_path}: {e}. "
199 |                 f"Treating file as plain markdown without frontmatter."
200 |             )
201 |             # Create a post with no frontmatter - treat entire content as markdown
202 |             post = frontmatter.Post(file_content, metadata={})
203 | 
204 |         # Extract file stat info
205 |         file_stats = absolute_path.stat()
206 | 
207 |         # Normalize frontmatter values to prevent AttributeError on date objects (issue #236)
208 |         # PyYAML automatically converts date strings like "2025-10-24" to datetime.date objects
209 |         # This normalization converts them back to ISO format strings to ensure compatibility
210 |         # with code that expects string values
211 |         metadata = normalize_frontmatter_metadata(post.metadata)
212 | 
213 |         # Ensure required fields have defaults (issue #184, #387)
214 |         # Handle title - use default if missing, None/null, empty, or string "None"
215 |         title = metadata.get("title")
216 |         if not title or title == "None":
217 |             metadata["title"] = absolute_path.stem
218 |         else:
219 |             metadata["title"] = title
220 |         # Handle type - use default if missing OR explicitly set to None/null
221 |         entity_type = metadata.get("type")
222 |         metadata["type"] = entity_type if entity_type is not None else "note"
223 | 
224 |         tags = parse_tags(metadata.get("tags", []))  # pyright: ignore
225 |         if tags:
226 |             metadata["tags"] = tags
227 | 
228 |         # frontmatter - use metadata with defaults applied
229 |         entity_frontmatter = EntityFrontmatter(
230 |             metadata=metadata,
231 |         )
232 |         entity_content = parse(post.content)
233 |         return EntityMarkdown(
234 |             frontmatter=entity_frontmatter,
235 |             content=post.content,
236 |             observations=entity_content.observations,
237 |             relations=entity_content.relations,
238 |             created=datetime.fromtimestamp(file_stats.st_ctime).astimezone(),
239 |             modified=datetime.fromtimestamp(file_stats.st_mtime).astimezone(),
240 |         )
241 | 
```
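
A minimal sketch of the malformed-YAML fallback used in `parse_file_content`, isolated from the parser class above; the sample input is invented:

```python
import frontmatter
import yaml


def load_post_safely(text: str) -> frontmatter.Post:
    """Parse frontmatter, degrading to plain markdown on malformed YAML."""
    try:
        return frontmatter.loads(text)
    except yaml.YAMLError:
        # Same fallback as parse_file_content: the whole file becomes
        # content, and metadata stays empty
        return frontmatter.Post(text)


broken = "---\ntitle: [unclosed\n---\nBody text"
post = load_post_safely(broken)
assert post.metadata == {} and post.content == broken
```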

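The date formats listed in `parse_date`'s docstring can be sanity-checked against dateparser directly (a quick sketch, not a test from the repo):

```python
import dateparser

for text in ["2024-01-15", "Jan 15, 2024", "2024-01-15 10:00 AM", "yesterday", "2 days ago"]:
    print(f"{text!r:>22} -> {dateparser.parse(text)}")
```
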
--------------------------------------------------------------------------------
/src/basic_memory/api/template_loader.py:
--------------------------------------------------------------------------------

```python
  1 | """Template loading and rendering utilities for the Basic Memory API.
  2 | 
  3 | This module handles the loading and rendering of Handlebars templates from the
  4 | templates directory, providing a consistent interface for all prompt-related
  5 | formatting needs.
  6 | """
  7 | 
  8 | import textwrap
  9 | from typing import Dict, Any, Optional, Callable
 10 | from pathlib import Path
 11 | import json
 12 | import datetime
 13 | 
 14 | import pybars
 15 | from loguru import logger
 16 | 
 17 | # Get the base path of the templates directory
 18 | TEMPLATES_DIR = Path(__file__).parent.parent / "templates"
 19 | 
 20 | 
 21 | # Custom helpers for Handlebars
 22 | def _date_helper(this, *args):
 23 |     """Format a date using the given format string."""
 24 |     if len(args) < 1:  # pragma: no cover
 25 |         return ""
 26 | 
 27 |     timestamp = args[0]
 28 |     format_str = args[1] if len(args) > 1 else "%Y-%m-%d %H:%M"
 29 | 
 30 |     if hasattr(timestamp, "strftime"):
 31 |         result = timestamp.strftime(format_str)
 32 |     elif isinstance(timestamp, str):
 33 |         try:
 34 |             dt = datetime.datetime.fromisoformat(timestamp)
 35 |             result = dt.strftime(format_str)
 36 |         except ValueError:
 37 |             result = timestamp
 38 |     else:
 39 |         result = str(timestamp)  # pragma: no cover
 40 | 
 41 |     return pybars.strlist([result])
 42 | 
 43 | 
 44 | def _default_helper(this, *args):
 45 |     """Return a default value if the given value is None or empty."""
 46 |     if len(args) < 2:  # pragma: no cover
 47 |         return ""
 48 | 
 49 |     value = args[0]
 50 |     default_value = args[1]
 51 | 
 52 |     result = default_value if value is None or value == "" else value
 53 |     # Use strlist for consistent handling of HTML escaping
 54 |     return pybars.strlist([str(result)])
 55 | 
 56 | 
 57 | def _capitalize_helper(this, *args):
 58 |     """Capitalize the first letter of a string."""
 59 |     if len(args) < 1:  # pragma: no cover
 60 |         return ""
 61 | 
 62 |     text = args[0]
 63 |     if not text or not isinstance(text, str):  # pragma: no cover
 64 |         result = ""
 65 |     else:
 66 |         result = text.capitalize()
 67 | 
 68 |     return pybars.strlist([result])
 69 | 
 70 | 
 71 | def _round_helper(this, *args):
 72 |     """Round a number to the specified number of decimal places."""
 73 |     if len(args) < 1:
 74 |         return ""
 75 | 
 76 |     value = args[0]
 77 |     decimal_places = args[1] if len(args) > 1 else 2
 78 | 
 79 |     try:
 80 |         result = str(round(float(value), int(decimal_places)))
 81 |     except (ValueError, TypeError):
 82 |         result = str(value)
 83 | 
 84 |     return pybars.strlist([result])
 85 | 
 86 | 
 87 | def _size_helper(this, *args):
 88 |     """Return the size/length of a collection."""
 89 |     if len(args) < 1:
 90 |         return 0
 91 | 
 92 |     value = args[0]
 93 |     if value is None:
 94 |         result = "0"
 95 |     elif isinstance(value, (list, tuple, dict, str)):
 96 |         result = str(len(value))  # pragma: no cover
 97 |     else:  # pragma: no cover
 98 |         result = "0"
 99 | 
100 |     return pybars.strlist([result])
101 | 
102 | 
103 | def _json_helper(this, *args):
104 |     """Convert a value to a JSON string."""
105 |     if len(args) < 1:  # pragma: no cover
106 |         return "{}"
107 | 
108 |     value = args[0]
109 |     # Serialize the value to JSON
110 |     result = json.dumps(value)  # pragma: no cover
111 |     # strlist acts as a safe string, preventing HTML escaping of the JSON
112 |     return pybars.strlist([result])
113 | 
114 | 
115 | def _math_helper(this, *args):
116 |     """Perform basic math operations."""
117 |     if len(args) < 3:
118 |         return pybars.strlist(["Math error: Insufficient arguments"])
119 | 
120 |     lhs = args[0]
121 |     operator = args[1]
122 |     rhs = args[2]
123 | 
124 |     try:
125 |         lhs = float(lhs)
126 |         rhs = float(rhs)
127 |         if operator == "+":
128 |             result = str(lhs + rhs)
129 |         elif operator == "-":
130 |             result = str(lhs - rhs)
131 |         elif operator == "*":
132 |             result = str(lhs * rhs)
133 |         elif operator == "/":
134 |             result = str(lhs / rhs)
135 |         else:
136 |             result = f"Unsupported operator: {operator}"
137 |     except (ValueError, TypeError) as e:
138 |         result = f"Math error: {e}"
139 | 
140 |     return pybars.strlist([result])
141 | 
142 | 
143 | def _lt_helper(this, *args):
144 |     """Check if left hand side is less than right hand side."""
145 |     if len(args) < 2:
146 |         return False
147 | 
148 |     lhs = args[0]
149 |     rhs = args[1]
150 | 
151 |     try:
152 |         return float(lhs) < float(rhs)
153 |     except (ValueError, TypeError):
154 |         # Fall back to string comparison for non-numeric values
155 |         return str(lhs) < str(rhs)
156 | 
157 | 
158 | def _if_cond_helper(this, options, condition):
159 |     """Block helper for custom if conditionals."""
160 |     if condition:
161 |         return options["fn"](this)
162 |     elif "inverse" in options:
163 |         return options["inverse"](this)
164 |     return ""  # pragma: no cover
165 | 
166 | 
167 | def _dedent_helper(this, options):
168 |     """Dedent a block of text to remove common leading whitespace.
169 | 
170 |     Usage:
171 |     {{#dedent}}
172 |         This text will have its
173 |         common leading whitespace removed
174 |         while preserving relative indentation.
175 |     {{/dedent}}
176 |     """
177 |     if "fn" not in options:  # pragma: no cover
178 |         return ""
179 | 
180 |     # Get the content from the block
181 |     content = options["fn"](this)
182 | 
183 |     # Convert to string if it's a strlist
184 |     if (
185 |         isinstance(content, list)
186 |         or (hasattr(content, "__iter__")
187 |             and not isinstance(content, (str, bytes)))
188 |     ):
189 |         content_str = "".join(str(item) for item in content)  # pragma: no cover
190 |     else:
191 |         content_str = str(content)  # pragma: no cover
192 | 
193 |     # Add trailing and leading newlines to ensure proper dedenting
194 |     # This is critical for textwrap.dedent to work correctly with mixed content
195 |     content_str = "\n" + content_str + "\n"
196 | 
197 |     # Use textwrap to dedent the content and remove the extra newlines we added
198 |     dedented = textwrap.dedent(content_str)[1:-1]
199 | 
200 |     # Return as a SafeString to prevent HTML escaping
201 |     return pybars.strlist([dedented])  # pragma: no cover
202 | 
203 | 
204 | class TemplateLoader:
205 |     """Loader for Handlebars templates.
206 | 
207 |     This class is responsible for loading templates from disk and rendering
208 |     them with the provided context data.
209 |     """
210 | 
211 |     def __init__(self, template_dir: Optional[str] = None):
212 |         """Initialize the template loader.
213 | 
214 |         Args:
215 |             template_dir: Optional custom template directory path
216 |         """
217 |         self.template_dir = Path(template_dir) if template_dir else TEMPLATES_DIR
218 |         self.template_cache: Dict[str, Callable] = {}
219 |         self.compiler = pybars.Compiler()
220 | 
221 |         # Set up standard helpers
222 |         self.helpers = {
223 |             "date": _date_helper,
224 |             "default": _default_helper,
225 |             "capitalize": _capitalize_helper,
226 |             "round": _round_helper,
227 |             "size": _size_helper,
228 |             "json": _json_helper,
229 |             "math": _math_helper,
230 |             "lt": _lt_helper,
231 |             "if_cond": _if_cond_helper,
232 |             "dedent": _dedent_helper,
233 |         }
234 | 
235 |         logger.debug(f"Initialized template loader with directory: {self.template_dir}")
236 | 
237 |     def get_template(self, template_path: str) -> Callable:
238 |         """Get a template by path, using cache if available.
239 | 
240 |         Args:
241 |             template_path: The path to the template, relative to the templates directory
242 | 
243 |         Returns:
244 |             The compiled Handlebars template
245 | 
246 |         Raises:
247 |             FileNotFoundError: If the template doesn't exist
248 |         """
249 |         if template_path in self.template_cache:
250 |             return self.template_cache[template_path]
251 | 
252 |         # Convert from Liquid-style path to Handlebars extension
253 |         if template_path.endswith(".liquid"):
254 |             template_path = template_path.replace(".liquid", ".hbs")
255 |         elif not template_path.endswith(".hbs"):
256 |             template_path = f"{template_path}.hbs"
257 | 
258 |         full_path = self.template_dir / template_path
259 | 
260 |         if not full_path.exists():
261 |             raise FileNotFoundError(f"Template not found: {full_path}")
262 | 
263 |         with open(full_path, "r", encoding="utf-8") as f:
264 |             template_str = f.read()
265 | 
266 |         template = self.compiler.compile(template_str)
267 |         self.template_cache[template_path] = template
268 | 
269 |         logger.debug(f"Loaded template: {template_path}")
270 |         return template
271 | 
272 |     async def render(self, template_path: str, context: Dict[str, Any]) -> str:
273 |         """Render a template with the given context.
274 | 
275 |         Args:
276 |             template_path: The path to the template, relative to the templates directory
277 |             context: The context data to pass to the template
278 | 
279 |         Returns:
280 |             The rendered template as a string
281 |         """
282 |         template = self.get_template(template_path)
283 |         return template(context, helpers=self.helpers)
284 | 
285 |     def clear_cache(self) -> None:
286 |         """Clear the template cache."""
287 |         self.template_cache.clear()
288 |         logger.debug("Template cache cleared")
289 | 
290 | 
291 | # Global template loader instance
292 | template_loader = TemplateLoader()
293 | 
```
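
The helpers can be exercised without any template files by compiling an inline template with the same helpers mapping; a hedged sketch (context values are invented, and the expected output assumes pybars' usual strlist rendering):

```python
import datetime

import pybars

compiler = pybars.Compiler()
template = compiler.compile(
    '{{capitalize name}}: {{size items}} items, '
    '{{date now "%Y-%m-%d"}}, score {{round score 1}}'
)
result = template(
    {
        "name": "inbox",
        "items": [1, 2, 3],
        "now": datetime.datetime(2025, 1, 15, 10, 0),
        "score": 4.25,
    },
    # reuse the module-level helpers defined above
    helpers={
        "capitalize": _capitalize_helper,
        "size": _size_helper,
        "date": _date_helper,
        "round": _round_helper,
    },
)
print(result)  # Inbox: 3 items, 2025-01-15, score 4.2
```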

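End to end with `TemplateLoader` itself, using a throwaway template directory (file name and contents invented); note the `.liquid` name resolving to the `.hbs` file:

```python
import asyncio
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    (Path(tmp) / "greeting.hbs").write_text('Hello {{default name "world"}}!')
    loader = TemplateLoader(template_dir=tmp)
    # "greeting.liquid" is transparently rewritten to "greeting.hbs"
    rendered = asyncio.run(loader.render("greeting.liquid", {"name": None}))
    assert str(rendered) == "Hello world!"
```
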
--------------------------------------------------------------------------------
/specs/SPEC-15 Configuration Persistence via Tigris for Cloud Tenants.md:
--------------------------------------------------------------------------------

```markdown
  1 | ---
  2 | title: 'SPEC-15: Configuration Persistence via Tigris for Cloud Tenants'
  3 | type: spec
  4 | permalink: specs/spec-15-config-persistence-tigris
  5 | tags:
  6 | - persistence
  7 | - tigris
  8 | - multi-tenant
  9 | - infrastructure
 10 | - configuration
 11 | status: draft
 12 | ---
 13 | 
 14 | # SPEC-15: Configuration Persistence via Tigris for Cloud Tenants
 15 | 
 16 | ## Why
 17 | 
 18 | We need to persist Basic Memory configuration across Fly.io deployments without using persistent volumes or external databases.
 19 | 
 20 | **Current Problems:**
 21 | - `~/.basic-memory/config.json` lost on every deployment (project configuration)
 22 | - `~/.basic-memory/memory.db` lost on every deployment (search index)
 23 | - Persistent volumes break clean deployment workflow
 24 | - External databases (Turso) require per-tenant token management
 25 | 
 26 | **The Insight:**
 27 | The SQLite database is just an **index cache** of the markdown files. It can be rebuilt in seconds from the source markdown files in Tigris. Only the small `config.json` file needs true persistence.
 28 | 
 29 | **Solution:**
 30 | - Store `config.json` in Tigris bucket (persistent, small file)
 31 | - Rebuild `memory.db` on startup from markdown files (fast, ephemeral)
 32 | - No persistent volumes, no external databases, no token management
 33 | 
 34 | ## What
 35 | 
 36 | Store Basic Memory configuration in the Tigris bucket and rebuild the database index on tenant machine startup.
 37 | 
 38 | **Affected Components:**
 39 | - `basic-memory/src/basic_memory/config.py` - Add configurable config directory
 40 | 
 41 | **Architecture:**
 42 | 
 43 | ```bash
 44 | # Tigris Bucket (persistent, mounted at /app/data)
 45 | /app/data/
 46 |   ├── .basic-memory/
 47 |   │   └── config.json          # ← Project configuration (persistent, accessed via BASIC_MEMORY_CONFIG_DIR)
 48 |   └── basic-memory/             # ← Markdown files (persistent, BASIC_MEMORY_HOME)
 49 |       ├── project1/
 50 |       └── project2/
 51 | 
 52 | # Fly Machine (ephemeral)
 53 | /app/.basic-memory/
 54 |   └── memory.db                # ← Rebuilt on startup (fast local disk)
 55 | ```
 56 | 
 57 | ## How (High Level)
 58 | 
 59 | ### 1. Add Configurable Config Directory to Basic Memory
 60 | 
 61 | Currently `ConfigManager` hardcodes `~/.basic-memory/config.json`. Add environment variable to override:
 62 | 
 63 | ```python
 64 | # basic-memory/src/basic_memory/config.py
 65 | 
 66 | class ConfigManager:
 67 |     """Manages Basic Memory configuration."""
 68 | 
 69 |     def __init__(self) -> None:
 70 |         """Initialize the configuration manager."""
 71 |         home = os.getenv("HOME", Path.home())
 72 |         if isinstance(home, str):
 73 |             home = Path(home)
 74 | 
 75 |         # Allow override via environment variable
 76 |         if config_dir := os.getenv("BASIC_MEMORY_CONFIG_DIR"):
 77 |             self.config_dir = Path(config_dir)
 78 |         else:
 79 |             self.config_dir = home / DATA_DIR_NAME
 80 | 
 81 |         self.config_file = self.config_dir / CONFIG_FILE_NAME
 82 | 
 83 |         # Ensure config directory exists
 84 |         self.config_dir.mkdir(parents=True, exist_ok=True)
 85 | ```
 86 | 
 87 | ### 2. Rebuild Database on Startup
 88 | 
 89 | Basic Memory already has the sync functionality. Just ensure it runs on startup:
 90 | 
 91 | ```python
 92 | # apps/api/src/basic_memory_cloud_api/main.py
 93 | 
 94 | @app.on_event("startup")
 95 | async def startup_sync():
 96 |     """Rebuild database index from Tigris markdown files."""
 97 |     logger.info("Starting database rebuild from Tigris")
 98 | 
 99 |     # Initialize file sync (rebuilds index from markdown files)
100 |     app_config = ConfigManager().config
101 |     await initialize_file_sync(app_config)
102 | 
103 |     logger.info("Database rebuild complete")
104 | ```
105 | 
106 | ### 3. Environment Configuration
107 | 
108 | ```bash
109 | # Machine environment variables
110 | BASIC_MEMORY_CONFIG_DIR=/app/data/.basic-memory  # Config read/written directly to Tigris
111 | # memory.db stays in default location: /app/.basic-memory/memory.db (local ephemeral disk)
112 | ```
113 | 
114 | ## Implementation Task List
115 | 
116 | ### Phase 1: Basic Memory Changes ✅
117 | - [x] Add `BASIC_MEMORY_CONFIG_DIR` environment variable support to `ConfigManager.__init__()`
118 | - [x] Test config loading from custom directory
119 | - [x] Update tests to verify custom config dir works
120 | 
121 | ### Phase 2: Tigris Bucket Structure ✅
122 | - [x] Ensure `.basic-memory/` directory exists in Tigris bucket on tenant creation
123 |   - ✅ ConfigManager auto-creates on first run, no explicit provisioning needed
124 | - [x] Initialize `config.json` in Tigris on first tenant deployment
125 |   - ✅ ConfigManager creates config.json automatically in BASIC_MEMORY_CONFIG_DIR
126 | - [x] Verify TigrisFS handles hidden directories correctly
127 |   - ✅ TigrisFS supports hidden directories (verified in SPEC-8)
128 | 
129 | ### Phase 3: Deployment Integration ✅
130 | - [x] Set `BASIC_MEMORY_CONFIG_DIR` environment variable in machine deployment
131 |   - ✅ Added to BasicMemoryMachineConfigBuilder in fly_schemas.py
132 | - [x] Ensure database rebuild runs on machine startup via initialization sync
133 |   - ✅ sync_worker.py runs initialize_file_sync every 30s (already implemented)
134 | - [x] Handle first-time tenant setup (no config exists yet)
135 |   - ✅ ConfigManager creates config.json on first initialization
136 | - [ ] Test deployment workflow with config persistence
137 | 
138 | ### Phase 4: Testing
139 | - [x] Unit tests for config directory override
140 | - [-] Integration test: deploy → write config → redeploy → verify config persists
141 | - [ ] Integration test: deploy → add project → redeploy → verify project in config
142 | - [ ] Performance test: measure db rebuild time on startup
143 | 
144 | ### Phase 5: Documentation
145 | - [ ] Document config persistence architecture
146 | - [ ] Update deployment runbook
147 | - [ ] Document startup sequence and timing
148 | 
149 | ## How to Evaluate
150 | 
151 | ### Success Criteria
152 | 
153 | 1. **Config Persistence**
154 |    - [ ] config.json persists across deployments
155 |    - [ ] Projects list maintained across restarts
156 |    - [ ] No manual configuration needed after redeploy
157 | 
158 | 2. **Database Rebuild**
159 |    - [ ] memory.db rebuilt on startup in < 30 seconds
160 |    - [ ] All entities indexed correctly
161 |    - [ ] Search functionality works after rebuild
162 | 
163 | 3. **Performance**
164 |    - [ ] SQLite queries remain fast (local disk)
165 |    - [ ] Config reads acceptable (config dir on the Tigris FUSE mount)
166 |    - [ ] No noticeable performance degradation
167 | 
168 | 4. **Deployment Workflow**
169 |    - [ ] Clean deployments without volumes
170 |    - [ ] No new external dependencies
171 |    - [ ] No secret management needed
172 | 
173 | ### Testing Procedure
174 | 
175 | 1. **Config Persistence Test**
176 |    ```bash
177 |    # Deploy tenant
178 |    POST /tenants → tenant_id
179 | 
180 |    # Add a project
181 |    basic-memory project add "test-project" ~/test
182 | 
183 |    # Verify config has project
184 |    cat /app/data/.basic-memory/config.json
185 | 
186 |    # Redeploy machine
187 |    fly deploy --app basic-memory-{tenant_id}
188 | 
189 |    # Verify project still exists
190 |    basic-memory project list
191 |    ```
192 | 
193 | 2. **Database Rebuild Test**
194 |    ```bash
195 |    # Create notes
196 |    basic-memory write "Test Note" --content "..."
197 | 
198 |    # Redeploy (db lost)
199 |    fly deploy --app basic-memory-{tenant_id}
200 | 
201 |    # Wait for startup sync
202 |    sleep 10
203 | 
204 |    # Verify note is indexed
205 |    basic-memory search "Test Note"
206 |    ```
207 | 
208 | 3. **Performance Benchmark**
209 |    ```bash
210 |    # Time the startup sync
211 |    time basic-memory sync
212 | 
213 |    # Should be < 30 seconds for typical tenant
214 |    ```
215 | 
216 | ## Benefits Over Alternatives
217 | 
218 | **vs. Persistent Volumes:**
219 | - ✅ Clean deployment workflow
220 | - ✅ No volume migration needed
221 | - ✅ Simpler infrastructure
222 | 
223 | **vs. Turso (External Database):**
224 | - ✅ No per-tenant token management
225 | - ✅ No external service dependencies
226 | - ✅ No additional costs
227 | - ✅ Simpler architecture
228 | 
229 | **vs. SQLite on FUSE:**
230 | - ✅ Fast local SQLite performance
231 | - ✅ Only the small config file incurs slower FUSE reads
232 | - ✅ Database queries remain fast
233 | 
234 | ## Implementation Assignment
235 | 
236 | **Primary Agent:** `python-developer`
237 | - Add `BASIC_MEMORY_CONFIG_DIR` environment variable to ConfigManager
238 | - Update deployment workflow to set environment variable
239 | - Ensure startup sync runs correctly
240 | 
241 | **Review Agent:** `system-architect`
242 | - Validate architecture simplicity
243 | - Review performance implications
244 | - Assess startup timing
245 | 
246 | ## Dependencies
247 | 
248 | - **Internal:** TigrisFS must be working and stable
249 | - **Internal:** Basic Memory sync must be reliable
250 | - **Internal:** SPEC-8 (TigrisFS Integration) must be complete
251 | 
252 | ## Open Questions
253 | 
254 | 1. Should we add a health check that waits for db rebuild to complete?
255 | 2. Do we need to handle very large knowledge bases (>10k entities) differently?
256 | 3. Should we add metrics for startup sync duration?
257 | 
258 | ## References
259 | 
260 | - Basic Memory sync: `basic-memory/src/basic_memory/services/initialization.py`
261 | - Config management: `basic-memory/src/basic_memory/config.py`
262 | - TigrisFS integration: SPEC-8
263 | 
264 | ---
265 | 
266 | **Status Updates:**
267 | 
268 | - 2025-10-08: Pivoted from Turso to Tigris-based config persistence
269 | - 2025-10-08: Phase 1 complete - BASIC_MEMORY_CONFIG_DIR support added (PR #343)
270 | - 2025-10-08: Phases 2-3 complete - Added BASIC_MEMORY_CONFIG_DIR to machine config
271 |   - Config now persists to /app/data/.basic-memory/config.json in Tigris bucket
272 |   - Database rebuild already working via sync_worker.py
273 |   - Ready for deployment testing (Phase 4)
274 | 
```
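
A minimal sketch of the Phase 1 unit test, assuming pytest's `tmp_path` and `monkeypatch` fixtures (the test name is invented; the assertions follow the ConfigManager snippet above):

```python
from pathlib import Path

from basic_memory.config import ConfigManager


def test_config_dir_override(tmp_path: Path, monkeypatch) -> None:
    custom_dir = tmp_path / "cfg"
    monkeypatch.setenv("BASIC_MEMORY_CONFIG_DIR", str(custom_dir))

    manager = ConfigManager()

    # __init__ resolves the override and creates the directory
    assert manager.config_dir == custom_dir
    assert manager.config_dir.exists()
    # config.json lives inside the overridden directory
    assert manager.config_file.parent == custom_dir
```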