This is page 7 of 23. Use http://codebase.md/basicmachines-co/basic-memory?lines=true&page={x} to view the full context.

# Directory Structure

```
├── .claude
│   ├── agents
│   │   ├── python-developer.md
│   │   └── system-architect.md
│   └── commands
│       ├── release
│       │   ├── beta.md
│       │   ├── changelog.md
│       │   ├── release-check.md
│       │   └── release.md
│       ├── spec.md
│       └── test-live.md
├── .dockerignore
├── .github
│   ├── dependabot.yml
│   ├── ISSUE_TEMPLATE
│   │   ├── bug_report.md
│   │   ├── config.yml
│   │   ├── documentation.md
│   │   └── feature_request.md
│   └── workflows
│       ├── claude-code-review.yml
│       ├── claude-issue-triage.yml
│       ├── claude.yml
│       ├── dev-release.yml
│       ├── docker.yml
│       ├── pr-title.yml
│       ├── release.yml
│       └── test.yml
├── .gitignore
├── .python-version
├── CHANGELOG.md
├── CITATION.cff
├── CLA.md
├── CLAUDE.md
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── docker-compose.yml
├── Dockerfile
├── docs
│   ├── ai-assistant-guide-extended.md
│   ├── character-handling.md
│   ├── cloud-cli.md
│   └── Docker.md
├── justfile
├── LICENSE
├── llms-install.md
├── pyproject.toml
├── README.md
├── SECURITY.md
├── smithery.yaml
├── specs
│   ├── SPEC-1 Specification-Driven Development Process.md
│   ├── SPEC-10 Unified Deployment Workflow and Event Tracking.md
│   ├── SPEC-11 Basic Memory API Performance Optimization.md
│   ├── SPEC-12 OpenTelemetry Observability.md
│   ├── SPEC-13 CLI Authentication with Subscription Validation.md
│   ├── SPEC-14 Cloud Git Versioning & GitHub Backup.md
│   ├── SPEC-14- Cloud Git Versioning & GitHub Backup.md
│   ├── SPEC-15 Configuration Persistence via Tigris for Cloud Tenants.md
│   ├── SPEC-16 MCP Cloud Service Consolidation.md
│   ├── SPEC-17 Semantic Search with ChromaDB.md
│   ├── SPEC-18 AI Memory Management Tool.md
│   ├── SPEC-19 Sync Performance and Memory Optimization.md
│   ├── SPEC-2 Slash Commands Reference.md
│   ├── SPEC-20 Simplified Project-Scoped Rclone Sync.md
│   ├── SPEC-3 Agent Definitions.md
│   ├── SPEC-4 Notes Web UI Component Architecture.md
│   ├── SPEC-5 CLI Cloud Upload via WebDAV.md
│   ├── SPEC-6 Explicit Project Parameter Architecture.md
│   ├── SPEC-7 POC to spike Tigris Turso for local access to cloud data.md
│   ├── SPEC-8 TigrisFS Integration.md
│   ├── SPEC-9 Multi-Project Bidirectional Sync Architecture.md
│   ├── SPEC-9 Signed Header Tenant Information.md
│   └── SPEC-9-1 Follow-Ups- Conflict, Sync, and Observability.md
├── src
│   └── basic_memory
│       ├── __init__.py
│       ├── alembic
│       │   ├── alembic.ini
│       │   ├── env.py
│       │   ├── migrations.py
│       │   ├── script.py.mako
│       │   └── versions
│       │       ├── 3dae7c7b1564_initial_schema.py
│       │       ├── 502b60eaa905_remove_required_from_entity_permalink.py
│       │       ├── 5fe1ab1ccebe_add_projects_table.py
│       │       ├── 647e7a75e2cd_project_constraint_fix.py
│       │       ├── 9d9c1cb7d8f5_add_mtime_and_size_columns_to_entity_.py
│       │       ├── a1b2c3d4e5f6_fix_project_foreign_keys.py
│       │       ├── b3c3938bacdb_relation_to_name_unique_index.py
│       │       ├── cc7172b46608_update_search_index_schema.py
│       │       └── e7e1f4367280_add_scan_watermark_tracking_to_project.py
│       ├── api
│       │   ├── __init__.py
│       │   ├── app.py
│       │   ├── routers
│       │   │   ├── __init__.py
│       │   │   ├── directory_router.py
│       │   │   ├── importer_router.py
│       │   │   ├── knowledge_router.py
│       │   │   ├── management_router.py
│       │   │   ├── memory_router.py
│       │   │   ├── project_router.py
│       │   │   ├── prompt_router.py
│       │   │   ├── resource_router.py
│       │   │   ├── search_router.py
│       │   │   └── utils.py
│       │   └── template_loader.py
│       ├── cli
│       │   ├── __init__.py
│       │   ├── app.py
│       │   ├── auth.py
│       │   ├── commands
│       │   │   ├── __init__.py
│       │   │   ├── cloud
│       │   │   │   ├── __init__.py
│       │   │   │   ├── api_client.py
│       │   │   │   ├── bisync_commands.py
│       │   │   │   ├── cloud_utils.py
│       │   │   │   ├── core_commands.py
│       │   │   │   ├── rclone_commands.py
│       │   │   │   ├── rclone_config.py
│       │   │   │   ├── rclone_installer.py
│       │   │   │   ├── upload_command.py
│       │   │   │   └── upload.py
│       │   │   ├── command_utils.py
│       │   │   ├── db.py
│       │   │   ├── import_chatgpt.py
│       │   │   ├── import_claude_conversations.py
│       │   │   ├── import_claude_projects.py
│       │   │   ├── import_memory_json.py
│       │   │   ├── mcp.py
│       │   │   ├── project.py
│       │   │   ├── status.py
│       │   │   └── tool.py
│       │   └── main.py
│       ├── config.py
│       ├── db.py
│       ├── deps.py
│       ├── file_utils.py
│       ├── ignore_utils.py
│       ├── importers
│       │   ├── __init__.py
│       │   ├── base.py
│       │   ├── chatgpt_importer.py
│       │   ├── claude_conversations_importer.py
│       │   ├── claude_projects_importer.py
│       │   ├── memory_json_importer.py
│       │   └── utils.py
│       ├── markdown
│       │   ├── __init__.py
│       │   ├── entity_parser.py
│       │   ├── markdown_processor.py
│       │   ├── plugins.py
│       │   ├── schemas.py
│       │   └── utils.py
│       ├── mcp
│       │   ├── __init__.py
│       │   ├── async_client.py
│       │   ├── project_context.py
│       │   ├── prompts
│       │   │   ├── __init__.py
│       │   │   ├── ai_assistant_guide.py
│       │   │   ├── continue_conversation.py
│       │   │   ├── recent_activity.py
│       │   │   ├── search.py
│       │   │   └── utils.py
│       │   ├── resources
│       │   │   ├── ai_assistant_guide.md
│       │   │   └── project_info.py
│       │   ├── server.py
│       │   └── tools
│       │       ├── __init__.py
│       │       ├── build_context.py
│       │       ├── canvas.py
│       │       ├── chatgpt_tools.py
│       │       ├── delete_note.py
│       │       ├── edit_note.py
│       │       ├── list_directory.py
│       │       ├── move_note.py
│       │       ├── project_management.py
│       │       ├── read_content.py
│       │       ├── read_note.py
│       │       ├── recent_activity.py
│       │       ├── search.py
│       │       ├── utils.py
│       │       ├── view_note.py
│       │       └── write_note.py
│       ├── models
│       │   ├── __init__.py
│       │   ├── base.py
│       │   ├── knowledge.py
│       │   ├── project.py
│       │   └── search.py
│       ├── repository
│       │   ├── __init__.py
│       │   ├── entity_repository.py
│       │   ├── observation_repository.py
│       │   ├── project_info_repository.py
│       │   ├── project_repository.py
│       │   ├── relation_repository.py
│       │   ├── repository.py
│       │   └── search_repository.py
│       ├── schemas
│       │   ├── __init__.py
│       │   ├── base.py
│       │   ├── cloud.py
│       │   ├── delete.py
│       │   ├── directory.py
│       │   ├── importer.py
│       │   ├── memory.py
│       │   ├── project_info.py
│       │   ├── prompt.py
│       │   ├── request.py
│       │   ├── response.py
│       │   ├── search.py
│       │   └── sync_report.py
│       ├── services
│       │   ├── __init__.py
│       │   ├── context_service.py
│       │   ├── directory_service.py
│       │   ├── entity_service.py
│       │   ├── exceptions.py
│       │   ├── file_service.py
│       │   ├── initialization.py
│       │   ├── link_resolver.py
│       │   ├── project_service.py
│       │   ├── search_service.py
│       │   └── service.py
│       ├── sync
│       │   ├── __init__.py
│       │   ├── background_sync.py
│       │   ├── sync_service.py
│       │   └── watch_service.py
│       ├── templates
│       │   └── prompts
│       │       ├── continue_conversation.hbs
│       │       └── search.hbs
│       └── utils.py
├── test-int
│   ├── BENCHMARKS.md
│   ├── cli
│   │   ├── test_project_commands_integration.py
│   │   └── test_version_integration.py
│   ├── conftest.py
│   ├── mcp
│   │   ├── test_build_context_underscore.py
│   │   ├── test_build_context_validation.py
│   │   ├── test_chatgpt_tools_integration.py
│   │   ├── test_default_project_mode_integration.py
│   │   ├── test_delete_note_integration.py
│   │   ├── test_edit_note_integration.py
│   │   ├── test_list_directory_integration.py
│   │   ├── test_move_note_integration.py
│   │   ├── test_project_management_integration.py
│   │   ├── test_project_state_sync_integration.py
│   │   ├── test_read_content_integration.py
│   │   ├── test_read_note_integration.py
│   │   ├── test_search_integration.py
│   │   ├── test_single_project_mcp_integration.py
│   │   └── test_write_note_integration.py
│   ├── test_db_wal_mode.py
│   ├── test_disable_permalinks_integration.py
│   └── test_sync_performance_benchmark.py
├── tests
│   ├── __init__.py
│   ├── api
│   │   ├── conftest.py
│   │   ├── test_async_client.py
│   │   ├── test_continue_conversation_template.py
│   │   ├── test_directory_router.py
│   │   ├── test_importer_router.py
│   │   ├── test_knowledge_router.py
│   │   ├── test_management_router.py
│   │   ├── test_memory_router.py
│   │   ├── test_project_router_operations.py
│   │   ├── test_project_router.py
│   │   ├── test_prompt_router.py
│   │   ├── test_relation_background_resolution.py
│   │   ├── test_resource_router.py
│   │   ├── test_search_router.py
│   │   ├── test_search_template.py
│   │   ├── test_template_loader_helpers.py
│   │   └── test_template_loader.py
│   ├── cli
│   │   ├── conftest.py
│   │   ├── test_cli_tools.py
│   │   ├── test_cloud_authentication.py
│   │   ├── test_ignore_utils.py
│   │   ├── test_import_chatgpt.py
│   │   ├── test_import_claude_conversations.py
│   │   ├── test_import_claude_projects.py
│   │   ├── test_import_memory_json.py
│   │   ├── test_project_add_with_local_path.py
│   │   └── test_upload.py
│   ├── conftest.py
│   ├── db
│   │   └── test_issue_254_foreign_key_constraints.py
│   ├── importers
│   │   ├── test_importer_base.py
│   │   └── test_importer_utils.py
│   ├── markdown
│   │   ├── __init__.py
│   │   ├── test_date_frontmatter_parsing.py
│   │   ├── test_entity_parser_error_handling.py
│   │   ├── test_entity_parser.py
│   │   ├── test_markdown_plugins.py
│   │   ├── test_markdown_processor.py
│   │   ├── test_observation_edge_cases.py
│   │   ├── test_parser_edge_cases.py
│   │   ├── test_relation_edge_cases.py
│   │   └── test_task_detection.py
│   ├── mcp
│   │   ├── conftest.py
│   │   ├── test_obsidian_yaml_formatting.py
│   │   ├── test_permalink_collision_file_overwrite.py
│   │   ├── test_prompts.py
│   │   ├── test_resources.py
│   │   ├── test_tool_build_context.py
│   │   ├── test_tool_canvas.py
│   │   ├── test_tool_delete_note.py
│   │   ├── test_tool_edit_note.py
│   │   ├── test_tool_list_directory.py
│   │   ├── test_tool_move_note.py
│   │   ├── test_tool_read_content.py
│   │   ├── test_tool_read_note.py
│   │   ├── test_tool_recent_activity.py
│   │   ├── test_tool_resource.py
│   │   ├── test_tool_search.py
│   │   ├── test_tool_utils.py
│   │   ├── test_tool_view_note.py
│   │   ├── test_tool_write_note.py
│   │   └── tools
│   │       └── test_chatgpt_tools.py
│   ├── Non-MarkdownFileSupport.pdf
│   ├── repository
│   │   ├── test_entity_repository_upsert.py
│   │   ├── test_entity_repository.py
│   │   ├── test_entity_upsert_issue_187.py
│   │   ├── test_observation_repository.py
│   │   ├── test_project_info_repository.py
│   │   ├── test_project_repository.py
│   │   ├── test_relation_repository.py
│   │   ├── test_repository.py
│   │   ├── test_search_repository_edit_bug_fix.py
│   │   └── test_search_repository.py
│   ├── schemas
│   │   ├── test_base_timeframe_minimum.py
│   │   ├── test_memory_serialization.py
│   │   ├── test_memory_url_validation.py
│   │   ├── test_memory_url.py
│   │   ├── test_schemas.py
│   │   └── test_search.py
│   ├── Screenshot.png
│   ├── services
│   │   ├── test_context_service.py
│   │   ├── test_directory_service.py
│   │   ├── test_entity_service_disable_permalinks.py
│   │   ├── test_entity_service.py
│   │   ├── test_file_service.py
│   │   ├── test_initialization.py
│   │   ├── test_link_resolver.py
│   │   ├── test_project_removal_bug.py
│   │   ├── test_project_service_operations.py
│   │   ├── test_project_service.py
│   │   └── test_search_service.py
│   ├── sync
│   │   ├── test_character_conflicts.py
│   │   ├── test_sync_service_incremental.py
│   │   ├── test_sync_service.py
│   │   ├── test_sync_wikilink_issue.py
│   │   ├── test_tmp_files.py
│   │   ├── test_watch_service_edge_cases.py
│   │   ├── test_watch_service_reload.py
│   │   └── test_watch_service.py
│   ├── test_config.py
│   ├── test_db_migration_deduplication.py
│   ├── test_deps.py
│   ├── test_production_cascade_delete.py
│   ├── test_rclone_commands.py
│   └── utils
│       ├── test_file_utils.py
│       ├── test_frontmatter_obsidian_compatible.py
│       ├── test_parse_tags.py
│       ├── test_permalink_formatting.py
│       ├── test_utf8_handling.py
│       └── test_validate_project_path.py
├── uv.lock
├── v0.15.0-RELEASE-DOCS.md
└── v15-docs
    ├── api-performance.md
    ├── background-relations.md
    ├── basic-memory-home.md
    ├── bug-fixes.md
    ├── chatgpt-integration.md
    ├── cloud-authentication.md
    ├── cloud-bisync.md
    ├── cloud-mode-usage.md
    ├── cloud-mount.md
    ├── default-project-mode.md
    ├── env-file-removal.md
    ├── env-var-overrides.md
    ├── explicit-project-parameter.md
    ├── gitignore-integration.md
    ├── project-root-env-var.md
    ├── README.md
    └── sqlite-performance.md
```

# Files

--------------------------------------------------------------------------------
/specs/SPEC-15 Configuration Persistence via Tigris for Cloud Tenants.md:
--------------------------------------------------------------------------------

```markdown
  1 | ---
  2 | title: 'SPEC-15: Configuration Persistence via Tigris for Cloud Tenants'
  3 | type: spec
  4 | permalink: specs/spec-15-config-persistence-tigris
  5 | tags:
  6 | - persistence
  7 | - tigris
  8 | - multi-tenant
  9 | - infrastructure
 10 | - configuration
 11 | status: draft
 12 | ---
 13 | 
 14 | # SPEC-15: Configuration Persistence via Tigris for Cloud Tenants
 15 | 
 16 | ## Why
 17 | 
 18 | We need to persist Basic Memory configuration across Fly.io deployments without using persistent volumes or external databases.
 19 | 
 20 | **Current Problems:**
 21 | - `~/.basic-memory/config.json` lost on every deployment (project configuration)
 22 | - `~/.basic-memory/memory.db` lost on every deployment (search index)
 23 | - Persistent volumes break clean deployment workflow
 24 | - External databases (Turso) require per-tenant token management
 25 | 
 26 | **The Insight:**
 27 | The SQLite database is just an **index cache** of the markdown files. It can be rebuilt in seconds from the source markdown files in Tigris. Only the small `config.json` file needs true persistence.
 28 | 
 29 | **Solution:**
 30 | - Store `config.json` in Tigris bucket (persistent, small file)
 31 | - Rebuild `memory.db` on startup from markdown files (fast, ephemeral)
 32 | - No persistent volumes, no external databases, no token management
 33 | 
 34 | ## What
 35 | 
 36 | Store Basic Memory configuration in the Tigris bucket and rebuild the database index on tenant machine startup.
 37 | 
 38 | **Affected Components:**
 39 | - `basic-memory/src/basic_memory/config.py` - Add configurable config directory
 40 | 
 41 | **Architecture:**
 42 | 
 43 | ```bash
 44 | # Tigris Bucket (persistent, mounted at /app/data)
 45 | /app/data/
 46 |   ├── .basic-memory/
 47 |   │   └── config.json          # ← Project configuration (persistent, accessed via BASIC_MEMORY_CONFIG_DIR)
 48 |   └── basic-memory/             # ← Markdown files (persistent, BASIC_MEMORY_HOME)
 49 |       ├── project1/
 50 |       └── project2/
 51 | 
 52 | # Fly Machine (ephemeral)
 53 | /app/.basic-memory/
 54 |   └── memory.db                # ← Rebuilt on startup (fast local disk)
 55 | ```
 56 | 
 57 | ## How (High Level)
 58 | 
 59 | ### 1. Add Configurable Config Directory to Basic Memory
 60 | 
 61 | Currently `ConfigManager` hardcodes `~/.basic-memory/config.json`. Add an environment variable to override it:
 62 | 
 63 | ```python
 64 | # basic-memory/src/basic_memory/config.py
 65 | 
 66 | class ConfigManager:
 67 |     """Manages Basic Memory configuration."""
 68 | 
 69 |     def __init__(self) -> None:
 70 |         """Initialize the configuration manager."""
 71 |         home = os.getenv("HOME", Path.home())
 72 |         if isinstance(home, str):
 73 |             home = Path(home)
 74 | 
 75 |         # Allow override via environment variable
 76 |         if config_dir := os.getenv("BASIC_MEMORY_CONFIG_DIR"):
 77 |             self.config_dir = Path(config_dir)
 78 |         else:
 79 |             self.config_dir = home / DATA_DIR_NAME
 80 | 
 81 |         self.config_file = self.config_dir / CONFIG_FILE_NAME
 82 | 
 83 |         # Ensure config directory exists
 84 |         self.config_dir.mkdir(parents=True, exist_ok=True)
 85 | ```
 86 | 
 87 | ### 2. Rebuild Database on Startup
 88 | 
 89 | Basic Memory already has the sync functionality. Just ensure it runs on startup:
 90 | 
 91 | ```python
 92 | # apps/api/src/basic_memory_cloud_api/main.py
 93 | 
 94 | @app.on_event("startup")
 95 | async def startup_sync():
 96 |     """Rebuild database index from Tigris markdown files."""
 97 |     logger.info("Starting database rebuild from Tigris")
 98 | 
 99 |     # Initialize file sync (rebuilds index from markdown files)
100 |     app_config = ConfigManager().config
101 |     await initialize_file_sync(app_config)
102 | 
103 |     logger.info("Database rebuild complete")
104 | ```
105 | 
106 | ### 3. Environment Configuration
107 | 
108 | ```bash
109 | # Machine environment variables
110 | BASIC_MEMORY_CONFIG_DIR=/app/data/.basic-memory  # Config read/written directly to Tigris
111 | # memory.db stays in default location: /app/.basic-memory/memory.db (local ephemeral disk)
112 | ```
113 | 
114 | ## Implementation Task List
115 | 
116 | ### Phase 1: Basic Memory Changes ✅
117 | - [x] Add `BASIC_MEMORY_CONFIG_DIR` environment variable support to `ConfigManager.__init__()`
118 | - [x] Test config loading from custom directory
119 | - [x] Update tests to verify custom config dir works
120 | 
121 | ### Phase 2: Tigris Bucket Structure ✅
122 | - [x] Ensure `.basic-memory/` directory exists in Tigris bucket on tenant creation
123 |   - ✅ ConfigManager auto-creates on first run, no explicit provisioning needed
124 | - [x] Initialize `config.json` in Tigris on first tenant deployment
125 |   - ✅ ConfigManager creates config.json automatically in BASIC_MEMORY_CONFIG_DIR
126 | - [x] Verify TigrisFS handles hidden directories correctly
127 |   - ✅ TigrisFS supports hidden directories (verified in SPEC-8)
128 | 
129 | ### Phase 3: Deployment Integration ✅
130 | - [x] Set `BASIC_MEMORY_CONFIG_DIR` environment variable in machine deployment
131 |   - ✅ Added to BasicMemoryMachineConfigBuilder in fly_schemas.py
132 | - [x] Ensure database rebuild runs on machine startup via initialization sync
133 |   - ✅ sync_worker.py runs initialize_file_sync every 30s (already implemented)
134 | - [x] Handle first-time tenant setup (no config exists yet)
135 |   - ✅ ConfigManager creates config.json on first initialization
136 | - [ ] Test deployment workflow with config persistence
137 | 
138 | ### Phase 4: Testing
139 | - [x] Unit tests for config directory override
140 | - [-] Integration test: deploy → write config → redeploy → verify config persists
141 | - [ ] Integration test: deploy → add project → redeploy → verify project in config
142 | - [ ] Performance test: measure db rebuild time on startup
143 | 
144 | ### Phase 5: Documentation
145 | - [ ] Document config persistence architecture
146 | - [ ] Update deployment runbook
147 | - [ ] Document startup sequence and timing
148 | 
149 | ## How to Evaluate
150 | 
151 | ### Success Criteria
152 | 
153 | 1. **Config Persistence**
154 |    - [ ] config.json persists across deployments
155 |    - [ ] Projects list maintained across restarts
156 |    - [ ] No manual configuration needed after redeploy
157 | 
158 | 2. **Database Rebuild**
159 |    - [ ] memory.db rebuilt on startup in < 30 seconds
160 |    - [ ] All entities indexed correctly
161 |    - [ ] Search functionality works after rebuild
162 | 
163 | 3. **Performance**
164 |    - [ ] SQLite queries remain fast (local disk)
 165 |    - [ ] Config reads acceptable (read directly from the Tigris mount via BASIC_MEMORY_CONFIG_DIR)
166 |    - [ ] No noticeable performance degradation
167 | 
168 | 4. **Deployment Workflow**
169 |    - [ ] Clean deployments without volumes
170 |    - [ ] No new external dependencies
171 |    - [ ] No secret management needed
172 | 
173 | ### Testing Procedure
174 | 
175 | 1. **Config Persistence Test**
176 |    ```bash
177 |    # Deploy tenant
178 |    POST /tenants → tenant_id
179 | 
180 |    # Add a project
181 |    basic-memory project add "test-project" ~/test
182 | 
183 |    # Verify config has project
184 |    cat /app/data/.basic-memory/config.json
185 | 
186 |    # Redeploy machine
187 |    fly deploy --app basic-memory-{tenant_id}
188 | 
189 |    # Verify project still exists
190 |    basic-memory project list
191 |    ```
192 | 
193 | 2. **Database Rebuild Test**
194 |    ```bash
195 |    # Create notes
196 |    basic-memory write "Test Note" --content "..."
197 | 
198 |    # Redeploy (db lost)
199 |    fly deploy --app basic-memory-{tenant_id}
200 | 
201 |    # Wait for startup sync
202 |    sleep 10
203 | 
204 |    # Verify note is indexed
205 |    basic-memory search "Test Note"
206 |    ```
207 | 
208 | 3. **Performance Benchmark**
209 |    ```bash
210 |    # Time the startup sync
211 |    time basic-memory sync
212 | 
213 |    # Should be < 30 seconds for typical tenant
214 |    ```
215 | 
216 | ## Benefits Over Alternatives
217 | 
218 | **vs. Persistent Volumes:**
219 | - ✅ Clean deployment workflow
220 | - ✅ No volume migration needed
221 | - ✅ Simpler infrastructure
222 | 
223 | **vs. Turso (External Database):**
224 | - ✅ No per-tenant token management
225 | - ✅ No external service dependencies
226 | - ✅ No additional costs
227 | - ✅ Simpler architecture
228 | 
229 | **vs. SQLite on FUSE:**
230 | - ✅ Fast local SQLite performance
 231 | - ✅ Slow FUSE reads limited to the small config file
232 | - ✅ Database queries remain fast
233 | 
234 | ## Implementation Assignment
235 | 
236 | **Primary Agent:** `python-developer`
237 | - Add `BASIC_MEMORY_CONFIG_DIR` environment variable to ConfigManager
238 | - Update deployment workflow to set environment variable
239 | - Ensure startup sync runs correctly
240 | 
241 | **Review Agent:** `system-architect`
242 | - Validate architecture simplicity
243 | - Review performance implications
244 | - Assess startup timing
245 | 
246 | ## Dependencies
247 | 
248 | - **Internal:** TigrisFS must be working and stable
249 | - **Internal:** Basic Memory sync must be reliable
250 | - **Internal:** SPEC-8 (TigrisFS Integration) must be complete
251 | 
252 | ## Open Questions
253 | 
254 | 1. Should we add a health check that waits for db rebuild to complete?
255 | 2. Do we need to handle very large knowledge bases (>10k entities) differently?
256 | 3. Should we add metrics for startup sync duration?
257 | 
258 | ## References
259 | 
260 | - Basic Memory sync: `basic-memory/src/basic_memory/services/initialization.py`
261 | - Config management: `basic-memory/src/basic_memory/config.py`
262 | - TigrisFS integration: SPEC-8
263 | 
264 | ---
265 | 
266 | **Status Updates:**
267 | 
268 | - 2025-10-08: Pivoted from Turso to Tigris-based config persistence
269 | - 2025-10-08: Phase 1 complete - BASIC_MEMORY_CONFIG_DIR support added (PR #343)
270 | - 2025-10-08: Phases 2-3 complete - Added BASIC_MEMORY_CONFIG_DIR to machine config
271 |   - Config now persists to /app/data/.basic-memory/config.json in Tigris bucket
272 |   - Database rebuild already working via sync_worker.py
273 |   - Ready for deployment testing (Phase 4)
274 | 
```
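
The override pattern in step 1 of the spec above is small enough to exercise standalone. The sketch below is a minimal, hypothetical reduction of it; the constant values and the helper name are assumptions for illustration, not imports from `basic_memory`:

```python
import os
from pathlib import Path

# Assumed values, mirroring the names used in the spec snippet.
DATA_DIR_NAME = ".basic-memory"
CONFIG_FILE_NAME = "config.json"


def resolve_config_dir() -> Path:
    """Resolve the config directory, honoring BASIC_MEMORY_CONFIG_DIR if set."""
    if config_dir := os.getenv("BASIC_MEMORY_CONFIG_DIR"):
        return Path(config_dir)
    return Path.home() / DATA_DIR_NAME


# Point config at a Tigris-style mount, as the spec's deployment section does.
os.environ["BASIC_MEMORY_CONFIG_DIR"] = "/tmp/tigris-demo/.basic-memory"
config_dir = resolve_config_dir()
config_dir.mkdir(parents=True, exist_ok=True)
print(config_dir / CONFIG_FILE_NAME)  # /tmp/tigris-demo/.basic-memory/config.json
```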

--------------------------------------------------------------------------------
/tests/schemas/test_memory_serialization.py:
--------------------------------------------------------------------------------

```python
  1 | """Tests for datetime serialization in memory schema models."""
  2 | 
  3 | import json
  4 | from datetime import datetime
  5 | 
  6 | 
  7 | from basic_memory.schemas.memory import (
  8 |     EntitySummary,
  9 |     RelationSummary,
 10 |     ObservationSummary,
 11 |     MemoryMetadata,
 12 |     GraphContext,
 13 |     ContextResult,
 14 | )
 15 | 
 16 | 
 17 | class TestDateTimeSerialization:
 18 |     """Test datetime serialization for MCP schema compliance."""
 19 | 
 20 |     def test_entity_summary_datetime_serialization(self):
 21 |         """Test EntitySummary serializes datetime as ISO format string."""
 22 |         test_datetime = datetime(2023, 12, 8, 10, 30, 0)
 23 | 
 24 |         entity = EntitySummary(
 25 |             permalink="test/entity",
 26 |             title="Test Entity",
 27 |             file_path="test/entity.md",
 28 |             created_at=test_datetime,
 29 |         )
 30 | 
 31 |         # Test model_dump_json() produces ISO format
 32 |         json_str = entity.model_dump_json()
 33 |         data = json.loads(json_str)
 34 | 
 35 |         assert data["created_at"] == "2023-12-08T10:30:00"
 36 |         assert data["type"] == "entity"
 37 |         assert data["title"] == "Test Entity"
 38 | 
 39 |     def test_relation_summary_datetime_serialization(self):
 40 |         """Test RelationSummary serializes datetime as ISO format string."""
 41 |         test_datetime = datetime(2023, 12, 8, 15, 45, 30)
 42 | 
 43 |         relation = RelationSummary(
 44 |             title="Test Relation",
 45 |             file_path="test/relation.md",
 46 |             permalink="test/relation",
 47 |             relation_type="relates_to",
 48 |             from_entity="entity1",
 49 |             to_entity="entity2",
 50 |             created_at=test_datetime,
 51 |         )
 52 | 
 53 |         # Test model_dump_json() produces ISO format
 54 |         json_str = relation.model_dump_json()
 55 |         data = json.loads(json_str)
 56 | 
 57 |         assert data["created_at"] == "2023-12-08T15:45:30"
 58 |         assert data["type"] == "relation"
 59 |         assert data["relation_type"] == "relates_to"
 60 | 
 61 |     def test_observation_summary_datetime_serialization(self):
 62 |         """Test ObservationSummary serializes datetime as ISO format string."""
 63 |         test_datetime = datetime(2023, 12, 8, 20, 15, 45)
 64 | 
 65 |         observation = ObservationSummary(
 66 |             title="Test Observation",
 67 |             file_path="test/observation.md",
 68 |             permalink="test/observation",
 69 |             category="note",
 70 |             content="Test content",
 71 |             created_at=test_datetime,
 72 |         )
 73 | 
 74 |         # Test model_dump_json() produces ISO format
 75 |         json_str = observation.model_dump_json()
 76 |         data = json.loads(json_str)
 77 | 
 78 |         assert data["created_at"] == "2023-12-08T20:15:45"
 79 |         assert data["type"] == "observation"
 80 |         assert data["category"] == "note"
 81 | 
 82 |     def test_memory_metadata_datetime_serialization(self):
 83 |         """Test MemoryMetadata serializes datetime as ISO format string."""
 84 |         test_datetime = datetime(2023, 12, 8, 12, 0, 0)
 85 | 
 86 |         metadata = MemoryMetadata(
 87 |             depth=2, generated_at=test_datetime, primary_count=5, related_count=3
 88 |         )
 89 | 
 90 |         # Test model_dump_json() produces ISO format
 91 |         json_str = metadata.model_dump_json()
 92 |         data = json.loads(json_str)
 93 | 
 94 |         assert data["generated_at"] == "2023-12-08T12:00:00"
 95 |         assert data["depth"] == 2
 96 |         assert data["primary_count"] == 5
 97 | 
 98 |     def test_context_result_with_datetime_serialization(self):
 99 |         """Test ContextResult with nested models serializes datetime correctly."""
100 |         test_datetime = datetime(2023, 12, 8, 9, 30, 15)
101 | 
102 |         entity = EntitySummary(
103 |             permalink="test/entity",
104 |             title="Test Entity",
105 |             file_path="test/entity.md",
106 |             created_at=test_datetime,
107 |         )
108 | 
109 |         observation = ObservationSummary(
110 |             title="Test Observation",
111 |             file_path="test/observation.md",
112 |             permalink="test/observation",
113 |             category="note",
114 |             content="Test content",
115 |             created_at=test_datetime,
116 |         )
117 | 
118 |         context_result = ContextResult(
119 |             primary_result=entity, observations=[observation], related_results=[]
120 |         )
121 | 
122 |         # Test model_dump_json() produces ISO format for nested models
123 |         json_str = context_result.model_dump_json()
124 |         data = json.loads(json_str)
125 | 
126 |         assert data["primary_result"]["created_at"] == "2023-12-08T09:30:15"
127 |         assert data["observations"][0]["created_at"] == "2023-12-08T09:30:15"
128 | 
129 |     def test_graph_context_full_serialization(self):
130 |         """Test full GraphContext serialization with all datetime fields."""
131 |         test_datetime = datetime(2023, 12, 8, 14, 20, 10)
132 | 
133 |         entity = EntitySummary(
134 |             permalink="test/entity",
135 |             title="Test Entity",
136 |             file_path="test/entity.md",
137 |             created_at=test_datetime,
138 |         )
139 | 
140 |         metadata = MemoryMetadata(
141 |             depth=1, generated_at=test_datetime, primary_count=1, related_count=0
142 |         )
143 | 
144 |         context_result = ContextResult(primary_result=entity, observations=[], related_results=[])
145 | 
146 |         graph_context = GraphContext(
147 |             results=[context_result], metadata=metadata, page=1, page_size=10
148 |         )
149 | 
150 |         # Test full serialization
151 |         json_str = graph_context.model_dump_json()
152 |         data = json.loads(json_str)
153 | 
154 |         assert data["metadata"]["generated_at"] == "2023-12-08T14:20:10"
155 |         assert data["results"][0]["primary_result"]["created_at"] == "2023-12-08T14:20:10"
156 | 
157 |     def test_datetime_with_microseconds_serialization(self):
158 |         """Test datetime with microseconds serializes correctly."""
159 |         test_datetime = datetime(2023, 12, 8, 10, 30, 0, 123456)
160 | 
161 |         entity = EntitySummary(
162 |             permalink="test/entity",
163 |             title="Test Entity",
164 |             file_path="test/entity.md",
165 |             created_at=test_datetime,
166 |         )
167 | 
168 |         json_str = entity.model_dump_json()
169 |         data = json.loads(json_str)
170 | 
171 |         # Should include microseconds in ISO format
172 |         assert data["created_at"] == "2023-12-08T10:30:00.123456"
173 | 
174 |     def test_mcp_schema_validation_compatibility(self):
175 |         """Test that serialized datetime format is compatible with MCP schema validation."""
176 |         test_datetime = datetime(2023, 12, 8, 10, 30, 0)
177 | 
178 |         entity = EntitySummary(
179 |             permalink="test/entity",
180 |             title="Test Entity",
181 |             file_path="test/entity.md",
182 |             created_at=test_datetime,
183 |         )
184 | 
185 |         # Serialize to JSON
186 |         json_str = entity.model_dump_json()
187 |         data = json.loads(json_str)
188 | 
189 |         # Verify the format matches expected MCP "date-time" format
190 |         datetime_str = data["created_at"]
191 | 
192 |         # Should be parseable back to datetime (ISO format validation)
193 |         parsed_datetime = datetime.fromisoformat(datetime_str)
194 |         assert parsed_datetime == test_datetime
195 | 
196 |         # Should match the expected ISO format pattern
197 |         assert "T" in datetime_str  # Contains date-time separator
198 |         assert len(datetime_str) >= 19  # At least YYYY-MM-DDTHH:MM:SS format
199 | 
200 |     def test_all_models_have_datetime_serializers_configured(self):
201 |         """Test that all memory schema models have datetime field serializers configured."""
202 |         models_to_test = [
203 |             (EntitySummary, "created_at"),
204 |             (RelationSummary, "created_at"),
205 |             (ObservationSummary, "created_at"),
206 |             (MemoryMetadata, "generated_at"),
207 |         ]
208 | 
209 |         for model_class, datetime_field in models_to_test:
210 |             # Create a test instance with a datetime field
211 |             test_datetime = datetime(2023, 12, 8, 10, 30, 0)
212 | 
213 |             if model_class == EntitySummary:
214 |                 instance = model_class(
215 |                     permalink="test", title="Test", file_path="test.md", created_at=test_datetime
216 |                 )
217 |             elif model_class == RelationSummary:
218 |                 instance = model_class(
219 |                     title="Test",
220 |                     file_path="test.md",
221 |                     permalink="test",
222 |                     relation_type="test",
223 |                     created_at=test_datetime,
224 |                 )
225 |             elif model_class == ObservationSummary:
226 |                 instance = model_class(
227 |                     title="Test",
228 |                     file_path="test.md",
229 |                     permalink="test",
230 |                     category="test",
231 |                     content="Test",
232 |                     created_at=test_datetime,
233 |                 )
234 |             elif model_class == MemoryMetadata:
235 |                 instance = model_class(depth=1, generated_at=test_datetime)
236 | 
237 |             # Test that model_dump produces ISO format for datetime field
238 |             data = instance.model_dump()
239 |             assert data[datetime_field] == "2023-12-08T10:30:00"
240 | 
```
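
These tests assert ISO strings from both `model_dump()` and `model_dump_json()`, which is the behavior Pydantic v2's `field_serializer` gives by default. A minimal sketch, assuming that mechanism (`EntityStub` is a hypothetical stand-in, not the real `EntitySummary`):

```python
from datetime import datetime

from pydantic import BaseModel, field_serializer


class EntityStub(BaseModel):
    """Hypothetical stand-in with the datetime field the tests exercise."""

    title: str
    created_at: datetime

    @field_serializer("created_at")
    def serialize_created_at(self, value: datetime) -> str:
        # Applies in both python and JSON dump modes by default.
        return value.isoformat()


stub = EntityStub(title="Test", created_at=datetime(2023, 12, 8, 10, 30, 0))
assert stub.model_dump()["created_at"] == "2023-12-08T10:30:00"
assert '"created_at":"2023-12-08T10:30:00"' in stub.model_dump_json()
```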

--------------------------------------------------------------------------------
/src/basic_memory/mcp/tools/write_note.py:
--------------------------------------------------------------------------------

```python
  1 | """Write note tool for Basic Memory MCP server."""
  2 | 
  3 | from typing import List, Union, Optional
  4 | 
  5 | from loguru import logger
  6 | 
  7 | from basic_memory.mcp.async_client import get_client
  8 | from basic_memory.mcp.project_context import get_active_project, add_project_metadata
  9 | from basic_memory.mcp.server import mcp
 10 | from basic_memory.mcp.tools.utils import call_put
 11 | from basic_memory.schemas import EntityResponse
 12 | from fastmcp import Context
 13 | from basic_memory.schemas.base import Entity
 14 | from basic_memory.utils import parse_tags, validate_project_path
 15 | 
 16 | # Define TagType as a Union that can accept either a string or a list of strings or None
 17 | TagType = Union[List[str], str, None]
 18 | 
 19 | 
 20 | 
 21 | 
 22 | 
 23 | @mcp.tool(
 24 |     description="Create or update a markdown note. Returns a markdown formatted summary of the semantic content.",
 25 | )
 26 | async def write_note(
 27 |     title: str,
 28 |     content: str,
 29 |     folder: str,
 30 |     project: Optional[str] = None,
 31 |     tags: TagType = None,
 32 |     entity_type: str = "note",
 33 |     context: Context | None = None,
 34 | ) -> str:
 35 |     """Write a markdown note to the knowledge base.
 36 | 
 37 |     Creates or updates a markdown note with semantic observations and relations.
 38 | 
 39 |     Project Resolution:
 40 |     Server resolves projects in this order: Single Project Mode → project parameter → default project.
 41 |     If project unknown, use list_memory_projects() or recent_activity() first.
 42 | 
 43 |     The content can include semantic observations and relations using markdown syntax:
 44 | 
 45 |     Observations format:
 46 |         `- [category] Observation text #tag1 #tag2 (optional context)`
 47 | 
 48 |         Examples:
 49 |         `- [design] Files are the source of truth #architecture (All state comes from files)`
 50 |         `- [tech] Using SQLite for storage #implementation`
 51 |         `- [note] Need to add error handling #todo`
 52 | 
 53 |     Relations format:
 54 |         - Explicit: `- relation_type [[Entity]] (optional context)`
 55 |         - Inline: Any `[[Entity]]` reference creates a relation
 56 | 
 57 |         Examples:
 58 |         `- depends_on [[Content Parser]] (Need for semantic extraction)`
 59 |         `- implements [[Search Spec]] (Initial implementation)`
 60 |         `- This feature extends [[Base Design]] and uses [[Core Utils]]`
 61 | 
 62 |     Args:
 63 |         title: The title of the note
 64 |         content: Markdown content for the note, can include observations and relations
 65 |         folder: Folder path relative to project root where the file should be saved.
 66 |                 Use forward slashes (/) as separators. Use "/" or "" to write to project root.
 67 |                 Examples: "notes", "projects/2025", "research/ml", "/" (root)
 68 |         project: Project name to write to. Optional - server will resolve using the
 69 |                 hierarchy above. If unknown, use list_memory_projects() to discover
 70 |                 available projects.
 71 |         tags: Tags to categorize the note. Can be a list of strings, a comma-separated string, or None.
 72 |               Note: If passing from external MCP clients, use a string format (e.g. "tag1,tag2,tag3")
 73 |         entity_type: Type of entity to create. Defaults to "note". Can be "guide", "report", "config", etc.
 74 |         context: Optional FastMCP context for performance caching.
 75 | 
 76 |     Returns:
 77 |         A markdown formatted summary of the semantic content, including:
 78 |         - Creation/update status with project name
 79 |         - File path and checksum
 80 |         - Observation counts by category
 81 |         - Relation counts (resolved/unresolved)
 82 |         - Tags if present
 83 |         - Session tracking metadata for project awareness
 84 | 
 85 |     Examples:
 86 |         # Assistant flow when project is unknown
 87 |         # 1. list_memory_projects() -> Ask user which project
 88 |         # 2. User: "Use my-research"
 89 |         # 3. write_note(...) and remember "my-research" for session
 90 | 
 91 |         # Create a simple note
 92 |         write_note(
 93 |             project="my-research",
 94 |             title="Meeting Notes",
 95 |             folder="meetings",
 96 |             content="# Weekly Standup\\n\\n- [decision] Use SQLite for storage #tech"
 97 |         )
 98 | 
 99 |         # Create a note with tags and entity type
100 |         write_note(
101 |             project="work-project",
102 |             title="API Design",
103 |             folder="specs",
104 |             content="# REST API Specification\\n\\n- implements [[Authentication]]",
105 |             tags=["api", "design"],
106 |             entity_type="guide"
107 |         )
108 | 
109 |         # Update existing note (same title/folder)
110 |         write_note(
111 |             project="my-research",
112 |             title="Meeting Notes",
113 |             folder="meetings",
114 |             content="# Weekly Standup\\n\\n- [decision] Use PostgreSQL instead #tech"
115 |         )
116 | 
117 |     Raises:
118 |         HTTPError: If project doesn't exist or is inaccessible
119 |         SecurityError: If folder path attempts path traversal
120 |     """
121 |     async with get_client() as client:
122 |         logger.info(
123 |             f"MCP tool call tool=write_note project={project} folder={folder}, title={title}, tags={tags}"
124 |         )
125 | 
126 |         # Get and validate the project (supports optional project parameter)
127 |         active_project = await get_active_project(client, project, context)
128 | 
129 |         # Normalize "/" to empty string for root folder (must happen before validation)
130 |         if folder == "/":
131 |             folder = ""
132 | 
133 |         # Validate folder path to prevent path traversal attacks
134 |         project_path = active_project.home
135 |         if folder and not validate_project_path(folder, project_path):
136 |             logger.warning(
137 |                 "Attempted path traversal attack blocked",
138 |                 folder=folder,
139 |                 project=active_project.name,
140 |             )
141 |             return f"# Error\n\nFolder path '{folder}' is not allowed - paths must stay within project boundaries"
142 | 
143 |         # Process tags using the helper function
144 |         tag_list = parse_tags(tags)
145 |         # Create the entity request
146 |         metadata = {"tags": tag_list} if tag_list else None
147 |         entity = Entity(
148 |             title=title,
149 |             folder=folder,
150 |             entity_type=entity_type,
151 |             content_type="text/markdown",
152 |             content=content,
153 |             entity_metadata=metadata,
154 |         )
155 |         project_url = active_project.permalink
156 | 
157 |         # Create or update via knowledge API
158 |         logger.debug(f"Creating entity via API permalink={entity.permalink}")
159 |         url = f"{project_url}/knowledge/entities/{entity.permalink}"
160 |         response = await call_put(client, url, json=entity.model_dump())
161 |         result = EntityResponse.model_validate(response.json())
162 | 
163 |         # Format semantic summary based on status code
164 |         action = "Created" if response.status_code == 201 else "Updated"
165 |         summary = [
166 |             f"# {action} note",
167 |             f"project: {active_project.name}",
168 |             f"file_path: {result.file_path}",
169 |             f"permalink: {result.permalink}",
170 |             f"checksum: {result.checksum[:8] if result.checksum else 'unknown'}",
171 |         ]
172 | 
173 |         # Count observations by category
174 |         categories = {}
175 |         if result.observations:
176 |             for obs in result.observations:
177 |                 categories[obs.category] = categories.get(obs.category, 0) + 1
178 | 
179 |             summary.append("\n## Observations")
180 |             for category, count in sorted(categories.items()):
181 |                 summary.append(f"- {category}: {count}")
182 | 
183 |         # Count resolved/unresolved relations
184 |         unresolved = 0
185 |         resolved = 0
186 |         if result.relations:
187 |             unresolved = sum(1 for r in result.relations if not r.to_id)
188 |             resolved = len(result.relations) - unresolved
189 | 
190 |             summary.append("\n## Relations")
191 |             summary.append(f"- Resolved: {resolved}")
192 |             if unresolved:
193 |                 summary.append(f"- Unresolved: {unresolved}")
194 |                 summary.append(
195 |                     "\nNote: Unresolved relations point to entities that don't exist yet."
196 |                 )
197 |                 summary.append(
198 |                     "They will be automatically resolved when target entities are created or during sync operations."
199 |                 )
200 | 
201 |         if tag_list:
202 |             summary.append(f"\n## Tags\n- {', '.join(tag_list)}")
203 | 
204 |         # Log the response with structured data
205 |         logger.info(
206 |             f"MCP tool response: tool=write_note project={active_project.name} action={action} permalink={result.permalink} observations_count={len(result.observations)} relations_count={len(result.relations)} resolved_relations={resolved} unresolved_relations={unresolved} status_code={response.status_code}"
207 |         )
208 |         result = "\n".join(summary)
209 |         return add_project_metadata(result, active_project.name)
210 | 
```
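
The tool's `tags` parameter accepts a list, a comma-separated string, or `None`. The helper below is a hypothetical re-implementation of that contract for illustration only; the real `basic_memory.utils.parse_tags` may differ in details:

```python
from typing import List, Optional, Union


def parse_tags_sketch(tags: Union[List[str], str, None]) -> Optional[List[str]]:
    """Accept a list, a comma-separated string, or None; return a clean list."""
    if tags is None:
        return None
    if isinstance(tags, str):
        tags = tags.split(",")
    cleaned = [t.strip() for t in tags if t.strip()]
    return cleaned or None


assert parse_tags_sketch("api, design") == ["api", "design"]
assert parse_tags_sketch(["api", "design"]) == ["api", "design"]
assert parse_tags_sketch(None) is None
```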

--------------------------------------------------------------------------------
/tests/markdown/test_entity_parser.py:
--------------------------------------------------------------------------------

```python
  1 | """Tests for entity markdown parsing."""
  2 | 
  3 | from datetime import datetime
  4 | from pathlib import Path
  5 | from textwrap import dedent
  6 | 
  7 | import pytest
  8 | 
  9 | from basic_memory.markdown.schemas import EntityMarkdown, EntityFrontmatter, Relation
 10 | from basic_memory.markdown.entity_parser import parse
 11 | 
 12 | 
 13 | @pytest.fixture
 14 | def valid_entity_content():
 15 |     """A complete, valid entity file with all features."""
 16 |     return dedent("""
 17 |         ---
 18 |         title: Auth Service
 19 |         type: component
 20 |         permalink: auth_service
 21 |         created: 2024-12-21T14:00:00Z
 22 |         modified: 2024-12-21T14:00:00Z
 23 |         tags: authentication, security, core
 24 |         ---
 25 | 
 26 |         Core authentication service that handles user authentication.
 27 |         
 28 |         some [[Random Link]]
 29 |         another [[Random Link with Title|Titled Link]]
 30 | 
 31 |         ## Observations
 32 |         - [design] Stateless authentication #security #architecture (JWT based)
 33 |         - [feature] Mobile client support #mobile #oauth (Required for App Store)
 34 |         - [tech] Caching layer #performance (Redis implementation)
 35 | 
 36 |         ## Relations
 37 |         - implements [[OAuth Implementation]] (Core auth flows)
 38 |         - uses [[Redis Cache]] (Token caching)
 39 |         - specified_by [[Auth API Spec]] (OpenAPI spec)
 40 |         """)
 41 | 
 42 | 
 43 | @pytest.mark.asyncio
 44 | async def test_parse_complete_file(project_config, entity_parser, valid_entity_content):
 45 |     """Test parsing a complete entity file with all features."""
 46 |     test_file = project_config.home / "test_entity.md"
 47 |     test_file.write_text(valid_entity_content)
 48 | 
 49 |     entity = await entity_parser.parse_file(test_file)
 50 | 
 51 |     # Verify entity structure
 52 |     assert isinstance(entity, EntityMarkdown)
 53 |     assert isinstance(entity.frontmatter, EntityFrontmatter)
 54 |     assert isinstance(entity.content, str)
 55 | 
 56 |     # Check frontmatter
 57 |     assert entity.frontmatter.title == "Auth Service"
 58 |     assert entity.frontmatter.type == "component"
 59 |     assert entity.frontmatter.permalink == "auth_service"
 60 |     assert set(entity.frontmatter.tags) == {"authentication", "security", "core"}
 61 | 
 62 |     # Check content
 63 |     assert "Core authentication service that handles user authentication." in entity.content
 64 | 
 65 |     # Check observations
 66 |     assert len(entity.observations) == 3
 67 |     obs = entity.observations[0]
 68 |     assert obs.category == "design"
 69 |     assert obs.content == "Stateless authentication #security #architecture"
 70 |     assert set(obs.tags or []) == {"security", "architecture"}
 71 |     assert obs.context == "JWT based"
 72 | 
 73 |     # Check relations
 74 |     assert len(entity.relations) == 5
 75 |     assert (
 76 |         Relation(type="implements", target="OAuth Implementation", context="Core auth flows")
 77 |         in entity.relations
 78 |     ), "missing [[OAuth Implementation]]"
 79 |     assert (
 80 |         Relation(type="uses", target="Redis Cache", context="Token caching") in entity.relations
 81 |     ), "missing [[Redis Cache]]"
 82 |     assert (
 83 |         Relation(type="specified_by", target="Auth API Spec", context="OpenAPI spec")
 84 |         in entity.relations
 85 |     ), "missing [[Auth API Spec]]"
 86 | 
 87 |     # inline links in content
 88 |     assert Relation(type="links to", target="Random Link", context=None) in entity.relations, (
 89 |         "missing [[Random Link]]"
 90 |     )
 91 |     assert (
 92 |         Relation(type="links to", target="Random Link with Title|Titled Link", context=None)
 93 |         in entity.relations
 94 |     ), "missing [[Random Link with Title|Titled Link]]"
 95 | 
 96 | 
 97 | @pytest.mark.asyncio
 98 | async def test_parse_minimal_file(project_config, entity_parser):
 99 |     """Test parsing a minimal valid entity file."""
100 |     content = dedent("""
101 |         ---
102 |         type: component
103 |         tags: []
104 |         ---
105 | 
106 |         # Minimal Entity
107 | 
108 |         ## Observations
109 |         - [note] Basic observation #test
110 | 
111 |         ## Relations
112 |         - references [[Other Entity]]
113 |         """)
114 | 
115 |     test_file = project_config.home / "minimal.md"
116 |     test_file.write_text(content)
117 | 
118 |     entity = await entity_parser.parse_file(test_file)
119 | 
120 |     assert entity.frontmatter.type == "component"
121 |     assert entity.frontmatter.permalink is None
122 |     assert len(entity.observations) == 1
123 |     assert len(entity.relations) == 1
124 | 
125 |     assert entity.created is not None
126 |     assert entity.modified is not None
127 | 
128 | 
129 | @pytest.mark.asyncio
130 | async def test_error_handling(project_config, entity_parser):
131 |     """Test error handling."""
132 | 
133 |     # Missing file
134 |     with pytest.raises(FileNotFoundError):
135 |         await entity_parser.parse_file(Path("nonexistent.md"))
136 | 
137 |     # Invalid file encoding
138 |     test_file = project_config.home / "binary.md"
139 |     with open(test_file, "wb") as f:
140 |         f.write(b"\x80\x81")  # Invalid UTF-8
141 |     with pytest.raises(UnicodeDecodeError):
142 |         await entity_parser.parse_file(test_file)
143 | 
144 | 
145 | @pytest.mark.asyncio
146 | async def test_parse_file_without_section_headers(project_config, entity_parser):
147 |     """Test parsing a minimal valid entity file."""
148 |     content = dedent("""
149 |         ---
150 |         type: component
151 |         permalink: minimal_entity
152 |         status: draft
153 |         tags: []
154 |         ---
155 | 
156 |         # Minimal Entity
157 | 
158 |         some text
159 |         some [[Random Link]]
160 | 
161 |         - [note] Basic observation #test
162 | 
163 |         - references [[Other Entity]]
164 |         """)
165 | 
166 |     test_file = project_config.home / "minimal.md"
167 |     test_file.write_text(content)
168 | 
169 |     entity = await entity_parser.parse_file(test_file)
170 | 
171 |     assert entity.frontmatter.type == "component"
172 |     assert entity.frontmatter.permalink == "minimal_entity"
173 | 
174 |     assert "some text\nsome [[Random Link]]" in entity.content
175 | 
176 |     assert len(entity.observations) == 1
177 |     assert entity.observations[0].category == "note"
178 |     assert entity.observations[0].content == "Basic observation #test"
179 |     assert entity.observations[0].tags == ["test"]
180 | 
181 |     assert len(entity.relations) == 2
182 |     assert entity.relations[0].type == "links to"
183 |     assert entity.relations[0].target == "Random Link"
184 | 
185 |     assert entity.relations[1].type == "references"
186 |     assert entity.relations[1].target == "Other Entity"
187 | 
188 | 
189 | def test_parse_date_formats(entity_parser):
190 |     """Test date parsing functionality."""
191 |     # Valid formats
192 |     assert entity_parser.parse_date("2024-01-15") is not None
193 |     assert entity_parser.parse_date("Jan 15, 2024") is not None
194 |     assert entity_parser.parse_date("2024-01-15 10:00 AM") is not None
195 |     assert entity_parser.parse_date(datetime.now()) is not None
196 | 
197 |     # Invalid formats
198 |     assert entity_parser.parse_date(None) is None
199 |     assert entity_parser.parse_date(123) is None  # Non-string/datetime
200 |     assert entity_parser.parse_date("not a date") is None  # Unparseable string
201 |     assert entity_parser.parse_date("") is None  # Empty string
202 | 
203 |     # Test dateparser error handling
204 |     assert entity_parser.parse_date("25:00:00") is None  # Invalid time
205 | 
206 | 
207 | def test_parse_empty_content():
208 |     """Test parsing empty or minimal content."""
209 |     result = parse("")
210 |     assert result.content == ""
211 |     assert len(result.observations) == 0
212 |     assert len(result.relations) == 0
213 | 
214 |     result = parse("# Just a title")
215 |     assert result.content == "# Just a title"
216 |     assert len(result.observations) == 0
217 |     assert len(result.relations) == 0
218 | 
219 | 
220 | @pytest.mark.asyncio
221 | async def test_parse_file_with_absolute_path(project_config, entity_parser):
222 |     """Test parsing a file with an absolute path."""
223 |     content = dedent("""
224 |         ---
225 |         type: component
226 |         permalink: absolute_path_test
227 |         ---
228 | 
229 |         # Absolute Path Test
230 |         
231 |         A file with an absolute path.
232 |         """)
233 | 
234 |     # Create a test file in the project directory
235 |     test_file = project_config.home / "absolute_path_test.md"
236 |     test_file.write_text(content)
237 | 
238 |     # Get the absolute path to the test file
239 |     absolute_path = test_file.resolve()
240 | 
241 |     # Parse the file using the absolute path
242 |     entity = await entity_parser.parse_file(absolute_path)
243 | 
244 |     # Verify the file was parsed correctly
245 |     assert entity.frontmatter.permalink == "absolute_path_test"
246 |     assert "Absolute Path Test" in entity.content
247 |     assert entity.created is not None
248 |     assert entity.modified is not None
249 | 
250 | 
251 | # @pytest.mark.asyncio
252 | # async def test_parse_file_invalid_yaml(test_config, entity_parser):
253 | #     """Test parsing file with invalid YAML frontmatter."""
254 | #     content = dedent("""
255 | #         ---
256 | #         invalid: [yaml: ]syntax]
257 | #         ---
258 | #
259 | #         # Invalid YAML Frontmatter
260 | #         """)
261 | #
262 | #     test_file = test_config.home / "invalid_yaml.md"
263 | #     test_file.write_text(content)
264 | #
265 | #     # Should handle invalid YAML gracefully
266 | #     entity = await entity_parser.parse_file(test_file)
267 | #     assert entity.frontmatter.title == "invalid_yaml.md"
268 | #     assert entity.frontmatter.type == "note"
269 | #     assert entity.content.strip() == "# Invalid YAML Frontmatter"
270 | 
```
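
For reference, the observation and relation line formats these tests exercise can be captured with a simplified grammar. This is a hypothetical reduction; the real parser in `basic_memory.markdown` handles more cases (inline wikilinks, aliased targets, edge cases):

```python
import re

# Simplified patterns for `- [category] text #tags (context)` observations
# and `- relation_type [[Target]] (context)` relations.
OBSERVATION = re.compile(
    r"^- \[(?P<category>[^\]]+)\] (?P<content>.+?)(?: \((?P<context>[^)]*)\))?$"
)
RELATION = re.compile(
    r"^- (?P<type>\w+) \[\[(?P<target>[^\]]+)\]\](?: \((?P<context>[^)]*)\))?$"
)

m = OBSERVATION.match(
    "- [design] Stateless authentication #security #architecture (JWT based)"
)
assert m and m["category"] == "design" and m["context"] == "JWT based"
assert re.findall(r"#(\w+)", m["content"]) == ["security", "architecture"]

r = RELATION.match("- implements [[OAuth Implementation]] (Core auth flows)")
assert r and r["type"] == "implements" and r["target"] == "OAuth Implementation"
```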

--------------------------------------------------------------------------------
/tests/markdown/test_entity_parser_error_handling.py:
--------------------------------------------------------------------------------

```python
  1 | """Tests for entity parser error handling (issues #184 and #185)."""
  2 | 
  3 | import pytest
  4 | from textwrap import dedent
  5 | 
  6 | from basic_memory.markdown.entity_parser import EntityParser
  7 | 
  8 | 
  9 | @pytest.mark.asyncio
 10 | async def test_parse_file_with_malformed_yaml_frontmatter(tmp_path):
 11 |     """Test that files with malformed YAML frontmatter are parsed gracefully (issue #185).
 12 | 
 13 |     This reproduces the production error where block sequence entries cause YAML parsing to fail.
 14 |     The parser should handle the error gracefully and treat the file as plain markdown.
 15 |     """
 16 |     # Create a file with malformed YAML frontmatter
 17 |     test_file = tmp_path / "malformed.md"
 18 |     content = dedent(
 19 |         """
 20 |         ---
 21 |         title: Group Chat Texts
 22 |         tags:
 23 |           - family    # Line 5, column 7 - this syntax can fail in certain YAML contexts
 24 |           - messages
 25 |         type: note
 26 |         ---
 27 |         # Group Chat Texts
 28 | 
 29 |         Content here
 30 |         """
 31 |     ).strip()
 32 |     test_file.write_text(content)
 33 | 
 34 |     # Parse the file - should not raise YAMLError
 35 |     parser = EntityParser(tmp_path)
 36 |     result = await parser.parse_file(test_file)
 37 | 
 38 |     # Should successfully parse, treating as plain markdown if YAML fails
 39 |     assert result is not None
 40 |     # If YAML parsing succeeded, verify expected values
 41 |     # If it failed, it should have defaults
 42 |     assert result.frontmatter.title is not None
 43 |     assert result.frontmatter.type is not None
 44 | 
 45 | 
 46 | @pytest.mark.asyncio
 47 | async def test_parse_file_with_completely_invalid_yaml(tmp_path):
 48 |     """Test that files with completely invalid YAML are handled gracefully (issue #185).
 49 | 
 50 |     This tests the extreme case where YAML parsing completely fails.
 51 |     """
 52 |     # Create a file with completely broken YAML
 53 |     test_file = tmp_path / "broken_yaml.md"
 54 |     content = dedent(
 55 |         """
 56 |         ---
 57 |         title: Invalid YAML
 58 |         this is: [not, valid, yaml
 59 |         missing: closing bracket
 60 |         ---
 61 |         # Content
 62 | 
 63 |         This file has broken YAML frontmatter.
 64 |         """
 65 |     ).strip()
 66 |     test_file.write_text(content)
 67 | 
 68 |     # Parse the file - should not raise exception
 69 |     parser = EntityParser(tmp_path)
 70 |     result = await parser.parse_file(test_file)
 71 | 
 72 |     # Should successfully parse with defaults
 73 |     assert result is not None
 74 |     assert result.frontmatter.title == "broken_yaml"  # Default from filename
 75 |     assert result.frontmatter.type == "note"  # Default type
 76 |     # Content should include the whole file since frontmatter parsing failed
 77 |     assert "# Content" in result.content
 78 | 
 79 | 
 80 | @pytest.mark.asyncio
 81 | async def test_parse_file_without_entity_type(tmp_path):
 82 |     """Test that files without entity_type get a default value (issue #184).
 83 | 
 84 |     This reproduces the NOT NULL constraint error where entity_type was missing.
 85 |     """
 86 |     # Create a file without entity_type in frontmatter
 87 |     test_file = tmp_path / "no_type.md"
 88 |     content = dedent(
 89 |         """
 90 |         ---
 91 |         title: The Invisible Weight of Mental Habits
 92 |         ---
 93 |         # The Invisible Weight of Mental Habits
 94 | 
 95 |         An article about mental habits.
 96 |         """
 97 |     ).strip()
 98 |     test_file.write_text(content)
 99 | 
100 |     # Parse the file
101 |     parser = EntityParser(tmp_path)
102 |     result = await parser.parse_file(test_file)
103 | 
104 |     # Should have default entity_type
105 |     assert result is not None
106 |     assert result.frontmatter.type == "note"  # Default type applied
107 |     assert result.frontmatter.title == "The Invisible Weight of Mental Habits"
108 | 
109 | 
110 | @pytest.mark.asyncio
111 | async def test_parse_file_with_empty_frontmatter(tmp_path):
112 |     """Test that files with empty frontmatter get defaults (issue #184)."""
113 |     # Create a file with empty frontmatter
114 |     test_file = tmp_path / "empty_frontmatter.md"
115 |     content = dedent(
116 |         """
117 |         ---
118 |         ---
119 |         # Content
120 | 
121 |         This file has empty frontmatter.
122 |         """
123 |     ).strip()
124 |     test_file.write_text(content)
125 | 
126 |     # Parse the file
127 |     parser = EntityParser(tmp_path)
128 |     result = await parser.parse_file(test_file)
129 | 
130 |     # Should have defaults
131 |     assert result is not None
132 |     assert result.frontmatter.type == "note"  # Default type
133 |     assert result.frontmatter.title == "empty_frontmatter"  # Default from filename
134 | 
135 | 
136 | @pytest.mark.asyncio
137 | async def test_parse_file_without_frontmatter(tmp_path):
138 |     """Test that files without any frontmatter get defaults (issue #184)."""
139 |     # Create a file with no frontmatter at all
140 |     test_file = tmp_path / "no_frontmatter.md"
141 |     content = dedent(
142 |         """
143 |         # Just Content
144 | 
145 |         This file has no frontmatter at all.
146 |         """
147 |     ).strip()
148 |     test_file.write_text(content)
149 | 
150 |     # Parse the file
151 |     parser = EntityParser(tmp_path)
152 |     result = await parser.parse_file(test_file)
153 | 
154 |     # Should have defaults
155 |     assert result is not None
156 |     assert result.frontmatter.type == "note"  # Default type
157 |     assert result.frontmatter.title == "no_frontmatter"  # Default from filename
158 | 
159 | 
160 | @pytest.mark.asyncio
161 | async def test_parse_file_with_null_entity_type(tmp_path):
162 |     """Test that files with explicit null entity_type get default (issue #184)."""
163 |     # Create a file with null/None entity_type
164 |     test_file = tmp_path / "null_type.md"
165 |     content = dedent(
166 |         """
167 |         ---
168 |         title: Test File
169 |         type: null
170 |         ---
171 |         # Content
172 |         """
173 |     ).strip()
174 |     test_file.write_text(content)
175 | 
176 |     # Parse the file
177 |     parser = EntityParser(tmp_path)
178 |     result = await parser.parse_file(test_file)
179 | 
180 |     # Should have default type even when explicitly set to null
181 |     assert result is not None
182 |     assert result.frontmatter.type == "note"  # Default type applied
183 |     assert result.frontmatter.title == "Test File"
184 | 
185 | 
186 | @pytest.mark.asyncio
187 | async def test_parse_file_with_null_title(tmp_path):
188 |     """Test that files with explicit null title get default from filename (issue #387)."""
189 |     # Create a file with null title
190 |     test_file = tmp_path / "null_title.md"
191 |     content = dedent(
192 |         """
193 |         ---
194 |         title: null
195 |         type: note
196 |         ---
197 |         # Content
198 |         """
199 |     ).strip()
200 |     test_file.write_text(content)
201 | 
202 |     # Parse the file
203 |     parser = EntityParser(tmp_path)
204 |     result = await parser.parse_file(test_file)
205 | 
206 |     # Should have default title from filename even when explicitly set to null
207 |     assert result is not None
208 |     assert result.frontmatter.title == "null_title"  # Default from filename
209 |     assert result.frontmatter.type == "note"
210 | 
211 | 
212 | @pytest.mark.asyncio
213 | async def test_parse_file_with_empty_title(tmp_path):
214 |     """Test that files with empty title get default from filename (issue #387)."""
215 |     # Create a file with empty title
216 |     test_file = tmp_path / "empty_title.md"
217 |     content = dedent(
218 |         """
219 |         ---
220 |         title:
221 |         type: note
222 |         ---
223 |         # Content
224 |         """
225 |     ).strip()
226 |     test_file.write_text(content)
227 | 
228 |     # Parse the file
229 |     parser = EntityParser(tmp_path)
230 |     result = await parser.parse_file(test_file)
231 | 
232 |     # Should have default title from filename when title is empty
233 |     assert result is not None
234 |     assert result.frontmatter.title == "empty_title"  # Default from filename
235 |     assert result.frontmatter.type == "note"
236 | 
237 | 
238 | @pytest.mark.asyncio
239 | async def test_parse_file_with_string_none_title(tmp_path):
240 |     """Test that files with string 'None' title get default from filename (issue #387)."""
241 |     # Create a file with string "None" as title (common in templates)
242 |     test_file = tmp_path / "template_file.md"
243 |     content = dedent(
244 |         """
245 |         ---
246 |         title: "None"
247 |         type: note
248 |         ---
249 |         # Content
250 |         """
251 |     ).strip()
252 |     test_file.write_text(content)
253 | 
254 |     # Parse the file
255 |     parser = EntityParser(tmp_path)
256 |     result = await parser.parse_file(test_file)
257 | 
258 |     # Should have default title from filename when title is string "None"
259 |     assert result is not None
260 |     assert result.frontmatter.title == "template_file"  # Default from filename
261 |     assert result.frontmatter.type == "note"
262 | 
263 | 
264 | @pytest.mark.asyncio
265 | async def test_parse_valid_file_still_works(tmp_path):
266 |     """Test that valid files with proper frontmatter still parse correctly."""
267 |     # Create a valid file
268 |     test_file = tmp_path / "valid.md"
269 |     content = dedent(
270 |         """
271 |         ---
272 |         title: Valid File
273 |         type: knowledge
274 |         tags:
275 |           - test
276 |           - valid
277 |         ---
278 |         # Valid File
279 | 
280 |         This is a properly formatted file.
281 |         """
282 |     ).strip()
283 |     test_file.write_text(content)
284 | 
285 |     # Parse the file
286 |     parser = EntityParser(tmp_path)
287 |     result = await parser.parse_file(test_file)
288 | 
289 |     # Should parse correctly with all values
290 |     assert result is not None
291 |     assert result.frontmatter.title == "Valid File"
292 |     assert result.frontmatter.type == "knowledge"
293 |     assert result.frontmatter.tags == ["test", "valid"]
294 | 
```
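
The tests above pin down one consistent fallback policy: a missing, null, empty, or literal-"None" title falls back to the filename stem, and a missing or null type falls back to "note". A minimal sketch of that policy (the helper name is hypothetical; the real logic lives inside `EntityParser` and the frontmatter schema):

```python
from pathlib import Path
from typing import Any


def apply_frontmatter_defaults(metadata: dict[str, Any], file_path: Path) -> dict[str, Any]:
    """Illustrative sketch of the defaults the tests above assert."""
    result = dict(metadata)

    title = result.get("title")
    # None, empty/whitespace, or the literal string "None" -> filename stem
    if title is None or not str(title).strip() or str(title) == "None":
        result["title"] = file_path.stem

    # Missing or explicitly-null type -> "note"
    if not result.get("type"):
        result["type"] = "note"

    return result


assert apply_frontmatter_defaults({}, Path("no_frontmatter.md"))["title"] == "no_frontmatter"
assert apply_frontmatter_defaults({"type": None}, Path("null_type.md"))["type"] == "note"
```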

--------------------------------------------------------------------------------
/v15-docs/bug-fixes.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Bug Fixes and Improvements
  2 | 
  3 | **Status**: Bug Fixes
  4 | **Version**: v0.15.0
  5 | **Impact**: Stability, reliability, platform compatibility
  6 | 
  7 | ## Overview
  8 | 
  9 | v0.15.0 includes 13+ bug fixes addressing entity conflicts, URL handling, file operations, and platform compatibility. These fixes improve stability and eliminate edge cases that could cause errors.
 10 | 
 11 | ## Key Fixes
 12 | 
 13 | ### 1. Entity Upsert Conflict Resolution (#328)
 14 | 
 15 | **Problem:**
 16 | Database-level conflicts when upserting entities with the same title/folder caused crashes.
 17 | 
 18 | **Fix:**
 19 | Simplified entity upsert to use database-level conflict resolution with `ON CONFLICT` clause.
 20 | 
 21 | **Before:**
 22 | ```python
 23 | # Manual conflict checking (error-prone)
 24 | existing = await get_entity_by_title(title, folder)
 25 | if existing:
 26 |     await update_entity(existing.id, data)
 27 | else:
 28 |     await insert_entity(data)
 29 | # → Could fail if concurrent insert
 30 | ```
 31 | 
 32 | **After:**
 33 | ```python
 34 | # Database handles conflict
 35 | await db.execute("""
 36 |     INSERT INTO entities (title, folder, content)
 37 |     VALUES (?, ?, ?)
 38 |     ON CONFLICT (title, folder) DO UPDATE SET content = excluded.content
 39 | """)
 40 | # → Always works, even with concurrent access
 41 | ```
 42 | 
 43 | **Benefit:** Eliminates race conditions, more reliable writes
 44 | 
 45 | ### 2. memory:// URL Underscore Normalization (#329)
 46 | 
 47 | **Problem:**
 48 | Underscores in memory:// URLs weren't normalized to hyphens, causing lookups to fail.
 49 | 
 50 | **Fix:**
 51 | Normalize underscores to hyphens when resolving memory:// URLs.
 52 | 
 53 | **Before:**
 54 | ```python
 55 | # URL with underscores
 56 | url = "memory://my_note"
 57 | entity = await resolve_url(url)
 58 | # → Not found! (permalink is "my-note")
 59 | ```
 60 | 
 61 | **After:**
 62 | ```python
 63 | # Automatic normalization
 64 | url = "memory://my_note"
 65 | entity = await resolve_url(url)
 66 | # → Found! (my_note → my-note)
 67 | ```
 68 | 
 69 | **Examples:**
 70 | - `memory://my_note` → finds entity with permalink `my-note`
 71 | - `memory://user_guide` → finds entity with permalink `user-guide`
 72 | - `memory://api_docs` → finds entity with permalink `api-docs`
 73 | 
 74 | **Benefit:** More forgiving URL matching, fewer lookup failures
 75 | 
 76 | ### 3. .gitignore File Filtering (#287, #285)
 77 | 
 78 | **Problem:**
 79 | Sync process didn't respect .gitignore patterns, indexing sensitive files and build artifacts.
 80 | 
 81 | **Fix:**
 82 | Integrated .gitignore support - files matching patterns are automatically skipped during sync.
 83 | 
 84 | **Before:**
 85 | ```bash
 86 | bm sync
 87 | # → Indexed .env files
 88 | # → Indexed node_modules/
 89 | # → Indexed build artifacts
 90 | ```
 91 | 
 92 | **After:**
 93 | ```bash
 94 | # .gitignore
 95 | .env
 96 | node_modules/
 97 | dist/
 98 | 
 99 | bm sync
100 | # → Skipped .env (gitignored)
101 | # → Skipped node_modules/ (gitignored)
102 | # → Skipped dist/ (gitignored)
103 | ```
104 | 
105 | **Benefit:** Better security, cleaner knowledge base, faster sync
106 | 
107 | **See:** `gitignore-integration.md` for full details
108 | 
109 | ### 4. move_note File Extension Handling (#281)
110 | 
111 | **Problem:**
112 | `move_note` failed when the destination path omitted the `.md` extension, so callers had to match the stored filename exactly.
113 | 
114 | **Fix:**
115 | Automatically handle file extensions - works with or without `.md`.
116 | 
117 | **Before:**
118 | ```python
119 | # Had to match exactly
120 | await move_note("My Note", "new-folder/my-note.md")  # ✓
121 | await move_note("My Note", "new-folder/my-note")     # ✗ Failed
122 | ```
123 | 
124 | **After:**
125 | ```python
126 | # Both work
127 | await move_note("My Note", "new-folder/my-note.md")  # ✓ Works
128 | await move_note("My Note", "new-folder/my-note")     # ✓ Works (adds .md)
129 | ```
130 | 
131 | **Automatic handling:**
132 | - Input without `.md` → adds `.md`
133 | - Input with `.md` → uses as-is
134 | - Always creates valid markdown file
135 | 
136 | **Benefit:** More forgiving API, fewer errors
137 | 
138 | ### 5. .env File Loading Removed (#330)
139 | 
140 | **Problem:**
141 | Automatic .env file loading created a security vulnerability: untrusted .env files could be loaded from the current or parent directories.
142 | 
143 | **Fix:**
144 | Removed automatic .env loading. Environment variables must be set explicitly.
145 | 
146 | **Impact:** Breaking change for users relying on .env files
147 | 
148 | **Migration:**
149 | ```bash
150 | # Before: Used .env file
151 | # .env
152 | BASIC_MEMORY_LOG_LEVEL=DEBUG
153 | 
154 | # After: Use explicit export
155 | export BASIC_MEMORY_LOG_LEVEL=DEBUG
156 | 
157 | # Or use direnv
158 | # .envrc (git-ignored)
159 | export BASIC_MEMORY_LOG_LEVEL=DEBUG
160 | ```
161 | 
162 | **Benefit:** Better security, explicit configuration
163 | 
164 | **See:** `env-file-removal.md` for migration guide
165 | 
166 | ### 6. Python 3.13 Compatibility
167 | 
168 | **Problem:**
169 | The codebase was not tested against Python 3.13, so compatibility issues could go undetected.
170 | 
171 | **Fix:**
172 | - Added Python 3.13 to CI test matrix
173 | - Fixed deprecation warnings
174 | - Verified all dependencies compatible
175 | - Updated type hints for 3.13
176 | 
177 | **Before:**
178 | ```yaml
179 | # .github/workflows/test.yml
180 | python-version: ["3.10", "3.11", "3.12"]
181 | ```
182 | 
183 | **After:**
184 | ```yaml
185 | # .github/workflows/test.yml
186 | python-version: ["3.10", "3.11", "3.12", "3.13"]
187 | ```
188 | 
189 | **Benefit:** Full Python 3.13 support, future-proof
190 | 
191 | ## Additional Fixes
192 | 
193 | ### Minimum Timeframe Enforcement (#318)
194 | 
195 | **Problem:**
196 | `recent_activity` with very short timeframes caused timezone issues.
197 | 
198 | **Fix:**
199 | Enforce minimum 1-day timeframe to handle timezone edge cases.
200 | 
201 | ```python
202 | # Before: Could use any timeframe
203 | await recent_activity(timeframe="1h")  # Timezone issues
204 | 
205 | # After: Minimum 1 day
206 | await recent_activity(timeframe="1h")  # → Auto-adjusted to "1d"
207 | ```
208 | 
209 | ### Permalink Collision Prevention
210 | 
211 | **Problem:**
212 | Strict link resolution could create duplicate permalinks.
213 | 
214 | **Fix:**
215 | Enhanced permalink uniqueness checking to prevent collisions.
216 | 
217 | ### DateTime JSON Schema (#312)
218 | 
219 | **Problem:**
220 | MCP validation failed on DateTime fields - missing proper JSON schema format.
221 | 
222 | **Fix:**
223 | Added proper `format: "date-time"` annotations for MCP compatibility.
224 | 
225 | ```python
226 | # Before: No format
227 | created_at: datetime
228 | 
229 | # After: With format
230 | created_at: datetime = Field(json_schema_extra={"format": "date-time"})
231 | ```
232 | 
233 | ## Testing Coverage
234 | 
235 | ### Automated Tests
236 | 
237 | All fixes include comprehensive tests:
238 | 
239 | ```bash
240 | # Entity upsert conflict
241 | tests/services/test_entity_upsert.py
242 | 
243 | # URL normalization
244 | tests/mcp/test_build_context_validation.py
245 | 
246 | # File extension handling
247 | tests/mcp/test_tool_move_note.py
248 | 
249 | # gitignore integration
250 | tests/sync/test_gitignore.py
251 | ```
252 | 
253 | ### Manual Testing Checklist
254 | 
255 | - [x] Entity upsert with concurrent access
256 | - [x] memory:// URLs with underscores
257 | - [x] .gitignore file filtering
258 | - [x] move_note with/without .md extension
259 | - [x] .env file not auto-loaded
260 | - [x] Python 3.13 compatibility
261 | 
262 | ## Migration Guide
263 | 
264 | ### If You're Affected by These Bugs
265 | 
266 | **Entity Conflicts:**
267 | - No action needed - automatically fixed
268 | 
269 | **memory:// URLs:**
270 | - No action needed - URLs now more forgiving
271 | - Previously broken URLs should work now
272 | 
273 | **.gitignore Integration:**
274 | - Create `.gitignore` if you don't have one
275 | - Add patterns for files to skip
276 | 
277 | **move_note:**
278 | - No action needed - both formats now work
279 | - Can simplify code that manually added `.md`
280 | 
281 | **.env Files:**
282 | - See `env-file-removal.md` for full migration
283 | - Use explicit environment variables or direnv
284 | 
285 | **Python 3.13:**
286 | - Upgrade if desired: `pip install --upgrade basic-memory`
287 | - Or stay on 3.10-3.12 (still supported)
288 | 
289 | ## Verification
290 | 
291 | ### Check Entity Upserts Work
292 | 
293 | ```python
294 | # Should not conflict
295 | await write_note("Test", "Content", "folder")
296 | await write_note("Test", "Updated", "folder")  # Updates, not errors
297 | ```
298 | 
299 | ### Check URL Normalization
300 | 
301 | ```python
302 | # Both should work
303 | context1 = await build_context("memory://my_note")
304 | context2 = await build_context("memory://my-note")
305 | # Both resolve to same entity
306 | ```
307 | 
308 | ### Check .gitignore Respected
309 | 
310 | ```bash
311 | echo ".env" >> .gitignore
312 | echo "SECRET=test" > .env
313 | bm sync
314 | # .env should be skipped
315 | ```
316 | 
317 | ### Check move_note Extension
318 | 
319 | ```python
320 | # Both work
321 | await move_note("Note", "folder/note.md")   # ✓
322 | await move_note("Note", "folder/note")      # ✓
323 | ```
324 | 
325 | ### Check .env Not Loaded
326 | 
327 | ```bash
328 | echo "BASIC_MEMORY_LOG_LEVEL=DEBUG" > .env
329 | bm sync
330 | # LOG_LEVEL not set (not auto-loaded)
331 | 
332 | export BASIC_MEMORY_LOG_LEVEL=DEBUG
333 | bm sync
334 | # LOG_LEVEL now set (explicit)
335 | ```
336 | 
337 | ### Check Python 3.13
338 | 
339 | ```bash
340 | python3.13 --version
341 | python3.13 -m pip install basic-memory
342 | python3.13 -m basic_memory --version
343 | ```
344 | 
345 | ## Known Issues (Fixed)
346 | 
347 | ### Previously Reported, Now Fixed
348 | 
349 | 1. ✅ Entity upsert conflicts (#328)
350 | 2. ✅ memory:// URL underscore handling (#329)
351 | 3. ✅ .gitignore not respected (#287, #285)
352 | 4. ✅ move_note extension issues (#281)
353 | 5. ✅ .env security vulnerability (#330)
354 | 6. ✅ Minimum timeframe issues (#318)
355 | 7. ✅ DateTime JSON schema (#312)
356 | 8. ✅ Permalink collisions
357 | 9. ✅ Python 3.13 compatibility
358 | 
359 | ## Upgrade Notes
360 | 
361 | ### From v0.14.x
362 | 
363 | All bug fixes apply automatically:
364 | 
365 | ```bash
366 | # Upgrade
367 | pip install --upgrade basic-memory
368 | 
369 | # Restart MCP server
370 | # Bug fixes active immediately
371 | ```
372 | 
373 | ### Breaking Changes
374 | 
375 | Only one breaking change:
376 | 
377 | - ✅ .env file auto-loading removed (#330)
378 |   - See `env-file-removal.md` for migration
379 | 
380 | All other fixes are backward compatible.
381 | 
382 | ## Reporting New Issues
383 | 
384 | If you encounter issues:
385 | 
386 | 1. Check this list to see if already fixed
387 | 2. Verify you're on v0.15.0+: `bm --version`
388 | 3. Report at: https://github.com/basicmachines-co/basic-memory/issues
389 | 
390 | ## See Also
391 | 
392 | - `gitignore-integration.md` - .gitignore support details
393 | - `env-file-removal.md` - .env migration guide
394 | - GitHub issues for each fix
395 | - v0.15.0 changelog
396 | 
```
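
Fixes 2 and 4 above reduce to small, self-contained normalization steps. A rough sketch of both, using hypothetical helper names rather than the actual basic-memory internals:

```python
def normalize_url_path(path: str) -> str:
    """Fix #329: underscores in memory:// URL paths normalize to hyphens."""
    return path.replace("_", "-")


def ensure_md_extension(destination: str) -> str:
    """Fix #281: move_note destinations work with or without .md."""
    return destination if destination.endswith(".md") else destination + ".md"


assert normalize_url_path("my_note") == "my-note"
assert ensure_md_extension("new-folder/my-note") == "new-folder/my-note.md"
assert ensure_md_extension("new-folder/my-note.md") == "new-folder/my-note.md"
```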

--------------------------------------------------------------------------------
/src/basic_memory/schemas/memory.py:
--------------------------------------------------------------------------------

```python
  1 | """Schemas for memory context."""
  2 | 
  3 | from datetime import datetime
  4 | from typing import List, Optional, Annotated, Sequence, Literal, Union, Dict
  5 | 
  6 | from annotated_types import MinLen, MaxLen
  7 | from pydantic import BaseModel, Field, BeforeValidator, TypeAdapter, field_serializer
  8 | 
  9 | from basic_memory.schemas.search import SearchItemType
 10 | 
 11 | 
 12 | def validate_memory_url_path(path: str) -> bool:
 13 |     """Validate that a memory URL path is well-formed.
 14 | 
 15 |     Args:
 16 |         path: The path part of a memory URL (without memory:// prefix)
 17 | 
 18 |     Returns:
 19 |         True if the path is valid, False otherwise
 20 | 
 21 |     Examples:
 22 |         >>> validate_memory_url_path("specs/search")
 23 |         True
 24 |         >>> validate_memory_url_path("memory//test")  # Double slash
 25 |         False
 26 |         >>> validate_memory_url_path("invalid://test")  # Contains protocol
 27 |         False
 28 |     """
 29 |     # Empty paths are not valid
 30 |     if not path or not path.strip():
 31 |         return False
 32 | 
 33 |     # Check for invalid protocol schemes within the path first (more specific)
 34 |     if "://" in path:
 35 |         return False
 36 | 
 37 |     # Check for double slashes anywhere in the path ("://" was already rejected above)
 38 |     if "//" in path:
 39 |         return False
 40 | 
 41 |     # Check for invalid characters (excluding * which is used for pattern matching)
 42 |     invalid_chars = {"<", ">", '"', "|", "?"}
 43 |     if any(char in path for char in invalid_chars):
 44 |         return False
 45 | 
 46 |     return True
 47 | 
 48 | 
 49 | def normalize_memory_url(url: str | None) -> str:
 50 |     """Normalize a MemoryUrl string with validation.
 51 | 
 52 |     Args:
 53 |         url: A path like "specs/search" or "memory://specs/search"
 54 | 
 55 |     Returns:
 56 |         Normalized URL starting with memory://
 57 | 
 58 |     Raises:
 59 |         ValueError: If the URL path is malformed
 60 | 
 61 |     Examples:
 62 |         >>> normalize_memory_url("specs/search")
 63 |         'memory://specs/search'
 64 |         >>> normalize_memory_url("memory://specs/search")
 65 |         'memory://specs/search'
 66 |         >>> normalize_memory_url("memory//test")
 67 |         Traceback (most recent call last):
 68 |         ...
 69 |         ValueError: Invalid memory URL path: 'memory//test' contains double slashes
 70 |     """
 71 |     if not url:
 72 |         raise ValueError("Memory URL cannot be empty")
 73 | 
 74 |     # Strip whitespace for consistency
 75 |     url = url.strip()
 76 | 
 77 |     if not url:
 78 |         raise ValueError("Memory URL cannot be empty or whitespace")
 79 | 
 80 |     clean_path = url.removeprefix("memory://")
 81 | 
 82 |     # Validate the extracted path
 83 |     if not validate_memory_url_path(clean_path):
 84 |         # Provide specific error messages for common issues
 85 |         if "://" in clean_path:
 86 |             raise ValueError(f"Invalid memory URL path: '{clean_path}' contains protocol scheme")
 87 |         elif "//" in clean_path:
 88 |             raise ValueError(f"Invalid memory URL path: '{clean_path}' contains double slashes")
 89 |         else:
 90 |             raise ValueError(f"Invalid memory URL path: '{clean_path}' contains invalid characters")
 91 | 
 92 |     return f"memory://{clean_path}"
 93 | 
 94 | 
 95 | MemoryUrl = Annotated[
 96 |     str,
 97 |     BeforeValidator(str.strip),  # Clean whitespace
 98 |     BeforeValidator(normalize_memory_url),  # Validate and normalize the URL
 99 |     MinLen(1),
100 |     MaxLen(2028),
101 | ]
102 | 
103 | memory_url = TypeAdapter(MemoryUrl)
104 | 
105 | 
106 | def memory_url_path(url: memory_url) -> str:  # pyright: ignore
107 |     """
108 |     Return the path portion of a url value by removing the "memory://" prefix from a given MemoryUrl.
109 | 
110 |     This function strips the "memory://" prefix from the given MemoryUrl
111 |     and returns the resulting string. If the provided url does not
112 |     begin with "memory://", the function simply returns the input url
113 |     unchanged.
114 | 
115 |     :param url: A MemoryUrl string with a "memory://" prefix.
116 |     :type url: MemoryUrl
117 |     :return: A string representing the URL with the "memory://" prefix removed.
118 |     :rtype: str
119 |     """
120 |     return url.removeprefix("memory://")
121 | 
122 | 
123 | class EntitySummary(BaseModel):
124 |     """Simplified entity representation."""
125 | 
126 |     type: Literal["entity"] = "entity"
127 |     permalink: Optional[str]
128 |     title: str
129 |     content: Optional[str] = None
130 |     file_path: str
131 |     created_at: Annotated[
132 |         datetime, Field(json_schema_extra={"type": "string", "format": "date-time"})
133 |     ]
134 | 
135 |     @field_serializer("created_at")
136 |     def serialize_created_at(self, dt: datetime) -> str:
137 |         return dt.isoformat()
138 | 
139 | 
140 | class RelationSummary(BaseModel):
141 |     """Simplified relation representation."""
142 | 
143 |     type: Literal["relation"] = "relation"
144 |     title: str
145 |     file_path: str
146 |     permalink: str
147 |     relation_type: str
148 |     from_entity: Optional[str] = None
149 |     to_entity: Optional[str] = None
150 |     created_at: Annotated[
151 |         datetime, Field(json_schema_extra={"type": "string", "format": "date-time"})
152 |     ]
153 | 
154 |     @field_serializer("created_at")
155 |     def serialize_created_at(self, dt: datetime) -> str:
156 |         return dt.isoformat()
157 | 
158 | 
159 | class ObservationSummary(BaseModel):
160 |     """Simplified observation representation."""
161 | 
162 |     type: Literal["observation"] = "observation"
163 |     title: str
164 |     file_path: str
165 |     permalink: str
166 |     category: str
167 |     content: str
168 |     created_at: Annotated[
169 |         datetime, Field(json_schema_extra={"type": "string", "format": "date-time"})
170 |     ]
171 | 
172 |     @field_serializer("created_at")
173 |     def serialize_created_at(self, dt: datetime) -> str:
174 |         return dt.isoformat()
175 | 
176 | 
177 | class MemoryMetadata(BaseModel):
178 |     """Simplified response metadata."""
179 | 
180 |     uri: Optional[str] = None
181 |     types: Optional[List[SearchItemType]] = None
182 |     depth: int
183 |     timeframe: Optional[str] = None
184 |     generated_at: Annotated[
185 |         datetime, Field(json_schema_extra={"type": "string", "format": "date-time"})
186 |     ]
187 |     primary_count: Optional[int] = None  # Changed field name
188 |     related_count: Optional[int] = None  # Changed field name
189 |     total_results: Optional[int] = None  # For backward compatibility
190 |     total_relations: Optional[int] = None
191 |     total_observations: Optional[int] = None
192 | 
193 |     @field_serializer("generated_at")
194 |     def serialize_generated_at(self, dt: datetime) -> str:
195 |         return dt.isoformat()
196 | 
197 | 
198 | class ContextResult(BaseModel):
199 |     """Context result containing a primary item with its observations and related items."""
200 | 
201 |     primary_result: Annotated[
202 |         Union[EntitySummary, RelationSummary, ObservationSummary],
203 |         Field(discriminator="type", description="Primary item"),
204 |     ]
205 | 
206 |     observations: Sequence[ObservationSummary] = Field(
207 |         description="Observations belonging to this entity", default_factory=list
208 |     )
209 | 
210 |     related_results: Sequence[
211 |         Annotated[
212 |             Union[EntitySummary, RelationSummary, ObservationSummary], Field(discriminator="type")
213 |         ]
214 |     ] = Field(description="Related items", default_factory=list)
215 | 
216 | 
217 | class GraphContext(BaseModel):
218 |     """Complete context response."""
219 | 
220 |     # hierarchical results
221 |     results: Sequence[ContextResult] = Field(
222 |         description="Hierarchical results with related items nested", default_factory=list
223 |     )
224 | 
225 |     # Context metadata
226 |     metadata: MemoryMetadata
227 | 
228 |     page: Optional[int] = None
229 |     page_size: Optional[int] = None
230 | 
231 | 
232 | class ActivityStats(BaseModel):
233 |     """Statistics about activity across all projects."""
234 | 
235 |     total_projects: int
236 |     active_projects: int = Field(description="Projects with activity in timeframe")
237 |     most_active_project: Optional[str] = None
238 |     total_items: int = Field(description="Total items across all projects")
239 |     total_entities: int = 0
240 |     total_relations: int = 0
241 |     total_observations: int = 0
242 | 
243 | 
244 | class ProjectActivity(BaseModel):
245 |     """Activity summary for a single project."""
246 | 
247 |     project_name: str
248 |     project_path: str
249 |     activity: GraphContext = Field(description="The actual activity data for this project")
250 |     item_count: int = Field(description="Total items in this project's activity")
251 |     last_activity: Optional[
252 |         Annotated[datetime, Field(json_schema_extra={"type": "string", "format": "date-time"})]
253 |     ] = Field(default=None, description="Most recent activity timestamp")
254 |     active_folders: List[str] = Field(default_factory=list, description="Most active folders")
255 | 
256 |     @field_serializer("last_activity")
257 |     def serialize_last_activity(self, dt: Optional[datetime]) -> Optional[str]:
258 |         return dt.isoformat() if dt else None
259 | 
260 | 
261 | class ProjectActivitySummary(BaseModel):
262 |     """Summary of activity across all projects."""
263 | 
264 |     projects: Dict[str, ProjectActivity] = Field(
265 |         description="Activity per project, keyed by project name"
266 |     )
267 |     summary: ActivityStats
268 |     timeframe: str = Field(description="The timeframe used for the query")
269 |     generated_at: Annotated[
270 |         datetime, Field(json_schema_extra={"type": "string", "format": "date-time"})
271 |     ]
272 |     guidance: Optional[str] = Field(
273 |         default=None, description="Assistant guidance for project selection and session management"
274 |     )
275 | 
276 |     @field_serializer("generated_at")
277 |     def serialize_generated_at(self, dt: datetime) -> str:
278 |         return dt.isoformat()
279 | 
```
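
A short usage sketch for the `MemoryUrl` adapter above, using the standard pydantic v2 `TypeAdapter.validate_python` API:

```python
from pydantic import ValidationError

from basic_memory.schemas.memory import memory_url, memory_url_path

# Bare paths are normalized to the memory:// scheme
url = memory_url.validate_python("specs/search")
assert url == "memory://specs/search"

# Already-prefixed URLs pass through unchanged
assert memory_url.validate_python("memory://specs/search") == url

# memory_url_path strips the scheme back off
assert memory_url_path(url) == "specs/search"

# Malformed paths are rejected during validation
try:
    memory_url.validate_python("memory//test")  # bare double slash
except ValidationError as e:
    print(e)
```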

--------------------------------------------------------------------------------
/v15-docs/env-file-removal.md:
--------------------------------------------------------------------------------

```markdown
  1 | # .env File Loading Removed
  2 | 
  3 | **Status**: Security Fix
  4 | **PR**: #330
  5 | **Impact**: Breaking change for users relying on .env files
  6 | 
  7 | ## What Changed
  8 | 
  9 | v0.15.0 **removes automatic .env file loading** from Basic Memory configuration. Environment variables must now be set explicitly through your shell, systemd, Docker, or other standard mechanisms.
 10 | 
 11 | ### Before v0.15.0
 12 | 
 13 | ```python
 14 | # BasicMemoryConfig automatically loaded .env files
 15 | from dotenv import load_dotenv
 16 | load_dotenv()  # ← Automatically loaded .env
 17 | 
 18 | config = BasicMemoryConfig()  # ← Used .env values
 19 | ```
 20 | 
 21 | ### v0.15.0 and Later
 22 | 
 23 | ```python
 24 | # No automatic .env loading
 25 | config = BasicMemoryConfig()  # ← Only uses actual environment variables
 26 | ```
 27 | 
 28 | ## Why This Changed
 29 | 
 30 | ### Security Vulnerability
 31 | 
 32 | Automatic .env loading created security risks:
 33 | 
 34 | 1. **Unintended file loading:**
 35 |    - Could load `.env` from current directory
 36 |    - Could load `.env` from parent directories
 37 |    - Risk of loading untrusted `.env` files
 38 | 
 39 | 2. **Credential leakage:**
 40 |    - `.env` files might contain secrets
 41 |    - Easy to accidentally commit to git
 42 |    - Hard to audit what's loaded
 43 | 
 44 | 3. **Configuration confusion:**
 45 |    - Unclear which values come from `.env` vs environment
 46 |    - Debugging difficult with implicit loading
 47 | 
 48 | ### Best Practice
 49 | 
 50 | Modern deployment practices use explicit environment configuration:
 51 | - Shell exports
 52 | - systemd Environment directives
 53 | - Docker environment variables
 54 | - Kubernetes ConfigMaps/Secrets
 55 | - CI/CD variable injection
 56 | 
 57 | ## Migration Guide
 58 | 
 59 | ### If You Used .env Files
 60 | 
 61 | **Step 1: Check if you have a .env file**
 62 | ```bash
 63 | ls -la .env
 64 | ls -la ~/.basic-memory/.env
 65 | ```
 66 | 
 67 | **Step 2: Review .env contents**
 68 | ```bash
 69 | cat .env
 70 | ```
 71 | 
 72 | **Step 3: Convert to explicit environment variables**
 73 | 
 74 | **Option A: Shell exports (development)**
 75 | ```bash
 76 | # Move values from .env to shell config
 77 | # .bashrc or .zshrc
 78 | 
 79 | export BASIC_MEMORY_PROJECT_ROOT=/app/data
 80 | export BASIC_MEMORY_LOG_LEVEL=DEBUG
 81 | export BASIC_MEMORY_DEFAULT_PROJECT=main
 82 | ```
 83 | 
 84 | **Option B: direnv (recommended for development)**
 85 | ```bash
 86 | # Install direnv
 87 | brew install direnv  # macOS
 88 | sudo apt install direnv  # Linux
 89 | 
 90 | # Create .envrc (git-ignored)
 91 | cat > .envrc <<EOF
 92 | export BASIC_MEMORY_PROJECT_ROOT=/app/data
 93 | export BASIC_MEMORY_LOG_LEVEL=DEBUG
 94 | EOF
 95 | 
 96 | # Allow direnv for this directory
 97 | direnv allow
 98 | 
 99 | # Auto-loads when entering directory
100 | ```
101 | 
102 | **Option C: systemd (production)**
103 | ```ini
104 | # /etc/systemd/system/basic-memory.service
105 | [Service]
106 | Environment="BASIC_MEMORY_PROJECT_ROOT=/var/lib/basic-memory"
107 | Environment="BASIC_MEMORY_LOG_LEVEL=INFO"
108 | ExecStart=/usr/local/bin/basic-memory serve
109 | ```
110 | 
111 | **Option D: Docker (containers)**
112 | ```yaml
113 | # docker-compose.yml
114 | services:
115 |   basic-memory:
116 |     environment:
117 |       BASIC_MEMORY_PROJECT_ROOT: /app/data
118 |       BASIC_MEMORY_LOG_LEVEL: INFO
119 | ```
120 | 
121 | ### If You Didn't Use .env Files
122 | 
123 | No action needed - your setup already uses explicit environment variables.
124 | 
125 | ## Alternative Solutions
126 | 
127 | ### Development: Use direnv
128 | 
129 | [direnv](https://direnv.net/) automatically loads environment variables when entering a directory:
130 | 
131 | **Setup:**
132 | ```bash
133 | # Install
134 | brew install direnv
135 | 
136 | # Add to shell (.bashrc or .zshrc)
137 | eval "$(direnv hook bash)"  # or zsh
138 | 
139 | # Create .envrc in project
140 | cat > .envrc <<EOF
141 | export BASIC_MEMORY_LOG_LEVEL=DEBUG
142 | export BASIC_MEMORY_PROJECT_ROOT=\$PWD/data
143 | EOF
144 | 
145 | # Git-ignore it
146 | echo ".envrc" >> .gitignore
147 | 
148 | # Allow it
149 | direnv allow
150 | ```
151 | 
152 | **Usage:**
153 | ```bash
154 | # Entering directory auto-loads variables
155 | cd ~/my-project
156 | # → direnv: loading .envrc
157 | # → direnv: export +BASIC_MEMORY_LOG_LEVEL +BASIC_MEMORY_PROJECT_ROOT
158 | 
159 | # Check variables
160 | env | grep BASIC_MEMORY_
161 | ```
162 | 
163 | ### Production: External Configuration
164 | 
165 | **AWS Systems Manager:**
166 | ```bash
167 | # Store in Parameter Store
168 | aws ssm put-parameter \
169 |   --name /basic-memory/project-root \
170 |   --value /app/data \
171 |   --type SecureString
172 | 
173 | # Retrieve and export
174 | export BASIC_MEMORY_PROJECT_ROOT=$(aws ssm get-parameter \
175 |   --name /basic-memory/project-root \
176 |   --with-decryption \
177 |   --query Parameter.Value \
178 |   --output text)
179 | ```
180 | 
181 | **Kubernetes Secrets:**
182 | ```yaml
183 | apiVersion: v1
184 | kind: Secret
185 | metadata:
186 |   name: basic-memory-env
187 | stringData:
188 |   BASIC_MEMORY_PROJECT_ROOT: /app/data
189 | ---
190 | apiVersion: v1
191 | kind: Pod
192 | spec:
193 |   containers:
194 |   - name: basic-memory
195 |     envFrom:
196 |     - secretRef:
197 |         name: basic-memory-env
198 | ```
199 | 
200 | **HashiCorp Vault:**
201 | ```bash
202 | # Store in Vault
203 | vault kv put secret/basic-memory \
204 |   project_root=/app/data \
205 |   log_level=INFO
206 | 
207 | # Retrieve and export
208 | export BASIC_MEMORY_PROJECT_ROOT=$(vault kv get -field=project_root secret/basic-memory)
209 | ```
210 | 
211 | ## Security Best Practices
212 | 
213 | ### 1. Never Commit Environment Files
214 | 
215 | **Always git-ignore:**
216 | ```bash
217 | # .gitignore
218 | .env
219 | .env.*
220 | .envrc
221 | *.env
222 | cloud-auth.json
223 | ```
224 | 
225 | ### 2. Use Secret Management
226 | 
227 | **For sensitive values:**
228 | - AWS Secrets Manager
229 | - HashiCorp Vault
230 | - Kubernetes Secrets
231 | - Azure Key Vault
232 | - Google Secret Manager
233 | 
234 | ### 3. Scope Secrets Appropriately
235 | 
236 | **Development:**
237 | ```bash
238 | # Development secrets (less sensitive)
239 | export BASIC_MEMORY_LOG_LEVEL=DEBUG
240 | export BASIC_MEMORY_PROJECT_ROOT=~/dev/data
241 | ```
242 | 
243 | **Production:**
244 | ```bash
245 | # Production secrets (highly sensitive)
246 | export BASIC_MEMORY_CLOUD_SECRET_KEY=$(fetch-from-vault)
247 | export BASIC_MEMORY_PROJECT_ROOT=/app/data
248 | ```
249 | 
250 | ### 4. Audit Environment Variables
251 | 
252 | **Log non-sensitive vars:**
253 | ```python
254 | import os
255 | from loguru import logger
256 | 
257 | # Safe to log
258 | safe_vars = {
259 |     k: v for k, v in os.environ.items()
260 |     if k.startswith("BASIC_MEMORY_") and "SECRET" not in k
261 | }
262 | logger.info(f"Config loaded with: {safe_vars}")
263 | 
264 | # Never log
265 | secret_vars = [k for k in os.environ.keys() if "SECRET" in k or "KEY" in k]
266 | logger.debug(f"Secret vars present: {len(secret_vars)}")
267 | ```
268 | 
269 | ### 5. Principle of Least Privilege
270 | 
271 | ```bash
272 | # ✓ Good: Minimal permissions
273 | export BASIC_MEMORY_PROJECT_ROOT=/app/data/tenant-123  # Scoped to tenant
274 | 
275 | # ✗ Bad: Too permissive
276 | export BASIC_MEMORY_PROJECT_ROOT=/  # Entire filesystem
277 | ```
278 | 
279 | ## Troubleshooting
280 | 
281 | ### Variables Not Loading
282 | 
283 | **Problem:** Settings not taking effect after migration
284 | 
285 | **Check:**
286 | ```bash
287 | # Are variables actually exported?
288 | env | grep BASIC_MEMORY_
289 | 
290 | # Not exported (wrong)
291 | BASIC_MEMORY_LOG_LEVEL=DEBUG  # Missing 'export'
292 | 
293 | # Exported (correct)
294 | export BASIC_MEMORY_LOG_LEVEL=DEBUG
295 | ```
296 | 
297 | ### .env Still Present
298 | 
299 | **Problem:** Old .env file exists but ignored
300 | 
301 | **Solution:**
302 | ```bash
303 | # Review and remove
304 | cat .env  # Check contents
305 | rm .env   # Remove after migrating
306 | 
307 | # Ensure git-ignored
308 | echo ".env" >> .gitignore
309 | ```
310 | 
311 | ### Different Behavior After Upgrade
312 | 
313 | **Problem:** Config different after v0.15.0
314 | 
315 | **Check for .env usage:**
316 | ```bash
317 | # Did you have .env?
318 | git log --all --full-history -- .env
319 | 
320 | # If yes, migrate values to explicit env vars
321 | ```
322 | 
323 | ## Configuration Checklist
324 | 
325 | After removing .env files, verify:
326 | 
327 | - [ ] All required env vars exported explicitly
328 | - [ ] .env files removed or git-ignored
329 | - [ ] Production uses systemd/Docker/K8s env vars
330 | - [ ] Development uses direnv or shell config
331 | - [ ] Secrets stored in secret manager (not env files)
332 | - [ ] No credentials committed to git
333 | - [ ] Documentation updated with new approach
334 | 
335 | ## Example Configurations
336 | 
337 | ### Local Development
338 | 
339 | **~/.bashrc or ~/.zshrc:**
340 | ```bash
341 | # Basic Memory configuration
342 | export BASIC_MEMORY_LOG_LEVEL=DEBUG
343 | export BASIC_MEMORY_PROJECT_ROOT=~/dev/basic-memory
344 | export BASIC_MEMORY_DEFAULT_PROJECT=main
345 | export BASIC_MEMORY_DEFAULT_PROJECT_MODE=true
346 | ```
347 | 
348 | ### Docker Development
349 | 
350 | **docker-compose.yml:**
351 | ```yaml
352 | services:
353 |   basic-memory:
354 |     image: basic-memory:latest
355 |     environment:
356 |       BASIC_MEMORY_LOG_LEVEL: DEBUG
357 |       BASIC_MEMORY_PROJECT_ROOT: /app/data
358 |       BASIC_MEMORY_HOME: /app/data/basic-memory
359 |     volumes:
360 |       - ./data:/app/data
361 | ```
362 | 
363 | ### Production Deployment
364 | 
365 | **systemd service:**
366 | ```ini
367 | [Unit]
368 | Description=Basic Memory Service
369 | 
370 | [Service]
371 | Type=simple
372 | User=basicmemory
373 | Environment="BASIC_MEMORY_ENV=user"
374 | Environment="BASIC_MEMORY_LOG_LEVEL=INFO"
375 | Environment="BASIC_MEMORY_PROJECT_ROOT=/var/lib/basic-memory"
376 | EnvironmentFile=/etc/basic-memory/secrets.env
377 | ExecStart=/usr/local/bin/basic-memory serve
378 | 
379 | [Install]
380 | WantedBy=multi-user.target
381 | ```
382 | 
383 | **/etc/basic-memory/secrets.env:**
384 | ```bash
385 | # Loaded via EnvironmentFile
386 | BASIC_MEMORY_CLOUD_SECRET_KEY=<from-secret-manager>
387 | ```
388 | 
389 | ### Kubernetes Production
390 | 
391 | **ConfigMap (non-secret):**
392 | ```yaml
393 | apiVersion: v1
394 | kind: ConfigMap
395 | metadata:
396 |   name: basic-memory-config
397 | data:
398 |   BASIC_MEMORY_LOG_LEVEL: "INFO"
399 |   BASIC_MEMORY_PROJECT_ROOT: "/app/data"
400 | ```
401 | 
402 | **Secret (sensitive):**
403 | ```yaml
404 | apiVersion: v1
405 | kind: Secret
406 | metadata:
407 |   name: basic-memory-secrets
408 | type: Opaque
409 | stringData:
410 |   BASIC_MEMORY_CLOUD_SECRET_KEY: <plain-text-secret>  # stringData takes plain text; use `data` for base64
411 | ```
412 | 
413 | **Deployment:**
414 | ```yaml
415 | apiVersion: apps/v1
416 | kind: Deployment
417 | spec:
418 |   template:
419 |     spec:
420 |       containers:
421 |       - name: basic-memory
422 |         envFrom:
423 |         - configMapRef:
424 |             name: basic-memory-config
425 |         - secretRef:
426 |             name: basic-memory-secrets
427 | ```
428 | 
429 | ## See Also
430 | 
431 | - `env-var-overrides.md` - How environment variables work
432 | - Security best practices documentation
433 | - Secret management guide
434 | - Configuration reference
435 | 
```
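
Since nothing reads a .env file for you anymore, a fail-fast startup check is a cheap way to catch missing configuration. A minimal sketch, assuming an example variable list (this helper is not part of basic-memory):

```python
import os
import sys

# Example list only -- adjust to what your deployment actually requires
REQUIRED_VARS = ["BASIC_MEMORY_PROJECT_ROOT", "BASIC_MEMORY_DEFAULT_PROJECT"]

missing = [name for name in REQUIRED_VARS if not os.environ.get(name)]
if missing:
    # os.environ only sees variables exported by the shell/systemd/Docker;
    # a .env file in the working directory is ignored in v0.15.0+
    sys.exit(f"Missing required environment variables: {', '.join(missing)}")
```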

--------------------------------------------------------------------------------
/src/basic_memory/mcp/tools/delete_note.py:
--------------------------------------------------------------------------------

```python
  1 | from textwrap import dedent
  2 | from typing import Optional
  3 | 
  4 | from loguru import logger
  5 | from fastmcp import Context
  6 | 
  7 | from basic_memory.mcp.project_context import get_active_project
  8 | from basic_memory.mcp.tools.utils import call_delete
  9 | from basic_memory.mcp.server import mcp
 10 | from basic_memory.mcp.async_client import get_client
 11 | from basic_memory.schemas import DeleteEntitiesResponse
 12 | 
 13 | 
 14 | def _format_delete_error_response(project: str, error_message: str, identifier: str) -> str:
 15 |     """Format helpful error responses for delete failures that guide users to successful deletions."""
 16 | 
 17 |     # Note not found errors
 18 |     if "entity not found" in error_message.lower() or "not found" in error_message.lower():
 19 |         search_term = identifier.split("/")[-1] if "/" in identifier else identifier
 20 |         title_format = (
 21 |             identifier.split("/")[-1].replace("-", " ").title() if "/" in identifier else identifier
 22 |         )
 23 |         permalink_format = identifier.lower().replace(" ", "-")
 24 | 
 25 |         return dedent(f"""
 26 |             # Delete Failed - Note Not Found
 27 | 
 28 |             The note '{identifier}' could not be found for deletion in {project}.
 29 | 
 30 |             ## This might mean:
 31 |             1. **Already deleted**: The note may have been deleted previously
 32 |             2. **Wrong identifier**: The identifier format might be incorrect
 33 |             3. **Different project**: The note might be in a different project
 34 | 
 35 |             ## How to verify:
 36 |             1. **Search for the note**: Use `search_notes("{project}", "{search_term}")` to find it
 37 |             2. **Try different formats**:
 38 |                - If you used a permalink like "folder/note-title", try just the title: "{title_format}"
 39 |                - If you used a title, try the permalink format: "{permalink_format}"
 40 | 
 41 |             3. **Browse the project**: Use `list_directory("{project}", "/")` to see what notes exist
 42 |             4. **Check other projects**: Use `list_memory_projects()` in case the note lives in a different project
 43 | 
 44 |             ## If the note actually exists:
 45 |             ```
 46 |             # First, find the correct identifier:
 47 |             search_notes("{project}", "{identifier}")
 48 | 
 49 |             # Then delete using the correct identifier:
 50 |             delete_note("correct-identifier-from-search", project="{project}")
 51 |             ```
 52 | 
 53 |             ## If you want to delete multiple similar notes:
 54 |             Use search to find all related notes and delete them one by one.
 55 |             """).strip()
 56 | 
 57 |     # Permission/access errors
 58 |     if (
 59 |         "permission" in error_message.lower()
 60 |         or "access" in error_message.lower()
 61 |         or "forbidden" in error_message.lower()
 62 |     ):
 63 |         return f"""# Delete Failed - Permission Error
 64 | 
 65 | You don't have permission to delete '{identifier}': {error_message}
 66 | 
 67 | ## How to resolve:
 68 | 1. **Check permissions**: Verify you have delete/write access to this project
 69 | 2. **File locks**: The note might be open in another application
 70 | 3. **Project access**: Ensure you're in the correct project with proper permissions
 71 | 
 72 | ## Alternative actions:
 73 | - List available projects: `list_memory_projects()`
 74 | - Specify the correct project: `delete_note("{identifier}", project="project-name")`
 75 | - Verify note exists first: `read_note("{identifier}", project="project-name")`
 76 | 
 77 | ## If you have read-only access:
 78 | Ask someone with write access to delete the note."""
 79 | 
 80 |     # Server/filesystem errors
 81 |     if (
 82 |         "server error" in error_message.lower()
 83 |         or "filesystem" in error_message.lower()
 84 |         or "disk" in error_message.lower()
 85 |     ):
 86 |         return f"""# Delete Failed - System Error
 87 | 
 88 | A system error occurred while deleting '{identifier}': {error_message}
 89 | 
 90 | ## Immediate steps:
 91 | 1. **Try again**: The error might be temporary
 92 | 2. **Check file status**: Verify the file isn't locked or in use
 93 | 3. **Check disk space**: Ensure the system has adequate storage
 94 | 
 95 | ## Troubleshooting:
 96 | - Verify note exists: `read_note("{project}", "{identifier}")`
 97 | - Try again in a few moments
 98 | 
 99 | ## If problem persists:
100 | Send a message to [email protected] - there may be a filesystem or database issue."""
101 | 
102 |     # Database/sync errors
103 |     if "database" in error_message.lower() or "sync" in error_message.lower():
104 |         return f"""# Delete Failed - Database Error
105 | 
106 | A database error occurred while deleting '{identifier}': {error_message}
107 | 
108 | ## This usually means:
109 | 1. **Sync conflict**: The file system and database are out of sync
110 | 2. **Database lock**: Another operation is accessing the database
111 | 3. **Corrupted entry**: The database entry might be corrupted
112 | 
113 | ## Steps to resolve:
114 | 1. **Try again**: Wait a moment and retry the deletion
115 | 2. **Check note status**: `read_note("{project}", "{identifier}")` to see current state
116 | 3. **Manual verification**: Use `list_directory()` to see if file still exists
117 | 
118 | ## If the note appears gone but database shows it exists:
119 | Send a message to [email protected] - a manual database cleanup may be needed."""
120 | 
121 |     # Generic fallback
122 |     return f"""# Delete Failed
123 | 
124 | Error deleting note '{identifier}': {error_message}
125 | 
126 | ## General troubleshooting:
127 | 1. **Verify the note exists**: `read_note("{project}", "{identifier}")` or `search_notes("{project}", "{identifier}")`
128 | 2. **Check permissions**: Ensure you can edit/delete files in this project
129 | 3. **Try again**: The error might be temporary
130 | 4. **Check project**: Make sure you're in the correct project
131 | 
132 | ## Step-by-step approach:
133 | ```
134 | # 1. Confirm note exists and get correct identifier
135 | search_notes("{project}", "{identifier}")
136 | 
137 | # 2. Read the note to verify access
138 | read_note("{project}", "correct-identifier-from-search")
139 | 
140 | # 3. Try deletion with correct identifier
141 | delete_note("correct-identifier-from-search", project="{project}")
142 | ```
143 | 
144 | ## Alternative approaches:
145 | - Check what notes exist: `list_directory("{project}", "/")`
146 | 
147 | ## Need help?
148 | If the note should be deleted but the operation keeps failing, send a message to [email protected]."""
149 | 
150 | 
151 | @mcp.tool(description="Delete a note by title or permalink")
152 | async def delete_note(
153 |     identifier: str, project: Optional[str] = None, context: Context | None = None
154 | ) -> bool | str:
155 |     """Delete a note from the knowledge base.
156 | 
157 |     Permanently removes a note from the specified project. The note is identified
158 |     by title or permalink. If the note doesn't exist, the operation returns False
159 |     without error. If deletion fails due to other issues, helpful error messages are provided.
160 | 
161 |     Project Resolution:
162 |     Server resolves projects in this order: Single Project Mode → project parameter → default project.
163 |     If project unknown, use list_memory_projects() or recent_activity() first.
164 | 
165 |     Args:
166 |         identifier: Note title or permalink to delete.
167 |                    Can be a title like "Meeting Notes" or permalink like "notes/meeting-notes"
168 |         project: Project name to delete from. Optional - server will resolve using hierarchy.
169 |                 If unknown, use list_memory_projects() to discover available projects.
170 |         context: Optional FastMCP context for performance caching.
171 | 
172 |     Returns:
173 |         True if note was successfully deleted, False if note was not found.
174 |         On errors, returns a formatted string with helpful troubleshooting guidance.
175 | 
176 |     Examples:
177 |         # Delete by title
178 |         delete_note("Meeting Notes: Project Planning", project="my-project")
179 | 
180 |         # Delete by permalink
181 |         delete_note("notes/project-planning", project="work-docs")
182 | 
183 |         # Delete with exact path
184 |         delete_note("experiments/ml-model-results", project="research")
185 | 
186 |         # Common usage pattern
187 |         if await delete_note("old-draft", project="my-project"):
188 |             print("Note deleted successfully")
189 |         else:
190 |             print("Note not found or already deleted")
191 | 
192 |     Raises:
193 |         HTTPError: If project doesn't exist or is inaccessible
194 |         SecurityError: If identifier attempts path traversal
195 | 
196 |     Warning:
197 |         This operation is permanent and cannot be undone. The note file
198 |         will be removed from the filesystem and all references will be lost.
199 | 
200 |     Note:
201 |         If the note is not found, this function provides helpful error messages
202 |         with suggestions for finding the correct identifier, including search
203 |         commands and alternative formats to try.
204 |     """
205 |     async with get_client() as client:
206 |         active_project = await get_active_project(client, project, context)
207 |         project_url = active_project.project_url
208 | 
209 |         try:
210 |             response = await call_delete(client, f"{project_url}/knowledge/entities/{identifier}")
211 |             result = DeleteEntitiesResponse.model_validate(response.json())
212 | 
213 |             if result.deleted:
214 |                 logger.info(
215 |                     f"Successfully deleted note: {identifier} in project: {active_project.name}"
216 |                 )
217 |                 return True
218 |             else:
219 |                 logger.warning(f"Delete operation completed but note was not deleted: {identifier}")
220 |                 return False
221 | 
222 |         except Exception as e:  # pragma: no cover
223 |             logger.error(f"Delete failed for '{identifier}': {e}, project: {active_project.name}")
224 |             # Return formatted error message for better user experience
225 |             return _format_delete_error_response(active_project.name, str(e), identifier)
226 | 
```
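
Because `delete_note` returns `True`/`False` on the happy path but a formatted guidance string on failure, callers should branch on the value rather than assume a bool. A small sketch of that contract, based on the docstring above:

```python
def handle_delete_result(result: bool | str) -> None:
    """Branch on delete_note's bool | str return contract."""
    if result is True:
        print("Note deleted")
    elif result is False:
        print("Note not found (or already deleted)")
    else:
        # On failure the tool returns formatted troubleshooting guidance
        print(result)


handle_delete_result(True)
handle_delete_result("# Delete Failed - Note Not Found ...")
```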

--------------------------------------------------------------------------------
/src/basic_memory/cli/commands/cloud/rclone_installer.py:
--------------------------------------------------------------------------------

```python
  1 | """Cross-platform rclone installation utilities."""
  2 | 
  3 | import os
  4 | import platform
  5 | import shutil
  6 | import subprocess
  7 | from typing import Optional
  8 | 
  9 | from rich.console import Console
 10 | 
 11 | console = Console()
 12 | 
 13 | 
 14 | class RcloneInstallError(Exception):
 15 |     """Exception raised for rclone installation errors."""
 16 | 
 17 |     pass
 18 | 
 19 | 
 20 | def is_rclone_installed() -> bool:
 21 |     """Check if rclone is already installed and available in PATH."""
 22 |     return shutil.which("rclone") is not None
 23 | 
 24 | 
 25 | def get_platform() -> str:
 26 |     """Get the current platform identifier."""
 27 |     system = platform.system().lower()
 28 |     if system == "darwin":
 29 |         return "macos"
 30 |     elif system == "linux":
 31 |         return "linux"
 32 |     elif system == "windows":
 33 |         return "windows"
 34 |     else:
 35 |         raise RcloneInstallError(f"Unsupported platform: {system}")
 36 | 
 37 | 
 38 | def run_command(command: list[str], check: bool = True) -> subprocess.CompletedProcess:
 39 |     """Run a command with proper error handling."""
 40 |     try:
 41 |         console.print(f"[dim]Running: {' '.join(command)}[/dim]")
 42 |         result = subprocess.run(command, capture_output=True, text=True, check=check)
 43 |         if result.stdout:
 44 |             console.print(f"[dim]Output: {result.stdout.strip()}[/dim]")
 45 |         return result
 46 |     except subprocess.CalledProcessError as e:
 47 |         console.print(f"[red]Command failed: {e}[/red]")
 48 |         if e.stderr:
 49 |             console.print(f"[red]Error output: {e.stderr}[/red]")
 50 |         raise RcloneInstallError(f"Command failed: {e}") from e
 51 |     except FileNotFoundError as e:
 52 |         raise RcloneInstallError(f"Command not found: {' '.join(command)}") from e
 53 | 
 54 | 
 55 | def install_rclone_macos() -> None:
 56 |     """Install rclone on macOS using Homebrew or official script."""
 57 |     # Try Homebrew first
 58 |     if shutil.which("brew"):
 59 |         try:
 60 |             console.print("[blue]Installing rclone via Homebrew...[/blue]")
 61 |             run_command(["brew", "install", "rclone"])
 62 |             console.print("[green]rclone installed via Homebrew[/green]")
 63 |             return
 64 |         except RcloneInstallError:
 65 |             console.print(
 66 |                 "[yellow]Homebrew installation failed, trying official script...[/yellow]"
 67 |             )
 68 | 
 69 |     # Fallback to official script
 70 |     console.print("[blue]Installing rclone via official script...[/blue]")
 71 |     try:
 72 |         run_command(["sh", "-c", "curl https://rclone.org/install.sh | sudo bash"])
 73 |         console.print("[green]rclone installed via official script[/green]")
 74 |     except RcloneInstallError:
 75 |         raise RcloneInstallError(
 76 |             "Failed to install rclone. Please install manually: brew install rclone"
 77 |         )
 78 | 
 79 | 
 80 | def install_rclone_linux() -> None:
 81 |     """Install rclone on Linux using package managers or official script."""
 82 |     # Try snap first (most universal)
 83 |     if shutil.which("snap"):
 84 |         try:
 85 |             console.print("[blue]Installing rclone via snap...[/blue]")
 86 |             run_command(["sudo", "snap", "install", "rclone"])
 87 |             console.print("[green]rclone installed via snap[/green]")
 88 |             return
 89 |         except RcloneInstallError:
 90 |             console.print("[yellow]Snap installation failed, trying apt...[/yellow]")
 91 | 
 92 |     # Try apt (Debian/Ubuntu)
 93 |     if shutil.which("apt"):
 94 |         try:
 95 |             console.print("[blue]Installing rclone via apt...[/blue]")
 96 |             run_command(["sudo", "apt", "update"])
 97 |             run_command(["sudo", "apt", "install", "-y", "rclone"])
 98 |             console.print("[green]rclone installed via apt[/green]")
 99 |             return
100 |         except RcloneInstallError:
101 |             console.print("[yellow]apt installation failed, trying official script...[/yellow]")
102 | 
103 |     # Fallback to official script
104 |     console.print("[blue]Installing rclone via official script...[/blue]")
105 |     try:
106 |         run_command(["sh", "-c", "curl https://rclone.org/install.sh | sudo bash"])
107 |         console.print("[green]rclone installed via official script[/green]")
108 |     except RcloneInstallError:
109 |         raise RcloneInstallError(
110 |             "Failed to install rclone. Please install manually: sudo snap install rclone"
111 |         )
112 | 
113 | 
114 | def install_rclone_windows() -> None:
115 |     """Install rclone on Windows using package managers."""
116 |     # Try winget first (built into Windows 10+)
117 |     if shutil.which("winget"):
118 |         try:
119 |             console.print("[blue]Installing rclone via winget...[/blue]")
120 |             run_command(
121 |                 [
122 |                     "winget",
123 |                     "install",
124 |                     "Rclone.Rclone",
125 |                     "--accept-source-agreements",
126 |                     "--accept-package-agreements",
127 |                 ]
128 |             )
129 |             console.print("[green]rclone installed via winget[/green]")
130 |             return
131 |         except RcloneInstallError:
132 |             console.print("[yellow]winget installation failed, trying chocolatey...[/yellow]")
133 | 
134 |     # Try chocolatey
135 |     if shutil.which("choco"):
136 |         try:
137 |             console.print("[blue]Installing rclone via chocolatey...[/blue]")
138 |             run_command(["choco", "install", "rclone", "-y"])
139 |             console.print("[green]rclone installed via chocolatey[/green]")
140 |             return
141 |         except RcloneInstallError:
142 |             console.print("[yellow]chocolatey installation failed, trying scoop...[/yellow]")
143 | 
144 |     # Try scoop
145 |     if shutil.which("scoop"):
146 |         try:
147 |             console.print("[blue]Installing rclone via scoop...[/blue]")
148 |             run_command(["scoop", "install", "rclone"])
149 |             console.print("[green]rclone installed via scoop[/green]")
150 |             return
151 |         except RcloneInstallError:
152 |             console.print("[yellow]scoop installation failed[/yellow]")
153 | 
154 |     # No package manager available
155 |     raise RcloneInstallError(
156 |         "Could not install rclone automatically. Please install a package manager "
157 |         "(winget, chocolatey, or scoop) or install rclone manually from https://rclone.org/downloads/"
158 |     )
159 | 
160 | 
161 | def install_rclone(platform_override: Optional[str] = None) -> None:
162 |     """Install rclone for the current platform."""
163 |     if is_rclone_installed():
164 |         console.print("[green]rclone is already installed[/green]")
165 |         return
166 | 
167 |     platform_name = platform_override or get_platform()
168 |     console.print(f"[blue]Installing rclone for {platform_name}...[/blue]")
169 | 
170 |     try:
171 |         if platform_name == "macos":
172 |             install_rclone_macos()
173 |         elif platform_name == "linux":
174 |             install_rclone_linux()
175 |         elif platform_name == "windows":
176 |             install_rclone_windows()
177 |             refresh_windows_path()
178 |         else:
179 |             raise RcloneInstallError(f"Unsupported platform: {platform_name}")
180 | 
181 |         # Verify installation
182 |         if not is_rclone_installed():
183 |             raise RcloneInstallError("rclone installation completed but command not found in PATH")
184 | 
185 |         console.print("[green]rclone installation completed successfully[/green]")
186 | 
187 |     except RcloneInstallError:
188 |         raise
189 |     except Exception as e:
190 |         raise RcloneInstallError(f"Unexpected error during installation: {e}") from e
191 | 
192 | 
193 | def refresh_windows_path() -> None:
194 |     """Refresh the Windows PATH environment variable for the current session."""
195 |     if platform.system().lower() != "windows":
196 |         return
197 | 
198 |     # Import here, after platform detection: winreg exists only on Windows. The pyright/pylance
199 |     # ignores below suppress winreg attribute "errors" when type-checking on other platforms.
200 |     import winreg
201 | 
202 |     user_key_path = r"Environment"
203 |     system_key_path = r"System\CurrentControlSet\Control\Session Manager\Environment"
204 |     new_path = ""
205 | 
206 |     # Read user PATH
207 |     try:
208 |         reg_key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, user_key_path, 0, winreg.KEY_READ)  # type: ignore[reportAttributeAccessIssue]
209 |         user_path, _ = winreg.QueryValueEx(reg_key, "PATH")  # type: ignore[reportAttributeAccessIssue]
210 |         winreg.CloseKey(reg_key)  # type: ignore[reportAttributeAccessIssue]
211 |     except Exception:
212 |         user_path = ""
213 | 
214 |     # Read system PATH
215 |     try:
216 |         reg_key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, system_key_path, 0, winreg.KEY_READ)  # type: ignore[reportAttributeAccessIssue]
217 |         system_path, _ = winreg.QueryValueEx(reg_key, "PATH")  # type: ignore[reportAttributeAccessIssue]
218 |         winreg.CloseKey(reg_key)  # type: ignore[reportAttributeAccessIssue]
219 |     except Exception:
220 |         system_path = ""
221 | 
222 |     # Merge user and system PATHs (system first, then user)
223 |     if system_path and user_path:
224 |         new_path = system_path + ";" + user_path
225 |     elif system_path:
226 |         new_path = system_path
227 |     elif user_path:
228 |         new_path = user_path
229 | 
230 |     if new_path:
231 |         os.environ["PATH"] = new_path
232 | 
233 | 
234 | def get_rclone_version() -> Optional[str]:
235 |     """Get the installed rclone version."""
236 |     if not is_rclone_installed():
237 |         return None
238 | 
239 |     try:
240 |         result = run_command(["rclone", "version"], check=False)
241 |         if result.returncode == 0:
242 |             # Parse version from output (format: "rclone v1.64.0")
243 |             lines = result.stdout.strip().split("\n")
244 |             for line in lines:
245 |                 if line.startswith("rclone v"):
246 |                     return line.split()[1]
247 |         return "unknown"
248 |     except Exception:
249 |         return "unknown"
250 | 
```
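
A usage sketch for the installer module above, showing the call sequence a CLI command might perform (the `ensure_rclone` wrapper is illustrative, not part of the codebase):

```python
from basic_memory.cli.commands.cloud.rclone_installer import (
    RcloneInstallError,
    get_rclone_version,
    install_rclone,
)


def ensure_rclone() -> str:
    """Install rclone if needed and report the detected version."""
    try:
        # No-op when rclone is already on PATH; on Windows, install_rclone()
        # also refreshes PATH for the current session after installing.
        install_rclone()
    except RcloneInstallError as e:
        raise SystemExit(f"rclone setup failed: {e}") from e
    return get_rclone_version() or "unknown"


if __name__ == "__main__":
    print(f"rclone version: {ensure_rclone()}")
```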

--------------------------------------------------------------------------------
/tests/sync/test_watch_service_edge_cases.py:
--------------------------------------------------------------------------------

```python
  1 | """Test edge cases in the WatchService."""
  2 | 
  3 | from unittest.mock import patch
  4 | 
  5 | import pytest
  6 | from watchfiles import Change
  7 | 
  8 | 
  9 | def test_filter_changes_valid_path(watch_service, project_config):
 10 |     """Test the filter_changes method with valid non-hidden paths."""
 11 |     # Regular file path
 12 |     assert (
 13 |         watch_service.filter_changes(Change.added, str(project_config.home / "valid_file.txt"))
 14 |         is True
 15 |     )
 16 | 
 17 |     # Nested path
 18 |     assert (
 19 |         watch_service.filter_changes(
 20 |             Change.added, str(project_config.home / "nested" / "valid_file.txt")
 21 |         )
 22 |         is True
 23 |     )
 24 | 
 25 | 
 26 | def test_filter_changes_hidden_path(watch_service, project_config):
 27 |     """Test the filter_changes method with hidden files/directories."""
 28 |     # Hidden file (starts with dot)
 29 |     assert (
 30 |         watch_service.filter_changes(Change.added, str(project_config.home / ".hidden_file.txt"))
 31 |         is False
 32 |     )
 33 | 
 34 |     # File in hidden directory
 35 |     assert (
 36 |         watch_service.filter_changes(
 37 |             Change.added, str(project_config.home / ".hidden_dir" / "file.txt")
 38 |         )
 39 |         is False
 40 |     )
 41 | 
 42 |     # Deeply nested hidden directory
 43 |     assert (
 44 |         watch_service.filter_changes(
 45 |             Change.added, str(project_config.home / "valid" / ".hidden" / "file.txt")
 46 |         )
 47 |         is False
 48 |     )
 49 | 
 50 | 
 51 | @pytest.mark.asyncio
 52 | async def test_handle_changes_empty_set(watch_service, project_config, test_project):
 53 |     """Test handle_changes with an empty set (no processed files)."""
 54 |     # Mock write_status to avoid file operations
 55 |     with patch.object(watch_service, "write_status", return_value=None):
 56 |         # Capture console output to verify
 57 |         with patch.object(watch_service.console, "print") as mock_print:
 58 |             # Call handle_changes with empty set
 59 |             await watch_service.handle_changes(test_project, set())
 60 | 
 61 |             # Verify divider wasn't printed (processed is empty)
 62 |             mock_print.assert_not_called()
 63 | 
 64 |             # Verify last_scan was updated
 65 |             assert watch_service.state.last_scan is not None
 66 | 
 67 |             # Verify synced_files wasn't changed
 68 |             assert watch_service.state.synced_files == 0
 69 | 
 70 | 
 71 | @pytest.mark.asyncio
 72 | async def test_handle_vim_atomic_write_delete_still_exists(
 73 |     watch_service, project_config, test_project, sync_service
 74 | ):
 75 |     """Test vim atomic write scenario: DELETE event but file still exists on disk."""
 76 |     project_dir = project_config.home
 77 | 
 78 |     # Create initial file and sync it
 79 |     test_file = project_dir / "vim_test.md"
 80 |     initial_content = """---
 81 | type: note
 82 | title: vim test
 83 | ---
 84 | # Vim Test
 85 | Initial content for atomic write test
 86 | """
 87 |     test_file.write_text(initial_content)
 88 |     await sync_service.sync(project_dir)
 89 | 
 90 |     # Get initial entity state
 91 |     initial_entity = await sync_service.entity_repository.get_by_file_path("vim_test.md")
 92 |     assert initial_entity is not None
 93 |     initial_checksum = initial_entity.checksum
 94 | 
 95 |     # Simulate vim's atomic write: modify content but send DELETE event
 96 |     # (vim moves original file, creates new content, then deletes old inode)
 97 |     modified_content = """---
 98 | type: note
 99 | title: vim test
100 | ---
101 | # Vim Test
102 | Modified content after atomic write
103 | """
104 |     test_file.write_text(modified_content)
105 | 
106 |     # Setup DELETE event even though file still exists (vim's atomic write behavior)
107 |     # Use absolute path like the real watch service would
108 |     changes = {(Change.deleted, str(test_file))}
109 | 
110 |     # Handle the change
111 |     await watch_service.handle_changes(test_project, changes)
112 | 
113 |     # Verify the entity still exists and was updated (not deleted)
114 |     entity = await sync_service.entity_repository.get_by_file_path("vim_test.md")
115 |     assert entity is not None
116 |     assert entity.id == initial_entity.id  # Same entity
117 |     assert entity.checksum != initial_checksum  # Checksum should be updated
118 | 
119 |     # Verify the file content was properly synced
120 |     actual_content = test_file.read_text()
121 |     assert "Modified content after atomic write" in actual_content
122 | 
123 |     # Check that correct event was recorded (should be "modified", not "deleted")
124 |     events = [e for e in watch_service.state.recent_events if e.path == "vim_test.md"]
125 |     assert len(events) == 1
126 |     assert events[0].action == "modified"
127 |     assert events[0].status == "success"
128 | 
129 | 
130 | @pytest.mark.asyncio
131 | async def test_handle_true_deletion_vs_vim_atomic(
132 |     watch_service, project_config, test_project, sync_service
133 | ):
134 |     """Test that true deletions are still handled correctly vs vim atomic writes."""
135 |     project_dir = project_config.home
136 | 
137 |     # Create and sync two files
138 |     atomic_file = project_dir / "atomic_test.md"
139 |     delete_file = project_dir / "delete_test.md"
140 | 
141 |     content = """---
142 | type: note
143 | ---
144 | # Test File
145 | Content for testing
146 | """
147 | 
148 |     atomic_file.write_text(content)
149 |     delete_file.write_text(content)
150 |     await sync_service.sync(project_dir)
151 | 
152 |     # For atomic_file: modify content but keep file (vim atomic write scenario)
153 |     modified_content = content.replace("Content for testing", "Modified content")
154 |     atomic_file.write_text(modified_content)
155 | 
156 |     # For delete_file: actually delete it (true deletion)
157 |     delete_file.unlink()
158 | 
159 |     # Setup DELETE events for both files
160 |     # Use absolute paths like the real watch service would
161 |     changes = {
162 |         (Change.deleted, str(atomic_file)),  # File still exists - atomic write
163 |         (Change.deleted, str(delete_file)),  # File deleted - true deletion
164 |     }
165 | 
166 |     # Handle the changes
167 |     await watch_service.handle_changes(test_project, changes)
168 | 
169 |     # Verify atomic_file was treated as modification (still exists in DB)
170 |     atomic_entity = await sync_service.entity_repository.get_by_file_path("atomic_test.md")
171 |     assert atomic_entity is not None
172 | 
173 |     # Verify delete_file was truly deleted (no longer exists in DB)
174 |     delete_entity = await sync_service.entity_repository.get_by_file_path("delete_test.md")
175 |     assert delete_entity is None
176 | 
177 |     # Check events were recorded correctly
178 |     events = watch_service.state.recent_events
179 |     atomic_events = [e for e in events if e.path == "atomic_test.md"]
180 |     delete_events = [e for e in events if e.path == "delete_test.md"]
181 | 
182 |     assert len(atomic_events) == 1
183 |     assert atomic_events[0].action == "modified"
184 | 
185 |     assert len(delete_events) == 1
186 |     assert delete_events[0].action == "deleted"
187 | 
188 | 
189 | @pytest.mark.asyncio
190 | async def test_handle_vim_atomic_write_markdown_with_relations(
191 |     watch_service, project_config, test_project, sync_service
192 | ):
193 |     """Test vim atomic write with markdown files that contain relations."""
194 |     project_dir = project_config.home
195 | 
196 |     # Create target file for relations
197 |     target_file = project_dir / "target.md"
198 |     target_content = """---
199 | type: note
200 | title: Target Note
201 | ---
202 | # Target Note
203 | This is the target of relations.
204 | """
205 |     target_file.write_text(target_content)
206 | 
207 |     # Create main file with relations
208 |     main_file = project_dir / "main.md"
209 |     initial_content = """---
210 | type: note
211 | title: Main Note
212 | ---
213 | # Main Note
214 | This note links to [[Target Note]].
215 | 
216 | - relates_to [[Target Note]]
217 | """
218 |     main_file.write_text(initial_content)
219 |     await sync_service.sync(project_dir)
220 | 
221 |     # Get initial state
222 |     main_entity = await sync_service.entity_repository.get_by_file_path("main.md")
223 |     assert main_entity is not None
224 |     initial_relations = len(main_entity.relations)
225 | 
226 |     # Simulate vim atomic write with content change that adds more relations
227 |     modified_content = """---
228 | type: note
229 | title: Main Note
230 | ---
231 | # Main Note
232 | This note links to [[Target Note]] multiple times.
233 | 
234 | - relates_to [[Target Note]]
235 | - references [[Target Note]]
236 | """
237 |     main_file.write_text(modified_content)
238 | 
239 |     # Setup DELETE event (vim atomic write)
240 |     # Use absolute path like the real watch service would
241 |     changes = {(Change.deleted, str(main_file))}
242 | 
243 |     # Handle the change
244 |     await watch_service.handle_changes(test_project, changes)
245 | 
246 |     # Verify entity still exists and relations were updated
247 |     updated_entity = await sync_service.entity_repository.get_by_file_path("main.md")
248 |     assert updated_entity is not None
249 |     assert updated_entity.id == main_entity.id
250 | 
251 |     # Verify relations were processed correctly
252 |     updated_relations = len(updated_entity.relations)
253 |     assert updated_relations >= initial_relations  # Should have at least as many relations
254 | 
255 |     # Check event was recorded as modification
256 |     events = [e for e in watch_service.state.recent_events if e.path == "main.md"]
257 |     assert len(events) == 1
258 |     assert events[0].action == "modified"
259 | 
260 | 
261 | @pytest.mark.asyncio
262 | async def test_handle_vim_atomic_write_directory_path_ignored(
263 |     watch_service, project_config, test_project
264 | ):
265 |     """Test that directories are properly ignored even in atomic write detection."""
266 |     project_dir = project_config.home
267 | 
268 |     # Create directory
269 |     test_dir = project_dir / "test_directory"
270 |     test_dir.mkdir()
271 | 
272 |     # Setup DELETE event for directory (should be ignored)
273 |     # Use absolute path like the real watch service would
274 |     changes = {(Change.deleted, str(test_dir))}
275 | 
276 |     # Handle the change - should not cause errors
277 |     await watch_service.handle_changes(test_project, changes)
278 | 
279 |     # Verify no events were recorded for the directory
280 |     events = [e for e in watch_service.state.recent_events if "test_directory" in e.path]
281 |     assert len(events) == 0
282 | 
```
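
These tests all pin down one dispatch rule: a DELETE event from the watcher counts as a true deletion only if the path is actually gone from disk. A compact sketch of that rule (assumed shape; the real `handle_changes` adds syncing and event bookkeeping around it):

```python
from pathlib import Path
from typing import Optional


def classify_delete_event(path: Path) -> Optional[str]:
    """Assumed decision rule exercised by the tests above."""
    if path.is_dir():
        return None  # directories never produce note events
    if path.exists():
        # vim-style atomic write: the old inode was deleted, but the file
        # was immediately replaced, so treat the event as a modification
        return "modified"
    return "deleted"  # the file is really gone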

--------------------------------------------------------------------------------
/tests/services/test_directory_service.py:
--------------------------------------------------------------------------------

```python
  1 | """Tests for directory service."""
  2 | 
  3 | import pytest
  4 | 
  5 | from basic_memory.services.directory_service import DirectoryService
  6 | 
  7 | 
  8 | @pytest.mark.asyncio
  9 | async def test_directory_tree_empty(directory_service: DirectoryService):
 10 |     """Test getting empty directory tree."""
 11 | 
 12 |     # When no entities exist, result should just be the root
 13 |     result = await directory_service.get_directory_tree()
 14 |     assert result is not None
 15 |     assert len(result.children) == 0
 16 | 
 17 |     assert result.name == "Root"
 18 |     assert result.directory_path == "/"
 19 |     assert result.has_children is False
 20 | 
 21 | 
 22 | @pytest.mark.asyncio
 23 | async def test_directory_tree(directory_service: DirectoryService, test_graph):
 24 |     # test_graph files:
 25 |     # /
 26 |     # ├── test
 27 |     # │   ├── Connected Entity 1.md
 28 |     # │   ├── Connected Entity 2.md
 29 |     # │   ├── Deep Entity.md
 30 |     # │   ├── Deeper Entity.md
 31 |     # │   └── Root.md
 32 | 
 33 |     result = await directory_service.get_directory_tree()
 34 |     assert result is not None
 35 |     assert len(result.children) == 1
 36 | 
 37 |     node_0 = result.children[0]
 38 |     assert node_0.name == "test"
 39 |     assert node_0.type == "directory"
 40 |     assert node_0.content_type is None
 41 |     assert node_0.entity_id is None
 42 |     assert node_0.entity_type is None
 43 |     assert node_0.title is None
 44 |     assert node_0.directory_path == "/test"
 45 |     assert node_0.has_children is True
 46 |     assert len(node_0.children) == 5
 47 | 
 48 |     # assert one file node
 49 |     node_file = node_0.children[0]
 50 |     assert node_file.name == "Deeper Entity.md"
 51 |     assert node_file.type == "file"
 52 |     assert node_file.content_type == "text/markdown"
 53 |     assert node_file.entity_id == 1
 54 |     assert node_file.entity_type == "deeper"
 55 |     assert node_file.title == "Deeper Entity"
 56 |     assert node_file.permalink == "test/deeper-entity"
 57 |     assert node_file.directory_path == "/test/Deeper Entity.md"
 58 |     assert node_file.file_path == "test/Deeper Entity.md"
 59 |     assert node_file.has_children is False
 60 |     assert len(node_file.children) == 0
 61 | 
 62 | 
 63 | @pytest.mark.asyncio
 64 | async def test_list_directory_empty(directory_service: DirectoryService):
 65 |     """Test listing directory with no entities."""
 66 |     result = await directory_service.list_directory()
 67 |     assert result == []
 68 | 
 69 | 
 70 | @pytest.mark.asyncio
 71 | async def test_list_directory_root(directory_service: DirectoryService, test_graph):
 72 |     """Test listing root directory contents."""
 73 |     result = await directory_service.list_directory(dir_name="/")
 74 | 
 75 |     # Should return immediate children of root (the "test" directory)
 76 |     assert len(result) == 1
 77 |     assert result[0].name == "test"
 78 |     assert result[0].type == "directory"
 79 |     assert result[0].directory_path == "/test"
 80 | 
 81 | 
 82 | @pytest.mark.asyncio
 83 | async def test_list_directory_specific_path(directory_service: DirectoryService, test_graph):
 84 |     """Test listing specific directory contents."""
 85 |     result = await directory_service.list_directory(dir_name="/test")
 86 | 
 87 |     # Should return the 5 files in the test directory
 88 |     assert len(result) == 5
 89 |     file_names = {node.name for node in result}
 90 |     expected_files = {
 91 |         "Connected Entity 1.md",
 92 |         "Connected Entity 2.md",
 93 |         "Deep Entity.md",
 94 |         "Deeper Entity.md",
 95 |         "Root.md",
 96 |     }
 97 |     assert file_names == expected_files
 98 | 
 99 |     # All should be files
100 |     for node in result:
101 |         assert node.type == "file"
102 | 
103 | 
104 | @pytest.mark.asyncio
105 | async def test_list_directory_nonexistent_path(directory_service: DirectoryService, test_graph):
106 |     """Test listing nonexistent directory."""
107 |     result = await directory_service.list_directory(dir_name="/nonexistent")
108 |     assert result == []
109 | 
110 | 
111 | @pytest.mark.asyncio
112 | async def test_list_directory_with_glob_filter(directory_service: DirectoryService, test_graph):
113 |     """Test listing directory with glob pattern filtering."""
114 |     # Filter for files containing "Connected"
115 |     result = await directory_service.list_directory(dir_name="/test", file_name_glob="*Connected*")
116 | 
117 |     assert len(result) == 2
118 |     file_names = {node.name for node in result}
119 |     assert file_names == {"Connected Entity 1.md", "Connected Entity 2.md"}
120 | 
121 | 
122 | @pytest.mark.asyncio
123 | async def test_list_directory_with_markdown_filter(directory_service: DirectoryService, test_graph):
124 |     """Test listing directory with markdown file filter."""
125 |     result = await directory_service.list_directory(dir_name="/test", file_name_glob="*.md")
126 | 
127 |     # All files in test_graph are markdown files
128 |     assert len(result) == 5
129 | 
130 | 
131 | @pytest.mark.asyncio
132 | async def test_list_directory_with_specific_file_filter(
133 |     directory_service: DirectoryService, test_graph
134 | ):
135 |     """Test listing directory with specific file pattern."""
136 |     result = await directory_service.list_directory(dir_name="/test", file_name_glob="Root.*")
137 | 
138 |     assert len(result) == 1
139 |     assert result[0].name == "Root.md"
140 | 
141 | 
142 | @pytest.mark.asyncio
143 | async def test_list_directory_depth_control(directory_service: DirectoryService, test_graph):
144 |     """Test listing directory with depth control."""
145 |     # Depth 1 should only return immediate children
146 |     result_depth_1 = await directory_service.list_directory(dir_name="/", depth=1)
147 |     assert len(result_depth_1) == 1  # Just the "test" directory
148 | 
149 |     # Depth 2 should return directory + its contents
150 |     result_depth_2 = await directory_service.list_directory(dir_name="/", depth=2)
151 |     assert len(result_depth_2) == 6  # "test" directory + 5 files in it
152 | 
153 | 
154 | @pytest.mark.asyncio
155 | async def test_list_directory_path_normalization(directory_service: DirectoryService, test_graph):
156 |     """Test that directory paths are normalized correctly."""
157 |     # Test various path formats that should all be equivalent
158 |     paths_to_test = ["/test", "test", "/test/", "test/"]
159 | 
160 |     base_result = await directory_service.list_directory(dir_name="/test")
161 | 
162 |     for path in paths_to_test:
163 |         result = await directory_service.list_directory(dir_name=path)
164 |         assert len(result) == len(base_result)
165 |         # Compare by name since the objects might be different instances
166 |         result_names = {node.name for node in result}
167 |         base_names = {node.name for node in base_result}
168 |         assert result_names == base_names
169 | 
170 | 
171 | @pytest.mark.asyncio
172 | async def test_list_directory_dot_slash_prefix_normalization(
173 |     directory_service: DirectoryService, test_graph
174 | ):
175 |     """Test that ./ prefixed directory paths are normalized correctly."""
176 |     # This test reproduces the bug report issue where ./dirname fails
177 |     base_result = await directory_service.list_directory(dir_name="/test")
178 | 
179 |     # Test paths with ./ prefix that should be equivalent to /test
180 |     dot_paths_to_test = ["./test", "./test/"]
181 | 
182 |     for path in dot_paths_to_test:
183 |         result = await directory_service.list_directory(dir_name=path)
184 |         assert len(result) == len(base_result), (
185 |             f"Path '{path}' returned {len(result)} results, expected {len(base_result)}"
186 |         )
187 |         # Compare by name since the objects might be different instances
188 |         result_names = {node.name for node in result}
189 |         base_names = {node.name for node in base_result}
190 |         assert result_names == base_names, f"Path '{path}' returned different files than expected"
191 | 
192 | 
193 | @pytest.mark.asyncio
194 | async def test_list_directory_glob_no_matches(directory_service: DirectoryService, test_graph):
195 |     """Test listing directory with glob that matches nothing."""
196 |     result = await directory_service.list_directory(
197 |         dir_name="/test", file_name_glob="*.nonexistent"
198 |     )
199 |     assert result == []
200 | 
201 | 
202 | @pytest.mark.asyncio
203 | async def test_list_directory_default_parameters(directory_service: DirectoryService, test_graph):
204 |     """Test listing directory with default parameters."""
205 |     # Should default to root directory, depth 1, no glob filter
206 |     result = await directory_service.list_directory()
207 | 
208 |     assert len(result) == 1
209 |     assert result[0].name == "test"
210 |     assert result[0].type == "directory"
211 | 
212 | 
213 | @pytest.mark.asyncio
214 | async def test_directory_structure_empty(directory_service: DirectoryService):
215 |     """Test getting empty directory structure."""
216 |     # When no entities exist, result should just be the root
217 |     result = await directory_service.get_directory_structure()
218 |     assert result is not None
219 |     assert len(result.children) == 0
220 | 
221 |     assert result.name == "Root"
222 |     assert result.directory_path == "/"
223 |     assert result.type == "directory"
224 |     assert result.has_children is False
225 | 
226 | 
227 | @pytest.mark.asyncio
228 | async def test_directory_structure(directory_service: DirectoryService, test_graph):
229 |     """Test getting directory structure with folders only (no files)."""
230 |     # test_graph files:
231 |     # /
232 |     # ├── test
233 |     # │   ├── Connected Entity 1.md
234 |     # │   ├── Connected Entity 2.md
235 |     # │   ├── Deep Entity.md
236 |     # │   ├── Deeper Entity.md
237 |     # │   └── Root.md
238 | 
239 |     result = await directory_service.get_directory_structure()
240 |     assert result is not None
241 |     assert len(result.children) == 1
242 | 
243 |     # Should only have the "test" directory, not the files
244 |     node_0 = result.children[0]
245 |     assert node_0.name == "test"
246 |     assert node_0.type == "directory"
247 |     assert node_0.directory_path == "/test"
248 |     assert node_0.has_children is False  # No subdirectories, only files
249 | 
250 |     # Verify no file metadata is present
251 |     assert node_0.content_type is None
252 |     assert node_0.entity_id is None
253 |     assert node_0.entity_type is None
254 |     assert node_0.title is None
255 |     assert node_0.permalink is None
256 | 
257 |     # No file nodes should be present
258 |     assert len(node_0.children) == 0
259 | 
```
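
The normalization tests above assert one invariant: `/test`, `test`, `/test/`, `test/`, and `./test` all resolve to the same canonical directory path. A sketch of logic that satisfies those assertions (assumed, not the service's actual implementation):

```python
def normalize_dir_name(dir_name: str) -> str:
    """Collapse equivalent spellings of a directory path to one canonical form."""
    name = dir_name.strip()
    if name.startswith("./"):
        name = name[2:]
    name = name.strip("/")
    return f"/{name}" if name else "/"


assert {normalize_dir_name(p) for p in ("/test", "test", "/test/", "test/", "./test")} == {"/test"}
```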

--------------------------------------------------------------------------------
/specs/SPEC-14 Cloud Git Versioning & GitHub Backup.md:
--------------------------------------------------------------------------------

```markdown
  1 | ---
  2 | title: 'SPEC-14: Cloud Git Versioning & GitHub Backup'
  3 | type: spec
  4 | permalink: specs/spec-14-cloud-git-versioning
  5 | tags:
  6 | - git
  7 | - github
  8 | - backup
  9 | - versioning
 10 | - cloud
 11 | related:
 12 | - specs/spec-9-multi-project-bisync
 13 | - specs/spec-9-follow-ups-conflict-sync-and-observability
 14 | status: deferred
 15 | ---
 16 | 
 17 | # SPEC-14: Cloud Git Versioning & GitHub Backup
 18 | 
 19 | **Status: DEFERRED** - Postponed until multi-user/teams feature development. Using S3 versioning (SPEC-9.1) for v1 instead.
 20 | 
 21 | ## Why Deferred
 22 | 
 23 | **Original goals can be met with simpler solutions:**
 24 | - Version history → **S3 bucket versioning** (automatic, zero config)
 25 | - Offsite backup → **Tigris global replication** (built-in)
 26 | - Restore capability → **S3 version restore** (`bm cloud restore --version-id`)
 27 | - Collaboration → **Deferred to teams/multi-user feature** (not v1 requirement)
 28 | 
 29 | **Complexity vs value trade-off:**
 30 | - Git integration adds: committer service, puller service, webhooks, LFS, merge conflicts
 31 | - Risk: Loop detection between Git ↔ rclone bisync ↔ local edits
 32 | - S3 versioning gives 80% of value with 5% of complexity
 33 | 
 34 | **When to revisit:**
 35 | - Teams/multi-user features (PR-based collaboration workflow)
 36 | - User requests for commit messages and branch-based workflows
 37 | - Need for fine-grained audit trail beyond S3 object metadata
 38 | 
 39 | ---
 40 | 
 41 | ## Original Specification (for reference)
 42 | 
 43 | ## Why
 44 | Early access users want **transparent version history**, easy **offsite backup**, and a familiar **restore/branching** workflow. Git/GitHub integration would provide:
 45 | - Auditable history of every change (who/when/why)
 46 | - Branches/PRs for review and collaboration
 47 | - Offsite private backup under the user's control
 48 | - Escape hatch: users can always `git clone` their knowledge base
 49 | 
 50 | **Note:** These goals are now addressed via S3 versioning (SPEC-9.1) for single-user use case.
 51 | 
 52 | ## Goals
 53 | - **Transparent**: Users keep using Basic Memory; Git runs behind the scenes.
 54 | - **Private**: Push to a **private GitHub repo** that the user owns (or tenant org).
 55 | - **Reliable**: No data loss, deterministic mapping of filesystem ↔ Git.
 56 | - **Composable**: Plays nicely with SPEC‑9 bisync and upcoming conflict features (SPEC‑9 Follow‑Ups).
 57 | 
 58 | **Non‑Goals (for v1):**
 59 | - Fine‑grained per‑file encryption in Git history (can be layered later).
 60 | - Large media optimization beyond Git LFS defaults.
 61 | 
 62 | ## User Stories
 63 | 1. *As a user*, I connect my GitHub and choose a private backup repo.
 64 | 2. *As a user*, every change I make in cloud (or via bisync) is **committed** and **pushed** automatically.
 65 | 3. *As a user*, I can **restore** a file/folder/project to a prior version.
 66 | 4. *As a power user*, I can **git pull/push** directly to collaborate outside the app.
 67 | 5. *As an admin*, I can enforce repo ownership (tenant org) and least‑privilege scopes.
 68 | 
 69 | ## Scope
 70 | - **In scope:** Full repo backup of `/app/data/` (all projects) with optional selective subpaths.
 71 | - **Out of scope (v1):** Partial shallow mirrors; encrypted Git; cross‑provider SCM (GitLab/Bitbucket).
 72 | 
 73 | ## Architecture
 74 | ### Topology
 75 | - **Authoritative working tree**: `/app/data/` (bucket mount) remains the source of truth (SPEC‑9).
 76 | - **Bare repo** lives alongside: `/app/git/${tenant}/knowledge.git` (server‑side).
 77 | - **Mirror remote**: `github.com/<owner>/<repo>.git` (private).
 78 | 
 79 | ```mermaid
 80 | flowchart LR
 81 |   A[/Users & Agents/] -->|writes/edits| B[/app/data/]
 82 |   B -->|file events| C[Committer Service]
 83 |   C -->|git commit| D[(Bare Repo)]
 84 |   D -->|push| E[(GitHub Private Repo)]
 85 |   E -->|"webhook (push)"| F[Puller Service]
 86 |   F -->|git pull/merge| D
 87 |   D -->|checkout/merge| B
 88 | ```
 89 | 
 90 | ### Services
 91 | - **Committer Service** (daemon):
 92 |   - Watches `/app/data/` for changes (inotify/poll)
 93 |   - Batches changes (debounce e.g. 2–5s)
 94 |   - Writes `.bmmeta` (if present) into commit message trailer (see Follow‑Ups)
 95 |   - `git add -A && git commit -m "chore(sync): <summary>\n\nBM-Meta: <json>"`
 96 |     (the `\n\n` stands for a literal blank line separating the commit subject
 97 |     from the `BM-Meta` trailer in the commit body)
 98 |   - Periodic `git push` to GitHub mirror (configurable interval)
 99 | - **Puller Service** (webhook target):
100 |   - Receives GitHub webhook (push) → `git fetch`
101 |   - **Fast‑forward** merges to `main` only; reject non‑FF unless policy allows
102 |   - Applies changes back to `/app/data/` via clean checkout
103 |   - Emits sync events for Basic Memory indexers
104 | 
105 | ### Auth & Security
106 | - **GitHub App** (recommended): minimal scopes: `contents:read/write`, `metadata:read`, webhook.
107 | - Tenant‑scoped installation; repo created in user account or tenant org.
108 | - Tokens stored in KMS/secret manager; rotated automatically.
109 | - Optional policy: allow only **FF merges** on `main`; non‑FF requires PR.
110 | 
111 | ### Repo Layout
112 | - **Monorepo** (default): one repo per tenant mirrors `/app/data/` with subfolders per project.
113 | - Optional multi‑repo mode (later): one repo per project.
114 | 
115 | ### File Handling
116 | - Honor `.gitignore` generated from `.bmignore.rclone` + BM defaults (cache, temp, state).
117 | - **Git LFS** for large binaries (images, media) — auto track by extension/size threshold.
118 | - Normalize newline + Unicode (aligns with Follow‑Ups).
119 | 
120 | ### Conflict Model
121 | - **Primary concurrency**: SPEC‑9 Follow‑Ups (`.bmmeta`, conflict copies) stays the first line of defense.
122 | - **Git merges** are a **secondary** mechanism:
123 |   - Server only auto‑merges **text** conflicts when trivial (FF or clean 3‑way).
124 |   - Otherwise, create `name (conflict from <branch>, <ts>).md` and surface via events.
125 | 
126 | ### Data Flow vs Bisync
127 | - Bisync (rclone) continues between local sync dir ↔ bucket.
128 | - Git sits **cloud‑side** between bucket and GitHub.
129 | - On **pull** from GitHub → files written to `/app/data/` → picked up by indexers & eventually by bisync back to users.
130 | 
131 | ## CLI & UX
132 | New commands (cloud mode):
133 | - `bm cloud git connect` — Launch GitHub App installation; create private repo; store installation id.
134 | - `bm cloud git status` — Show connected repo, last push time, last webhook delivery, pending commits.
135 | - `bm cloud git push` — Manual push (rarely needed).
136 | - `bm cloud git pull` — Manual pull/FF (admin only by default).
137 | - `bm cloud snapshot -m "message"` — Create a tagged point‑in‑time snapshot (git tag).
138 | - `bm restore <path> --to <commit|tag>` — Restore file/folder/project to prior version.
139 | 
140 | Settings:
141 | - `bm config set git.autoPushInterval=5s`
142 | - `bm config set git.lfs.sizeThreshold=10MB`
143 | - `bm config set git.allowNonFF=false`
144 | 
145 | ## Migration & Backfill
146 | - On connect, if repo empty: initial commit of entire `/app/data/`.
147 | - If repo has content: require **one‑time import** path (clone to staging, reconcile, choose direction).
148 | 
149 | ## Edge Cases
150 | - Massive deletes: gated by SPEC‑9 `max_delete` **and** Git pre‑push hook checks.
151 | - Case changes and rename detection: rely on git rename heuristics + Follow‑Ups move hints.
152 | - Secrets: default ignore common secret patterns; allow custom deny list.
153 | 
154 | ## Telemetry & Observability
155 | - Emit `git_commit`, `git_push`, `git_pull`, `git_conflict` events with correlation IDs.
156 | - `bm sync --report` extended with Git stats (commit count, delta bytes, push latency).
157 | 
158 | ## Phased Plan
159 | ### Phase 0 — Prototype (1 sprint)
160 | - Server: bare repo init + simple committer (batch every 10s) + manual GitHub token.
161 | - CLI: `bm cloud git connect --token <PAT>` (dev‑only)
162 | - Success: edits in `/app/data/` appear in GitHub within 30s.
163 | 
164 | ### Phase 1 — GitHub App & Webhooks (1–2 sprints)
165 | - Switch to GitHub App installs; create private repo; store installation id.
166 | - Committer hardened (debounce 2–5s, backoff, retries).
167 | - Puller service with webhook → FF merge → checkout to `/app/data/`.
168 | - LFS auto‑track + `.gitignore` generation.
169 | - CLI surfaces status + logs.
170 | 
171 | ### Phase 2 — Restore & Snapshots (1 sprint)
172 | - `bm restore` for file/folder/project with dry‑run.
173 | - `bm cloud snapshot` tags + list/inspect.
174 | - Policy: PR‑only non‑FF, admin override.
175 | 
176 | ### Phase 3 — Selective & Multi‑Repo (nice‑to‑have)
177 | - Include/exclude projects; optional per‑project repos.
178 | - Advanced policies (branch protections, required reviews).
179 | 
180 | ## Acceptance Criteria
181 | - Changes to `/app/data/` are committed and pushed automatically within configurable interval (default ≤5s).
182 | - GitHub webhook pull results in updated files in `/app/data/` (FF‑only by default).
183 | - LFS configured and functioning; large files don't bloat history.
184 | - `bm cloud git status` shows connected repo and last push/pull times.
185 | - `bm restore` restores a file/folder to a prior commit with a clear audit trail.
186 | - End‑to‑end works alongside SPEC‑9 bisync without loops or data loss.
187 | 
188 | ## Risks & Mitigations
189 | - **Loop risk (Git ↔ Bisync)**: Writes to `/app/data/` → bisync → local → user edits → back again. *Mitigation*: Debounce, commit squashing, idempotent `.bmmeta` versioning, and watch exclusion windows during pull.
190 | - **Repo bloat**: Lots of binary churn. *Mitigation*: default LFS, size threshold, optional media‑only repo later.
191 | - **Security**: Token leakage. *Mitigation*: GitHub App with short‑lived tokens, KMS storage, scoped permissions.
192 | - **Merge complexity**: Non‑trivial conflicts. *Mitigation*: prefer FF; otherwise conflict copies + events; require PR for non‑FF.
193 | 
194 | ## Open Questions
195 | - Do we default to **monorepo** per tenant, or offer project‑per‑repo at connect time?
196 | - Should `restore` write to a branch and open a PR, or directly modify `main`?
197 | - How do we expose Git history in UI (timeline view) without users dropping to CLI?
198 | 
199 | ## Appendix: Sample Config
200 | ```json
201 | {
202 |   "git": {
203 |     "enabled": true,
204 |     "repo": "https://github.com/<owner>/<repo>.git",
205 |     "autoPushInterval": "5s",
206 |     "allowNonFF": false,
207 |     "lfs": { "sizeThreshold": 10485760 }
208 |   }
209 | }
210 | ```
211 | 
```
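
The Committer Service described in the spec reduces to a small loop: batch filesystem events, then stage, commit with a `BM-Meta` trailer, and push. A hedged sketch under the spec's stated topology (bare repo beside the bucket mount, `/app/data/` as work tree; the tenant id is a placeholder and the debounce plumbing is elided):

```python
import json
import subprocess

# Paths follow the spec's topology; "tenant" is a placeholder.
GIT = ["git", "--git-dir", "/app/git/tenant/knowledge.git", "--work-tree", "/app/data"]


def commit_batch(summary: str, meta: dict) -> None:
    """Stage all changes, commit with a BM-Meta trailer, push to the GitHub mirror."""
    subprocess.run(GIT + ["add", "-A"], check=True)
    message = f"chore(sync): {summary}\n\nBM-Meta: {json.dumps(meta)}"
    # `git commit` exits non-zero when nothing is staged, so a clean tree
    # is tolerated rather than treated as an error.
    subprocess.run(GIT + ["commit", "-m", message], check=False)
    subprocess.run(GIT + ["push", "origin", "main"], check=False)
```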

--------------------------------------------------------------------------------
/tests/schemas/test_memory_url_validation.py:
--------------------------------------------------------------------------------

```python
  1 | """Tests for memory URL validation functionality."""
  2 | 
  3 | import pytest
  4 | from pydantic import ValidationError
  5 | 
  6 | from basic_memory.schemas.memory import (
  7 |     normalize_memory_url,
  8 |     validate_memory_url_path,
  9 |     memory_url,
 10 | )
 11 | 
 12 | 
 13 | class TestValidateMemoryUrlPath:
 14 |     """Test the validate_memory_url_path function."""
 15 | 
 16 |     def test_valid_paths(self):
 17 |         """Test that valid paths pass validation."""
 18 |         valid_paths = [
 19 |             "notes/meeting",
 20 |             "projects/basic-memory",
 21 |             "research/findings-2025",
 22 |             "specs/search",
 23 |             "docs/api-spec",
 24 |             "folder/subfolder/note",
 25 |             "single-note",
 26 |             "notes/with-hyphens",
 27 |             "notes/with_underscores",
 28 |             "notes/with123numbers",
 29 |             "pattern/*",  # Wildcard pattern matching
 30 |             "deep/*/pattern",
 31 |         ]
 32 | 
 33 |         for path in valid_paths:
 34 |             assert validate_memory_url_path(path), f"Path '{path}' should be valid"
 35 | 
 36 |     def test_invalid_empty_paths(self):
 37 |         """Test that empty/whitespace paths fail validation."""
 38 |         invalid_paths = [
 39 |             "",
 40 |             "   ",
 41 |             "\t",
 42 |             "\n",
 43 |             "  \n  ",
 44 |         ]
 45 | 
 46 |         for path in invalid_paths:
 47 |             assert not validate_memory_url_path(path), f"Path '{path}' should be invalid"
 48 | 
 49 |     def test_invalid_double_slashes(self):
 50 |         """Test that paths with double slashes fail validation."""
 51 |         invalid_paths = [
 52 |             "notes//meeting",
 53 |             "//root",
 54 |             "folder//subfolder/note",
 55 |             "path//with//multiple//doubles",
 56 |             "memory//test",
 57 |         ]
 58 | 
 59 |         for path in invalid_paths:
 60 |             assert not validate_memory_url_path(path), (
 61 |                 f"Path '{path}' should be invalid (double slashes)"
 62 |             )
 63 | 
 64 |     def test_invalid_protocol_schemes(self):
 65 |         """Test that paths with protocol schemes fail validation."""
 66 |         invalid_paths = [
 67 |             "http://example.com",
 68 |             "https://example.com/path",
 69 |             "file://local/path",
 70 |             "ftp://server.com",
 71 |             "invalid://test",
 72 |             "custom://scheme",
 73 |         ]
 74 | 
 75 |         for path in invalid_paths:
 76 |             assert not validate_memory_url_path(path), (
 77 |                 f"Path '{path}' should be invalid (protocol scheme)"
 78 |             )
 79 | 
 80 |     def test_invalid_characters(self):
 81 |         """Test that paths with invalid characters fail validation."""
 82 |         invalid_paths = [
 83 |             "notes<with>brackets",
 84 |             'notes"with"quotes',
 85 |             "notes|with|pipes",
 86 |             "notes?with?questions",
 87 |         ]
 88 | 
 89 |         for path in invalid_paths:
 90 |             assert not validate_memory_url_path(path), (
 91 |                 f"Path '{path}' should be invalid (invalid chars)"
 92 |             )
 93 | 
 94 | 
 95 | class TestNormalizeMemoryUrl:
 96 |     """Test the normalize_memory_url function."""
 97 | 
 98 |     def test_valid_normalization(self):
 99 |         """Test that valid URLs are properly normalized."""
100 |         test_cases = [
101 |             ("specs/search", "memory://specs/search"),
102 |             ("memory://specs/search", "memory://specs/search"),
103 |             ("notes/meeting-2025", "memory://notes/meeting-2025"),
104 |             ("memory://notes/meeting-2025", "memory://notes/meeting-2025"),
105 |             ("pattern/*", "memory://pattern/*"),
106 |             ("memory://pattern/*", "memory://pattern/*"),
107 |         ]
108 | 
109 |         for input_url, expected in test_cases:
110 |             result = normalize_memory_url(input_url)
111 |             assert result == expected, (
112 |                 f"normalize_memory_url('{input_url}') should return '{expected}', got '{result}'"
113 |             )
114 | 
115 |     def test_empty_url(self):
116 |         """Test that empty URLs raise ValueError."""
117 |         with pytest.raises(ValueError, match="cannot be empty"):
118 |             normalize_memory_url(None)
119 |         with pytest.raises(ValueError, match="cannot be empty"):
120 |             normalize_memory_url("")
121 | 
122 |     def test_invalid_double_slashes(self):
123 |         """Test that URLs with double slashes raise ValueError."""
124 |         invalid_urls = [
125 |             "memory//test",
126 |             "notes//meeting",
127 |             "//root",
128 |             "memory://path//with//doubles",
129 |         ]
130 | 
131 |         for url in invalid_urls:
132 |             with pytest.raises(ValueError, match="contains double slashes"):
133 |                 normalize_memory_url(url)
134 | 
135 |     def test_invalid_protocol_schemes(self):
136 |         """Test that URLs with other protocol schemes raise ValueError."""
137 |         invalid_urls = [
138 |             "http://example.com",
139 |             "https://example.com/path",
140 |             "file://local/path",
141 |             "invalid://test",
142 |         ]
143 | 
144 |         for url in invalid_urls:
145 |             with pytest.raises(ValueError, match="contains protocol scheme"):
146 |                 normalize_memory_url(url)
147 | 
148 |     def test_whitespace_only(self):
149 |         """Test that whitespace-only URLs raise ValueError."""
150 |         whitespace_urls = [
151 |             "   ",
152 |             "\t",
153 |             "\n",
154 |             "  \n  ",
155 |         ]
156 | 
157 |         for url in whitespace_urls:
158 |             with pytest.raises(ValueError, match="cannot be empty or whitespace"):
159 |                 normalize_memory_url(url)
160 | 
161 |     def test_invalid_characters(self):
162 |         """Test that URLs with invalid characters raise ValueError."""
163 |         invalid_urls = [
164 |             "notes<brackets>",
165 |             'notes"quotes"',
166 |             "notes|pipes|",
167 |             "notes?questions?",
168 |         ]
169 | 
170 |         for url in invalid_urls:
171 |             with pytest.raises(ValueError, match="contains invalid characters"):
172 |                 normalize_memory_url(url)
173 | 
174 | 
175 | class TestMemoryUrlPydanticValidation:
176 |     """Test the MemoryUrl Pydantic type validation."""
177 | 
178 |     def test_valid_urls_pass_validation(self):
179 |         """Test that valid URLs pass Pydantic validation."""
180 |         valid_urls = [
181 |             "specs/search",
182 |             "memory://specs/search",
183 |             "notes/meeting-2025",
184 |             "projects/basic-memory/docs",
185 |             "pattern/*",
186 |         ]
187 | 
188 |         for url in valid_urls:
189 |             # Should not raise an exception
190 |             result = memory_url.validate_python(url)
191 |             assert result.startswith("memory://"), (
192 |                 f"Validated URL should start with memory://, got {result}"
193 |             )
194 | 
195 |     def test_invalid_urls_fail_validation(self):
196 |         """Test that invalid URLs fail Pydantic validation with clear errors."""
197 |         invalid_test_cases = [
198 |             ("memory//test", "double slashes"),
199 |             ("invalid://test", "protocol scheme"),
200 |             ("   ", "empty or whitespace"),
201 |             ("notes<brackets>", "invalid characters"),
202 |         ]
203 | 
204 |         for url, expected_error in invalid_test_cases:
205 |             with pytest.raises(ValidationError) as exc_info:
206 |                 memory_url.validate_python(url)
207 | 
208 |             error_msg = str(exc_info.value)
209 |             assert "value_error" in error_msg, f"Should be a value_error for '{url}'"
210 | 
211 |     def test_empty_string_fails_validation(self):
212 |         """Test that empty strings fail validation."""
213 |         with pytest.raises(ValidationError, match="cannot be empty"):
214 |             memory_url.validate_python("")
215 | 
216 |     def test_very_long_urls_fail_maxlength(self):
217 |         """Test that very long URLs fail MaxLen validation."""
218 |         long_url = "a" * 3000  # Exceeds MaxLen(2028)
219 |         with pytest.raises(ValidationError, match="at most 2028"):
220 |             memory_url.validate_python(long_url)
221 | 
222 |     def test_whitespace_stripped(self):
223 |         """Test that whitespace is properly stripped."""
224 |         urls_with_whitespace = [
225 |             "  specs/search  ",
226 |             "\tprojects/basic-memory\t",
227 |             "\nnotes/meeting\n",
228 |         ]
229 | 
230 |         for url in urls_with_whitespace:
231 |             result = memory_url.validate_python(url)
232 |             assert not result.startswith(" ") and not result.endswith(" "), (
233 |                 f"Whitespace should be stripped from '{url}'"
234 |             )
235 |             assert "memory://" in result, "Result should contain memory:// prefix"
236 | 
237 | 
238 | class TestMemoryUrlErrorMessages:
239 |     """Test that error messages are clear and helpful."""
240 | 
241 |     def test_double_slash_error_message(self):
242 |         """Test specific error message for double slashes."""
243 |         with pytest.raises(ValueError) as exc_info:
244 |             normalize_memory_url("memory//test")
245 | 
246 |         error_msg = str(exc_info.value)
247 |         assert "memory//test" in error_msg
248 |         assert "double slashes" in error_msg
249 | 
250 |     def test_protocol_scheme_error_message(self):
251 |         """Test specific error message for protocol schemes."""
252 |         with pytest.raises(ValueError) as exc_info:
253 |             normalize_memory_url("http://example.com")
254 | 
255 |         error_msg = str(exc_info.value)
256 |         assert "http://example.com" in error_msg
257 |         assert "protocol scheme" in error_msg
258 | 
259 |     def test_empty_error_message(self):
260 |         """Test specific error message for empty paths."""
261 |         with pytest.raises(ValueError) as exc_info:
262 |             normalize_memory_url("   ")
263 | 
264 |         error_msg = str(exc_info.value)
265 |         assert "empty or whitespace" in error_msg
266 | 
267 |     def test_invalid_characters_error_message(self):
268 |         """Test specific error message for invalid characters."""
269 |         with pytest.raises(ValueError) as exc_info:
270 |             normalize_memory_url("notes<brackets>")
271 | 
272 |         error_msg = str(exc_info.value)
273 |         assert "notes<brackets>" in error_msg
274 |         assert "invalid characters" in error_msg
275 | 
```

--------------------------------------------------------------------------------
/v15-docs/default-project-mode.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Default Project Mode
  2 | 
  3 | **Status**: New Feature
  4 | **PR**: #298 (SPEC-6)
  5 | **Related**: explicit-project-parameter.md
  6 | 
  7 | ## What's New
  8 | 
  9 | v0.15.0 introduces `default_project_mode` - a configuration option that simplifies single-project workflows by automatically using your default project when no explicit project parameter is provided.
 10 | 
 11 | ## Quick Start
 12 | 
 13 | ### Enable Default Project Mode
 14 | 
 15 | Edit `~/.basic-memory/config.json`:
 16 | 
 17 | ```json
 18 | {
 19 |   "default_project": "main",
 20 |   "default_project_mode": true,
 21 |   "projects": {
 22 |     "main": "/Users/you/basic-memory"
 23 |   }
 24 | }
 25 | ```
 26 | 
 27 | ### Now Tools Work Without Project Parameter
 28 | 
 29 | ```python
 30 | # Before (explicit project required)
 31 | await write_note("Note", "Content", "folder", project="main")
 32 | 
 33 | # After (with default_project_mode: true)
 34 | await write_note("Note", "Content", "folder")  # Uses "main" automatically
 35 | ```
 36 | 
 37 | ## Configuration Options
 38 | 
 39 | | Option | Type | Default | Description |
 40 | |--------|------|---------|-------------|
 41 | | `default_project_mode` | boolean | `false` | Enable auto-fallback to default project |
 42 | | `default_project` | string | `"main"` | Which project to use as default |
 43 | 
 44 | ## How It Works
 45 | 
 46 | ### Three-Tier Project Resolution
 47 | 
 48 | When a tool is called, Basic Memory resolves the project in this order:
 49 | 
 50 | 1. **CLI Constraint** (Highest): `bm --project work-notes` forces all tools to use "work-notes"
 51 | 2. **Explicit Parameter** (Medium): `project="specific"` in tool call
 52 | 3. **Default Mode** (Lowest): Uses `default_project` if `default_project_mode: true`
 53 | 
 54 | ### Examples
 55 | 
 56 | **With default_project_mode: false (default):**
 57 | ```python
 58 | # Must specify project explicitly
 59 | await search_notes("query", project="main")  # ✓ Works
 60 | await search_notes("query")                  # ✗ Error: project required
 61 | ```
 62 | 
 63 | **With default_project_mode: true:**
 64 | ```python
 65 | # Project parameter is optional
 66 | await search_notes("query")                  # ✓ Uses default_project
 67 | await search_notes("query", project="work")  # ✓ Explicit override works
 68 | ```
 69 | 
 70 | ## Use Cases
 71 | 
 72 | ### Single-Project Users
 73 | 
 74 | **Best for:**
 75 | - Users who maintain one primary knowledge base
 76 | - Personal knowledge management
 77 | - Single-purpose documentation
 78 | 
 79 | **Configuration:**
 80 | ```json
 81 | {
 82 |   "default_project": "main",
 83 |   "default_project_mode": true,
 84 |   "projects": {
 85 |     "main": "/Users/you/basic-memory"
 86 |   }
 87 | }
 88 | ```
 89 | 
 90 | **Benefits:**
 91 | - Simpler tool calls
 92 | - Less verbose for AI assistants
 93 | - Familiar workflow (like v0.14.x)
 94 | 
 95 | ### Multi-Project Users
 96 | 
 97 | **Best for:**
 98 | - Multiple distinct knowledge bases (work, personal, research)
 99 | - Switching contexts frequently
100 | - Team collaboration with separate projects
101 | 
102 | **Configuration:**
103 | ```json
104 | {
105 |   "default_project": "main",
106 |   "default_project_mode": false,
107 |   "projects": {
108 |     "work": "/Users/you/work-kb",
109 |     "personal": "/Users/you/personal-kb",
110 |     "research": "/Users/you/research-kb"
111 |   }
112 | }
113 | ```
114 | 
115 | **Benefits:**
116 | - Explicit project selection prevents mistakes
117 | - Clear which knowledge base is being accessed
118 | - Better for context switching
119 | 
120 | ## Workflow Examples
121 | 
122 | ### Single-Project Workflow
123 | 
124 | ```python
125 | # config.json: default_project_mode: true, default_project: "main"
126 | 
127 | # Write without specifying project
128 | await write_note(
129 |     title="Meeting Notes",
130 |     content="# Team Sync\n...",
131 |     folder="meetings"
132 | )  # → Saved to "main" project
133 | 
134 | # Search across default project
135 | results = await search_notes("quarterly goals")
136 | # → Searches "main" project
137 | 
138 | # Build context from default project
139 | context = await build_context("memory://goals/q4-2024")
140 | # → Uses "main" project
141 | ```
142 | 
143 | ### Multi-Project with Explicit Selection
144 | 
145 | ```python
146 | # config.json: default_project_mode: false
147 | 
148 | # Work project
149 | await write_note(
150 |     title="Architecture Decision",
151 |     content="# ADR-001\n...",
152 |     folder="decisions",
153 |     project="work"
154 | )
155 | 
156 | # Personal project
157 | await write_note(
158 |     title="Book Notes",
159 |     content="# Design Patterns\n...",
160 |     folder="reading",
161 |     project="personal"
162 | )
163 | 
164 | # Research project
165 | await search_notes(
166 |     query="machine learning",
167 |     project="research"
168 | )
169 | ```
170 | 
171 | ### Hybrid: Default with Occasional Override
172 | 
173 | ```python
174 | # config.json: default_project_mode: true, default_project: "personal"
175 | 
176 | # Most operations use personal (default)
177 | await write_note("Daily Journal", "...", "journal")
178 | # → Saved to "personal"
179 | 
180 | # Explicitly use work project when needed
181 | await write_note(
182 |     title="Sprint Planning",
183 |     content="...",
184 |     folder="planning",
185 |     project="work"  # Override default
186 | )
187 | # → Saved to "work"
188 | 
189 | # Back to default
190 | await search_notes("goals")
191 | # → Searches "personal"
192 | ```
193 | 
194 | ## Migration Guide
195 | 
196 | ### From v0.14.x (Implicit Project)
197 | 
198 | v0.14.x had implicit project context via middleware. To get similar behavior:
199 | 
200 | **Enable default_project_mode:**
201 | ```json
202 | {
203 |   "default_project": "main",
204 |   "default_project_mode": true
205 | }
206 | ```
207 | 
 208 | Now tools work without an explicit project parameter (as in v0.14.x).
209 | 
210 | ### From v0.15.0 Explicit-Only
211 | 
212 | If you started with v0.15.0 using explicit projects:
213 | 
 214 | **Keep current behavior** (or omit the key entirely; `false` is the default):
 215 | ```json
 216 | {
 217 |   "default_project_mode": false
218 | }
219 | ```
220 | 
221 | **Or simplify for single project:**
222 | ```json
223 | {
224 |   "default_project": "main",
225 |   "default_project_mode": true
226 | }
227 | ```
228 | 
229 | ## LLM Integration
230 | 
231 | ### Claude Desktop
232 | 
233 | Claude can detect and use default_project_mode:
234 | 
235 | **Auto-detection:**
236 | ```python
237 | # Claude reads config
238 | config = read_config()
239 | 
240 | if config.get("default_project_mode"):
241 |     # Use simple calls
242 |     await write_note("Note", "Content", "folder")
243 | else:
244 |     # Discover and use explicit project
245 |     projects = await list_memory_projects()
246 |     await write_note("Note", "Content", "folder", project=projects[0].name)
247 | ```
248 | 
249 | ### Custom MCP Clients
250 | 
251 | ```python
252 | from basic_memory.config import ConfigManager
253 | 
254 | config = ConfigManager().config
255 | 
256 | if config.default_project_mode:
257 |     # Project parameter optional
258 |     result = await mcp_tool(arg1, arg2)
259 | else:
260 |     # Project parameter required
261 |     result = await mcp_tool(arg1, arg2, project="name")
262 | ```
263 | 
264 | ## Error Handling
265 | 
266 | ### Missing Project (default_project_mode: false)
267 | 
268 | ```python
269 | try:
270 |     results = await search_notes("query")
271 | except ValueError as e:
 272 |     print(f"Error: {e}")  # project parameter is required
273 |     # Show available projects
274 |     projects = await list_memory_projects()
275 |     print(f"Available: {[p.name for p in projects]}")
276 | ```
277 | 
278 | ### Invalid Default Project
279 | 
280 | ```json
281 | {
282 |   "default_project": "nonexistent",
283 |   "default_project_mode": true
284 | }
285 | ```
286 | 
 287 | **Result:** Falls back to the "main" project if the configured default doesn't exist.
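
A minimal sketch of that fallback (hypothetical helper name; the real lookup may differ):

```python
def effective_default(config) -> str:
    # Hypothetical fallback: use "main" when the configured
    # default is not a known project
    if config.default_project in config.projects:
        return config.default_project
    return "main"
```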
288 | 
289 | ## Configuration Management
290 | 
291 | ### Update Config
292 | 
293 | ```bash
294 | # Edit directly
295 | vim ~/.basic-memory/config.json
296 | 
297 | # Or use CLI (if available)
298 | bm config set default_project_mode true
299 | bm config set default_project main
300 | ```
301 | 
302 | ### Verify Config
303 | 
304 | ```python
305 | from basic_memory.config import ConfigManager
306 | 
307 | config = ConfigManager().config
308 | print(f"Default mode: {config.default_project_mode}")
309 | print(f"Default project: {config.default_project}")
310 | print(f"Projects: {list(config.projects.keys())}")
311 | ```
312 | 
313 | ### Environment Override
314 | 
315 | ```bash
316 | # Override via environment
317 | export BASIC_MEMORY_DEFAULT_PROJECT_MODE=true
318 | export BASIC_MEMORY_DEFAULT_PROJECT=work
319 | 
320 | # Now default_project_mode enabled for this session
321 | ```
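
To confirm the override took effect, a quick check (a sketch; assumes the environment variables are read when the config loads):

```python
import os

os.environ["BASIC_MEMORY_DEFAULT_PROJECT_MODE"] = "true"  # set before loading

from basic_memory.config import ConfigManager

config = ConfigManager().config
print(config.default_project_mode)  # expected: True for this session
```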
322 | 
323 | ## Best Practices
324 | 
325 | 1. **Choose based on workflow:**
326 |    - Single project → enable default_project_mode
327 |    - Multiple projects → keep explicit (false)
328 | 
 329 | 2. **Document your choice:**
 330 |    - Note why in your README or team docs (JSON does not support comments)
331 | 
 332 | 3. **Stay consistent with your team:**
 333 |    - Agree on a project mode for shared setups
334 | 
335 | 4. **Test both modes:**
336 |    - Try each to see what feels natural
337 | 
338 | 5. **Use CLI constraints when needed:**
339 |    - `bm --project work-notes` overrides everything
340 | 
341 | ## Troubleshooting
342 | 
343 | ### Tools Not Using Default Project
344 | 
 345 | **Problem:** `default_project_mode: true` is set, but tools still require a project parameter
346 | 
347 | **Check:**
348 | ```bash
349 | # Verify config
 350 | grep default_project_mode ~/.basic-memory/config.json
351 | 
352 | # Should show: "default_project_mode": true
353 | ```
354 | 
 355 | **Solution:** Restart the MCP server so it reloads the config
356 | 
357 | ### Wrong Project Being Used
358 | 
 359 | **Problem:** Tools are using an unexpected project
360 | 
361 | **Check resolution order:**
362 | 1. CLI constraint (`--project` flag)
363 | 2. Explicit parameter in tool call
364 | 3. Default project (if mode enabled)
365 | 
366 | **Solution:** Check for CLI constraints or explicit parameters
367 | 
368 | ### Config Not Loading
369 | 
370 | **Problem:** Changes to config.json not taking effect
371 | 
372 | **Solution:**
 373 | Restart the MCP server, or reload programmatically:
 374 | ```python
 375 | from basic_memory import config as config_module
 376 | 
 377 | config_module._config = None  # Clear cached config so it reloads
 378 | ```
379 | 
380 | ## Technical Details
381 | 
382 | ### Implementation
383 | 
384 | ```python
385 | class BasicMemoryConfig(BaseSettings):
386 |     default_project: str = Field(
387 |         default="main",
388 |         description="Name of the default project to use"
389 |     )
390 | 
391 |     default_project_mode: bool = Field(
392 |         default=False,
393 |         description="When True, MCP tools automatically use default_project when no project parameter is specified"
394 |     )
395 | ```
396 | 
397 | ### Project Resolution Logic
398 | 
399 | ```python
400 | def resolve_project(
401 |     explicit_project: Optional[str] = None,
402 |     cli_project: Optional[str] = None,
 403 |     config: Optional[BasicMemoryConfig] = None,
404 | ) -> str:
405 |     # 1. CLI constraint (highest priority)
406 |     if cli_project:
407 |         return cli_project
408 | 
409 |     # 2. Explicit parameter
410 |     if explicit_project:
411 |         return explicit_project
412 | 
413 |     # 3. Default mode (lowest priority)
 414 |     if config and config.default_project_mode:
415 |         return config.default_project
416 | 
417 |     # 4. No project found
418 |     raise ValueError("Project parameter required")
419 | ```
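
A quick illustration of the precedence using the sketch above (constructor arguments assumed for illustration):

```python
cfg = BasicMemoryConfig(default_project="main", default_project_mode=True)

resolve_project(cli_project="work", explicit_project="personal", config=cfg)  # -> "work"
resolve_project(explicit_project="personal", config=cfg)                      # -> "personal"
resolve_project(config=cfg)                                                   # -> "main"
```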
420 | 
421 | ## See Also
422 | 
423 | - `explicit-project-parameter.md` - Why explicit project is required
424 | - SPEC-6: Explicit Project Parameter Architecture
425 | - MCP tools documentation
426 | 
```