This is page 18 of 19. Use http://codebase.md/basicmachines-co/basic-memory?lines=false&page={x} to view the full context.
# Directory Structure
```
├── .claude
│ ├── commands
│ │ ├── release
│ │ │ ├── beta.md
│ │ │ ├── changelog.md
│ │ │ ├── release-check.md
│ │ │ └── release.md
│ │ ├── spec.md
│ │ └── test-live.md
│ └── settings.json
├── .dockerignore
├── .env.example
├── .github
│ ├── dependabot.yml
│ ├── ISSUE_TEMPLATE
│ │ ├── bug_report.md
│ │ ├── config.yml
│ │ ├── documentation.md
│ │ └── feature_request.md
│ └── workflows
│ ├── claude-code-review.yml
│ ├── claude-issue-triage.yml
│ ├── claude.yml
│ ├── dev-release.yml
│ ├── docker.yml
│ ├── pr-title.yml
│ ├── release.yml
│ └── test.yml
├── .gitignore
├── .python-version
├── CHANGELOG.md
├── CITATION.cff
├── CLA.md
├── CLAUDE.md
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── docker-compose-postgres.yml
├── docker-compose.yml
├── Dockerfile
├── docs
│ ├── ai-assistant-guide-extended.md
│ ├── ARCHITECTURE.md
│ ├── character-handling.md
│ ├── cloud-cli.md
│ ├── Docker.md
│ └── testing-coverage.md
├── justfile
├── LICENSE
├── llms-install.md
├── pyproject.toml
├── README.md
├── SECURITY.md
├── smithery.yaml
├── specs
│ ├── SPEC-1 Specification-Driven Development Process.md
│ ├── SPEC-10 Unified Deployment Workflow and Event Tracking.md
│ ├── SPEC-11 Basic Memory API Performance Optimization.md
│ ├── SPEC-12 OpenTelemetry Observability.md
│ ├── SPEC-13 CLI Authentication with Subscription Validation.md
│ ├── SPEC-14 Cloud Git Versioning & GitHub Backup.md
│ ├── SPEC-14- Cloud Git Versioning & GitHub Backup.md
│ ├── SPEC-15 Configuration Persistence via Tigris for Cloud Tenants.md
│ ├── SPEC-16 MCP Cloud Service Consolidation.md
│ ├── SPEC-17 Semantic Search with ChromaDB.md
│ ├── SPEC-18 AI Memory Management Tool.md
│ ├── SPEC-19 Sync Performance and Memory Optimization.md
│ ├── SPEC-2 Slash Commands Reference.md
│ ├── SPEC-20 Simplified Project-Scoped Rclone Sync.md
│ ├── SPEC-3 Agent Definitions.md
│ ├── SPEC-4 Notes Web UI Component Architecture.md
│ ├── SPEC-5 CLI Cloud Upload via WebDAV.md
│ ├── SPEC-6 Explicit Project Parameter Architecture.md
│ ├── SPEC-7 POC to spike Tigris Turso for local access to cloud data.md
│ ├── SPEC-8 TigrisFS Integration.md
│ ├── SPEC-9 Multi-Project Bidirectional Sync Architecture.md
│ ├── SPEC-9 Signed Header Tenant Information.md
│ └── SPEC-9-1 Follow-Ups- Conflict, Sync, and Observability.md
├── src
│ └── basic_memory
│ ├── __init__.py
│ ├── alembic
│ │ ├── alembic.ini
│ │ ├── env.py
│ │ ├── migrations.py
│ │ ├── script.py.mako
│ │ └── versions
│ │ ├── 314f1ea54dc4_add_postgres_full_text_search_support_.py
│ │ ├── 3dae7c7b1564_initial_schema.py
│ │ ├── 502b60eaa905_remove_required_from_entity_permalink.py
│ │ ├── 5fe1ab1ccebe_add_projects_table.py
│ │ ├── 647e7a75e2cd_project_constraint_fix.py
│ │ ├── 6830751f5fb6_merge_multiple_heads.py
│ │ ├── 9d9c1cb7d8f5_add_mtime_and_size_columns_to_entity_.py
│ │ ├── a1b2c3d4e5f6_fix_project_foreign_keys.py
│ │ ├── a2b3c4d5e6f7_add_search_index_entity_cascade.py
│ │ ├── b3c3938bacdb_relation_to_name_unique_index.py
│ │ ├── cc7172b46608_update_search_index_schema.py
│ │ ├── e7e1f4367280_add_scan_watermark_tracking_to_project.py
│ │ ├── f8a9b2c3d4e5_add_pg_trgm_for_fuzzy_link_resolution.py
│ │ └── g9a0b3c4d5e6_add_external_id_to_project_and_entity.py
│ ├── api
│ │ ├── __init__.py
│ │ ├── app.py
│ │ ├── container.py
│ │ ├── routers
│ │ │ ├── __init__.py
│ │ │ ├── directory_router.py
│ │ │ ├── importer_router.py
│ │ │ ├── knowledge_router.py
│ │ │ ├── management_router.py
│ │ │ ├── memory_router.py
│ │ │ ├── project_router.py
│ │ │ ├── prompt_router.py
│ │ │ ├── resource_router.py
│ │ │ ├── search_router.py
│ │ │ └── utils.py
│ │ ├── template_loader.py
│ │ └── v2
│ │ ├── __init__.py
│ │ └── routers
│ │ ├── __init__.py
│ │ ├── directory_router.py
│ │ ├── importer_router.py
│ │ ├── knowledge_router.py
│ │ ├── memory_router.py
│ │ ├── project_router.py
│ │ ├── prompt_router.py
│ │ ├── resource_router.py
│ │ └── search_router.py
│ ├── cli
│ │ ├── __init__.py
│ │ ├── app.py
│ │ ├── auth.py
│ │ ├── commands
│ │ │ ├── __init__.py
│ │ │ ├── cloud
│ │ │ │ ├── __init__.py
│ │ │ │ ├── api_client.py
│ │ │ │ ├── bisync_commands.py
│ │ │ │ ├── cloud_utils.py
│ │ │ │ ├── core_commands.py
│ │ │ │ ├── rclone_commands.py
│ │ │ │ ├── rclone_config.py
│ │ │ │ ├── rclone_installer.py
│ │ │ │ ├── upload_command.py
│ │ │ │ └── upload.py
│ │ │ ├── command_utils.py
│ │ │ ├── db.py
│ │ │ ├── format.py
│ │ │ ├── import_chatgpt.py
│ │ │ ├── import_claude_conversations.py
│ │ │ ├── import_claude_projects.py
│ │ │ ├── import_memory_json.py
│ │ │ ├── mcp.py
│ │ │ ├── project.py
│ │ │ ├── status.py
│ │ │ ├── telemetry.py
│ │ │ └── tool.py
│ │ ├── container.py
│ │ └── main.py
│ ├── config.py
│ ├── db.py
│ ├── deps
│ │ ├── __init__.py
│ │ ├── config.py
│ │ ├── db.py
│ │ ├── importers.py
│ │ ├── projects.py
│ │ ├── repositories.py
│ │ └── services.py
│ ├── deps.py
│ ├── file_utils.py
│ ├── ignore_utils.py
│ ├── importers
│ │ ├── __init__.py
│ │ ├── base.py
│ │ ├── chatgpt_importer.py
│ │ ├── claude_conversations_importer.py
│ │ ├── claude_projects_importer.py
│ │ ├── memory_json_importer.py
│ │ └── utils.py
│ ├── markdown
│ │ ├── __init__.py
│ │ ├── entity_parser.py
│ │ ├── markdown_processor.py
│ │ ├── plugins.py
│ │ ├── schemas.py
│ │ └── utils.py
│ ├── mcp
│ │ ├── __init__.py
│ │ ├── async_client.py
│ │ ├── clients
│ │ │ ├── __init__.py
│ │ │ ├── directory.py
│ │ │ ├── knowledge.py
│ │ │ ├── memory.py
│ │ │ ├── project.py
│ │ │ ├── resource.py
│ │ │ └── search.py
│ │ ├── container.py
│ │ ├── project_context.py
│ │ ├── prompts
│ │ │ ├── __init__.py
│ │ │ ├── ai_assistant_guide.py
│ │ │ ├── continue_conversation.py
│ │ │ ├── recent_activity.py
│ │ │ ├── search.py
│ │ │ └── utils.py
│ │ ├── resources
│ │ │ ├── ai_assistant_guide.md
│ │ │ └── project_info.py
│ │ ├── server.py
│ │ └── tools
│ │ ├── __init__.py
│ │ ├── build_context.py
│ │ ├── canvas.py
│ │ ├── chatgpt_tools.py
│ │ ├── delete_note.py
│ │ ├── edit_note.py
│ │ ├── list_directory.py
│ │ ├── move_note.py
│ │ ├── project_management.py
│ │ ├── read_content.py
│ │ ├── read_note.py
│ │ ├── recent_activity.py
│ │ ├── search.py
│ │ ├── utils.py
│ │ ├── view_note.py
│ │ └── write_note.py
│ ├── models
│ │ ├── __init__.py
│ │ ├── base.py
│ │ ├── knowledge.py
│ │ ├── project.py
│ │ └── search.py
│ ├── project_resolver.py
│ ├── repository
│ │ ├── __init__.py
│ │ ├── entity_repository.py
│ │ ├── observation_repository.py
│ │ ├── postgres_search_repository.py
│ │ ├── project_info_repository.py
│ │ ├── project_repository.py
│ │ ├── relation_repository.py
│ │ ├── repository.py
│ │ ├── search_index_row.py
│ │ ├── search_repository_base.py
│ │ ├── search_repository.py
│ │ └── sqlite_search_repository.py
│ ├── runtime.py
│ ├── schemas
│ │ ├── __init__.py
│ │ ├── base.py
│ │ ├── cloud.py
│ │ ├── delete.py
│ │ ├── directory.py
│ │ ├── importer.py
│ │ ├── memory.py
│ │ ├── project_info.py
│ │ ├── prompt.py
│ │ ├── request.py
│ │ ├── response.py
│ │ ├── search.py
│ │ ├── sync_report.py
│ │ └── v2
│ │ ├── __init__.py
│ │ ├── entity.py
│ │ └── resource.py
│ ├── services
│ │ ├── __init__.py
│ │ ├── context_service.py
│ │ ├── directory_service.py
│ │ ├── entity_service.py
│ │ ├── exceptions.py
│ │ ├── file_service.py
│ │ ├── initialization.py
│ │ ├── link_resolver.py
│ │ ├── project_service.py
│ │ ├── search_service.py
│ │ └── service.py
│ ├── sync
│ │ ├── __init__.py
│ │ ├── background_sync.py
│ │ ├── coordinator.py
│ │ ├── sync_service.py
│ │ └── watch_service.py
│ ├── telemetry.py
│ ├── templates
│ │ └── prompts
│ │ ├── continue_conversation.hbs
│ │ └── search.hbs
│ └── utils.py
├── test-int
│ ├── BENCHMARKS.md
│ ├── cli
│ │ ├── test_project_commands_integration.py
│ │ └── test_version_integration.py
│ ├── conftest.py
│ ├── mcp
│ │ ├── test_build_context_underscore.py
│ │ ├── test_build_context_validation.py
│ │ ├── test_chatgpt_tools_integration.py
│ │ ├── test_default_project_mode_integration.py
│ │ ├── test_delete_note_integration.py
│ │ ├── test_edit_note_integration.py
│ │ ├── test_lifespan_shutdown_sync_task_cancellation_integration.py
│ │ ├── test_list_directory_integration.py
│ │ ├── test_move_note_integration.py
│ │ ├── test_project_management_integration.py
│ │ ├── test_project_state_sync_integration.py
│ │ ├── test_read_content_integration.py
│ │ ├── test_read_note_integration.py
│ │ ├── test_search_integration.py
│ │ ├── test_single_project_mcp_integration.py
│ │ └── test_write_note_integration.py
│ ├── test_db_wal_mode.py
│ └── test_disable_permalinks_integration.py
├── tests
│ ├── __init__.py
│ ├── api
│ │ ├── conftest.py
│ │ ├── test_api_container.py
│ │ ├── test_async_client.py
│ │ ├── test_continue_conversation_template.py
│ │ ├── test_directory_router.py
│ │ ├── test_importer_router.py
│ │ ├── test_knowledge_router.py
│ │ ├── test_management_router.py
│ │ ├── test_memory_router.py
│ │ ├── test_project_router_operations.py
│ │ ├── test_project_router.py
│ │ ├── test_prompt_router.py
│ │ ├── test_relation_background_resolution.py
│ │ ├── test_resource_router.py
│ │ ├── test_search_router.py
│ │ ├── test_search_template.py
│ │ ├── test_template_loader_helpers.py
│ │ ├── test_template_loader.py
│ │ └── v2
│ │ ├── __init__.py
│ │ ├── conftest.py
│ │ ├── test_directory_router.py
│ │ ├── test_importer_router.py
│ │ ├── test_knowledge_router.py
│ │ ├── test_memory_router.py
│ │ ├── test_project_router.py
│ │ ├── test_prompt_router.py
│ │ ├── test_resource_router.py
│ │ └── test_search_router.py
│ ├── cli
│ │ ├── cloud
│ │ │ ├── test_cloud_api_client_and_utils.py
│ │ │ ├── test_rclone_config_and_bmignore_filters.py
│ │ │ └── test_upload_path.py
│ │ ├── conftest.py
│ │ ├── test_auth_cli_auth.py
│ │ ├── test_cli_container.py
│ │ ├── test_cli_exit.py
│ │ ├── test_cli_tool_exit.py
│ │ ├── test_cli_tools.py
│ │ ├── test_cloud_authentication.py
│ │ ├── test_ignore_utils.py
│ │ ├── test_import_chatgpt.py
│ │ ├── test_import_claude_conversations.py
│ │ ├── test_import_claude_projects.py
│ │ ├── test_import_memory_json.py
│ │ ├── test_project_add_with_local_path.py
│ │ └── test_upload.py
│ ├── conftest.py
│ ├── db
│ │ └── test_issue_254_foreign_key_constraints.py
│ ├── importers
│ │ ├── test_conversation_indexing.py
│ │ ├── test_importer_base.py
│ │ └── test_importer_utils.py
│ ├── markdown
│ │ ├── __init__.py
│ │ ├── test_date_frontmatter_parsing.py
│ │ ├── test_entity_parser_error_handling.py
│ │ ├── test_entity_parser.py
│ │ ├── test_markdown_plugins.py
│ │ ├── test_markdown_processor.py
│ │ ├── test_observation_edge_cases.py
│ │ ├── test_parser_edge_cases.py
│ │ ├── test_relation_edge_cases.py
│ │ └── test_task_detection.py
│ ├── mcp
│ │ ├── clients
│ │ │ ├── __init__.py
│ │ │ └── test_clients.py
│ │ ├── conftest.py
│ │ ├── test_async_client_modes.py
│ │ ├── test_mcp_container.py
│ │ ├── test_obsidian_yaml_formatting.py
│ │ ├── test_permalink_collision_file_overwrite.py
│ │ ├── test_project_context.py
│ │ ├── test_prompts.py
│ │ ├── test_recent_activity_prompt_modes.py
│ │ ├── test_resources.py
│ │ ├── test_server_lifespan_branches.py
│ │ ├── test_tool_build_context.py
│ │ ├── test_tool_canvas.py
│ │ ├── test_tool_delete_note.py
│ │ ├── test_tool_edit_note.py
│ │ ├── test_tool_list_directory.py
│ │ ├── test_tool_move_note.py
│ │ ├── test_tool_project_management.py
│ │ ├── test_tool_read_content.py
│ │ ├── test_tool_read_note.py
│ │ ├── test_tool_recent_activity.py
│ │ ├── test_tool_resource.py
│ │ ├── test_tool_search.py
│ │ ├── test_tool_utils.py
│ │ ├── test_tool_view_note.py
│ │ ├── test_tool_write_note_kebab_filenames.py
│ │ ├── test_tool_write_note.py
│ │ └── tools
│ │ └── test_chatgpt_tools.py
│ ├── Non-MarkdownFileSupport.pdf
│ ├── README.md
│ ├── repository
│ │ ├── test_entity_repository_upsert.py
│ │ ├── test_entity_repository.py
│ │ ├── test_entity_upsert_issue_187.py
│ │ ├── test_observation_repository.py
│ │ ├── test_postgres_search_repository.py
│ │ ├── test_project_info_repository.py
│ │ ├── test_project_repository.py
│ │ ├── test_relation_repository.py
│ │ ├── test_repository.py
│ │ ├── test_search_repository_edit_bug_fix.py
│ │ └── test_search_repository.py
│ ├── schemas
│ │ ├── test_base_timeframe_minimum.py
│ │ ├── test_memory_serialization.py
│ │ ├── test_memory_url_validation.py
│ │ ├── test_memory_url.py
│ │ ├── test_relation_response_reference_resolution.py
│ │ ├── test_schemas.py
│ │ └── test_search.py
│ ├── Screenshot.png
│ ├── services
│ │ ├── test_context_service.py
│ │ ├── test_directory_service.py
│ │ ├── test_entity_service_disable_permalinks.py
│ │ ├── test_entity_service.py
│ │ ├── test_file_service.py
│ │ ├── test_initialization_cloud_mode_branches.py
│ │ ├── test_initialization.py
│ │ ├── test_link_resolver.py
│ │ ├── test_project_removal_bug.py
│ │ ├── test_project_service_operations.py
│ │ ├── test_project_service.py
│ │ └── test_search_service.py
│ ├── sync
│ │ ├── test_character_conflicts.py
│ │ ├── test_coordinator.py
│ │ ├── test_sync_service_incremental.py
│ │ ├── test_sync_service.py
│ │ ├── test_sync_wikilink_issue.py
│ │ ├── test_tmp_files.py
│ │ ├── test_watch_service_atomic_adds.py
│ │ ├── test_watch_service_edge_cases.py
│ │ ├── test_watch_service_reload.py
│ │ └── test_watch_service.py
│ ├── test_config.py
│ ├── test_deps.py
│ ├── test_production_cascade_delete.py
│ ├── test_project_resolver.py
│ ├── test_rclone_commands.py
│ ├── test_runtime.py
│ ├── test_telemetry.py
│ └── utils
│ ├── test_file_utils.py
│ ├── test_frontmatter_obsidian_compatible.py
│ ├── test_parse_tags.py
│ ├── test_permalink_formatting.py
│ ├── test_timezone_utils.py
│ ├── test_utf8_handling.py
│ └── test_validate_project_path.py
└── uv.lock
```
# Files
--------------------------------------------------------------------------------
/tests/services/test_project_service.py:
--------------------------------------------------------------------------------
```python
"""Tests for ProjectService."""
import os
import tempfile
from pathlib import Path
import pytest
from basic_memory.schemas import (
ProjectInfoResponse,
ProjectStatistics,
ActivityMetrics,
SystemStatus,
)
from basic_memory.services.project_service import ProjectService
from basic_memory.config import ConfigManager
def test_projects_property(project_service: ProjectService):
"""Test the projects property."""
# Get the projects
projects = project_service.projects
# Assert that it returns a dictionary
assert isinstance(projects, dict)
# The test config should have at least one project
assert len(projects) > 0
def test_default_project_property(project_service: ProjectService):
"""Test the default_project property."""
# Get the default project
default_project = project_service.default_project
# Assert it's a string and has a value
assert isinstance(default_project, str)
assert default_project
def test_current_project_property(project_service: ProjectService):
"""Test the current_project property."""
# Save original environment
original_env = os.environ.get("BASIC_MEMORY_PROJECT")
try:
# Test with environment variable not set
if "BASIC_MEMORY_PROJECT" in os.environ:
del os.environ["BASIC_MEMORY_PROJECT"]
# Should return default_project when env var not set
assert project_service.current_project == project_service.default_project
# Now set the environment variable
os.environ["BASIC_MEMORY_PROJECT"] = "test-project"
# Should return env var value
assert project_service.current_project == "test-project"
finally:
# Restore original environment
if original_env is not None:
os.environ["BASIC_MEMORY_PROJECT"] = original_env
elif "BASIC_MEMORY_PROJECT" in os.environ:
del os.environ["BASIC_MEMORY_PROJECT"]
"""Test the methods of ProjectService."""
@pytest.mark.asyncio
async def test_project_operations_sync_methods(
app_config, project_service: ProjectService, config_manager: ConfigManager
):
"""Test adding, switching, and removing a project using ConfigManager directly.
This test uses the ConfigManager directly instead of the async methods.
"""
# Generate a unique project name for testing
test_project_name = f"test-project-{os.urandom(4).hex()}"
with tempfile.TemporaryDirectory() as temp_dir:
test_root = Path(temp_dir)
test_project_path = test_root / "test-project"
# Make sure the test directory exists
test_project_path.mkdir(parents=True, exist_ok=True)
try:
# Test adding a project (using ConfigManager directly)
config_manager.add_project(test_project_name, str(test_project_path))
# Verify it was added
assert test_project_name in project_service.projects
assert Path(project_service.projects[test_project_name]) == test_project_path
# Test setting as default
original_default = project_service.default_project
config_manager.set_default_project(test_project_name)
assert project_service.default_project == test_project_name
# Restore original default
if original_default:
config_manager.set_default_project(original_default)
# Test removing the project
config_manager.remove_project(test_project_name)
assert test_project_name not in project_service.projects
except Exception as e:
# Clean up in case of error
if test_project_name in project_service.projects:
try:
config_manager.remove_project(test_project_name)
except Exception:
pass
raise e
@pytest.mark.asyncio
async def test_get_system_status(project_service: ProjectService):
"""Test getting system status."""
# Get the system status
status = project_service.get_system_status()
# Assert it returns a valid SystemStatus object
assert isinstance(status, SystemStatus)
assert status.version
assert status.database_path
assert status.database_size
@pytest.mark.asyncio
async def test_get_statistics(project_service: ProjectService, test_graph, test_project):
"""Test getting statistics."""
# Get statistics
statistics = await project_service.get_statistics(test_project.id)
# Assert it returns a valid ProjectStatistics object
assert isinstance(statistics, ProjectStatistics)
assert statistics.total_entities > 0
assert "test" in statistics.entity_types
@pytest.mark.asyncio
async def test_get_activity_metrics(project_service: ProjectService, test_graph, test_project):
"""Test getting activity metrics."""
# Get activity metrics
metrics = await project_service.get_activity_metrics(test_project.id)
# Assert it returns a valid ActivityMetrics object
assert isinstance(metrics, ActivityMetrics)
assert len(metrics.recently_created) > 0
assert len(metrics.recently_updated) > 0
@pytest.mark.asyncio
async def test_get_project_info(project_service: ProjectService, test_graph, test_project):
"""Test getting full project info."""
# Get project info
info = await project_service.get_project_info(test_project.name)
# Assert it returns a valid ProjectInfoResponse object
assert isinstance(info, ProjectInfoResponse)
assert info.project_name
assert info.project_path
assert info.default_project
assert isinstance(info.available_projects, dict)
assert isinstance(info.statistics, ProjectStatistics)
assert isinstance(info.activity, ActivityMetrics)
assert isinstance(info.system, SystemStatus)
@pytest.mark.asyncio
async def test_add_project_async(project_service: ProjectService):
"""Test adding a project with the updated async method."""
test_project_name = f"test-async-project-{os.urandom(4).hex()}"
with tempfile.TemporaryDirectory() as temp_dir:
test_root = Path(temp_dir)
test_project_path = test_root / "test-async-project"
# Make sure the test directory exists
test_project_path.mkdir(parents=True, exist_ok=True)
try:
# Test adding a project
await project_service.add_project(test_project_name, str(test_project_path))
# Verify it was added to config
assert test_project_name in project_service.projects
assert Path(project_service.projects[test_project_name]) == test_project_path
# Verify it was added to the database
project = await project_service.repository.get_by_name(test_project_name)
assert project is not None
assert project.name == test_project_name
assert Path(project.path) == test_project_path
finally:
# Clean up
if test_project_name in project_service.projects:
await project_service.remove_project(test_project_name)
# Ensure it was removed from both config and DB
assert test_project_name not in project_service.projects
project = await project_service.repository.get_by_name(test_project_name)
assert project is None
@pytest.mark.asyncio
async def test_set_default_project_async(project_service: ProjectService, test_project):
"""Test setting a project as default with the updated async method."""
# First add a test project
test_project_name = f"test-default-project-{os.urandom(4).hex()}"
with tempfile.TemporaryDirectory() as temp_dir:
test_root = Path(temp_dir)
test_project_path = str(test_root / "test-default-project")
# Make sure the test directory exists
os.makedirs(test_project_path, exist_ok=True)
original_default = project_service.default_project
try:
# Add the test project
await project_service.add_project(test_project_name, test_project_path)
# Set as default
await project_service.set_default_project(test_project_name)
# Verify it's set as default in config
assert project_service.default_project == test_project_name
# Verify it's set as default in database
project = await project_service.repository.get_by_name(test_project_name)
assert project is not None
assert project.is_default is True
# Make sure old default is no longer default
old_default_project = await project_service.repository.get_by_name(original_default)
if old_default_project:
assert old_default_project.is_default is not True
finally:
# Restore original default (only if it exists in database)
if original_default:
original_project = await project_service.repository.get_by_name(original_default)
if original_project:
await project_service.set_default_project(original_default)
# Clean up test project
if test_project_name in project_service.projects:
await project_service.remove_project(test_project_name)
@pytest.mark.asyncio
async def test_get_project_method(project_service: ProjectService):
"""Test the get_project method directly."""
test_project_name = f"test-get-project-{os.urandom(4).hex()}"
with tempfile.TemporaryDirectory() as temp_dir:
test_root = Path(temp_dir)
test_project_path = (test_root / "test-get-project").as_posix()
# Make sure the test directory exists
os.makedirs(test_project_path, exist_ok=True)
try:
# Test getting a non-existent project
result = await project_service.get_project("non-existent-project")
assert result is None
# Add a project
await project_service.add_project(test_project_name, test_project_path)
# Test getting an existing project
result = await project_service.get_project(test_project_name)
assert result is not None
assert result.name == test_project_name
assert result.path == test_project_path
finally:
# Clean up
if test_project_name in project_service.projects:
await project_service.remove_project(test_project_name)
@pytest.mark.asyncio
async def test_set_default_project_config_db_mismatch(
project_service: ProjectService, config_manager: ConfigManager
):
"""Test set_default_project raises error when project exists in config but not in database."""
test_project_name = f"test-mismatch-project-{os.urandom(4).hex()}"
with tempfile.TemporaryDirectory() as temp_dir:
test_root = Path(temp_dir)
test_project_path = str(test_root / "test-mismatch-project")
# Make sure the test directory exists
os.makedirs(test_project_path, exist_ok=True)
try:
# Add project to config only (not to database)
config_manager.add_project(test_project_name, test_project_path)
# Verify it's in config but not in database
assert test_project_name in project_service.projects
db_project = await project_service.repository.get_by_name(test_project_name)
assert db_project is None
# Try to set as default - should raise ValueError since project not in database
with pytest.raises(ValueError, match=f"Project '{test_project_name}' not found"):
await project_service.set_default_project(test_project_name)
finally:
# Clean up
if test_project_name in project_service.projects:
config_manager.remove_project(test_project_name)
@pytest.mark.asyncio
async def test_add_project_with_set_default_true(project_service: ProjectService, test_project):
"""Test adding a project with set_default=True enforces single default."""
test_project_name = f"test-default-true-{os.urandom(4).hex()}"
with tempfile.TemporaryDirectory() as temp_dir:
test_root = Path(temp_dir)
test_project_path = str(test_root / "test-default-true")
# Make sure the test directory exists
os.makedirs(test_project_path, exist_ok=True)
original_default = project_service.default_project
try:
# Get original default project from database
original_default_project = await project_service.repository.get_by_name(
original_default
)
# Add project with set_default=True
await project_service.add_project(
test_project_name, test_project_path, set_default=True
)
# Verify new project is set as default in both config and database
assert project_service.default_project == test_project_name
new_project = await project_service.repository.get_by_name(test_project_name)
assert new_project is not None
assert new_project.is_default is True
# Verify original default is no longer default in database
if original_default_project:
refreshed_original = await project_service.repository.get_by_name(original_default)
assert refreshed_original.is_default is not True
# Verify only one project has is_default=True
all_projects = await project_service.repository.find_all()
default_projects = [p for p in all_projects if p.is_default is True]
assert len(default_projects) == 1
assert default_projects[0].name == test_project_name
finally:
# Restore original default (only if it exists in database)
if original_default:
original_project = await project_service.repository.get_by_name(original_default)
if original_project:
await project_service.set_default_project(original_default)
# Clean up test project
if test_project_name in project_service.projects:
await project_service.remove_project(test_project_name)
@pytest.mark.asyncio
async def test_add_project_with_set_default_false(project_service: ProjectService):
"""Test adding a project with set_default=False doesn't change defaults."""
test_project_name = f"test-default-false-{os.urandom(4).hex()}"
with tempfile.TemporaryDirectory() as temp_dir:
test_root = Path(temp_dir)
test_project_path = str(test_root / "test-default-false")
# Make sure the test directory exists
os.makedirs(test_project_path, exist_ok=True)
original_default = project_service.default_project
try:
# Add project with set_default=False (explicit)
await project_service.add_project(
test_project_name, test_project_path, set_default=False
)
# Verify default project hasn't changed
assert project_service.default_project == original_default
# Verify new project is NOT set as default
new_project = await project_service.repository.get_by_name(test_project_name)
assert new_project is not None
assert new_project.is_default is not True
# Verify original default is still default
original_default_project = await project_service.repository.get_by_name(
original_default
)
if original_default_project:
assert original_default_project.is_default is True
finally:
# Clean up test project
if test_project_name in project_service.projects:
await project_service.remove_project(test_project_name)
@pytest.mark.asyncio
async def test_add_project_default_parameter_omitted(project_service: ProjectService):
"""Test adding a project without set_default parameter defaults to False behavior."""
test_project_name = f"test-default-omitted-{os.urandom(4).hex()}"
with tempfile.TemporaryDirectory() as temp_dir:
test_root = Path(temp_dir)
test_project_path = str(test_root / "test-default-omitted")
# Make sure the test directory exists
os.makedirs(test_project_path, exist_ok=True)
original_default = project_service.default_project
try:
# Add project without set_default parameter (should default to False)
await project_service.add_project(test_project_name, test_project_path)
# Verify default project hasn't changed
assert project_service.default_project == original_default
# Verify new project is NOT set as default
new_project = await project_service.repository.get_by_name(test_project_name)
assert new_project is not None
assert new_project.is_default is not True
finally:
# Clean up test project
if test_project_name in project_service.projects:
await project_service.remove_project(test_project_name)
@pytest.mark.asyncio
async def test_ensure_single_default_project_enforcement_logic(
project_service: ProjectService, test_project
):
"""Test that _ensure_single_default_project logic works correctly."""
# Test that the method exists and is callable
assert hasattr(project_service, "_ensure_single_default_project")
assert callable(getattr(project_service, "_ensure_single_default_project"))
# Call the enforcement method - should work without error
await project_service._ensure_single_default_project()
# Verify there is exactly one default project after enforcement
all_projects = await project_service.repository.find_all()
default_projects = [p for p in all_projects if p.is_default is True]
assert len(default_projects) == 1 # Should have exactly one default
@pytest.mark.asyncio
async def test_synchronize_projects_calls_ensure_single_default(project_service: ProjectService):
"""Test that synchronize_projects calls _ensure_single_default_project."""
test_project_name = f"test-sync-default-{os.urandom(4).hex()}"
with tempfile.TemporaryDirectory() as temp_dir:
test_root = Path(temp_dir)
test_project_path = str(test_root / "test-sync-default")
# Make sure the test directory exists
os.makedirs(test_project_path, exist_ok=True)
config_manager = ConfigManager()
try:
# Add project to config only (simulating unsynchronized state)
config_manager.add_project(test_project_name, test_project_path)
# Verify it's in config but not in database
assert test_project_name in project_service.projects
db_project = await project_service.repository.get_by_name(test_project_name)
assert db_project is None
# Call synchronize_projects (this should call _ensure_single_default_project)
await project_service.synchronize_projects()
# Verify project is now in database
db_project = await project_service.repository.get_by_name(test_project_name)
assert db_project is not None
# Verify default project enforcement was applied
all_projects = await project_service.repository.find_all()
default_projects = [p for p in all_projects if p.is_default is True]
assert len(default_projects) <= 1 # Should be exactly 1 or 0
finally:
# Clean up test project
if test_project_name in project_service.projects:
await project_service.remove_project(test_project_name)
@pytest.mark.asyncio
async def test_synchronize_projects_normalizes_project_names(project_service: ProjectService):
"""Test that synchronize_projects normalizes project names in config to match database format."""
# Use a project name that needs normalization (uppercase, spaces)
unnormalized_name = "Test Project With Spaces"
expected_normalized_name = "test-project-with-spaces"
with tempfile.TemporaryDirectory() as temp_dir:
test_root = Path(temp_dir)
test_project_path = str(test_root / "test-project-spaces")
# Make sure the test directory exists
os.makedirs(test_project_path, exist_ok=True)
config_manager = ConfigManager()
try:
# Manually add the unnormalized project name to config
# Add project with unnormalized name directly to config
config = config_manager.load_config()
config.projects[unnormalized_name] = test_project_path
config_manager.save_config(config)
# Verify the unnormalized name is in config
assert unnormalized_name in project_service.projects
assert project_service.projects[unnormalized_name] == test_project_path
# Call synchronize_projects - this should normalize the project name
await project_service.synchronize_projects()
# Verify the config was updated with normalized name
assert expected_normalized_name in project_service.projects
assert unnormalized_name not in project_service.projects
assert project_service.projects[expected_normalized_name] == test_project_path
# Verify the project was added to database with normalized name
db_project = await project_service.repository.get_by_name(expected_normalized_name)
assert db_project is not None
assert db_project.name == expected_normalized_name
assert db_project.path == test_project_path
assert db_project.permalink == expected_normalized_name
# Verify the unnormalized name is not in database
unnormalized_db_project = await project_service.repository.get_by_name(
unnormalized_name
)
assert unnormalized_db_project is None
finally:
# Clean up - remove any test projects from both config and database
current_projects = project_service.projects.copy()
for name in [unnormalized_name, expected_normalized_name]:
if name in current_projects:
try:
await project_service.remove_project(name)
except Exception:
# Try to clean up manually if remove_project fails
try:
config_manager.remove_project(name)
except Exception:
pass
# Remove from database
db_project = await project_service.repository.get_by_name(name)
if db_project:
await project_service.repository.delete(db_project.id)
@pytest.mark.asyncio
async def test_move_project(project_service: ProjectService):
"""Test moving a project to a new location."""
test_project_name = f"test-move-project-{os.urandom(4).hex()}"
with tempfile.TemporaryDirectory() as temp_dir:
test_root = Path(temp_dir)
old_path = test_root / "old-location"
new_path = test_root / "new-location"
# Create old directory
old_path.mkdir(parents=True, exist_ok=True)
try:
# Add project with initial path
await project_service.add_project(test_project_name, str(old_path))
# Verify initial state
assert test_project_name in project_service.projects
assert Path(project_service.projects[test_project_name]) == old_path
project = await project_service.repository.get_by_name(test_project_name)
assert project is not None
assert Path(project.path) == old_path
# Move project to new location
await project_service.move_project(test_project_name, str(new_path))
# Verify config was updated
assert Path(project_service.projects[test_project_name]) == new_path
# Verify database was updated
updated_project = await project_service.repository.get_by_name(test_project_name)
assert updated_project is not None
assert Path(updated_project.path) == new_path
# Verify new directory was created
assert os.path.exists(new_path)
finally:
# Clean up
if test_project_name in project_service.projects:
await project_service.remove_project(test_project_name)
@pytest.mark.asyncio
async def test_move_project_nonexistent(project_service: ProjectService):
"""Test moving a project that doesn't exist."""
with tempfile.TemporaryDirectory() as temp_dir:
test_root = Path(temp_dir)
new_path = str(test_root / "new-location")
with pytest.raises(ValueError, match="not found in configuration"):
await project_service.move_project("nonexistent-project", new_path)
@pytest.mark.asyncio
async def test_move_project_db_mismatch(project_service: ProjectService):
"""Test moving a project that exists in config but not in database."""
test_project_name = f"test-move-mismatch-{os.urandom(4).hex()}"
with tempfile.TemporaryDirectory() as temp_dir:
test_root = Path(temp_dir)
old_path = test_root / "old-location"
new_path = test_root / "new-location"
# Create directories
old_path.mkdir(parents=True, exist_ok=True)
config_manager = project_service.config_manager
try:
# Add project to config only (not to database)
config_manager.add_project(test_project_name, str(old_path))
# Verify it's in config but not in database
assert test_project_name in project_service.projects
db_project = await project_service.repository.get_by_name(test_project_name)
assert db_project is None
# Try to move project - should fail and restore config
with pytest.raises(ValueError, match="not found in database"):
await project_service.move_project(test_project_name, str(new_path))
# Verify config was restored to original path
assert Path(project_service.projects[test_project_name]) == old_path
finally:
# Clean up
if test_project_name in project_service.projects:
config_manager.remove_project(test_project_name)
@pytest.mark.asyncio
async def test_move_project_expands_path(project_service: ProjectService):
"""Test that move_project expands ~ and relative paths."""
test_project_name = f"test-move-expand-{os.urandom(4).hex()}"
with tempfile.TemporaryDirectory() as temp_dir:
test_root = Path(temp_dir)
old_path = (test_root / "old-location").as_posix()
# Create old directory
os.makedirs(old_path, exist_ok=True)
try:
# Add project with initial path
await project_service.add_project(test_project_name, old_path)
# Use a relative path for the move
relative_new_path = "./new-location"
expected_absolute_path = Path(os.path.abspath(relative_new_path)).as_posix()
# Move project using relative path
await project_service.move_project(test_project_name, relative_new_path)
# Verify the path was expanded to absolute
assert project_service.projects[test_project_name] == expected_absolute_path
updated_project = await project_service.repository.get_by_name(test_project_name)
assert updated_project is not None
assert updated_project.path == expected_absolute_path
finally:
# Clean up
if test_project_name in project_service.projects:
await project_service.remove_project(test_project_name)
@pytest.mark.asyncio
async def test_synchronize_projects_handles_case_sensitivity_bug(project_service: ProjectService):
"""Test that synchronize_projects fixes the case sensitivity bug (Personal vs personal)."""
with tempfile.TemporaryDirectory() as temp_dir:
# Simulate the exact bug scenario: config has "Personal" but database expects "personal"
config_name = "Personal"
normalized_name = "personal"
test_root = Path(temp_dir)
test_project_path = str(test_root / "personal-project")
# Make sure the test directory exists
os.makedirs(test_project_path, exist_ok=True)
config_manager = ConfigManager()
try:
# Add project with uppercase name to config (simulating the bug scenario)
config = config_manager.load_config()
config.projects[config_name] = test_project_path
config_manager.save_config(config)
# Verify the uppercase name is in config
assert config_name in project_service.projects
assert project_service.projects[config_name] == test_project_path
# Call synchronize_projects - this should fix the case sensitivity issue
await project_service.synchronize_projects()
# Verify the config was updated to use normalized case
assert normalized_name in project_service.projects
assert config_name not in project_service.projects
assert project_service.projects[normalized_name] == test_project_path
# Verify the project exists in database with correct normalized name
db_project = await project_service.repository.get_by_name(normalized_name)
assert db_project is not None
assert db_project.name == normalized_name
assert db_project.path == test_project_path
# Verify we can now switch to this project without case sensitivity errors
# (This would have failed before the fix with "Personal" != "personal")
project_lookup = await project_service.get_project(normalized_name)
assert project_lookup is not None
assert project_lookup.name == normalized_name
finally:
# Clean up
for name in [config_name, normalized_name]:
if name in project_service.projects:
try:
await project_service.remove_project(name)
except Exception:
# Manual cleanup if needed
try:
config_manager.remove_project(name)
except Exception:
pass
db_project = await project_service.repository.get_by_name(name)
if db_project:
await project_service.repository.delete(db_project.id)
@pytest.mark.skipif(os.name == "nt", reason="Project root constraints only tested on POSIX systems")
@pytest.mark.asyncio
async def test_add_project_with_project_root_sanitizes_paths(
project_service: ProjectService, config_manager: ConfigManager, monkeypatch
):
"""Test that BASIC_MEMORY_PROJECT_ROOT uses sanitized project name, ignoring user path.
When project_root is set (cloud mode), the system should:
1. Ignore the user's provided path completely
2. Use the sanitized project name as the directory name
3. Create a flat structure: /app/data/test-bisync instead of /app/data/documents/test bisync
This prevents the bisync auto-discovery bug where nested paths caused duplicate project creation.
"""
with tempfile.TemporaryDirectory() as temp_dir:
# Set up project root environment
project_root_path = Path(temp_dir) / "app" / "data"
project_root_path.mkdir(parents=True, exist_ok=True)
monkeypatch.setenv("BASIC_MEMORY_PROJECT_ROOT", str(project_root_path))
# Invalidate config cache so it picks up the new env var
from basic_memory import config as config_module
config_module._CONFIG_CACHE = None
test_cases = [
# (project_name, user_path, expected_sanitized_name)
# User path is IGNORED - only project name matters
("test", "anything/path", "test"),
(
"Test BiSync",
"~/Documents/Test BiSync",
"test-bi-sync",
),  # BiSync -> bi-sync (CamelCase boundary becomes a dash)
("My Project", "/tmp/whatever", "my-project"),
("UPPERCASE", "~", "uppercase"),
("With Spaces", "~/Documents/With Spaces", "with-spaces"),
]
for i, (project_name, user_path, expected_sanitized) in enumerate(test_cases):
test_project_name = f"{project_name}-{i}" # Make unique
expected_final_segment = f"{expected_sanitized}-{i}"
try:
# Add the project - user_path should be ignored
await project_service.add_project(test_project_name, user_path)
# Verify the path uses sanitized project name, not user path
assert test_project_name in project_service.projects
actual_path = project_service.projects[test_project_name]
# The path should be under project_root (resolve both to handle macOS /private/var)
assert (
Path(actual_path).resolve().is_relative_to(Path(project_root_path).resolve())
), f"Path {actual_path} should be under {project_root_path}"
# Verify the final path segment is the sanitized project name
path_parts = Path(actual_path).parts
final_segment = path_parts[-1]
assert final_segment == expected_final_segment, (
f"Expected path segment '{expected_final_segment}', got '{final_segment}'"
)
# Clean up
await project_service.remove_project(test_project_name)
except ValueError as e:
pytest.fail(f"Unexpected ValueError for project {test_project_name}: {e}")
@pytest.mark.skipif(os.name == "nt", reason="Project root constraints only tested on POSIX systems")
@pytest.mark.asyncio
async def test_add_project_with_project_root_rejects_escape_attempts(
project_service: ProjectService, config_manager: ConfigManager, monkeypatch
):
"""Test that BASIC_MEMORY_PROJECT_ROOT rejects paths that try to escape the project root."""
with tempfile.TemporaryDirectory() as temp_dir:
# Set up project root environment
project_root_path = Path(temp_dir) / "app" / "data"
project_root_path.mkdir(parents=True, exist_ok=True)
# Create a directory outside project_root to verify it's not accessible
outside_dir = Path(temp_dir) / "outside"
outside_dir.mkdir(parents=True, exist_ok=True)
monkeypatch.setenv("BASIC_MEMORY_PROJECT_ROOT", str(project_root_path))
# Invalidate config cache so it picks up the new env var
from basic_memory import config as config_module
config_module._CONFIG_CACHE = None
# All of these should succeed by being sanitized to paths under project_root
# The sanitization removes dangerous patterns, so they don't escape
safe_after_sanitization = [
"../../../etc/passwd",
"../../.env",
"../../../home/user/.ssh/id_rsa",
]
for i, attack_path in enumerate(safe_after_sanitization):
test_project_name = f"project-root-attack-test-{i}"
try:
# Add the project
await project_service.add_project(test_project_name, attack_path)
# Verify it was sanitized to be under project_root (resolve to handle macOS /private/var)
actual_path = project_service.projects[test_project_name]
assert (
Path(actual_path).resolve().is_relative_to(Path(project_root_path).resolve())
), f"Sanitized path {actual_path} should be under {project_root_path}"
# Clean up
await project_service.remove_project(test_project_name)
except ValueError:
# If it raises ValueError, that's also acceptable for security
pass
@pytest.mark.skipif(os.name == "nt", reason="Project root constraints only tested on POSIX systems")
@pytest.mark.asyncio
async def test_add_project_without_project_root_allows_arbitrary_paths(
project_service: ProjectService, config_manager: ConfigManager, monkeypatch
):
"""Test that without BASIC_MEMORY_PROJECT_ROOT set, arbitrary paths are allowed."""
with tempfile.TemporaryDirectory() as temp_dir:
# Ensure project_root is not set
if "BASIC_MEMORY_PROJECT_ROOT" in os.environ:
monkeypatch.delenv("BASIC_MEMORY_PROJECT_ROOT")
# Create a test directory
test_dir = Path(temp_dir) / "arbitrary-location"
test_dir.mkdir(parents=True, exist_ok=True)
test_project_name = "no-project-root-test"
try:
# Without project_root, we should be able to use arbitrary absolute paths
await project_service.add_project(test_project_name, str(test_dir))
# Verify the path was accepted as-is
assert test_project_name in project_service.projects
actual_path = project_service.projects[test_project_name]
assert actual_path == str(test_dir)
finally:
# Clean up
if test_project_name in project_service.projects:
await project_service.remove_project(test_project_name)
@pytest.mark.skip(
reason="Obsolete: project_root mode now uses sanitized project name, not user path. See test_add_project_with_project_root_sanitizes_paths instead."
)
@pytest.mark.skipif(os.name == "nt", reason="Project root constraints only tested on POSIX systems")
@pytest.mark.asyncio
async def test_add_project_with_project_root_normalizes_case(
project_service: ProjectService, config_manager: ConfigManager, monkeypatch
):
"""Test that BASIC_MEMORY_PROJECT_ROOT normalizes paths to lowercase.
NOTE: This test is obsolete. After fixing the bisync duplicate project bug,
project_root mode now ignores the user's path and uses the sanitized project name instead.
"""
with tempfile.TemporaryDirectory() as temp_dir:
# Set up project root environment
project_root_path = Path(temp_dir) / "app" / "data"
project_root_path.mkdir(parents=True, exist_ok=True)
monkeypatch.setenv("BASIC_MEMORY_PROJECT_ROOT", str(project_root_path))
# Invalidate config cache so it picks up the new env var
from basic_memory import config as config_module
config_module._CONFIG_CACHE = None
test_cases = [
# (input_path, expected_normalized_path)
("Documents/my-project", str(project_root_path / "documents" / "my-project")),
("UPPERCASE/PATH", str(project_root_path / "uppercase" / "path")),
("MixedCase/Path", str(project_root_path / "mixedcase" / "path")),
("documents/Test-TWO", str(project_root_path / "documents" / "test-two")),
]
for i, (input_path, expected_path) in enumerate(test_cases):
test_project_name = f"case-normalize-test-{i}"
try:
# Add the project
await project_service.add_project(test_project_name, input_path)
# Verify the path was normalized to lowercase (resolve both to handle macOS /private/var)
assert test_project_name in project_service.projects
actual_path = project_service.projects[test_project_name]
assert Path(actual_path).resolve() == Path(expected_path).resolve(), (
f"Expected path {expected_path} but got {actual_path} for input {input_path}"
)
# Clean up
await project_service.remove_project(test_project_name)
except ValueError as e:
pytest.fail(f"Unexpected ValueError for input path {input_path}: {e}")
@pytest.mark.skip(
reason="Obsolete: project_root mode now uses sanitized project name, not user path."
)
@pytest.mark.skipif(os.name == "nt", reason="Project root constraints only tested on POSIX systems")
@pytest.mark.asyncio
async def test_add_project_with_project_root_detects_case_collisions(
project_service: ProjectService, config_manager: ConfigManager, monkeypatch
):
"""Test that BASIC_MEMORY_PROJECT_ROOT detects case-insensitive path collisions.
NOTE: This test is obsolete. After fixing the bisync duplicate project bug,
project_root mode now ignores the user's path and uses the sanitized project name instead.
"""
with tempfile.TemporaryDirectory() as temp_dir:
# Set up project root environment
project_root_path = Path(temp_dir) / "app" / "data"
project_root_path.mkdir(parents=True, exist_ok=True)
monkeypatch.setenv("BASIC_MEMORY_PROJECT_ROOT", str(project_root_path))
# Invalidate config cache so it picks up the new env var
from basic_memory import config as config_module
config_module._CONFIG_CACHE = None
# First, create a project with lowercase path
first_project = "documents-project"
await project_service.add_project(first_project, "documents/basic-memory")
# Verify it was created with normalized lowercase path (resolve to handle macOS /private/var)
assert first_project in project_service.projects
first_path = project_service.projects[first_project]
assert (
Path(first_path).resolve()
== (project_root_path / "documents" / "basic-memory").resolve()
)
# Now try to create a project with the same path but different case
# This should be normalized to the same lowercase path and not cause a collision
# since both will be normalized to the same path
second_project = "documents-project-2"
try:
# This should succeed because both get normalized to the same lowercase path
await project_service.add_project(second_project, "documents/basic-memory")
# If we get here, both should have the exact same path
second_path = project_service.projects[second_project]
assert second_path == first_path
# Clean up second project
await project_service.remove_project(second_project)
except ValueError:
# This is expected if there's already a project with this exact path
pass
# Clean up
await project_service.remove_project(first_project)
@pytest.mark.asyncio
async def test_add_project_rejects_nested_child_path(project_service: ProjectService):
"""Test that adding a project nested under an existing project fails."""
parent_project_name = f"parent-project-{os.urandom(4).hex()}"
# Use a completely separate temp directory to avoid fixture conflicts
with tempfile.TemporaryDirectory() as temp_dir:
test_root = Path(temp_dir)
parent_path = (test_root / "parent").as_posix()
# Create parent directory
os.makedirs(parent_path, exist_ok=True)
try:
# Add parent project
await project_service.add_project(parent_project_name, parent_path)
# Try to add a child project nested under parent
child_project_name = f"child-project-{os.urandom(4).hex()}"
child_path = (test_root / "parent" / "child").as_posix()
with pytest.raises(ValueError, match="nested within existing project"):
await project_service.add_project(child_project_name, child_path)
finally:
# Clean up
if parent_project_name in project_service.projects:
await project_service.remove_project(parent_project_name)
@pytest.mark.asyncio
async def test_add_project_rejects_parent_path_over_existing_child(project_service: ProjectService):
"""Test that adding a parent project over an existing nested project fails."""
child_project_name = f"child-project-{os.urandom(4).hex()}"
# Use a completely separate temp directory to avoid fixture conflicts
with tempfile.TemporaryDirectory() as temp_dir:
test_root = Path(temp_dir)
child_path = (test_root / "parent" / "child").as_posix()
# Create child directory
os.makedirs(child_path, exist_ok=True)
try:
# Add child project
await project_service.add_project(child_project_name, child_path)
# Try to add a parent project that contains the child
parent_project_name = f"parent-project-{os.urandom(4).hex()}"
parent_path = (test_root / "parent").as_posix()
with pytest.raises(ValueError, match="is nested within this path"):
await project_service.add_project(parent_project_name, parent_path)
finally:
# Clean up
if child_project_name in project_service.projects:
await project_service.remove_project(child_project_name)
@pytest.mark.asyncio
async def test_add_project_allows_sibling_paths(project_service: ProjectService):
"""Test that adding sibling projects (same level, different directories) succeeds."""
project1_name = f"sibling-project-1-{os.urandom(4).hex()}"
project2_name = f"sibling-project-2-{os.urandom(4).hex()}"
# Use a completely separate temp directory to avoid fixture conflicts
with tempfile.TemporaryDirectory() as temp_dir:
test_root = Path(temp_dir)
project1_path = (test_root / "sibling1").as_posix()
project2_path = (test_root / "sibling2").as_posix()
# Create directories
os.makedirs(project1_path, exist_ok=True)
os.makedirs(project2_path, exist_ok=True)
try:
# Add first sibling project
await project_service.add_project(project1_name, project1_path)
# Add second sibling project (should succeed)
await project_service.add_project(project2_name, project2_path)
# Verify both exist
assert project1_name in project_service.projects
assert project2_name in project_service.projects
finally:
# Clean up
if project1_name in project_service.projects:
await project_service.remove_project(project1_name)
if project2_name in project_service.projects:
await project_service.remove_project(project2_name)
@pytest.mark.asyncio
async def test_add_project_rejects_deeply_nested_path(project_service: ProjectService):
"""Test that deeply nested paths are also rejected."""
root_project_name = f"root-project-{os.urandom(4).hex()}"
# Use a completely separate temp directory to avoid fixture conflicts
with tempfile.TemporaryDirectory() as temp_dir:
test_root = Path(temp_dir)
root_path = (test_root / "root").as_posix()
# Create root directory
os.makedirs(root_path, exist_ok=True)
try:
# Add root project
await project_service.add_project(root_project_name, root_path)
# Try to add a deeply nested project
nested_project_name = f"nested-project-{os.urandom(4).hex()}"
nested_path = (test_root / "root" / "level1" / "level2" / "level3").as_posix()
with pytest.raises(ValueError, match="nested within existing project"):
await project_service.add_project(nested_project_name, nested_path)
finally:
# Clean up
if root_project_name in project_service.projects:
await project_service.remove_project(root_project_name)
@pytest.mark.skipif(os.name == "nt", reason="Project root constraints only tested on POSIX systems")
@pytest.mark.asyncio
async def test_add_project_nested_validation_with_project_root(
project_service: ProjectService, config_manager: ConfigManager, monkeypatch
):
"""Test that nested path validation works with BASIC_MEMORY_PROJECT_ROOT set."""
# Use a completely separate temp directory to avoid fixture conflicts
with tempfile.TemporaryDirectory() as temp_dir:
project_root_path = Path(temp_dir) / "app" / "data"
project_root_path.mkdir(parents=True, exist_ok=True)
monkeypatch.setenv("BASIC_MEMORY_PROJECT_ROOT", str(project_root_path))
# Invalidate config cache
from basic_memory import config as config_module
config_module._CONFIG_CACHE = None
parent_project_name = f"cloud-parent-{os.urandom(4).hex()}"
child_project_name = f"cloud-child-{os.urandom(4).hex()}"
try:
# Add parent project - user path is ignored, uses sanitized project name
await project_service.add_project(parent_project_name, "parent-folder")
# Verify it was created using sanitized project name, not user path
assert parent_project_name in project_service.projects
parent_actual_path = project_service.projects[parent_project_name]
# Path should use the sanitized project name (already lowercase-with-dashes, so unchanged)
# NOT the user-provided path "parent-folder"
assert parent_project_name.lower() in parent_actual_path.lower()
# Resolve both to handle macOS /private/var vs /var
assert (
Path(parent_actual_path).resolve().is_relative_to(Path(project_root_path).resolve())
)
# The user path looks nested, but since paths are derived from project names,
# both projects get sibling directories and can coexist
await project_service.add_project(child_project_name, "parent-folder/child-folder")
# Both should exist with their own paths
assert child_project_name in project_service.projects
child_actual_path = project_service.projects[child_project_name]
assert child_project_name.lower() in child_actual_path.lower()
# Clean up child
await project_service.remove_project(child_project_name)
finally:
# Clean up
if parent_project_name in project_service.projects:
await project_service.remove_project(parent_project_name)
@pytest.mark.asyncio
async def test_synchronize_projects_removes_db_only_projects(project_service: ProjectService):
"""Test that synchronize_projects removes projects that exist in DB but not in config.
This is a regression test for issue #193 where deleted projects would be re-added
to config during synchronization, causing them to reappear after deletion.
Config is the source of truth - if a project is deleted from config, it should be
removed from the database during synchronization.
"""
test_project_name = f"test-db-only-{os.urandom(4).hex()}"
with tempfile.TemporaryDirectory() as temp_dir:
test_root = Path(temp_dir)
test_project_path = str(test_root / "test-db-only")
# Make sure the test directory exists
os.makedirs(test_project_path, exist_ok=True)
try:
# Add project to database only (not to config) - simulating orphaned DB entry
project_data = {
"name": test_project_name,
"path": test_project_path,
"permalink": test_project_name.lower().replace(" ", "-"),
"is_active": True,
}
await project_service.repository.create(project_data)
# Verify it exists in DB but not in config
db_project = await project_service.repository.get_by_name(test_project_name)
assert db_project is not None
assert test_project_name not in project_service.projects
# Call synchronize_projects - this should remove the orphaned DB entry
# because config is the source of truth
await project_service.synchronize_projects()
# Verify project was removed from database
db_project_after = await project_service.repository.get_by_name(test_project_name)
assert db_project_after is None, (
"Project should be removed from DB when not in config (config is source of truth)"
)
# Verify it's still not in config
assert test_project_name not in project_service.projects
finally:
# Clean up if needed
db_project = await project_service.repository.get_by_name(test_project_name)
if db_project:
await project_service.repository.delete(db_project.id)
@pytest.mark.asyncio
async def test_remove_project_with_delete_notes_false(project_service: ProjectService):
"""Test that remove_project with delete_notes=False keeps directory intact."""
test_project_name = f"test-remove-keep-{os.urandom(4).hex()}"
with tempfile.TemporaryDirectory() as temp_dir:
test_root = Path(temp_dir)
test_project_path = test_root / "test-project"
test_project_path.mkdir()
test_file = test_project_path / "test.md"
test_file.write_text("# Test Note")
try:
# Add project
await project_service.add_project(test_project_name, str(test_project_path))
# Verify project exists
assert test_project_name in project_service.projects
assert test_project_path.exists()
assert test_file.exists()
# Remove project without deleting notes (default behavior)
await project_service.remove_project(test_project_name, delete_notes=False)
# Verify project is removed from config/db
assert test_project_name not in project_service.projects
db_project = await project_service.repository.get_by_name(test_project_name)
assert db_project is None
# Verify directory and files still exist
assert test_project_path.exists()
assert test_file.exists()
finally:
# Cleanup happens automatically with temp_dir context manager
pass
@pytest.mark.asyncio
async def test_remove_project_with_delete_notes_true(project_service: ProjectService):
"""Test that remove_project with delete_notes=True deletes directory."""
test_project_name = f"test-remove-delete-{os.urandom(4).hex()}"
with tempfile.TemporaryDirectory() as temp_dir:
test_root = Path(temp_dir)
test_project_path = test_root / "test-project"
test_project_path.mkdir()
test_file = test_project_path / "test.md"
test_file.write_text("# Test Note")
try:
# Add project
await project_service.add_project(test_project_name, str(test_project_path))
# Verify project exists
assert test_project_name in project_service.projects
assert test_project_path.exists()
assert test_file.exists()
# Remove project with delete_notes=True
await project_service.remove_project(test_project_name, delete_notes=True)
# Verify project is removed from config/db
assert test_project_name not in project_service.projects
db_project = await project_service.repository.get_by_name(test_project_name)
assert db_project is None
# Verify directory and files are deleted
assert not test_project_path.exists()
finally:
# Cleanup happens automatically with temp_dir context manager
pass
@pytest.mark.asyncio
async def test_remove_project_delete_notes_missing_directory(project_service: ProjectService):
"""Test that remove_project with delete_notes=True handles missing directory gracefully."""
test_project_name = f"test-remove-missing-{os.urandom(4).hex()}"
test_project_path = f"/tmp/nonexistent-directory-{os.urandom(8).hex()}"
try:
# Add project pointing to non-existent path
await project_service.add_project(test_project_name, test_project_path)
# Verify project exists in config/db
assert test_project_name in project_service.projects
db_project = await project_service.repository.get_by_name(test_project_name)
assert db_project is not None
# Remove project with delete_notes=True (should not fail even if dir doesn't exist)
await project_service.remove_project(test_project_name, delete_notes=True)
# Verify project is removed from config/db
assert test_project_name not in project_service.projects
db_project = await project_service.repository.get_by_name(test_project_name)
assert db_project is None
finally:
# Ensure cleanup
if test_project_name in project_service.projects:
try:
project_service.config_manager.remove_project(test_project_name)
except Exception:
pass
```
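The tests above repeatedly assert the same name-normalization rules: "Test Project With Spaces" becomes "test-project-with-spaces", "Personal" becomes "personal", and "Test BiSync" becomes "test-bi-sync". The real logic lives in `basic_memory` (e.g. `generate_permalink` in `basic_memory.utils`); as a rough sketch reconstructed only from the behaviors these tests assert, the normalization could look like this (`normalize_project_name` is a hypothetical name, not the library's API):

```python
import re


def normalize_project_name(name: str) -> str:
    """Sketch of the normalization the tests exercise (illustrative only)."""
    # Insert a dash at lower/digit -> upper boundaries: "BiSync" -> "Bi-Sync"
    name = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", "-", name)
    # Collapse whitespace and underscores into single dashes
    name = re.sub(r"[\s_]+", "-", name)
    # Lowercase, collapse repeated dashes, and trim edge dashes
    name = re.sub(r"-{2,}", "-", name.lower())
    return name.strip("-")
```

Under these rules, config entries like "Personal" and database permalinks like "personal" converge on the same key, which is exactly the case-sensitivity mismatch `test_synchronize_projects_handles_case_sensitivity_bug` guards against.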
--------------------------------------------------------------------------------
/tests/services/test_entity_service.py:
--------------------------------------------------------------------------------
```python
"""Tests for EntityService."""
from pathlib import Path
from textwrap import dedent
import pytest
import yaml
from basic_memory.config import ProjectConfig, BasicMemoryConfig
from basic_memory.markdown import EntityParser
from basic_memory.models import Entity as EntityModel
from basic_memory.repository import EntityRepository
from basic_memory.schemas import Entity as EntitySchema
from basic_memory.services import FileService
from basic_memory.services.entity_service import EntityService
from basic_memory.services.exceptions import EntityCreationError, EntityNotFoundError
from basic_memory.services.search_service import SearchService
from basic_memory.utils import generate_permalink
@pytest.mark.asyncio
async def test_create_entity(entity_service: EntityService, file_service: FileService):
"""Test successful entity creation."""
entity_data = EntitySchema(
title="Test Entity",
folder="",
entity_type="test",
)
# Act
entity = await entity_service.create_entity(entity_data)
# Assert Entity
assert isinstance(entity, EntityModel)
assert entity.permalink == entity_data.permalink
assert entity.file_path == entity_data.file_path
assert entity.entity_type == "test"
assert entity.created_at is not None
assert len(entity.relations) == 0
# Verify we can retrieve it using permalink
retrieved = await entity_service.get_by_permalink(entity_data.permalink)
assert retrieved.title == "Test Entity"
assert retrieved.entity_type == "test"
assert retrieved.created_at is not None
# Verify file was written
file_path = file_service.get_entity_path(entity)
assert await file_service.exists(file_path)
file_content, _ = await file_service.read_file(file_path)
_, frontmatter, doc_content = file_content.split("---", 2)
metadata = yaml.safe_load(frontmatter)
# Verify frontmatter contents
assert metadata["permalink"] == entity.permalink
assert metadata["type"] == entity.entity_type
@pytest.mark.asyncio
async def test_create_entity_file_exists(entity_service: EntityService, file_service: FileService):
"""Test successful entity creation."""
entity_data = EntitySchema(
title="Test Entity",
folder="",
entity_type="test",
content="first",
)
# Act
entity = await entity_service.create_entity(entity_data)
# Verify file was written
file_path = file_service.get_entity_path(entity)
assert await file_service.exists(file_path)
file_content, _ = await file_service.read_file(file_path)
assert (
"---\ntitle: Test Entity\ntype: test\npermalink: test-entity\n---\n\nfirst" == file_content
)
entity_data = EntitySchema(
title="Test Entity",
folder="",
entity_type="test",
content="second",
)
with pytest.raises(EntityCreationError):
await entity_service.create_entity(entity_data)
@pytest.mark.asyncio
async def test_create_entity_unique_permalink(
project_config,
entity_service: EntityService,
file_service: FileService,
entity_repository: EntityRepository,
):
"""Test successful entity creation."""
entity_data = EntitySchema(
title="Test Entity",
folder="test",
entity_type="test",
)
entity = await entity_service.create_entity(entity_data)
# default permalink
assert entity.permalink == generate_permalink(entity.file_path)
# move file
file_path = file_service.get_entity_path(entity)
file_path.rename(project_config.home / "new_path.md")
await entity_repository.update(entity.id, {"file_path": "new_path.md"})
# create again
entity2 = await entity_service.create_entity(entity_data)
assert entity2.permalink == f"{entity.permalink}-1"
file_path = file_service.get_entity_path(entity2)
file_content, _ = await file_service.read_file(file_path)
_, frontmatter, doc_content = file_content.split("---", 2)
metadata = yaml.safe_load(frontmatter)
# Verify frontmatter contents
assert metadata["permalink"] == entity2.permalink
@pytest.mark.asyncio
async def test_get_by_permalink(entity_service: EntityService):
"""Test finding entity by type and name combination."""
entity1_data = EntitySchema(
title="TestEntity1",
folder="test",
entity_type="test",
)
entity1 = await entity_service.create_entity(entity1_data)
entity2_data = EntitySchema(
title="TestEntity2",
folder="test",
entity_type="test",
)
entity2 = await entity_service.create_entity(entity2_data)
    # Find entity1 by permalink
found = await entity_service.get_by_permalink(entity1_data.permalink)
assert found is not None
assert found.id == entity1.id
assert found.entity_type == entity1.entity_type
    # Find entity2 by permalink
found = await entity_service.get_by_permalink(entity2_data.permalink)
assert found is not None
assert found.id == entity2.id
assert found.entity_type == entity2.entity_type
# Test not found case
with pytest.raises(EntityNotFoundError):
await entity_service.get_by_permalink("nonexistent/test_entity")
@pytest.mark.asyncio
async def test_get_entity_success(entity_service: EntityService):
"""Test successful entity retrieval."""
entity_data = EntitySchema(
title="TestEntity",
folder="test",
entity_type="test",
)
await entity_service.create_entity(entity_data)
# Get by permalink
retrieved = await entity_service.get_by_permalink(entity_data.permalink)
assert isinstance(retrieved, EntityModel)
assert retrieved.title == "TestEntity"
assert retrieved.entity_type == "test"
@pytest.mark.asyncio
async def test_delete_entity_success(entity_service: EntityService):
"""Test successful entity deletion."""
entity_data = EntitySchema(
title="TestEntity",
folder="test",
entity_type="test",
)
await entity_service.create_entity(entity_data)
# Act using permalink
result = await entity_service.delete_entity(entity_data.permalink)
# Assert
assert result is True
with pytest.raises(EntityNotFoundError):
await entity_service.get_by_permalink(entity_data.permalink)
@pytest.mark.asyncio
async def test_delete_entity_by_id(entity_service: EntityService):
"""Test successful entity deletion."""
entity_data = EntitySchema(
title="TestEntity",
folder="test",
entity_type="test",
)
created = await entity_service.create_entity(entity_data)
    # Act using id
result = await entity_service.delete_entity(created.id)
# Assert
assert result is True
with pytest.raises(EntityNotFoundError):
await entity_service.get_by_permalink(entity_data.permalink)
@pytest.mark.asyncio
async def test_get_entity_by_permalink_not_found(entity_service: EntityService):
"""Test handling of non-existent entity retrieval."""
with pytest.raises(EntityNotFoundError):
await entity_service.get_by_permalink("test/non_existent")
@pytest.mark.asyncio
async def test_delete_nonexistent_entity(entity_service: EntityService):
"""Test deleting an entity that doesn't exist."""
assert await entity_service.delete_entity("test/non_existent") is True
@pytest.mark.asyncio
async def test_create_entity_with_special_chars(entity_service: EntityService):
"""Test entity creation with special characters in name and description."""
name = "TestEntity_$pecial chars & symbols!" # Note: Using valid path characters
entity_data = EntitySchema(
title=name,
folder="test",
entity_type="test",
)
entity = await entity_service.create_entity(entity_data)
assert entity.title == name
# Verify after retrieval using permalink
await entity_service.get_by_permalink(entity_data.permalink)
@pytest.mark.asyncio
async def test_get_entities_by_permalinks(entity_service: EntityService):
"""Test opening multiple entities by path IDs."""
# Create test entities
entity1_data = EntitySchema(
title="Entity1",
folder="test",
entity_type="test",
)
entity2_data = EntitySchema(
title="Entity2",
folder="test",
entity_type="test",
)
await entity_service.create_entity(entity1_data)
await entity_service.create_entity(entity2_data)
    # Open entities by permalink
permalinks = [entity1_data.permalink, entity2_data.permalink]
found = await entity_service.get_entities_by_permalinks(permalinks)
assert len(found) == 2
names = {e.title for e in found}
assert names == {"Entity1", "Entity2"}
@pytest.mark.asyncio
async def test_get_entities_empty_input(entity_service: EntityService):
"""Test opening nodes with empty path ID list."""
found = await entity_service.get_entities_by_permalinks([])
assert len(found) == 0
@pytest.mark.asyncio
async def test_get_entities_some_not_found(entity_service: EntityService):
"""Test opening nodes with mix of existing and non-existent path IDs."""
# Create one test entity
entity_data = EntitySchema(
title="Entity1",
folder="test",
entity_type="test",
)
await entity_service.create_entity(entity_data)
# Try to open two nodes, one exists, one doesn't
permalinks = [entity_data.permalink, "type1/non_existent"]
found = await entity_service.get_entities_by_permalinks(permalinks)
assert len(found) == 1
assert found[0].title == "Entity1"
@pytest.mark.asyncio
async def test_get_entity_path(entity_service: EntityService):
"""Should generate correct filesystem path for entity."""
entity = EntityModel(
permalink="test-entity",
file_path="test-entity.md",
entity_type="test",
)
path = entity_service.file_service.get_entity_path(entity)
    assert path == entity_service.file_service.base_path / "test-entity.md"
@pytest.mark.asyncio
async def test_update_note_entity_content(entity_service: EntityService, file_service: FileService):
"""Should update note content directly."""
# Create test entity
schema = EntitySchema(
title="test",
folder="test",
entity_type="note",
entity_metadata={"status": "draft"},
)
entity = await entity_service.create_entity(schema)
assert entity.entity_metadata.get("status") == "draft"
# Update content with a relation
schema.content = """
# Updated [[Content]]
- references [[new content]]
- [note] This is new content.
"""
updated = await entity_service.update_entity(entity, schema)
# Verify file has new content but preserved metadata
file_path = file_service.get_entity_path(updated)
content, _ = await file_service.read_file(file_path)
assert "# Updated [[Content]]" in content
assert "- references [[new content]]" in content
assert "- [note] This is new content" in content
# Verify metadata was preserved
_, frontmatter, _ = content.split("---", 2)
metadata = yaml.safe_load(frontmatter)
assert metadata.get("status") == "draft"
@pytest.mark.asyncio
async def test_create_or_update_new(entity_service: EntityService, file_service: FileService):
"""Should create a new entity."""
# Create test entity
entity, created = await entity_service.create_or_update_entity(
EntitySchema(
title="test",
folder="test",
entity_type="test",
entity_metadata={"status": "draft"},
)
)
assert entity.title == "test"
assert created is True
@pytest.mark.asyncio
async def test_create_or_update_existing(entity_service: EntityService, file_service: FileService):
"""Should update entity name in both DB and frontmatter."""
# Create test entity
entity = await entity_service.create_entity(
EntitySchema(
title="test",
folder="test",
entity_type="test",
content="Test entity",
entity_metadata={"status": "final"},
)
)
entity.content = "Updated content"
    # Update content
updated, created = await entity_service.create_or_update_entity(entity)
assert updated.title == "test"
assert updated.entity_metadata["status"] == "final"
assert created is False
@pytest.mark.asyncio
async def test_create_with_content(entity_service: EntityService, file_service: FileService):
# contains frontmatter
content = dedent(
"""
---
permalink: git-workflow-guide
---
# Git Workflow Guide
A guide to our [[Git]] workflow. This uses some ideas from [[Trunk Based Development]].
## Best Practices
Use branches effectively:
- [design] Keep feature branches short-lived #git #workflow (Reduces merge conflicts)
- implements [[Branch Strategy]] (Our standard workflow)
## Common Commands
See the [[Git Cheat Sheet]] for reference.
"""
)
# Create test entity
entity, created = await entity_service.create_or_update_entity(
EntitySchema(
title="Git Workflow Guide",
folder="test",
entity_type="test",
content=content,
)
)
assert created is True
assert entity.title == "Git Workflow Guide"
assert entity.entity_type == "test"
assert entity.permalink == "git-workflow-guide"
assert entity.file_path == "test/Git Workflow Guide.md"
assert len(entity.observations) == 1
assert entity.observations[0].category == "design"
assert entity.observations[0].content == "Keep feature branches short-lived #git #workflow"
assert set(entity.observations[0].tags) == {"git", "workflow"}
assert entity.observations[0].context == "Reduces merge conflicts"
assert len(entity.relations) == 4
assert entity.relations[0].relation_type == "links_to"
assert entity.relations[0].to_name == "Git"
assert entity.relations[1].relation_type == "links_to"
assert entity.relations[1].to_name == "Trunk Based Development"
assert entity.relations[2].relation_type == "implements"
assert entity.relations[2].to_name == "Branch Strategy"
assert entity.relations[2].context == "Our standard workflow"
assert entity.relations[3].relation_type == "links_to"
assert entity.relations[3].to_name == "Git Cheat Sheet"
    # Verify the file was written with normalized frontmatter and the original content
file_path = file_service.get_entity_path(entity)
file_content, _ = await file_service.read_file(file_path)
    # assert file content
    # note the custom permalink from the input frontmatter is preserved
expected = dedent("""
---
title: Git Workflow Guide
type: test
permalink: git-workflow-guide
---
# Git Workflow Guide
A guide to our [[Git]] workflow. This uses some ideas from [[Trunk Based Development]].
## Best Practices
Use branches effectively:
- [design] Keep feature branches short-lived #git #workflow (Reduces merge conflicts)
- implements [[Branch Strategy]] (Our standard workflow)
## Common Commands
See the [[Git Cheat Sheet]] for reference.
""").strip()
assert expected == file_content
@pytest.mark.asyncio
async def test_update_with_content(entity_service: EntityService, file_service: FileService):
content = """# Git Workflow Guide"""
# Create test entity
entity, created = await entity_service.create_or_update_entity(
EntitySchema(
title="Git Workflow Guide",
entity_type="test",
folder="test",
content=content,
)
)
assert created is True
assert entity.title == "Git Workflow Guide"
assert len(entity.observations) == 0
assert len(entity.relations) == 0
    # Verify the initial file content
file_path = file_service.get_entity_path(entity)
file_content, _ = await file_service.read_file(file_path)
# assert content is in file
assert (
dedent(
"""
---
title: Git Workflow Guide
type: test
permalink: test/git-workflow-guide
---
# Git Workflow Guide
"""
).strip()
== file_content
)
# now update the content
update_content = dedent(
"""
---
title: Git Workflow Guide
type: test
permalink: git-workflow-guide
---
# Git Workflow Guide
A guide to our [[Git]] workflow. This uses some ideas from [[Trunk Based Development]].
## Best Practices
Use branches effectively:
- [design] Keep feature branches short-lived #git #workflow (Reduces merge conflicts)
- implements [[Branch Strategy]] (Our standard workflow)
## Common Commands
See the [[Git Cheat Sheet]] for reference.
"""
).strip()
# update entity
entity, created = await entity_service.create_or_update_entity(
EntitySchema(
title="Git Workflow Guide",
folder="test",
entity_type="test",
content=update_content,
)
)
assert created is False
assert entity.title == "Git Workflow Guide"
# assert custom permalink value
assert entity.permalink == "git-workflow-guide"
assert len(entity.observations) == 1
assert entity.observations[0].category == "design"
assert entity.observations[0].content == "Keep feature branches short-lived #git #workflow"
assert set(entity.observations[0].tags) == {"git", "workflow"}
assert entity.observations[0].context == "Reduces merge conflicts"
assert len(entity.relations) == 4
assert entity.relations[0].relation_type == "links_to"
assert entity.relations[0].to_name == "Git"
assert entity.relations[1].relation_type == "links_to"
assert entity.relations[1].to_name == "Trunk Based Development"
assert entity.relations[2].relation_type == "implements"
assert entity.relations[2].to_name == "Branch Strategy"
assert entity.relations[2].context == "Our standard workflow"
assert entity.relations[3].relation_type == "links_to"
assert entity.relations[3].to_name == "Git Cheat Sheet"
# Verify file has new content but preserved metadata
file_path = file_service.get_entity_path(entity)
file_content, _ = await file_service.read_file(file_path)
# assert content is in file
assert update_content.strip() == file_content
@pytest.mark.asyncio
async def test_create_with_no_frontmatter(
project_config: ProjectConfig,
entity_parser: EntityParser,
entity_service: EntityService,
file_service: FileService,
):
# contains no frontmatter
content = "# Git Workflow Guide"
file_path = Path("test/Git Workflow Guide.md")
full_path = project_config.home / file_path
await file_service.write_file(Path(full_path), content)
entity_markdown = await entity_parser.parse_file(full_path)
created = await entity_service.create_entity_from_markdown(file_path, entity_markdown)
file_content, _ = await file_service.read_file(created.file_path)
assert file_path.as_posix() == created.file_path
assert created.title == "Git Workflow Guide"
assert created.entity_type == "note"
assert created.permalink is None
# assert file
expected = dedent("""
# Git Workflow Guide
""").strip()
assert expected == file_content
@pytest.mark.asyncio
async def test_edit_entity_append(entity_service: EntityService, file_service: FileService):
"""Test appending content to an entity."""
# Create test entity
entity = await entity_service.create_entity(
EntitySchema(
title="Test Note",
folder="test",
entity_type="note",
content="Original content",
)
)
# Edit entity with append operation
updated = await entity_service.edit_entity(
identifier=entity.permalink, operation="append", content="Appended content"
)
# Verify content was appended
file_path = file_service.get_entity_path(updated)
file_content, _ = await file_service.read_file(file_path)
assert "Original content" in file_content
assert "Appended content" in file_content
assert file_content.index("Original content") < file_content.index("Appended content")
@pytest.mark.asyncio
async def test_edit_entity_prepend(entity_service: EntityService, file_service: FileService):
"""Test prepending content to an entity."""
# Create test entity
entity = await entity_service.create_entity(
EntitySchema(
title="Test Note",
folder="test",
entity_type="note",
content="Original content",
)
)
# Edit entity with prepend operation
updated = await entity_service.edit_entity(
identifier=entity.permalink, operation="prepend", content="Prepended content"
)
# Verify content was prepended
file_path = file_service.get_entity_path(updated)
file_content, _ = await file_service.read_file(file_path)
assert "Original content" in file_content
assert "Prepended content" in file_content
assert file_content.index("Prepended content") < file_content.index("Original content")
@pytest.mark.asyncio
async def test_edit_entity_find_replace(entity_service: EntityService, file_service: FileService):
"""Test find and replace operation on an entity."""
# Create test entity with specific content to replace
entity = await entity_service.create_entity(
EntitySchema(
title="Test Note",
folder="test",
entity_type="note",
content="This is old content that needs updating",
)
)
# Edit entity with find_replace operation
updated = await entity_service.edit_entity(
identifier=entity.permalink,
operation="find_replace",
content="new content",
find_text="old content",
)
# Verify content was replaced
file_path = file_service.get_entity_path(updated)
file_content, _ = await file_service.read_file(file_path)
assert "old content" not in file_content
assert "This is new content that needs updating" in file_content
@pytest.mark.asyncio
async def test_edit_entity_replace_section(
entity_service: EntityService, file_service: FileService
):
"""Test replacing a specific section in an entity."""
# Create test entity with sections
content = dedent("""
# Main Title
## Section 1
Original section 1 content
## Section 2
Original section 2 content
""").strip()
entity = await entity_service.create_entity(
EntitySchema(
title="Sample Note",
folder="docs",
entity_type="note",
content=content,
)
)
# Edit entity with replace_section operation
updated = await entity_service.edit_entity(
identifier=entity.permalink,
operation="replace_section",
content="New section 1 content",
section="## Section 1",
)
# Verify section was replaced
file_path = file_service.get_entity_path(updated)
file_content, _ = await file_service.read_file(file_path)
assert "New section 1 content" in file_content
assert "Original section 1 content" not in file_content
assert "Original section 2 content" in file_content # Other sections preserved
@pytest.mark.asyncio
async def test_edit_entity_replace_section_create_new(
entity_service: EntityService, file_service: FileService
):
"""Test replacing a section that doesn't exist creates it."""
# Create test entity without the section
entity = await entity_service.create_entity(
EntitySchema(
title="Test Note",
folder="test",
entity_type="note",
content="# Main Title\n\nSome content",
)
)
# Edit entity with replace_section operation for non-existent section
updated = await entity_service.edit_entity(
identifier=entity.permalink,
operation="replace_section",
content="New section content",
section="## New Section",
)
# Verify section was created
file_path = file_service.get_entity_path(updated)
file_content, _ = await file_service.read_file(file_path)
assert "## New Section" in file_content
assert "New section content" in file_content
@pytest.mark.asyncio
async def test_edit_entity_not_found(entity_service: EntityService):
"""Test editing a non-existent entity raises error."""
with pytest.raises(EntityNotFoundError):
await entity_service.edit_entity(
identifier="non-existent", operation="append", content="content"
)
@pytest.mark.asyncio
async def test_edit_entity_invalid_operation(entity_service: EntityService):
"""Test editing with invalid operation raises error."""
# Create test entity
entity = await entity_service.create_entity(
EntitySchema(
title="Test Note",
folder="test",
entity_type="note",
content="Original content",
)
)
with pytest.raises(ValueError, match="Unsupported operation"):
await entity_service.edit_entity(
identifier=entity.permalink, operation="invalid_operation", content="content"
)
@pytest.mark.asyncio
async def test_edit_entity_find_replace_missing_find_text(entity_service: EntityService):
"""Test find_replace operation without find_text raises error."""
# Create test entity
entity = await entity_service.create_entity(
EntitySchema(
title="Test Note",
folder="test",
entity_type="note",
content="Original content",
)
)
with pytest.raises(ValueError, match="find_text is required"):
await entity_service.edit_entity(
identifier=entity.permalink, operation="find_replace", content="new content"
)
@pytest.mark.asyncio
async def test_edit_entity_replace_section_missing_section(entity_service: EntityService):
"""Test replace_section operation without section parameter raises error."""
# Create test entity
entity = await entity_service.create_entity(
EntitySchema(
title="Test Note",
folder="test",
entity_type="note",
content="Original content",
)
)
with pytest.raises(ValueError, match="section is required"):
await entity_service.edit_entity(
identifier=entity.permalink, operation="replace_section", content="new content"
)
@pytest.mark.asyncio
async def test_edit_entity_with_observations_and_relations(
entity_service: EntityService, file_service: FileService
):
"""Test editing entity updates observations and relations correctly."""
# Create test entity with observations and relations
content = dedent("""
# Test Note
- [note] This is an observation
- links to [[Other Entity]]
Original content
""").strip()
entity = await entity_service.create_entity(
EntitySchema(
title="Sample Note",
folder="docs",
entity_type="note",
content=content,
)
)
# Verify initial state
assert len(entity.observations) == 1
assert len(entity.relations) == 1
# Edit entity by appending content with new observations/relations
updated = await entity_service.edit_entity(
identifier=entity.permalink,
operation="append",
content="\n- [category] New observation\n- relates to [[New Entity]]",
)
# Verify observations and relations were updated
assert len(updated.observations) == 2
assert len(updated.relations) == 2
# Check new observation
new_obs = [obs for obs in updated.observations if obs.category == "category"][0]
assert new_obs.content == "New observation"
# Check new relation
new_rel = [rel for rel in updated.relations if rel.to_name == "New Entity"][0]
assert new_rel.relation_type == "relates to"
@pytest.mark.asyncio
async def test_create_entity_from_markdown_with_upsert(
entity_service: EntityService, file_service: FileService
):
"""Test that create_entity_from_markdown uses UPSERT approach for conflict resolution."""
file_path = Path("test/upsert-test.md")
    # Build a real EntityMarkdown object
from basic_memory.markdown.schemas import (
EntityFrontmatter,
EntityMarkdown as RealEntityMarkdown,
)
from datetime import datetime, timezone
frontmatter = EntityFrontmatter(metadata={"title": "UPSERT Test", "type": "test"})
markdown = RealEntityMarkdown(
frontmatter=frontmatter,
observations=[],
relations=[],
created=datetime.now(timezone.utc),
modified=datetime.now(timezone.utc),
)
# Call the method - should succeed without complex exception handling
result = await entity_service.create_entity_from_markdown(file_path, markdown)
# Verify it created the entity successfully using the UPSERT approach
assert result is not None
assert result.title == "UPSERT Test"
assert result.file_path == file_path.as_posix()
# create_entity_from_markdown sets checksum to None (incomplete sync)
assert result.checksum is None
@pytest.mark.asyncio
async def test_create_entity_from_markdown_error_handling(
entity_service: EntityService, file_service: FileService, monkeypatch
):
"""Test that create_entity_from_markdown handles repository errors gracefully."""
from basic_memory.services.exceptions import EntityCreationError
file_path = Path("test/error-test.md")
    # Build a real EntityMarkdown object
from basic_memory.markdown.schemas import (
EntityFrontmatter,
EntityMarkdown as RealEntityMarkdown,
)
from datetime import datetime, timezone
frontmatter = EntityFrontmatter(metadata={"title": "Error Test", "type": "test"})
markdown = RealEntityMarkdown(
frontmatter=frontmatter,
observations=[],
relations=[],
created=datetime.now(timezone.utc),
modified=datetime.now(timezone.utc),
)
# Mock the repository.upsert_entity to raise a general error
async def mock_upsert(*args, **kwargs):
# Simulate a general database error
raise Exception("Database connection failed")
monkeypatch.setattr(entity_service.repository, "upsert_entity", mock_upsert)
# Should wrap the error in EntityCreationError
with pytest.raises(EntityCreationError, match="Failed to create entity"):
await entity_service.create_entity_from_markdown(file_path, markdown)
# Edge case tests for find_replace operation
@pytest.mark.asyncio
async def test_edit_entity_find_replace_not_found(entity_service: EntityService):
"""Test find_replace operation when text is not found."""
# Create test entity
entity = await entity_service.create_entity(
EntitySchema(
title="Test Note",
folder="test",
entity_type="note",
content="This is some content",
)
)
# Try to replace text that doesn't exist
with pytest.raises(ValueError, match="Text to replace not found: 'nonexistent'"):
await entity_service.edit_entity(
identifier=entity.permalink,
operation="find_replace",
content="new content",
find_text="nonexistent",
)
@pytest.mark.asyncio
async def test_edit_entity_find_replace_multiple_occurrences_expected_one(
entity_service: EntityService,
):
"""Test find_replace with multiple occurrences when expecting one."""
# Create entity with repeated text (avoiding "test" since it appears in frontmatter)
entity = await entity_service.create_entity(
EntitySchema(
title="Sample Note",
folder="docs",
entity_type="note",
content="The word banana appears here. Another banana word here.",
)
)
# Try to replace with expected count of 1 when there are 2
with pytest.raises(ValueError, match="Expected 1 occurrences of 'banana', but found 2"):
await entity_service.edit_entity(
identifier=entity.permalink,
operation="find_replace",
content="replacement",
find_text="banana",
expected_replacements=1,
)
@pytest.mark.asyncio
async def test_edit_entity_find_replace_multiple_occurrences_success(
entity_service: EntityService, file_service: FileService
):
"""Test find_replace with multiple occurrences when expected count matches."""
# Create test entity with repeated text (avoiding "test" since it appears in frontmatter)
entity = await entity_service.create_entity(
EntitySchema(
title="Sample Note",
folder="docs",
entity_type="note",
content="The word banana appears here. Another banana word here.",
)
)
# Replace with correct expected count
updated = await entity_service.edit_entity(
identifier=entity.permalink,
operation="find_replace",
content="apple",
find_text="banana",
expected_replacements=2,
)
# Verify both instances were replaced
file_path = file_service.get_entity_path(updated)
file_content, _ = await file_service.read_file(file_path)
assert "The word apple appears here. Another apple word here." in file_content
@pytest.mark.asyncio
async def test_edit_entity_find_replace_empty_find_text(entity_service: EntityService):
"""Test find_replace with empty find_text."""
# Create test entity
entity = await entity_service.create_entity(
EntitySchema(
title="Test Note",
folder="test",
entity_type="note",
content="Some content",
)
)
# Try with empty find_text
with pytest.raises(ValueError, match="find_text cannot be empty or whitespace only"):
await entity_service.edit_entity(
identifier=entity.permalink,
operation="find_replace",
content="new content",
find_text=" ", # whitespace only
)
@pytest.mark.asyncio
async def test_edit_entity_find_replace_multiline(
entity_service: EntityService, file_service: FileService
):
"""Test find_replace with multiline text."""
# Create test entity with multiline content
content = dedent("""
# Title
This is a paragraph
that spans multiple lines
and needs replacement.
Other content.
""").strip()
entity = await entity_service.create_entity(
EntitySchema(
title="Sample Note",
folder="docs",
entity_type="note",
content=content,
)
)
# Replace multiline text
find_text = "This is a paragraph\nthat spans multiple lines\nand needs replacement."
new_text = "This is new content\nthat replaces the old paragraph."
updated = await entity_service.edit_entity(
identifier=entity.permalink, operation="find_replace", content=new_text, find_text=find_text
)
# Verify replacement worked
file_path = file_service.get_entity_path(updated)
file_content, _ = await file_service.read_file(file_path)
assert "This is new content\nthat replaces the old paragraph." in file_content
assert "Other content." in file_content # Make sure rest is preserved
# Edge case tests for replace_section operation
@pytest.mark.asyncio
async def test_edit_entity_replace_section_multiple_sections_error(entity_service: EntityService):
"""Test replace_section with multiple sections having same header."""
# Create test entity with duplicate section headers
content = dedent("""
# Main Title
## Section 1
First instance content
## Section 2
Some content
## Section 1
Second instance content
""").strip()
entity = await entity_service.create_entity(
EntitySchema(
title="Sample Note",
folder="docs",
entity_type="note",
content=content,
)
)
# Try to replace section when multiple exist
with pytest.raises(ValueError, match="Multiple sections found with header '## Section 1'"):
await entity_service.edit_entity(
identifier=entity.permalink,
operation="replace_section",
content="New content",
section="## Section 1",
)
@pytest.mark.asyncio
async def test_edit_entity_replace_section_empty_section(entity_service: EntityService):
"""Test replace_section with empty section parameter."""
# Create test entity
entity = await entity_service.create_entity(
EntitySchema(
title="Test Note",
folder="test",
entity_type="note",
content="Some content",
)
)
# Try with empty section
with pytest.raises(ValueError, match="section cannot be empty or whitespace only"):
await entity_service.edit_entity(
identifier=entity.permalink,
operation="replace_section",
content="new content",
section=" ", # whitespace only
)
@pytest.mark.asyncio
async def test_edit_entity_replace_section_header_variations(
entity_service: EntityService, file_service: FileService
):
"""Test replace_section with different header formatting."""
# Create entity with various header formats (avoiding "test" in frontmatter)
content = dedent("""
# Main Title
## Section Name
Original content
### Subsection
Sub content
""").strip()
entity = await entity_service.create_entity(
EntitySchema(
title="Sample Note",
folder="docs",
entity_type="note",
content=content,
)
)
# Test replacing with different header format (no ##)
updated = await entity_service.edit_entity(
identifier=entity.permalink,
operation="replace_section",
content="New section content",
section="Section Name", # No ## prefix
)
# Verify replacement worked
file_path = file_service.get_entity_path(updated)
file_content, _ = await file_service.read_file(file_path)
assert "New section content" in file_content
assert "Original content" not in file_content
assert "### Subsection" in file_content # Subsection preserved
@pytest.mark.asyncio
async def test_edit_entity_replace_section_at_end_of_document(
entity_service: EntityService, file_service: FileService
):
"""Test replace_section when section is at the end of document."""
# Create test entity with section at end
content = dedent("""
# Main Title
## First Section
First content
## Last Section
Last section content""").strip() # No trailing newline
entity = await entity_service.create_entity(
EntitySchema(
title="Sample Note",
folder="docs",
entity_type="note",
content=content,
)
)
# Replace the last section
updated = await entity_service.edit_entity(
identifier=entity.permalink,
operation="replace_section",
content="New last section content",
section="## Last Section",
)
# Verify replacement worked
file_path = file_service.get_entity_path(updated)
file_content, _ = await file_service.read_file(file_path)
assert "New last section content" in file_content
assert "Last section content" not in file_content
assert "First content" in file_content # Previous section preserved
@pytest.mark.asyncio
async def test_edit_entity_replace_section_with_subsections(
entity_service: EntityService, file_service: FileService
):
"""Test replace_section preserves subsections (stops at any header)."""
# Create test entity with nested sections
content = dedent("""
# Main Title
## Parent Section
Parent content
### Child Section 1
Child 1 content
### Child Section 2
Child 2 content
## Another Section
Other content
""").strip()
entity = await entity_service.create_entity(
EntitySchema(
title="Sample Note",
folder="docs",
entity_type="note",
content=content,
)
)
# Replace parent section (should only replace content until first subsection)
updated = await entity_service.edit_entity(
identifier=entity.permalink,
operation="replace_section",
content="New parent content",
section="## Parent Section",
)
# Verify replacement worked - only immediate content replaced, subsections preserved
file_path = file_service.get_entity_path(updated)
file_content, _ = await file_service.read_file(file_path)
assert "New parent content" in file_content
assert "Parent content" not in file_content # Original content replaced
assert "Child 1 content" in file_content # Child sections preserved
assert "Child 2 content" in file_content # Child sections preserved
assert "## Another Section" in file_content # Next section preserved
assert "Other content" in file_content
@pytest.mark.asyncio
async def test_edit_entity_replace_section_strips_duplicate_header(
entity_service: EntityService, file_service: FileService
):
"""Test that replace_section strips duplicate header from content (issue #390)."""
# Create test entity with a section
content = dedent("""
# Main Title
## Testing
Original content
## Another Section
Other content
""").strip()
entity = await entity_service.create_entity(
EntitySchema(
title="Sample Note",
folder="docs",
entity_type="note",
content=content,
)
)
# Replace section with content that includes the duplicate header
# (This is what LLMs sometimes do)
updated = await entity_service.edit_entity(
identifier=entity.permalink,
operation="replace_section",
content="## Testing\nNew content for testing section",
section="## Testing",
)
# Verify that we don't have duplicate headers
file_path = file_service.get_entity_path(updated)
file_content, _ = await file_service.read_file(file_path)
# Count occurrences of "## Testing" - should only be 1
testing_header_count = file_content.count("## Testing")
assert testing_header_count == 1, (
f"Expected 1 '## Testing' header, found {testing_header_count}"
)
assert "New content for testing section" in file_content
assert "Original content" not in file_content
assert "## Another Section" in file_content # Other sections preserved
# Move entity tests
@pytest.mark.asyncio
async def test_move_entity_success(
entity_service: EntityService,
file_service: FileService,
project_config: ProjectConfig,
):
"""Test successful entity move with basic settings."""
# Create test entity
entity = await entity_service.create_entity(
EntitySchema(
title="Test Note",
folder="original",
entity_type="note",
content="Original content",
)
)
# Verify original file exists
original_path = file_service.get_entity_path(entity)
assert await file_service.exists(original_path)
# Create app config with permalinks disabled
app_config = BasicMemoryConfig(update_permalinks_on_move=False)
# Move entity
assert entity.permalink == "original/test-note"
await entity_service.move_entity(
identifier=entity.permalink,
destination_path="moved/test-note.md",
project_config=project_config,
app_config=app_config,
)
# Verify original file no longer exists
assert not await file_service.exists(original_path)
# Verify new file exists
new_path = project_config.home / "moved/test-note.md"
assert new_path.exists()
# Verify database was updated
updated_entity = await entity_service.get_by_permalink(entity.permalink)
assert updated_entity.file_path == "moved/test-note.md"
# Verify file content is preserved
new_content, _ = await file_service.read_file("moved/test-note.md")
assert "Original content" in new_content
@pytest.mark.asyncio
async def test_move_entity_with_permalink_update(
entity_service: EntityService,
file_service: FileService,
project_config: ProjectConfig,
):
"""Test entity move with permalink updates enabled."""
# Create test entity
entity = await entity_service.create_entity(
EntitySchema(
title="Test Note",
folder="original",
entity_type="note",
content="Original content",
)
)
original_permalink = entity.permalink
# Create app config with permalinks enabled
app_config = BasicMemoryConfig(update_permalinks_on_move=True)
# Move entity
await entity_service.move_entity(
identifier=entity.permalink,
destination_path="moved/test-note.md",
project_config=project_config,
app_config=app_config,
)
# Verify entity was found by new path (since permalink changed)
moved_entity = await entity_service.link_resolver.resolve_link("moved/test-note.md")
assert moved_entity is not None
assert moved_entity.file_path == "moved/test-note.md"
assert moved_entity.permalink != original_permalink
# Verify frontmatter was updated with new permalink
new_content, _ = await file_service.read_file("moved/test-note.md")
assert moved_entity.permalink in new_content
@pytest.mark.asyncio
async def test_move_entity_creates_destination_directory(
entity_service: EntityService,
file_service: FileService,
project_config: ProjectConfig,
):
"""Test that moving creates destination directory if it doesn't exist."""
# Create test entity
entity = await entity_service.create_entity(
EntitySchema(
title="Test Note",
folder="original",
entity_type="note",
content="Original content",
)
)
app_config = BasicMemoryConfig(update_permalinks_on_move=False)
# Move to deeply nested path that doesn't exist
await entity_service.move_entity(
identifier=entity.permalink,
destination_path="deeply/nested/folders/test-note.md",
project_config=project_config,
app_config=app_config,
)
# Verify directory was created
new_path = project_config.home / "deeply/nested/folders/test-note.md"
assert new_path.exists()
assert new_path.parent.exists()
@pytest.mark.asyncio
async def test_move_entity_not_found(
entity_service: EntityService,
project_config: ProjectConfig,
):
"""Test moving non-existent entity raises error."""
app_config = BasicMemoryConfig(update_permalinks_on_move=False)
with pytest.raises(EntityNotFoundError, match="Entity not found: non-existent"):
await entity_service.move_entity(
identifier="non-existent",
destination_path="new/path.md",
project_config=project_config,
app_config=app_config,
)
@pytest.mark.asyncio
async def test_move_entity_source_file_missing(
entity_service: EntityService,
file_service: FileService,
project_config: ProjectConfig,
):
"""Test moving when source file doesn't exist on filesystem."""
# Create test entity
entity = await entity_service.create_entity(
EntitySchema(
title="Test Note",
folder="test",
entity_type="note",
content="Original content",
)
)
# Manually delete the file (simulating corruption/external deletion)
file_path = file_service.get_entity_path(entity)
file_path.unlink()
app_config = BasicMemoryConfig(update_permalinks_on_move=False)
with pytest.raises(ValueError, match="Source file not found:"):
await entity_service.move_entity(
identifier=entity.permalink,
destination_path="new/path.md",
project_config=project_config,
app_config=app_config,
)
@pytest.mark.asyncio
async def test_move_entity_destination_exists(
entity_service: EntityService,
file_service: FileService,
project_config: ProjectConfig,
):
"""Test moving to existing destination fails."""
# Create two test entities
entity1 = await entity_service.create_entity(
EntitySchema(
title="Test Note 1",
folder="test",
entity_type="note",
content="Content 1",
)
)
entity2 = await entity_service.create_entity(
EntitySchema(
title="Test Note 2",
folder="test",
entity_type="note",
content="Content 2",
)
)
app_config = BasicMemoryConfig(update_permalinks_on_move=False)
# Try to move entity1 to entity2's location
with pytest.raises(ValueError, match="Destination already exists:"):
await entity_service.move_entity(
identifier=entity1.permalink,
destination_path=entity2.file_path,
project_config=project_config,
app_config=app_config,
)
@pytest.mark.asyncio
async def test_move_entity_invalid_destination_path(
entity_service: EntityService,
project_config: ProjectConfig,
):
"""Test moving with invalid destination paths."""
# Create test entity
entity = await entity_service.create_entity(
EntitySchema(
title="Test Note",
folder="test",
entity_type="note",
content="Original content",
)
)
app_config = BasicMemoryConfig(update_permalinks_on_move=False)
# Test absolute path
with pytest.raises(ValueError, match="Invalid destination path:"):
await entity_service.move_entity(
identifier=entity.permalink,
destination_path="/absolute/path.md",
project_config=project_config,
app_config=app_config,
)
# Test empty path
with pytest.raises(ValueError, match="Invalid destination path:"):
await entity_service.move_entity(
identifier=entity.permalink,
destination_path="",
project_config=project_config,
app_config=app_config,
)
@pytest.mark.asyncio
async def test_move_entity_by_title(
entity_service: EntityService,
file_service: FileService,
project_config: ProjectConfig,
app_config: BasicMemoryConfig,
):
"""Test moving entity by title instead of permalink."""
# Create test entity
entity = await entity_service.create_entity(
EntitySchema(
title="Test Note",
folder="original",
entity_type="note",
content="Original content",
)
)
app_config = BasicMemoryConfig(update_permalinks_on_move=False)
# Move by title
await entity_service.move_entity(
identifier="Test Note", # Use title instead of permalink
destination_path="moved/test-note.md",
project_config=project_config,
app_config=app_config,
)
    # Verify old path no longer exists
    old_path = project_config.home / entity.file_path
    assert not old_path.exists()
# Verify new file exists
new_path = project_config.home / "moved/test-note.md"
assert new_path.exists()
@pytest.mark.asyncio
async def test_move_entity_preserves_observations_and_relations(
entity_service: EntityService,
file_service: FileService,
project_config: ProjectConfig,
):
"""Test that moving preserves entity observations and relations."""
# Create test entity with observations and relations
content = dedent("""
# Test Note
- [note] This is an observation #test
- links to [[Other Entity]]
Original content
""").strip()
entity = await entity_service.create_entity(
EntitySchema(
title="Test Note",
folder="original",
entity_type="note",
content=content,
)
)
# Verify initial observations and relations
assert len(entity.observations) == 1
assert len(entity.relations) == 1
app_config = BasicMemoryConfig(update_permalinks_on_move=False)
# Move entity
await entity_service.move_entity(
identifier=entity.permalink,
destination_path="moved/test-note.md",
project_config=project_config,
app_config=app_config,
)
# Get moved entity
moved_entity = await entity_service.link_resolver.resolve_link("moved/test-note.md")
# Verify observations and relations are preserved
assert len(moved_entity.observations) == 1
assert moved_entity.observations[0].content == "This is an observation #test"
assert len(moved_entity.relations) == 1
assert moved_entity.relations[0].to_name == "Other Entity"
# Verify file content includes observations and relations
new_content, _ = await file_service.read_file("moved/test-note.md")
assert "- [note] This is an observation #test" in new_content
assert "- links to [[Other Entity]]" in new_content
@pytest.mark.asyncio
async def test_move_entity_rollback_on_database_failure(
entity_service: EntityService,
file_service: FileService,
project_config: ProjectConfig,
entity_repository: EntityRepository,
):
"""Test that filesystem changes are rolled back on database failures."""
# Create test entity
entity = await entity_service.create_entity(
EntitySchema(
title="Test Note",
folder="original",
entity_type="note",
content="Original content",
)
)
original_path = file_service.get_entity_path(entity)
assert await file_service.exists(original_path)
app_config = BasicMemoryConfig(update_permalinks_on_move=False)
# Mock repository update to fail
original_update = entity_repository.update
async def failing_update(*args, **kwargs):
return None # Simulate failure
entity_repository.update = failing_update
try:
with pytest.raises(ValueError, match="Move failed:"):
await entity_service.move_entity(
identifier=entity.permalink,
destination_path="moved/test-note.md",
project_config=project_config,
app_config=app_config,
)
# Verify rollback - original file should still exist
assert await file_service.exists(original_path)
# Verify destination file was cleaned up
destination_path = project_config.home / "moved/test-note.md"
assert not destination_path.exists()
finally:
# Restore original update method
entity_repository.update = original_update
@pytest.mark.asyncio
async def test_move_entity_with_complex_observations(
entity_service: EntityService,
file_service: FileService,
project_config: ProjectConfig,
):
"""Test moving entity with complex observations (tags, context)."""
content = dedent("""
# Complex Note
- [design] Keep feature branches short-lived #git #workflow (Reduces merge conflicts)
- [tech] Using SQLite for storage #implementation (Fast and reliable)
- implements [[Branch Strategy]] (Our standard workflow)
Complex content with [[Multiple]] [[Links]].
""").strip()
entity = await entity_service.create_entity(
EntitySchema(
title="Complex Note",
folder="docs",
entity_type="note",
content=content,
)
)
# Verify complex structure
assert len(entity.observations) == 2
assert len(entity.relations) == 3 # 1 explicit + 2 wikilinks
app_config = BasicMemoryConfig(update_permalinks_on_move=False)
# Move entity
await entity_service.move_entity(
identifier=entity.permalink,
destination_path="moved/complex-note.md",
project_config=project_config,
app_config=app_config,
)
# Verify moved entity maintains structure
moved_entity = await entity_service.link_resolver.resolve_link("moved/complex-note.md")
# Check observations with tags and context
design_obs = [obs for obs in moved_entity.observations if obs.category == "design"][0]
assert "git" in design_obs.tags
assert "workflow" in design_obs.tags
assert design_obs.context == "Reduces merge conflicts"
tech_obs = [obs for obs in moved_entity.observations if obs.category == "tech"][0]
assert "implementation" in tech_obs.tags
assert tech_obs.context == "Fast and reliable"
# Check relations
relation_types = {rel.relation_type for rel in moved_entity.relations}
assert "implements" in relation_types
assert "links_to" in relation_types
relation_targets = {rel.to_name for rel in moved_entity.relations}
assert "Branch Strategy" in relation_targets
assert "Multiple" in relation_targets
assert "Links" in relation_targets
@pytest.mark.asyncio
async def test_move_entity_with_null_permalink_generates_permalink(
entity_service: EntityService,
project_config: ProjectConfig,
entity_repository: EntityRepository,
):
"""Test that moving entity with null permalink generates a new permalink automatically.
This tests the fix for issue #155 where entities with null permalinks from the database
migration would fail validation when being moved. The fix ensures that entities with
null permalinks get a generated permalink during move operations, regardless of the
update_permalinks_on_move setting.
"""
# Create entity through direct database insertion to simulate migrated entity with null permalink
from datetime import datetime, timezone
# Create an entity with null permalink directly in database (simulating migrated data)
entity_data = {
"title": "Test Entity",
"file_path": "test/null-permalink-entity.md",
"entity_type": "note",
"content_type": "text/markdown",
"permalink": None, # This is the key - null permalink from migration
"created_at": datetime.now(timezone.utc),
"updated_at": datetime.now(timezone.utc),
}
# Create the entity directly in database
created_entity = await entity_repository.create(entity_data)
assert created_entity.permalink is None
# Create the physical file
file_path = project_config.home / created_entity.file_path
file_path.parent.mkdir(parents=True, exist_ok=True)
file_path.write_text("# Test Entity\n\nContent here.")
# Configure move without permalink updates (the default setting that previously triggered the bug)
app_config = BasicMemoryConfig(update_permalinks_on_move=False)
# Move entity - this should now succeed and generate a permalink
moved_entity = await entity_service.move_entity(
identifier=created_entity.title, # Use title since permalink is None
destination_path="moved/test-entity.md",
project_config=project_config,
app_config=app_config,
)
# Verify the move succeeded and a permalink was generated
assert moved_entity is not None
assert moved_entity.file_path == "moved/test-entity.md"
assert moved_entity.permalink is not None
assert moved_entity.permalink != ""
# Verify the moved entity can be used to create an EntityResponse without validation errors
from basic_memory.schemas.response import EntityResponse
response = EntityResponse.model_validate(moved_entity)
assert response.permalink == moved_entity.permalink
# Verify the physical file was moved
old_path = project_config.home / "test/null-permalink-entity.md"
new_path = project_config.home / "moved/test-entity.md"
assert not old_path.exists()
assert new_path.exists()
@pytest.mark.asyncio
async def test_create_or_update_entity_fuzzy_search_bug(
entity_service: EntityService,
file_service: FileService,
project_config: ProjectConfig,
search_service: SearchService,
):
"""Test that create_or_update_entity doesn't incorrectly match similar entities via fuzzy search.
This reproduces the critical bug where creating "Node C" overwrote "Node A.md"
because fuzzy search incorrectly matched the similar file paths.
Root cause: link_resolver.resolve_link() uses fuzzy search fallback which matches
"edge-cases/Node C.md" to existing "edge-cases/Node A.md" because they share
similar words ("edge-cases", "Node").
Expected: Create new entity "Node C" with its own file
Actual Bug: Updates existing "Node A" entity, overwriting its file
"""
# Step 1: Create first entity "Node A"
entity_a = EntitySchema(
title="Node A",
folder="edge-cases",
entity_type="note",
content="# Node A\n\nOriginal content for Node A",
)
created_a, is_new_a = await entity_service.create_or_update_entity(entity_a)
assert is_new_a is True, "Node A should be created as new entity"
assert created_a.title == "Node A"
assert created_a.file_path == "edge-cases/Node A.md"
# CRITICAL: Index Node A in search to enable fuzzy search fallback
# This is what triggers the bug - without indexing, fuzzy search returns no results
await search_service.index_entity(created_a)
# Verify Node A file exists with correct content
file_a = project_config.home / "edge-cases" / "Node A.md"
assert file_a.exists(), "Node A.md file should exist"
content_a = file_a.read_text()
assert "Node A" in content_a
assert "Original content for Node A" in content_a
# Step 2: Create Node B to match live test scenario
entity_b = EntitySchema(
title="Node B",
folder="edge-cases",
entity_type="note",
content="# Node B\n\nContent for Node B",
)
created_b, is_new_b = await entity_service.create_or_update_entity(entity_b)
assert is_new_b is True
await search_service.index_entity(created_b)
# Step 3: Create Node C - this is where the bug occurs in live testing
# BUG: This will incorrectly match Node A via fuzzy search
entity_c = EntitySchema(
title="Node C",
folder="edge-cases",
entity_type="note",
content="# Node C\n\nContent for Node C",
)
created_c, is_new_c = await entity_service.create_or_update_entity(entity_c)
# CRITICAL ASSERTIONS: Node C should be created as NEW entity, not update Node A
assert is_new_c is True, "Node C should be created as NEW entity, not update existing"
assert created_c.title == "Node C", "Created entity should have title 'Node C'"
assert created_c.file_path == "edge-cases/Node C.md", "Should create Node C.md file"
assert created_c.id != created_a.id, "Node C should have different ID than Node A"
# Verify both files exist with correct content
file_c = project_config.home / "edge-cases" / "Node C.md"
assert file_c.exists(), "Node C.md file should exist as separate file"
# Re-read Node A file to ensure it wasn't overwritten
content_a_after = file_a.read_text()
assert "title: Node A" in content_a_after, "Node A.md should still have Node A title"
assert "Original content for Node A" in content_a_after, (
"Node A.md should NOT be overwritten with Node C content"
)
assert "Content for Node C" not in content_a_after, (
"Node A.md should not contain Node C content"
)
# Verify Node C file has correct content
content_c = file_c.read_text()
assert "title: Node C" in content_c, "Node C.md should have Node C title"
assert "Content for Node C" in content_c, "Node C.md should have Node C content"
assert "Original content for Node A" not in content_c, (
"Node C.md should not contain Node A content"
)
```
--------------------------------------------------------------------------------
/docs/ai-assistant-guide-extended.md:
--------------------------------------------------------------------------------
```markdown
# AI Assistant Guide for Basic Memory - Extended Edition
**This is the comprehensive guide for AI assistants using Basic Memory through MCP.**
> **Note for Developers**: This guide is organized into self-contained sections. You can copy/paste individual sections to create customized guides for specific use cases or AI assistants. Each section is designed to stand alone while also working as part of the complete guide.
## Table of Contents
1. [Understanding Basic Memory](#understanding-basic-memory)
2. [Project Management](#project-management)
3. [Knowledge Graph Fundamentals](#knowledge-graph-fundamentals)
4. [Writing Knowledge](#writing-knowledge)
5. [Reading and Navigation](#reading-and-navigation)
6. [Search and Discovery](#search-and-discovery)
7. [Building Context](#building-context)
8. [Recording Conversations](#recording-conversations)
9. [Editing Notes](#editing-notes)
10. [Moving and Organizing](#moving-and-organizing)
11. [Error Handling](#error-handling)
12. [Advanced Patterns](#advanced-patterns)
13. [Tool Reference](#tool-reference)
14. [Best Practices](#best-practices)
---
## Understanding Basic Memory
**Core Concept**: Basic Memory is a local-first knowledge management system that creates a semantic knowledge graph from markdown files. It enables persistent, structured knowledge that survives across AI sessions.
### Key Principles
**Local-First Architecture**
- All knowledge stored as plain text markdown files on user's computer
- SQLite database indexes files for fast search and navigation
- Files are the source of truth, database is derived state
- User maintains complete control over their data
**Semantic Knowledge Graph**
- Entities: Individual markdown files representing concepts
- Observations: Categorized facts with optional tags
- Relations: Directional links between entities
- Graph traversal enables context building and exploration
**Persistent Context**
- Knowledge persists across conversations
- AI can reference previous discussions
- Context builds over time through accumulated knowledge
- Enables long-term collaborative development
### AI as Knowledge Collaborator
Basic Memory's semantic knowledge graph - observations, relations, context building - is designed to help you (the AI assistant) provide better help to humans. You use the graph structure to:
- Build relevant context from past conversations
- Navigate connections between ideas
- Understand relationships and dependencies
- Provide continuity across sessions
**The distinction**: You're helping humans build enduring knowledge they'll own forever, not creating disposable agent memory. The better you use these tools, the more valuable their knowledge becomes over time. Think of markdown files as artifacts that will outlast any particular AI model - your job is to help create knowledge worth keeping.
### Architecture Overview
```
User's Markdown Files (Source of Truth)
↓
File Sync
↓
SQLite Database (Index)
↓
MCP Server
↓
AI Assistant
```
**Data Flow**:
1. User creates/edits markdown files
2. Sync process detects changes
3. Files are parsed and indexed in SQLite
4. MCP server exposes indexed data to AI
5. AI can query, traverse, and update knowledge graph
---
## Project Management
**Project Concept**: A project is a directory of markdown files with its own knowledge graph. Users can have multiple independent projects.
### Discovering Projects
**Always start by discovering available projects:**
```python
# List all projects
projects = await list_memory_projects()
# Response structure:
# [
# {
# "name": "main",
# "path": "/Users/name/notes",
# "is_default": True,
# "note_count": 156,
# "last_synced": "2025-01-15T10:30:00Z"
# },
# {
# "name": "work",
# "path": "/Users/name/work-notes",
# "is_default": False,
# "note_count": 89,
# "last_synced": "2025-01-14T16:45:00Z"
# }
# ]
```
**When to discover projects**:
- Start of conversation when project unknown
- User asks about available projects
- Before any operation requiring project selection
- After errors related to project not found
### Project Selection Patterns
**Single-Project Users**:
```python
# Enable default_project_mode in config
# ~/.basic-memory/config.json
{
"default_project": "main",
"default_project_mode": true
}
# Then tools work without project parameter
await write_note("Note", "Content", "folder")
await search_notes(query="test")
```
**Multi-Project Users**:
```python
# Keep default_project_mode disabled (default)
# Always specify project explicitly
# All tool calls require project
await write_note("Note", "Content", "folder", project="main")
await search_notes(query="test", project="work")
# Can target different projects in same conversation
results_main = await search_notes(query="auth", project="main")
results_work = await search_notes(query="auth", project="work")
```
**Recommended Workflow**:
```python
# 1. Discover projects
projects = await list_memory_projects()
# 2. Ask user which to use (if ambiguous)
# "I found 2 projects: 'main' and 'work'. Which should I use?"
# 3. Store choice for session
active_project = "main"
# 4. Use in all subsequent calls
results = await search_notes(query="topic", project=active_project)
```
### Cross-Project Operations
**Some tools work across all projects when the project parameter is omitted:**
```python
# Recent activity across all projects
activity = await recent_activity(timeframe="7d")
# Returns activity from all projects
# Recent activity for specific project
activity = await recent_activity(timeframe="7d", project="main")
# Returns activity only from "main" project
```
**Tools supporting cross-project mode**:
- `recent_activity()` - aggregate activity across projects
- `list_memory_projects()` - always returns all projects
- `sync_status()` - can show all projects or specific
### Creating Projects
**Create new projects programmatically:**
```python
# Create new project
await create_memory_project(
project_name="research",
project_path="/Users/name/Documents/research",
set_default=False
)
# Create and set as default
await create_memory_project(
project_name="primary",
project_path="/Users/name/notes",
set_default=True
)
```
**Use cases**:
- User requests new knowledge base
- Separating work/personal notes
- Project-specific documentation
- Client-specific knowledge
### Project Status
**Check sync status before operations:**
```python
# Check if sync complete
status = await sync_status(project="main")
# Response indicates:
# - sync_in_progress: bool
# - files_processed: int
# - files_remaining: int
# - last_sync: datetime
# - errors: list
# Wait for sync if needed
if status["sync_in_progress"]:
    # Inform user: "Sync in progress, please wait..."
    # Or proceed with available data
    pass
```
---
## Knowledge Graph Fundamentals
**The knowledge graph is built from three core elements: entities, observations, and relations.**
### Entities
**What is an Entity?**
- Any concept, document, or idea represented as a markdown file
- Has a unique title and permalink
- Contains frontmatter metadata
- Includes observations and relations
**Entity Structure**:
```markdown
---
title: Authentication System
permalink: authentication-system
tags: [security, auth, api]
type: note
created: 2025-01-10T14:30:00Z
updated: 2025-01-15T09:15:00Z
---
# Authentication System
## Context
Brief description of the entity
## Observations
- [category] Facts about this entity
## Relations
- relation_type [[Other Entity]]
```
**Entity Types**:
- `note`: General knowledge (default)
- `person`: People and contacts
- `project`: Projects and initiatives
- `meeting`: Meeting notes
- `decision`: Documented decisions
- `spec`: Technical specifications
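The entity type is carried in the frontmatter `type` field shown in the Entity Structure example above. For instance, a meeting note might start like this (illustrative sketch; the title and permalink are made up):

```markdown
---
title: Sprint Planning 2025-01-15
permalink: sprint-planning-2025-01-15
type: meeting
---

# Sprint Planning 2025-01-15
```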
### Observations
**Observations are categorized facts with optional tags.**
**Syntax**: `- [category] content #tag1 #tag2`
**Common Categories**:
- `[fact]`: Objective information
- `[idea]`: Thoughts and concepts
- `[decision]`: Choices made
- `[technique]`: Methods and approaches
- `[requirement]`: Needs and constraints
- `[question]`: Open questions
- `[insight]`: Key realizations
- `[problem]`: Issues identified
- `[solution]`: Resolutions
**Examples**:
```markdown
## Observations
- [decision] Use JWT tokens for authentication #security
- [technique] Hash passwords with bcrypt before storage #best-practice
- [requirement] Support OAuth 2.0 providers (Google, GitHub) #auth
- [fact] Session timeout set to 24 hours #configuration
- [problem] Password reset emails sometimes delayed #bug
- [solution] Implemented retry queue for email delivery #fix
- [insight] 2FA adoption increased security by 40% #metrics
```
**Why Categorize?**:
- Enables semantic search by observation type
- Helps AI understand context and intent
- Makes knowledge more queryable
- Provides structure for analysis
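The `- [category] content #tags (context)` shape is regular enough to sketch mechanically. This toy parser only illustrates the grammar; it is not Basic Memory's actual implementation:

```python
import re

# Toy illustration of the observation grammar, not Basic Memory's parser:
# "- [category] content #tag1 #tag2 (optional context)"
OBSERVATION = re.compile(
    r"^- \[(?P<category>[^\]]+)\]\s+(?P<body>.*?)(?:\s+\((?P<context>[^)]*)\))?$"
)

def parse_observation(line: str) -> dict:
    match = OBSERVATION.match(line.strip())
    if match is None:
        raise ValueError(f"not an observation line: {line!r}")
    body = match.group("body")
    # Tags are hashtag tokens; content is the body with tags stripped.
    tags = re.findall(r"#([\w-]+)", body)
    content = re.sub(r"\s*#[\w-]+", "", body).strip()
    return {
        "category": match.group("category"),
        "content": content,
        "tags": tags,
        "context": match.group("context"),
    }

obs = parse_observation("- [decision] Use JWT tokens for authentication #security")
# obs["category"] == "decision", obs["tags"] == ["security"]
```

The same decomposition (category, content, tags, optional parenthesized context) is what the observation assertions in the test suite earlier in this repository check for.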
### Relations
**Relations are directional links between entities.**
**Syntax**: `- relation_type [[Target Entity]]`
**Common Relation Types**:
- `relates_to`: General connection
- `implements`: Implementation of spec/design
- `requires`: Dependency relationship
- `extends`: Extension or enhancement
- `part_of`: Hierarchical membership
- `contrasts_with`: Opposite or alternative
- `caused_by`: Causal relationship
- `leads_to`: Sequential relationship
- `similar_to`: Similarity relationship
**Examples**:
```markdown
## Relations
- implements [[Authentication Spec v2]]
- requires [[User Database Schema]]
- extends [[Base Security Model]]
- part_of [[API Backend Services]]
- contrasts_with [[API Key Authentication]]
- leads_to [[Session Management]]
```
**Bidirectional Links**:
```markdown
# In "Login Flow" note
## Relations
- part_of [[Authentication System]]
# In "Authentication System" note
## Relations
- includes [[Login Flow]]
```
**Why explicit relation types matter**:
- Enables semantic graph traversal
- AI can understand relationship meaning
- Supports sophisticated context building
- Makes knowledge more navigable
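Relation lines follow an equally mechanical shape. As a toy sketch (again, not the actual implementation): note that the test suite earlier in this repository parses a multi-word type like `- links to [[Other Entity]]` into the relation type `links_to`, so the sketch normalizes spaces to underscores:

```python
import re

# Toy sketch of the relation grammar, not Basic Memory's parser:
# "- relation_type [[Target Entity]]" (any trailing "(context)" is ignored here).
RELATION = re.compile(r"^- (?P<type>[^\[\]]+?)\s+\[\[(?P<target>[^\]]+)\]\]")

def parse_relation(line: str) -> tuple[str, str]:
    match = RELATION.match(line.strip())
    if match is None:
        raise ValueError(f"not a relation line: {line!r}")
    # Multi-word types such as "links to" normalize to "links_to".
    relation_type = match.group("type").strip().replace(" ", "_")
    return relation_type, match.group("target")

print(parse_relation("- implements [[Authentication Spec v2]]"))
# → ("implements", "Authentication Spec v2")
```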
### Forward References
**You can reference entities that don't exist yet:**
```python
# Create note referencing non-existent entity
await write_note(
title="API Implementation",
content="""# API Implementation
## Relations
- implements [[API Specification]]
- requires [[Database Models]]
""",
folder="api",
project="main"
)
# Creates forward references to "API Specification" and "Database Models"
# Later, create referenced entities
await write_note(
title="API Specification",
content="# API Specification\n...",
folder="specs",
project="main"
)
# Forward reference automatically resolved!
await write_note(
title="Database Models",
content="# Database Models\n...",
folder="database",
project="main"
)
# Second forward reference resolved!
```
**How it works**:
1. Forward reference creates placeholder in knowledge graph
2. When target entity is created, relation is automatically resolved
3. Graph traversal works in both directions
4. No manual linking required
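The placeholder mechanism above can be sketched with a small table of unresolved targets (illustrative only; the real implementation lives in the database layer):

```python
# Illustrative sketch of forward-reference resolution
entities: set[str] = set()              # titles that exist
unresolved: dict[str, list[str]] = {}   # target title -> sources waiting on it

def add_relation(source: str, target: str) -> None:
    """Record a relation; park it as a placeholder if the target is missing."""
    if target not in entities:
        unresolved.setdefault(target, []).append(source)

def create_entity(title: str) -> list[str]:
    """Create an entity and resolve any placeholders pointing at it."""
    entities.add(title)
    return unresolved.pop(title, [])

add_relation("API Implementation", "API Specification")
resolved = create_entity("API Specification")  # placeholder now resolved
```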
**Use cases**:
- Planning features before implementation
- Creating outlines with linked topics
- Bottom-up knowledge building
- Incremental documentation
---
## Writing Knowledge
**Creating rich, well-structured notes is fundamental to building a useful knowledge graph.**
### Basic Note Creation
**Minimal note**:
```python
await write_note(
title="Quick Note",
content="# Quick Note\n\nSome basic content.",
folder="notes",
project="main"
)
```
**Well-structured note**:
```python
await write_note(
title="Database Design Decisions",
content="""# Database Design Decisions
## Context
Documenting our database architecture choices for the authentication system.
## Observations
- [decision] PostgreSQL chosen over MySQL for better JSON support #database
- [technique] Using UUID primary keys instead of auto-increment #design
- [requirement] Must support multi-tenant data isolation #security
- [fact] Expected load is 10K requests/minute #performance
- [insight] UUID keys enable easier horizontal scaling #scalability
## Relations
- implements [[Authentication System Spec]]
- requires [[Database Infrastructure]]
- relates_to [[API Design]]
- contrasts_with [[Previous MySQL Design]]
""",
folder="architecture",
tags=["database", "design", "authentication"],
project="main"
)
```
### Effective Observation Writing
**Good observations are**:
- **Specific**: Avoid vague statements
- **Categorized**: Use appropriate category
- **Tagged**: Add relevant tags
- **Atomic**: One fact per observation
- **Contextual**: Include enough detail
**Examples**:
**❌ Poor observations**:
```markdown
- [fact] We use a database
- [idea] Security is important
- [decision] Made some changes
```
**✓ Good observations**:
```markdown
- [fact] PostgreSQL 14 database runs on AWS RDS with 16GB RAM #infrastructure
- [decision] Implemented rate limiting at 100 requests/minute per user #security
- [technique] Using bcrypt with cost factor 12 for password hashing #cryptography
```
### Writing Effective Relations
**Relations should be**:
- **Directional**: Clear source and target
- **Typed**: Use meaningful relation type
- **Accurate**: Use exact entity titles
- **Purposeful**: Add value to graph
**Choosing relation types**:
```markdown
# Implementation relationship
- implements [[Feature Specification]]
# Dependency relationship
- requires [[User Authentication]]
- depends_on [[Database Connection]]
# Hierarchical relationship
- part_of [[Payment System]]
- includes [[Payment Validation]]
# Contrast relationship
- contrasts_with [[Alternative Approach]]
- alternative_to [[Previous Design]]
# Temporal relationship
- leads_to [[Next Phase]]
- follows [[Initial Setup]]
# Causal relationship
- caused_by [[Performance Issue]]
- results_in [[Optimization]]
```
### Note Templates
**Decision Record**:
```python
await write_note(
title="Decision: Use GraphQL for API",
content="""# Decision: Use GraphQL for API
## Context
Evaluating API architecture for new product features.
## Decision
Adopt GraphQL instead of REST for our API layer.
## Observations
- [decision] GraphQL chosen for flexible client queries #api
- [requirement] Frontend needs to minimize round trips #performance
- [technique] Apollo Server for GraphQL implementation #technology
- [fact] REST API still maintained for legacy clients #compatibility
- [insight] GraphQL reduced API calls by 60% in prototype #metrics
## Rationale
- Type safety reduces runtime errors
- Single endpoint simplifies deployment
- Built-in schema documentation
- Better mobile performance
## Consequences
- Team needs GraphQL training
- More complex caching strategy
- Additional monitoring required
## Relations
- implements [[API Architecture Plan]]
- requires [[GraphQL Schema Design]]
- affects [[Frontend Development]]
- replaces [[REST API v1]]
""",
folder="decisions",
tags=["decision", "api", "graphql"],
note_type="decision",
project="main"
)
```
**Meeting Notes**:
```python
await write_note(
title="API Review Meeting 2025-01-15",
content="""# API Review Meeting 2025-01-15
## Attendees
- Alice (Backend Lead)
- Bob (Frontend Lead)
- Carol (Product)
## Observations
- [decision] Finalized GraphQL schema for user endpoints #api
- [action] Bob to implement Apollo client integration by Friday #task
- [problem] Rate limiting causing issues in staging #bug
- [insight] GraphQL subscriptions reduce polling load significantly #performance
- [requirement] Need better error handling for network failures #frontend
## Action Items
- [ ] Implement rate limiting improvements (Alice)
- [ ] Apollo client setup (Bob)
- [ ] Document error handling patterns (Alice)
- [ ] Update API documentation (Carol)
## Relations
- relates_to [[API Architecture Plan]]
- references [[GraphQL Implementation]]
- follows_up [[API Planning Meeting 2025-01-08]]
""",
folder="meetings",
tags=["meeting", "api", "team"],
note_type="meeting",
project="main"
)
```
**Technical Specification**:
````python
await write_note(
title="User Authentication Spec",
content="""# User Authentication Spec
## Overview
Specification for user authentication system using JWT tokens.
## Observations
- [requirement] Support email/password and OAuth authentication #auth
- [requirement] JWT tokens expire after 24 hours #security
- [requirement] Refresh tokens valid for 30 days #security
- [technique] Use RS256 algorithm for token signing #cryptography
- [fact] Tokens include user_id, email, and roles claims #implementation
- [decision] Store refresh tokens in HTTP-only cookies #security
- [technique] Implement rate limiting on login endpoints #protection
## Technical Details
### Authentication Flow
1. User submits credentials
2. Server validates against database
3. Generate JWT access token
4. Generate refresh token
5. Return tokens to client
### Token Structure
```json
{
"user_id": "uuid",
"email": "[email protected]",
"roles": ["user"],
"exp": 1234567890,
"iat": 1234567890
}
```
## Relations
- implemented_by [[Authentication Service]]
- requires [[User Database Schema]]
- part_of [[Security Architecture]]
- extends [[OAuth 2.0 Spec]]
""",
folder="specs",
tags=["spec", "auth", "security"],
note_type="spec",
project="main"
)
````
### Tags Strategy
**Effective tagging**:
```python
# Technology tags
#python #fastapi #graphql #postgresql
# Domain tags
#auth #security #api #frontend #backend
# Status tags
#wip #completed #deprecated #planned
# Priority tags
#urgent #important #nice-to-have
# Category tags
#bug #feature #refactor #docs #test
```
**Example with strategic tags**:
```python
await write_note(
title="OAuth Integration",
content="""# OAuth Integration
## Observations
- [feature] Google OAuth integration completed #oauth #google #completed
- [feature] GitHub OAuth in progress #oauth #github #wip
- [requirement] Add Microsoft OAuth support #oauth #microsoft #planned
- [technique] Using authlib for OAuth flow #python #authlib
- [insight] OAuth reduces password reset requests by 80% #metrics #security
""",
folder="features",
tags=["oauth", "authentication", "integration"],
project="main"
)
```
---
## Reading and Navigation
**Reading notes and navigating the knowledge graph is fundamental to building context.**
### Reading by Identifier
**Read by title**:
```python
# Simple title
note = await read_note(
identifier="Authentication System",
project="main"
)
# Title in specific folder
note = await read_note(
identifier="specs/Authentication System",
project="main"
)
```
**Read by permalink**:
```python
# Permalink is auto-generated from title
note = await read_note(
identifier="authentication-system",
project="main"
)
# Permalink with folder
note = await read_note(
identifier="specs/authentication-system",
project="main"
)
```
### Reading by memory:// URL
**URL formats**:
```python
# By title
note = await read_note(
identifier="memory://Authentication System",
project="main"
)
# By folder and title
note = await read_note(
identifier="memory://specs/Authentication System",
project="main"
)
# By permalink
note = await read_note(
identifier="memory://authentication-system",
project="main"
)
# Wildcards for folder contents
notes = await read_note(
identifier="memory://specs/*",
project="main"
)
```
**Underscore normalization**:
```python
# Underscores automatically converted to hyphens
note = await read_note(
identifier="memory://my_note_title",
project="main"
)
# Finds entity with permalink "my-note-title"
# Both forms work
note1 = await read_note("memory://api_design", project="main")
note2 = await read_note("memory://api-design", project="main")
# Both find same entity
```
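The normalization shown above can be approximated like this (a sketch inferred from the examples; the actual permalink rules may handle more cases):

```python
import re

def normalize_permalink(identifier: str) -> str:
    """Approximate permalink normalization: lowercase, spaces/underscores to hyphens."""
    slug = identifier.removeprefix("memory://")
    slug = slug.lower().replace("_", "-").replace(" ", "-")
    slug = re.sub(r"-{2,}", "-", slug)  # collapse repeated hyphens
    return slug
```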
### Response Structure
**read_note response includes**:
```python
{
"title": "Authentication System",
"permalink": "authentication-system",
"content": "# Authentication System\n\n...",
"folder": "specs",
"tags": ["auth", "security"],
"type": "spec",
"created": "2025-01-10T14:30:00Z",
"updated": "2025-01-15T09:15:00Z",
"observations": [
{
"category": "decision",
"content": "Use JWT for authentication",
"tags": ["security"]
}
],
"relations": [
{
"type": "implemented_by",
"target": "Authentication Service",
"target_permalink": "authentication-service"
}
]
}
```
### Pagination
**For long notes, use pagination**:
```python
# First page (default: 10 items)
page1 = await read_note(
identifier="Long Document",
page=1,
page_size=10,
project="main"
)
# Second page
page2 = await read_note(
identifier="Long Document",
page=2,
page_size=10,
project="main"
)
# Large page size for complete content
full = await read_note(
identifier="Long Document",
page=1,
page_size=1000,
project="main"
)
```
### Reading Raw Content
**For non-markdown files or raw access**:
```python
# Read text file
content = await read_content(
path="config/settings.json",
project="main"
)
# Read image (returned as base64)
image = await read_content(
path="diagrams/architecture.png",
project="main"
)
# Read any file type
data = await read_content(
path="data/export.csv",
project="main"
)
```
**Difference from read_note**:
- `read_note`: Parses markdown, extracts knowledge graph
- `read_content`: Returns raw file content
- Use `read_note` for knowledge graph navigation
- Use `read_content` for non-markdown files
### Viewing as Artifact
**For better readability, use view_note**:
```python
# Display as formatted artifact
artifact = await view_note(
identifier="Authentication System",
project="main"
)
# Returns formatted markdown suitable for display
# - Syntax highlighting
# - Rendered markdown
# - Better visual presentation
```
**When to use view_note**:
- Showing content to user
- Presenting documentation
- Displaying specifications
- Better than raw markdown for reading
### Directory Browsing
**List directory contents**:
```python
# List top-level folders
root = await list_directory(
dir_name="/",
project="main"
)
# List specific folder
specs = await list_directory(
dir_name="specs",
project="main"
)
# Recursive listing
all_files = await list_directory(
dir_name="/",
depth=3,
project="main"
)
# Filter by pattern
markdown_files = await list_directory(
dir_name="docs",
file_name_glob="*.md",
project="main"
)
```
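The `file_name_glob` filter uses shell-style glob patterns; the stdlib `fnmatch` module gives the same matching semantics (an assumption based on the glob syntax shown, with hypothetical file names):

```python
from fnmatch import fnmatch

# Hypothetical directory listing
files = ["overview.md", "architecture.png", "api-spec.md", "notes.txt"]

# "*.md" keeps only the markdown files
markdown_files = [name for name in files if fnmatch(name, "*.md")]
```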
**Response structure**:
```python
{
"path": "specs",
"files": [
{
"name": "authentication-system.md",
"path": "specs/authentication-system.md",
"type": "file",
"size": 2048,
"modified": "2025-01-15T09:15:00Z"
}
],
"directories": [
{
"name": "api",
"path": "specs/api",
"type": "directory",
"file_count": 5
}
]
}
```
---
## Search and Discovery
**Search is the primary way to discover relevant knowledge.**
### Basic Search
**Simple text search**:
```python
# Search across all content
results = await search_notes(
query="authentication",
project="main"
)
# Search with pagination
results = await search_notes(
query="authentication",
page=1,
page_size=10,
project="main"
)
# Get more results
results = await search_notes(
query="authentication",
page=1,
page_size=50,
project="main"
)
```
### Advanced Search
**Filter by entity type**:
```python
# Search only specifications
specs = await search_notes(
query="authentication",
types=["spec"],
project="main"
)
# Search decisions and meetings
decisions = await search_notes(
query="api design",
types=["decision", "meeting"],
project="main"
)
```
**Filter by observation category**:
```python
# Find all decisions
decisions = await search_notes(
query="",
entity_types=["decision"],
project="main"
)
# Find problems and solutions
issues = await search_notes(
query="performance",
entity_types=["problem", "solution"],
project="main"
)
```
**Date filtering**:
```python
# Find recent changes
recent = await search_notes(
query="api",
after_date="2025-01-01",
project="main"
)
# Combine with other filters
recent_decisions = await search_notes(
query="authentication",
types=["decision"],
after_date="2025-01-01",
project="main"
)
```
### Search Types
**Text search (default)**:
```python
# Full-text search across all content
results = await search_notes(
query="JWT authentication",
search_type="text",
project="main"
)
```
**Semantic search**:
```python
# Semantic/vector search (if enabled)
results = await search_notes(
query="user login security",
search_type="semantic",
project="main"
)
```
### Search Response
**Result structure**:
```python
{
"results": [
{
"title": "Authentication System",
"permalink": "authentication-system",
"folder": "specs",
"snippet": "...JWT authentication for user login...",
"score": 0.95,
"tags": ["auth", "security"],
"type": "spec",
"updated": "2025-01-15T09:15:00Z"
}
],
"total": 15,
"page": 1,
"page_size": 10,
"has_more": true
}
```
### Search Strategies
**Broad to narrow**:
```python
# Start broad
all_auth = await search_notes(
query="authentication",
project="main"
)
# Narrow down
jwt_auth = await search_notes(
query="JWT authentication",
types=["spec", "decision"],
project="main"
)
# Very specific
recent_jwt = await search_notes(
query="JWT token implementation",
types=["spec"],
after_date="2025-01-01",
project="main"
)
```
**Find related content**:
```python
# 1. Search for main topic
auth_notes = await search_notes(
query="authentication",
project="main"
)
# 2. Read top result
main_note = await read_note(
identifier=auth_notes["results"][0]["permalink"],
project="main"
)
# 3. Build context from relations
context = await build_context(
url=f"memory://{main_note['permalink']}",
depth=2,
project="main"
)
# 4. Search for related terms from relations
for relation in main_note["relations"]:
related = await search_notes(
query=relation["target"],
project="main"
)
```
**Multi-faceted search**:
```python
# Search by different aspects
by_topic = await search_notes(query="API design", project="main")
by_author = await search_notes(query="Alice", project="main")
by_date = await search_notes(query="", after_date="2025-01-15", project="main")
by_tag = await search_notes(query="#security", project="main")
by_type = await search_notes(query="", types=["decision"], project="main")
# Combine for precision
precise = await search_notes(
query="API security",
types=["decision"],
after_date="2025-01-01",
project="main"
)
```
---
## Building Context
**Context building enables conversation continuity by traversing the knowledge graph.**
### Basic Context Building
**Simple context**:
```python
# Build context from entity
context = await build_context(
url="memory://Authentication System",
project="main"
)
# Returns:
# - The root entity
# - Directly related entities
# - Recent observations
# - Connection paths
```
### Depth Control
**Shallow context (depth=1)**:
```python
# Only immediate connections
shallow = await build_context(
url="memory://Authentication System",
depth=1,
project="main"
)
# Returns:
# - Root entity
# - Entities with direct relations
# - First-degree connections only
```
**Deep context (depth=2)**:
```python
# Two levels of connections
deep = await build_context(
url="memory://Authentication System",
depth=2,
project="main"
)
# Returns:
# - Root entity
# - Direct relations (depth 1)
# - Relations of relations (depth 2)
# - More comprehensive context
```
**Very deep context (depth=3+)**:
```python
# Three or more levels
very_deep = await build_context(
url="memory://Authentication System",
depth=3,
project="main"
)
# Warning: Can return a lot of data
# Use for comprehensive understanding
# May be slow for large graphs
```
### Timeframe Filtering
**Recent context**:
```python
# Last 7 days
recent = await build_context(
url="memory://Authentication System",
timeframe="7d",
project="main"
)
# Natural language timeframes
last_week = await build_context(
url="memory://API Design",
timeframe="1 week",
project="main"
)
last_month = await build_context(
url="memory://Project Planning",
timeframe="30 days",
project="main"
)
# Minimum: 1 day (enforced since v0.15.0)
```
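The timeframe strings above ("7d", "1 week", "30 days") suggest a parser roughly like this sketch, which covers only the formats shown here and assumes the documented one-day minimum (it is not the full grammar Basic Memory accepts):

```python
import re
from datetime import timedelta

UNIT_DAYS = {"d": 1, "day": 1, "days": 1, "week": 7, "weeks": 7}

def parse_timeframe(text: str) -> timedelta:
    """Parse '7d', '1 week', '30 days' into a timedelta (minimum one day)."""
    match = re.fullmatch(r"(\d+)\s*([a-z]+)", text.strip().lower())
    if not match:
        raise ValueError(f"unrecognized timeframe: {text!r}")
    count, unit = int(match.group(1)), match.group(2)
    days_per_unit = UNIT_DAYS.get(unit)
    if days_per_unit is None:
        raise ValueError(f"unrecognized unit: {unit!r}")
    # v0.15.0 enforces a one-day minimum
    return timedelta(days=max(count * days_per_unit, 1))
```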
**All-time context**:
```python
# No timeframe = all history
complete = await build_context(
url="memory://Authentication System",
depth=2,
project="main"
)
```
### Context Response Structure
**Response includes**:
```python
{
"root_entity": {
"title": "Authentication System",
"permalink": "authentication-system",
"content": "...",
"observations": [...],
"relations": [...]
},
"related_entities": [
{
"title": "User Database",
"permalink": "user-database",
"relation_type": "requires",
"distance": 1,
"content": "...",
"observations": [...],
"relations": [...]
},
{
"title": "Login API",
"permalink": "login-api",
"relation_type": "implemented_by",
"distance": 1,
"content": "...",
"observations": [...],
"relations": [...]
}
],
"paths": [
{
"from": "authentication-system",
"to": "login-api",
"path": [
{"entity": "authentication-system", "relation": "implemented_by"},
{"entity": "login-api"}
]
}
],
"summary": {
"total_entities": 5,
"total_relations": 8,
"max_depth": 2,
"timeframe": "7d"
}
}
```
### Context Building Patterns
**Continuing conversations**:
```python
# User: "Let's discuss authentication"
# 1. Search for topic
results = await search_notes(
query="authentication",
project="main"
)
# 2. Build context from most relevant
context = await build_context(
url=f"memory://{results['results'][0]['permalink']}",
depth=2,
timeframe="30d",
project="main"
)
# 3. Use context to inform response
# "Based on our previous work on authentication, including
# the JWT implementation and OAuth integration..."
```
**Exploring knowledge areas**:
```python
# Start with broad topic
initial = await build_context(
url="memory://API Design",
depth=1,
project="main"
)
# Explore interesting branches
for entity in initial["related_entities"]:
if "GraphQL" in entity["title"]:
graphql_context = await build_context(
url=f"memory://{entity['permalink']}",
depth=2,
project="main"
)
```
**Finding connection paths**:
```python
# Build context from both endpoints
start_context = await build_context(
url="memory://Frontend App",
depth=2,
project="main"
)
end_context = await build_context(
url="memory://Database Schema",
depth=2,
project="main"
)
# Analyze paths in response
# Shows how frontend connects to database through API layer
```
---
## Recording Conversations
**Capturing conversations in Basic Memory enables long-term context and knowledge accumulation.**
### Permission and Transparency
**Always ask before recording**:
```
AI: "Would you like me to save our discussion about API authentication
to Basic Memory? This will help us continue this conversation later
and build on what we've learned."
User: "Yes, please"
AI: [Saves to Basic Memory]
"I've saved our discussion to Basic Memory as 'API Authentication Discussion'."
```
**Be transparent**:
- Ask permission before saving
- Confirm after saving
- Explain what was saved
- Mention how it helps future conversations
**Your role**: As an AI collaborator, you're helping build knowledge that will outlast any particular AI model or session. Write observations and relations that will be valuable to the human (and future AI assistants) years from now. Think: enduring insights, not ephemeral chat logs. These plain-text markdown files are artifacts worth keeping - make them count.
### What to Record
**Good candidates for recording**:
1. **Decisions and Rationales**
```python
await write_note(
title="Decision: GraphQL vs REST",
content="""# Decision: GraphQL vs REST
## Context
User asked about API architecture choice.
## Decision
Chose GraphQL for new features, maintain REST for legacy.
## Observations
- [decision] GraphQL for flexibility and performance #api
- [requirement] Mobile app needs efficient data loading #mobile
- [fact] REST API has 50K existing clients #legacy
- [insight] Hybrid approach minimizes migration risk #strategy
## Relations
- implements [[API Modernization Plan]]
- affects [[Mobile Development]]
""",
folder="decisions",
project="main"
)
```
2. **Important Discoveries**
```python
await write_note(
title="Discovery: Database Performance Issue",
content="""# Discovery: Database Performance Issue
## Context
User reported slow login times.
## Observations
- [problem] Login queries taking 2-3 seconds #performance
- [insight] Missing index on users.email column #database
- [solution] Added index, login now <100ms #fix
- [technique] Used EXPLAIN ANALYZE to identify bottleneck #debugging
- [fact] 80% of queries were sequential scans #metrics
## Resolution
Created index on email column, query time improved 20x.
## Relations
- relates_to [[User Authentication]]
- caused_by [[Database Schema Migration]]
""",
folder="troubleshooting",
project="main"
)
```
3. **Action Items and Plans**
```python
await write_note(
title="Plan: API v2 Migration",
content="""# Plan: API v2 Migration
## Overview
Discussed migration strategy from REST v1 to GraphQL v2.
## Observations
- [plan] Phased migration over 3 months #roadmap
- [action] Create GraphQL schema this week #task
- [action] Implement parallel APIs next month #task
- [decision] Deprecate v1 after 6-month notice #timeline
- [requirement] Must maintain backward compatibility #constraint
## Timeline
- Week 1-2: Schema design
- Week 3-4: Core API implementation
- Month 2: Client migration support
- Month 3: Documentation and training
## Relations
- implements [[API Modernization Strategy]]
- requires [[GraphQL Schema Design]]
- affects [[All API Clients]]
""",
folder="planning",
project="main"
)
```
4. **Connected Topics**
```python
await write_note(
title="Conversation: Security Best Practices",
content="""# Conversation: Security Best Practices
## Discussion Summary
User asked about security measures for new API.
## Observations
- [recommendation] Implement rate limiting on all endpoints #security
- [technique] Use JWT with short expiry + refresh tokens #auth
- [requirement] HTTPS only in production #infrastructure
- [technique] Input validation with Pydantic schemas #validation
- [recommendation] Regular security audits quarterly #process
## Key Insights
- Defense in depth approach is essential
- Rate limiting prevents most automated attacks
- Token rotation improves security posture
## Related Topics
- Authentication mechanisms
- Authorization patterns
- Data encryption
- Audit logging
## Relations
- relates_to [[API Security Architecture]]
- implements [[Security Policy]]
- requires [[Rate Limiting Service]]
""",
folder="conversations",
project="main"
)
```
### Recording Patterns
**Conversation summary**:
```python
# After substantial discussion
await write_note(
title=f"Conversation: {topic} - {date}",
content=f"""# Conversation: {topic}
## Summary
{brief_summary}
## Key Points Discussed
{key_points}
## Observations
{categorized_observations}
## Decisions Made
{decisions}
## Action Items
{action_items}
## Relations
{relevant_relations}
""",
folder="conversations",
tags=["conversation", *topic_tags],
project="main"
)
```
**Decision record**:
```python
# For important decisions
await write_note(
title=f"Decision: {decision_title}",
content=f"""# Decision: {decision_title}
## Context
{why_decision_needed}
## Decision
{what_was_decided}
## Observations
{categorized_observations}
## Rationale
{reasoning}
## Consequences
{implications}
## Relations
{related_entities}
""",
folder="decisions",
note_type="decision",
project="main"
)
```
**Learning capture**:
```python
# For new knowledge or insights
await write_note(
title=f"Learning: {topic}",
content=f"""# Learning: {topic}
## What We Learned
{insights}
## Observations
{categorized_facts}
## How This Helps
{practical_applications}
## Relations
{connected_knowledge}
""",
folder="learnings",
project="main"
)
```
### Building on Past Conversations
**Reference previous discussions**:
```python
# 1. Search for related past conversations
past = await search_notes(
query="API authentication",
types=["conversation", "decision"],
project="main"
)
# 2. Build context
context = await build_context(
url=f"memory://{past['results'][0]['permalink']}",
depth=2,
timeframe="30d",
project="main"
)
# 3. Reference in new conversation
# "Building on our previous discussion about JWT authentication,
# let's now address the refresh token implementation..."
# 4. Link new note to previous
await write_note(
title="Refresh Token Implementation",
content="""# Refresh Token Implementation
## Relations
- builds_on [[Conversation: API Authentication]]
- implements [[JWT Authentication Decision]]
""",
folder="implementation",
project="main"
)
```
---
## Editing Notes
**Edit existing notes incrementally without rewriting entire content.**
### Edit Operations
**Available operations**:
- `append`: Add to end of note
- `prepend`: Add to beginning
- `find_replace`: Replace specific text
- `replace_section`: Replace markdown section
### Append Content
**Add to end of note**:
```python
await edit_note(
identifier="Authentication System",
operation="append",
content="""
## New Section
Additional information discovered.
## Observations
- [fact] New security requirement identified #security
""",
project="main"
)
```
**Use cases**:
- Adding new observations
- Appending related topics
- Adding follow-up information
- Extending discussions
### Prepend Content
**Add to beginning of note**:
```python
await edit_note(
identifier="Meeting Notes",
operation="prepend",
content="""## Update
Important development since meeting.
---
""",
project="main"
)
```
**Use cases**:
- Adding urgent updates
- Inserting warnings
- Adding important context
- Prepending summaries
### Find and Replace
**Replace specific text**:
```python
await edit_note(
identifier="API Documentation",
operation="find_replace",
find_text="http://api.example.com",
content="https://api.example.com",
expected_replacements=3,
project="main"
)
```
**With expected replacements count**:
```python
# Expects exactly 1 replacement
await edit_note(
identifier="Config File",
operation="find_replace",
find_text="DEBUG = True",
content="DEBUG = False",
expected_replacements=1,
project="main"
)
# Error if count doesn't match
# Prevents unintended changes
```
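The `expected_replacements` guard can be modeled simply: count occurrences first and fail loudly on a mismatch. This is an illustrative sketch of the described behavior, not the tool's implementation:

```python
def checked_replace(text: str, find: str, replace: str, expected: int) -> str:
    """Replace find with replace, but fail if the occurrence count differs."""
    actual = text.count(find)
    if actual != expected:
        raise ValueError(f"expected {expected} occurrence(s), found {actual}")
    return text.replace(find, replace)

config = "DEBUG = True\nLOG_LEVEL = 'info'\n"
updated = checked_replace(config, "DEBUG = True", "DEBUG = False", expected=1)
```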
**Use cases**:
- Updating URLs
- Correcting terminology
- Fixing typos
- Updating version numbers
### Replace Section
**Replace markdown section by heading**:
```python
await edit_note(
identifier="Project Status",
operation="replace_section",
section="## Current Status",
content="""## Current Status
Project completed successfully.
All milestones achieved ahead of schedule.
""",
project="main"
)
```
**Replace nested section**:
```python
await edit_note(
identifier="Technical Docs",
operation="replace_section",
section="### Authentication", # Finds h3 heading
content="""### Authentication
Updated authentication flow using OAuth 2.0.
See [[OAuth Implementation]] for details.
""",
project="main"
)
```
**Use cases**:
- Updating status sections
- Replacing outdated information
- Modifying specific topics
- Restructuring content
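One plausible way section replacement works, based on the examples above, is that the section runs from the matched heading until the next heading of the same or higher level. This sketch makes that assumption explicit (it is not Basic Memory's actual implementation):

```python
def replace_section(document: str, heading: str, new_content: str) -> str:
    """Replace a section, stopping at the next same-or-higher-level heading."""
    lines = document.splitlines()
    if heading not in lines:
        raise ValueError(f"heading not found: {heading!r}")
    level = len(heading) - len(heading.lstrip("#"))
    start = lines.index(heading)
    end = len(lines)
    for i in range(start + 1, len(lines)):
        if lines[i].startswith("#") and len(lines[i]) - len(lines[i].lstrip("#")) <= level:
            end = i
            break
    return "\n".join(lines[:start] + new_content.splitlines() + lines[end:])

doc = "# Doc\n## Current Status\nOld text.\n## Next Steps\nUnchanged."
updated = replace_section(doc, "## Current Status", "## Current Status\nProject completed.")
```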
### Adding Observations
**Append new observations**:
```python
# Read current note
note = await read_note("API Design", project="main")
# Add new observations
await edit_note(
identifier="API Design",
operation="append",
content="""
- [insight] GraphQL reduces API calls by 60% #performance
- [decision] Implement query complexity limiting #security
- [action] Document schema changes weekly #documentation
""",
project="main"
)
```
### Adding Relations
**Append new relations**:
```python
await edit_note(
identifier="Authentication System",
operation="append",
content="""
- integrates_with [[OAuth Provider]]
- requires [[Rate Limiting Service]]
""",
project="main"
)
```
**Update relations section**:
```python
await edit_note(
identifier="API Backend",
operation="replace_section",
section="## Relations",
content="""## Relations
- implements [[API Specification v2]]
- requires [[Database Layer]]
- integrates_with [[Authentication Service]]
- monitored_by [[Logging System]]
- deployed_to [[Production Infrastructure]]
""",
project="main"
)
```
### Bulk Updates
**Update multiple notes**:
```python
# Search for notes to update
notes = await search_notes(
query="deprecated",
project="main"
)
# Update each note
for note in notes["results"]:
await edit_note(
identifier=note["permalink"],
operation="prepend",
content="⚠️ **DEPRECATED** - See [[New Implementation]]\n\n---\n\n",
project="main"
)
```
### Collaborative Editing
**Track changes and updates**:
```python
# Add update log
await edit_note(
identifier="Living Document",
operation="append",
content=f"""
## Update Log
### {current_date}
- Updated authentication section
- Added OAuth examples
- Fixed broken links
""",
project="main"
)
```
---
## Moving and Organizing
**Organize notes by moving them between folders while maintaining knowledge graph integrity.**
### Basic Move
**Move to new folder**:
```python
await move_note(
identifier="API Documentation",
destination_path="docs/api/api-documentation.md",
project="main"
)
```
**Move with auto-extension**:
```python
# Both work (v0.15.0+)
await move_note(
identifier="Note",
destination_path="new-folder/note.md",
project="main"
)
await move_note(
identifier="Note",
destination_path="new-folder/note", # .md added automatically
project="main"
)
```
### Organizing Knowledge
**Create folder structure**:
```python
# Move related notes to dedicated folders
# Move specs
await move_note("Authentication Spec", "specs/auth/authentication.md", project="main")
await move_note("API Spec", "specs/api/api-spec.md", project="main")
# Move implementations
await move_note("Auth Service", "services/auth/auth-service.md", project="main")
await move_note("API Server", "services/api/api-server.md", project="main")
# Move decisions
await move_note("Decision: OAuth", "decisions/oauth-decision.md", project="main")
# Move meetings
await move_note("API Review 2025-01-15", "meetings/2025/01/api-review.md", project="main")
```
**Folder hierarchy**:
```
project/
├── specs/
│ ├── auth/
│ └── api/
├── services/
│ ├── auth/
│ └── api/
├── decisions/
├── meetings/
│ └── 2025/
│ └── 01/
├── conversations/
└── learnings/
```
### Batch Organization
**Organize multiple notes**:
```python
# Get all auth-related notes
auth_notes = await search_notes(
query="authentication",
project="main"
)
# Move to auth folder
for note in auth_notes["results"]:
if note["type"] == "spec":
await move_note(
identifier=note["permalink"],
destination_path=f"specs/auth/{note['permalink']}.md",
project="main"
)
elif note["type"] == "decision":
await move_note(
identifier=note["permalink"],
destination_path=f"decisions/auth/{note['permalink']}.md",
project="main"
)
```
### Preserving Relations
**Relations are automatically updated**:
```python
# Before move:
# Note A (folder: root) -> relates_to [[Note B]]
# Note B (folder: root)
# Move Note B
await move_note(
identifier="Note B",
destination_path="subfolder/note-b.md",
project="main"
)
# After move:
# Note A (folder: root) -> relates_to [[Note B]]
# Note B (folder: subfolder) <- relation still works!
# Database updated automatically
```
### Renaming
**Move to rename**:
```python
# Rename by moving to same folder with new name
await move_note(
identifier="Old Name",
destination_path="same-folder/new-name.md",
project="main"
)
# Title and permalink updated
# Relations preserved
```
### Archiving
**Move to archive folder**:
```python
# Archive old notes
await move_note(
identifier="Deprecated Feature",
destination_path="archive/deprecated/deprecated-feature.md",
project="main"
)
# Batch archive by date
old_notes = await search_notes(
query="",
after_date="2024-01-01",
project="main"
)
for note in old_notes["results"]:
if note["updated"] < "2024-06-01":
await move_note(
identifier=note["permalink"],
destination_path=f"archive/2024/{note['permalink']}.md",
project="main"
)
```
---
## Error Handling
**Robust error handling ensures reliable AI-human interaction.**
### Missing Project Parameter
**Error**: Tool called without project parameter
**Solution**:
```python
try:
results = await search_notes(query="test")
except Exception:
# Show available projects
projects = await list_memory_projects()
# Ask user which to use
# "I need to know which project to search. Available projects: ..."
# Retry with project
results = await search_notes(query="test", project="main")
```
**Prevention**:
```python
# Always discover projects first
projects = await list_memory_projects()
# Store active project for session
active_project = projects[0]["name"]
# Use in all calls
results = await search_notes(query="test", project=active_project)
```
### Entity Not Found
**Error**: Note doesn't exist
**Solution**:
```python
try:
note = await read_note("Nonexistent Note", project="main")
except Exception:
# Search for similar
results = await search_notes(query="Note", project="main")
# Suggest alternatives
# "I couldn't find 'Nonexistent Note'. Did you mean:"
# - Similar Note 1
# - Similar Note 2
```
### Forward Reference Resolution
**Not an error**: Forward references resolve automatically
```python
# Create note with forward reference
response = await write_note(
title="Implementation",
content="## Relations\n- implements [[Future Spec]]",
folder="code",
project="main"
)
# Response may indicate unresolved reference
# This is OK - will resolve when target created
# Later, create target
await write_note(
title="Future Spec",
content="# Future Spec\n...",
folder="specs",
project="main"
)
# Reference automatically resolved
# No action needed
```
### Sync Status Issues
**Error**: Data not found, sync in progress
**Solution**:
```python
# Check sync status
status = await sync_status(project="main")
if status["sync_in_progress"]:
# Inform user
# "The knowledge base is still syncing. Please wait..."
# Wait or proceed with available data
# Can still search/read synced content
else:
# Sync complete, proceed normally
results = await search_notes(query="topic", project="main")
```
### Ambiguous References
**Error**: Multiple entities match
**Solution**:
```python
# Ambiguous title
try:
note = await read_note("API", project="main")
except Exception:
# Search to disambiguate
results = await search_notes(query="API", project="main")
# Show options to user
# "Multiple notes found with 'API':"
# - API Specification (specs/)
# - API Implementation (services/)
# - API Documentation (docs/)
# Use specific identifier
note = await read_note("specs/API Specification", project="main")
```
### Empty Search Results
**Not an error**: No matches found
**Solution**:
```python
results = await search_notes(query="rare topic", project="main")
if results["total"] == 0:
# Broaden search
broader = await search_notes(query="topic", project="main")
# Or suggest creating note
# "No notes found about 'rare topic'. Would you like me to create one?"
```
### Project Not Found
**Error**: Specified project doesn't exist
**Solution**:
```python
try:
results = await search_notes(query="test", project="nonexistent")
except Exception:
# List available projects
projects = await list_memory_projects()
# Show to user
# "Project 'nonexistent' not found. Available projects:"
# - main
# - work
# Offer to create
# "Would you like to create a new project called 'nonexistent'?"
```
### Edit Conflicts
**Error**: find_replace didn't match expected count
**Solution**:
```python
try:
await edit_note(
identifier="Config",
operation="find_replace",
find_text="old_value",
content="new_value",
expected_replacements=1,
project="main"
)
except Exception:
# Read note to check
note = await read_note("Config", project="main")
# Verify text exists
if "old_value" in note["content"]:
count = note["content"].count("old_value")
# Inform user: "Found {count} occurrences, expected 1"
# Adjust or use replace_all
await edit_note(
identifier="Config",
operation="find_replace",
find_text="old_value",
content="new_value",
replace_all=True,
project="main"
)
```
### Permission Errors
**Error**: Can't write to destination
**Solution**:
```python
try:
await move_note(
identifier="Note",
destination_path="/restricted/note.md",
project="main"
)
except Exception:
# Inform user about permission issue
# "Cannot move note to /restricted/ - permission denied"
# Suggest alternative
# "Try moving to a folder within the project directory"
# Use valid path
await move_note(
identifier="Note",
destination_path="archive/note.md",
project="main"
)
```
---
## Advanced Patterns
**Sophisticated techniques for knowledge management and AI collaboration.**
### Knowledge Graph Visualization
**Create visual representation using canvas**:
```python
# Gather entities to visualize
auth_context = await build_context(
url="memory://Authentication System",
depth=2,
project="main"
)
# Create nodes
nodes = [
{
"id": "auth-system",
"type": "file",
"file": "specs/authentication-system.md",
"x": 0,
"y": 0,
"width": 400,
"height": 300
},
{
"id": "user-db",
"type": "file",
"file": "services/user-database.md",
"x": 500,
"y": 0,
"width": 400,
"height": 300
},
{
"id": "login-api",
"type": "file",
"file": "api/login-api.md",
"x": 250,
"y": 400,
"width": 400,
"height": 300
}
]
# Create edges showing relations
edges = [
{
"id": "edge-1",
"fromNode": "auth-system",
"toNode": "user-db",
"label": "requires"
},
{
"id": "edge-2",
"fromNode": "auth-system",
"toNode": "login-api",
"label": "implemented_by"
}
]
# Generate canvas
canvas = await canvas(
nodes=nodes,
edges=edges,
title="Authentication System Overview",
folder="diagrams",
project="main"
)
# Opens in Obsidian for interactive exploration
```
### Progressive Knowledge Building
**Build knowledge incrementally over time**:
```python
# Session 1: Create foundation
await write_note(
title="API Design",
content="""# API Design
## Observations
- [requirement] Need REST API for mobile app
## Relations
- relates_to [[Mobile Development]]
""",
folder="planning",
project="main"
)
# Session 2: Add details
await edit_note(
identifier="API Design",
operation="append",
content="""
- [decision] Using FastAPI framework #python
- [technique] Auto-generate OpenAPI docs
""",
project="main"
)
# Session 3: Add related entities
await write_note(
title="API Authentication",
content="""# API Authentication
## Relations
- part_of [[API Design]]
""",
folder="specs",
project="main"
)
# Update original with relation
await edit_note(
identifier="API Design",
operation="append",
content="""
- includes [[API Authentication]]
""",
project="main"
)
# Session 4: Add implementation
await write_note(
title="API Implementation",
content="""# API Implementation
## Relations
- implements [[API Design]]
""",
folder="code",
project="main"
)
```
### Cross-Project Knowledge Transfer
**Transfer knowledge between projects**:
```python
# Read from source project
template = await read_note(
identifier="API Architecture Template",
project="templates"
)
# Adapt for target project
adapted_content = template["content"].replace(
"{{PROJECT_NAME}}",
"New Project"
)
# Write to target project
await write_note(
title="API Architecture",
content=adapted_content,
folder="architecture",
project="new-project"
)
```
### Knowledge Graph Traversal
**Traverse graph to discover insights**:
```python
# Start with entry point
start = await read_note("Product Roadmap", project="main")
# Traverse relations
visited = set()
to_visit = [start["permalink"]]
all_related = []
while to_visit:
current = to_visit.pop(0)
if current in visited:
continue
visited.add(current)
note = await read_note(current, project="main")
all_related.append(note)
# Add related entities to queue
for relation in note["relations"]:
if relation["target_permalink"] not in visited:
to_visit.append(relation["target_permalink"])
# Analyze collected knowledge
# - All connected entities
# - Relation patterns
# - Knowledge clusters
```
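Once entities are collected, simple aggregation starts to reveal those patterns. A minimal sketch of the relation-type analysis, shown here on hard-coded sample data rather than the traversal's real output:

```python
from collections import Counter

# Hypothetical follow-up: summarize relation types across collected notes
# (sample data stands in for the `all_related` list built by the traversal)
all_related = [
    {"relations": [{"type": "implements"}, {"type": "relates_to"}]},
    {"relations": [{"type": "implements"}]},
]
type_counts = Counter(
    rel["type"] for note in all_related for rel in note["relations"]
)
print(type_counts.most_common())
# [('implements', 2), ('relates_to', 1)]
```

Heavily-used relation types point at the knowledge clusters worth exploring next.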
### Temporal Analysis
**Track knowledge evolution over time**:
```python
# Get recent activity
this_week = await recent_activity(timeframe="7d", project="main")
two_weeks = await recent_activity(timeframe="14d", project="main")
# The 7d results are a subset of the 14d results, so the difference
# is the activity from the week before this one
previous_week = [
    item for item in two_weeks
    if item not in this_week
]
# Identify trends
# - What topics are active
# - What areas are growing
# - What needs attention
```
### Knowledge Validation
**Ensure knowledge graph integrity**:
```python
# Find all forward references
all_notes = await search_notes(query="", page_size=1000, project="main")
unresolved = []
for note in all_notes["results"]:
full_note = await read_note(note["permalink"], project="main")
for relation in full_note["relations"]:
if not relation.get("target_exists"):
unresolved.append({
"source": note["title"],
"target": relation["target"]
})
# Report unresolved references
# "Found {len(unresolved)} unresolved references:"
# - Note A -> Missing Target 1
# - Note B -> Missing Target 2
```
### Automated Documentation
**Generate documentation from knowledge graph**:
```python
# Gather all specs
specs = await search_notes(
query="",
types=["spec"],
project="main"
)
# Build comprehensive documentation
doc_content = "# System Documentation\n\n"
for spec in specs["results"]:
full_spec = await read_note(spec["permalink"], project="main")
doc_content += f"\n## {full_spec['title']}\n"
doc_content += f"{full_spec['content']}\n"
# Add related implementations
context = await build_context(
url=f"memory://{spec['permalink']}",
depth=1,
project="main"
)
implementations = [
e for e in context["related_entities"]
if e.get("relation_type") == "implemented_by"
]
if implementations:
doc_content += "\n### Implementations\n"
for impl in implementations:
doc_content += f"- {impl['title']}\n"
# Save generated documentation
await write_note(
title="Generated System Documentation",
content=doc_content,
folder="docs",
project="main"
)
```
### Knowledge Consolidation
**Merge related notes**:
```python
# Find related notes
related = await search_notes(
query="authentication",
project="main"
)
# Read all related
notes_to_merge = []
for note in related["results"]:
full = await read_note(note["permalink"], project="main")
notes_to_merge.append(full)
# Consolidate
merged_content = "# Consolidated: Authentication\n\n"
merged_observations = []
merged_relations = []
for note in notes_to_merge:
merged_observations.extend(note.get("observations", []))
merged_relations.extend(note.get("relations", []))
# Deduplicate
unique_observations = list({
obs["content"]: obs for obs in merged_observations
}.values())
unique_relations = list({
rel["target"]: rel for rel in merged_relations
}.values())
# Build consolidated note
merged_content += "## Observations\n"
for obs in unique_observations:
merged_content += f"- [{obs['category']}] {obs['content']}"
if obs.get('tags'):
merged_content += " " + " ".join(f"#{tag}" for tag in obs['tags'])
merged_content += "\n"
merged_content += "\n## Relations\n"
for rel in unique_relations:
merged_content += f"- {rel['type']} [[{rel['target']}]]\n"
# Save consolidated note
await write_note(
title="Consolidated: Authentication",
content=merged_content,
folder="consolidated",
project="main"
)
```
---
## Tool Reference
**Complete reference for all MCP tools.**
### Content Management
**write_note(title, content, folder, tags, note_type, project)**
- Create or update markdown notes
- Parameters:
- `title` (required): Note title
- `content` (required): Markdown content
- `folder` (required): Destination folder
- `tags` (optional): List of tags
- `note_type` (optional): Type of note (stored in frontmatter). Can be "note", "person", "meeting", "guide", etc.
- `project` (required unless default_project_mode): Target project
- Returns: Created/updated entity with permalink
- Example:
```python
await write_note(
title="API Design",
content="# API Design\n...",
folder="specs",
tags=["api", "design"],
note_type="spec",
project="main"
)
```
**read_note(identifier, page, page_size, project)**
- Read notes with knowledge graph context
- Parameters:
- `identifier` (required): Title, permalink, or memory:// URL
- `page` (optional): Page number (default: 1)
- `page_size` (optional): Results per page (default: 10)
- `project` (required unless default_project_mode): Target project
- Returns: Entity with content, observations, relations
- Example:
```python
note = await read_note(
identifier="memory://specs/api-design",
project="main"
)
```
**edit_note(identifier, operation, content, find_text, section, expected_replacements, project)**
- Edit notes incrementally
- Parameters:
- `identifier` (required): Note identifier
- `operation` (required): append, prepend, find_replace, replace_section
- `content` (required): Content to add/replace
- `find_text` (optional): Text to find (for find_replace)
- `section` (optional): Section heading (for replace_section)
- `expected_replacements` (optional): Expected replacement count
- `project` (required unless default_project_mode): Target project
- Returns: Updated entity
- Example:
```python
await edit_note(
identifier="API Design",
operation="append",
content="\n- [fact] New requirement",
project="main"
)
```
**move_note(identifier, destination_path, project)**
- Move notes to new locations
- Parameters:
- `identifier` (required): Note identifier
- `destination_path` (required): New path (with or without .md)
- `project` (required unless default_project_mode): Target project
- Returns: Updated entity with new path
- Example:
```python
await move_note(
identifier="API Design",
destination_path="archive/api-design.md",
project="main"
)
```
**delete_note(identifier, project)**
- Delete notes from knowledge base
- Parameters:
- `identifier` (required): Note identifier
- `project` (required unless default_project_mode): Target project
- Returns: Deletion confirmation
- Example:
```python
await delete_note(
identifier="outdated-note",
project="main"
)
```
**read_content(path, project)**
- Read raw file content
- Parameters:
- `path` (required): File path
- `project` (required unless default_project_mode): Target project
- Returns: Raw file content (text or base64 for binary)
- Example:
```python
content = await read_content(
path="config/settings.json",
project="main"
)
```
**view_note(identifier, page, page_size, project)**
- View notes as formatted artifacts
- Parameters: Same as read_note
- Returns: Formatted markdown for display
- Example:
```python
artifact = await view_note(
identifier="API Design",
project="main"
)
```
### Knowledge Graph Navigation
**build_context(url, depth, timeframe, max_related, page, page_size, project)**
- Navigate knowledge graph
- Parameters:
- `url` (required): memory:// URL
- `depth` (optional): Traversal depth (default: 1)
- `timeframe` (optional): Time window (e.g., "7d", "1 week")
- `max_related` (optional): Max related entities (default: 10)
- `page` (optional): Page number
- `page_size` (optional): Results per page
- `project` (required unless default_project_mode): Target project
- Returns: Root entity, related entities, paths
- Example:
```python
context = await build_context(
url="memory://api-design",
depth=2,
timeframe="30d",
project="main"
)
```
**recent_activity(type, depth, timeframe, project)**
- Get recent changes
- Parameters:
- `type` (optional): Activity type filter
- `depth` (optional): Include related entities
- `timeframe` (optional): Time window (default: "7d")
- `project` (optional): Target project (omit for all projects)
- Returns: List of recently updated entities
- Example:
```python
activity = await recent_activity(
timeframe="7d",
project="main"
)
```
**list_directory(dir_name, depth, file_name_glob, project)**
- Browse directory contents
- Parameters:
- `dir_name` (optional): Directory path (default: "/")
- `depth` (optional): Recursion depth (default: 1)
- `file_name_glob` (optional): File pattern (e.g., "*.md")
- `project` (required unless default_project_mode): Target project
- Returns: Files and subdirectories
- Example:
```python
contents = await list_directory(
dir_name="specs",
depth=2,
file_name_glob="*.md",
project="main"
)
```
### Search & Discovery
**search_notes(query, page, page_size, search_type, types, entity_types, after_date, project)**
- Search across knowledge base
- Parameters:
- `query` (required): Search query
- `page` (optional): Page number (default: 1)
- `page_size` (optional): Results per page (default: 10)
- `search_type` (optional): "text" or "semantic"
- `types` (optional): Entity type filter
- `entity_types` (optional): Observation category filter
- `after_date` (optional): Date filter (ISO format)
- `project` (required unless default_project_mode): Target project
- Returns: Matching entities with scores
- Example:
```python
results = await search_notes(
query="authentication",
types=["spec", "decision"],
after_date="2025-01-01",
project="main"
)
```
### Project Management
**list_memory_projects()**
- List all available projects
- Parameters: None
- Returns: List of projects with metadata
- Example:
```python
projects = await list_memory_projects()
```
**create_memory_project(project_name, project_path, set_default)**
- Create new project
- Parameters:
- `project_name` (required): Project name
- `project_path` (required): Directory path
- `set_default` (optional): Set as default (default: False)
- Returns: Created project details
- Example:
```python
await create_memory_project(
project_name="research",
project_path="/Users/name/research",
set_default=False
)
```
**delete_project(project_name)**
- Delete project from configuration
- Parameters:
- `project_name` (required): Project to delete
- Returns: Deletion confirmation
- Example:
```python
await delete_project(project_name="old-project")
```
**sync_status(project)**
- Check synchronization status
- Parameters:
- `project` (optional): Target project
- Returns: Sync progress and status
- Example:
```python
status = await sync_status(project="main")
```
### Visualization
**canvas(nodes, edges, title, folder, project)**
- Create Obsidian canvas
- Parameters:
- `nodes` (required): List of node objects
- `edges` (required): List of edge objects
- `title` (required): Canvas title
- `folder` (required): Destination folder
- `project` (required unless default_project_mode): Target project
- Returns: Created canvas file
- Example:
```python
await canvas(
nodes=[{"id": "1", "type": "file", "file": "note.md", "x": 0, "y": 0}],
edges=[],
title="Graph View",
folder="diagrams",
project="main"
)
```
---
## Best Practices
**Guidelines for effective knowledge management and AI collaboration.**
### 1. Project Setup
**Single-project users**:
- Enable `default_project_mode=true` in config
- Simplifies tool calls
- Fewer explicit project parameters
**Multi-project users**:
- Keep `default_project_mode=false`
- Always specify project explicitly
- Prevents cross-project errors
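If you want single-project convenience in your own tooling without changing server configuration, a small client-side wrapper can supply a session default (a sketch; `DEFAULT_PROJECT` and `with_project` are illustrative names, not Basic Memory APIs):

```python
# Hypothetical helper: fall back to a session-wide default project
DEFAULT_PROJECT = "main"  # assumed single-project setup

def with_project(kwargs: dict) -> dict:
    """Add the default project only when the caller didn't specify one."""
    kwargs.setdefault("project", DEFAULT_PROJECT)
    return kwargs

print(with_project({"query": "test"}))
# {'query': 'test', 'project': 'main'}
print(with_project({"query": "x", "project": "work"}))
# {'query': 'x', 'project': 'work'}
```

An explicit project always wins; the default is applied only when the caller omits it.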
**Always start with discovery**:
```python
# First action in conversation
projects = await list_memory_projects()
# Ask the user which project to use, then keep it for the session
active_project = projects[0]["name"]
# Pass active_project to every subsequent tool call
```
### 2. Knowledge Structure
**Every note should have**:
- Clear, descriptive title
- 3-5 observations minimum
- 2-3 relations minimum
- Appropriate categories and tags
- Proper frontmatter
**Good structure example**:
```markdown
---
title: Clear Descriptive Title
tags: [relevant, tags, here]
type: note
---
# Title
## Context
Brief background
## Observations
- [category] Specific fact #tag1 #tag2
- [category] Another fact #tag3
- [category] Third fact #tag4
## Relations
- relation_type [[Related Entity 1]]
- relation_type [[Related Entity 2]]
```
### 3. Search Before Creating
**Always search first**:
```python
# Before writing new note
existing = await search_notes(
query="topic name",
project="main"
)
if existing["total"] > 0:
# Update existing instead of creating duplicate
await edit_note(
identifier=existing["results"][0]["permalink"],
operation="append",
content=new_information,
project="main"
)
else:
# Create new
await write_note(...)
```
### 4. Use Exact Entity Titles in Relations
**Wrong**:
```markdown
## Relations
- relates_to [[auth system]] # Won't match "Authentication System"
- implements [[api spec]] # Won't match "API Specification"
```
**Right**:
```python
# Search for exact title
results = await search_notes(query="Authentication System", project="main")
exact_title = results["results"][0]["title"]
# Use in relation
content = f"## Relations\n- relates_to [[{exact_title}]]"
```
### 5. Meaningful Categories
**Use semantic categories**:
- `[decision]` for choices made
- `[fact]` for objective information
- `[technique]` for methods
- `[requirement]` for needs
- `[insight]` for realizations
- `[problem]` for issues
- `[solution]` for resolutions
- `[action]` for tasks
**Not generic categories**:
- Avoid `[note]`, `[info]`, `[misc]`
- Be specific and intentional
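A tiny formatting helper makes the category convention concrete (illustrative only; `observation` is not part of Basic Memory's API):

```python
# Hypothetical helper: render an observation line with a semantic category
def observation(category: str, text: str, tags=None) -> str:
    suffix = " " + " ".join(f"#{t}" for t in tags) if tags else ""
    return f"- [{category}] {text}{suffix}"

print(observation("decision", "Use token-bucket rate limiting", ["api"]))
# - [decision] Use token-bucket rate limiting #api
print(observation("problem", "Bursts during deploys trip the limiter"))
# - [problem] Bursts during deploys trip the limiter
```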
### 6. Descriptive Relation Types
**Use meaningful relation types**:
- `implements` for implementation
- `requires` for dependencies
- `part_of` for hierarchy
- `extends` for enhancement
- `contrasts_with` for alternatives
**Not generic**:
- Avoid overusing `relates_to`
- Be specific about relationship
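One quick self-check is counting how many of a note's relations fall back to the generic type. A sketch over hard-coded sample lines:

```python
# Hypothetical lint: flag notes that lean too heavily on relates_to
relations = [
    "- implements [[API Design]]",
    "- relates_to [[Mobile Development]]",
    "- relates_to [[Security]]",
    "- relates_to [[Deployment]]",
]
generic = [r for r in relations if "relates_to" in r]
if len(generic) / len(relations) > 0.5:
    print(f"{len(generic)}/{len(relations)} relations are generic; consider more specific types")
```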
### 7. Progressive Elaboration
**Build knowledge over time**:
```python
# Session 1: Create foundation
await write_note(
title="Topic",
content="Basic structure with initial observations",
folder="notes",
project="main"
)
# Session 2: Add details
await edit_note(
identifier="Topic",
operation="append",
content="Additional observations and insights",
project="main"
)
# Session 3: Add relations
await edit_note(
identifier="Topic",
operation="append",
content="Relations to related topics",
project="main"
)
```
### 8. Consistent Naming
**Folder structure**:
- specs/ - Specifications
- decisions/ - Decision records
- meetings/ - Meeting notes
- conversations/ - AI conversations
- implementations/ - Code/implementations
- docs/ - Documentation
**File naming**:
- Use descriptive titles
- Consistent format
- Avoid special characters
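If you build destination paths programmatically, a small slug helper keeps file names descriptive, consistent, and free of special characters (illustrative; Basic Memory derives its own permalinks):

```python
import re

# Hypothetical helper: derive a safe file name from a descriptive title
def slugify(title: str) -> str:
    slug = title.lower().strip()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # replace runs of special characters
    return slug.strip("-")

print(slugify("API Review: 2025-01-15!"))
# api-review-2025-01-15
```

A note titled "API Review: 2025-01-15" could then land at a path like `meetings/2025/01/api-review-2025-01-15.md`.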
### 9. Regular Validation
**Check knowledge graph health**:
```python
# Find unresolved references (relations whose target doesn't exist yet)
# Check for orphaned notes (no incoming or outgoing relations)
# Verify relation consistency
# Update outdated information
# (See "Knowledge Validation" under Advanced Patterns for a worked example)
```
### 10. Permission and Transparency
**With users**:
- Always ask before recording
- Confirm after saving
- Explain what was saved
- Describe how it helps
**Recording pattern**:
```
AI: "Would you like me to save our discussion about {topic}?"
User: "Yes"
AI: [Saves to Basic Memory]
"Saved as '{title}' in {folder}/"
```
### 11. Context Building Strategy
**For new conversations**:
```python
# 1. Search for topic
results = await search_notes(query="topic", project="main")
# 2. Build context from top result
context = await build_context(
url=f"memory://{results['results'][0]['permalink']}",
depth=2,
timeframe="30d",
project="main"
)
# 3. Use context to inform response
# Reference previous knowledge
# Build on existing understanding
```
### 12. Error Recovery
**Graceful degradation**:
```python
try:
# Attempt operation
result = await tool_call(...)
except Exception:
# Fall back to alternative
# Inform user of issue
# Suggest workaround
```
### 13. Incremental Updates
**Prefer editing over rewriting**:
```python
# Good: Incremental update
await edit_note(
identifier="Note",
operation="append",
content="New information",
project="main"
)
# Avoid: Complete rewrite
# (unless necessary for major restructuring)
```
### 14. Tagging Strategy
**Use tags strategically**:
- Technology: #python #fastapi
- Domain: #auth #security
- Status: #wip #completed
- Priority: #urgent #important
- Category: #bug #feature
**Not too many**:
- 3-5 tags per observation
- Focus on most relevant
- Avoid tag proliferation
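The 3-5 tag guideline is easy to lint for. A sketch on a hard-coded observation line, using a `#word` regex as an approximation of the tag syntax:

```python
import re

# Hypothetical lint: warn when an observation carries too many tags
line = "- [fact] Caching cut p99 latency #python #fastapi #perf #cache #redis #ops"
tags = re.findall(r"#(\w+)", line)
if len(tags) > 5:
    print(f"{len(tags)} tags is a lot; keep the 3-5 most relevant")
```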
### 15. Documentation as Code
**Treat knowledge like code**:
- Version control friendly (markdown)
- Review and refine regularly
- Keep it DRY (Don't Repeat Yourself)
- Link instead of duplicating
- Maintain consistency
---
## Conclusion
This extended guide provides comprehensive coverage of Basic Memory's capabilities for AI assistants. Each section is designed to be self-contained so you can reference or copy specific sections as needed.
For the condensed quick-reference version, see the [AI Assistant Guide](https://github.com/basicmachines-co/basic-memory/blob/main/src/basic_memory/mcp/resources/ai_assistant_guide.md).
For complete documentation including setup, integrations, and advanced features, visit [docs.basicmemory.com](https://docs.basicmemory.com).
**Remember**: Basic Memory is about building persistent, structured knowledge that grows over time. Focus on creating rich observations, meaningful relations, and building a connected knowledge graph that provides lasting value across conversations and sessions.
Built with ♥️ by Basic Machines
```