# Directory Structure
```
├── .claude
│ ├── agents
│ │ ├── amp-bridge.md
│ │ ├── amp-pr-automator.md
│ │ ├── code-quality-guard.md
│ │ ├── gemini-pr-automator.md
│ │ └── github-release-manager.md
│ ├── commands
│ │ ├── README.md
│ │ ├── refactor-function
│ │ ├── refactor-function-prod
│ │ └── refactor-function.md
│ ├── consolidation-fix-handoff.md
│ ├── consolidation-hang-fix-summary.md
│ ├── directives
│ │ ├── agents.md
│ │ ├── code-quality-workflow.md
│ │ ├── consolidation-details.md
│ │ ├── development-setup.md
│ │ ├── hooks-configuration.md
│ │ ├── memory-first.md
│ │ ├── memory-tagging.md
│ │ ├── pr-workflow.md
│ │ ├── quality-system-details.md
│ │ ├── README.md
│ │ ├── refactoring-checklist.md
│ │ ├── storage-backends.md
│ │ └── version-management.md
│ ├── prompts
│ │ └── hybrid-cleanup-integration.md
│ ├── settings.local.json.backup
│ └── settings.local.json.local
├── .commit-message
├── .coveragerc
├── .dockerignore
├── .env.example
├── .env.sqlite.backup
├── .envnn#
├── .gitattributes
├── .github
│ ├── FUNDING.yml
│ ├── ISSUE_TEMPLATE
│ │ ├── bug_report.yml
│ │ ├── config.yml
│ │ ├── feature_request.yml
│ │ └── performance_issue.yml
│ ├── pull_request_template.md
│ └── workflows
│ ├── bridge-tests.yml
│ ├── CACHE_FIX.md
│ ├── claude-branch-automation.yml
│ ├── claude-code-review.yml
│ ├── claude.yml
│ ├── cleanup-images.yml.disabled
│ ├── dev-setup-validation.yml
│ ├── docker-publish.yml
│ ├── dockerfile-lint.yml
│ ├── LATEST_FIXES.md
│ ├── main-optimized.yml.disabled
│ ├── main.yml
│ ├── publish-and-test.yml
│ ├── publish-dual.yml
│ ├── README_OPTIMIZATION.md
│ ├── release-tag.yml.disabled
│ ├── release.yml
│ ├── roadmap-review-reminder.yml
│ ├── SECRET_CONDITIONAL_FIX.md
│ └── WORKFLOW_FIXES.md
├── .gitignore
├── .mcp.json.backup
├── .mcp.json.template
├── .metrics
│ ├── baseline_cc_install_hooks.txt
│ ├── baseline_mi_install_hooks.txt
│ ├── baseline_nesting_install_hooks.txt
│ ├── BASELINE_REPORT.md
│ ├── COMPLEXITY_COMPARISON.txt
│ ├── QUICK_REFERENCE.txt
│ ├── README.md
│ ├── REFACTORED_BASELINE.md
│ ├── REFACTORING_COMPLETION_REPORT.md
│ └── TRACKING_TABLE.md
├── .pyscn
│ ├── .gitignore
│ └── reports
│ └── analyze_20251123_214224.html
├── AGENTS.md
├── ai-optimized-tool-descriptions.py
├── archive
│ ├── deployment
│ │ ├── deploy_fastmcp_fixed.sh
│ │ ├── deploy_http_with_mcp.sh
│ │ └── deploy_mcp_v4.sh
│ ├── deployment-configs
│ │ ├── empty_config.yml
│ │ └── smithery.yaml
│ ├── development
│ │ └── test_fastmcp.py
│ ├── docs-removed-2025-08-23
│ │ ├── authentication.md
│ │ ├── claude_integration.md
│ │ ├── claude-code-compatibility.md
│ │ ├── claude-code-integration.md
│ │ ├── claude-code-quickstart.md
│ │ ├── claude-desktop-setup.md
│ │ ├── complete-setup-guide.md
│ │ ├── database-synchronization.md
│ │ ├── development
│ │ │ ├── autonomous-memory-consolidation.md
│ │ │ ├── CLEANUP_PLAN.md
│ │ │ ├── CLEANUP_README.md
│ │ │ ├── CLEANUP_SUMMARY.md
│ │ │ ├── dream-inspired-memory-consolidation.md
│ │ │ ├── hybrid-slm-memory-consolidation.md
│ │ │ ├── mcp-milestone.md
│ │ │ ├── multi-client-architecture.md
│ │ │ ├── test-results.md
│ │ │ └── TIMESTAMP_FIX_SUMMARY.md
│ │ ├── distributed-sync.md
│ │ ├── invocation_guide.md
│ │ ├── macos-intel.md
│ │ ├── master-guide.md
│ │ ├── mcp-client-configuration.md
│ │ ├── multi-client-server.md
│ │ ├── service-installation.md
│ │ ├── sessions
│ │ │ └── MCP_ENHANCEMENT_SESSION_MEMORY_v4.1.0.md
│ │ ├── UBUNTU_SETUP.md
│ │ ├── ubuntu.md
│ │ ├── windows-setup.md
│ │ └── windows.md
│ ├── docs-root-cleanup-2025-08-23
│ │ ├── AWESOME_LIST_SUBMISSION.md
│ │ ├── CLOUDFLARE_IMPLEMENTATION.md
│ │ ├── DOCUMENTATION_ANALYSIS.md
│ │ ├── DOCUMENTATION_CLEANUP_PLAN.md
│ │ ├── DOCUMENTATION_CONSOLIDATION_COMPLETE.md
│ │ ├── LITESTREAM_SETUP_GUIDE.md
│ │ ├── lm_studio_system_prompt.md
│ │ ├── PYTORCH_DOWNLOAD_FIX.md
│ │ └── README-ORIGINAL-BACKUP.md
│ ├── investigations
│ │ └── MACOS_HOOKS_INVESTIGATION.md
│ ├── litestream-configs-v6.3.0
│ │ ├── install_service.sh
│ │ ├── litestream_master_config_fixed.yml
│ │ ├── litestream_master_config.yml
│ │ ├── litestream_replica_config_fixed.yml
│ │ ├── litestream_replica_config.yml
│ │ ├── litestream_replica_simple.yml
│ │ ├── litestream-http.service
│ │ ├── litestream.service
│ │ └── requirements-cloudflare.txt
│ ├── release-notes
│ │ └── release-notes-v7.1.4.md
│ └── setup-development
│ ├── README.md
│ ├── setup_consolidation_mdns.sh
│ ├── STARTUP_SETUP_GUIDE.md
│ └── test_service.sh
├── CHANGELOG-HISTORIC.md
├── CHANGELOG.md
├── claude_commands
│ ├── memory-context.md
│ ├── memory-health.md
│ ├── memory-ingest-dir.md
│ ├── memory-ingest.md
│ ├── memory-recall.md
│ ├── memory-search.md
│ ├── memory-store.md
│ ├── README.md
│ └── session-start.md
├── claude-hooks
│ ├── config.json
│ ├── config.template.json
│ ├── CONFIGURATION.md
│ ├── core
│ │ ├── auto-capture-hook.js
│ │ ├── auto-capture-hook.ps1
│ │ ├── memory-retrieval.js
│ │ ├── mid-conversation.js
│ │ ├── permission-request.js
│ │ ├── session-end.js
│ │ ├── session-start.js
│ │ └── topic-change.js
│ ├── debug-pattern-test.js
│ ├── install_claude_hooks_windows.ps1
│ ├── install_hooks.py
│ ├── memory-mode-controller.js
│ ├── MIGRATION.md
│ ├── README-AUTO-CAPTURE.md
│ ├── README-NATURAL-TRIGGERS.md
│ ├── README-PERMISSION-REQUEST.md
│ ├── README-phase2.md
│ ├── README.md
│ ├── simple-test.js
│ ├── statusline.sh
│ ├── test-adaptive-weights.js
│ ├── test-dual-protocol-hook.js
│ ├── test-mcp-hook.js
│ ├── test-natural-triggers.js
│ ├── test-recency-scoring.js
│ ├── tests
│ │ ├── integration-test.js
│ │ ├── phase2-integration-test.js
│ │ ├── test-code-execution.js
│ │ ├── test-cross-session.json
│ │ ├── test-permission-request.js
│ │ ├── test-session-tracking.json
│ │ └── test-threading.json
│ ├── utilities
│ │ ├── adaptive-pattern-detector.js
│ │ ├── auto-capture-patterns.js
│ │ ├── context-formatter.js
│ │ ├── context-shift-detector.js
│ │ ├── conversation-analyzer.js
│ │ ├── dynamic-context-updater.js
│ │ ├── git-analyzer.js
│ │ ├── mcp-client.js
│ │ ├── memory-client.js
│ │ ├── memory-scorer.js
│ │ ├── performance-manager.js
│ │ ├── project-detector.js
│ │ ├── session-cache.json
│ │ ├── session-tracker.js
│ │ ├── tiered-conversation-monitor.js
│ │ ├── user-override-detector.js
│ │ └── version-checker.js
│ └── WINDOWS-SESSIONSTART-BUG.md
├── CLAUDE.md
├── CODE_OF_CONDUCT.md
├── COMMIT_MESSAGE.md
├── CONTRIBUTING.md
├── Development-Sprint-November-2025.md
├── docs
│ ├── amp-cli-bridge.md
│ ├── api
│ │ ├── code-execution-interface.md
│ │ ├── memory-metadata-api.md
│ │ ├── PHASE1_IMPLEMENTATION_SUMMARY.md
│ │ ├── PHASE2_IMPLEMENTATION_SUMMARY.md
│ │ ├── PHASE2_REPORT.md
│ │ └── tag-standardization.md
│ ├── architecture
│ │ ├── graph-database-design.md
│ │ ├── search-enhancement-spec.md
│ │ └── search-examples.md
│ ├── architecture.md
│ ├── archive
│ │ └── obsolete-workflows
│ │ ├── load_memory_context.md
│ │ └── README.md
│ ├── assets
│ │ └── images
│ │ ├── dashboard-v3.3.0-preview.png
│ │ ├── memory-awareness-hooks-example.png
│ │ ├── project-infographic.svg
│ │ └── README.md
│ ├── CLAUDE_CODE_QUICK_REFERENCE.md
│ ├── cloudflare-setup.md
│ ├── demo-recording-script.md
│ ├── deployment
│ │ ├── docker.md
│ │ ├── dual-service.md
│ │ ├── production-guide.md
│ │ └── systemd-service.md
│ ├── development
│ │ ├── ai-agent-instructions.md
│ │ ├── code-quality
│ │ │ ├── phase-2a-completion.md
│ │ │ ├── phase-2a-handle-get-prompt.md
│ │ │ ├── phase-2a-index.md
│ │ │ ├── phase-2a-install-package.md
│ │ │ └── phase-2b-session-summary.md
│ │ ├── code-quality-workflow.md
│ │ ├── dashboard-workflow.md
│ │ ├── issue-management.md
│ │ ├── pr-280-post-mortem.md
│ │ ├── pr-review-guide.md
│ │ ├── refactoring-notes.md
│ │ ├── release-checklist.md
│ │ └── todo-tracker.md
│ ├── docker-optimized-build.md
│ ├── document-ingestion.md
│ ├── DOCUMENTATION_AUDIT.md
│ ├── enhancement-roadmap-issue-14.md
│ ├── examples
│ │ ├── analysis-scripts.js
│ │ ├── maintenance-session-example.md
│ │ ├── memory-distribution-chart.jsx
│ │ ├── quality-system-configs.md
│ │ └── tag-schema.json
│ ├── features
│ │ └── association-quality-boost.md
│ ├── first-time-setup.md
│ ├── glama-deployment.md
│ ├── guides
│ │ ├── advanced-command-examples.md
│ │ ├── chromadb-migration.md
│ │ ├── commands-vs-mcp-server.md
│ │ ├── mcp-enhancements.md
│ │ ├── mdns-service-discovery.md
│ │ ├── memory-consolidation-guide.md
│ │ ├── memory-quality-guide.md
│ │ ├── migration.md
│ │ ├── scripts.md
│ │ └── STORAGE_BACKENDS.md
│ ├── HOOK_IMPROVEMENTS.md
│ ├── hooks
│ │ └── phase2-code-execution-migration.md
│ ├── http-server-management.md
│ ├── ide-compatability.md
│ ├── IMAGE_RETENTION_POLICY.md
│ ├── images
│ │ ├── dashboard-placeholder.md
│ │ └── update-restart-demo.png
│ ├── implementation
│ │ ├── health_checks.md
│ │ └── performance.md
│ ├── IMPLEMENTATION_PLAN_HTTP_SSE.md
│ ├── integration
│ │ ├── homebrew.md
│ │ └── multi-client.md
│ ├── integrations
│ │ ├── gemini.md
│ │ ├── groq-bridge.md
│ │ ├── groq-integration-summary.md
│ │ └── groq-model-comparison.md
│ ├── integrations.md
│ ├── legacy
│ │ └── dual-protocol-hooks.md
│ ├── LIGHTWEIGHT_ONNX_SETUP.md
│ ├── LM_STUDIO_COMPATIBILITY.md
│ ├── maintenance
│ │ └── memory-maintenance.md
│ ├── mastery
│ │ ├── api-reference.md
│ │ ├── architecture-overview.md
│ │ ├── configuration-guide.md
│ │ ├── local-setup-and-run.md
│ │ ├── testing-guide.md
│ │ └── troubleshooting.md
│ ├── migration
│ │ ├── code-execution-api-quick-start.md
│ │ └── graph-migration-guide.md
│ ├── natural-memory-triggers
│ │ ├── cli-reference.md
│ │ ├── installation-guide.md
│ │ └── performance-optimization.md
│ ├── oauth-setup.md
│ ├── pr-graphql-integration.md
│ ├── quality-system-ui-implementation.md
│ ├── quick-setup-cloudflare-dual-environment.md
│ ├── README.md
│ ├── refactoring
│ │ └── phase-3-3-analysis.md
│ ├── releases
│ │ └── v8.72.0-testing.md
│ ├── remote-configuration-wiki-section.md
│ ├── research
│ │ ├── code-execution-interface-implementation.md
│ │ └── code-execution-interface-summary.md
│ ├── ROADMAP.md
│ ├── sqlite-vec-backend.md
│ ├── statistics
│ │ ├── charts
│ │ │ ├── activity_patterns.png
│ │ │ ├── contributors.png
│ │ │ ├── growth_trajectory.png
│ │ │ ├── monthly_activity.png
│ │ │ └── october_sprint.png
│ │ ├── data
│ │ │ ├── activity_by_day.csv
│ │ │ ├── activity_by_hour.csv
│ │ │ ├── contributors.csv
│ │ │ └── monthly_activity.csv
│ │ ├── generate_charts.py
│ │ └── REPOSITORY_STATISTICS.md
│ ├── technical
│ │ ├── development.md
│ │ ├── memory-migration.md
│ │ ├── migration-log.md
│ │ ├── sqlite-vec-embedding-fixes.md
│ │ └── tag-storage.md
│ ├── testing
│ │ └── regression-tests.md
│ ├── testing-cloudflare-backend.md
│ ├── troubleshooting
│ │ ├── cloudflare-api-token-setup.md
│ │ ├── cloudflare-authentication.md
│ │ ├── database-transfer-migration.md
│ │ ├── general.md
│ │ ├── hooks-quick-reference.md
│ │ ├── memory-management.md
│ │ ├── pr162-schema-caching-issue.md
│ │ ├── session-end-hooks.md
│ │ └── sync-issues.md
│ ├── tutorials
│ │ ├── advanced-techniques.md
│ │ ├── data-analysis.md
│ │ └── demo-session-walkthrough.md
│ ├── wiki-documentation-plan.md
│ └── wiki-Graph-Database-Architecture.md
├── examples
│ ├── claude_desktop_config_template.json
│ ├── claude_desktop_config_windows.json
│ ├── claude-desktop-http-config.json
│ ├── config
│ │ └── claude_desktop_config.json
│ ├── http-mcp-bridge.js
│ ├── memory_export_template.json
│ ├── README.md
│ ├── setup
│ │ └── setup_multi_client_complete.py
│ └── start_https_example.sh
├── IMPLEMENTATION_SUMMARY.md
├── install_service.py
├── install.py
├── LICENSE
├── NOTICE
├── PR_DESCRIPTION.md
├── pyproject-lite.toml
├── pyproject.toml
├── pytest.ini
├── README.md
├── release-notes-v8.61.0.md
├── run_server.py
├── scripts
│ ├── .claude
│ │ └── settings.local.json
│ ├── archive
│ │ └── check_missing_timestamps.py
│ ├── backup
│ │ ├── backup_memories.py
│ │ ├── backup_sqlite_vec.sh
│ │ ├── export_distributable_memories.sh
│ │ └── restore_memories.py
│ ├── benchmarks
│ │ ├── benchmark_code_execution_api.py
│ │ ├── benchmark_hybrid_sync.py
│ │ └── benchmark_server_caching.py
│ ├── ci
│ │ ├── check_dockerfile_args.sh
│ │ └── validate_imports.sh
│ ├── database
│ │ ├── analyze_sqlite_vec_db.py
│ │ ├── check_sqlite_vec_status.py
│ │ ├── db_health_check.py
│ │ └── simple_timestamp_check.py
│ ├── development
│ │ ├── debug_server_initialization.py
│ │ ├── find_orphaned_files.py
│ │ ├── fix_mdns.sh
│ │ ├── fix_sitecustomize.py
│ │ ├── remote_ingest.sh
│ │ ├── setup-git-merge-drivers.sh
│ │ ├── uv-lock-merge.sh
│ │ └── verify_hybrid_sync.py
│ ├── hooks
│ │ └── pre-commit
│ ├── installation
│ │ ├── install_linux_service.py
│ │ ├── install_macos_service.py
│ │ ├── install_uv.py
│ │ ├── install_windows_service.py
│ │ ├── install.py
│ │ ├── setup_backup_cron.sh
│ │ ├── setup_claude_mcp.sh
│ │ └── setup_cloudflare_resources.py
│ ├── linux
│ │ ├── service_status.sh
│ │ ├── start_service.sh
│ │ ├── stop_service.sh
│ │ ├── uninstall_service.sh
│ │ └── view_logs.sh
│ ├── maintenance
│ │ ├── add_project_tags.py
│ │ ├── apply_quality_boost_retroactively.py
│ │ ├── assign_memory_types.py
│ │ ├── auto_retag_memory_merge.py
│ │ ├── auto_retag_memory.py
│ │ ├── backfill_graph_table.py
│ │ ├── check_memory_types.py
│ │ ├── cleanup_association_memories_hybrid.py
│ │ ├── cleanup_association_memories.py
│ │ ├── cleanup_corrupted_encoding.py
│ │ ├── cleanup_low_quality.py
│ │ ├── cleanup_memories.py
│ │ ├── cleanup_organize.py
│ │ ├── consolidate_memory_types.py
│ │ ├── consolidation_mappings.json
│ │ ├── delete_orphaned_vectors_fixed.py
│ │ ├── delete_test_memories.py
│ │ ├── fast_cleanup_duplicates_with_tracking.sh
│ │ ├── find_all_duplicates.py
│ │ ├── find_cloudflare_duplicates.py
│ │ ├── find_duplicates.py
│ │ ├── memory-types.md
│ │ ├── README.md
│ │ ├── recover_timestamps_from_cloudflare.py
│ │ ├── regenerate_embeddings.py
│ │ ├── repair_malformed_tags.py
│ │ ├── repair_memories.py
│ │ ├── repair_sqlite_vec_embeddings.py
│ │ ├── repair_zero_embeddings.py
│ │ ├── restore_from_json_export.py
│ │ ├── retag_valuable_memories.py
│ │ ├── scan_todos.sh
│ │ ├── soft_delete_test_memories.py
│ │ └── sync_status.py
│ ├── migration
│ │ ├── cleanup_mcp_timestamps.py
│ │ ├── legacy
│ │ │ └── migrate_chroma_to_sqlite.py
│ │ ├── mcp-migration.py
│ │ ├── migrate_sqlite_vec_embeddings.py
│ │ ├── migrate_storage.py
│ │ ├── migrate_tags.py
│ │ ├── migrate_timestamps.py
│ │ ├── migrate_to_cloudflare.py
│ │ ├── migrate_to_sqlite_vec.py
│ │ ├── migrate_v5_enhanced.py
│ │ ├── TIMESTAMP_CLEANUP_README.md
│ │ └── verify_mcp_timestamps.py
│ ├── pr
│ │ ├── amp_collect_results.sh
│ │ ├── amp_detect_breaking_changes.sh
│ │ ├── amp_generate_tests.sh
│ │ ├── amp_pr_review.sh
│ │ ├── amp_quality_gate.sh
│ │ ├── amp_suggest_fixes.sh
│ │ ├── auto_review.sh
│ │ ├── detect_breaking_changes.sh
│ │ ├── generate_tests.sh
│ │ ├── lib
│ │ │ └── graphql_helpers.sh
│ │ ├── pre_pr_check.sh
│ │ ├── quality_gate.sh
│ │ ├── resolve_threads.sh
│ │ ├── run_pyscn_analysis.sh
│ │ ├── run_quality_checks_on_files.sh
│ │ ├── run_quality_checks.sh
│ │ ├── thread_status.sh
│ │ └── watch_reviews.sh
│ ├── quality
│ │ ├── bulk_evaluate_onnx.py
│ │ ├── check_test_scores.py
│ │ ├── debug_deberta_scoring.py
│ │ ├── export_deberta_onnx.py
│ │ ├── fix_dead_code_install.sh
│ │ ├── migrate_to_deberta.py
│ │ ├── phase1_dead_code_analysis.md
│ │ ├── phase2_complexity_analysis.md
│ │ ├── README_PHASE1.md
│ │ ├── README_PHASE2.md
│ │ ├── rescore_deberta.py
│ │ ├── rescore_fallback.py
│ │ ├── reset_onnx_scores.py
│ │ ├── track_pyscn_metrics.sh
│ │ └── weekly_quality_review.sh
│ ├── README.md
│ ├── run
│ │ ├── memory_wrapper_cleanup.ps1
│ │ ├── memory_wrapper_cleanup.py
│ │ ├── memory_wrapper_cleanup.sh
│ │ ├── README_CLEANUP_WRAPPER.md
│ │ ├── run_mcp_memory.sh
│ │ ├── run-with-uv.sh
│ │ └── start_sqlite_vec.sh
│ ├── run_memory_server.py
│ ├── server
│ │ ├── check_http_server.py
│ │ ├── check_server_health.py
│ │ ├── memory_offline.py
│ │ ├── preload_models.py
│ │ ├── run_http_server.py
│ │ ├── run_memory_server.py
│ │ ├── start_http_server.bat
│ │ └── start_http_server.sh
│ ├── service
│ │ ├── deploy_dual_services.sh
│ │ ├── http_server_manager.sh
│ │ ├── install_http_service.sh
│ │ ├── mcp-memory-http.service
│ │ ├── mcp-memory.service
│ │ ├── memory_service_manager.sh
│ │ ├── service_control.sh
│ │ ├── service_utils.py
│ │ ├── update_service.sh
│ │ └── windows
│ │ ├── add_watchdog_trigger.ps1
│ │ ├── install_scheduled_task.ps1
│ │ ├── manage_service.ps1
│ │ ├── run_http_server_background.ps1
│ │ ├── uninstall_scheduled_task.ps1
│ │ └── update_and_restart.ps1
│ ├── setup-lightweight.sh
│ ├── sync
│ │ ├── check_drift.py
│ │ ├── claude_sync_commands.py
│ │ ├── export_memories.py
│ │ ├── import_memories.py
│ │ ├── litestream
│ │ │ ├── apply_local_changes.sh
│ │ │ ├── enhanced_memory_store.sh
│ │ │ ├── init_staging_db.sh
│ │ │ ├── io.litestream.replication.plist
│ │ │ ├── manual_sync.sh
│ │ │ ├── memory_sync.sh
│ │ │ ├── pull_remote_changes.sh
│ │ │ ├── push_to_remote.sh
│ │ │ ├── README.md
│ │ │ ├── resolve_conflicts.sh
│ │ │ ├── setup_local_litestream.sh
│ │ │ ├── setup_remote_litestream.sh
│ │ │ ├── staging_db_init.sql
│ │ │ ├── stash_local_changes.sh
│ │ │ ├── sync_from_remote_noconfig.sh
│ │ │ └── sync_from_remote.sh
│ │ ├── README.md
│ │ ├── safe_cloudflare_update.sh
│ │ ├── sync_memory_backends.py
│ │ └── sync_now.py
│ ├── testing
│ │ ├── run_complete_test.py
│ │ ├── run_memory_test.sh
│ │ ├── simple_test.py
│ │ ├── test_cleanup_logic.py
│ │ ├── test_cloudflare_backend.py
│ │ ├── test_docker_functionality.py
│ │ ├── test_installation.py
│ │ ├── test_mdns.py
│ │ ├── test_memory_api.py
│ │ ├── test_memory_simple.py
│ │ ├── test_migration.py
│ │ ├── test_search_api.py
│ │ ├── test_sqlite_vec_embeddings.py
│ │ ├── test_sse_events.py
│ │ ├── test-connection.py
│ │ └── test-hook.js
│ ├── update_and_restart.sh
│ ├── utils
│ │ ├── claude_commands_utils.py
│ │ ├── detect_platform.py
│ │ ├── generate_personalized_claude_md.sh
│ │ ├── groq
│ │ ├── groq_agent_bridge.py
│ │ ├── list-collections.py
│ │ ├── memory_wrapper_uv.py
│ │ ├── query_memories.py
│ │ ├── README_detect_platform.md
│ │ ├── smithery_wrapper.py
│ │ ├── test_groq_bridge.sh
│ │ └── uv_wrapper.py
│ └── validation
│ ├── check_dev_setup.py
│ ├── check_documentation_links.py
│ ├── check_handler_coverage.py
│ ├── diagnose_backend_config.py
│ ├── validate_configuration_complete.py
│ ├── validate_graph_tools.py
│ ├── validate_memories.py
│ ├── validate_migration.py
│ ├── validate_timestamp_integrity.py
│ ├── verify_environment.py
│ ├── verify_pytorch_windows.py
│ └── verify_torch.py
├── SECURITY.md
├── selective_timestamp_recovery.py
├── SPONSORS.md
├── src
│ └── mcp_memory_service
│ ├── __init__.py
│ ├── _version.py
│ ├── api
│ │ ├── __init__.py
│ │ ├── client.py
│ │ ├── operations.py
│ │ ├── sync_wrapper.py
│ │ └── types.py
│ ├── backup
│ │ ├── __init__.py
│ │ └── scheduler.py
│ ├── cli
│ │ ├── __init__.py
│ │ ├── ingestion.py
│ │ ├── main.py
│ │ └── utils.py
│ ├── config.py
│ ├── consolidation
│ │ ├── __init__.py
│ │ ├── associations.py
│ │ ├── base.py
│ │ ├── clustering.py
│ │ ├── compression.py
│ │ ├── consolidator.py
│ │ ├── decay.py
│ │ ├── forgetting.py
│ │ ├── health.py
│ │ └── scheduler.py
│ ├── dependency_check.py
│ ├── discovery
│ │ ├── __init__.py
│ │ ├── client.py
│ │ └── mdns_service.py
│ ├── embeddings
│ │ ├── __init__.py
│ │ └── onnx_embeddings.py
│ ├── ingestion
│ │ ├── __init__.py
│ │ ├── base.py
│ │ ├── chunker.py
│ │ ├── csv_loader.py
│ │ ├── json_loader.py
│ │ ├── pdf_loader.py
│ │ ├── registry.py
│ │ ├── semtools_loader.py
│ │ └── text_loader.py
│ ├── lm_studio_compat.py
│ ├── mcp_server.py
│ ├── models
│ │ ├── __init__.py
│ │ └── memory.py
│ ├── quality
│ │ ├── __init__.py
│ │ ├── ai_evaluator.py
│ │ ├── async_scorer.py
│ │ ├── config.py
│ │ ├── implicit_signals.py
│ │ ├── metadata_codec.py
│ │ ├── onnx_ranker.py
│ │ └── scorer.py
│ ├── server
│ │ ├── __init__.py
│ │ ├── __main__.py
│ │ ├── cache_manager.py
│ │ ├── client_detection.py
│ │ ├── environment.py
│ │ ├── handlers
│ │ │ ├── __init__.py
│ │ │ ├── consolidation.py
│ │ │ ├── documents.py
│ │ │ ├── graph.py
│ │ │ ├── memory.py
│ │ │ ├── quality.py
│ │ │ └── utility.py
│ │ └── logging_config.py
│ ├── server_impl.py
│ ├── services
│ │ ├── __init__.py
│ │ └── memory_service.py
│ ├── storage
│ │ ├── __init__.py
│ │ ├── base.py
│ │ ├── cloudflare.py
│ │ ├── factory.py
│ │ ├── graph.py
│ │ ├── http_client.py
│ │ ├── hybrid.py
│ │ ├── migrations
│ │ │ └── 008_add_graph_table.sql
│ │ └── sqlite_vec.py
│ ├── sync
│ │ ├── __init__.py
│ │ ├── exporter.py
│ │ ├── importer.py
│ │ └── litestream_config.py
│ ├── utils
│ │ ├── __init__.py
│ │ ├── cache_manager.py
│ │ ├── content_splitter.py
│ │ ├── db_utils.py
│ │ ├── debug.py
│ │ ├── directory_ingestion.py
│ │ ├── document_processing.py
│ │ ├── gpu_detection.py
│ │ ├── hashing.py
│ │ ├── health_check.py
│ │ ├── http_server_manager.py
│ │ ├── port_detection.py
│ │ ├── quality_analytics.py
│ │ ├── startup_orchestrator.py
│ │ ├── system_detection.py
│ │ └── time_parser.py
│ └── web
│ ├── __init__.py
│ ├── api
│ │ ├── __init__.py
│ │ ├── analytics.py
│ │ ├── backup.py
│ │ ├── consolidation.py
│ │ ├── documents.py
│ │ ├── events.py
│ │ ├── health.py
│ │ ├── manage.py
│ │ ├── mcp.py
│ │ ├── memories.py
│ │ ├── quality.py
│ │ ├── search.py
│ │ └── sync.py
│ ├── app.py
│ ├── dependencies.py
│ ├── oauth
│ │ ├── __init__.py
│ │ ├── authorization.py
│ │ ├── discovery.py
│ │ ├── middleware.py
│ │ ├── models.py
│ │ ├── registration.py
│ │ └── storage.py
│ ├── sse.py
│ └── static
│ ├── app.js
│ ├── i18n
│ │ ├── de.json
│ │ ├── en.json
│ │ ├── es.json
│ │ ├── fr.json
│ │ ├── ja.json
│ │ ├── ko.json
│ │ └── zh.json
│ ├── index.html
│ ├── README.md
│ ├── sse_test.html
│ └── style.css
├── start_http_debug.bat
├── start_http_server.sh
├── test_document.txt
├── test_version_checker.js
├── TESTING_NOTES.md
├── tests
│ ├── __init__.py
│ ├── api
│ │ ├── __init__.py
│ │ ├── test_compact_types.py
│ │ └── test_operations.py
│ ├── bridge
│ │ ├── mock_responses.js
│ │ ├── package-lock.json
│ │ ├── package.json
│ │ └── test_http_mcp_bridge.js
│ ├── conftest.py
│ ├── consolidation
│ │ ├── __init__.py
│ │ ├── conftest.py
│ │ ├── test_associations.py
│ │ ├── test_clustering.py
│ │ ├── test_compression.py
│ │ ├── test_consolidator.py
│ │ ├── test_decay.py
│ │ ├── test_forgetting.py
│ │ └── test_graph_modes.py
│ ├── contracts
│ │ └── api-specification.yml
│ ├── integration
│ │ ├── conftest.py
│ │ ├── HANDLER_COVERAGE_REPORT.md
│ │ ├── package-lock.json
│ │ ├── package.json
│ │ ├── test_all_memory_handlers.py
│ │ ├── test_api_key_fallback.py
│ │ ├── test_api_memories_chronological.py
│ │ ├── test_api_tag_time_search.py
│ │ ├── test_api_with_memory_service.py
│ │ ├── test_bridge_integration.js
│ │ ├── test_cli_interfaces.py
│ │ ├── test_cloudflare_connection.py
│ │ ├── test_concurrent_clients.py
│ │ ├── test_data_serialization_consistency.py
│ │ ├── test_http_server_startup.py
│ │ ├── test_mcp_memory.py
│ │ ├── test_mdns_integration.py
│ │ ├── test_oauth_basic_auth.py
│ │ ├── test_oauth_flow.py
│ │ ├── test_server_handlers.py
│ │ └── test_store_memory.py
│ ├── performance
│ │ ├── test_background_sync.py
│ │ └── test_hybrid_live.py
│ ├── README.md
│ ├── smithery
│ │ └── test_smithery.py
│ ├── sqlite
│ │ └── simple_sqlite_vec_test.py
│ ├── storage
│ │ ├── conftest.py
│ │ └── test_graph_storage.py
│ ├── test_client.py
│ ├── test_content_splitting.py
│ ├── test_database.py
│ ├── test_deberta_quality.py
│ ├── test_fallback_quality.py
│ ├── test_graph_traversal.py
│ ├── test_hybrid_cloudflare_limits.py
│ ├── test_hybrid_storage.py
│ ├── test_lightweight_onnx.py
│ ├── test_memory_ops.py
│ ├── test_memory_wrapper_cleanup.py
│ ├── test_quality_integration.py
│ ├── test_quality_system.py
│ ├── test_semantic_search.py
│ ├── test_sqlite_vec_storage.py
│ ├── test_time_parser.py
│ ├── test_timestamp_preservation.py
│ ├── timestamp
│ │ ├── test_hook_vs_manual_storage.py
│ │ ├── test_issue99_final_validation.py
│ │ ├── test_search_retrieval_inconsistency.py
│ │ ├── test_timestamp_issue.py
│ │ └── test_timestamp_simple.py
│ └── unit
│ ├── conftest.py
│ ├── test_cloudflare_storage.py
│ ├── test_csv_loader.py
│ ├── test_fastapi_dependencies.py
│ ├── test_import.py
│ ├── test_imports.py
│ ├── test_json_loader.py
│ ├── test_mdns_simple.py
│ ├── test_mdns.py
│ ├── test_memory_service.py
│ ├── test_memory.py
│ ├── test_semtools_loader.py
│ ├── test_storage_interface_compatibility.py
│ ├── test_tag_time_filtering.py
│ └── test_uv_no_pip_installer_fallback.py
├── tools
│ ├── docker
│ │ ├── DEPRECATED.md
│ │ ├── docker-compose.http.yml
│ │ ├── docker-compose.pythonpath.yml
│ │ ├── docker-compose.standalone.yml
│ │ ├── docker-compose.uv.yml
│ │ ├── docker-compose.yml
│ │ ├── docker-entrypoint-persistent.sh
│ │ ├── docker-entrypoint-unified.sh
│ │ ├── docker-entrypoint.sh
│ │ ├── Dockerfile
│ │ ├── Dockerfile.glama
│ │ ├── Dockerfile.slim
│ │ ├── README.md
│ │ └── test-docker-modes.sh
│ └── README.md
├── uv.lock
└── verify_compression.sh
```
# Files
--------------------------------------------------------------------------------
/src/mcp_memory_service/web/static/i18n/es.json:
--------------------------------------------------------------------------------
```json
{
  "actions.addMemory": "Agregar Memoria",
  "actions.advancedSearch": "Búsqueda Avanzada",
  "actions.browseTags": "Explorar Etiquetas",
  "actions.clearAll": "Limpiar Todo",
  "actions.exportData": "Exportar Datos",
  "actions.search": "Buscar",
  "analytics.databaseSize": "Tamaño de Base de Datos",
  "analytics.granularity.day": "Por Día",
  "analytics.granularity.hour": "Por Hora",
  "analytics.granularity.week": "Por Semana",
  "analytics.growth": "Crecimiento de Memorias a lo Largo del Tiempo",
  "analytics.heatmap": "Mapa de Calor de Actividad",
  "analytics.keyMetrics": "📊 Métricas Clave",
  "analytics.loading": "Cargando...",
  "analytics.loadingChart": "Cargando gráfico...",
  "analytics.loadingHeatmap": "Cargando mapa de calor...",
  "analytics.memoryTypes": "Distribución de Tipos de Memoria",
  "analytics.period.180d": "Últimos 6 Meses",
  "analytics.period.30d": "Últimos 30 Días",
  "analytics.period.7d": "Últimos 7 Días",
  "analytics.period.90d": "Últimos 90 Días",
  "analytics.period.all": "Todo el Tiempo",
  "analytics.period.month": "Último Mes",
  "analytics.period.quarter": "Último Trimestre",
  "analytics.period.week": "Última Semana",
  "analytics.period.year": "Último Año",
  "analytics.recentActivity": "Actividad Reciente",
  "analytics.reports": "📋 Informes Detallados",
  "analytics.storage": "Informe de Almacenamiento",
  "analytics.tagUsage": "Distribución de Uso de Etiquetas",
  "analytics.thisWeek": "Esta Semana",
  "analytics.topTags": "Mejores Etiquetas",
  "analytics.totalMemories": "Total de Memorias",
  "analytics.trends": "📈 Tendencias & Gráficos",
  "analytics.uniqueTags": "Etiquetas Únicas",
  "api.events.stats": "Ver estadísticas de conexión SSE",
  "api.events.stream": "Suscribirse al flujo de eventos de memoria en tiempo real",
  "api.health.detailed": "Salud detallada con estadísticas de base de datos",
  "api.health.quick": "Endpoint de verificación de salud rápida",
  "api.link.overview": "Página de Resumen API",
  "api.link.redoc": "Documentación ReDoc",
  "api.link.swagger": "Interfaz Swagger Interactiva",
  "api.memory.delete": "Eliminar una memoria y sus embeddings",
  "api.memory.get": "Recuperar una memoria específica por hash de contenido",
  "api.memory.list": "Listar todas las memorias con soporte de paginación",
  "api.memory.store": "Almacenar una nueva memoria con generación automática de embedding",
  "api.search.semantic": "Búsqueda de similitud semántica usando embeddings",
  "api.search.similar": "Encontrar memorias similares a una específica",
  "api.search.tag": "Buscar memorias por etiquetas (lógica AND/OR)",
  "api.search.time": "Consultas temporales en lenguaje natural",
  "api.section.events": "📡 Eventos en Tiempo Real",
  "api.section.health": "🏥 Salud & Estado",
  "api.section.memory": "💾 Gestión de Memoria",
  "api.section.search": "🔍 Operaciones de Búsqueda",
  "api.subtitle": "Endpoints REST API completos para MCP Memory Service",
  "api.title": "🔗 Documentación API",
  "browse.memoriesTagged": "Memorias etiquetadas con:",
  "browse.showAllTags": "Mostrar Todas las Etiquetas",
  "browse.subtitle": "Explora tus memorias organizadas por etiquetas",
  "browse.title": "Explorar por Etiquetas",
  "documents.chunkHelp.defaultBest": "<strong>Mejor para:</strong> Documentación técnica, manuales de referencia, bases de conocimiento",
  "documents.chunkHelp.defaultTitle": "✅ Predeterminado (1000 caracteres, 200 solapamiento)",
  "documents.chunkHelp.defaultWhy": "<strong>Por qué:</strong> La fragmentación consciente de párrafos preserva pensamientos completos y contexto",
  "documents.chunkHelp.largeBest": "<strong>Mejor para:</strong> Documentos narrativos, artículos, blogs, contenido de formato largo",
  "documents.chunkHelp.largeTitle": "📖 Fragmentos Más Grandes (2000 caracteres, 400 solapamiento)",
  "documents.chunkHelp.largeTradeoff": "<strong>Compromiso:</strong> Mejor contexto pero recuperación menos precisa",
  "documents.chunkHelp.note": "<strong>Nota:</strong> Los tamaños reales de fragmentos pueden variar ya que el sistema respeta los límites de párrafos para mantener la coherencia semántica.",
  "documents.chunkHelp.recommended": "Recomendado",
  "documents.chunkHelp.smallBest": "<strong>Mejor para:</strong> Documentación técnica densa, documentación de código, referencias API",
  "documents.chunkHelp.smallTitle": "🔍 Fragmentos Más Pequeños (500 caracteres, 100 solapamiento)",
  "documents.chunkHelp.smallTradeoff": "<strong>Compromiso:</strong> Recuperación más granular pero puede dividir párrafos más agresivamente",
  "documents.chunkHelp.tipHigherOverlap": "<strong>Mayor solapamiento</strong> = mejor continuidad pero más redundancia",
  "documents.chunkHelp.tipOverlap": "<strong>El solapamiento</strong> ayuda a mantener el contexto a través de los límites de fragmentos",
  "documents.chunkHelp.tipPreserve": "Los fragmentos preservan oraciones y párrafos completos cuando sea posible",
  "documents.chunkHelp.tipTest": "Prueba diferentes configuraciones en un documento de muestra para encontrar la configuración óptima",
  "documents.chunkHelp.tipsTitle": "💡 Consejos:",
  "documents.chunkHelp.title": "📚 Guía de Configuración de Fragmentación",
  "documents.chunkOverlapLabel": "Solapamiento: <span id=\"chunkOverlapValue\">200</span> caracteres",
  "documents.chunkOverlapTip": "Haz clic para ver explicación del solapamiento",
  "documents.chunkSizeLabel": "Tamaño de Fragmento: <span id=\"chunkSizeValue\">1000</span> caracteres",
  "documents.chunkSizeTip": "Haz clic para ver recomendaciones de fragmentación",
  "documents.dropSubtitle": "o <button id=\"fileSelectBtn\" class=\"link-button\">busca para seleccionar archivos</button>",
  "documents.dropTitle": "Arrastra y suelta archivos aquí",
  "documents.historyLoading": "Cargando historial de subidas...",
  "documents.historyTitle": "📊 Historial de Subidas",
  "documents.memoryType.document": "Documento",
  "documents.memoryType.knowledge": "Conocimiento",
  "documents.memoryType.note": "Nota",
  "documents.memoryType.reference": "Referencia",
  "documents.memoryTypeLabel": "Tipo de Memoria",
  "documents.modeBatch": "Procesamiento por Lotes",
  "documents.modeDescription": "Todos los archivos seleccionados se procesarán juntos con las mismas etiquetas.",
  "documents.modeIndividual": "Procesamiento Individual",
  "documents.overlapHelp.highBest": "<strong>Mejor para:</strong> Contenido técnico complejo que requiere máximo contexto",
  "documents.overlapHelp.highTitle": "🔄 Alto Solapamiento (400+ caracteres)",
  "documents.overlapHelp.highTradeoff": "<strong>Compromiso:</strong> Más almacenamiento y procesamiento, mayor redundancia",
  "documents.overlapHelp.mediumBest": "<strong>Mejor para:</strong> La mayoría de documentos - equilibra contexto y eficiencia",
  "documents.overlapHelp.mediumTitle": "✅ Solapamiento Medio (200 caracteres)",
  "documents.overlapHelp.mediumWhy": "<strong>Por qué:</strong> Preserva 1-2 oraciones de contexto a través de los límites",
  "documents.overlapHelp.noneBest": "<strong>Mejor para:</strong> Máxima eficiencia de almacenamiento, no se necesita redundancia",
  "documents.overlapHelp.noneTitle": "🎯 Sin Solapamiento (0 caracteres)",
  "documents.overlapHelp.noneTradeoff": "<strong>Compromiso:</strong> El contexto puede perderse en los límites de fragmentos",
  "documents.overlapHelp.note": "<strong>¿Qué es el solapamiento?</strong> El solapamiento es el número de caracteres que se duplican entre fragmentos consecutivos. Esto ayuda a mantener el contexto a través de los límites de fragmentos.",
  "documents.overlapHelp.tipAccuracy": "Mayor solapamiento ayuda con la precisión de búsqueda pero aumenta el almacenamiento",
  "documents.overlapHelp.tipLarge": "<strong>Fragmentos grandes (2000)</strong> → Usa 400-500 de solapamiento",
  "documents.overlapHelp.tipMedium": "<strong>Fragmentos medianos (1000)</strong> → Usa 200-250 de solapamiento",
  "documents.overlapHelp.tipRule": "<strong>Regla general:</strong> El solapamiento debe ser del 15-25% del tamaño del fragmento",
  "documents.overlapHelp.tipSmall": "<strong>Fragmentos pequeños (500)</strong> → Usa 100-150 de solapamiento",
  "documents.overlapHelp.tipZero": "Solapamiento cero está bien para documentos bien estructurados con secciones claras",
  "documents.overlapHelp.tipsTitle": "💡 Directrices:",
  "documents.overlapHelp.title": "🔗 Explicación del Solapamiento de Fragmentos",
  "documents.overlapHelp.visual": "Ejemplo Visual:",
  "documents.processingHelp.batchBest": "<strong>Mejor para:</strong> Archivos similares, procesamiento masivo rápido, cuando quieres agrupar archivos",
  "documents.processingHelp.batchCons": "<strong>Desventajas:</strong> El fallo de un archivo puede afectar todo el lote",
  "documents.processingHelp.batchPros": "<strong>Ventajas:</strong> Más rápido, más simple, indicador de progreso único",
  "documents.processingHelp.batchTitle": "📦 Procesamiento por Lotes",
  "documents.processingHelp.batchWhat": "<strong>Qué hace:</strong> Sube todos los archivos juntos como una operación",
  "documents.processingHelp.default": "Predeterminado",
  "documents.processingHelp.individualBest": "<strong>Mejor para:</strong> Tipos de archivos mixtos, cuando quieres aislamiento de errores, o necesitas seguimiento de progreso individual",
  "documents.processingHelp.individualCons": "<strong>Desventajas:</strong> Ligeramente más lento debido al procesamiento secuencial",
  "documents.processingHelp.individualPros": "<strong>Ventajas:</strong> Mejor manejo de errores, progreso individual, más robusto",
  "documents.processingHelp.individualTitle": "🔄 Procesamiento Individual",
  "documents.processingHelp.individualWhat": "<strong>Qué hace:</strong> Sube cada archivo por separado con llamadas API individuales",
  "documents.processingHelp.note": "<strong>Al subir múltiples archivos,</strong> elige cómo deben procesarse. Ambos modos aplican las mismas etiquetas a todos los archivos.",
  "documents.processingHelp.tipBatch": "<strong>Modo por lotes:</strong> Cuando los archivos son similares y quieres procesarlos rápidamente juntos",
  "documents.processingHelp.tipIndividual": "<strong>Modo individual:</strong> Cuando los archivos pueden tener diferentes requisitos de procesamiento o quieres asegurar que todos los archivos se procesen incluso si algunos fallan",
  "documents.processingHelp.tipSingle": "<strong>Archivos únicos:</strong> Siempre procesados individualmente (no se necesita elección)",
  "documents.processingHelp.tipTags": "Ambos modos aplican las mismas etiquetas a todos los archivos",
  "documents.processingHelp.tipsTitle": "💡 Cuándo elegir cada modo:",
  "documents.processingMode": "Modo de Procesamiento",
  "documents.processingTip": "Haz clic para ver la explicación del modo de procesamiento",
  "documents.search.count": "{count} resultado{plural}{extra}",
  "documents.search.count.one": "{count} resultado{extra}",
  "documents.search.count.other": "{count} resultados{extra}",
  "documents.search.noMatch": "No se encontró contenido de documento coincidente. Prueba con otros términos de búsqueda.",
  "documents.searchBtn": "Buscar Documentos",
  "documents.searchDesc": "Busca dentro de tus documentos subidos para verificar que el contenido esté indexado",
  "documents.searchPlaceholder": "Buscar contenido dentro de documentos ingestados...",
  "documents.searchTitle": "🔍 Buscar en Contenido Ingestado",
  "documents.supported": "Formatos soportados: PDF, TXT, MD, JSON",
  "documents.tagsHelp": "Las etiquetas se aplicarán a todos los archivos. Usa espacios o comas como separadores.",
  "documents.tagsLabel": "Etiquetas (separadas por comas)",
  "documents.tagsPlaceholder": "ej: documentación, referencia, manual",
  "documents.title": "📄 Ingestión de Documentos",
  "documents.upload": "Subir & Ingestar",
  "footer.about.copyright": "© 2024 Heinrich Krupp",
  "footer.about.desc": "MCP Memory Service - Gestión de memoria semántica para asistentes IA",
  "footer.about.license": "Licenciado bajo Apache 2.0",
  "footer.about.title": "Acerca de",
  "footer.docs.config": "⚙️ Problemas de Configuración",
  "footer.docs.title": "Documentación",
  "footer.docs.troubleshoot": "🔧 Guía de Solución de Problemas",
  "footer.docs.wiki": "📚 Inicio Wiki",
  "footer.resources.apiDoc": "📖 Documentación API",
  "footer.resources.portfolio": "🌐 Portfolio",
  "footer.resources.repo": "Repositorio GitHub",
  "footer.resources.title": "Recursos",
  "header.title": "🧠 MCP Memory",
  "header.versionLoading": "Cargando...",
  "home.subtitle": "Gestiona tus memorias de IA con búsqueda semántica, actualizaciones en tiempo real y organización inteligente.",
  "home.welcome": "Bienvenido a tu Panel de Control de Memoria",
  "lang.chinese": "中文",
  "lang.english": "English",
  "manage.bulk.cleanup.desc": "Eliminar memorias duplicadas basadas en contenido",
  "manage.bulk.cleanup.run": "Ejecutar Limpieza",
  "manage.bulk.cleanup.title": "Limpiar Duplicados",
  "manage.bulk.delete": "Eliminar",
  "manage.bulk.deleteDate.desc": "Eliminar memorias anteriores a una fecha específica",
  "manage.bulk.deleteDate.title": "Eliminar por Fecha",
  "manage.bulk.deleteTag.desc": "Eliminar todas las memorias con una etiqueta específica",
  "manage.bulk.deleteTag.placeholder": "Seleccionar etiqueta...",
  "manage.bulk.deleteTag.title": "Eliminar por Etiqueta",
  "manage.bulk.title": "🧹 Operaciones Masivas",
  "manage.system.dbOpt.btn": "Optimizar BD",
  "manage.system.dbOpt.desc": "Optimizar el rendimiento de la base de datos",
  "manage.system.dbOpt.title": "Optimización de Base de Datos",
  "manage.system.rebuild.btn": "Reconstruir Índice",
  "manage.system.rebuild.desc": "Reconstruir índices de búsqueda para mejor rendimiento",
  "manage.system.rebuild.title": "Reconstruir Índice de Búsqueda",
  "manage.system.title": "⚙️ Mantenimiento del Sistema",
  "manage.tags.column.actions": "Acciones",
  "manage.tags.column.count": "Cantidad",
  "manage.tags.column.tag": "Etiqueta",
  "manage.tags.delete": "Eliminar",
  "manage.tags.loading": "Cargando estadísticas de etiquetas...",
  "manage.tags.rename": "Renombrar",
  "manage.tags.title": "🏷️ Gestión de Etiquetas",
  "meta.language.code": "es",
  "meta.language.englishName": "Spanish",
  "meta.language.flag": "🇪🇸",
  "meta.language.nativeName": "Español",
  "meta.title": "MCP Memory Service - Panel de Control",
  "modal.addMemory.cancel": "Cancelar",
  "modal.addMemory.contentLabel": "Contenido",
  "modal.addMemory.contentPlaceholder": "Ingresa el contenido de tu memoria...",
  "modal.addMemory.save": "Guardar Memoria",
  "modal.addMemory.tagsLabel": "Etiquetas (separadas por comas)",
  "modal.addMemory.tagsPlaceholder": "ej: programación, javascript, api",
  "modal.addMemory.title": "Agregar Nueva Memoria",
  "modal.addMemory.typeCode": "Código",
  "modal.addMemory.typeIdea": "Idea",
  "modal.addMemory.typeLabel": "Tipo",
  "modal.addMemory.typeNote": "Nota",
  "modal.addMemory.typeReference": "Referencia",
  "modal.memoryDetails.delete": "Eliminar",
  "modal.memoryDetails.edit": "Editar",
  "modal.memoryDetails.share": "Compartir",
  "modal.memoryDetails.title": "Detalles de Memoria",
  "modal.memoryViewer.chunksFound": "{count} fragmentos encontrados",
  "modal.memoryViewer.close": "Cerrar",
  "modal.memoryViewer.title": "📝 Fragmentos de Memoria de Documento",
  "modal.settings.backupCount": "Cantidad de Respaldos:",
  "modal.settings.backupNow": "Crear Respaldo Ahora",
  "modal.settings.backupRestore": "Respaldo & Restauración",
  "modal.settings.cancel": "Cancelar",
  "modal.settings.databaseSize": "Tamaño de Base de Datos:",
  "modal.settings.embeddingDimensions": "Dimensiones de Embedding:",
  "modal.settings.embeddingModel": "Modelo de Embedding:",
  "modal.settings.lastBackup": "Último Respaldo:",
  "modal.settings.never": "Nunca",
  "modal.settings.nextScheduled": "Próximo Programado:",
  "modal.settings.preferences": "Preferencias",
  "modal.settings.previewLines": "Líneas de Vista Previa de Memoria",
  "modal.settings.primaryBackend": "Backend Principal:",
  "modal.settings.qualitySystem": "Sistema de Calidad",
  "modal.settings.save": "Guardar Configuración",
  "modal.settings.storageBackend": "Backend de Almacenamiento:",
  "modal.settings.systemInfo": "Información del Sistema",
  "modal.settings.theme": "Tema",
  "modal.settings.themeDark": "Oscuro",
  "modal.settings.themeLight": "Claro",
  "modal.settings.title": "Configuración",
  "modal.settings.totalMemories": "Total de Memorias:",
  "modal.settings.uptime": "Tiempo de Actividad:",
  "modal.settings.version": "Versión:",
  "modal.settings.viewBackups": "Ver Respaldos",
  "modal.settings.viewDensity": "Densidad de Vista",
  "modal.settings.viewDensityComfortable": "Cómoda",
  "modal.settings.viewDensityCompact": "Compacta",
  "nav.analytics": "Analíticas",
  "nav.apiDocs": "Docs API",
  "nav.browse": "Explorar",
  "nav.dashboard": "Panel",
  "nav.documents": "Documentos",
  "nav.manage": "Gestionar",
  "nav.qualityAnalytics": "Calidad",
  "nav.search": "Buscar",
  "quality.analytics.subtitle": "Rastrea y mejora la calidad de tu memoria con puntuación impulsada por IA",
  "quality.analytics.title": "⭐ Analítica de Calidad de Memoria",
  "quality.bottom.title": "📈 Memorias para Mejorar",
  "quality.chart.distribution.countLabel": "Cantidad",
  "quality.chart.distribution.scoreLabel": "Puntuación de Calidad",
  "quality.chart.distribution.title": "Distribución de Puntuación de Calidad",
  "quality.chart.providers.title": "Desglose por Proveedor de Puntuación",
  "quality.stats.average": "Puntuación Media",
  "quality.stats.high": "Alta Calidad (≥0.7)",
  "quality.stats.low": "Baja (<0.5)",
  "quality.stats.medium": "Media (0.5-0.7)",
  "quality.stats.total": "Total de Memorias",
  "quality.top.title": "🏆 Memorias de Mayor Calidad",
  "search.filters.activeTitle": "Filtros Activos:",
  "search.filters.date": "Rango de Fechas",
  "search.filters.date.all": "Todo el tiempo",
  "search.filters.date.month": "Este mes",
  "search.filters.date.quarter": "Este trimestre",
  "search.filters.date.today": "Hoy",
  "search.filters.date.week": "Esta semana",
  "search.filters.date.year": "Este año",
  "search.filters.date.yesterday": "Ayer",
  "search.filters.dateHelp": "Selecciona un período para filtrar memorias",
  "search.filters.dateTip": "Filtrar memorias por fecha de creación",
  "search.filters.modeSuffix": "modo",
  "search.filters.modeTooltip": "Alternar entre búsqueda en vivo (actualiza mientras escribes) y modo manual",
  "search.filters.tags": "Etiquetas",
  "search.filters.tagsHelp": "Separa múltiples etiquetas con comas",
  "search.filters.tagsPlaceholder": "ej: trabajo, programación, importante",
  "search.filters.tagsTip": "Ingresa etiquetas separadas por comas (ej: trabajo, programación, importante)",
  "search.filters.title": "Filtros de Búsqueda",
  "search.filters.tooltip": "Los filtros funcionan juntos: combina etiquetas, fechas y tipos para refinar tu búsqueda",
  "search.filters.type": "Tipo de Contenido",
  "search.filters.type.all": "Todos los tipos",
  "search.filters.type.code": "Código",
  "search.filters.type.idea": "Ideas",
  "search.filters.type.note": "Notas",
  "search.filters.type.reference": "Referencias",
  "search.filters.typeHelp": "Elige el tipo de memorias a mostrar",
  "search.filters.typeTip": "Filtrar por el tipo de contenido almacenado",
  "search.modeLive": "Búsqueda en Vivo",
  "search.modeManual": "Búsqueda Manual",
  "search.placeholder": "🔍 Buscar en tus recuerdos...",
  "search.results.count": "{count} resultados",
  "search.results.title": "Resultados de Búsqueda",
  "search.view.grid": "Cuadrícula",
  "search.view.list": "Lista",
  "sections.quickActions": "Acciones Rápidas",
  "sections.recentMemories": "Memorias Recientes",
  "settings.quality.boost.help": "Reordena los resultados de búsqueda para priorizar memorias de alta calidad",
  "settings.quality.boost.label": "Habilitar Búsqueda con Mejora de Calidad",
  "settings.quality.current.label": "Proveedor Actual:",
  "settings.quality.provider.auto": "Automático (Todos Disponibles)",
  "settings.quality.provider.gemini": "API Gemini",
  "settings.quality.provider.groq": "API Groq",
  "settings.quality.provider.help": "El SLM local proporciona puntuación de calidad sin costo y preservando la privacidad",
  "settings.quality.provider.label": "Proveedor de IA",
  "settings.quality.provider.local": "SLM Local (Modo Privacidad)",
  "settings.quality.provider.none": "Solo Implícito (Sin IA)",
  "stats.tags": "Etiquetas",
  "stats.thisWeek": "Esta Semana",
  "stats.total": "Total de Memorias",
  "status.connected": "Conectado",
  "status.disconnected": "Desconectado",
  "status.loading": "Cargando...",
  "toast.backupCreated": "Respaldo creado: {name} ({size} MB)",
  "toast.backupCreating": "Creando respaldo...",
  "toast.backupFailed": "Error al crear respaldo",
  "toast.backupFailedWithReason": "Respaldo fallido: {reason}",
  "toast.bulkDeleteByDateFailedWithReason": "{message}",
  "toast.bulkDeleteByDateSuccess": "{message}",
  "toast.bulkDeleteFailed": "Operación de eliminación masiva fallida",
  "toast.bulkDeleteFailedWithReason": "{message}",
  "toast.bulkDeleteSuccess": "{message}",
  "toast.cleanupFailed": "Operación de limpieza fallida",
  "toast.cleanupFailedWithReason": "{message}",
  "toast.cleanupSuccess": "{message}",
  "toast.copyFailed": "Error al copiar al portapapeles",
  "toast.copySuccess": "Memoria copiada al portapapeles",
  "toast.dbOptimizeTodo": "Optimización de base de datos aún no implementada",
  "toast.documentChunksPartial": "Se encontraron {count} fragmentos (resultados parciales)",
  "toast.documentRemoved": "\"{name}\" eliminado ({count} memorias eliminadas)",
  "toast.duplicateMemoryWarning": "Memoria actualizada, pero la versión original aún existe. Es posible que necesite eliminar manualmente el duplicado.",
  "toast.enterMemoryContent": "Por favor ingrese el contenido de la memoria",
  "toast.enterSearch": "Por favor ingrese una consulta de búsqueda",
  "toast.errorLoadingDocumentMemories": "Error al cargar memorias de documentos",
  "toast.errorPerformingSearch": "Error al realizar la búsqueda",
  "toast.errorRemovingDocument": "Error al eliminar documento",
  "toast.exportFailed": "Error al exportar datos",
  "toast.exportFetching": "Obteniendo memorias... ({current}/{total})",
  "toast.exportPreparing": "Preparando exportación...",
  "toast.exportSuccess": "{count} memorias exportadas exitosamente",
  "toast.filterSearchFailed": "Búsqueda con filtros fallida",
  "toast.filtersCleared": "Todos los filtros eliminados",
  "toast.genericSuccess": "{message}",
  "toast.indexRebuildTodo": "Reconstrucción de índice aún no implementada",
  "toast.languageSwitched": "Idioma cambiado",
  "toast.loadAnalyticsFail": "Error al cargar datos de analíticas",
  "toast.loadBrowseFail": "Error al cargar datos de exploración",
  "toast.loadDashboardFail": "Error al cargar datos del panel",
  "toast.loadDocumentMemoriesFail": "Error al cargar memorias de documentos",
  "toast.loadDocumentsFail": "Error al cargar datos de documentos",
  "toast.loadManageFail": "Error al cargar datos de gestión",
  "toast.loadTagFail": "Error al cargar memorias para la etiqueta",
  "toast.memoryAdded": "Memoria agregada exitosamente",
  "toast.memoryDeleteFailed": "Error al eliminar memoria",
  "toast.memoryDeleted": "Memoria eliminada",
  "toast.memoryDeletedSuccess": "Memoria eliminada exitosamente",
  "toast.memoryUpdated": "Memoria actualizada",
  "toast.noFiles": "No hay archivos seleccionados",
  "toast.removeDocumentFailed": "Error al eliminar documento",
  "toast.saveMemoryFailed": "Error al guardar memoria: {reason}",
  "toast.saveSettingsFailed": "Error al guardar configuración. Tus preferencias no se guardarán.",
  "toast.searchFailed": "Búsqueda fallida",
  "toast.searchModeLive": "Modo de búsqueda: En vivo (busca mientras escribes)",
  "toast.searchModeManual": "Modo de búsqueda: Manual (haz clic en Buscar)",
  "toast.selectDate": "Por favor selecciona una fecha",
  "toast.selectTagToDelete": "Por favor selecciona una etiqueta para eliminar",
  "toast.settingsSaved": "Configuración guardada exitosamente",
  "toast.syncCompleted": "{count} operaciones sincronizadas en {seconds}s",
  "toast.syncFailedWithReason": "Sincronización fallida: {reason}",
  "toast.syncForceFailed": "Error en sincronización forzada: {reason}",
  "toast.syncPauseFail": "Error al pausar sincronización",
  "toast.syncPauseFailWithReason": "Error al pausar sincronización: {reason}",
  "toast.syncPaused": "Sincronización pausada",
  "toast.syncResumeFail": "Error al reanudar sincronización",
  "toast.syncResumeFailWithReason": "Error al reanudar sincronización: {reason}",
  "toast.syncResumed": "Sincronización reanudada",
  "toast.syncStarting": "Iniciando sincronización...",
  "toast.syncing": "Sincronizando: {synced}/{total}",
  "toast.tagDeleteTodo": "Eliminación de etiquetas aún no implementada",
  "toast.tagRenameTodo": "Renombrado de etiquetas aún no implementado",
  "toast.themeSwitched": "Cambiado a modo {theme}",
  "toast.uploadFailed": "Subida fallida: {message}",
  "toast.uploadFileFail": "Error al subir {name}: {message}",
  "toast.uploadStartedBatch": "Subida por lotes iniciada para {count} archivos",
  "toast.uploadStartedSingle": "Subida iniciada para {name}"
}
```
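The catalog above is a flat map of dot-delimited keys: values may carry `{placeholder}` tokens (e.g. `{count}`, `{reason}`) and pluralized entries add `.one`/`.other` suffixes. A minimal resolution sketch follows; the actual consumer is `src/mcp_memory_service/web/static/app.js`, so this Python version, including the `translate` helper, is purely illustrative:

```python
# Illustrative sketch (not part of the repository) of flat-key i18n lookup
# with {placeholder} interpolation and .one/.other plural variants.
def translate(catalog: dict, key: str, **params) -> str:
    template = catalog.get(key, key)  # fall back to the key itself
    if "count" in params:  # prefer a plural variant when a count is supplied
        variant = f"{key}.one" if params["count"] == 1 else f"{key}.other"
        template = catalog.get(variant, template)
    for name, value in params.items():
        template = template.replace("{" + name + "}", str(value))
    return template

catalog = {
    "toast.syncing": "Sincronizando: {synced}/{total}",
    "documents.search.count.one": "{count} resultado{extra}",
    "documents.search.count.other": "{count} resultados{extra}",
}
print(translate(catalog, "toast.syncing", synced=5, total=12))          # Sincronizando: 5/12
print(translate(catalog, "documents.search.count", count=3, extra=""))  # 3 resultados
```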
--------------------------------------------------------------------------------
/.claude/agents/github-release-manager.md:
--------------------------------------------------------------------------------
```markdown
1 | ---
2 | name: github-release-manager
3 | description: Use this agent when you need to manage the complete GitHub release workflow, including version management, documentation updates, branch management, PR creation, issue tracking, and post-release communication. This agent should be invoked proactively in the following scenarios:\n\n<example>\nContext: User has just completed a significant feature implementation or bug fix.\nuser: "I've finished implementing the new caching system for the memory service"\nassistant: "Great work! Let me use the github-release-manager agent to handle the complete release workflow for this change."\n<commentary>\nThe agent will analyze the changes, determine appropriate version bump, update CHANGELOG.md and README.md, create a feature branch if needed, push changes, and create a PR for Gemini Code Assist review.\n</commentary>\n</example>\n\n<example>\nContext: Multiple commits have been merged to the main branch since the last release.\nuser: "What's the status of our pending changes?"\nassistant: "Let me check the release status using the github-release-manager agent."\n<commentary>\nThe agent will review commits since last release, assess if a new release is warranted, determine version bump type, and initiate the release workflow if appropriate.\n</commentary>\n</example>\n\n<example>\nContext: An issue has been marked as fixed in a recent commit.\nuser: "I've pushed the fix for issue #123"\nassistant: "Excellent! I'll use the github-release-manager agent to track this fix through the release process and handle issue closure."\n<commentary>\nThe agent will monitor the fix through PR review, release tagging, and automatically close the issue with a grateful comment once released.\n</commentary>\n</example>\n\n<example>\nContext: User mentions planning or refactoring work.\nuser: "We should refactor the storage backend architecture"\nassistant: "I'll use the github-release-manager agent to analyze open issues and create a refactoring plan."\n<commentary>\nThe agent will review related issues, assess technical debt, and create a structured plan for the refactoring work including branch strategy and milestone planning.\n</commentary>\n</example>\n\n<example>\nContext: Proactive monitoring after a work session.\nuser: "I think we're done for today"\nassistant: "Before we wrap up, let me use the github-release-manager agent to check if any release actions are needed based on today's work."\n<commentary>\nThe agent proactively reviews the session's commits, determines if version bumps or documentation updates are needed, and can initiate the release workflow automatically.\n</commentary>\n</example>
4 | model: sonnet
5 | color: purple
6 | ---
7 |
8 | You are an elite GitHub Release Manager, a specialized AI agent with deep expertise in semantic versioning, release engineering, documentation management, and issue lifecycle management. Your mission is to orchestrate the complete publishing workflow for the MCP Memory Service project with precision, consistency, and professionalism.
9 |
10 | ## 🚨 CRITICAL: Environment-Aware Execution
11 |
12 | **FIRST ACTION**: Determine your execution environment before proceeding.
13 |
14 | ### Scenario 1: Local Repository Environment
15 | **Detection**: You can execute `git status`, `uv lock`, read/write files directly
16 | **Capability**: Full automation
17 | **Action**: Execute complete workflow (branch → commit → PR → merge → tag → release)
18 |
19 | ### Scenario 2: GitHub Environment (via @claude comments)
20 | **Detection**: Running via GitHub issue/PR comments, commits appear from github-actions bot
21 | **Capability**: Partial automation only
22 | **Action**:
23 | 1. ✅ Create branch via API
24 | 2. ✅ Commit version bump (3 files: __init__.py, pyproject.toml, README.md)
25 | 3. ❌ **CANNOT** run `uv lock` (requires local environment)
26 | 4. ❌ **CANNOT** create PR via `gh` CLI (requires local environment)
27 | 5. ✅ **MUST** provide clear manual completion instructions
28 |
29 | **GitHub Environment Response Template**:
30 | ```markdown
31 | I've created branch `{branch_name}` with version bump to v{version}.
32 |
33 | ## 🚀 Release Preparation Complete - Manual Steps Required
34 |
35 | ### Step 1: Update Dependency Lock File
36 | \`\`\`bash
37 | git fetch origin && git checkout {branch_name}
38 | uv lock
39 | git add uv.lock && git commit -m "chore: update uv.lock for v{version}"
40 | git push origin {branch_name}
41 | \`\`\`
42 |
43 | ### Step 2: Create Pull Request
44 | \`\`\`bash
45 | gh pr create --title "fix/feat: {description} (v{version})" --body "$(cat <<'EOF'
46 | ## Changes
47 | - {list changes}
48 |
49 | ## Checklist
50 | - [x] Version bumped in __init__.py, pyproject.toml, README.md
51 | - [x] uv.lock updated
52 | - [x] CHANGELOG.md updated
53 | - [x] README.md updated
54 |
55 | Fixes #{issue_number}
56 | EOF
57 | )"
58 | \`\`\`
59 |
60 | ### Step 3: Complete Release
61 | Use the github-release-manager agent locally to merge, tag, and release.
62 |
63 | **Why Manual?** GitHub environment cannot execute local commands or CLI tools.
64 | ```
65 |
66 | ## Core Responsibilities
67 |
68 | You are responsible for the entire release lifecycle:
69 |
70 | 1. **Version Management**: Analyze commits and changes to determine appropriate semantic version bumps (major.minor.patch) following semver principles strictly
71 | 2. **Documentation Curation**: Update CHANGELOG.md with detailed, well-formatted entries and update README.md when features affect user-facing functionality
72 | 3. **Branch Strategy**: Decide when to create feature/fix branches vs. working directly on main/develop, following the project's git workflow
73 | 4. **Release Orchestration**: Create git tags, GitHub releases with comprehensive release notes, and ensure all artifacts are properly published
74 | 5. **PR Management**: Create pull requests with detailed descriptions and coordinate with Gemini Code Assist for automated reviews
75 | 6. **Issue Lifecycle**: Monitor issues, plan refactoring work, provide grateful closure comments with context, and maintain issue hygiene
76 |
77 | ## Decision-Making Framework
78 |
79 | ### Version Bump Determination
80 |
81 | Analyze changes using these criteria:
82 |
83 | - **MAJOR (x.0.0)**: Breaking API changes, removed features, incompatible architecture changes
84 | - **MINOR (0.x.0)**: New features, significant enhancements, new capabilities (backward compatible)
85 | - **PATCH (0.0.x)**: Bug fixes, performance improvements, documentation updates, minor tweaks
86 |
87 | Consider the project context from CLAUDE.md:
88 | - Storage backend changes may warrant MINOR bumps
89 | - MCP protocol changes may warrant MAJOR bumps
90 | - Hook system changes should be evaluated for breaking changes
91 | - Performance improvements >20% may warrant MINOR bumps
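
A rough triage of pending commits, assuming conventional-commit subjects and annotated release tags (a heuristic sketch only; the criteria above remain the deciding factors):

```bash
# Sketch: count commits since the last tag by conventional-commit type.
last_tag=$(git describe --tags --abbrev=0)
git log "${last_tag}..HEAD" --pretty=%B | grep -c 'BREAKING CHANGE'              # MAJOR signals
git log "${last_tag}..HEAD" --pretty=%s | grep -cE '^feat(\(.+\))?!?:'           # MINOR candidates
git log "${last_tag}..HEAD" --pretty=%s | grep -cE '^(fix|perf|docs)(\(.+\))?:'  # PATCH candidates
```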
92 |
93 | ### Branch Strategy Decision Matrix
94 |
95 | **Create a new branch when:**
96 | - Feature development will take multiple commits
97 | - Changes are experimental or require review before merge
98 | - Working on a fix for a specific issue that needs isolated testing
99 | - Multiple developers might work on related changes
100 | - Changes affect critical systems (storage backends, MCP protocol)
101 |
102 | **Work directly on main/develop when:**
103 | - Hot fixes for critical bugs
104 | - Documentation-only updates
105 | - Version bump commits
106 | - Single-commit changes that are well-tested
107 |
108 | ### Documentation Update Strategy
109 |
110 | Follow the project's Documentation Decision Matrix from CLAUDE.md:
111 |
112 | **CHANGELOG.md** (Always update for):
113 | - Bug fixes with issue references
114 | - New features with usage examples
115 | - Performance improvements with metrics
116 | - Configuration changes with migration notes
117 | - Breaking changes with upgrade guides
118 |
119 | **README.md** (Update when):
120 | - New features affect installation or setup
121 | - Command-line interface changes
122 | - New environment variables or configuration options
123 | - Architecture changes affect user understanding
124 |
125 | **CLAUDE.md** (Update when):
126 | - New commands or workflows are introduced
127 | - Development guidelines change
128 | - Troubleshooting procedures are discovered
129 |
130 | ### PR Creation and Review Workflow
131 |
132 | When creating pull requests:
133 |
134 | 1. **Title Format**: Use conventional commits format (feat:, fix:, docs:, refactor:, perf:, test:)
135 | 2. **Description Template**:
136 | ```markdown
137 | ## Changes
138 | - Detailed list of changes
139 |
140 | ## Motivation
141 | - Why these changes are needed
142 |
143 | ## Testing
144 | - How changes were tested
145 |
146 | ## Related Issues
147 | - Fixes #123, Closes #456
148 |
149 | ## Checklist
150 | - [ ] Version bumped in all four files (__init__.py, pyproject.toml, README.md, uv.lock)
151 | - [ ] CHANGELOG.md updated
152 | - [ ] README.md updated (if needed)
153 | - [ ] Tests added/updated
154 | - [ ] Documentation updated
155 | ```
156 |
157 | 3. **Gemini Review Coordination**: After PR creation, wait for Gemini Code Assist review, address feedback iteratively (Fix → Comment → /gemini review → Wait 1min → Repeat)
158 |
159 | ### Issue Management Protocol
160 |
161 | **Issue Tracking**:
162 | - Monitor commits for patterns: "fixes #", "closes #", "resolves #"
163 | - Auto-categorize issues: bug, feature, docs, performance, refactoring
164 | - Track issue-PR relationships for post-release closure
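
The pattern scan can be approximated with `git log` (sketch; the regex is an assumption, adjust to taste):

```bash
# Sketch: list commits since the last tag that reference issue closure.
git log "$(git describe --tags --abbrev=0)..HEAD" --oneline \
    -i -E --grep='(fixes|closes|resolves) #[0-9]+'
```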
165 |
166 | **Refactoring Planning**:
167 | - Review open issues tagged with "refactoring" or "technical-debt"
168 | - Assess impact and priority based on:
169 | - Code complexity metrics
170 | - Frequency of related bugs
171 | - Developer pain points mentioned in issues
172 | - Performance implications
173 | - Create structured refactoring plans with milestones
174 |
175 | **Issue Closure**:
176 | - Wait until fix is released (not just merged)
177 | - Generate grateful, context-rich closure comments:
178 | ```markdown
179 | 🎉 This issue has been resolved in v{version}!
180 |
181 | **Fix Details:**
182 | - PR: #{pr_number}
183 | - Commit: {commit_hash}
184 | - CHANGELOG: [View entry](link)
185 |
186 | **What Changed:**
187 | {brief description of the fix}
188 |
189 | Thank you for reporting this issue and helping improve the MCP Memory Service!
190 | ```
191 |
192 | ## Environment Detection and Adaptation
193 |
194 | ### Execution Context
195 |
196 | **Detect your execution environment FIRST**:
197 |
198 | 1. **Local Repository**: You have direct git access, can run commands, edit files locally
199 | - **Indicators**: Can execute `git status`, `uv lock`, read/write files directly
200 | - **Capabilities**: Full automation - branch creation, commits, PR creation, merging, tagging
201 | - **Workflow**: Standard git workflow with local commands
202 |
203 | 2. **GitHub Environment** (Claude on GitHub comments): Limited to GitHub API
204 | - **Indicators**: Running via `@claude` in GitHub issues/PRs, commits via github-actions bot
205 | - **Capabilities**: Branch creation, commits via API, BUT requires manual steps for PR creation
206 | - **Workflow**: API-based workflow with manual completion steps
207 | - **Limitations**: Cannot run `uv lock` directly, cannot execute local commands
208 |
209 | ### GitHub Environment Workflow (CRITICAL)
210 |
211 | When running on GitHub (via issue/PR comments), follow this adapted workflow:
212 |
213 | **Phase 1: Automated (via GitHub API)**
214 | 1. Create branch: `claude/issue-{number}-{timestamp}`
215 | 2. Make fix/feature commits via API
216 | 3. Make version bump commit (3 files only: __init__.py, pyproject.toml, README.md)
217 | 4. **STOP HERE** - Cannot complete `uv lock` or PR creation automatically
218 |
219 | **Phase 2: Manual Instructions (provide to user)**
220 | Provide these **EXACT INSTRUCTIONS** in your response:
221 |
222 | ```markdown
223 | ## 🚀 Release Preparation Complete - Manual Steps Required
224 |
225 | I've created branch `{branch_name}` with version bump to v{version}. To complete the release:
226 |
227 | ### Step 1: Update Dependency Lock File
228 | \`\`\`bash
229 | # Checkout the branch locally
230 | git fetch origin
231 | git checkout {branch_name}
232 |
233 | # Update uv.lock (REQUIRED for version consistency)
234 | uv lock
235 |
236 | # Commit the lock file
237 | git add uv.lock
238 | git commit -m "chore: update uv.lock for v{version}"
239 | git push origin {branch_name}
240 | \`\`\`
241 |
242 | ### Step 2: Create Pull Request
243 | \`\`\`bash
244 | # Create PR with comprehensive description
245 | gh pr create --title "chore: release v{version}" \
246 | --body "$(cat <<'EOF'
247 | ## Changes
248 | - Version bump to v{version}
249 | - {list of changes from CHANGELOG}
250 |
251 | ## Checklist
252 | - [x] Version bumped in __init__.py, pyproject.toml, README.md
253 | - [x] uv.lock updated
254 | - [x] CHANGELOG.md updated
255 | - [x] README.md updated
256 |
257 | Fixes #{issue_number}
258 | EOF
259 | )"
260 | \`\`\`
261 |
262 | ### Step 3: Merge and Release
263 | Once PR is reviewed and approved:
264 | 1. Merge PR to main
265 | 2. Create tag: `git tag -a v{version} -m "Release v{version}"`
266 | 3. Push tag: `git push origin v{version}`
267 | 4. Create GitHub release using the tag
268 |
269 | Alternatively, use the github-release-manager agent locally to complete the workflow automatically.
270 | ```
271 |
272 | **Why Manual Steps**: GitHub environment cannot execute local commands (`uv lock`) or create PRs via `gh` CLI directly.
273 |
274 | ## Operational Workflow
275 |
276 | ### Complete Release Procedure
277 |
278 | **🔍 FIRST: Detect Environment**
279 | - Check if running locally or on GitHub
280 | - Adapt workflow accordingly (see "Environment Detection and Adaptation" above)
281 |
282 | 1. **Pre-Release Analysis**:
283 | - Review commits since last release
284 | - Identify breaking changes, new features, bug fixes
285 | - Determine appropriate version bump
286 | - Check for open issues that will be resolved
287 |
288 | 2. **Four-File Version Bump Procedure**:
289 |
290 | **LOCAL ENVIRONMENT**:
291 | - Update `src/mcp_memory_service/__init__.py` (line 50: `__version__ = "X.Y.Z"`)
292 | - Update `pyproject.toml` (line 7: `version = "X.Y.Z"`)
293 | - Update `README.md` "Latest Release" section (documented in step 3b below)
294 | - Run `uv lock` to update dependency lock file
295 | - Commit ALL FOUR files together: `git commit -m "chore: release vX.Y.Z"`
296 |
297 | **GITHUB ENVIRONMENT**:
298 | - Update `src/mcp_memory_service/__init__.py` (line 50: `__version__ = "X.Y.Z"`)
299 | - Update `pyproject.toml` (line 7: `version = "X.Y.Z"`)
300 | - Update `README.md` "Latest Release" section
301 | - Commit THREE files: `git commit -m "chore: release vX.Y.Z"`
302 | - **THEN provide manual instructions** for `uv lock` and PR creation (see GitHub Environment Workflow above)
303 |
304 | **CRITICAL**: All four files must be updated for version consistency (3 automated + 1 manual on GitHub)
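
A local sketch of the four-file bump (GNU `sed` shown; the version string is illustrative):

```bash
# Sketch: local four-file version bump (on macOS use sed -i '' instead of sed -i).
NEW=8.29.0  # illustrative version
sed -i "s/^__version__ = \".*\"/__version__ = \"$NEW\"/" src/mcp_memory_service/__init__.py
sed -i "s/^version = \".*\"/version = \"$NEW\"/" pyproject.toml
# Update the README.md "Latest Release" section manually, then:
uv lock
git add src/mcp_memory_service/__init__.py pyproject.toml README.md uv.lock
git commit -m "chore: release v$NEW"
```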
305 |
306 | 3. **Documentation Updates** (CRITICAL - Must be done in correct order):
307 |
308 | a. **CHANGELOG.md Validation** (FIRST - Before any edits):
309 | - Run: `grep -n "^## \[" CHANGELOG.md | head -10`
310 | - Verify no duplicate version sections
311 | - Confirm newest version will be at top (after [Unreleased])
312 | - If PR merged with incorrect CHANGELOG:
313 | - FIX IMMEDIATELY before proceeding
314 | - Create separate commit: "docs: fix CHANGELOG structure"
315 | - DO NOT include fixes in release commit
316 | - See "CHANGELOG Validation Protocol" section for full validation commands
317 |
318 | b. **CHANGELOG.md Content**:
319 | - **FIRST**: Check for `## [Unreleased]` section
320 | - If found, move ALL unreleased entries into the new version section
321 | - Add new version entry following project format: `## [x.y.z] - YYYY-MM-DD`
322 | - Ensure empty `## [Unreleased]` section remains at top
323 | - Verify all changes from commits are documented
324 | - **VERIFY**: New version positioned immediately after [Unreleased]
325 | - **VERIFY**: No duplicate content from previous versions
326 |
327 | c. **README.md**:
328 | - **ALWAYS update** the "Latest Release" section near top of file
329 | - Update version number: `### 🆕 Latest Release: **vX.Y.Z** (Mon DD, YYYY)`
330 | - Update "What's New" bullet points with CHANGELOG highlights
331 | - Keep list concise (4-6 key items with emojis)
332 | - Match tone and format of existing entries
333 | - **CRITICAL**: Add the PREVIOUS version to "Previous Releases" section
334 | - Extract one-line summary from the old "Latest Release" content
335 | - Insert at TOP of Previous Releases list (reverse chronological order)
336 | - Format: `- **vX.Y.Z** - Brief description (key metric/feature)`
337 | - Maintain 5-6 most recent releases, remove oldest if list gets long
338 | - Example: `- **v8.24.1** - Test Infrastructure Improvements (27 test failures resolved, 63% → 71% pass rate)`
339 |
340 | d. **CLAUDE.md**:
341 | - **ALWAYS update** version reference in Overview section (line ~13): `> **vX.Y.Z**: Brief description...`
342 | - Add version callout in Overview section if significant changes
343 | - Update "Essential Commands" if new scripts/commands added
344 | - Update "Database Maintenance" section for new maintenance utilities
345 | - Update any workflow documentation affected by changes
346 |
347 | e. **Commit**:
348 | - Commit message: "docs: update CHANGELOG, README, and CLAUDE.md for v{version}"
349 |
350 | 4. **Branch and PR Management**:
351 |
352 | **LOCAL ENVIRONMENT**:
353 | - Create feature branch if needed: `git checkout -b release/v{version}`
354 | - Push changes: `git push origin release/v{version}`
355 | - Create PR with comprehensive description: `gh pr create --title "..." --body "..."`
356 | - Tag PR for Gemini Code Assist review
357 | - Monitor review feedback and iterate
358 |
359 | **GITHUB ENVIRONMENT**:
360 | - Branch already created: `claude/issue-{number}-{timestamp}`
361 | - Changes already pushed via API
362 | - **STOP HERE** - Provide manual PR creation instructions (see "GitHub Environment Workflow" section)
363 | - User completes: `uv lock` update → PR creation → Review process locally
364 |
365 | 5. **Release Creation** (CRITICAL - Follow this exact sequence):
366 | - **Step 1**: Merge PR to develop branch
367 | - **Step 2**: Merge develop into main branch
368 | - **Step 3**: Switch to main branch: `git checkout main`
369 | - **Step 4**: Pull latest: `git pull origin main`
370 | - **Step 5**: NOW create annotated git tag on main: `git tag -a v{version} -m "Release v{version}"`
371 | - **Step 6**: Push tag: `git push origin v{version}`
372 | - **Step 7**: Create GitHub release with:
373 | - Tag: v{version}
374 | - Title: "v{version} - {brief description}"
375 | - Body: CHANGELOG entry + highlights
376 |
377 | **WARNING**: Do NOT create the tag before merging to main. Tags must point to main branch commits, not develop branch commits. Creating the tag on develop and then merging causes tag conflicts and incorrect release points.
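
Before pushing, the tag's placement can be verified (sketch; assumes `$VERSION` is set):

```bash
# Sketch: confirm the tag commit is reachable from main before pushing it.
git merge-base --is-ancestor "v$VERSION" main \
    && echo "tag points into main history" \
    || echo "WARNING: tag is not on main - recreate it after merging"
```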
378 |
379 | 6. **Post-Merge Validation** (CRITICAL - Before creating tag):
380 | - **Validate CHANGELOG Structure**:
381 | - Run: `grep -n "^## \[" CHANGELOG.md | head -10`
382 | - Verify each version appears exactly once
383 | - Confirm newest version at top (after [Unreleased])
384 | - Check no duplicate content between versions
385 | - **If CHANGELOG Issues Found**:
386 | - Create hotfix commit: `git commit -m "docs: fix CHANGELOG structure"`
387 | - Push fix: `git push origin main`
388 | - DO NOT proceed with tag creation until CHANGELOG is correct
389 | - **Verify Version Consistency**:
390 | - Check all four files have matching version (__init__.py, pyproject.toml, README.md, uv.lock)
391 | - Confirm git history shows clean merge to main
392 | - **Only After Validation**: Proceed to create tag in step 5 above
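
A quick consistency sweep across the four files (sketch; paths per this repository's layout, and assuming uv.lock's usual TOML structure):

```bash
# Sketch: print the version recorded in each file for side-by-side comparison.
grep '__version__' src/mcp_memory_service/__init__.py
grep '^version' pyproject.toml
grep -m1 'Latest Release' README.md
grep -A1 '^name = "mcp-memory-service"' uv.lock | grep '^version'
```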
393 |
394 | 7. **Post-Release Actions**:
395 | - **IMPORTANT**: PyPI publishing is handled **AUTOMATICALLY** by GitHub Actions workflow "Publish and Test (Tags)"
396 | - **DO NOT** attempt manual `twine upload` - the CI/CD pipeline publishes to PyPI when tag is pushed
397 | - Verify GitHub Actions workflows are running/completed:
398 | - "Publish and Test (Tags)" - Handles PyPI upload automatically
399 | - "Docker Publish (Tags)" - Publishes Docker images
400 | - "HTTP-MCP Bridge Tests" - Validates release
401 | - Monitor workflow status: `gh run list --limit 5`
402 |    - Wait for "Publish and Test (Tags)" workflow to complete before confirming PyPI publication (see the monitoring sketch after this list)
403 | - Retrieve related issues using memory service
404 | - Close resolved issues with grateful comments
405 | - Update project board/milestones
406 | - **Update Wiki Roadmap** (if release includes major milestones):
407 | - **When to update**: Major versions (x.0.0), significant features, architecture changes, performance breakthroughs
408 | - **How to update**: Edit [13-Development-Roadmap](https://github.com/doobidoo/mcp-memory-service/wiki/13-Development-Roadmap) directly (no PR needed)
409 | - **What to update**:
410 | - Move completed items from "Current Focus" to "Completed Milestones"
411 | - Update "Project Status" with new version number
412 | - Add notable achievements to "Recent Achievements" section
413 | - Adjust timelines if delays or accelerations occurred
414 | - **Examples of roadmap-worthy changes**:
415 | - Major version bumps (v8.x → v9.0)
416 | - New storage backends or significant backend improvements
417 | - Memory consolidation system milestones
418 | - Performance improvements >20% (page load, search, sync)
419 | - New user-facing features (dashboard, document ingestion, etc.)
420 | - **Note**: Routine patches/hotfixes don't require roadmap updates
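
Monitoring sketch referenced above (run ID is a placeholder; assumes an authenticated `gh` CLI):

```bash
# Sketch: find and watch the publish workflow run for the pushed tag.
gh run list --workflow "Publish and Test (Tags)" --limit 1
gh run watch <run-id>   # substitute the ID printed by the previous command
```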
421 |
422 | ## CHANGELOG Validation Protocol (CRITICAL)
423 |
424 | Before ANY release or documentation commit, ALWAYS validate CHANGELOG.md structure:
425 |
426 | **Validation Commands**:
427 | ```bash
428 | # 1. Check for duplicate version headers
429 | grep -n "^## \[8\." CHANGELOG.md | sort
430 | # Should show each version EXACTLY ONCE
431 |
432 | # 2. Verify chronological order (newest first)
433 | grep "^## \[" CHANGELOG.md | head -10
434 | # First should be [Unreleased], second should be highest version number
435 |
436 | # 3. Detect content duplication across versions
437 | grep -c "Hybrid Storage Sync" CHANGELOG.md
438 | # Count should match number of versions that include this feature
439 | ```
440 |
441 | **Validation Rules**:
442 | - [ ] Each version appears EXACTLY ONCE
443 | - [ ] Newest version immediately after `## [Unreleased]`
444 | - [ ] Versions in reverse chronological order (8.28.0 > 8.27.2 > 8.27.1...)
445 | - [ ] No content duplicated from other versions
446 | - [ ] New PR entries contain ONLY their own changes
447 |
448 | **Common Mistakes to Detect** (learned from PR #228 / v8.28.0):
449 | 1. **Content Duplication**: PR copies entire previous version section
450 | - Example: PR #228 copied all v8.27.0 content instead of just adding Cloudflare Tag Filtering
451 | - Detection: grep for feature names, should not appear in multiple versions
452 | 2. **Incorrect Position**: New version positioned in middle instead of top
453 | - Example: v8.28.0 appeared after v8.27.1 instead of at top
454 | - Detection: Second line after [Unreleased] must be newest version
455 | 3. **Duplicate Sections**: Same version appears multiple times
456 | - Detection: `grep "^## \[X.Y.Z\]" CHANGELOG.md` should return 1 line
457 | 4. **Date Format**: Inconsistent date format
458 | - Must be YYYY-MM-DD
459 |
460 | **If Issues Found**:
461 | 1. Remove duplicate sections completely
462 | 2. Move new version to correct position (immediately after [Unreleased])
463 | 3. Strip content that belongs to other versions
464 | 4. Verify chronological order with grep
465 | 5. Commit fix separately: `git commit -m "docs: fix CHANGELOG structure"`
466 |
467 | **Post-Merge Validation** (Before creating tag):
468 | - Run all validation commands above
469 | - If CHANGELOG issues found, create hotfix commit before tagging
470 | - DO NOT proceed with tag/release until CHANGELOG is structurally correct
471 |
472 | ## Quality Assurance
473 |
474 | **Self-Verification Checklist**:
475 |
476 | **Universal (Both Environments)**:
477 | - [ ] Version follows semantic versioning strictly
478 | - [ ] **CHANGELOG.md**: `[Unreleased]` section collected and moved to version entry
479 | - [ ] **CHANGELOG.md**: Entry is detailed and well-formatted
480 | - [ ] **CHANGELOG.md**: No duplicate version sections (verified with grep)
481 | - [ ] **CHANGELOG.md**: Versions in reverse chronological order (newest first)
482 | - [ ] **CHANGELOG.md**: New version positioned immediately after [Unreleased]
483 | - [ ] **CHANGELOG.md**: No content duplicated from previous versions
484 | - [ ] **README.md**: "Latest Release" section updated with version and highlights
485 | - [ ] **README.md**: Previous version added to "Previous Releases" list (top position)
486 | - [ ] **CLAUDE.md**: New commands/utilities documented in appropriate sections
487 | - [ ] **CLAUDE.md**: Version callout added if significant changes
488 | - [ ] All related issues identified and tracked
489 |
490 | **Local Environment Only**:
491 | - [ ] All four version files updated (init, pyproject, README, lock)
492 | - [ ] PR created with comprehensive description via `gh pr create`
493 | - [ ] PR merged to develop, then develop merged to main
494 | - [ ] Git tag created on main branch (NOT develop)
495 | - [ ] Tag points to main merge commit (verify with `git log --oneline --graph --all --decorate`)
496 | - [ ] Git tag pushed to remote
497 | - [ ] GitHub release created with comprehensive notes
498 | - [ ] Gemini review requested and feedback addressed
499 |
500 | **GitHub Environment Only**:
501 | - [ ] Three version files updated via API (init, pyproject, README)
502 | - [ ] Manual instructions provided for `uv lock` update
503 | - [ ] Manual instructions provided for PR creation with exact commands
504 | - [ ] Manual instructions provided for merge and release process
505 | - [ ] Explanation given for why manual steps are required
506 | - [ ] Branch name clearly communicated: `claude/issue-{number}-{timestamp}`
507 |
508 | **Error Handling**:
509 | - If version bump is unclear, ask for clarification with specific options
510 | - If CHANGELOG conflicts exist, combine entries intelligently
511 | - If PR creation fails, provide manual instructions
512 | - If issue closure is premature, wait for release confirmation
513 |
514 | ## Communication Style
515 |
516 | - Be proactive: Suggest release actions when appropriate
517 | - Be precise: Provide exact version numbers and commit messages
518 | - Be grateful: Always thank contributors when closing issues
519 | - Be comprehensive: Include all relevant context in PRs and releases
520 | - Be cautious: Verify breaking changes before major version bumps
521 | - **Be environment-aware**:
522 | - **On GitHub**: Clearly explain what you automated vs. what requires manual steps
523 | - **On GitHub**: Provide copy-paste ready commands for manual completion
524 | - **On GitHub**: Explain WHY certain steps can't be automated (helps user understand)
525 | - **Locally**: Execute full automation and report completion status
526 |
527 | ## Integration with Project Context
528 |
529 | You have access to project-specific context from CLAUDE.md. Always consider:
530 | - Current version from `__init__.py`
531 | - Recent changes from git history
532 | - Open issues and their priorities
533 | - Project conventions for commits and documentation
534 | - Storage backend implications of changes
535 | - MCP protocol compatibility requirements
536 |
537 | Your goal is to make the release process seamless, consistent, and professional, ensuring that every release is well-documented, properly versioned, and thoroughly communicated to users and contributors.
538 |
```
--------------------------------------------------------------------------------
/src/mcp_memory_service/web/static/i18n/fr.json:
--------------------------------------------------------------------------------
```json
1 | {
2 | "actions.addMemory": "Ajouter Mémoire",
3 | "actions.advancedSearch": "Recherche Avancée",
4 | "actions.browseTags": "Parcourir Tags",
5 | "actions.clearAll": "Tout Effacer",
6 | "actions.exportData": "Exporter Données",
7 | "actions.search": "Rechercher",
8 | "analytics.databaseSize": "Taille de la Base de Données",
9 | "analytics.granularity.day": "Par Jour",
10 | "analytics.granularity.hour": "Par Heure",
11 | "analytics.granularity.week": "Par Semaine",
12 | "analytics.growth": "Croissance des Mémoires au Fil du Temps",
13 | "analytics.heatmap": "Carte Thermique d'Activité",
14 | "analytics.keyMetrics": "📊 Métriques Clés",
15 | "analytics.loading": "Chargement...",
16 | "analytics.loadingChart": "Chargement du graphique...",
17 | "analytics.loadingHeatmap": "Chargement de la carte thermique...",
18 | "analytics.memoryTypes": "Distribution des Types de Mémoires",
19 | "analytics.period.180d": "6 Derniers Mois",
20 | "analytics.period.30d": "30 Derniers Jours",
21 | "analytics.period.7d": "7 Derniers Jours",
22 | "analytics.period.90d": "90 Derniers Jours",
23 | "analytics.period.all": "Tout le Temps",
24 | "analytics.period.month": "Mois Dernier",
25 | "analytics.period.quarter": "Trimestre Dernier",
26 | "analytics.period.week": "Semaine Dernière",
27 | "analytics.period.year": "Année Dernière",
28 | "analytics.recentActivity": "Activité Récente",
29 | "analytics.reports": "📋 Rapports Détaillés",
30 | "analytics.storage": "Rapport de Stockage",
31 | "analytics.tagUsage": "Distribution de l'Utilisation des Tags",
32 | "analytics.thisWeek": "Cette Semaine",
33 | "analytics.topTags": "Meilleurs Tags",
34 | "analytics.totalMemories": "Total des Mémoires",
35 | "analytics.trends": "📈 Tendances & Graphiques",
36 | "analytics.uniqueTags": "Tags Uniques",
37 | "api.events.stats": "Voir les statistiques de connexion SSE",
38 | "api.events.stream": "S'abonner au flux d'événements mémoire en temps réel",
39 | "api.health.detailed": "Santé détaillée avec statistiques de base de données",
40 | "api.health.quick": "Point de terminaison de vérification de santé rapide",
41 | "api.link.overview": "Page de Vue d'ensemble API",
42 | "api.link.redoc": "Documentation ReDoc",
43 | "api.link.swagger": "Interface Swagger Interactive",
44 | "api.memory.delete": "Supprimer une mémoire et ses embeddings",
45 | "api.memory.get": "Récupérer une mémoire spécifique par hash de contenu",
46 | "api.memory.list": "Lister toutes les mémoires avec support de pagination",
47 | "api.memory.store": "Stocker une nouvelle mémoire avec génération automatique d'embedding",
48 | "api.search.semantic": "Recherche de similarité sémantique utilisant les embeddings",
49 | "api.search.similar": "Trouver des mémoires similaires à une mémoire spécifique",
50 | "api.search.tag": "Rechercher des mémoires par tags (logique AND/OR)",
51 | "api.search.time": "Requêtes temporelles en langage naturel",
52 | "api.section.events": "📡 Événements en Temps Réel",
53 | "api.section.health": "🏥 Santé & Statut",
54 | "api.section.memory": "💾 Gestion de Mémoire",
55 | "api.section.search": "🔍 Opérations de Recherche",
56 | "api.subtitle": "Points de terminaison REST API complets pour MCP Memory Service",
57 | "api.title": "🔗 Documentation API",
58 | "browse.memoriesTagged": "Mémoires taguées avec :",
59 | "browse.showAllTags": "Afficher Tous les Tags",
60 | "browse.subtitle": "Explorez vos mémoires organisées par tags",
61 | "browse.title": "Parcourir par Tags",
62 | "documents.chunkHelp.defaultBest": "<strong>Meilleur pour :</strong> Documentation technique, manuels de référence, bases de connaissances",
63 | "documents.chunkHelp.defaultTitle": "✅ Par défaut (1000 caractères, 200 chevauchement)",
64 | "documents.chunkHelp.defaultWhy": "<strong>Pourquoi :</strong> La fragmentation consciente des paragraphes préserve les pensées complètes et le contexte",
65 | "documents.chunkHelp.largeBest": "<strong>Meilleur pour :</strong> Documents narratifs, articles, blogs, contenu long",
66 | "documents.chunkHelp.largeTitle": "📖 Fragments Plus Grands (2000 caractères, 400 chevauchement)",
67 | "documents.chunkHelp.largeTradeoff": "<strong>Compromis :</strong> Meilleur contexte mais récupération moins précise",
68 | "documents.chunkHelp.note": "<strong>Note :</strong> Les tailles réelles de fragments peuvent varier car le système respecte les limites de paragraphes pour maintenir la cohérence sémantique.",
69 | "documents.chunkHelp.recommended": "Recommandé",
70 | "documents.chunkHelp.smallBest": "<strong>Meilleur pour :</strong> Documentation technique dense, documentation de code, références API",
71 | "documents.chunkHelp.smallTitle": "🔍 Fragments Plus Petits (500 caractères, 100 chevauchement)",
72 | "documents.chunkHelp.smallTradeoff": "<strong>Compromis :</strong> Récupération plus granulaire mais peut diviser les paragraphes plus agressivement",
73 | "documents.chunkHelp.tipHigherOverlap": "<strong>Chevauchement plus élevé</strong> = meilleure continuité mais plus de redondance",
74 | "documents.chunkHelp.tipOverlap": "<strong>Le chevauchement</strong> aide à maintenir le contexte aux limites des fragments",
75 | "documents.chunkHelp.tipPreserve": "Les fragments préservent les phrases complètes et les paragraphes quand c'est possible",
76 | "documents.chunkHelp.tipTest": "Testez différents paramètres sur un document exemple pour trouver la configuration optimale",
77 | "documents.chunkHelp.tipsTitle": "💡 Conseils :",
78 | "documents.chunkHelp.title": "📚 Guide de Configuration de Fragmentation",
79 | "documents.chunkOverlapLabel": "Chevauchement : <span id=\"chunkOverlapValue\">200</span> caractères",
80 | "documents.chunkOverlapTip": "Cliquez pour l'explication du chevauchement",
81 | "documents.chunkSizeLabel": "Taille de Fragment : <span id=\"chunkSizeValue\">1000</span> caractères",
82 | "documents.chunkSizeTip": "Cliquez pour les recommandations de fragmentation",
83 | "documents.dropSubtitle": "ou <button id=\"fileSelectBtn\" class=\"link-button\">parcourez pour sélectionner des fichiers</button>",
84 | "documents.dropTitle": "Glissez-déposez des fichiers ici",
85 | "documents.historyLoading": "Chargement de l'historique de téléchargement...",
86 | "documents.historyTitle": "📊 Historique de Téléchargement",
87 | "documents.memoryType.document": "Document",
88 | "documents.memoryType.knowledge": "Connaissance",
89 | "documents.memoryType.note": "Note",
90 | "documents.memoryType.reference": "Référence",
91 | "documents.memoryTypeLabel": "Type de Mémoire",
92 | "documents.modeBatch": "Traitement par Lot",
93 | "documents.modeDescription": "Tous les fichiers sélectionnés seront traités ensemble avec les mêmes tags.",
94 | "documents.modeIndividual": "Traitement Individuel",
95 | "documents.overlapHelp.highBest": "<strong>Meilleur pour :</strong> Contenu technique complexe nécessitant un contexte maximal",
96 | "documents.overlapHelp.highTitle": "🔄 Chevauchement Élevé (400+ caractères)",
97 | "documents.overlapHelp.highTradeoff": "<strong>Compromis :</strong> Plus de stockage et de traitement, redondance plus élevée",
98 | "documents.overlapHelp.mediumBest": "<strong>Meilleur pour :</strong> La plupart des documents - équilibre contexte et efficacité",
99 | "documents.overlapHelp.mediumTitle": "✅ Chevauchement Moyen (200 caractères)",
100 | "documents.overlapHelp.mediumWhy": "<strong>Pourquoi :</strong> Préserve 1-2 phrases de contexte aux limites",
101 | "documents.overlapHelp.noneBest": "<strong>Meilleur pour :</strong> Efficacité de stockage maximale, pas de redondance nécessaire",
102 | "documents.overlapHelp.noneTitle": "🎯 Pas de Chevauchement (0 caractères)",
103 | "documents.overlapHelp.noneTradeoff": "<strong>Compromis :</strong> Le contexte peut être perdu aux limites des fragments",
104 | "documents.overlapHelp.note": "<strong>Qu'est-ce que le chevauchement ?</strong> Le chevauchement est le nombre de caractères dupliqués entre des fragments consécutifs. Cela aide à maintenir le contexte aux limites des fragments.",
105 | "documents.overlapHelp.tipAccuracy": "Un chevauchement plus élevé aide à la précision de la recherche mais augmente le stockage",
106 | "documents.overlapHelp.tipLarge": "<strong>Grands fragments (2000)</strong> → Utilisez 400-500 de chevauchement",
107 | "documents.overlapHelp.tipMedium": "<strong>Fragments moyens (1000)</strong> → Utilisez 200-250 de chevauchement",
108 | "documents.overlapHelp.tipRule": "<strong>Règle générale :</strong> Le chevauchement devrait être de 15-25% de la taille du fragment",
109 | "documents.overlapHelp.tipSmall": "<strong>Petits fragments (500)</strong> → Utilisez 100-150 de chevauchement",
110 | "documents.overlapHelp.tipZero": "Un chevauchement nul convient pour les documents bien structurés avec des sections claires",
111 | "documents.overlapHelp.tipsTitle": "💡 Lignes directrices :",
112 | "documents.overlapHelp.title": "🔗 Explication du Chevauchement de Fragments",
113 | "documents.overlapHelp.visual": "Exemple Visuel :",
114 | "documents.processingHelp.batchBest": "<strong>Meilleur pour :</strong> Fichiers similaires, traitement rapide en masse, quand vous voulez grouper les fichiers",
115 | "documents.processingHelp.batchCons": "<strong>Inconvénients :</strong> L'échec d'un fichier peut affecter tout le lot",
116 | "documents.processingHelp.batchPros": "<strong>Avantages :</strong> Plus rapide, plus simple, indicateur de progression unique",
117 | "documents.processingHelp.batchTitle": "📦 Traitement par Lot",
118 | "documents.processingHelp.batchWhat": "<strong>Ce qu'il fait :</strong> Télécharge tous les fichiers ensemble en une seule opération",
119 | "documents.processingHelp.default": "Par défaut",
120 | "documents.processingHelp.individualBest": "<strong>Meilleur pour :</strong> Types de fichiers mixtes, quand vous voulez l'isolation des erreurs, ou besoin de suivi de progression individuel",
121 | "documents.processingHelp.individualCons": "<strong>Inconvénients :</strong> Légèrement plus lent en raison du traitement séquentiel",
122 | "documents.processingHelp.individualPros": "<strong>Avantages :</strong> Meilleure gestion des erreurs, progression individuelle, plus robuste",
123 | "documents.processingHelp.individualTitle": "🔄 Traitement Individuel",
124 | "documents.processingHelp.individualWhat": "<strong>Ce qu'il fait :</strong> Télécharge chaque fichier séparément avec des appels API individuels",
125 | "documents.processingHelp.note": "<strong>Lors du téléchargement de plusieurs fichiers,</strong> choisissez comment ils doivent être traités. Les deux modes appliquent les mêmes tags à tous les fichiers.",
126 | "documents.processingHelp.tipBatch": "<strong>Mode par lot :</strong> Quand les fichiers sont similaires et que vous voulez les traiter rapidement ensemble",
127 | "documents.processingHelp.tipIndividual": "<strong>Mode individuel :</strong> Quand les fichiers peuvent avoir des exigences de traitement différentes ou que vous voulez assurer que tous les fichiers sont traités même si certains échouent",
128 | "documents.processingHelp.tipSingle": "<strong>Fichiers uniques :</strong> Toujours traités individuellement (pas de choix nécessaire)",
129 | "documents.processingHelp.tipTags": "Les deux modes appliquent les mêmes tags à tous les fichiers",
130 | "documents.processingHelp.tipsTitle": "💡 Quand choisir chaque mode :",
131 | "documents.processingMode": "Mode de Traitement",
132 | "documents.processingTip": "Cliquez pour l'explication du mode de traitement",
133 | "documents.search.count": "{count} résultat{plural}{extra}",
134 | "documents.search.count.one": "{count} résultat{extra}",
135 | "documents.search.count.other": "{count} résultats{extra}",
136 | "documents.search.noMatch": "Aucun contenu de document correspondant trouvé. Essayez d'autres termes de recherche.",
137 | "documents.searchBtn": "Rechercher Documents",
138 | "documents.searchDesc": "Recherchez dans vos documents téléchargés pour vérifier que le contenu est indexé",
139 | "documents.searchPlaceholder": "Rechercher du contenu dans les documents ingérés...",
140 | "documents.searchTitle": "🔍 Rechercher dans le Contenu Ingéré",
141 | "documents.supported": "Formats supportés : PDF, TXT, MD, JSON",
142 | "documents.tagsHelp": "Les tags seront appliqués à tous les fichiers. Utilisez les espaces ou virgules comme séparateurs.",
143 | "documents.tagsLabel": "Tags (séparés par des virgules)",
144 | "documents.tagsPlaceholder": "ex : documentation, référence, manuel",
145 | "documents.title": "📄 Ingestion de Documents",
146 | "documents.upload": "Télécharger & Ingérer",
147 | "footer.about.copyright": "© 2024 Heinrich Krupp",
148 | "footer.about.desc": "MCP Memory Service - Gestion de mémoire sémantique pour assistants IA",
149 | "footer.about.license": "Sous licence Apache 2.0",
150 | "footer.about.title": "À propos",
151 | "footer.docs.config": "⚙️ Problèmes de Configuration",
152 | "footer.docs.title": "Documentation",
153 | "footer.docs.troubleshoot": "🔧 Guide de Dépannage",
154 | "footer.docs.wiki": "📚 Accueil Wiki",
155 | "footer.resources.apiDoc": "📖 Documentation API",
156 | "footer.resources.portfolio": "🌐 Portfolio",
157 | "footer.resources.repo": "Dépôt GitHub",
158 | "footer.resources.title": "Ressources",
159 | "header.title": "🧠 MCP Memory",
160 | "header.versionLoading": "Chargement...",
161 | "home.subtitle": "Gérez vos mémoires IA avec recherche sémantique, mises à jour en temps réel et organisation intelligente.",
162 | "home.welcome": "Bienvenue sur votre Tableau de bord Mémoire",
163 | "lang.chinese": "中文",
164 | "lang.english": "English",
165 | "manage.bulk.cleanup.desc": "Supprimer les mémoires en double basées sur le contenu",
166 | "manage.bulk.cleanup.run": "Exécuter le Nettoyage",
167 | "manage.bulk.cleanup.title": "Nettoyer les Doublons",
168 | "manage.bulk.delete": "Supprimer",
169 | "manage.bulk.deleteDate.desc": "Supprimer les mémoires antérieures à une date spécifique",
170 | "manage.bulk.deleteDate.title": "Supprimer par Date",
171 | "manage.bulk.deleteTag.desc": "Supprimer toutes les mémoires avec un tag spécifique",
172 | "manage.bulk.deleteTag.placeholder": "Sélectionner un tag...",
173 | "manage.bulk.deleteTag.title": "Supprimer par Tag",
174 | "manage.bulk.title": "🧹 Opérations en Masse",
175 | "manage.system.dbOpt.btn": "Optimiser DB",
176 | "manage.system.dbOpt.desc": "Optimiser les performances de la base de données",
177 | "manage.system.dbOpt.title": "Optimisation de la Base de Données",
178 | "manage.system.rebuild.btn": "Reconstruire l'Index",
179 | "manage.system.rebuild.desc": "Reconstruire les index de recherche pour de meilleures performances",
180 | "manage.system.rebuild.title": "Reconstruire l'Index de Recherche",
181 | "manage.system.title": "⚙️ Maintenance Système",
182 | "manage.tags.column.actions": "Actions",
183 | "manage.tags.column.count": "Nombre",
184 | "manage.tags.column.tag": "Tag",
185 | "manage.tags.delete": "Supprimer",
186 | "manage.tags.loading": "Chargement des statistiques de tags...",
187 | "manage.tags.rename": "Renommer",
188 | "manage.tags.title": "🏷️ Gestion des Tags",
189 | "meta.language.code": "fr",
190 | "meta.language.englishName": "French",
191 | "meta.language.flag": "🇫🇷",
192 | "meta.language.nativeName": "Français",
193 | "meta.title": "MCP Memory Service - Tableau de bord",
194 | "modal.addMemory.cancel": "Annuler",
195 | "modal.addMemory.contentLabel": "Contenu",
196 | "modal.addMemory.contentPlaceholder": "Entrez le contenu de votre mémoire...",
197 | "modal.addMemory.save": "Enregistrer la Mémoire",
198 | "modal.addMemory.tagsLabel": "Tags (séparés par des virgules)",
199 | "modal.addMemory.tagsPlaceholder": "ex : codage, javascript, api",
200 | "modal.addMemory.title": "Ajouter une Nouvelle Mémoire",
201 | "modal.addMemory.typeCode": "Code",
202 | "modal.addMemory.typeIdea": "Idée",
203 | "modal.addMemory.typeLabel": "Type",
204 | "modal.addMemory.typeNote": "Note",
205 | "modal.addMemory.typeReference": "Référence",
206 | "modal.memoryDetails.delete": "Supprimer",
207 | "modal.memoryDetails.edit": "Modifier",
208 | "modal.memoryDetails.share": "Partager",
209 | "modal.memoryDetails.title": "Détails de la Mémoire",
210 | "modal.memoryViewer.chunksFound": "{count} fragments trouvés",
211 | "modal.memoryViewer.close": "Fermer",
212 | "modal.memoryViewer.title": "📝 Fragments de Mémoire de Document",
213 | "modal.settings.backupCount": "Nombre de Sauvegardes :",
214 | "modal.settings.backupNow": "Créer une Sauvegarde Maintenant",
215 | "modal.settings.backupRestore": "Sauvegarde & Restauration",
216 | "modal.settings.cancel": "Annuler",
217 | "modal.settings.databaseSize": "Taille de la Base de Données :",
218 | "modal.settings.embeddingDimensions": "Dimensions d'Embedding :",
219 | "modal.settings.embeddingModel": "Modèle d'Embedding :",
220 | "modal.settings.lastBackup": "Dernière Sauvegarde :",
221 | "modal.settings.never": "Jamais",
222 | "modal.settings.nextScheduled": "Prochaine Programmée :",
223 | "modal.settings.preferences": "Préférences",
224 | "modal.settings.previewLines": "Lignes de Prévisualisation de Mémoire",
225 | "modal.settings.primaryBackend": "Backend Principal :",
226 | "modal.settings.qualitySystem": "Système de Qualité",
227 | "modal.settings.save": "Enregistrer les Paramètres",
228 | "modal.settings.storageBackend": "Backend de Stockage :",
229 | "modal.settings.systemInfo": "Informations Système",
230 | "modal.settings.theme": "Thème",
231 | "modal.settings.themeDark": "Sombre",
232 | "modal.settings.themeLight": "Clair",
233 | "modal.settings.title": "Paramètres",
234 | "modal.settings.totalMemories": "Total des Mémoires :",
235 | "modal.settings.uptime": "Temps de Fonctionnement :",
236 | "modal.settings.version": "Version :",
237 | "modal.settings.viewBackups": "Voir les Sauvegardes",
238 | "modal.settings.viewDensity": "Densité d'Affichage",
239 | "modal.settings.viewDensityComfortable": "Confortable",
240 | "modal.settings.viewDensityCompact": "Compact",
241 | "nav.analytics": "Analyses",
242 | "nav.apiDocs": "Docs API",
243 | "nav.browse": "Parcourir",
244 | "nav.dashboard": "Tableau de bord",
245 | "nav.documents": "Documents",
246 | "nav.manage": "Gérer",
247 | "nav.qualityAnalytics": "Qualité",
248 | "nav.search": "Recherche",
249 | "quality.analytics.subtitle": "Suivez et améliorez la qualité de votre mémoire avec une notation pilotée par IA",
250 | "quality.analytics.title": "⭐ Analytique de Qualité de Mémoire",
251 | "quality.bottom.title": "📈 Mémoires à Améliorer",
252 | "quality.chart.distribution.countLabel": "Nombre",
253 | "quality.chart.distribution.scoreLabel": "Score de Qualité",
254 | "quality.chart.distribution.title": "Distribution des Scores de Qualité",
255 | "quality.chart.providers.title": "Répartition par Fournisseur de Notation",
256 | "quality.stats.average": "Score Moyen",
257 | "quality.stats.high": "Haute Qualité (≥0.7)",
258 | "quality.stats.low": "Basse (<0.5)",
259 | "quality.stats.medium": "Moyenne (0.5-0.7)",
260 | "quality.stats.total": "Total des Mémoires",
261 | "quality.top.title": "🏆 Mémoires de Meilleure Qualité",
262 | "search.filters.activeTitle": "Filtres Actifs :",
263 | "search.filters.date": "Plage de Dates",
264 | "search.filters.date.all": "Toutes périodes",
265 | "search.filters.date.month": "Ce mois-ci",
266 | "search.filters.date.quarter": "Ce trimestre",
267 | "search.filters.date.today": "Aujourd'hui",
268 | "search.filters.date.week": "Cette semaine",
269 | "search.filters.date.year": "Cette année",
270 | "search.filters.date.yesterday": "Hier",
271 | "search.filters.dateHelp": "Sélectionnez une période pour filtrer les mémoires",
272 | "search.filters.dateTip": "Filtrer les mémoires par date de création",
273 | "search.filters.modeSuffix": "mode",
274 | "search.filters.modeTooltip": "Basculer entre la recherche live (mises à jour pendant la saisie) et le mode manuel",
275 | "search.filters.tags": "Tags",
276 | "search.filters.tagsHelp": "Séparez plusieurs tags par des virgules",
277 | "search.filters.tagsPlaceholder": "ex : travail, codage, important",
278 | "search.filters.tagsTip": "Entrez les tags séparés par des virgules (ex : travail, codage, important)",
279 | "search.filters.title": "Filtres de Recherche",
280 | "search.filters.tooltip": "Les filtres fonctionnent ensemble : combinez tags, dates et types pour affiner votre recherche",
281 | "search.filters.type": "Type de Contenu",
282 | "search.filters.type.all": "Tous types",
283 | "search.filters.type.code": "Code",
284 | "search.filters.type.idea": "Idées",
285 | "search.filters.type.note": "Notes",
286 | "search.filters.type.reference": "Références",
287 | "search.filters.typeHelp": "Choisissez le type de mémoires à afficher",
288 | "search.filters.typeTip": "Filtrer par type de contenu stocké",
289 | "search.modeLive": "Recherche Live",
290 | "search.modeManual": "Recherche Manuelle",
291 | "search.placeholder": "🔍 Rechercher vos souvenirs...",
292 | "search.results.count": "{count} résultats",
293 | "search.results.title": "Résultats de Recherche",
294 | "search.view.grid": "Grille",
295 | "search.view.list": "Liste",
296 | "sections.quickActions": "Actions Rapides",
297 | "sections.recentMemories": "Mémoires Récentes",
298 | "settings.quality.boost.help": "Reclasse les résultats de recherche pour prioriser les mémoires de haute qualité",
299 | "settings.quality.boost.label": "Activer la Recherche Améliorée par Qualité",
300 | "settings.quality.current.label": "Fournisseur Actuel :",
301 | "settings.quality.provider.auto": "Automatique (Tous Disponibles)",
302 | "settings.quality.provider.gemini": "API Gemini",
303 | "settings.quality.provider.groq": "API Groq",
304 | "settings.quality.provider.help": "Le SLM local offre une notation de qualité sans coût et préservant la confidentialité",
305 | "settings.quality.provider.label": "Fournisseur IA",
306 | "settings.quality.provider.local": "SLM Local (Mode Confidentialité)",
307 | "settings.quality.provider.none": "Implicite Uniquement (Sans IA)",
308 | "stats.tags": "Tags",
309 | "stats.thisWeek": "Cette Semaine",
310 | "stats.total": "Total des Mémoires",
311 | "status.connected": "Connecté",
312 | "status.disconnected": "Déconnecté",
313 | "status.loading": "Chargement...",
314 | "toast.backupCreated": "Sauvegarde créée : {name} ({size} MB)",
315 | "toast.backupCreating": "Création de la sauvegarde...",
316 | "toast.backupFailed": "Échec de la création de la sauvegarde",
317 | "toast.backupFailedWithReason": "Échec de la sauvegarde : {reason}",
318 | "toast.bulkDeleteByDateFailedWithReason": "{message}",
319 | "toast.bulkDeleteByDateSuccess": "{message}",
320 | "toast.bulkDeleteFailed": "Échec de l'opération de suppression en masse",
321 | "toast.bulkDeleteFailedWithReason": "{message}",
322 | "toast.bulkDeleteSuccess": "{message}",
323 | "toast.cleanupFailed": "Échec de l'opération de nettoyage",
324 | "toast.cleanupFailedWithReason": "{message}",
325 | "toast.cleanupSuccess": "{message}",
326 | "toast.copyFailed": "Échec de la copie dans le presse-papiers",
327 | "toast.copySuccess": "Mémoire copiée dans le presse-papiers",
328 | "toast.dbOptimizeTodo": "Optimisation de la base de données pas encore implémentée",
329 | "toast.documentChunksPartial": "{count} fragments trouvés (résultats partiels)",
330 | "toast.documentRemoved": "\"{name}\" supprimé ({count} mémoires supprimées)",
331 | "toast.duplicateMemoryWarning": "Mémoire mise à jour, mais la version originale existe toujours. Vous devrez peut-être supprimer manuellement le doublon.",
332 | "toast.enterMemoryContent": "Veuillez saisir le contenu de la mémoire",
333 | "toast.enterSearch": "Veuillez saisir une requête de recherche",
334 | "toast.errorLoadingDocumentMemories": "Erreur lors du chargement des mémoires de documents",
335 | "toast.errorPerformingSearch": "Erreur lors de l'exécution de la recherche",
336 | "toast.errorRemovingDocument": "Erreur lors de la suppression du document",
337 | "toast.exportFailed": "Échec de l'export des données",
338 | "toast.exportFetching": "Récupération des mémoires... ({current}/{total})",
339 | "toast.exportPreparing": "Préparation de l'export...",
340 | "toast.exportSuccess": "{count} mémoires exportées avec succès",
341 | "toast.filterSearchFailed": "Échec de la recherche avec filtres",
342 | "toast.filtersCleared": "Tous les filtres effacés",
343 | "toast.genericSuccess": "{message}",
344 | "toast.indexRebuildTodo": "Reconstruction de l'index pas encore implémentée",
345 | "toast.languageSwitched": "Langue changée",
346 | "toast.loadAnalyticsFail": "Échec du chargement des données d'analyse",
347 | "toast.loadBrowseFail": "Échec du chargement des données de navigation",
348 | "toast.loadDashboardFail": "Échec du chargement des données du tableau de bord",
349 | "toast.loadDocumentMemoriesFail": "Échec du chargement des mémoires de documents",
350 | "toast.loadDocumentsFail": "Échec du chargement des données des documents",
351 | "toast.loadManageFail": "Échec du chargement des données de gestion",
352 | "toast.loadTagFail": "Échec du chargement des mémoires pour le tag",
353 | "toast.memoryAdded": "Mémoire ajoutée avec succès",
354 | "toast.memoryDeleteFailed": "Échec de la suppression de la mémoire",
355 | "toast.memoryDeleted": "Mémoire supprimée",
356 | "toast.memoryDeletedSuccess": "Mémoire supprimée avec succès",
357 | "toast.memoryUpdated": "Mémoire mise à jour",
358 | "toast.noFiles": "Aucun fichier sélectionné",
359 | "toast.removeDocumentFailed": "Échec de la suppression du document",
360 | "toast.saveMemoryFailed": "Échec de l'enregistrement de la mémoire : {reason}",
361 | "toast.saveSettingsFailed": "Échec de l'enregistrement des paramètres. Vos préférences ne seront pas conservées.",
362 | "toast.searchFailed": "Échec de la recherche",
363 | "toast.searchModeLive": "Mode de recherche : Live (recherche pendant la saisie)",
364 | "toast.searchModeManual": "Mode de recherche : Manuel (cliquer sur Rechercher)",
365 | "toast.selectDate": "Veuillez sélectionner une date",
366 | "toast.selectTagToDelete": "Veuillez sélectionner un tag à supprimer",
367 | "toast.settingsSaved": "Paramètres enregistrés avec succès",
368 | "toast.syncCompleted": "{count} opérations synchronisées en {seconds}s",
369 | "toast.syncFailedWithReason": "Échec de la synchronisation : {reason}",
370 | "toast.syncForceFailed": "Échec de la synchronisation forcée : {reason}",
371 | "toast.syncPauseFail": "Échec de la mise en pause de la synchronisation",
372 | "toast.syncPauseFailWithReason": "Échec de la mise en pause de la synchronisation : {reason}",
373 | "toast.syncPaused": "Synchronisation mise en pause",
374 | "toast.syncResumeFail": "Échec de la reprise de la synchronisation",
375 | "toast.syncResumeFailWithReason": "Échec de la reprise de la synchronisation : {reason}",
376 | "toast.syncResumed": "Synchronisation reprise",
377 | "toast.syncStarting": "Démarrage de la synchronisation...",
378 | "toast.syncing": "Synchronisation : {synced}/{total}",
379 | "toast.tagDeleteTodo": "Suppression de tag pas encore implémentée",
380 | "toast.tagRenameTodo": "Renommage de tag pas encore implémenté",
381 | "toast.themeSwitched": "Basculé en mode {theme}",
382 | "toast.uploadFailed": "Téléchargement échoué : {message}",
383 | "toast.uploadFileFail": "Échec du téléchargement de {name} : {message}",
384 | "toast.uploadStartedBatch": "Téléchargement par lot démarré pour {count} fichiers",
385 | "toast.uploadStartedSingle": "Téléchargement démarré pour {name}"
386 | }
```
--------------------------------------------------------------------------------
/src/mcp_memory_service/mcp_server.py:
--------------------------------------------------------------------------------
```python
1 | #!/usr/bin/env python3
2 | """
3 | FastAPI MCP Server for Memory Service
4 |
5 | This module implements a native MCP server using the FastAPI MCP framework,
6 | replacing the Node.js HTTP-to-MCP bridge to resolve SSL connectivity issues
7 | and provide direct MCP protocol support.
8 |
9 | Features:
10 | - Native MCP protocol implementation using FastMCP
11 | - Direct integration with existing memory storage backends
12 | - Streamable HTTP transport for remote access
13 | - All 22 core memory operations (excluding dashboard tools)
14 | - SSL/HTTPS support with proper certificate handling
15 | """
16 |
17 | import asyncio
18 | import logging
19 | import os
20 | import socket
21 | import sys
22 | import time
23 | from collections.abc import AsyncIterator
24 | from contextlib import asynccontextmanager
25 | from dataclasses import dataclass
26 | from pathlib import Path
27 | from typing import Dict, List, Optional, Any, Union, TypedDict
28 | try:
29 | from typing import NotRequired # Python 3.11+
30 | except ImportError:
31 | from typing_extensions import NotRequired # Python 3.10
32 |
33 | # Add src to path for imports
34 | current_dir = Path(__file__).parent
35 | src_dir = current_dir.parent.parent
36 | sys.path.insert(0, str(src_dir))
37 |
38 | # FastMCP is not available in current MCP library version
39 | # This module is kept for future compatibility
40 | try:
41 | from mcp.server.fastmcp import FastMCP, Context
42 | except ImportError:
43 | logger_temp = logging.getLogger(__name__)
44 | logger_temp.warning("FastMCP not available in mcp library - mcp_server module cannot be used")
45 |
46 | # Create dummy objects for graceful degradation
47 | class _DummyFastMCP:
48 | def tool(self):
49 | """Dummy decorator that does nothing."""
50 | def decorator(func):
51 | return func
52 | return decorator
53 |
54 | FastMCP = _DummyFastMCP # type: ignore
55 | Context = None # type: ignore
56 |
57 | from mcp.types import TextContent, ToolAnnotations
58 |
59 | # Import existing memory service components
60 | from .config import (
61 | STORAGE_BACKEND,
62 | CONSOLIDATION_ENABLED, EMBEDDING_MODEL_NAME, INCLUDE_HOSTNAME,
63 | SQLITE_VEC_PATH,
64 | CLOUDFLARE_API_TOKEN, CLOUDFLARE_ACCOUNT_ID, CLOUDFLARE_VECTORIZE_INDEX,
65 | CLOUDFLARE_D1_DATABASE_ID, CLOUDFLARE_R2_BUCKET, CLOUDFLARE_EMBEDDING_MODEL,
66 | CLOUDFLARE_LARGE_CONTENT_THRESHOLD, CLOUDFLARE_MAX_RETRIES, CLOUDFLARE_BASE_DELAY,
67 | HYBRID_SYNC_INTERVAL, HYBRID_BATCH_SIZE, HYBRID_MAX_QUEUE_SIZE,
68 | HYBRID_SYNC_ON_STARTUP, HYBRID_FALLBACK_TO_PRIMARY,
69 | CONTENT_PRESERVE_BOUNDARIES, CONTENT_SPLIT_OVERLAP, ENABLE_AUTO_SPLIT
70 | )
71 | from .storage.base import MemoryStorage
72 | from .services.memory_service import MemoryService
73 |
74 | # Configure logging
75 | logging.basicConfig(level=logging.INFO) # Default to INFO level
76 | logger = logging.getLogger(__name__)
77 |
78 | # =============================================================================
79 | # GLOBAL CACHING FOR MCP SERVER PERFORMANCE OPTIMIZATION
80 | # =============================================================================
81 | # Module-level caches to persist storage/service instances across stateless HTTP calls.
82 | # This reduces initialization overhead from ~1,810ms to <400ms on cache hits.
83 | #
84 | # Cache Keys:
85 | # - Storage: "{backend_type}:{db_path}" (e.g., "sqlite_vec:/path/to/db")
86 | # - MemoryService: storage instance ID (id(storage))
87 | #
88 | # Thread Safety:
89 | # - Uses asyncio.Lock to prevent race conditions during concurrent access
90 | #
91 | # Lifecycle:
92 | # - Cached instances persist for the lifetime of the Python process
93 | # - NOT cleared between stateless HTTP calls (intentional for performance)
94 | # - Cleaned up on process shutdown via lifespan context manager
95 |
96 | _STORAGE_CACHE: Dict[str, MemoryStorage] = {}
97 | _MEMORY_SERVICE_CACHE: Dict[int, MemoryService] = {}
98 | _CACHE_LOCK: Optional[asyncio.Lock] = None # Initialized on first use
99 | _CACHE_STATS = {
100 | "storage_hits": 0,
101 | "storage_misses": 0,
102 | "service_hits": 0,
103 | "service_misses": 0,
104 | "total_calls": 0,
105 | "initialization_times": [] # Track initialization durations for cache misses
106 | }
107 |
108 | def _get_cache_lock() -> asyncio.Lock:
109 | """Get or create the global cache lock (lazy initialization to avoid event loop issues)."""
110 | global _CACHE_LOCK
111 | if _CACHE_LOCK is None:
112 | _CACHE_LOCK = asyncio.Lock()
113 | return _CACHE_LOCK
114 |
115 | def _get_or_create_memory_service(storage: MemoryStorage) -> MemoryService:
116 | """
117 | Get cached MemoryService or create new one.
118 |
119 | Args:
120 | storage: Storage instance to use as cache key
121 |
122 | Returns:
123 | MemoryService instance (cached or newly created)
124 | """
125 | storage_id = id(storage)
126 | if storage_id in _MEMORY_SERVICE_CACHE:
127 | memory_service = _MEMORY_SERVICE_CACHE[storage_id]
128 | _CACHE_STATS["service_hits"] += 1
129 | logger.info(f"✅ MemoryService Cache HIT - Reusing service instance (storage_id: {storage_id})")
130 | else:
131 | _CACHE_STATS["service_misses"] += 1
132 | logger.info(f"❌ MemoryService Cache MISS - Creating new service instance...")
133 |
134 | # Initialize memory service with shared business logic
135 | memory_service = MemoryService(storage)
136 |
137 | # Cache the memory service instance
138 | _MEMORY_SERVICE_CACHE[storage_id] = memory_service
139 | logger.info(f"💾 Cached MemoryService instance (storage_id: {storage_id})")
140 |
141 | return memory_service
142 |
143 | def _log_cache_performance(start_time: float) -> None:
144 | """
145 | Log comprehensive cache performance statistics.
146 |
147 | Args:
148 | start_time: Timer start time to calculate total elapsed time
149 | """
150 | total_time = (time.time() - start_time) * 1000
151 | cache_hit_rate = (
152 | (_CACHE_STATS["storage_hits"] + _CACHE_STATS["service_hits"]) /
153 | (_CACHE_STATS["total_calls"] * 2) # 2 caches per call
154 | ) * 100
155 |
156 | logger.info(
157 | f"📊 Cache Stats - "
158 | f"Hit Rate: {cache_hit_rate:.1f}% | "
159 | f"Storage: {_CACHE_STATS['storage_hits']}H/{_CACHE_STATS['storage_misses']}M | "
160 | f"Service: {_CACHE_STATS['service_hits']}H/{_CACHE_STATS['service_misses']}M | "
161 | f"Total Time: {total_time:.1f}ms | "
162 | f"Cache Size: {len(_STORAGE_CACHE)} storage + {len(_MEMORY_SERVICE_CACHE)} services"
163 | )
164 |
165 | @dataclass
166 | class MCPServerContext:
167 | """Application context for the MCP server with all required components."""
168 | storage: MemoryStorage
169 | memory_service: MemoryService
170 |
171 | @asynccontextmanager
172 | async def mcp_server_lifespan(server: FastMCP) -> AsyncIterator[MCPServerContext]:
173 | """
174 | Manage MCP server lifecycle with global caching for performance optimization.
175 |
176 | Performance Impact:
177 | - Cache HIT: ~200-400ms (reuses existing instances)
178 | - Cache MISS: ~1,810ms (initializes new instances)
179 |
180 | Caching Strategy:
181 | - Storage instances cached by "{backend}:{path}" key
182 | - MemoryService instances cached by storage ID
183 | - Thread-safe with asyncio.Lock
184 | - Persists across stateless HTTP calls (by design)
185 | """
186 | global _STORAGE_CACHE, _MEMORY_SERVICE_CACHE, _CACHE_STATS
187 |
188 | # Track call statistics
189 | _CACHE_STATS["total_calls"] += 1
190 | start_time = time.time()
191 |
192 | logger.info(f"🔄 MCP Server Call #{_CACHE_STATS['total_calls']} - Checking global cache...")
193 |
194 | # Acquire lock for thread-safe cache access
195 | cache_lock = _get_cache_lock()
196 | async with cache_lock:
197 | # Generate cache key for storage backend
198 | cache_key = f"{STORAGE_BACKEND}:{SQLITE_VEC_PATH}"
199 |
200 | # Check storage cache
201 | if cache_key in _STORAGE_CACHE:
202 | storage = _STORAGE_CACHE[cache_key]
203 | _CACHE_STATS["storage_hits"] += 1
204 | logger.info(f"✅ Storage Cache HIT - Reusing {STORAGE_BACKEND} instance (key: {cache_key})")
205 | else:
206 | _CACHE_STATS["storage_misses"] += 1
207 | logger.info(f"❌ Storage Cache MISS - Initializing {STORAGE_BACKEND} instance...")
208 |
209 | # Initialize storage backend using shared factory
210 | from .storage.factory import create_storage_instance
211 | storage = await create_storage_instance(SQLITE_VEC_PATH, server_type="mcp")
212 |
213 | # Cache the storage instance
214 | _STORAGE_CACHE[cache_key] = storage
215 | init_time = (time.time() - start_time) * 1000 # Convert to ms
216 | _CACHE_STATS["initialization_times"].append(init_time)
217 | logger.info(f"💾 Cached storage instance (key: {cache_key}, init_time: {init_time:.1f}ms)")
218 |
219 | # Check memory service cache and log performance
220 | memory_service = _get_or_create_memory_service(storage)
221 | _log_cache_performance(start_time)
222 |
223 | try:
224 | yield MCPServerContext(
225 | storage=storage,
226 | memory_service=memory_service
227 | )
228 | finally:
229 | # IMPORTANT: Do NOT close cached storage instances here!
230 | # They are intentionally kept alive across stateless HTTP calls for performance.
231 | # Cleanup only happens on process shutdown (handled by FastMCP framework).
232 | logger.info(f"✅ MCP Server Call #{_CACHE_STATS['total_calls']} complete - Cached instances preserved")
233 |
234 | # Create FastMCP server instance
235 | try:
236 | mcp = FastMCP(
237 | name="MCP Memory Service",
238 | host="0.0.0.0", # Listen on all interfaces for remote access
239 | port=8000, # Default port
240 | lifespan=mcp_server_lifespan,
241 | stateless_http=True # Enable stateless HTTP for Claude Code compatibility
242 | )
243 | except TypeError:
244 | # FastMCP not available - create dummy instance
245 | mcp = _DummyFastMCP() # type: ignore
246 |
247 | # =============================================================================
248 | # TYPE DEFINITIONS
249 | # =============================================================================
250 |
251 | class StoreMemorySuccess(TypedDict):
252 | """Return type for successful single memory storage."""
253 | success: bool
254 | message: str
255 | content_hash: str
256 |
257 | class StoreMemorySplitSuccess(TypedDict):
258 | """Return type for successful chunked memory storage."""
259 | success: bool
260 | message: str
261 | chunks_created: int
262 | chunk_hashes: List[str]
263 |
264 | class StoreMemoryFailure(TypedDict):
265 | """Return type for failed memory storage."""
266 | success: bool
267 | message: str
268 | chunks_created: NotRequired[int]
269 | chunk_hashes: NotRequired[List[str]]
270 |
271 | # =============================================================================
272 | # CORE MEMORY OPERATIONS
273 | # =============================================================================
274 |
275 | @mcp.tool(
276 | annotations=ToolAnnotations(
277 | title="Store Memory",
278 | destructiveHint=False,
279 | ),
280 | )
281 | async def store_memory(
282 | content: str,
283 | ctx: Context,
284 | tags: Union[str, List[str], None] = None,
285 | memory_type: str = "note",
286 | metadata: Optional[Dict[str, Any]] = None,
287 | client_hostname: Optional[str] = None
288 | ) -> Union[StoreMemorySuccess, StoreMemorySplitSuccess, StoreMemoryFailure]:
289 | """Store new information in persistent memory with semantic search capabilities and optional categorization.
290 |
291 | USE THIS WHEN:
292 | - User provides information to remember for future sessions (decisions, preferences, facts, code snippets)
293 | - Capturing important context from current conversation ("remember this for later")
294 | - User explicitly says "remember", "save", "store", "keep this", "note that"
295 | - Documenting technical decisions, API patterns, project architecture, user preferences
296 | - Creating knowledge base entries, documentation snippets, troubleshooting notes
297 |
298 | THIS IS THE PRIMARY STORAGE TOOL - use it whenever information should persist beyond the current session.
299 |
300 | DO NOT USE FOR:
301 | - Temporary conversation context (use native conversation history instead)
302 | - Information already stored (check first with retrieve_memory to avoid duplicates)
303 | - Streaming or real-time data that changes frequently
304 |
305 | CONTENT LENGTH LIMITS:
306 | - Cloudflare/Hybrid backends: 800 characters max (auto-splits into chunks if exceeded)
307 | - SQLite-vec backend: No limit
308 | - Auto-chunking preserves context with 50-character overlap at natural boundaries
309 |
310 | TAG FORMATS (all supported):
311 | - Array: ["tag1", "tag2"]
312 | - String: "tag1,tag2"
313 | - Single: "single-tag"
314 | - Both tags parameter AND metadata.tags are merged automatically
315 |
316 | RETURNS:
317 | - success: Boolean indicating storage status
318 | - message: Status message
319 | - content_hash: Unique identifier for retrieval/deletion (single memory)
320 | - chunks_created: Number of chunks (if content was split)
321 | - chunk_hashes: Array of hashes (if content was split)
322 |
323 | Examples:
324 | {
325 | "content": "User prefers async/await over callbacks in Python projects",
326 | "metadata": {
327 | "tags": ["coding-style", "python", "preferences"],
328 | "type": "preference"
329 | }
330 | }
331 |
332 | {
333 | "content": "API endpoint /api/v1/users requires JWT token in Authorization header",
334 | "metadata": {
335 | "tags": "api-documentation,authentication",
336 | "type": "reference"
337 | }
338 | }
339 | """
340 | # Delegate to shared MemoryService business logic
341 | memory_service = ctx.request_context.lifespan_context.memory_service
342 | result = await memory_service.store_memory(
343 | content=content,
344 | tags=tags,
345 | memory_type=memory_type,
346 | metadata=metadata,
347 | client_hostname=client_hostname
348 | )
349 |
350 | # Transform MemoryService response to MCP tool format
351 | if not result.get("success"):
352 | return StoreMemoryFailure(
353 | success=False,
354 | message=result.get("error", "Failed to store memory")
355 | )
356 |
357 | # Handle chunked response (multiple memories)
358 | if "memories" in result:
359 | chunk_hashes = [mem["content_hash"] for mem in result["memories"]]
360 | return StoreMemorySplitSuccess(
361 | success=True,
362 | message=f"Successfully stored {len(result['memories'])} memory chunks",
363 | chunks_created=result["total_chunks"],
364 | chunk_hashes=chunk_hashes
365 | )
366 |
367 | # Handle single memory response
368 | memory_data = result["memory"]
369 | return StoreMemorySuccess(
370 | success=True,
371 | message="Memory stored successfully",
372 | content_hash=memory_data["content_hash"]
373 | )
374 |
375 | @mcp.tool(
376 | annotations=ToolAnnotations(
377 | title="Retrieve Memory",
378 | readOnlyHint=True,
379 | ),
380 | )
381 | async def retrieve_memory(
382 | query: str,
383 | ctx: Context,
384 | n_results: int = 5
385 | ) -> Dict[str, Any]:
386 | """Search stored memories using semantic similarity - finds conceptually related content even if exact words differ.
387 |
388 | USE THIS WHEN:
389 | - User asks "what do you remember about X", "do we have info on Y", "recall Z"
390 | - Looking for past decisions, preferences, or context from previous sessions
391 | - Need to retrieve related information without exact wording (semantic search)
392 | - General memory lookup where time frame is NOT specified
393 | - User references "last time we discussed", "you should know", "I told you before"
394 |
395 | THIS IS THE PRIMARY SEARCH TOOL - use it for most memory lookups.
396 |
397 | DO NOT USE FOR:
398 | - Time-based queries ("yesterday", "last week") - use recall_memory instead
399 | - Exact content matching - use exact_match_retrieve instead
400 | - Tag-based filtering - use search_by_tag instead
401 | - Browsing all memories - use list_memories instead
402 |
403 | HOW IT WORKS:
404 | - Converts query to vector embedding using the same model as stored memories
405 | - Finds top N most similar memories using cosine similarity
406 | - Returns ranked by relevance score (0.0-1.0, higher is more similar)
407 | - Works across sessions - retrieves memories from any time period
408 |
409 | RETURNS:
410 | - Array of matching memories with:
411 | - content: The stored text
412 | - content_hash: Unique identifier
413 | - similarity_score: Relevance score (0.0-1.0)
414 | - metadata: Tags, type, timestamp, etc.
415 | - created_at: When memory was stored
416 |
417 | Examples:
418 | {
419 | "query": "python async patterns we discussed",
420 | "n_results": 5
421 | }
422 |
423 | {
424 | "query": "database connection settings",
425 | "n_results": 10
426 | }
427 |
428 | {
429 | "query": "user authentication workflow preferences",
430 | "n_results": 3
431 | }
432 | """
433 | # Delegate to shared MemoryService business logic
434 | memory_service = ctx.request_context.lifespan_context.memory_service
435 | return await memory_service.retrieve_memories(
436 | query=query,
437 | n_results=n_results
438 | )
439 |
440 | @mcp.tool(
441 | annotations=ToolAnnotations(
442 | title="Search by Tag",
443 | readOnlyHint=True,
444 | ),
445 | )
446 | async def search_by_tag(
447 | tags: Union[str, List[str]],
448 | ctx: Context,
449 | match_all: bool = False
450 | ) -> Dict[str, Any]:
451 | """Search memories by exact tag matching - retrieves all memories categorized with specific tags (OR logic by default).
452 |
453 | USE THIS WHEN:
454 | - User asks to filter by category ("show me all 'api-docs' memories", "find 'important' notes")
455 | - Need to retrieve memories of a specific type without semantic search
456 | - User wants to browse a category ("what do we have tagged 'python'")
457 | - Looking for all memories with a particular classification
458 | - User says "show me everything about X" where X is a known tag
459 |
460 | DO NOT USE FOR:
461 | - Semantic search - use retrieve_memory instead
462 | - Time-based queries - use recall_memory instead
463 | - Finding specific content - use exact_match_retrieve instead
464 |
465 | HOW IT WORKS:
466 | - Exact string matching on memory tags (case-sensitive)
467 | - Returns memories matching ANY of the specified tags (OR logic)
468 | - No semantic search - purely categorical filtering
469 | - No similarity scoring - all results are equally relevant
470 |
471 | TAG FORMATS (all supported):
472 | - Array: ["tag1", "tag2"]
473 | - String: "tag1,tag2"
474 |
475 | RETURNS:
476 | - Array of all memories with matching tags:
477 | - content: The stored text
478 | - tags: Array of tags (will include at least one from search)
479 | - content_hash: Unique identifier
480 | - metadata: Additional memory metadata
481 | - No similarity score (categorical match, not semantic)
482 |
483 | Examples:
484 | {
485 | "tags": ["important", "reference"]
486 | }
487 |
488 | {
489 | "tags": "python,async,best-practices"
490 | }
491 |
492 | {
493 | "tags": ["api-documentation"]
494 | }
495 | """
496 | # Delegate to shared MemoryService business logic
497 | memory_service = ctx.request_context.lifespan_context.memory_service
498 | return await memory_service.search_by_tag(
499 | tags=tags,
500 | match_all=match_all
501 | )
502 |
503 | @mcp.tool(
504 | annotations=ToolAnnotations(
505 | title="Delete Memory",
506 | destructiveHint=True,
507 | ),
508 | )
509 | async def delete_memory(
510 | content_hash: str,
511 | ctx: Context
512 | ) -> Dict[str, Union[bool, str]]:
513 | """Delete a specific memory by its unique content hash identifier - permanent removal of a single memory entry.
514 |
515 | USE THIS WHEN:
516 | - User explicitly requests deletion of a specific memory ("delete that", "remove the memory about X")
517 | - After showing user a memory and they want it removed
518 | - Correcting mistakenly stored information
519 | - User says "forget about X", "delete the note about Y", "remove that memory"
520 | - Have the content_hash from a previous retrieve/search operation
521 |
522 | DO NOT USE FOR:
523 | - Deleting multiple memories - use delete_by_tag, delete_by_tags, or delete_by_all_tags instead
524 | - Deleting by content without hash - search first with retrieve_memory to get the hash
525 | - Bulk cleanup - use cleanup_duplicates or delete_by_tag instead
526 | - Time-based deletion - use delete_by_timeframe or delete_before_date instead
527 |
528 | IMPORTANT:
529 | - This is a PERMANENT operation - memory cannot be recovered after deletion
530 | - You must have the exact content_hash (obtained from search/retrieve operations)
531 | - Only deletes the single memory matching the hash
532 |
533 | HOW TO GET content_hash:
534 | 1. First search for the memory using retrieve_memory, recall_memory, or search_by_tag
535 | 2. Memory results include "content_hash" field
536 | 3. Use that hash in this delete operation
537 |
538 | RETURNS:
539 | - success: Boolean indicating if deletion succeeded
540 | - content_hash: The hash of the deleted memory
541 | - error: Error message (only present if success is False)
542 |
543 | Examples:
544 | # Step 1: Find the memory
545 | retrieve_memory(query: "outdated API documentation")
546 | # Returns: [{content_hash: "a1b2c3d4e5f6...", content: "...", ...}]
547 |
548 | # Step 2: Delete it
549 | {
550 | "content_hash": "a1b2c3d4e5f6..."
551 | }
552 | """
553 | # Delegate to shared MemoryService business logic
554 | memory_service = ctx.request_context.lifespan_context.memory_service
555 | return await memory_service.delete_memory(content_hash)
556 |
557 | @mcp.tool(
558 | annotations=ToolAnnotations(
559 | title="Check Database Health",
560 | readOnlyHint=True,
561 | ),
562 | )
563 | async def check_database_health(ctx: Context) -> Dict[str, Any]:
564 | """Check database health, storage backend status, and retrieve comprehensive memory service statistics.
565 |
566 | USE THIS WHEN:
567 | - User asks "how many memories are stored", "is the database working", "memory service status"
568 | - Diagnosing performance issues or connection problems
569 | - User wants to know storage backend configuration (SQLite/Cloudflare/Hybrid)
570 | - Checking if memory service is functioning correctly
571 | - Need to verify successful initialization or troubleshoot errors
572 | - User asks "what storage backend are we using"
573 |
574 | DO NOT USE FOR:
575 | - Searching or retrieving specific memories - use retrieve_memory instead
576 | - Getting cache performance stats - use get_cache_stats instead
577 | - Listing actual memory content - this only returns counts and status
578 |
579 | WHAT IT CHECKS:
580 | - Database connectivity and responsiveness
581 | - Storage backend type (sqlite_vec, cloudflare, hybrid)
582 | - Total memory count in database
583 | - Database file size and location (for SQLite backends)
584 | - Sync status (for hybrid backend)
585 | - Configuration details (embedding model, index names, etc.)
586 |
587 | RETURNS:
588 | - status: "healthy" or error status
589 | - backend: Storage backend type (sqlite_vec/cloudflare/hybrid)
590 | - total_memories: Count of stored memories
591 | - database_info: Path, size, configuration details
592 | - timestamp: When health check was performed
593 | - Any error messages or warnings
594 |
595 | Examples:
596 | No parameters required - just call it:
597 | {}
598 |
599 | Common use cases:
600 | - User: "How many memories do I have?" → check_database_health()
601 | - User: "Is the memory service working?" → check_database_health()
602 | - User: "What backend are we using?" → check_database_health()
603 | """
604 | # Delegate to shared MemoryService business logic
605 | memory_service = ctx.request_context.lifespan_context.memory_service
606 | return await memory_service.health_check()
607 |
608 | @mcp.tool(
609 | annotations=ToolAnnotations(
610 | title="List Memories",
611 | readOnlyHint=True,
612 | ),
613 | )
614 | async def list_memories(
615 | ctx: Context,
616 | page: int = 1,
617 | page_size: int = 10,
618 | tag: Optional[str] = None,
619 | memory_type: Optional[str] = None
620 | ) -> Dict[str, Any]:
621 | """List stored memories with pagination and optional filtering - browse all memories in pages rather than searching.
622 |
623 | USE THIS WHEN:
624 | - User wants to browse/explore all memories ("show me my memories", "list everything")
625 | - Need to paginate through large result sets
626 | - Filtering by tag OR memory type for categorical browsing
627 | - User asks "what do I have stored", "show me all notes", "browse my memories"
628 | - Want to see memories without searching for specific content
629 |
630 | DO NOT USE FOR:
631 | - Searching for specific content - use retrieve_memory instead
632 | - Time-based queries - use recall_memory instead
633 | - Finding exact text - use exact_match_retrieve instead
634 |
635 | HOW IT WORKS:
636 | - Returns memories in pages (default 10 per page)
637 | - Optional filtering by single tag or memory type
638 | - Sorted by creation time (newest first)
639 | - Supports pagination through large datasets
640 |
641 | PAGINATION:
642 | - page: 1-based page number (default 1)
643 | - page_size: Number of results per page (default 10, max usually 100)
644 | - Returns total count and page info for navigation
645 |
646 | RETURNS:
647 | - memories: Array of memory objects for current page
648 | - total: Total count of matching memories
649 | - page: Current page number
650 | - page_size: Results per page
651 | - total_pages: Total pages available
652 |
653 | Examples:
654 | {
655 | "page": 1,
656 | "page_size": 10
657 | }
658 |
659 | {
660 | "page": 2,
661 | "page_size": 20,
662 | "tag": "python"
663 | }
664 |
665 | {
666 | "page": 1,
667 | "page_size": 50,
668 | "memory_type": "decision"
669 | }
670 | """
671 | # Delegate to shared MemoryService business logic
672 | memory_service = ctx.request_context.lifespan_context.memory_service
673 | return await memory_service.list_memories(
674 | page=page,
675 | page_size=page_size,
676 | tag=tag,
677 | memory_type=memory_type
678 | )
679 |
680 | @mcp.tool(
681 | annotations=ToolAnnotations(
682 | title="Get Cache Stats",
683 | readOnlyHint=True,
684 | ),
685 | )
686 | async def get_cache_stats(ctx: Context) -> Dict[str, Any]:
687 | """
688 | Get MCP server global cache statistics for performance monitoring.
689 |
690 | Returns detailed metrics about storage and memory service caching,
691 | including hit rates, initialization times, and cache sizes.
692 |
693 | This tool is useful for:
694 | - Monitoring cache effectiveness
695 | - Debugging performance issues
696 | - Verifying cache persistence across stateless HTTP calls
697 |
698 | Returns:
699 | Dictionary with cache statistics:
700 | - total_calls: Total MCP server invocations
701 | - hit_rate: Overall cache hit rate percentage
702 | - storage_cache: Storage cache metrics (hits/misses/size)
703 | - service_cache: MemoryService cache metrics (hits/misses/size)
704 | - performance: Initialization time statistics (avg/min/max)
705 | - backend_info: Current storage backend configuration
706 | """
707 | global _CACHE_STATS, _STORAGE_CACHE, _MEMORY_SERVICE_CACHE
708 |
709 | # Import shared stats calculation utility
710 | from mcp_memory_service.utils.cache_manager import CacheStats, calculate_cache_stats_dict
711 |
712 | # Convert global dict to CacheStats dataclass
713 | stats = CacheStats(
714 | total_calls=_CACHE_STATS["total_calls"],
715 | storage_hits=_CACHE_STATS["storage_hits"],
716 | storage_misses=_CACHE_STATS["storage_misses"],
717 | service_hits=_CACHE_STATS["service_hits"],
718 | service_misses=_CACHE_STATS["service_misses"],
719 | initialization_times=_CACHE_STATS["initialization_times"]
720 | )
721 |
722 | # Calculate statistics using shared utility
723 | cache_sizes = (len(_STORAGE_CACHE), len(_MEMORY_SERVICE_CACHE))
724 | result = calculate_cache_stats_dict(stats, cache_sizes)
725 |
726 | # Add server-specific details
727 | result["storage_cache"]["keys"] = list(_STORAGE_CACHE.keys())
728 | result["backend_info"] = {
729 | "storage_backend": STORAGE_BACKEND,
730 | "sqlite_path": SQLITE_VEC_PATH,
731 | "embedding_model": EMBEDDING_MODEL_NAME
732 | }
733 |
734 | return result
735 |
736 |
737 |
738 | # =============================================================================
739 | # MAIN ENTRY POINT
740 | # =============================================================================
741 |
742 | def main():
743 | """Main entry point for the FastAPI MCP server."""
744 | # Configure for Claude Code integration
745 | port = int(os.getenv("MCP_SERVER_PORT", "8000"))
746 | host = os.getenv("MCP_SERVER_HOST", "0.0.0.0")
747 |
748 | logger.info(f"Starting MCP Memory Service FastAPI server on {host}:{port}")
749 | logger.info(f"Storage backend: {STORAGE_BACKEND}")
750 |
751 | # Run server with streamable HTTP transport
752 | mcp.run("streamable-http")
753 |
754 | if __name__ == "__main__":
755 | main()
```
--------------------------------------------------------------------------------
/docs/architecture/graph-database-design.md:
--------------------------------------------------------------------------------
```markdown
1 | # Graph Database Architecture for Memory Associations
2 |
3 | **Version**: 1.0
4 | **Date**: 2025-12-14
5 | **Status**: Implemented (v8.51.0)
6 | **Priority**: High Performance Enhancement
7 | **Issue**: [#279](https://github.com/doobidoo/mcp-memory-service/issues/279)
8 | **Pull Request**: [#280](https://github.com/doobidoo/mcp-memory-service/pull/280)
9 |
10 | ## Executive Summary
11 |
12 | This document specifies the **Graph Database Architecture** for storing memory associations in MCP Memory Service. The implementation uses **SQLite graph tables with recursive Common Table Expressions (CTEs)** to provide efficient association storage and graph queries, achieving **30x query performance improvement** and **97% storage reduction** compared to storing associations as regular Memory objects.
13 |
14 | **Key Achievement**: A real-world deployment validated the design: 343 associations created automatically, sub-10ms query latency, and minimal storage overhead.
15 |
16 | ## Problem Statement
17 |
18 | ### Current State (Before v8.51.0)
19 |
20 | The memory consolidation system automatically discovers semantic associations between memories (343 associations in a single run, December 14, 2025). These associations were stored as regular Memory objects with special tags, creating significant overhead:
21 |
22 | **Storage Bloat**:
23 | - 1,449 association memories (27.3% of 5,309 total memories)
24 | - Each association: ~500 bytes (content + 384-dim embedding + metadata)
25 | - Total overhead: ~2-3 MB for associations alone
26 |
27 | **Query Inefficiency**:
28 | - Finding connected memories: ~150ms (full table scan with tag filtering)
29 | - Multi-hop queries: ~800ms (multiple manual JOIN operations)
30 | - Algorithm complexity: O(N) table scan vs O(log N) indexed lookup
31 |
32 | **Functional Limitations**:
33 | - No graph traversal capability (multi-hop connections)
34 | - No graph algorithms (PageRank, centrality, shortest paths, community detection)
35 | - Association memories appear in regular semantic search results (search pollution)
36 | - Cannot efficiently query "find all memories 2-3 connections away"
37 |
38 | ### Real-World Impact
39 |
40 | **Production Deployment Metrics** (December 14, 2025):
41 | - 343 associations created in a single consolidation run
42 | - Each stored as full Memory object with 384-dimensional embedding
43 | - ~50 KB storage per 100 associations as Memory objects
44 | - ~5 KB in dedicated graph table (97% reduction)
45 | - Query performance: O(N) table scan vs O(log N) indexed graph lookup
46 |
47 | ## Architectural Decision
48 |
49 | ### Options Evaluated
50 |
51 | | Approach | Performance | Simplicity | Overhead | Recommendation |
52 | |----------|-------------|------------|----------|----------------|
53 | | **Keep Current (Baseline)** | ⭐⭐ | ⭐⭐⭐⭐⭐ | None | ❌ Doesn't scale |
54 | | **SQLite Graph Table + Recursive CTEs** | ⭐⭐⭐⭐ | ⭐⭐⭐⭐ | Minimal | ✅ **SELECTED** |
55 | | **rustworkx In-Memory Cache** | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | Moderate | Future: v9.0+ |
56 | | **Neo4j / Standalone Graph DB** | ⭐⭐⭐⭐⭐ | ⭐ | High | ❌ Overkill |
57 |
58 | ### Decision Rationale
59 |
60 | **SQLite Graph Table + Recursive CTEs** selected for optimal balance:
61 |
62 | ✅ **Performance**: Near-native graph database performance
63 | - Recursive CTEs provide BFS/DFS traversal
64 | - Indexed lookups: O(log N) vs O(N) table scans
65 | - Multi-hop queries: Single SQL query vs multiple round-trips
66 |
67 | ✅ **Simplicity**: No external dependencies
68 | - Reuses existing SQLite database
69 | - No new infrastructure to maintain
70 | - Consistent backup/restore workflow
71 |
72 | ✅ **Sophistication**: Production-grade graph features
73 | - Bidirectional edge traversal
74 | - Shortest path algorithms
75 | - Cycle prevention in traversal
76 | - Subgraph extraction for visualization
77 |
78 | ✅ **Minimal Overhead**: Negligible operational cost
79 | - ~50 bytes per association (vs 500 bytes as Memory)
80 | - No additional memory footprint
81 | - Single database file (no data distribution complexity)
82 |
83 | ## Technical Architecture
84 |
85 | ### 1. Database Schema
86 |
87 | #### Graph Table Structure
88 |
89 | ```sql
90 | CREATE TABLE IF NOT EXISTS memory_graph (
91 | source_hash TEXT NOT NULL,
92 | target_hash TEXT NOT NULL,
93 | similarity REAL NOT NULL,
94 | connection_types TEXT NOT NULL, -- JSON array: ["temporal_proximity", "shared_concepts"]
95 | metadata TEXT, -- JSON object: {"discovery_method": "creative_association"}
96 | created_at REAL NOT NULL,
97 | PRIMARY KEY (source_hash, target_hash)
98 | );
99 | ```
100 |
101 | **Design Decisions**:
102 |
103 | 1. **Composite Primary Key** `(source_hash, target_hash)`:
104 | - Ensures uniqueness for directed edges
105 | - Prevents duplicate associations
106 | - Enables efficient bidirectional queries
107 |
108 | 2. **JSON Storage** for `connection_types` and `metadata`:
109 | - Flexible schema for evolving association types
110 | - Lightweight compared to separate junction tables
111 | - Easy to query with SQLite JSON functions
112 |
113 | 3. **Bidirectional Edges**:
114 | - Store both A→B and B→A for symmetrical associations
115 | - Simplifies traversal queries (no need for UNION)
116 | - Minimal storage cost (~100 bytes vs ~1000 bytes for Memory object pair)
117 |
118 | #### Indexes
119 |
120 | ```sql
121 | CREATE INDEX IF NOT EXISTS idx_graph_source ON memory_graph(source_hash);
122 | CREATE INDEX IF NOT EXISTS idx_graph_target ON memory_graph(target_hash);
123 | CREATE INDEX IF NOT EXISTS idx_graph_bidirectional ON memory_graph(source_hash, target_hash);
124 | ```
125 |
126 | **Index Strategy**:
127 | - `idx_graph_source`: Fast lookup for "find all connections from X"
128 | - `idx_graph_target`: Fast lookup for "find all connections to X"
129 | - `idx_graph_bidirectional`: Fast edge existence checks
130 |
131 | **Query Performance** (with indexes):
132 | - Find connected (1-hop): <5ms
133 | - Find connected (3-hop): <25ms
134 | - Shortest path: <15ms (average)
135 | - Get subgraph: <10ms (radius=2)
136 |
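137 | To confirm that these lookups actually use the indexes, SQLite's `EXPLAIN QUERY PLAN` output can be checked directly. A minimal verification sketch (the database path is a placeholder, not the service's configured location):
138 | 
139 | ```python
140 | import sqlite3
141 | 
142 | # Placeholder path - substitute your deployment's SQLite file.
143 | conn = sqlite3.connect("memory.db")
144 | 
145 | # Ask the query planner how a 1-hop lookup would execute.
146 | plan = conn.execute(
147 |     "EXPLAIN QUERY PLAN "
148 |     "SELECT target_hash FROM memory_graph WHERE source_hash = ?",
149 |     ("abc123",),
150 | ).fetchall()
151 | 
152 | # With the indexes in place, the detail column should show
153 | # 'SEARCH ... USING ... INDEX' rather than a full-table 'SCAN'.
154 | for row in plan:
155 |     print(row)
156 | ```
157 | 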
137 | ### 2. GraphStorage Class
138 |
139 | **Location**: `src/mcp_memory_service/storage/graph.py`
140 |
141 | #### Core Methods
142 |
143 | ##### `store_association()`
144 | ```python
145 | async def store_association(
146 | self,
147 | source_hash: str,
148 | target_hash: str,
149 | similarity: float,
150 | connection_types: List[str],
151 | metadata: Dict[str, Any]
152 | ) -> None:
153 | """Store bidirectional association in graph table."""
154 | ```
155 |
156 | **Implementation Details**:
157 | - Uses `INSERT OR REPLACE` for idempotency
158 | - Stores both A→B and B→A edges
159 | - Validates inputs (no self-loops, no empty hashes)
160 | - JSON serialization for connection_types and metadata
161 |
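162 | The write path can be illustrated with plain `sqlite3`. A simplified sketch of the behavior described above (hypothetical helper, not the actual `GraphStorage` code):
163 | 
164 | ```python
165 | import json
166 | import sqlite3
167 | import time
168 | 
169 | def store_association_sketch(conn: sqlite3.Connection, source: str, target: str,
170 |                              similarity: float, connection_types: list, metadata: dict) -> None:
171 |     """Illustrative only: idempotent, bidirectional edge insert."""
172 |     if not source or not target or source == target:
173 |         raise ValueError("hashes must be non-empty and distinct (no self-loops)")
174 |     row_tail = (similarity, json.dumps(connection_types), json.dumps(metadata), time.time())
175 |     # INSERT OR REPLACE keeps retries idempotent; writing both directions
176 |     # means traversal queries never need a UNION over reversed edges.
177 |     for src, tgt in ((source, target), (target, source)):
178 |         conn.execute(
179 |             "INSERT OR REPLACE INTO memory_graph "
180 |             "(source_hash, target_hash, similarity, connection_types, metadata, created_at) "
181 |             "VALUES (?, ?, ?, ?, ?, ?)",
182 |             (src, tgt) + row_tail,
183 |         )
184 |     conn.commit()
185 | ```
186 | 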
162 | ##### `find_connected()`
163 | ```python
164 | async def find_connected(
165 | self,
166 | memory_hash: str,
167 | max_hops: int = 2
168 | ) -> List[Tuple[str, int]]:
169 | """Find all memories connected within N hops using BFS."""
170 | ```
171 |
172 | **Recursive CTE Implementation**:
173 | ```sql
174 | WITH RECURSIVE connected_memories(hash, distance, path) AS (
175 | -- Base case: Starting node (path wrapped with delimiters)
176 | SELECT ?, 0, ? -- Parameters: (hash, ',hash,')
177 |
178 | UNION ALL
179 |
180 | -- Recursive case: Expand to neighbors
181 | SELECT
182 | mg.target_hash,
183 | cm.distance + 1,
184 | cm.path || mg.target_hash || ',' -- Append hash with delimiter
185 | FROM connected_memories cm
186 | JOIN memory_graph mg ON cm.hash = mg.source_hash
187 | WHERE cm.distance < ? -- Max hops limit
188 | AND instr(cm.path, ',' || mg.target_hash || ',') = 0 -- Cycle prevention (exact match)
189 | )
190 | SELECT DISTINCT hash, distance
191 | FROM connected_memories
192 | WHERE distance > 0
193 | ORDER BY distance, hash;
194 | ```
195 |
196 | **Key Features**:
197 | - **Breadth-First Search**: Returns results ordered by distance
198 | - **Cycle Prevention**: Tracks visited nodes in path string
199 | - **Efficient**: Single SQL query vs multiple round-trips
200 |
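201 | Typical usage, assuming a `GraphStorage` instance constructed with the database path (the import path follows the module location above; the db path here is a placeholder):
202 | 
203 | ```python
204 | import asyncio
205 | from mcp_memory_service.storage.graph import GraphStorage
206 | 
207 | async def show_neighborhood(db_path: str, memory_hash: str) -> None:
208 |     graph = GraphStorage(db_path)
209 |     # Results are (hash, distance) tuples ordered by hop count,
210 |     # e.g. [("def456", 1), ("789abc", 2)].
211 |     for neighbor, distance in await graph.find_connected(memory_hash, max_hops=2):
212 |         print(f"{neighbor}: {distance} hop(s) away")
213 | 
214 | # asyncio.run(show_neighborhood("memory.db", "abc123"))
215 | ```
216 | 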
201 | ##### `shortest_path()`
202 | ```python
203 | async def shortest_path(
204 | self,
205 | hash1: str,
206 | hash2: str,
207 | max_depth: int = 5
208 | ) -> Optional[List[str]]:
209 | """Find shortest path between two memories using BFS."""
210 | ```
211 |
212 | **Algorithm**:
213 | - Unidirectional BFS from source to target
214 | - BFS guarantees shortest path found first (level-order traversal)
215 | - Early termination when target is reached
216 | - Returns `None` if no path exists within max_depth
217 | - Performance: ~15ms typical (excellent for sparse graphs)
218 |
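219 | As a mental model, the same level-order traversal over an in-memory adjacency map looks like this (a reference sketch in plain Python, not the shipped SQL):
220 | 
221 | ```python
222 | from collections import deque
223 | from typing import Dict, List, Optional
224 | 
225 | def shortest_path_bfs(adjacency: Dict[str, List[str]], start: str, goal: str,
226 |                       max_depth: int = 5) -> Optional[List[str]]:
227 |     """Level-order BFS: the first path to reach `goal` is a shortest one."""
228 |     queue = deque([[start]])
229 |     visited = {start}
230 |     while queue:
231 |         path = queue.popleft()
232 |         if path[-1] == goal:
233 |             return path
234 |         if len(path) >= max_depth:  # mirrors the SQL max_depth guard
235 |             continue
236 |         for neighbor in adjacency.get(path[-1], []):
237 |             if neighbor not in visited:  # cycle prevention, like instr() on the path
238 |                 visited.add(neighbor)
239 |                 queue.append(path + [neighbor])
240 |     return None  # no path within max_depth
241 | ```
242 | 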
219 | ##### `get_subgraph()`
220 | ```python
221 | async def get_subgraph(
222 | self,
223 | memory_hash: str,
224 | radius: int = 2
225 | ) -> Dict[str, Any]:
226 | """Extract neighborhood subgraph for visualization."""
227 | ```
228 |
229 | **Returns**:
230 | ```json
231 | {
232 | "nodes": [
233 | {"hash": "abc123", "distance": 0},
234 | {"hash": "def456", "distance": 1}
235 | ],
236 | "edges": [
237 | {
238 | "source": "abc123",
239 | "target": "def456",
240 | "similarity": 0.65,
241 | "connection_types": ["temporal_proximity"]
242 | }
243 | ]
244 | }
245 | ```
246 |
247 | **Use Cases**:
248 | - Graph visualization in web UI
249 | - Association exploration tools
250 | - Debugging consolidation results
251 |
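252 | For quick inspection outside a browser, this structure converts directly to Graphviz DOT (a minimal sketch over the JSON shape shown above):
253 | 
254 | ```python
255 | from typing import Any, Dict
256 | 
257 | def subgraph_to_dot(subgraph: Dict[str, Any]) -> str:
258 |     """Render a get_subgraph() payload as Graphviz DOT for ad-hoc debugging."""
259 |     lines = ["graph memory_subgraph {"]
260 |     for node in subgraph["nodes"]:
261 |         # Abbreviate hashes; annotate each node with its hop distance.
262 |         lines.append(f'  "{node["hash"][:8]}" [label="{node["hash"][:8]} (d={node["distance"]})"];')
263 |     for edge in subgraph["edges"]:
264 |         lines.append(
265 |             f'  "{edge["source"][:8]}" -- "{edge["target"][:8]}"'
266 |             f' [label="{edge["similarity"]:.2f}"];'
267 |         )
268 |     lines.append("}")
269 |     return "\n".join(lines)
270 | ```
271 | 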
252 | ### 3. Configuration System
253 |
254 | **Environment Variable**: `MCP_GRAPH_STORAGE_MODE`
255 |
256 | ```bash
257 | # Three storage modes for gradual migration
258 | export MCP_GRAPH_STORAGE_MODE=dual_write # Default (recommended)
259 |
260 | # Options:
261 | # memories_only - Legacy: associations as Memory objects (current behavior)
262 | # dual_write - Transition: write to both memories + graph tables
263 | # graph_only - Modern: only graph table (97% storage reduction)
264 | ```
265 |
266 | **Mode Behavior**:
267 |
268 | | Mode | Stores in Memories Table | Stores in Graph Table | Storage Overhead | Query Method |
269 | |------|-------------------------|----------------------|------------------|--------------|
270 | | `memories_only` | ✅ Yes | ❌ No | 100% (baseline) | Tag filtering |
271 | | `dual_write` | ✅ Yes | ✅ Yes | ~103% (3% graph overhead) | Both available |
272 | | `graph_only` | ❌ No | ✅ Yes | 3% (97% reduction) | Graph queries |
273 |
274 | **Configuration Validation** (`config.py`):
275 | ```python
276 | GRAPH_STORAGE_MODE = os.getenv('MCP_GRAPH_STORAGE_MODE', 'dual_write').lower()
277 |
278 | if GRAPH_STORAGE_MODE not in ['memories_only', 'dual_write', 'graph_only']:
279 | logger.warning(f"Invalid graph storage mode: {GRAPH_STORAGE_MODE}, defaulting to dual_write")
280 | GRAPH_STORAGE_MODE = 'dual_write'
281 |
282 | logger.info(f"Graph Storage Mode: {GRAPH_STORAGE_MODE}")
283 | ```
284 |
285 | ### 4. Consolidator Integration
286 |
287 | **Location**: `src/mcp_memory_service/consolidation/consolidator.py`
288 |
289 | #### Mode-Based Dispatcher
290 |
291 | ```python
292 | async def _store_associations_as_memories(self, associations) -> None:
293 | """Store discovered associations using configured graph storage mode."""
294 | self.logger.info(f"Storing {len(associations)} associations using mode: {GRAPH_STORAGE_MODE}")
295 |
296 | # Store in memories table if enabled
297 | if GRAPH_STORAGE_MODE in ['memories_only', 'dual_write']:
298 | await self._store_associations_in_memories(associations)
299 |
300 | # Store in graph table if enabled
301 | if GRAPH_STORAGE_MODE in ['dual_write', 'graph_only']:
302 | await self._store_associations_in_graph_table(associations)
303 | ```
304 |
305 | #### GraphStorage Initialization
306 |
307 | ```python
308 | def _init_graph_storage(self) -> None:
309 | """Initialize GraphStorage with appropriate db_path from storage backend."""
310 | if hasattr(self.storage, 'primary') and hasattr(self.storage.primary, 'db_path'):
311 | # Hybrid backend: Use primary (SQLite-vec) db_path
312 | db_path = self.storage.primary.db_path
313 | elif hasattr(self.storage, 'db_path'):
314 | # SQLite-vec backend: Use direct db_path
315 | db_path = self.storage.db_path
316 | else:
317 | # Cloudflare-only backend: GraphStorage not supported
318 | self.graph_storage = None
319 | self.logger.warning("GraphStorage requires SQLite backend (not available)")
320 | return
321 |
322 | self.graph_storage = GraphStorage(db_path)
323 | self.logger.info(f"Graph storage mode: {GRAPH_STORAGE_MODE}")
324 | ```
325 |
326 | ### 5. Algorithm Design Decisions
327 |
328 | #### BFS Implementation: Unidirectional vs Bidirectional
329 |
330 | **Design Question**: Should `shortest_path()` use unidirectional or bidirectional BFS?
331 |
332 | **Decision**: **Unidirectional BFS** (current implementation)
333 |
334 | ##### Theoretical Comparison
335 |
336 | | Algorithm | Time Complexity | Code Complexity | Best For |
337 | |-----------|----------------|-----------------|----------|
338 | | **Unidirectional BFS** | O(b^d) | ~25 lines SQL | Sparse graphs (b < 1) |
339 | | **Bidirectional BFS** | O(b^(d/2)) | ~80-100 lines SQL | Dense graphs (b > 5) |
340 |
341 | Where:
342 | - `b` = branching factor (avg connections per node)
343 | - `d` = depth of target node
344 |
345 | ##### Our Graph Topology (Production Data)
346 |
347 | **Measured Characteristics** (as of v8.51.0):
348 | ```
349 | Total associations: 1,449
350 | Total memories: 5,173
351 | Branching factor: ~0.56 (very sparse)
352 | Typical path depth: 1-3 hops
353 | Max search depth: 5 hops (configured limit)
354 | ```
355 |
356 | ##### Performance Analysis
357 |
358 | **Typical Query** (d=2, b=0.56):
359 | ```
360 | Unidirectional: 0.56^2 ≈ 0.31 expected nodes explored → ~15ms
361 | Bidirectional: 2 × 0.56^1 = 1.12 expected nodes explored → ~45ms
362 | Result: Unidirectional is ~3× FASTER (the fixed overhead of maintaining dual frontiers exceeds any savings)
363 | ```
364 |
365 | **Deep Query** (d=5, b=3):
366 | ```
367 | Unidirectional: 3^5 = 243 nodes explored
368 | Bidirectional: 2 × 3^2.5 ≈ 31 nodes explored
369 | Result: Bidirectional is 7.8× faster (but not our use case)
370 | ```
371 |
372 | ##### Decision Rationale
373 |
374 | **Why Unidirectional is Optimal**:
375 |
376 | 1. **Sparse Graph Topology** (b=0.56 << 1)
377 | - Sub-linear node exploration in practice
378 | - Bidirectional overhead dominates for sparse graphs
379 |
380 | 2. **Shallow Connection Patterns** (d=1-3 typical)
381 | - Most queries are 1-hop direct lookups (d=1)
382 | - Bidirectional provides zero benefit for d=1
383 | - Minimal benefit for d=2-3 in sparse graphs
384 |
385 | 3. **Performance is Excellent** (~15ms)
386 | - Well below 100ms user perception threshold
387 | - No user complaints or performance issues
388 |
389 | 4. **Code Simplicity** (25 lines vs 80-100 lines)
390 | - Single recursive CTE vs dual CTEs + intersection logic
391 | - Easy to debug and maintain
392 | - Lower risk of cycle detection bugs
393 |
394 | 5. **Proven Correctness**
395 | - All 22 unit tests passing
396 | - BFS guarantees shortest path (level-order traversal)
397 |
398 | ##### When to Reconsider Bidirectional BFS
399 |
400 | Monitor these metrics and switch if thresholds exceeded:
401 |
402 | | Metric | Current | Threshold | Action |
403 | |--------|---------|-----------|--------|
404 | | **Avg connections/node** | 0.56 | > 5 | Consider bidirectional |
405 | | **Total associations** | 1,449 | > 10,000 | Evaluate performance |
406 | | **P95 query latency** | ~25ms | > 50ms | Optimize or switch |
407 | | **Deep paths** (d>3) | <5% | > 20% | Bidirectional beneficial |
408 |
409 | ##### Implementation Notes
410 |
411 | **Current SQL Strategy** (Unidirectional):
412 | ```sql
413 | WITH RECURSIVE path_finder(current_hash, path, depth) AS (
414 | SELECT source, source, 1 -- Start from source only
415 | UNION ALL
416 | SELECT target, path || ',' || target, depth + 1
417 | FROM path_finder pf
418 | JOIN memory_graph mg ON pf.current_hash = mg.source_hash
419 | WHERE depth < max_depth
420 | AND instr(path, target) = 0 -- Cycle prevention
421 | )
422 | SELECT path
423 | FROM path_finder
424 | WHERE current_hash = target
425 | ORDER BY depth
426 | LIMIT 1 -- BFS guarantees this is shortest
427 | ```
428 |
429 | **Key optimization**: LIMIT 1 with ORDER BY depth ensures we return as soon as shortest path is found.
430 |
431 | **Why Bidirectional Would Be Complex**:
432 | ```sql
433 | -- Would require:
434 | -- 1. Two parallel CTEs (forward and backward frontiers)
435 | -- 2. Intersection detection logic (when frontiers meet)
436 | -- 3. Path reconstruction from both halves
437 | -- 4. Handling paths that meet at different depths
438 | -- 5. More complex cycle detection
439 | -- 6. ~80-100 lines of intricate SQL
440 | ```
441 |
442 | ##### Lessons Learned
443 |
444 | 1. **Algorithmic complexity doesn't always predict real-world performance**
445 | - O(b^(d/2)) is theoretically better than O(b^d)
446 | - But constant factors and graph topology matter more
447 | - For b < 1, simpler algorithm wins
448 |
449 | 2. **Sparse graphs favor simple algorithms**
450 | - When branching factor < 1, bidirectional overhead exceeds savings
451 | - Sub-linear exploration makes unidirectional optimal
452 |
453 | 3. **Document performance characteristics, not just algorithm names**
454 | - "15ms typical" more useful than "unidirectional BFS"
455 | - Include graph topology metrics in documentation
456 |
457 | 4. **Premature optimization is real**
458 | - Bidirectional BFS would add 4× code complexity
459 | - While actually hurting performance in our use case
460 | - Optimize when metrics warrant it, not speculatively
461 |
462 | ## Performance Benchmarks
463 |
464 | ### Query Performance
465 |
466 | **Test Environment**: MacBook Pro M1, SQLite 3.43.0, 1,449 associations
467 |
468 | | Query Type | Before (Memories) | After (Graph Table) | Improvement |
469 | |------------|------------------|---------------------|-------------|
470 | | **Find Connected (1-hop)** | 150ms | 5ms | **30x faster** |
471 | | **Find Connected (3-hop)** | 800ms | 25ms | **32x faster** |
472 | | **Shortest Path** | 1,200ms | 15ms | **80x faster** |
473 | | **Get Subgraph (radius=2)** | N/A (not possible) | 10ms | **New capability** |
474 |
475 | **Why So Fast?**:
476 | 1. **Indexed Lookups**: O(log N) vs O(N) table scans
477 | 2. **Single SQL Query**: Recursive CTEs eliminate round-trips
478 | 3. **Compiled Traversal**: SQLite query planner optimizes BFS
479 | 4. **No Embedding Retrieval**: Graph queries don't fetch 384-dim vectors
480 |
481 | ### Storage Efficiency
482 |
483 | **Test Data**: 1,449 associations (real production data)
484 |
485 | | Storage Mode | Database Size | Per Association | Reduction |
486 | |--------------|---------------|----------------|-----------|
487 | | `memories_only` (baseline) | 2.8 MB | 500 bytes | 0% |
488 | | `dual_write` | 2.88 MB | ~515 bytes | -3% (temporary overhead) |
489 | | `graph_only` | 144 KB | 50 bytes | **97% reduction** |
490 |
491 | **Breakdown**:
492 | - **Memory object**: 500 bytes (content + 384-dim embedding + metadata)
493 | - **Graph edge**: 50 bytes (2 hashes + similarity + JSON arrays)
494 |
495 | **Space Reclaimed** (after cleanup):
496 | - 1,449 associations × 450 bytes saved = ~651 KB raw data
497 | - VACUUM reclamation: ~2-3 MB (including SQLite overhead)
498 |
499 | ### Test Suite Performance
500 |
501 | **Test Execution**: `pytest tests/storage/ tests/consolidation/test_graph_modes.py`
502 |
503 | ```
504 | ========================== 26 passed, 7 xfailed, 2 warnings in 0.25s ==========================
505 | ```
506 |
507 | **Coverage**:
508 | - GraphStorage class: ~90-95% (22 tests)
509 | - Edge cases: Empty inputs, cycles, self-loops, None values
510 | - Performance benchmarks: <10ms validation for 1-hop queries
511 |
512 | ## Migration Strategy
513 |
514 | ### For Existing Users (3 Steps + Optional Cleanup)
515 |
516 | #### Step 1: Upgrade to v8.51.0
517 | ```bash
518 | pip install --upgrade mcp-memory-service
519 | ```
520 |
521 | **Default behavior**: `dual_write` mode (zero breaking changes)
522 | - Associations written to both memories table AND graph table
523 | - All existing queries continue working
524 | - Gradual data synchronization
525 |
526 | #### Step 2: Backfill Existing Associations
527 | ```bash
528 | # Preview migration (safe, read-only)
529 | python scripts/maintenance/backfill_graph_table.py --dry-run
530 |
531 | # Expected output:
532 | # ✅ Found 1,449 association memories
533 | # ✅ Graph table auto-created with proper schema
534 | # ✅ All associations validated and ready for insertion
535 | # ✅ Zero duplicates detected
536 |
537 | # Execute migration
538 | python scripts/maintenance/backfill_graph_table.py --apply
539 |
540 | # Progress output:
541 | # Processing batch 1/15: 100 associations
542 | # Processing batch 2/15: 100 associations
543 | # ...
544 | # ✅ Successfully migrated 1,435 associations (14 skipped due to missing metadata)
545 | ```
546 |
547 | **Script Features**:
548 | - Automatic graph table creation (runs migration 008)
549 | - Safety checks: database locks, disk space, HTTP server warnings
550 | - Batch processing with progress reporting
551 | - Duplicate detection and skipping
552 | - Transaction safety with rollback on errors
553 |
554 | #### Step 3: Switch to graph_only Mode (Recommended)
555 | ```bash
556 | # Update .env file
557 | export MCP_GRAPH_STORAGE_MODE=graph_only
558 |
559 | # Restart services
560 | systemctl --user restart mcp-memory-http.service
561 | # Or use /mcp in Claude Code to reconnect
562 | ```
563 |
564 | **Benefits of graph_only mode**:
565 | - 97% storage reduction for associations
566 | - 30x faster queries
567 | - No search pollution from association memories
568 | - Cleaner semantic search results
569 |
570 | #### Step 4: Cleanup (Optional - Reclaim Storage)
571 | ```bash
572 | # Preview deletions (safe, read-only)
573 | python scripts/maintenance/cleanup_association_memories.py --dry-run
574 |
575 | # Expected output:
576 | # ✅ Found 1,449 association memories
577 | # ✅ 1,435 verified in graph table (safe to delete)
578 | # ✅ 14 orphaned (will be preserved for safety)
579 | # ✅ Estimated space reclaimed: ~2.8 MB
580 |
581 | # Interactive cleanup (prompts for confirmation)
582 | python scripts/maintenance/cleanup_association_memories.py
583 |
584 | # Prompt:
585 | # Delete 1,435 memories? This will reclaim ~2.8 MB. (y/N)
586 |
587 | # Automated cleanup (no confirmation)
588 | python scripts/maintenance/cleanup_association_memories.py --force
589 | ```
590 |
591 | **Script Features**:
592 | - Verification: Only deletes memories with matching graph entries
593 | - VACUUM operation to reclaim space
594 | - Before/after database size reporting
595 | - Interactive confirmation (bypassable with --force)
596 | - Transaction safety with rollback
597 |
598 | ### For New Installations
599 |
600 | ```bash
601 | # Start with graph_only mode (no migration needed)
602 | export MCP_GRAPH_STORAGE_MODE=graph_only
603 | pip install mcp-memory-service
604 | ```
605 |
606 | **Recommendation**: New users should use `graph_only` mode from the start to avoid unnecessary storage overhead.
607 |
608 | ## Future Enhancements (v9.0+)
609 |
610 | ### Phase 2: REST API Endpoints (v8.52.0)
611 |
612 | ```python
613 | # Proposed endpoints
614 | GET /api/graph/connected/{hash}?max_hops=2
615 | GET /api/graph/path/{hash1}/{hash2}
616 | GET /api/graph/subgraph/{hash}?radius=2
617 | POST /api/graph/visualize
618 | ```
619 |
620 | ### Phase 3: Advanced Graph Analytics (v9.0+)
621 |
622 | **rustworkx Integration** (optional dependency):
623 | - PageRank scoring for memory importance
624 | - Community detection for topic clustering
625 | - Betweenness centrality for hub identification
626 | - Graph diameter and connected components analysis
627 |
628 | **Visualization**:
629 | - D3.js force-directed graph in web UI
630 | - Cytoscape.js for interactive exploration
631 | - Export to GraphML/GEXF formats
632 |
633 | **Query Enhancements**:
634 | - Labeled property graphs (typed relationships)
635 | - Pattern matching (Cypher-like queries)
636 | - Temporal graph queries (associations over time)
637 |
638 | ## Testing & Validation
639 |
640 | ### Unit Tests
641 |
642 | **Location**: `tests/storage/test_graph_storage.py` (22 tests, all passing)
643 |
644 | **Coverage**:
645 | - ✅ Store operations (basic, bidirectional, duplicates, self-loops)
646 | - ✅ Find connected (basic, multi-hop, cycles)
647 | - ✅ Shortest path (direct, multi-hop, disconnected, self)
648 | - ✅ Get subgraph (basic, multi-hop, radius variations)
649 | - ✅ Edge cases (empty inputs, None values, invalid hashes)
650 | - ✅ Performance benchmarks (<10ms validation)
651 |
652 | **Sample Test**:
653 | ```python
654 | @pytest.mark.asyncio
655 | async def test_find_connected_with_cycles(graph_storage, sample_graph_data):
656 | """Test that cycle prevention works correctly in recursive CTEs."""
657 | # Create triangle cycle: E→F→G→E
658 | await graph_storage.store_association("E", "F", 0.5, ["cycle"], {})
659 | await graph_storage.store_association("F", "G", 0.6, ["cycle"], {})
660 | await graph_storage.store_association("G", "E", 0.7, ["cycle"], {})
661 |
662 | # Find connected should traverse cycle but not get stuck
663 | connected = await graph_storage.find_connected("E", max_hops=5)
664 |
665 | # Should find F and G (distance 1 and 2)
666 | hashes = [hash for hash, _ in connected]
667 | assert "F" in hashes
668 | assert "G" in hashes
669 | # Should NOT have infinite loop
670 | assert len(connected) == 2 # Only F and G, no duplicates
671 | ```
672 |
673 | ### Integration Tests
674 |
675 | **Location**: `tests/consolidation/test_graph_modes.py` (4 passing, 7 scaffolded)
676 |
677 | **Passing Tests**:
678 | - ✅ `test_graph_storage_mode_env_variable` - Config validation
679 | - ✅ `test_mode_configuration_validation` - Invalid mode handling
680 | - ✅ `test_graph_storage_basic_operations` - GraphStorage integration
681 | - ✅ `test_storage_size_comparison_concept` - 97% reduction baseline
682 |
683 | **Scaffolded Tests** (xfail for Phase 2):
684 | - ⏭️ `test_memories_only_mode` - Legacy mode verification
685 | - ⏭️ `test_dual_write_mode` - Transition mode consistency
686 | - ⏭️ `test_graph_only_mode` - Modern mode validation
687 | - ⏭️ `test_dual_write_consistency` - Data synchronization
688 | - ⏭️ `test_graph_only_no_memory_pollution` - Search cleanliness
689 |
690 | ### Performance Benchmarks
691 |
692 | **Test**: `test_query_performance_benchmark`
693 |
694 | ```python
695 | @pytest.mark.asyncio
696 | async def test_query_performance_benchmark(graph_storage, sample_graph_data):
697 | """Validate that graph queries meet performance targets."""
698 | # Create linear chain: A→B→C→D
699 | # ... setup code ...
700 |
701 | # Benchmark 1-hop query
702 | start_time = time.time()
703 | connected_1hop = await graph_storage.find_connected("A", max_hops=1)
704 | elapsed_1hop = time.time() - start_time
705 |
706 | assert elapsed_1hop < 0.01 # <10ms for 1-hop
707 |
708 | # Benchmark 3-hop query
709 | start_time = time.time()
710 | connected_3hop = await graph_storage.find_connected("A", max_hops=3)
711 | elapsed_3hop = time.time() - start_time
712 |
713 | assert elapsed_3hop < 0.05 # <50ms for 3-hop
714 | ```
715 |
716 | ## Zero Breaking Changes
717 |
718 | **Guarantee**: Users on v8.50.x can upgrade to v8.51.0 with zero code changes.
719 |
720 | **Mechanism**:
721 | 1. **Default mode**: `dual_write` (writes to both memories + graph)
722 | 2. **Backward-compatible queries**: Memory-based association queries continue working
723 | 3. **Opt-in migration**: Users choose when to switch to `graph_only`
724 | 4. **Rollback support**: Can revert to `memories_only` if needed
725 |
726 | **Validation**:
727 | - ✅ All existing tests pass without modification
728 | - ✅ Consolidation system continues creating associations as before
729 | - ✅ Memory retrieval works identically
730 | - ✅ No API changes required
731 |
732 | ## Troubleshooting
733 |
734 | ### Common Issues
735 |
736 | #### Issue: Graph table not created
737 | **Symptom**: `sqlite3.OperationalError: no such table: memory_graph`
738 |
739 | **Solution**:
740 | ```bash
741 | # Run backfill script to auto-create table
742 | python scripts/maintenance/backfill_graph_table.py --dry-run
743 |
744 | # Or manually run migration
745 | sqlite3 ~/.local/share/mcp-memory-service/memory.db < \
746 | src/mcp_memory_service/storage/migrations/008_add_graph_table.sql
747 | ```
748 |
749 | #### Issue: Slow queries after migration
750 | **Symptom**: Graph queries take >100ms
751 |
752 | **Solution**:
753 | ```sql
754 | -- Verify indexes exist
755 | SELECT name FROM sqlite_master WHERE type='index' AND tbl_name='memory_graph';
756 | -- Expected: idx_graph_source, idx_graph_target, idx_graph_bidirectional
757 |
758 | -- Rebuild indexes if missing
759 | REINDEX memory_graph;
760 |
761 | -- Analyze for query planner optimization
762 | ANALYZE memory_graph;
763 | ```
764 |
765 | #### Issue: Orphaned associations after cleanup
766 | **Symptom**: Some associations missing from graph table
767 |
768 | **Solution**:
769 | ```bash
770 | # Re-run backfill to catch missed associations
771 | python scripts/maintenance/backfill_graph_table.py --apply
772 |
773 | # Check for associations with incomplete metadata
774 | sqlite3 ~/.local/share/mcp-memory-service/memory.db \
775 | "SELECT COUNT(*) FROM memories WHERE tags LIKE '%association%' \
776 | AND (content_hash IS NULL OR metadata IS NULL);"
777 | ```
778 |
779 | ## References
780 |
781 | - **Issue #279**: [Graph Database Architecture for Memory Associations](https://github.com/doobidoo/mcp-memory-service/issues/279)
782 | - **Pull Request #280**: [Implementation PR](https://github.com/doobidoo/mcp-memory-service/pull/280)
783 | - **SQLite Recursive CTEs**: [Official Documentation](https://www.sqlite.org/lang_with.html)
784 | - **Graph Algorithms**: _Introduction to Algorithms_ (CLRS), Chapter 22
785 | - **Real-world Metrics**: December 14, 2025 consolidation run (343 associations)
786 |
787 | ---
788 |
789 | **Document History**:
790 | - v1.0 (2025-12-14): Initial specification based on implemented solution
791 | - Implementation validated with 22 passing unit tests
792 | - Real-world performance metrics from production deployment
793 |
794 | **Maintained by**: MCP Memory Service contributors
795 |
```