tokens: 48633/50000 26/625 files (page 6/47)
This is page 6 of 47. Use http://codebase.md/doobidoo/mcp-memory-service?lines=true&page={x} to view the full context.

# Directory Structure

```
├── .claude
│   ├── agents
│   │   ├── amp-bridge.md
│   │   ├── amp-pr-automator.md
│   │   ├── code-quality-guard.md
│   │   ├── gemini-pr-automator.md
│   │   └── github-release-manager.md
│   ├── settings.local.json.backup
│   └── settings.local.json.local
├── .commit-message
├── .dockerignore
├── .env.example
├── .env.sqlite.backup
├── .envnn#
├── .gitattributes
├── .github
│   ├── FUNDING.yml
│   ├── ISSUE_TEMPLATE
│   │   ├── bug_report.yml
│   │   ├── config.yml
│   │   ├── feature_request.yml
│   │   └── performance_issue.yml
│   ├── pull_request_template.md
│   └── workflows
│       ├── bridge-tests.yml
│       ├── CACHE_FIX.md
│       ├── claude-code-review.yml
│       ├── claude.yml
│       ├── cleanup-images.yml.disabled
│       ├── dev-setup-validation.yml
│       ├── docker-publish.yml
│       ├── LATEST_FIXES.md
│       ├── main-optimized.yml.disabled
│       ├── main.yml
│       ├── publish-and-test.yml
│       ├── README_OPTIMIZATION.md
│       ├── release-tag.yml.disabled
│       ├── release.yml
│       ├── roadmap-review-reminder.yml
│       ├── SECRET_CONDITIONAL_FIX.md
│       └── WORKFLOW_FIXES.md
├── .gitignore
├── .mcp.json.backup
├── .mcp.json.template
├── .pyscn
│   ├── .gitignore
│   └── reports
│       └── analyze_20251123_214224.html
├── AGENTS.md
├── archive
│   ├── deployment
│   │   ├── deploy_fastmcp_fixed.sh
│   │   ├── deploy_http_with_mcp.sh
│   │   └── deploy_mcp_v4.sh
│   ├── deployment-configs
│   │   ├── empty_config.yml
│   │   └── smithery.yaml
│   ├── development
│   │   └── test_fastmcp.py
│   ├── docs-removed-2025-08-23
│   │   ├── authentication.md
│   │   ├── claude_integration.md
│   │   ├── claude-code-compatibility.md
│   │   ├── claude-code-integration.md
│   │   ├── claude-code-quickstart.md
│   │   ├── claude-desktop-setup.md
│   │   ├── complete-setup-guide.md
│   │   ├── database-synchronization.md
│   │   ├── development
│   │   │   ├── autonomous-memory-consolidation.md
│   │   │   ├── CLEANUP_PLAN.md
│   │   │   ├── CLEANUP_README.md
│   │   │   ├── CLEANUP_SUMMARY.md
│   │   │   ├── dream-inspired-memory-consolidation.md
│   │   │   ├── hybrid-slm-memory-consolidation.md
│   │   │   ├── mcp-milestone.md
│   │   │   ├── multi-client-architecture.md
│   │   │   ├── test-results.md
│   │   │   └── TIMESTAMP_FIX_SUMMARY.md
│   │   ├── distributed-sync.md
│   │   ├── invocation_guide.md
│   │   ├── macos-intel.md
│   │   ├── master-guide.md
│   │   ├── mcp-client-configuration.md
│   │   ├── multi-client-server.md
│   │   ├── service-installation.md
│   │   ├── sessions
│   │   │   └── MCP_ENHANCEMENT_SESSION_MEMORY_v4.1.0.md
│   │   ├── UBUNTU_SETUP.md
│   │   ├── ubuntu.md
│   │   ├── windows-setup.md
│   │   └── windows.md
│   ├── docs-root-cleanup-2025-08-23
│   │   ├── AWESOME_LIST_SUBMISSION.md
│   │   ├── CLOUDFLARE_IMPLEMENTATION.md
│   │   ├── DOCUMENTATION_ANALYSIS.md
│   │   ├── DOCUMENTATION_CLEANUP_PLAN.md
│   │   ├── DOCUMENTATION_CONSOLIDATION_COMPLETE.md
│   │   ├── LITESTREAM_SETUP_GUIDE.md
│   │   ├── lm_studio_system_prompt.md
│   │   ├── PYTORCH_DOWNLOAD_FIX.md
│   │   └── README-ORIGINAL-BACKUP.md
│   ├── investigations
│   │   └── MACOS_HOOKS_INVESTIGATION.md
│   ├── litestream-configs-v6.3.0
│   │   ├── install_service.sh
│   │   ├── litestream_master_config_fixed.yml
│   │   ├── litestream_master_config.yml
│   │   ├── litestream_replica_config_fixed.yml
│   │   ├── litestream_replica_config.yml
│   │   ├── litestream_replica_simple.yml
│   │   ├── litestream-http.service
│   │   ├── litestream.service
│   │   └── requirements-cloudflare.txt
│   ├── release-notes
│   │   └── release-notes-v7.1.4.md
│   └── setup-development
│       ├── README.md
│       ├── setup_consolidation_mdns.sh
│       ├── STARTUP_SETUP_GUIDE.md
│       └── test_service.sh
├── CHANGELOG-HISTORIC.md
├── CHANGELOG.md
├── claude_commands
│   ├── memory-context.md
│   ├── memory-health.md
│   ├── memory-ingest-dir.md
│   ├── memory-ingest.md
│   ├── memory-recall.md
│   ├── memory-search.md
│   ├── memory-store.md
│   ├── README.md
│   └── session-start.md
├── claude-hooks
│   ├── config.json
│   ├── config.template.json
│   ├── CONFIGURATION.md
│   ├── core
│   │   ├── memory-retrieval.js
│   │   ├── mid-conversation.js
│   │   ├── session-end.js
│   │   ├── session-start.js
│   │   └── topic-change.js
│   ├── debug-pattern-test.js
│   ├── install_claude_hooks_windows.ps1
│   ├── install_hooks.py
│   ├── memory-mode-controller.js
│   ├── MIGRATION.md
│   ├── README-NATURAL-TRIGGERS.md
│   ├── README-phase2.md
│   ├── README.md
│   ├── simple-test.js
│   ├── statusline.sh
│   ├── test-adaptive-weights.js
│   ├── test-dual-protocol-hook.js
│   ├── test-mcp-hook.js
│   ├── test-natural-triggers.js
│   ├── test-recency-scoring.js
│   ├── tests
│   │   ├── integration-test.js
│   │   ├── phase2-integration-test.js
│   │   ├── test-code-execution.js
│   │   ├── test-cross-session.json
│   │   ├── test-session-tracking.json
│   │   └── test-threading.json
│   ├── utilities
│   │   ├── adaptive-pattern-detector.js
│   │   ├── context-formatter.js
│   │   ├── context-shift-detector.js
│   │   ├── conversation-analyzer.js
│   │   ├── dynamic-context-updater.js
│   │   ├── git-analyzer.js
│   │   ├── mcp-client.js
│   │   ├── memory-client.js
│   │   ├── memory-scorer.js
│   │   ├── performance-manager.js
│   │   ├── project-detector.js
│   │   ├── session-tracker.js
│   │   ├── tiered-conversation-monitor.js
│   │   └── version-checker.js
│   └── WINDOWS-SESSIONSTART-BUG.md
├── CLAUDE.md
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── Development-Sprint-November-2025.md
├── docs
│   ├── amp-cli-bridge.md
│   ├── api
│   │   ├── code-execution-interface.md
│   │   ├── memory-metadata-api.md
│   │   ├── PHASE1_IMPLEMENTATION_SUMMARY.md
│   │   ├── PHASE2_IMPLEMENTATION_SUMMARY.md
│   │   ├── PHASE2_REPORT.md
│   │   └── tag-standardization.md
│   ├── architecture
│   │   ├── search-enhancement-spec.md
│   │   └── search-examples.md
│   ├── architecture.md
│   ├── archive
│   │   └── obsolete-workflows
│   │       ├── load_memory_context.md
│   │       └── README.md
│   ├── assets
│   │   └── images
│   │       ├── dashboard-v3.3.0-preview.png
│   │       ├── memory-awareness-hooks-example.png
│   │       ├── project-infographic.svg
│   │       └── README.md
│   ├── CLAUDE_CODE_QUICK_REFERENCE.md
│   ├── cloudflare-setup.md
│   ├── deployment
│   │   ├── docker.md
│   │   ├── dual-service.md
│   │   ├── production-guide.md
│   │   └── systemd-service.md
│   ├── development
│   │   ├── ai-agent-instructions.md
│   │   ├── code-quality
│   │   │   ├── phase-2a-completion.md
│   │   │   ├── phase-2a-handle-get-prompt.md
│   │   │   ├── phase-2a-index.md
│   │   │   ├── phase-2a-install-package.md
│   │   │   └── phase-2b-session-summary.md
│   │   ├── code-quality-workflow.md
│   │   ├── dashboard-workflow.md
│   │   ├── issue-management.md
│   │   ├── pr-review-guide.md
│   │   ├── refactoring-notes.md
│   │   ├── release-checklist.md
│   │   └── todo-tracker.md
│   ├── docker-optimized-build.md
│   ├── document-ingestion.md
│   ├── DOCUMENTATION_AUDIT.md
│   ├── enhancement-roadmap-issue-14.md
│   ├── examples
│   │   ├── analysis-scripts.js
│   │   ├── maintenance-session-example.md
│   │   ├── memory-distribution-chart.jsx
│   │   └── tag-schema.json
│   ├── first-time-setup.md
│   ├── glama-deployment.md
│   ├── guides
│   │   ├── advanced-command-examples.md
│   │   ├── chromadb-migration.md
│   │   ├── commands-vs-mcp-server.md
│   │   ├── mcp-enhancements.md
│   │   ├── mdns-service-discovery.md
│   │   ├── memory-consolidation-guide.md
│   │   ├── migration.md
│   │   ├── scripts.md
│   │   └── STORAGE_BACKENDS.md
│   ├── HOOK_IMPROVEMENTS.md
│   ├── hooks
│   │   └── phase2-code-execution-migration.md
│   ├── http-server-management.md
│   ├── ide-compatability.md
│   ├── IMAGE_RETENTION_POLICY.md
│   ├── images
│   │   └── dashboard-placeholder.md
│   ├── implementation
│   │   ├── health_checks.md
│   │   └── performance.md
│   ├── IMPLEMENTATION_PLAN_HTTP_SSE.md
│   ├── integration
│   │   ├── homebrew.md
│   │   └── multi-client.md
│   ├── integrations
│   │   ├── gemini.md
│   │   ├── groq-bridge.md
│   │   ├── groq-integration-summary.md
│   │   └── groq-model-comparison.md
│   ├── integrations.md
│   ├── legacy
│   │   └── dual-protocol-hooks.md
│   ├── LM_STUDIO_COMPATIBILITY.md
│   ├── maintenance
│   │   └── memory-maintenance.md
│   ├── mastery
│   │   ├── api-reference.md
│   │   ├── architecture-overview.md
│   │   ├── configuration-guide.md
│   │   ├── local-setup-and-run.md
│   │   ├── testing-guide.md
│   │   └── troubleshooting.md
│   ├── migration
│   │   └── code-execution-api-quick-start.md
│   ├── natural-memory-triggers
│   │   ├── cli-reference.md
│   │   ├── installation-guide.md
│   │   └── performance-optimization.md
│   ├── oauth-setup.md
│   ├── pr-graphql-integration.md
│   ├── quick-setup-cloudflare-dual-environment.md
│   ├── README.md
│   ├── remote-configuration-wiki-section.md
│   ├── research
│   │   ├── code-execution-interface-implementation.md
│   │   └── code-execution-interface-summary.md
│   ├── ROADMAP.md
│   ├── sqlite-vec-backend.md
│   ├── statistics
│   │   ├── charts
│   │   │   ├── activity_patterns.png
│   │   │   ├── contributors.png
│   │   │   ├── growth_trajectory.png
│   │   │   ├── monthly_activity.png
│   │   │   └── october_sprint.png
│   │   ├── data
│   │   │   ├── activity_by_day.csv
│   │   │   ├── activity_by_hour.csv
│   │   │   ├── contributors.csv
│   │   │   └── monthly_activity.csv
│   │   ├── generate_charts.py
│   │   └── REPOSITORY_STATISTICS.md
│   ├── technical
│   │   ├── development.md
│   │   ├── memory-migration.md
│   │   ├── migration-log.md
│   │   ├── sqlite-vec-embedding-fixes.md
│   │   └── tag-storage.md
│   ├── testing
│   │   └── regression-tests.md
│   ├── testing-cloudflare-backend.md
│   ├── troubleshooting
│   │   ├── cloudflare-api-token-setup.md
│   │   ├── cloudflare-authentication.md
│   │   ├── general.md
│   │   ├── hooks-quick-reference.md
│   │   ├── pr162-schema-caching-issue.md
│   │   ├── session-end-hooks.md
│   │   └── sync-issues.md
│   └── tutorials
│       ├── advanced-techniques.md
│       ├── data-analysis.md
│       └── demo-session-walkthrough.md
├── examples
│   ├── claude_desktop_config_template.json
│   ├── claude_desktop_config_windows.json
│   ├── claude-desktop-http-config.json
│   ├── config
│   │   └── claude_desktop_config.json
│   ├── http-mcp-bridge.js
│   ├── memory_export_template.json
│   ├── README.md
│   ├── setup
│   │   └── setup_multi_client_complete.py
│   └── start_https_example.sh
├── install_service.py
├── install.py
├── LICENSE
├── NOTICE
├── pyproject.toml
├── pytest.ini
├── README.md
├── run_server.py
├── scripts
│   ├── .claude
│   │   └── settings.local.json
│   ├── archive
│   │   └── check_missing_timestamps.py
│   ├── backup
│   │   ├── backup_memories.py
│   │   ├── backup_sqlite_vec.sh
│   │   ├── export_distributable_memories.sh
│   │   └── restore_memories.py
│   ├── benchmarks
│   │   ├── benchmark_code_execution_api.py
│   │   ├── benchmark_hybrid_sync.py
│   │   └── benchmark_server_caching.py
│   ├── database
│   │   ├── analyze_sqlite_vec_db.py
│   │   ├── check_sqlite_vec_status.py
│   │   ├── db_health_check.py
│   │   └── simple_timestamp_check.py
│   ├── development
│   │   ├── debug_server_initialization.py
│   │   ├── find_orphaned_files.py
│   │   ├── fix_mdns.sh
│   │   ├── fix_sitecustomize.py
│   │   ├── remote_ingest.sh
│   │   ├── setup-git-merge-drivers.sh
│   │   ├── uv-lock-merge.sh
│   │   └── verify_hybrid_sync.py
│   ├── hooks
│   │   └── pre-commit
│   ├── installation
│   │   ├── install_linux_service.py
│   │   ├── install_macos_service.py
│   │   ├── install_uv.py
│   │   ├── install_windows_service.py
│   │   ├── install.py
│   │   ├── setup_backup_cron.sh
│   │   ├── setup_claude_mcp.sh
│   │   └── setup_cloudflare_resources.py
│   ├── linux
│   │   ├── service_status.sh
│   │   ├── start_service.sh
│   │   ├── stop_service.sh
│   │   ├── uninstall_service.sh
│   │   └── view_logs.sh
│   ├── maintenance
│   │   ├── assign_memory_types.py
│   │   ├── check_memory_types.py
│   │   ├── cleanup_corrupted_encoding.py
│   │   ├── cleanup_memories.py
│   │   ├── cleanup_organize.py
│   │   ├── consolidate_memory_types.py
│   │   ├── consolidation_mappings.json
│   │   ├── delete_orphaned_vectors_fixed.py
│   │   ├── fast_cleanup_duplicates_with_tracking.sh
│   │   ├── find_all_duplicates.py
│   │   ├── find_cloudflare_duplicates.py
│   │   ├── find_duplicates.py
│   │   ├── memory-types.md
│   │   ├── README.md
│   │   ├── recover_timestamps_from_cloudflare.py
│   │   ├── regenerate_embeddings.py
│   │   ├── repair_malformed_tags.py
│   │   ├── repair_memories.py
│   │   ├── repair_sqlite_vec_embeddings.py
│   │   ├── repair_zero_embeddings.py
│   │   ├── restore_from_json_export.py
│   │   └── scan_todos.sh
│   ├── migration
│   │   ├── cleanup_mcp_timestamps.py
│   │   ├── legacy
│   │   │   └── migrate_chroma_to_sqlite.py
│   │   ├── mcp-migration.py
│   │   ├── migrate_sqlite_vec_embeddings.py
│   │   ├── migrate_storage.py
│   │   ├── migrate_tags.py
│   │   ├── migrate_timestamps.py
│   │   ├── migrate_to_cloudflare.py
│   │   ├── migrate_to_sqlite_vec.py
│   │   ├── migrate_v5_enhanced.py
│   │   ├── TIMESTAMP_CLEANUP_README.md
│   │   └── verify_mcp_timestamps.py
│   ├── pr
│   │   ├── amp_collect_results.sh
│   │   ├── amp_detect_breaking_changes.sh
│   │   ├── amp_generate_tests.sh
│   │   ├── amp_pr_review.sh
│   │   ├── amp_quality_gate.sh
│   │   ├── amp_suggest_fixes.sh
│   │   ├── auto_review.sh
│   │   ├── detect_breaking_changes.sh
│   │   ├── generate_tests.sh
│   │   ├── lib
│   │   │   └── graphql_helpers.sh
│   │   ├── quality_gate.sh
│   │   ├── resolve_threads.sh
│   │   ├── run_pyscn_analysis.sh
│   │   ├── run_quality_checks.sh
│   │   ├── thread_status.sh
│   │   └── watch_reviews.sh
│   ├── quality
│   │   ├── fix_dead_code_install.sh
│   │   ├── phase1_dead_code_analysis.md
│   │   ├── phase2_complexity_analysis.md
│   │   ├── README_PHASE1.md
│   │   ├── README_PHASE2.md
│   │   ├── track_pyscn_metrics.sh
│   │   └── weekly_quality_review.sh
│   ├── README.md
│   ├── run
│   │   ├── run_mcp_memory.sh
│   │   ├── run-with-uv.sh
│   │   └── start_sqlite_vec.sh
│   ├── run_memory_server.py
│   ├── server
│   │   ├── check_http_server.py
│   │   ├── check_server_health.py
│   │   ├── memory_offline.py
│   │   ├── preload_models.py
│   │   ├── run_http_server.py
│   │   ├── run_memory_server.py
│   │   ├── start_http_server.bat
│   │   └── start_http_server.sh
│   ├── service
│   │   ├── deploy_dual_services.sh
│   │   ├── install_http_service.sh
│   │   ├── mcp-memory-http.service
│   │   ├── mcp-memory.service
│   │   ├── memory_service_manager.sh
│   │   ├── service_control.sh
│   │   ├── service_utils.py
│   │   └── update_service.sh
│   ├── sync
│   │   ├── check_drift.py
│   │   ├── claude_sync_commands.py
│   │   ├── export_memories.py
│   │   ├── import_memories.py
│   │   ├── litestream
│   │   │   ├── apply_local_changes.sh
│   │   │   ├── enhanced_memory_store.sh
│   │   │   ├── init_staging_db.sh
│   │   │   ├── io.litestream.replication.plist
│   │   │   ├── manual_sync.sh
│   │   │   ├── memory_sync.sh
│   │   │   ├── pull_remote_changes.sh
│   │   │   ├── push_to_remote.sh
│   │   │   ├── README.md
│   │   │   ├── resolve_conflicts.sh
│   │   │   ├── setup_local_litestream.sh
│   │   │   ├── setup_remote_litestream.sh
│   │   │   ├── staging_db_init.sql
│   │   │   ├── stash_local_changes.sh
│   │   │   ├── sync_from_remote_noconfig.sh
│   │   │   └── sync_from_remote.sh
│   │   ├── README.md
│   │   ├── safe_cloudflare_update.sh
│   │   ├── sync_memory_backends.py
│   │   └── sync_now.py
│   ├── testing
│   │   ├── run_complete_test.py
│   │   ├── run_memory_test.sh
│   │   ├── simple_test.py
│   │   ├── test_cleanup_logic.py
│   │   ├── test_cloudflare_backend.py
│   │   ├── test_docker_functionality.py
│   │   ├── test_installation.py
│   │   ├── test_mdns.py
│   │   ├── test_memory_api.py
│   │   ├── test_memory_simple.py
│   │   ├── test_migration.py
│   │   ├── test_search_api.py
│   │   ├── test_sqlite_vec_embeddings.py
│   │   ├── test_sse_events.py
│   │   ├── test-connection.py
│   │   └── test-hook.js
│   ├── utils
│   │   ├── claude_commands_utils.py
│   │   ├── generate_personalized_claude_md.sh
│   │   ├── groq
│   │   ├── groq_agent_bridge.py
│   │   ├── list-collections.py
│   │   ├── memory_wrapper_uv.py
│   │   ├── query_memories.py
│   │   ├── smithery_wrapper.py
│   │   ├── test_groq_bridge.sh
│   │   └── uv_wrapper.py
│   └── validation
│       ├── check_dev_setup.py
│       ├── check_documentation_links.py
│       ├── diagnose_backend_config.py
│       ├── validate_configuration_complete.py
│       ├── validate_memories.py
│       ├── validate_migration.py
│       ├── validate_timestamp_integrity.py
│       ├── verify_environment.py
│       ├── verify_pytorch_windows.py
│       └── verify_torch.py
├── SECURITY.md
├── selective_timestamp_recovery.py
├── SPONSORS.md
├── src
│   └── mcp_memory_service
│       ├── __init__.py
│       ├── api
│       │   ├── __init__.py
│       │   ├── client.py
│       │   ├── operations.py
│       │   ├── sync_wrapper.py
│       │   └── types.py
│       ├── backup
│       │   ├── __init__.py
│       │   └── scheduler.py
│       ├── cli
│       │   ├── __init__.py
│       │   ├── ingestion.py
│       │   ├── main.py
│       │   └── utils.py
│       ├── config.py
│       ├── consolidation
│       │   ├── __init__.py
│       │   ├── associations.py
│       │   ├── base.py
│       │   ├── clustering.py
│       │   ├── compression.py
│       │   ├── consolidator.py
│       │   ├── decay.py
│       │   ├── forgetting.py
│       │   ├── health.py
│       │   └── scheduler.py
│       ├── dependency_check.py
│       ├── discovery
│       │   ├── __init__.py
│       │   ├── client.py
│       │   └── mdns_service.py
│       ├── embeddings
│       │   ├── __init__.py
│       │   └── onnx_embeddings.py
│       ├── ingestion
│       │   ├── __init__.py
│       │   ├── base.py
│       │   ├── chunker.py
│       │   ├── csv_loader.py
│       │   ├── json_loader.py
│       │   ├── pdf_loader.py
│       │   ├── registry.py
│       │   ├── semtools_loader.py
│       │   └── text_loader.py
│       ├── lm_studio_compat.py
│       ├── mcp_server.py
│       ├── models
│       │   ├── __init__.py
│       │   └── memory.py
│       ├── server.py
│       ├── services
│       │   ├── __init__.py
│       │   └── memory_service.py
│       ├── storage
│       │   ├── __init__.py
│       │   ├── base.py
│       │   ├── cloudflare.py
│       │   ├── factory.py
│       │   ├── http_client.py
│       │   ├── hybrid.py
│       │   └── sqlite_vec.py
│       ├── sync
│       │   ├── __init__.py
│       │   ├── exporter.py
│       │   ├── importer.py
│       │   └── litestream_config.py
│       ├── utils
│       │   ├── __init__.py
│       │   ├── cache_manager.py
│       │   ├── content_splitter.py
│       │   ├── db_utils.py
│       │   ├── debug.py
│       │   ├── document_processing.py
│       │   ├── gpu_detection.py
│       │   ├── hashing.py
│       │   ├── http_server_manager.py
│       │   ├── port_detection.py
│       │   ├── system_detection.py
│       │   └── time_parser.py
│       └── web
│           ├── __init__.py
│           ├── api
│           │   ├── __init__.py
│           │   ├── analytics.py
│           │   ├── backup.py
│           │   ├── consolidation.py
│           │   ├── documents.py
│           │   ├── events.py
│           │   ├── health.py
│           │   ├── manage.py
│           │   ├── mcp.py
│           │   ├── memories.py
│           │   ├── search.py
│           │   └── sync.py
│           ├── app.py
│           ├── dependencies.py
│           ├── oauth
│           │   ├── __init__.py
│           │   ├── authorization.py
│           │   ├── discovery.py
│           │   ├── middleware.py
│           │   ├── models.py
│           │   ├── registration.py
│           │   └── storage.py
│           ├── sse.py
│           └── static
│               ├── app.js
│               ├── index.html
│               ├── README.md
│               ├── sse_test.html
│               └── style.css
├── start_http_debug.bat
├── start_http_server.sh
├── test_document.txt
├── test_version_checker.js
├── tests
│   ├── __init__.py
│   ├── api
│   │   ├── __init__.py
│   │   ├── test_compact_types.py
│   │   └── test_operations.py
│   ├── bridge
│   │   ├── mock_responses.js
│   │   ├── package-lock.json
│   │   ├── package.json
│   │   └── test_http_mcp_bridge.js
│   ├── conftest.py
│   ├── consolidation
│   │   ├── __init__.py
│   │   ├── conftest.py
│   │   ├── test_associations.py
│   │   ├── test_clustering.py
│   │   ├── test_compression.py
│   │   ├── test_consolidator.py
│   │   ├── test_decay.py
│   │   └── test_forgetting.py
│   ├── contracts
│   │   └── api-specification.yml
│   ├── integration
│   │   ├── package-lock.json
│   │   ├── package.json
│   │   ├── test_api_key_fallback.py
│   │   ├── test_api_memories_chronological.py
│   │   ├── test_api_tag_time_search.py
│   │   ├── test_api_with_memory_service.py
│   │   ├── test_bridge_integration.js
│   │   ├── test_cli_interfaces.py
│   │   ├── test_cloudflare_connection.py
│   │   ├── test_concurrent_clients.py
│   │   ├── test_data_serialization_consistency.py
│   │   ├── test_http_server_startup.py
│   │   ├── test_mcp_memory.py
│   │   ├── test_mdns_integration.py
│   │   ├── test_oauth_basic_auth.py
│   │   ├── test_oauth_flow.py
│   │   ├── test_server_handlers.py
│   │   └── test_store_memory.py
│   ├── performance
│   │   ├── test_background_sync.py
│   │   └── test_hybrid_live.py
│   ├── README.md
│   ├── smithery
│   │   └── test_smithery.py
│   ├── sqlite
│   │   └── simple_sqlite_vec_test.py
│   ├── test_client.py
│   ├── test_content_splitting.py
│   ├── test_database.py
│   ├── test_hybrid_cloudflare_limits.py
│   ├── test_hybrid_storage.py
│   ├── test_memory_ops.py
│   ├── test_semantic_search.py
│   ├── test_sqlite_vec_storage.py
│   ├── test_time_parser.py
│   ├── test_timestamp_preservation.py
│   ├── timestamp
│   │   ├── test_hook_vs_manual_storage.py
│   │   ├── test_issue99_final_validation.py
│   │   ├── test_search_retrieval_inconsistency.py
│   │   ├── test_timestamp_issue.py
│   │   └── test_timestamp_simple.py
│   └── unit
│       ├── conftest.py
│       ├── test_cloudflare_storage.py
│       ├── test_csv_loader.py
│       ├── test_fastapi_dependencies.py
│       ├── test_import.py
│       ├── test_json_loader.py
│       ├── test_mdns_simple.py
│       ├── test_mdns.py
│       ├── test_memory_service.py
│       ├── test_memory.py
│       ├── test_semtools_loader.py
│       ├── test_storage_interface_compatibility.py
│       └── test_tag_time_filtering.py
├── tools
│   ├── docker
│   │   ├── DEPRECATED.md
│   │   ├── docker-compose.http.yml
│   │   ├── docker-compose.pythonpath.yml
│   │   ├── docker-compose.standalone.yml
│   │   ├── docker-compose.uv.yml
│   │   ├── docker-compose.yml
│   │   ├── docker-entrypoint-persistent.sh
│   │   ├── docker-entrypoint-unified.sh
│   │   ├── docker-entrypoint.sh
│   │   ├── Dockerfile
│   │   ├── Dockerfile.glama
│   │   ├── Dockerfile.slim
│   │   ├── README.md
│   │   └── test-docker-modes.sh
│   └── README.md
└── uv.lock
```

# Files

--------------------------------------------------------------------------------
/scripts/sync/litestream/stash_local_changes.sh:
--------------------------------------------------------------------------------

```bash
#!/bin/bash
# Stash local memory changes to staging database before sync

MAIN_DB="/Users/hkr/Library/Application Support/mcp-memory/sqlite_vec.db"
STAGING_DB="/Users/hkr/Library/Application Support/mcp-memory/sqlite_vec_staging.db"
HOSTNAME=$(hostname)

echo "$(date): Stashing local changes..."

if [ ! -f "$MAIN_DB" ]; then
    echo "$(date): No main database found at $MAIN_DB"
    exit 1
fi

if [ ! -f "$STAGING_DB" ]; then
    echo "$(date): Staging database not found. Run ./init_staging_db.sh first"
    exit 1
fi

# Get the last sync timestamp from the staging database
LAST_SYNC=$(sqlite3 "$STAGING_DB" "SELECT value FROM sync_status WHERE key = 'last_local_sync';" 2>/dev/null || echo "")

# If there is no recorded sync, fall back to a reasonable window (the last 7 days)
if [ -z "$LAST_SYNC" ]; then
    LAST_SYNC="datetime('now', '-7 days')"
else
    LAST_SYNC="'$LAST_SYNC'"
fi

echo "$(date): Looking for changes since: $LAST_SYNC"

# Find memories that might be new or modified locally.
# Note: this assumes sqlite_vec.db has a schema similar to the staging
# database; adapt the table and column names below to the actual schema.

# First, verify that the main database schema is readable
echo "$(date): Analyzing main database schema..."
MAIN_SCHEMA=$(sqlite3 "$MAIN_DB" ".schema" 2>/dev/null | head -10)

# After a pipeline, $? reflects only the last command (head), so test the
# captured output instead of the exit status
if [ -z "$MAIN_SCHEMA" ]; then
    echo "$(date): ERROR: Cannot read main database schema"
    exit 1
fi

echo "$(date): Main database schema detected"

# Query recent memories into a temp file (adjust based on the actual schema)
TEMP_QUERY_RESULT=$(mktemp)

# Try table names that might exist in sqlite_vec databases.
# `.tables` prints names in columns, so match whole words rather than lines.
for TABLE in memories memory_entries memories_table memory_items; do
    if sqlite3 "$MAIN_DB" ".tables" | grep -qw "$TABLE"; then
        echo "$(date): Found table: $TABLE"

        # Try to extract memories (adjust columns based on the actual schema)
        sqlite3 "$MAIN_DB" "
        SELECT
            COALESCE(id, rowid) as id,
            content,
            COALESCE(content_hash, '') as content_hash,
            COALESCE(tags, '[]') as tags,
            COALESCE(metadata, '{}') as metadata,
            COALESCE(memory_type, 'note') as memory_type,
            COALESCE(created_at, datetime('now')) as created_at
        FROM $TABLE
        WHERE datetime(COALESCE(updated_at, created_at, datetime('now'))) > $LAST_SYNC
        LIMIT 100;
        " 2>/dev/null > "$TEMP_QUERY_RESULT"

        if [ -s "$TEMP_QUERY_RESULT" ]; then
            break
        fi
    fi
done

# Count changes found
CHANGE_COUNT=$(wc -l < "$TEMP_QUERY_RESULT" | tr -d ' ')

if [ "$CHANGE_COUNT" -eq 0 ]; then
    echo "$(date): No local changes found to stash"
    rm -f "$TEMP_QUERY_RESULT"
    exit 0
fi

echo "$(date): Found $CHANGE_COUNT potential local changes"

# Process each change and add it to staging
while IFS='|' read -r id content content_hash tags metadata memory_type created_at; do
    # Generate a content hash if the source row lacks one
    if [ -z "$content_hash" ]; then
        content_hash=$(echo -n "$content" | shasum -a 256 | cut -d' ' -f1)
    fi

    # Escape single quotes for SQL
    content_escaped=$(echo "$content" | sed "s/'/''/g")
    tags_escaped=$(echo "$tags" | sed "s/'/''/g")
    metadata_escaped=$(echo "$metadata" | sed "s/'/''/g")

    # Insert into the staging database
    if sqlite3 "$STAGING_DB" "
    INSERT OR REPLACE INTO staged_memories (
        id, content, content_hash, tags, metadata, memory_type,
        operation, staged_at, original_created_at, source_machine
    ) VALUES (
        '$id',
        '$content_escaped',
        '$content_hash',
        '$tags_escaped',
        '$metadata_escaped',
        '$memory_type',
        'INSERT',
        datetime('now'),
        '$created_at',
        '$HOSTNAME'
    );
    "; then
        echo "$(date): Staged change: ${content:0:50}..."
    else
        echo "$(date): ERROR: Failed to stage change for ID: $id"
    fi

done < "$TEMP_QUERY_RESULT"

# Record this sync point
sqlite3 "$STAGING_DB" "
UPDATE sync_status
SET value = datetime('now'), updated_at = CURRENT_TIMESTAMP
WHERE key = 'last_local_sync';
"

# Show staging status
STAGED_COUNT=$(sqlite3 "$STAGING_DB" "SELECT value FROM sync_status WHERE key = 'total_staged_changes';" 2>/dev/null || echo "0")

echo "$(date): Stashing completed"
echo "$(date): Total staged changes: $STAGED_COUNT"
echo "$(date): New changes stashed: $CHANGE_COUNT"

# Cleanup
rm -f "$TEMP_QUERY_RESULT"
```
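
The per-row staging step above builds SQL strings by escaping single quotes with `sed`, which is fragile for content containing pipes or newlines. A safer equivalent (a sketch, not part of this repository) uses Python's `sqlite3` parameter binding, which sidesteps quoting entirely; the `stage_rows` function name is hypothetical, while the `staged_memories` columns mirror the script's schema:

```python
import hashlib
import socket
import sqlite3
from datetime import datetime, timezone

def stage_rows(staging_db: str, rows: list) -> int:
    """Insert changed rows into staged_memories using bound parameters."""
    hostname = socket.gethostname()
    staged = 0
    conn = sqlite3.connect(staging_db)
    try:
        with conn:  # one transaction for the whole batch
            for row in rows:
                content = row["content"]
                # Generate a content hash if the source row lacks one
                content_hash = row.get("content_hash") or hashlib.sha256(
                    content.encode("utf-8")).hexdigest()
                conn.execute(
                    """INSERT OR REPLACE INTO staged_memories
                       (id, content, content_hash, tags, metadata, memory_type,
                        operation, staged_at, original_created_at, source_machine)
                       VALUES (?, ?, ?, ?, ?, ?, 'INSERT', ?, ?, ?)""",
                    (row["id"], content, content_hash,
                     row.get("tags", "[]"), row.get("metadata", "{}"),
                     row.get("memory_type", "note"),
                     datetime.now(timezone.utc).isoformat(),
                     row.get("created_at"), hostname),
                )
                staged += 1
    finally:
        conn.close()
    return staged
```

Because the driver binds values rather than splicing them into the statement, content such as `it's a test` needs no escaping and cannot break the query.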

--------------------------------------------------------------------------------
/src/mcp_memory_service/ingestion/registry.py:
--------------------------------------------------------------------------------

```python
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
Document loader registry for automatic format detection and loader selection.
"""

import logging
import mimetypes
from pathlib import Path
from typing import Dict, Type, List, Optional

from .base import DocumentLoader

logger = logging.getLogger(__name__)

# Registry of document loaders by file extension
_LOADER_REGISTRY: Dict[str, Type[DocumentLoader]] = {}

# Supported file formats
SUPPORTED_FORMATS = {
    'pdf': 'PDF documents',
    'docx': 'Word documents (requires semtools)',
    'doc': 'Word documents (requires semtools)',
    'pptx': 'PowerPoint presentations (requires semtools)',
    'xlsx': 'Excel spreadsheets (requires semtools)',
    'txt': 'Plain text files',
    'md': 'Markdown documents',
    'json': 'JSON data files',
    'csv': 'CSV data files',
}


def register_loader(loader_class: Type[DocumentLoader], extensions: List[str]) -> None:
    """
    Register a document loader for specific file extensions.

    Args:
        loader_class: The DocumentLoader subclass to register
        extensions: List of file extensions this loader handles (without dots)
    """
    for ext in extensions:
        ext = ext.lower().lstrip('.')
        _LOADER_REGISTRY[ext] = loader_class
        logger.debug(f"Registered {loader_class.__name__} for .{ext} files")


def get_loader_for_file(file_path: Path) -> Optional[DocumentLoader]:
    """
    Get appropriate document loader for a file.

    Args:
        file_path: Path to the file

    Returns:
        DocumentLoader instance that can handle the file, or None if unsupported
    """
    if not file_path.exists():
        logger.warning(f"File does not exist: {file_path}")
        return None

    # Try by file extension first
    extension = file_path.suffix.lower().lstrip('.')
    if extension in _LOADER_REGISTRY:
        loader_class = _LOADER_REGISTRY[extension]
        loader = loader_class()
        if loader.can_handle(file_path):
            return loader

    # Try by MIME type detection
    mime_type, _ = mimetypes.guess_type(str(file_path))
    if mime_type:
        loader = _get_loader_by_mime_type(mime_type)
        if loader and loader.can_handle(file_path):
            return loader

    # Try all registered loaders as fallback
    for loader_class in _LOADER_REGISTRY.values():
        loader = loader_class()
        if loader.can_handle(file_path):
            logger.debug(f"Found fallback loader {loader_class.__name__} for {file_path}")
            return loader

    logger.warning(f"No suitable loader found for file: {file_path}")
    return None


def _get_loader_by_mime_type(mime_type: str) -> Optional[DocumentLoader]:
    """
    Get loader based on MIME type.

    Args:
        mime_type: MIME type string

    Returns:
        DocumentLoader instance or None
    """
    mime_to_extension = {
        'application/pdf': 'pdf',
        'text/plain': 'txt',
        'text/markdown': 'md',
        'application/json': 'json',
        'text/csv': 'csv',
    }

    extension = mime_to_extension.get(mime_type)
    if extension and extension in _LOADER_REGISTRY:
119 |         loader_class = _LOADER_REGISTRY[extension]
120 |         return loader_class()
121 |     
122 |     return None
123 | 
124 | 
125 | def get_supported_extensions() -> List[str]:
126 |     """
127 |     Get list of all supported file extensions.
128 |     
129 |     Returns:
130 |         List of supported extensions (without dots)
131 |     """
132 |     return list(_LOADER_REGISTRY.keys())
133 | 
134 | 
135 | def is_supported_file(file_path: Path) -> bool:
136 |     """
137 |     Check if a file format is supported.
138 |     
139 |     Args:
140 |         file_path: Path to check
141 |         
142 |     Returns:
143 |         True if file format is supported
144 |     """
145 |     return get_loader_for_file(file_path) is not None
146 | 
147 | 
148 | def list_registered_loaders() -> Dict[str, str]:
149 |     """
150 |     Get mapping of extensions to loader class names.
151 |     
152 |     Returns:
153 |         Dictionary mapping extensions to loader class names
154 |     """
155 |     return {ext: loader_class.__name__ for ext, loader_class in _LOADER_REGISTRY.items()}
```

--------------------------------------------------------------------------------
/src/mcp_memory_service/utils/document_processing.py:
--------------------------------------------------------------------------------

```python
  1 | # Copyright 2024 Heinrich Krupp
  2 | #
  3 | # Licensed under the Apache License, Version 2.0 (the "License");
  4 | # you may not use this file except in compliance with the License.
  5 | # You may obtain a copy of the License at
  6 | #
  7 | #     http://www.apache.org/licenses/LICENSE-2.0
  8 | #
  9 | # Unless required by applicable law or agreed to in writing, software
 10 | # distributed under the License is distributed on an "AS IS" BASIS,
 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 12 | # See the License for the specific language governing permissions and
 13 | # limitations under the License.
 14 | 
 15 | """
 16 | Utilities for processing document chunks into memories.
 17 | """
 18 | 
 19 | from typing import List, Dict, Any, Optional, Tuple
 20 | import logging
 21 | 
 22 | from ..models.memory import Memory
 23 | from . import generate_content_hash
 24 | 
 25 | logger = logging.getLogger(__name__)
 26 | 
 27 | 
 28 | def create_memory_from_chunk(
 29 |     chunk: Any,
 30 |     base_tags: List[str],
 31 |     memory_type: str = "document",
 32 |     context_tags: Optional[Dict[str, str]] = None,
 33 |     extra_metadata: Optional[Dict[str, Any]] = None
 34 | ) -> Memory:
 35 |     """
 36 |     Create a Memory object from a document chunk with tag and metadata processing.
 37 | 
 38 |     Args:
 39 |         chunk: Document chunk object with content, metadata, and chunk_index
 40 |         base_tags: Base tags to apply to the memory
 41 |         memory_type: Type of memory (default: "document")
 42 |         context_tags: Additional context-specific tags as key-value pairs
 43 |                      (e.g., {"source_dir": "docs", "file_type": "pdf"})
 44 |         extra_metadata: Additional metadata to merge into chunk metadata
 45 | 
 46 |     Returns:
 47 |         Memory object ready for storage
 48 | 
 49 |     Example:
 50 |         >>> memory = create_memory_from_chunk(
 51 |         ...     chunk,
 52 |         ...     base_tags=["documentation"],
 53 |         ...     context_tags={"source_dir": "docs", "file_type": "pdf"},
 54 |         ...     extra_metadata={"upload_id": "batch123"}
 55 |         ... )
 56 |     """
 57 |     # Build tag list
 58 |     all_tags = list(base_tags)
 59 | 
 60 |     # Add context-specific tags
 61 |     if context_tags:
 62 |         for key, value in context_tags.items():
 63 |             all_tags.append(f"{key}:{value}")
 64 | 
 65 |     # Handle chunk metadata tags (can be string or list)
 66 |     if chunk.metadata and chunk.metadata.get('tags'):
 67 |         chunk_tags = chunk.metadata['tags']
 68 |         if isinstance(chunk_tags, str):
 69 |             # Split comma-separated string into list
 70 |             chunk_tags = [tag.strip() for tag in chunk_tags.split(',') if tag.strip()]
 71 |         all_tags.extend(chunk_tags)
 72 | 
 73 |     # Prepare metadata
 74 |     chunk_metadata = chunk.metadata.copy() if chunk.metadata else {}
 75 |     if extra_metadata:
 76 |         chunk_metadata.update(extra_metadata)
 77 | 
 78 |     # Create and return memory object
 79 |     return Memory(
 80 |         content=chunk.content,
 81 |         content_hash=generate_content_hash(chunk.content, chunk_metadata),
 82 |         tags=list(set(all_tags)),  # Remove duplicates
 83 |         memory_type=memory_type,
 84 |         metadata=chunk_metadata
 85 |     )
 86 | 
 87 | 
 88 | async def _process_and_store_chunk(
 89 |     chunk: Any,
 90 |     storage: Any,
 91 |     file_name: str,
 92 |     base_tags: List[str],
 93 |     context_tags: Dict[str, str],
 94 |     memory_type: str = "document",
 95 |     extra_metadata: Optional[Dict[str, Any]] = None
 96 | ) -> Tuple[bool, Optional[str]]:
 97 |     """
 98 |     Process a document chunk and store it as a memory.
 99 | 
100 |     This consolidates the common pattern of creating a memory from a chunk
101 |     and storing it to the database across multiple ingestion entry points.
102 | 
103 |     Args:
104 |         chunk: Document chunk with content and metadata
105 |         storage: Storage backend instance
106 |         file_name: Name of the source file (for error messages)
107 |         base_tags: Base tags to apply to the memory
108 |         context_tags: Context-specific tags (e.g., source_dir, file_type)
109 |         memory_type: Type of memory (default: "document")
110 |         extra_metadata: Additional metadata to merge into chunk metadata
111 | 
112 |     Returns:
113 |         Tuple of (success: bool, error: Optional[str])
114 |             - (True, None) if stored successfully
115 |             - (False, error_message) if storage failed
116 |     """
117 |     try:
118 |         # Create memory from chunk with context
119 |         memory = create_memory_from_chunk(
120 |             chunk,
121 |             base_tags=base_tags,
122 |             memory_type=memory_type,
123 |             context_tags=context_tags,
124 |             extra_metadata=extra_metadata
125 |         )
126 | 
127 |         # Store the memory
128 |         success, error = await storage.store(memory)
129 |         if not success:
130 |             return False, f"{file_name} chunk {chunk.chunk_index}: {error}"
131 |         return True, None
132 | 
133 |     except Exception as e:
134 |         return False, f"{file_name} chunk {chunk.chunk_index}: {str(e)}"
135 | 
```
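The tag handling in `create_memory_from_chunk` (context tags become `key:value` strings, chunk tags may arrive as a comma-separated string, duplicates are dropped) can be sketched on its own:

```python
from typing import Dict, List, Optional, Union

def merge_tags(
    base_tags: List[str],
    context_tags: Optional[Dict[str, str]] = None,
    chunk_tags: Union[str, List[str], None] = None,
) -> List[str]:
    """Illustrative mirror of the tag merging in create_memory_from_chunk."""
    all_tags = list(base_tags)
    if context_tags:
        all_tags.extend(f"{key}:{value}" for key, value in context_tags.items())
    if isinstance(chunk_tags, str):
        # Comma-separated string form, as stored in chunk.metadata['tags']
        chunk_tags = [tag.strip() for tag in chunk_tags.split(',') if tag.strip()]
    if chunk_tags:
        all_tags.extend(chunk_tags)
    return sorted(set(all_tags))  # set() dedupes, as in the original

print(merge_tags(
    ["documentation"],
    {"source_dir": "docs", "file_type": "pdf"},
    "intro, setup, intro",
))
# ['documentation', 'file_type:pdf', 'intro', 'setup', 'source_dir:docs']
```

The original returns `list(set(all_tags))`, so ordering is not guaranteed; the sketch sorts only to make the output deterministic.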

--------------------------------------------------------------------------------
/docs/troubleshooting/pr162-schema-caching-issue.md:
--------------------------------------------------------------------------------

```markdown
  1 | # PR #162 Fix Troubleshooting - Comma-Separated Tags Issue
  2 | 
  3 | ## Issue
  4 | After PR #162 was merged (adding support for comma-separated tags), users still saw the error:
  5 | ```
  6 | Input validation error: 'tag1,tag2,tag3' is not of type 'array'
  7 | ```
  8 | 
  9 | ## Root Cause Analysis
 10 | 
 11 | ### What PR #162 Fixed
 12 | - **File**: `src/mcp_memory_service/server.py` lines 1320-1337
 13 | - **Fix**: Changed `tags` schema from requiring array to accepting `oneOf`:
 14 |   ```json
 15 |   "tags": {
 16 |     "oneOf": [
 17 |       {"type": "array", "items": {"type": "string"}},
 18 |       {"type": "string", "description": "Tags as comma-separated string"}
 19 |     ]
 20 |   }
 21 |   ```
 22 | - **Server Code**: Lines 2076-2081 normalize tags from string to array
 23 | 
 24 | ### Why Error Persisted
 25 | 
 26 | 1. **MCP Client Schema Caching**: Claude Code's MCP client caches tool schemas when it first connects
 27 | 2. **Stale Server Processes**: MCP server processes continued running with old code:
 28 |    - Old process started at 10:43 (before git pull/merge)
 29 |    - New code pulled but server not restarted
 30 | 3. **HTTP vs MCP Servers**: HTTP server restart doesn't affect MCP server processes
 31 | 4. **Validation Layer**: JSONSchema validation happens **client-side** before request reaches server
 32 | 
 33 | ## Evidence
 34 | 
 35 | ### Server Processes Found
 36 | ```
 37 | PID 68270: Started 10:43 (OLD - before PR merge)
 38 | PID 70013: Started 10:44 (OLD - before PR merge)
 39 | PID 117228: HTTP server restarted 11:51 (NEW - has fix)
 40 | ```
 41 | 
 42 | ### Error Source
 43 | - Error message format: `'value' is not of type 'array'`
 44 | - Source: `jsonschema` library (Python package)
 45 | - Layer: **Client-side validation** in Claude Code's MCP client
 46 | 
 47 | ### Timeline
 48 | - **Oct 20, 2025 17:22**: PR #162 merged
 49 | - **Oct 21, 2025 10:48**: HTTP server started (unknown which version)
 50 | - **Oct 21, 2025 11:51**: HTTP server restarted with latest code
 51 | - **Oct 21, 2025 11:5x**: MCP reconnection in Claude Code
 52 | 
 53 | ## Solution
 54 | 
 55 | ### Immediate Fix
 56 | ```bash
 57 | # In Claude Code, run:
 58 | /mcp
 59 | 
 60 | # This forces reconnection and:
 61 | # 1. Terminates old MCP server process
 62 | # 2. Starts new MCP server with latest code
 63 | # 3. Re-fetches tool schemas (including updated tags schema)
 64 | # 4. Clears client-side schema cache
 65 | ```
 66 | 
 67 | ### Verification Steps
 68 | After reconnection:
 69 | 1. Check MCP server process started after git pull/merge time
 70 | 2. Test with comma-separated tags: `{"tags": "tag1,tag2,tag3"}`
 71 | 3. Test with array tags: `{"tags": ["tag1", "tag2", "tag3"]}`
 72 | 4. Both should work without validation errors
 73 | 
 74 | ## Prevention for Future PRs
 75 | 
 76 | ### When Schema Changes are Merged
 77 | 1. **Restart HTTP Server** (if using HTTP protocol):
 78 |    ```bash
 79 |    systemctl --user restart mcp-memory-http.service
 80 |    ```
 81 | 
 82 | 2. **Reconnect MCP in Claude Code** (if using MCP protocol):
 83 |    ```
 84 |    /mcp
 85 |    ```
 86 |    Or fully restart Claude Code application
 87 | 
 88 | 3. **Check Process Age**:
 89 |    ```bash
 90 |    ps aux | grep "memory.*server" | grep -v grep
 91 |    # Ensure start time is AFTER the git pull
 92 |    ```
 93 | 
 94 | ### For Contributors
 95 | When merging PRs that change tool schemas:
 96 | 1. Add note in PR description: "Requires MCP reconnection after deployment"
 97 | 2. Update CHANGELOG with reconnection instructions
 98 | 3. Consider automated server restart in deployment scripts
 99 | 
100 | ## Key Learnings
101 | 
102 | 1. **Client-side validation**: MCP clients validate against cached schemas
103 | 2. **Multiple server processes**: HTTP and MCP servers are separate
104 | 3. **Schema propagation**: New schemas only available after reconnection
105 | 4. **Git pull != Code reload**: Running processes don't auto-reload
106 | 5. **Troubleshooting order**:
107 |    - Check PR merge time
108 |    - Check server process start time
109 |    - Check git log on running server's code
110 |    - Restart/reconnect if process predates code change
111 | 
112 | ## Related Files
113 | - Server schema: `src/mcp_memory_service/server.py:1320-1337`
114 | - Server handler: `src/mcp_memory_service/server.py:2076-2081`
115 | - PR: https://github.com/doobidoo/mcp-memory-service/pull/162
116 | - Issue: (original issue that reported comma-separated tags not working)
117 | 
118 | ## Quick Reference Card
119 | 
120 | ### Symptom
121 | ✗ Error persists after a PR is merged: "Input validation error: 'X' is not of type 'Y'"
122 | 
123 | ### Diagnosis
124 | ```bash
125 | # 1. Check when PR was merged
126 | gh pr view <PR_NUMBER> --json mergedAt
127 | 
128 | # 2. Check when server process started
129 | ps aux | grep "memory.*server" | grep -v grep
130 | 
131 | # 3. Compare times - if server started BEFORE merge, that's the issue
132 | ```
133 | 
134 | ### Fix
135 | ```bash
136 | # In Claude Code:
137 | /mcp
138 | 
139 | # Or restart systemd service:
140 | systemctl --user restart mcp-memory-http.service
141 | ```
142 | 
143 | ### Verify
144 | ```bash
145 | # Check new server process exists with recent start time
146 | ps aux | grep "memory.*server" | grep -v grep
147 | 
148 | # Test the fixed functionality
149 | ```
150 | 
151 | ## Date
152 | - Analyzed: October 21, 2025
153 | - PR Merged: October 20, 2025 17:22 UTC
154 | - Issue: Schema caching in MCP client after schema update
155 | 
```
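The server-side normalization referenced above (`server.py` lines 2076-2081) is not reproduced on this page; the sketch below shows the behavior both input shapes must produce once PR #162 is active. It is a hedged approximation, not the actual server code.

```python
from typing import List, Optional, Union

def normalize_tags(tags: Union[str, List[str], None]) -> List[str]:
    """Accept both the array form and the comma-separated string form of tags."""
    if isinstance(tags, str):
        # String form accepted since PR #162 via the oneOf schema
        return [tag.strip() for tag in tags.split(',') if tag.strip()]
    return list(tags or [])

print(normalize_tags("tag1, tag2,tag3"))  # ['tag1', 'tag2', 'tag3']
print(normalize_tags(["tag1", "tag2"]))   # ['tag1', 'tag2']
print(normalize_tags(None))               # []
```

Both verification inputs from the steps above normalize to the same array, which is why reconnecting (and thus re-fetching the `oneOf` schema) is the only missing piece.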

--------------------------------------------------------------------------------
/docs/integrations/groq-model-comparison.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Groq Model Comparison for Code Quality Analysis
  2 | 
  3 | ## Available Models
  4 | 
  5 | ### 1. llama-3.3-70b-versatile (Default)
  6 | **Best for:** General-purpose code analysis with detailed explanations
  7 | 
  8 | **Characteristics:**
  9 | - ✅ Comprehensive, detailed responses
 10 | - ✅ Thorough breakdown of complexity factors
 11 | - ✅ Balanced speed and quality
 12 | - ⚠️ Can be verbose for simple tasks
 13 | 
 14 | **Performance:**
 15 | - Response time: ~1.2-1.6s
 16 | - Detail level: High
 17 | - Accuracy: Excellent
 18 | 
 19 | **Example Output (Complexity 6/10):**
 20 | ```
 21 | **Complexity Rating: 6/10**
 22 | 
 23 | Here's a breakdown of the complexity factors:
 24 | 1. **Functionality**: The function performs data processing...
 25 | 2. **Conditional Statements**: There are two conditional statements...
 26 | 3. **Loops**: There is one loop...
 27 | [... detailed analysis continues ...]
 28 | ```
 29 | 
 30 | ### 2. moonshotai/kimi-k2-instruct (Recommended for Code Analysis)
 31 | **Best for:** Fast, accurate code analysis with agentic intelligence
 32 | 
 33 | **Characteristics:**
 34 | - ✅ **Fastest response time** (~0.9s)
 35 | - ✅ Concise, accurate assessments
 36 | - ✅ 256K context window (largest on GroqCloud)
 37 | - ✅ Excellent for complex coding tasks
 38 | - ✅ Superior agentic intelligence
 39 | 
 40 | **Performance:**
 41 | - Response time: ~0.9s (fastest tested)
 42 | - Detail level: Concise but accurate
 43 | - Accuracy: Excellent
 44 | 
 45 | **Example Output (Complexity 2/10):**
 46 | ```
 47 | Complexity: 2/10
 48 | 
 49 | The function is short, uses only basic control flow and dict/list
 50 | operations, and has no recursion, nested loops, or advanced algorithms.
 51 | ```
 52 | 
 53 | **Kimi K2 Features:**
 54 | - 1 trillion parameters (32B activated MoE)
 55 | - 256K context window
 56 | - 185 tokens/second throughput
 57 | - Optimized for front-end development
 58 | - Superior tool calling capabilities
 59 | 
 60 | ### 3. llama-3.1-8b-instant
 61 | **Best for:** Simple queries requiring minimal analysis
 62 | 
 63 | **Characteristics:**
 64 | - ⚠️ Despite the name "instant", it was slower than Kimi K2 in these tests
 65 | - ⚠️ Very verbose, includes unnecessary details
 66 | - ✅ Lower cost than larger models
 67 | 
 68 | **Performance:**
 69 | - Response time: ~1.6s (slowest tested)
 70 | - Detail level: Very high (sometimes excessive)
 71 | - Accuracy: Good but over-explains
 72 | 
 73 | **Example Output (Complexity 4/10):**
 74 | ```
 75 | I would rate the complexity of this function a 4 out of 10.
 76 | 
 77 | Here's a breakdown of the factors I considered:
 78 | - **Readability**: 6/10
 79 | - **Locality**: 7/10
 80 | - **Abstraction**: 8/10
 81 | - **Efficiency**: 9/10
 82 | [... continues with edge cases, type hints, etc ...]
 83 | ```
 84 | 
 85 | ## Recommendations by Use Case
 86 | 
 87 | ### Pre-commit Hooks (Speed Critical)
 88 | **Use: moonshotai/kimi-k2-instruct**
 89 | ```bash
 90 | ./scripts/utils/groq "Complexity 1-10: $(cat file.py)" --model moonshotai/kimi-k2-instruct
 91 | ```
 92 | - Fastest response (~0.9s)
 93 | - Accurate enough for quality gates
 94 | - Minimizes developer wait time
 95 | 
 96 | ### PR Review (Quality Critical)
 97 | **Use: llama-3.3-70b-versatile**
 98 | ```bash
 99 | ./scripts/utils/groq "Detailed analysis: $(cat file.py)"
100 | ```
101 | - Comprehensive feedback
102 | - Detailed explanations help reviewers
103 | - Balanced speed/quality
104 | 
105 | ### Security Analysis (Accuracy Critical)
106 | **Use: moonshotai/kimi-k2-instruct**
107 | ```bash
108 | ./scripts/utils/groq "Security scan: $(cat file.py)" --model moonshotai/kimi-k2-instruct
109 | ```
110 | - Excellent at identifying vulnerabilities
111 | - Fast enough for CI/CD
112 | - Superior agentic intelligence for complex patterns
113 | 
114 | ### Simple Queries
115 | **Use: llama-3.1-8b-instant** (if cost is priority)
116 | ```bash
117 | ./scripts/utils/groq "Is this function pure?" --model llama-3.1-8b-instant
118 | ```
119 | - Lowest cost
120 | - Good for yes/no questions
121 | - Avoid for complex analysis (slower than Kimi K2)
122 | 
123 | ## Performance Summary
124 | 
125 | | Model | Response Time | Detail Level | Best For | Context |
126 | |-------|--------------|--------------|----------|---------|
127 | | **Kimi K2** | 0.9s ⚡ | Concise ✓ | Speed + Accuracy | 256K |
128 | | **llama-3.3-70b** | 1.2-1.6s | Detailed ✓ | Comprehensive | 128K |
129 | | **llama-3.1-8b** | 1.6s | Very Detailed | Cost savings | 128K |
130 | 
131 | ## Cost Comparison (Groq Pricing)
132 | 
133 | | Model | Input | Output | Use Case |
134 | |-------|-------|--------|----------|
135 | | Kimi K2 | $1.00/M | $3.00/M | Premium speed + quality |
136 | | llama-3.3-70b | ~$0.50/M | ~$0.80/M | Balanced |
137 | | llama-3.1-8b | ~$0.05/M | ~$0.10/M | High volume |
138 | 
139 | ## Switching Models
140 | 
141 | All models use the same interface:
142 | ```bash
143 | # Default (llama-3.3-70b-versatile)
144 | ./scripts/utils/groq "Your prompt"
145 | 
146 | # Kimi K2 (recommended for code analysis)
147 | ./scripts/utils/groq "Your prompt" --model moonshotai/kimi-k2-instruct
148 | 
149 | # Fast/cheap
150 | ./scripts/utils/groq "Your prompt" --model llama-3.1-8b-instant
151 | ```
152 | 
153 | ## Conclusion
154 | 
155 | **For MCP Memory Service code quality workflows:**
156 | - ✅ **Kimi K2**: Best overall - fastest, accurate, excellent for code
157 | - ✅ **llama-3.3-70b**: Good for detailed explanations in PR reviews
158 | - ⚠️ **llama-3.1-8b**: Avoid for code analysis despite "instant" name
159 | 
```
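The per-use-case recommendations above amount to a routing table. A hypothetical helper (model names taken from the comparison; the function itself is illustrative, not part of `scripts/utils/groq`):

```python
def pick_groq_model(use_case: str) -> str:
    """Map the use cases discussed above to their recommended models."""
    routes = {
        "pre-commit": "moonshotai/kimi-k2-instruct",  # speed critical
        "security": "moonshotai/kimi-k2-instruct",    # accuracy critical
        "pr-review": "llama-3.3-70b-versatile",       # detailed explanations
        "simple-query": "llama-3.1-8b-instant",       # cost priority
    }
    # Fall back to the default model used by the groq wrapper
    return routes.get(use_case, "llama-3.3-70b-versatile")

print(pick_groq_model("pre-commit"))  # moonshotai/kimi-k2-instruct
print(pick_groq_model("unknown"))     # llama-3.3-70b-versatile
```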

--------------------------------------------------------------------------------
/scripts/database/analyze_sqlite_vec_db.py:
--------------------------------------------------------------------------------

```python
  1 | #!/usr/bin/env python3
  2 | """
  3 | Simple analysis script to examine SQLite-vec database without dependencies.
  4 | """
  5 | 
  6 | import sqlite3
  7 | import sys
  8 | import os
  9 | import re
 10 | 
 11 | def analyze_database(db_path):
 12 |     """Analyze the database structure and content."""
 13 |     print(f"Analyzing database: {db_path}")
 14 |     print("="*60)
 15 |     
 16 |     if not os.path.exists(db_path):
 17 |         print(f"❌ Database not found: {db_path}")
 18 |         return
 19 |         
 20 |     conn = sqlite3.connect(db_path)
 21 |     cursor = conn.cursor()
 22 |     
 23 |     try:
 24 |         # Check tables
 25 |         cursor.execute("SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")
 26 |         tables = [row[0] for row in cursor.fetchall()]
 27 |         print(f"Tables found: {', '.join(tables)}")
 28 |         print()
 29 |         
 30 |         # Analyze memories table
 31 |         if 'memories' in tables:
 32 |             cursor.execute("SELECT COUNT(*) FROM memories")
 33 |             memory_count = cursor.fetchone()[0]
 34 |             print(f"📝 Memories: {memory_count}")
 35 |             
 36 |             if memory_count > 0:
 37 |                 # Sample some memories
 38 |                 cursor.execute("SELECT content, tags, memory_type FROM memories LIMIT 3")
 39 |                 samples = cursor.fetchall()
 40 |                 print("   Sample memories:")
 41 |                 for i, (content, tags, mem_type) in enumerate(samples, 1):
 42 |                     print(f"   {i}. [{mem_type or 'general'}] {content[:50]}..." + 
 43 |                           (f" (tags: {tags})" if tags else ""))
 44 |         
 45 |         # Analyze embeddings table
 46 |         if 'memory_embeddings' in tables:
 47 |             try:
 48 |                 cursor.execute("SELECT COUNT(*) FROM memory_embeddings")
 49 |                 embedding_count = cursor.fetchone()[0]
 50 |                 print(f"🧠 Embeddings: {embedding_count}")
 51 |                 
 52 |                 # Check schema to get dimension
 53 |                 cursor.execute("""
 54 |                     SELECT sql FROM sqlite_master 
 55 |                     WHERE type='table' AND name='memory_embeddings'
 56 |                 """)
 57 |                 schema = cursor.fetchone()
 58 |                 if schema:
 59 |                     print(f"   Schema: {schema[0]}")
 60 |                     match = re.search(r'FLOAT\[(\d+)\]', schema[0])
 61 |                     if match:
 62 |                         dimension = int(match.group(1))
 63 |                         print(f"   Dimension: {dimension}")
 64 |                         
 65 |             except Exception as e:
 66 |                 print(f"🧠 Embeddings: Error accessing table - {e}")
 67 |                 
 68 |         else:
 69 |             print("🧠 Embeddings: Table not found")
 70 |             
 71 |         # Check for mismatches
 72 |         if 'memories' in tables and 'memory_embeddings' in tables:
 73 |             cursor.execute("SELECT COUNT(*) FROM memories")
 74 |             mem_count = cursor.fetchone()[0]
 75 |             cursor.execute("SELECT COUNT(*) FROM memory_embeddings")
 76 |             emb_count = cursor.fetchone()[0]
 77 |             
 78 |             print()
 79 |             if mem_count == emb_count:
 80 |                 print("✅ Memory and embedding counts match")
 81 |             else:
 82 |                 print(f"⚠️  Mismatch: {mem_count} memories vs {emb_count} embeddings")
 83 |                 
 84 |                 # Find memories without embeddings
 85 |                 cursor.execute("""
 86 |                     SELECT COUNT(*) FROM memories m
 87 |                     WHERE NOT EXISTS (
 88 |                         SELECT 1 FROM memory_embeddings e WHERE e.rowid = m.id
 89 |                     )
 90 |                 """)
 91 |                 missing = cursor.fetchone()[0]
 92 |                 if missing > 0:
 93 |                     print(f"   → {missing} memories missing embeddings")
 94 |                     
 95 |                 # Find orphaned embeddings
 96 |                 cursor.execute("""
 97 |                     SELECT COUNT(*) FROM memory_embeddings e
 98 |                     WHERE NOT EXISTS (
 99 |                         SELECT 1 FROM memories m WHERE m.id = e.rowid
100 |                     )
101 |                 """)
102 |                 orphaned = cursor.fetchone()[0]
103 |                 if orphaned > 0:
104 |                     print(f"   → {orphaned} orphaned embeddings")
105 |         
106 |         # Check for extension loading capability
107 |         print()
108 |         try:
109 |             conn.enable_load_extension(True)
110 |             print("✅ Extension loading enabled")
111 |         except (AttributeError, sqlite3.OperationalError):
112 |             print("❌ Extension loading not available")
113 |             
114 |     except Exception as e:
115 |         print(f"❌ Error analyzing database: {e}")
116 |         
117 |     finally:
118 |         conn.close()
119 | 
120 | def main():
121 |     if len(sys.argv) != 2:
122 |         print("Usage: python analyze_sqlite_vec_db.py <database_path>")
123 |         sys.exit(1)
124 |         
125 |     db_path = sys.argv[1]
126 |     analyze_database(db_path)
127 | 
128 | if __name__ == "__main__":
129 |     main()
```
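The mismatch queries in the script can be verified against a throwaway in-memory database. The table shapes below are simplified (the real `memory_embeddings` is a vec0 virtual table storing float vectors), but the `rowid`-keyed join is the same:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE memories (id INTEGER PRIMARY KEY, content TEXT);
    CREATE TABLE memory_embeddings (vec BLOB);
    INSERT INTO memories (id, content) VALUES (1, 'first'), (2, 'second');
    -- Only memory 1 gets an embedding; rowid keys the join
    INSERT INTO memory_embeddings (rowid, vec) VALUES (1, x'00');
""")

missing = conn.execute("""
    SELECT COUNT(*) FROM memories m
    WHERE NOT EXISTS (SELECT 1 FROM memory_embeddings e WHERE e.rowid = m.id)
""").fetchone()[0]
orphaned = conn.execute("""
    SELECT COUNT(*) FROM memory_embeddings e
    WHERE NOT EXISTS (SELECT 1 FROM memories m WHERE m.id = e.rowid)
""").fetchone()[0]
print(missing, orphaned)  # 1 0
conn.close()
```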

--------------------------------------------------------------------------------
/docs/technical/migration-log.md:
--------------------------------------------------------------------------------

```markdown
  1 | # FastAPI MCP Server Migration Log
  2 | 
  3 | ## Architecture Decision Record
  4 | 
  5 | **Date**: 2025-08-03  
  6 | **Branch**: `feature/fastapi-mcp-native-v4`  
  7 | **Version**: 4.0.0-alpha.1
  8 | 
  9 | ### Decision: Migrate from Node.js Bridge to Native FastAPI MCP Server
 10 | 
 11 | **Problem**: Node.js HTTP-to-MCP bridge has SSL handshake issues with self-signed certificates, preventing reliable remote memory service access.
 12 | 
 13 | **Solution**: Replace Node.js bridge with native FastAPI MCP server using official MCP Python SDK.
 14 | 
 15 | ### Technical Findings
 16 | 
 17 | 1. **Node.js SSL Issues**: 
 18 |    - Node.js HTTPS client fails SSL handshake with self-signed certificates
 19 |    - Issue persists despite custom HTTPS agents and disabled certificate validation
 20 |    - Workaround: Slash commands using curl work, but direct MCP tools fail
 21 | 
 22 | 2. **FastAPI MCP Benefits**:
 23 |    - Native MCP protocol support via FastMCP framework
 24 |    - Python SSL stack handles self-signed certificates more reliably
 25 |    - Eliminates bridging complexity and failure points
 26 |    - Direct integration with existing storage backends
 27 | 
 28 | ### Implementation Status
 29 | 
 30 | #### ✅ Completed (Commit: 5709be1)
 31 | - [x] Created feature branch `feature/fastapi-mcp-native-v4`
 32 | - [x] Updated GitHub issues #71 and #72 with migration plan
 33 | - [x] Implemented basic FastAPI MCP server structure
 34 | - [x] Added 5 core memory operations: store, retrieve, search_by_tag, delete, health
 35 | - [x] Version bump to 4.0.0-alpha.1
 36 | - [x] Added new script entry point: `mcp-memory-server`
 37 | 
 38 | #### ✅ Migration Completed (Commit: c0a0a45)
 39 | - [x] Dual-service architecture deployed successfully
 40 | - [x] FastMCP server (port 8000) + HTTP dashboard (port 8080) 
 41 | - [x] SSL issues completely resolved
 42 | - [x] Production deployment to memory.local verified
 43 | - [x] Standard MCP client compatibility confirmed
 44 | - [x] Documentation and deployment scripts completed
 45 | 
 46 | #### 🚧 Known Limitations
 47 | - **Claude Code SSE Compatibility**: Claude Code's SSE client has specific requirements incompatible with FastMCP implementation
 48 | - **Workaround**: Claude Code users can use HTTP dashboard or alternative MCP clients
 49 | - **Impact**: Core migration objectives achieved; this is a client-specific limitation
 50 | 
 51 | #### 📋 Future Development
 52 | 1. **Claude Code Compatibility**: Investigate custom SSE client implementation
 53 | 2. **Tool Expansion**: Add remaining 17 memory operations as needed
 54 | 3. **Performance Optimization**: Monitor and optimize dual-service performance
 55 | 4. **Client Library**: Develop Python/JavaScript MCP client libraries
 56 | 5. **Documentation**: Expand client compatibility matrix
 57 | 
 58 | ### Dashboard Tools Exclusion
 59 | 
 60 | **Decision**: Exclude 8 dashboard-specific tools from FastAPI MCP server.
 61 | 
 62 | **Rationale**: 
 63 | - HTTP dashboard at https://github.com/doobidoo/mcp-memory-dashboard provides superior web interface
 64 | - MCP server should focus on Claude Code integration, not duplicate dashboard functionality
 65 | - Clear separation of concerns: MCP for Claude Code, HTTP for administration
 66 | 
 67 | **Excluded Tools**:
 68 | - dashboard_check_health, dashboard_recall_memory, dashboard_retrieve_memory
 69 | - dashboard_search_by_tag, dashboard_get_stats, dashboard_optimize_db
 70 | - dashboard_create_backup, dashboard_delete_memory
 71 | 
 72 | ### Architecture Comparison
 73 | 
 74 | | Aspect | Node.js Bridge | FastAPI MCP |
 75 | |--------|----------------|-------------|
 76 | | Protocol | HTTP→MCP translation | Native MCP |
 77 | | SSL Handling | Node.js HTTPS (problematic) | Python SSL (reliable) |
 78 | | Complexity | 3 layers (Claude→Bridge→HTTP→Memory) | 2 layers (Claude→MCP Server) |
 79 | | Maintenance | Multiple codebases | Unified Python |
 80 | | Remote Access | SSL issues | Direct support |
 81 | | Mobile Support | Limited by bridge | Full MCP compatibility |
 82 | 
 83 | ### Success Metrics
 84 | 
 85 | - [x] ~~All MCP tools function correctly with Claude Code~~ **Standard MCP clients work; Claude Code has SSE compatibility issue**
 86 | - [x] SSL/HTTPS connectivity works without workarounds
 87 | - [x] Performance equals or exceeds Node.js bridge  
 88 | - [x] Remote access works from multiple clients
 89 | - [x] Easy deployment without local bridge requirements
 90 | 
 91 | ### Project Completion Summary
 92 | 
 93 | **Status**: ✅ **MIGRATION SUCCESSFUL**
 94 | 
 95 | **Date Completed**: August 3, 2025  
 96 | **Final Commit**: c0a0a45  
 97 | **Deployment Status**: Production-ready dual-service architecture
 98 | 
 99 | The FastAPI MCP migration has successfully achieved its primary objectives:
100 | 1. **SSL Issues Eliminated**: Node.js SSL handshake problems completely resolved
101 | 2. **Architecture Simplified**: Removed complex bridging layers
102 | 3. **Standard Compliance**: Full MCP protocol compatibility with standard clients
103 | 4. **Production Ready**: Deployed and tested dual-service architecture
104 | 
105 | **Note**: Claude Code SSE client compatibility remains a separate issue to be addressed in future development.
```

--------------------------------------------------------------------------------
/claude-hooks/MIGRATION.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Migration Guide: Unified Python Hook Installer
  2 | 
  3 | ## 🎯 Overview
  4 | 
  5 | The Claude Code Memory Awareness Hooks have been **consolidated into a single, unified Python installer** that replaces all previous platform-specific installers.
  6 | 
  7 | ## 📋 **What Changed**
  8 | 
  9 | ### ❌ **Deprecated (Removed)**
 10 | - `install.sh` - Legacy shell installer
 11 | - `install-natural-triggers.sh` - Natural triggers shell installer
 12 | - `install_claude_hooks_windows.bat` - Windows batch installer
 13 | 
 14 | ### ✅ **New Unified Solution**
 15 | - `install_hooks.py` - **Single cross-platform Python installer**
 16 | 
 17 | ## 🚀 **Migration Steps**
 18 | 
 19 | ### For New Installations
 20 | ```bash
 21 | # Navigate to hooks directory
 22 | cd claude-hooks
 23 | 
 24 | # Install basic memory awareness hooks
 25 | python install_hooks.py --basic
 26 | 
 27 | # OR install Natural Memory Triggers v7.1.3 (recommended)
 28 | python install_hooks.py --natural-triggers
 29 | 
 30 | # OR install everything
 31 | python install_hooks.py --all
 32 | ```
 33 | 
 34 | ### For Existing Users
 35 | ```bash
 36 | # Uninstall old hooks (optional, installer handles upgrades)
 37 | python install_hooks.py --uninstall
 38 | 
 39 | # Install fresh with new installer
 40 | python install_hooks.py --natural-triggers
 41 | ```
 42 | 
 43 | ## ✨ **Benefits of Unified Installer**
 44 | 
 45 | ### 🔧 **Technical Improvements**
 46 | - **Intelligent JSON merging** - Preserves existing Claude Code hook configurations
 47 | - **Cross-platform compatibility** - Works on Windows, macOS, and Linux
 48 | - **Dynamic path resolution** - No hardcoded paths, works in any location
 49 | - **Atomic installations** - Automatic rollback on failure
 50 | - **Comprehensive backups** - Timestamped backups before changes
 51 | - **Empty directory cleanup** - Proper uninstall process
 52 | 
 53 | ### 🎯 **User Experience**
 54 | - **Single installation method** across all platforms
 55 | - **Consistent CLI interface** with clear options
 56 | - **Dry-run support** for testing without changes
 57 | - **Enhanced error handling** with detailed feedback
 58 | - **CLI management tools** for real-time configuration
 59 | 
 60 | ## 📖 **Advanced Usage**
 61 | 
 62 | ### Available Options
 63 | ```bash
 64 | # Test installation without making changes
 65 | python install_hooks.py --dry-run --natural-triggers
 66 | 
 67 | # Install only basic hooks
 68 | python install_hooks.py --basic
 69 | 
 70 | # Install Natural Memory Triggers (recommended)
 71 | python install_hooks.py --natural-triggers
 72 | 
 73 | # Install everything (basic + natural triggers)
 74 | python install_hooks.py --all
 75 | 
 76 | # Uninstall all hooks
 77 | python install_hooks.py --uninstall
 78 | 
 79 | # Get help
 80 | python install_hooks.py --help
 81 | ```
 82 | 
 83 | ### CLI Management (Natural Memory Triggers)
 84 | ```bash
 85 | # Check status
 86 | node ~/.claude/hooks/memory-mode-controller.js status
 87 | 
 88 | # Switch performance profiles
 89 | node ~/.claude/hooks/memory-mode-controller.js profile balanced
 90 | node ~/.claude/hooks/memory-mode-controller.js profile speed_focused
 91 | node ~/.claude/hooks/memory-mode-controller.js profile memory_aware
 92 | 
 93 | # Adjust sensitivity
 94 | node ~/.claude/hooks/memory-mode-controller.js sensitivity 0.7
 95 | ```
 96 | 
 97 | ## 🔧 **Integration with Main Installer**
 98 | 
 99 | The main MCP Memory Service installer now uses the unified hook installer:
100 | 
101 | ```bash
102 | # Install service + basic hooks
103 | python scripts/installation/install.py --install-hooks
104 | 
105 | # Install service + Natural Memory Triggers
106 | python scripts/installation/install.py --install-natural-triggers
107 | ```
108 | 
109 | ## 🛠 **Troubleshooting**
110 | 
111 | ### Common Issues
112 | 
113 | **Q: Can I still use the old shell scripts?**
114 | A: No, they have been removed. The unified Python installer provides all functionality with improved reliability.
115 | 
116 | **Q: Will my existing hook configuration be preserved?**
117 | A: Yes, the unified installer intelligently merges configurations and preserves existing hooks.
118 | 
119 | **Q: What if I have custom modifications to the old installers?**
120 | A: The unified installer is designed to be extensible. Please file an issue if you need specific functionality.
121 | 
122 | **Q: Does this work on Windows?**
123 | A: Yes, the unified Python installer provides full Windows support with proper path handling.
124 | 
125 | ## 📞 **Support**
126 | 
127 | If you encounter issues:
128 | 
129 | 1. **Check prerequisites**: Ensure Claude Code CLI, Node.js, and Python are installed
130 | 2. **Test with dry-run**: Use `--dry-run` flag to identify issues
131 | 3. **Check logs**: The installer provides detailed error messages
132 | 4. **File an issue**: [GitHub Issues](https://github.com/doobidoo/mcp-memory-service/issues)
133 | 
134 | ## 🎉 **Benefits Summary**
135 | 
136 | The unified installer provides:
137 | - ✅ **Better reliability** across all platforms
138 | - ✅ **Safer installations** with intelligent configuration merging
139 | - ✅ **Consistent experience** regardless of operating system
140 | - ✅ **Advanced features** like Natural Memory Triggers v7.1.3
141 | - ✅ **Professional tooling** with comprehensive testing and validation
142 | 
 143 | This migration represents a significant improvement in the installation experience while providing a clean upgrade path for existing users.
```
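The "intelligent JSON merging" behavior described above can be illustrated with a simplified sketch (not the installer's actual code): existing user settings win over installer defaults, nested sections merge recursively, and list entries are deduplicated.

```python
def merge_hook_config(existing: dict, new: dict) -> dict:
    """Merge new hook settings into an existing Claude Code config,
    preserving anything the user already has.

    Dicts merge key-by-key; lists are concatenated without duplicates;
    on scalar conflicts the existing value wins, so user customizations
    survive an upgrade.
    """
    merged = dict(existing)
    for key, value in new.items():
        if key not in merged:
            merged[key] = value
        elif isinstance(merged[key], dict) and isinstance(value, dict):
            merged[key] = merge_hook_config(merged[key], value)
        elif isinstance(merged[key], list) and isinstance(value, list):
            merged[key] = merged[key] + [v for v in value if v not in merged[key]]
        # otherwise keep the existing scalar value
    return merged
```

Combined with the timestamped backups mentioned above, this is what makes re-running the installer safe on an already-configured machine.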

--------------------------------------------------------------------------------
/docs/deployment/dual-service.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Dual Service Deployment - FastMCP + HTTP Dashboard
  2 | 
  3 | ## Overview
  4 | 
  5 | This deployment provides both **FastMCP Protocol** and **HTTP Dashboard** services running simultaneously, eliminating Node.js SSL issues while maintaining full functionality.
  6 | 
  7 | ## Architecture
  8 | 
  9 | ### Service 1: FastMCP Server (Port 8000)
 10 | - **Purpose**: Native MCP protocol for Claude Code clients
 11 | - **Protocol**: JSON-RPC 2.0 over Server-Sent Events
 12 | - **Access**: `http://[IP]:8000/mcp`
 13 | - **Service**: `mcp-memory.service`
 14 | 
 15 | ### Service 2: HTTP Dashboard (Port 8080)
 16 | - **Purpose**: Web dashboard and HTTP API
 17 | - **Protocol**: Standard HTTP/REST
 18 | - **Access**: `http://[IP]:8080/`
 19 | - **API**: `http://[IP]:8080/api/*`
 20 | - **Service**: `mcp-http-dashboard.service`
 21 | 
 22 | ## Deployment
 23 | 
 24 | ### Quick Deploy
 25 | ```bash
 26 | ./deploy_dual_services.sh
 27 | ```
 28 | 
 29 | ### Manual Setup
 30 | ```bash
 31 | # Install FastMCP service
 32 | sudo cp /tmp/fastmcp-server-with-mdns.service /etc/systemd/system/mcp-memory.service
 33 | 
 34 | # Install HTTP Dashboard service  
 35 | sudo cp /tmp/mcp-http-dashboard.service /etc/systemd/system/mcp-http-dashboard.service
 36 | 
 37 | # Enable and start services
 38 | sudo systemctl daemon-reload
 39 | sudo systemctl enable mcp-memory mcp-http-dashboard
 40 | sudo systemctl start mcp-memory mcp-http-dashboard
 41 | ```
 42 | 
 43 | ## Access URLs
 44 | 
 45 | Replace `[IP]` with your actual server IP address (e.g., `10.0.1.30`):
 46 | 
 47 | - **FastMCP Protocol**: `http://[IP]:8000/mcp` (for Claude Code)
 48 | - **Web Dashboard**: `http://[IP]:8080/` (for monitoring)
 49 | - **Health API**: `http://[IP]:8080/api/health`
 50 | - **Memory API**: `http://[IP]:8080/api/memories`
 51 | - **Search API**: `http://[IP]:8080/api/search`
 52 | 
 53 | ## Service Management
 54 | 
 55 | ### Status Checks
 56 | ```bash
 57 | sudo systemctl status mcp-memory          # FastMCP server
 58 | sudo systemctl status mcp-http-dashboard  # HTTP dashboard
 59 | ```
 60 | 
 61 | ### View Logs
 62 | ```bash
 63 | sudo journalctl -u mcp-memory -f          # FastMCP logs
 64 | sudo journalctl -u mcp-http-dashboard -f  # Dashboard logs
 65 | ```
 66 | 
 67 | ### Control Services
 68 | ```bash
 69 | # Start services
 70 | sudo systemctl start mcp-memory mcp-http-dashboard
 71 | 
 72 | # Stop services  
 73 | sudo systemctl stop mcp-memory mcp-http-dashboard
 74 | 
 75 | # Restart services
 76 | sudo systemctl restart mcp-memory mcp-http-dashboard
 77 | ```
 78 | 
 79 | ## mDNS Discovery
 80 | 
 81 | Both services advertise via mDNS for network discovery:
 82 | 
 83 | ```bash
 84 | # Browse HTTP services
 85 | avahi-browse -t _http._tcp
 86 | 
 87 | # Browse MCP services (if supported)
 88 | avahi-browse -t _mcp._tcp
 89 | 
 90 | # Resolve hostname
 91 | avahi-resolve-host-name memory.local
 92 | ```
 93 | 
 94 | **Services Advertised:**
 95 | - `MCP Memory Dashboard._http._tcp.local.` (port 8080)
 96 | - `MCP Memory FastMCP._mcp._tcp.local.` (port 8000)
 97 | 
 98 | ## Dependencies
 99 | 
100 | Ensure these packages are installed in the virtual environment:
101 | - `mcp` - MCP Protocol support
102 | - `fastapi` - Web framework
103 | - `uvicorn` - ASGI server
104 | - `zeroconf` - mDNS advertising
105 | - `aiohttp` - HTTP client/server
106 | - `sqlite-vec` - Vector database
107 | - `sentence-transformers` - Embeddings
108 | 
109 | ## Configuration
110 | 
111 | ### Environment Variables
112 | - `MCP_MEMORY_STORAGE_BACKEND=sqlite_vec`
113 | - `MCP_MDNS_ENABLED=true`
114 | - `MCP_HTTP_ENABLED=true`
115 | - `MCP_SERVER_HOST=0.0.0.0`
116 | - `MCP_SERVER_PORT=8000`
117 | - `MCP_HTTP_PORT=8080`
118 | 
119 | ### Storage
120 | Both services share the same SQLite-vec database:
121 | - **Path**: `~/.local/share/mcp-memory/sqlite_vec.db`
122 | - **Backend**: `sqlite_vec`
123 | - **Model**: `all-MiniLM-L6-v2`
124 | 
125 | ## Troubleshooting
126 | 
127 | ### Services Not Accessible
128 | 1. Check if services are running: `systemctl status [service]`
 129 | 2. Verify ports are listening: `ss -tlnp | grep -E ":(8000|8080)"`
130 | 3. Test direct IP access instead of hostname
131 | 4. Check firewall rules if accessing remotely
132 | 
133 | ### mDNS Not Working
134 | 1. Ensure avahi-daemon is running: `systemctl status avahi-daemon`
135 | 2. Install missing dependencies: `pip install zeroconf aiohttp`
136 | 3. Restart services after installing dependencies
137 | 
138 | ### FastMCP Protocol Issues
139 | 1. Ensure client accepts `text/event-stream` headers
140 | 2. Use JSON-RPC 2.0 format for requests
141 | 3. Access `/mcp` endpoint, not root `/`
142 | 
143 | ## Client Configuration
144 | 
145 | ### Claude Code
146 | Configure MCP client to use:
147 | ```
148 | http://[SERVER_IP]:8000/mcp
149 | ```
150 | 
151 | ### curl Examples
152 | ```bash
153 | # Health check
154 | curl http://[SERVER_IP]:8080/api/health
155 | 
156 | # Store memory
157 | curl -X POST http://[SERVER_IP]:8080/api/memories \
158 |   -H "Content-Type: application/json" \
159 |   -d '{"content": "test memory", "tags": ["test"]}'
160 | 
161 | # Search memories
162 | curl -X POST http://[SERVER_IP]:8080/api/search \
163 |   -H "Content-Type: application/json" \
164 |   -d '{"query": "test", "limit": 5}'
165 | ```
166 | 
167 | ## Benefits
168 | 
169 | ✅ **No Node.js SSL Issues** - Pure Python implementation  
170 | ✅ **Dual Protocol Support** - Both MCP and HTTP available  
171 | ✅ **Network Discovery** - mDNS advertising for easy access  
172 | ✅ **Production Ready** - systemd managed services  
173 | ✅ **Backward Compatible** - HTTP API preserved for existing tools  
174 | ✅ **Claude Code Ready** - Native MCP protocol support
```
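The troubleshooting notes above say the `/mcp` endpoint expects JSON-RPC 2.0 and a client that accepts `text/event-stream`. A minimal sketch of assembling such a request in Python follows; the `initialize` method and `protocolVersion` value come from the MCP specification, but the exact field shapes here are illustrative assumptions, and nothing is sent over the network.

```python
import json

def build_mcp_initialize(server_ip: str, request_id: int = 1):
    """Build a JSON-RPC 2.0 initialize request for the FastMCP endpoint.

    Returns (url, headers, body) without sending anything, so the shape
    can be inspected or reused with any HTTP client.
    """
    url = f"http://{server_ip}:8000/mcp"
    headers = {
        "Content-Type": "application/json",
        # The server streams responses, so the client must accept SSE.
        "Accept": "application/json, text/event-stream",
    }
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",
            "capabilities": {},
            "clientInfo": {"name": "example-client", "version": "0.1"},
        },
    }
    return url, headers, json.dumps(payload)
```

Posting the returned body with the returned headers (e.g. via `requests.post`) exercises the same handshake a Claude Code client performs.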

--------------------------------------------------------------------------------
/claude_commands/memory-context.md:
--------------------------------------------------------------------------------

```markdown
 1 | # Add Current Session to Memory
 2 | 
 3 | I'll help you capture the current conversation and project context as a memory that can be recalled later. This command is perfect for preserving important session insights, decisions, and progress summaries.
 4 | 
 5 | ## What I'll do:
 6 | 
 7 | 1. **Session Analysis**: I'll analyze our current conversation to extract key insights, decisions, and progress made.
 8 | 
 9 | 2. **Project Context**: I'll capture the current project state including:
10 |    - Working directory and git repository status
11 |    - Recent file changes and commits
12 |    - Current branch and development context
13 | 
14 | 3. **Conversation Summary**: I'll create a concise summary of our session including:
15 |    - Main topics discussed
16 |    - Decisions made or problems solved
17 |    - Action items or next steps identified
18 |    - Code changes or configurations applied
19 | 
20 | 4. **Smart Tagging**: I'll automatically generate relevant tags based on the session content and project context, including the machine hostname as a source identifier.
21 | 
22 | 5. **Memory Storage**: I'll store the session summary with appropriate metadata for easy future retrieval.
23 | 
24 | ## Usage Examples:
25 | 
26 | ```bash
27 | claude /memory-context
28 | claude /memory-context --summary "Architecture planning session"
29 | claude /memory-context --tags "planning,architecture" --type "session"
30 | claude /memory-context --include-files --include-commits
31 | ```
32 | 
33 | ## Implementation:
34 | 
35 | I'll automatically analyze our current session and project state, then store it to your MCP Memory Service at `https://memory.local:8443/`:
36 | 
37 | 1. **Conversation Analysis**: Extract key topics, decisions, and insights from our current chat
38 | 2. **Project State Capture**: 
39 |    - Current working directory and git status
40 |    - Recent commits and file changes
41 |    - Branch information and repository state
42 | 3. **Context Synthesis**: Combine conversation and project context into a coherent summary
43 | 4. **Memory Creation**: Store the context with automatic tags including machine hostname
44 | 5. **Auto-Save**: Memory is stored immediately without confirmation prompts
45 | 
 46 | The service is reached over HTTPS; the curl `-k` flag bypasses certificate verification to accommodate the self-signed certificate, and the client hostname is detected automatically via the `X-Client-Hostname` header.
47 | 
48 | The stored memory will include:
49 | - **Source Machine**: Hostname tag for tracking memory origin (e.g., "source:your-machine-name")
50 | - **Session Summary**: Concise overview of our conversation
51 | - **Key Decisions**: Important choices or conclusions reached
52 | - **Technical Details**: Code changes, configurations, or technical insights
53 | - **Project Context**: Repository state, files modified, current focus
54 | - **Action Items**: Next steps or follow-up tasks identified
55 | - **Timestamp**: When the session context was captured
56 | 
57 | ## Context Elements:
58 | 
59 | ### Conversation Context
60 | - **Topics Discussed**: Main subjects and themes from our chat
61 | - **Problems Solved**: Issues resolved or questions answered
62 | - **Decisions Made**: Choices made or approaches agreed upon
63 | - **Insights Gained**: New understanding or knowledge acquired
64 | 
65 | ### Project Context
66 | - **Repository Info**: Git repository, branch, and recent commits
67 | - **File Changes**: Modified, added, or deleted files
68 | - **Directory Structure**: Current working directory and project layout
69 | - **Development State**: Current development phase or focus area
70 | 
71 | ### Technical Context
72 | - **Code Changes**: Functions, classes, or modules modified
73 | - **Configuration Updates**: Settings, dependencies, or environment changes
74 | - **Architecture Decisions**: Design choices or structural changes
75 | - **Performance Considerations**: Optimization or efficiency insights
76 | 
77 | ## Arguments:
78 | 
79 | - `$ARGUMENTS` - Optional custom summary or context description
80 | - `--summary "text"` - Custom session summary override
81 | - `--tags "tag1,tag2"` - Additional tags to apply
82 | - `--type "session|meeting|planning|development"` - Context type
83 | - `--include-files` - Include detailed file change information
84 | - `--include-commits` - Include recent commit messages and changes
85 | - `--include-code` - Include snippets of important code changes
86 | - `--private` - Mark as private/sensitive session content
87 | - `--project "name"` - Override project name detection
88 | 
89 | ## Automatic Features:
90 | 
91 | - **Smart Summarization**: Extract the most important points from our conversation
92 | - **Duplicate Detection**: Avoid storing redundant session information
93 | - **Context Linking**: Connect to related memories and previous sessions
94 | - **Progress Tracking**: Identify progress made since last context capture
95 | - **Knowledge Extraction**: Pull out reusable insights and patterns
96 | 
97 | This command is especially useful at the end of productive development sessions, after important architectural discussions, or when you want to preserve the current state of your thinking and progress for future reference.
```
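The hostname tagging and `X-Client-Hostname` header described above can be sketched as follows. `build_context_memory` is a hypothetical helper that only assembles the payload and headers (no request is sent), mirroring the `source:<hostname>` tag the command attaches.

```python
import json
import socket

def build_context_memory(summary: str, extra_tags=None):
    """Assemble the headers and JSON body for storing a session summary.

    The machine hostname is sent twice on purpose: as a `source:` tag for
    later filtering, and as the X-Client-Hostname header for server-side
    client detection.
    """
    hostname = socket.gethostname()
    headers = {
        "Content-Type": "application/json",
        "X-Client-Hostname": hostname,
    }
    payload = {
        "content": summary,
        "tags": [f"source:{hostname}"] + list(extra_tags or []),
    }
    return headers, json.dumps(payload)
```

The resulting body matches the shape accepted by the `/api/memories` endpoint shown in the deployment docs.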

--------------------------------------------------------------------------------
/tests/timestamp/test_timestamp_issue.py:
--------------------------------------------------------------------------------

```python
  1 | #!/usr/bin/env python3
  2 | """Test script to debug timestamp issues in recall functionality."""
  3 | 
  4 | import sys
  5 | import os
  6 | sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', '..', 'src'))
  7 | 
  8 | import asyncio
  9 | import time
 10 | from datetime import datetime, timedelta
 11 | from mcp_memory_service.models.memory import Memory
 12 | from mcp_memory_service.utils.hashing import generate_content_hash
 13 | from mcp_memory_service.utils.time_parser import parse_time_expression, extract_time_expression
 14 | 
 15 | async def test_timestamp_issue():
 16 |     """Test timestamp storage and retrieval issues."""
 17 |     
 18 |     print("=== Testing Timestamp Issue ===")
 19 |     
 20 |     # Test 1: Precision loss when converting float to int
 21 |     print("\n1. Testing precision loss:")
 22 |     current_time = time.time()
 23 |     print(f"Current time (float): {current_time}")
 24 |     print(f"Current time (int): {int(current_time)}")
 25 |     print(f"Difference: {current_time - int(current_time)} seconds")
 26 |     
 27 |     # Test 2: Time expression parsing
 28 |     print("\n2. Testing time expression parsing:")
 29 |     test_queries = [
 30 |         "yesterday",
 31 |         "last week",
 32 |         "2 days ago",
 33 |         "last month",
 34 |         "this morning",
 35 |         "yesterday afternoon"
 36 |     ]
 37 |     
 38 |     for query in test_queries:
 39 |         cleaned_query, (start_ts, end_ts) = extract_time_expression(query)
 40 |         if start_ts and end_ts:
 41 |             start_dt = datetime.fromtimestamp(start_ts)
 42 |             end_dt = datetime.fromtimestamp(end_ts)
 43 |             print(f"\nQuery: '{query}'")
 44 |             print(f"  Cleaned: '{cleaned_query}'")
 45 |             print(f"  Start: {start_dt} (timestamp: {start_ts})")
 46 |             print(f"  End: {end_dt} (timestamp: {end_ts})")
 47 |             print(f"  Start (int): {int(start_ts)}")
 48 |             print(f"  End (int): {int(end_ts)}")
 49 |     
 50 |     # Test 3: Memory timestamp creation
 51 |     print("\n3. Testing Memory timestamp creation:")
 52 |     memory = Memory(
 53 |         content="Test memory",
 54 |         content_hash=generate_content_hash("Test memory"),
 55 |         tags=["test"]
 56 |     )
 57 |     
 58 |     print(f"Memory created_at (float): {memory.created_at}")
 59 |     print(f"Memory created_at (int): {int(memory.created_at)}")
 60 |     print(f"Memory created_at_iso: {memory.created_at_iso}")
 61 |     
 62 |     # Test 4: Timestamp comparison issue
 63 |     print("\n4. Testing timestamp comparison issue:")
 64 |     # Create a timestamp from "yesterday"
 65 |     yesterday_query = "yesterday"
 66 |     _, (yesterday_start, yesterday_end) = extract_time_expression(yesterday_query)
 67 |     
 68 |     # Create a memory with timestamp in the middle of yesterday
 69 |     yesterday_middle = (yesterday_start + yesterday_end) / 2
 70 |     test_memory_timestamp = yesterday_middle
 71 |     
 72 |     print(f"\nYesterday range:")
 73 |     print(f"  Start: {yesterday_start} ({datetime.fromtimestamp(yesterday_start)})")
 74 |     print(f"  End: {yesterday_end} ({datetime.fromtimestamp(yesterday_end)})")
 75 |     print(f"  Test memory timestamp: {test_memory_timestamp} ({datetime.fromtimestamp(test_memory_timestamp)})")
 76 |     
 77 |     # Check if memory would be included with float comparison
 78 |     print(f"\nFloat comparison:")
 79 |     print(f"  {test_memory_timestamp} >= {yesterday_start}: {test_memory_timestamp >= yesterday_start}")
 80 |     print(f"  {test_memory_timestamp} <= {yesterday_end}: {test_memory_timestamp <= yesterday_end}")
 81 |     print(f"  Would be included: {test_memory_timestamp >= yesterday_start and test_memory_timestamp <= yesterday_end}")
 82 |     
 83 |     # Check if memory would be included with int comparison (current implementation)
 84 |     print(f"\nInt comparison (current implementation):")
 85 |     print(f"  {int(test_memory_timestamp)} >= {int(yesterday_start)}: {int(test_memory_timestamp) >= int(yesterday_start)}")
 86 |     print(f"  {int(test_memory_timestamp)} <= {int(yesterday_end)}: {int(test_memory_timestamp) <= int(yesterday_end)}")
 87 |     print(f"  Would be included: {int(test_memory_timestamp) >= int(yesterday_start) and int(test_memory_timestamp) <= int(yesterday_end)}")
 88 |     
 89 |     # Test edge case: memory created at the very beginning or end of a day
 90 |     print(f"\n5. Testing edge cases:")
 91 |     # Memory at 00:00:00.5 (half second past midnight)
 92 |     edge_timestamp = yesterday_start + 0.5
 93 |     print(f"  Edge case timestamp: {edge_timestamp} ({datetime.fromtimestamp(edge_timestamp)})")
 94 |     print(f"  Float: {edge_timestamp} >= {yesterday_start}: {edge_timestamp >= yesterday_start}")
 95 |     print(f"  Int: {int(edge_timestamp)} >= {int(yesterday_start)}: {int(edge_timestamp) >= int(yesterday_start)}")
 96 |     
 97 |     # If the int values are the same but float values are different, we might miss memories
 98 |     if int(edge_timestamp) == int(yesterday_start) and edge_timestamp > yesterday_start:
 99 |         print(f"  WARNING: This memory would be missed with int comparison!")
100 | 
101 | if __name__ == "__main__":
102 |     asyncio.run(test_timestamp_issue())
103 | 
```
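One mitigation the test above probes for can be sketched as: when timestamps are persisted as integers, widen the query window with floor/ceil instead of truncating both bounds, so a memory whose fractional second falls just inside the range is never excluded. This is an illustrative fix, not necessarily the project's actual implementation.

```python
import math

def in_range_int_safe(stored_ts: int, start: float, end: float) -> bool:
    """Check an integer-stored timestamp against float query bounds.

    Flooring the start and ceiling the end widens the window by at most
    one second on each side, which guarantees no edge memory is dropped
    by integer truncation (at the cost of possibly including a memory
    up to a second outside the requested range).
    """
    return math.floor(start) <= stored_ts <= math.ceil(end)
```

For example, a memory created at `1000.7` is stored as `1000`; the naive comparison `1000 >= 1000.2` excludes it, while the widened window keeps it.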

--------------------------------------------------------------------------------
/tests/integration/test_cloudflare_connection.py:
--------------------------------------------------------------------------------

```python
  1 | #!/usr/bin/env python3
  2 | """Integration test for Cloudflare connection and storage initialization."""
  3 | 
  4 | import os
  5 | import sys
  6 | import requests
  7 | from pathlib import Path
  8 | from dotenv import load_dotenv
  9 | 
 10 | # Add the project directory to path for standalone execution
 11 | project_dir = Path(__file__).parent.parent.parent
 12 | sys.path.insert(0, str(project_dir / "src"))
 13 | 
 14 | # Load environment from .env file
 15 | load_dotenv(project_dir / ".env")
 16 | 
 17 | 
 18 | def test_cloudflare_config():
 19 |     """Test Cloudflare configuration and API connectivity."""
 20 |     print("=== Cloudflare Configuration Test ===")
 21 |     print(f"Backend: {os.getenv('MCP_MEMORY_STORAGE_BACKEND')}")
 22 |     print(f"API Token: {os.getenv('CLOUDFLARE_API_TOKEN')[:10]}..." if os.getenv('CLOUDFLARE_API_TOKEN') else "API Token: NOT SET")
 23 |     print(f"Account ID: {os.getenv('CLOUDFLARE_ACCOUNT_ID')}")
 24 |     print(f"D1 Database ID: {os.getenv('CLOUDFLARE_D1_DATABASE_ID')}")
 25 |     print(f"Vectorize Index: {os.getenv('CLOUDFLARE_VECTORIZE_INDEX')}")
 26 | 
 27 |     # Test Cloudflare API connection
 28 |     api_token = os.getenv('CLOUDFLARE_API_TOKEN')
 29 |     account_id = os.getenv('CLOUDFLARE_ACCOUNT_ID')
 30 | 
 31 |     if api_token and account_id:
 32 |         print("\n=== Testing Cloudflare API Connection ===")
 33 |         headers = {
 34 |             'Authorization': f'Bearer {api_token}',
 35 |             'Content-Type': 'application/json'
 36 |         }
 37 | 
 38 |         # Test API token validity
 39 |         url = f"https://api.cloudflare.com/client/v4/accounts/{account_id}"
 40 |         try:
 41 |             response = requests.get(url, headers=headers, timeout=10)
 42 |             if response.status_code == 200:
 43 |                 print("✅ Cloudflare API token is valid")
 44 |                 return True
 45 |             else:
 46 |                 print(f"❌ Cloudflare API error: {response.status_code}")
 47 |                 print(f"Response: {response.text}")
 48 |                 return False
 49 |         except Exception as e:
 50 |             print(f"❌ Connection error: {str(e)}")
 51 |             return False
 52 |     else:
 53 |         print("❌ Missing API token or account ID")
 54 |         return False
 55 | 
 56 | 
 57 | def test_storage_backend_import():
 58 |     """Test storage backend import and initialization."""
 59 |     print("\n=== Testing Storage Backend Import ===")
 60 |     try:
 61 |         from mcp_memory_service.config import STORAGE_BACKEND
 62 |         print(f"Config says backend is: {STORAGE_BACKEND}")
 63 | 
 64 |         if STORAGE_BACKEND == 'cloudflare':
 65 |             try:
 66 |                 from mcp_memory_service.storage.cloudflare import CloudflareStorage
 67 |                 print("✅ CloudflareStorage import successful")
 68 | 
 69 |                 # Try to initialize
 70 |                 print("\n=== Attempting to Initialize Cloudflare Storage ===")
 71 |                 from mcp_memory_service.config import (
 72 |                     CLOUDFLARE_API_TOKEN, CLOUDFLARE_ACCOUNT_ID,
 73 |                     CLOUDFLARE_VECTORIZE_INDEX, CLOUDFLARE_D1_DATABASE_ID,
 74 |                     CLOUDFLARE_R2_BUCKET, CLOUDFLARE_EMBEDDING_MODEL,
 75 |                     CLOUDFLARE_LARGE_CONTENT_THRESHOLD, CLOUDFLARE_MAX_RETRIES,
 76 |                     CLOUDFLARE_BASE_DELAY
 77 |                 )
 78 |                 storage = CloudflareStorage(
 79 |                     api_token=CLOUDFLARE_API_TOKEN,
 80 |                     account_id=CLOUDFLARE_ACCOUNT_ID,
 81 |                     vectorize_index=CLOUDFLARE_VECTORIZE_INDEX,
 82 |                     d1_database_id=CLOUDFLARE_D1_DATABASE_ID,
 83 |                     r2_bucket=CLOUDFLARE_R2_BUCKET,
 84 |                     embedding_model=CLOUDFLARE_EMBEDDING_MODEL,
 85 |                     large_content_threshold=CLOUDFLARE_LARGE_CONTENT_THRESHOLD,
 86 |                     max_retries=CLOUDFLARE_MAX_RETRIES,
 87 |                     base_delay=CLOUDFLARE_BASE_DELAY
 88 |                 )
 89 |                 print("✅ CloudflareStorage initialized")
 90 |                 return True
 91 |             except Exception as e:
 92 |                 print(f"❌ CloudflareStorage error: {str(e)}")
 93 |                 import traceback
 94 |                 traceback.print_exc()
 95 |                 return False
 96 |         else:
 97 |             print(f"⚠️ Backend is set to {STORAGE_BACKEND}, not cloudflare")
 98 |             return False
 99 | 
100 |     except Exception as e:
101 |         print(f"❌ Import error: {str(e)}")
102 |         import traceback
103 |         traceback.print_exc()
104 |         return False
105 | 
106 | 
107 | def main():
108 |     """Run all Cloudflare connection tests."""
109 |     print("🧪 Running Cloudflare Integration Tests")
110 |     print("=" * 50)
111 | 
112 |     success = True
113 | 
114 |     # Test API connectivity
115 |     if not test_cloudflare_config():
116 |         success = False
117 | 
118 |     # Test storage backend
119 |     if not test_storage_backend_import():
120 |         success = False
121 | 
122 |     print("\n" + "=" * 50)
123 |     if success:
124 |         print("✅ All Cloudflare connection tests passed!")
125 |         return 0
126 |     else:
127 |         print("❌ Some Cloudflare connection tests failed!")
128 |         return 1
129 | 
130 | 
131 | if __name__ == "__main__":
132 |     sys.exit(main())
```

--------------------------------------------------------------------------------
/SPONSORS.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Support MCP Memory Service Development
  2 | 
  3 | Thank you for considering sponsoring MCP Memory Service! Your support helps maintain and enhance this production-ready knowledge management system.
  4 | 
  5 | ## 🌟 Why Your Sponsorship Matters
  6 | 
  7 | MCP Memory Service is more than just a memory storage tool—it's a comprehensive knowledge management platform that:
  8 | 
  9 | - **Processes queries in <1 second** with advanced semantic search
 10 | - **Manages 300+ memories** in production environments
 11 | - **Provides 16 operations** for complete memory lifecycle management
 12 | - **Offers enterprise features** like automatic backups and health monitoring
 13 | - **Supports the MCP ecosystem** with a reference implementation
 14 | 
 15 | ## 📊 Project Impact
 16 | 
 17 | <p align="center">
 18 |   <img src="https://img.shields.io/badge/Memories_Managed-319+-blue?style=for-the-badge" />
 19 |   <img src="https://img.shields.io/badge/Query_Time-828ms-green?style=for-the-badge" />
 20 |   <img src="https://img.shields.io/badge/Cache_Hit_Rate-100%25-brightgreen?style=for-the-badge" />
 21 |   <img src="https://img.shields.io/badge/Operations-16-orange?style=for-the-badge" />
 22 | </p>
 23 | 
 24 | ## 💎 Sponsorship Tiers
 25 | 
 26 | ### 🥉 Bronze Sponsor ($10/month)
 27 | - ✅ Name listed in README.md
 28 | - ✅ Access to sponsor-only discussions
 29 | - ✅ Early access to new features
 30 | - ✅ Sponsor badge on GitHub profile
 31 | 
 32 | ### 🥈 Silver Sponsor ($50/month)
 33 | - ✅ All Bronze benefits
 34 | - ✅ Priority issue support
 35 | - ✅ Monthly development updates
 36 | - ✅ Name in release notes
 37 | - ✅ Access to development roadmap
 38 | 
 39 | ### 🥇 Gold Sponsor ($200/month)
 40 | - ✅ All Silver benefits
 41 | - ✅ Feature request priority
 42 | - ✅ 1-on-1 monthly video call
 43 | - ✅ Logo on project documentation
 44 | - ✅ Custom integration support
 45 | 
 46 | ### 💎 Diamond Sponsor ($500+/month)
 47 | - ✅ All Gold benefits
 48 | - ✅ Custom feature development
 49 | - ✅ Direct integration support
 50 | - ✅ Team training sessions
 51 | - ✅ Logo on project homepage
 52 | - ✅ Co-marketing opportunities
 53 | 
 54 | ## 🎯 Sponsorship Goals
 55 | 
 56 | Your sponsorship directly funds:
 57 | 
 58 | ### Immediate Goals (Q3 2025)
 59 | - [ ] **$200/month** - Automatic backup scheduling system
 60 | - [ ] **$400/month** - Multi-language support (ES, FR, DE, JP)
 61 | - [ ] **$600/month** - Cloud sync capabilities (AWS, GCP, Azure)
 62 | 
 63 | ### Long-term Goals (Q4 2025)
 64 | - [ ] **$800/month** - Advanced analytics dashboard
 65 | - [ ] **$1000/month** - Plugin system for custom extensions
 66 | - [ ] **$1500/month** - Enterprise authentication (SSO, LDAP)
 67 | 
 68 | ## 🤝 How to Sponsor
 69 | 
 70 | ### GitHub Sponsors (Recommended)
 71 | <a href="https://github.com/sponsors/doobidoo">
 72 |   <img src="https://img.shields.io/badge/Sponsor_on_GitHub-❤️-ea4aaa?style=for-the-badge&logo=github-sponsors" />
 73 | </a>
 74 | 
 75 | ### One-time Donations
 76 | - **Ko-fi**: [ko-fi.com/doobidoo](https://ko-fi.com/doobidoo)
 77 | - **Buy Me a Coffee**: [buymeacoffee.com/doobidoo](https://coff.ee/doobidoo)
 78 | - **PayPal**: [paypal.me/heinrichkrupp1](https://paypal.me/heinrichkrupp1)
 79 | 
 80 | ### Cryptocurrency
 81 | - **Bitcoin**: `bc1qypcx7m9jl3mkptvc3xrzyd7dywjctpxyvaajgr`
 82 | - **Ethereum**: `0xf049d21449D1F6FAD2B94080c40B751147F1099a`
 83 | 
 84 | ## 🏆 Current Sponsors
 85 | 
 86 | ### 💎 Diamond Sponsors
 87 | *Be the first Diamond sponsor!*
 88 | 
 89 | ### 🥇 Gold Sponsors
 90 | *Be the first Gold sponsor!*
 91 | 
 92 | ### 🥈 Silver Sponsors
 93 | *Be the first Silver sponsor!*
 94 | 
 95 | ### 🥉 Bronze Sponsors
 96 | *Be the first Bronze sponsor!*
 97 | 
 98 | ## 📈 Sponsorship Benefits in Detail
 99 | 
100 | ### For Individuals
101 | - Support open-source development
102 | - Get priority support for your use cases
103 | - Shape the future of the project
104 | - Learn from direct developer interaction
105 | 
106 | ### For Companies
107 | - Ensure long-term project sustainability
108 | - Get custom features for your needs
109 | - Receive integration support
110 | - Show your commitment to open source
111 | - Marketing visibility to our user base
112 | 
113 | ## 💬 Testimonials
114 | 
115 | > "MCP Memory Service transformed how our AI assistants manage context. The semantic search is incredibly fast and accurate." - *Production User*
116 | 
117 | > "The tag system and memory maintenance features saved us hours of manual organization work." - *Enterprise User*
118 | 
119 | ## 📞 Contact
120 | 
121 | For custom sponsorship packages or enterprise inquiries:
122 | - Email: [[email protected]]
123 | - Discord: [Join our community](https://discord.gg/cc4BU9Hqfinder)
124 | - GitHub Discussions: [Start a conversation](https://github.com/doobidoo/mcp-memory-service/discussions)
125 | 
126 | ## 🙏 Thank You
127 | 
128 | Every sponsorship, no matter the size, makes a difference. Your support enables continued development, better documentation, and a stronger community around MCP Memory Service.
129 | 
130 | Together, we're building the future of AI memory management!
131 | 
132 | ---
133 | 
134 | <p align="center">
135 |   <a href="https://github.com/doobidoo/mcp-memory-service">
136 |     <img src="https://img.shields.io/github/stars/doobidoo/mcp-memory-service?style=social" />
137 |   </a>
138 |   <a href="https://github.com/doobidoo/mcp-memory-service/fork">
139 |     <img src="https://img.shields.io/github/forks/doobidoo/mcp-memory-service?style=social" />
140 |   </a>
141 | </p>
```

--------------------------------------------------------------------------------
/docs/integrations/groq-integration-summary.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Groq Bridge Integration Summary
  2 | 
  3 | ## Overview
  4 | 
  5 | Groq bridge integration provides ultra-fast LLM inference (roughly 10x faster than the Gemini CLI, per the performance comparison below) for code quality analysis workflows. It is an **optional enhancement** to the existing Gemini CLI integration.
  6 | 
  7 | ## Installation Status
  8 | 
  9 | ### ✅ Completed
 10 | - Groq bridge relocated to `scripts/utils/groq_agent_bridge.py`
 11 | - Documentation moved to `docs/integrations/groq-bridge.md`
 12 | - CLAUDE.md updated with Groq integration
 13 | - code-quality-guard agent updated to support both Gemini and Groq
 14 | - Pre-commit hook installed at `.git/hooks/pre-commit`
 15 | 
 16 | ### ⚠️ Pending (User Setup Required)
 17 | 1. Install groq Python package:
 18 |    ```bash
 19 |    pip install groq
 20 |    # or
 21 |    uv pip install groq
 22 |    ```
 23 | 
 24 | 2. Set GROQ_API_KEY:
 25 |    ```bash
 26 |    export GROQ_API_KEY="your-api-key-here"
 27 |    # Get your key from: https://console.groq.com/keys
 28 |    ```
 29 | 
 30 | ## Usage
 31 | 
 32 | ### Gemini CLI (Default - Currently Working)
 33 | ```bash
 34 | # Complexity analysis
 35 | gemini "Analyze complexity 1-10 per function: $(cat file.py)"
 36 | 
 37 | # Security scan
 38 | gemini "Check for security vulnerabilities: $(cat file.py)"
 39 | ```
 40 | 
 41 | ### Groq Bridge (Optional - After Setup)
 42 | ```bash
 43 | # Complexity analysis (10x faster)
 44 | python scripts/utils/groq_agent_bridge.py "Analyze complexity 1-10 per function: $(cat file.py)"
 45 | 
 46 | # Security scan (10x faster)
 47 | python scripts/utils/groq_agent_bridge.py "Check for security vulnerabilities: $(cat file.py)" --json
 48 | 
 49 | # With custom model
 50 | python scripts/utils/groq_agent_bridge.py "Your prompt" --model llama2-70b-4096 --temperature 0.3
 51 | ```
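Under the hood, a bridge call boils down to a single chat completion against the Groq SDK. The sketch below is hypothetical (it is not the actual `groq_agent_bridge.py` implementation, and the default model name is just the one quoted above -- substitute whichever model your Groq account exposes):

```python
import os

def build_request(prompt: str, model: str = "llama2-70b-4096",
                  temperature: float = 0.2) -> dict:
    """Assemble the chat-completion payload the Groq SDK expects."""
    return {
        "model": model,
        "temperature": temperature,
        "messages": [{"role": "user", "content": prompt}],
    }

def analyze(prompt: str) -> str:
    """Send one analysis prompt to Groq and return the text reply."""
    from groq import Groq  # requires: pip install groq
    client = Groq(api_key=os.environ["GROQ_API_KEY"])
    response = client.chat.completions.create(**build_request(prompt))
    return response.choices[0].message.content
```

Calling `analyze(...)` requires the `groq` package and `GROQ_API_KEY` from the setup steps above.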
 52 | 
 53 | ## Pre-Commit Hook
 54 | 
 55 | The pre-commit hook is installed and will run automatically on `git commit`. It currently uses **Gemini CLI** by default.
 56 | 
 57 | **What it checks:**
 58 | - Code complexity (blocks if score >8, warns at score 7)
 59 | - Security vulnerabilities (blocks on any findings)
 60 | - SQL injection, XSS, command injection patterns
 61 | - Hardcoded secrets
 62 | 
 63 | **Hook location:** `.git/hooks/pre-commit` → `scripts/hooks/pre-commit`
 64 | 
 65 | **To use Groq instead of Gemini in hooks:**
 66 | Edit `scripts/hooks/pre-commit` and replace `gemini` commands with:
 67 | ```bash
 68 | python scripts/utils/groq_agent_bridge.py
 69 | ```
 70 | 
 71 | ## Testing the Integration
 72 | 
 73 | ### Test Groq Bridge (Requires Setup)
 74 | ```bash
 75 | # Quick test
 76 | bash scripts/utils/test_groq_bridge.sh
 77 | 
 78 | # Manual test
 79 | python scripts/utils/groq_agent_bridge.py "Rate the complexity of: def add(a,b): return a+b"
 80 | ```
 81 | 
 82 | ### Test Pre-Commit Hook (Uses Gemini)
 83 | ```bash
 84 | # Create a test file
 85 | echo "def test(): pass" > test.py
 86 | 
 87 | # Stage it
 88 | git add test.py
 89 | 
 90 | # Commit will trigger hook
 91 | git commit -m "test: pre-commit hook"
 92 | 
 93 | # The hook will run Gemini CLI analysis automatically
 94 | ```
 95 | 
 96 | ## Performance Comparison
 97 | 
 98 | | Task | Gemini CLI | Groq Bridge | Speedup |
 99 | |------|-----------|-------------|---------|
100 | | Complexity analysis (1 file) | ~3-5s | ~300-500ms | 10x |
101 | | Security scan (1 file) | ~3-5s | ~300-500ms | 10x |
102 | | TODO prioritization (10 files) | ~30s | ~3s | 10x |
103 | 
104 | ## When to Use Each
105 | 
106 | **Use Gemini CLI (default):**
107 | - ✅ Already authenticated and working
108 | - ✅ One-off analysis during development
109 | - ✅ No setup required
110 | 
111 | **Use Groq Bridge (optional):**
112 | - ✅ CI/CD pipelines (faster builds)
113 | - ✅ Large-scale codebase analysis
114 | - ✅ Pre-commit hooks on large files
115 | - ✅ Batch processing multiple files
116 | 
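For the batch-processing case, a small driver script can fan the bridge out over many files. A minimal sketch -- the helper names and prompt wording are illustrative; only the script path and `--json` flag come from this document:

```python
import pathlib
import subprocess

def bridge_command(file_text: str, json_output: bool = True) -> list:
    """Build the argv for one groq_agent_bridge.py security scan."""
    cmd = [
        "python", "scripts/utils/groq_agent_bridge.py",
        f"Check for security vulnerabilities: {file_text}",
    ]
    if json_output:
        cmd.append("--json")
    return cmd

def scan_directory(root: str = "src") -> None:
    """Run the bridge once per .py file and print the first line of each reply."""
    for path in sorted(pathlib.Path(root).rglob("*.py")):
        result = subprocess.run(bridge_command(path.read_text()),
                                capture_output=True, text=True)
        first = result.stdout.splitlines()[0] if result.stdout else result.stderr.strip()
        print(f"{path}: {first}")
```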
117 | ## Integration Points
118 | 
119 | The Groq bridge is integrated into:
120 | 
121 | 1. **code-quality-guard agent** (`.claude/agents/code-quality-guard.md`)
122 |    - Supports both Gemini and Groq
123 |    - Examples show both options
124 | 
125 | 2. **CLAUDE.md** (lines 343-377)
126 |    - Agent integrations table updated
127 |    - Usage examples for both tools
128 | 
129 | 3. **Pre-commit hook** (`scripts/hooks/pre-commit`)
130 |    - Currently uses Gemini (working out of the box)
131 |    - Can be switched to Groq after setup
132 | 
133 | 4. **Utility scripts** (`scripts/utils/`)
134 |    - `groq_agent_bridge.py` - Main bridge implementation
135 |    - `test_groq_bridge.sh` - Integration test script
136 | 
137 | ## Troubleshooting
138 | 
139 | **Issue: ModuleNotFoundError: No module named 'groq'**
140 | ```bash
141 | pip install groq
142 | # or
143 | uv pip install groq
144 | ```
145 | 
146 | **Issue: GROQ_API_KEY environment variable required**
147 | ```bash
148 | export GROQ_API_KEY="your-api-key"
149 | # Get key from: https://console.groq.com/keys
150 | ```
151 | 
152 | **Issue: Gemini CLI authentication in pre-commit hook**
153 | - The hook uses the Gemini CLI from your PATH
154 | - Authentication state should be shared across terminal sessions
155 | - If issues persist, manually run: `gemini --version` to authenticate
156 | 
157 | ## Next Steps
158 | 
159 | 1. **Optional**: Install groq package and set API key to enable ultra-fast inference
160 | 2. **Test**: Run a manual commit to see pre-commit hook in action with Gemini
161 | 3. **Optimize**: Switch pre-commit hook to Groq for faster CI/CD workflows
162 | 
163 | ## Documentation References
164 | 
165 | - Groq Bridge Setup: `docs/integrations/groq-bridge.md`
166 | - Code Quality Agent: `.claude/agents/code-quality-guard.md`
167 | - CLAUDE.md Agent Section: Lines 343-377
168 | 
```

--------------------------------------------------------------------------------
/src/mcp_memory_service/utils/debug.py:
--------------------------------------------------------------------------------

```python
  1 | # Copyright 2024 Heinrich Krupp
  2 | #
  3 | # Licensed under the Apache License, Version 2.0 (the "License");
  4 | # you may not use this file except in compliance with the License.
  5 | # You may obtain a copy of the License at
  6 | #
  7 | #     http://www.apache.org/licenses/LICENSE-2.0
  8 | #
  9 | # Unless required by applicable law or agreed to in writing, software
 10 | # distributed under the License is distributed on an "AS IS" BASIS,
 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 12 | # See the License for the specific language governing permissions and
 13 | # limitations under the License.
 14 | 
 15 | """Debug utilities for memory service."""
 16 | from typing import Dict, Any, List
 18 | from ..models.memory import Memory, MemoryQueryResult
 19 | 
 20 | def _get_embedding_model(storage):
 21 |     """
 22 |     Get the embedding model from storage, handling different backend attribute names.
 23 | 
 24 |     Different backends use different attribute names (e.g., 'model', 'embedding_model').
 25 |     """
 26 |     if hasattr(storage, 'model') and storage.model is not None:
 27 |         return storage.model
 28 |     elif hasattr(storage, 'embedding_model') and storage.embedding_model is not None:
 29 |         return storage.embedding_model
 30 |     else:
 31 |         raise AttributeError(f"Storage backend {type(storage).__name__} has no embedding model attribute")
 32 | 
 33 | def get_raw_embedding(storage, content: str) -> Dict[str, Any]:
 34 |     """Get raw embedding vector for content."""
 35 |     try:
 36 |         model = _get_embedding_model(storage)
 37 |         embedding = model.encode(content).tolist()
 38 |         return {
 39 |             "status": "success",
 40 |             "embedding": embedding,
 41 |             "dimension": len(embedding)
 42 |         }
 43 |     except Exception as e:
 44 |         return {
 45 |             "status": "error",
 46 |             "error": str(e)
 47 |         }
 48 | 
 49 | def check_embedding_model(storage) -> Dict[str, Any]:
 50 |     """Check if embedding model is loaded and working."""
 51 |     try:
 52 |         model = _get_embedding_model(storage)
 53 |         test_embedding = model.encode("test").tolist()
 54 |         
 55 |         # Try to get model name, handling different model types
 56 |         model_name = "unknown"
 57 |         if hasattr(model, '_model_card_vars'):
 58 |             model_name = model._model_card_vars.get('modelname', 'unknown')
 59 |         elif hasattr(storage, 'embedding_model_name'):
 60 |             model_name = storage.embedding_model_name
 61 |         
 62 |         return {
 63 |             "status": "healthy",
 64 |             "model_loaded": True,
 65 |             "model_name": model_name,
 66 |             "embedding_dimension": len(test_embedding)
 67 |         }
 68 |     except Exception as e:
 69 |         return {
 70 |             "status": "unhealthy",
 71 |             "error": str(e)
 72 |         }
 73 | 
 74 | async def debug_retrieve_memory(
 75 |     storage,
 76 |     query: str,
 77 |     n_results: int = 5,
 78 |     similarity_threshold: float = 0.0
 79 | ) -> List[MemoryQueryResult]:
 80 |     """Retrieve memories with enhanced debug information including raw similarity scores."""
 81 |     try:
 82 |         # Use the storage's retrieve method which handles embedding generation and search
 83 |         results = await storage.retrieve(query, n_results)
 84 | 
 85 |         # Filter by similarity threshold and enhance debug info
 86 |         filtered_results = []
 87 |         for result in results:
 88 |             if result.relevance_score >= similarity_threshold:
 89 |                 # Enhance debug info with additional details
 90 |                 enhanced_debug_info = result.debug_info.copy() if result.debug_info else {}
 91 |                 enhanced_debug_info.update({
 92 |                     "raw_similarity": result.relevance_score,
 93 |                     "backend": type(storage).__name__,
 94 |                     "query": query,
 95 |                     "similarity_threshold": similarity_threshold
 96 |                 })
 97 | 
 98 |                 filtered_results.append(
 99 |                     MemoryQueryResult(
100 |                         memory=result.memory,
101 |                         relevance_score=result.relevance_score,
102 |                         debug_info=enhanced_debug_info
103 |                     )
104 |                 )
105 | 
106 |         return filtered_results
107 |     except Exception:
108 |         # Return empty list on error to match original behavior
109 |         return []
110 | 
111 | async def exact_match_retrieve(storage, content: str) -> List[Memory]:
112 |     """Retrieve memories using exact content match (requires a ChromaDB-style storage exposing ``collection``)."""
113 |     try:
114 |         results = storage.collection.get(
115 |             where={"content": content}
116 |         )
117 |         
118 |         memories = []
119 |         for i in range(len(results["ids"])):
120 |             memory = Memory.from_dict(
121 |                 {
122 |                     "content": results["documents"][i],
123 |                     **results["metadatas"][i]
124 |                 },
125 |                 embedding=results["embeddings"][i] if "embeddings" in results else None
126 |             )
127 |             memories.append(memory)
128 |         
129 |         return memories
130 |     except Exception:
131 |         return []
```

--------------------------------------------------------------------------------
/.github/workflows/bridge-tests.yml:
--------------------------------------------------------------------------------

```yaml
  1 | name: HTTP-MCP Bridge Tests
  2 | 
  3 | on:
  4 |   push:
  5 |     paths:
  6 |       - 'examples/http-mcp-bridge.js'
  7 |       - 'tests/bridge/**'
  8 |       - 'tests/integration/test_bridge_integration.js'
  9 |       - '.github/workflows/bridge-tests.yml'
 10 |   pull_request:
 11 |     paths:
 12 |       - 'examples/http-mcp-bridge.js'  
 13 |       - 'tests/bridge/**'
 14 |       - 'tests/integration/test_bridge_integration.js'
 15 | 
 16 | jobs:
 17 |   test-bridge:
 18 |     runs-on: ubuntu-latest
 19 |     strategy:
 20 |       matrix:
 21 |         node-version: [18.x, 20.x]
 22 |     
 23 |     steps:
 24 |     - name: Checkout code
 25 |       uses: actions/checkout@v4
 26 |     
 27 |     - name: Setup Node.js ${{ matrix.node-version }}
 28 |       uses: actions/setup-node@v4
 29 |       with:
 30 |         node-version: ${{ matrix.node-version }}
 31 |     
 32 |     - name: Install bridge unit test dependencies
 33 |       run: npm install
 34 |       working-directory: tests/bridge
 35 |     
 36 |     - name: Install integration test dependencies
 37 |       run: npm install
 38 |       working-directory: tests/integration
 39 |     
 40 |     - name: Run bridge unit tests
 41 |       run: npm test
 42 |       working-directory: tests/bridge
 43 |     
 44 |     - name: Run integration tests
 45 |       run: npm test
 46 |       working-directory: tests/integration
 47 |     
 48 |     - name: Validate health endpoint fix
 49 |       run: |
 50 |         if ! grep -q "/api/health" examples/http-mcp-bridge.js; then
 51 |           echo "❌ ERROR: Health endpoint should be /api/health"
 52 |           exit 1
 53 |         fi
 54 |         echo "✅ Health endpoint check passed"
 55 |     
 56 |     - name: Validate status code handling fix
 57 |       run: |
 58 |         if grep -v "|| response.statusCode === 201" examples/http-mcp-bridge.js | grep -q "statusCode === 201"; then
 59 |           echo "❌ ERROR: Found hardcoded 201-only status check - server returns 200!"
 60 |           exit 1
 61 |         fi
 62 |         echo "✅ Status code handling check passed"
 63 |     
 64 |     - name: Validate URL construction fix
 65 |       run: |
 66 |         if grep -q "new URL(path, this.endpoint)" examples/http-mcp-bridge.js; then
 67 |           echo "❌ ERROR: Old URL construction bug still present"
 68 |           exit 1
 69 |         fi
 70 |         echo "✅ URL construction check passed"
 71 | 
 72 |   contract-validation:
 73 |     runs-on: ubuntu-latest
 74 |     
 75 |     steps:
 76 |     - name: Checkout code
 77 |       uses: actions/checkout@v4
 78 |     
 79 |     - name: Setup Node.js
 80 |       uses: actions/setup-node@v4
 81 |       with:
 82 |         node-version: '20.x'
 83 |     
 84 |     - name: Validate API contracts
 85 |       run: node -e "
 86 |         const path = require('path');
 87 |         const mocks = require('./tests/bridge/mock_responses.js');
 88 |         const memorySuccess = mocks.mockResponses.memories.createSuccess;
 89 |         if (memorySuccess.status !== 200) throw new Error('Memory creation should return 200');
 90 |         if (!memorySuccess.body.hasOwnProperty('success')) throw new Error('Memory response must have success field');
 91 |         console.log('✅ API contract validation passed');
 92 |         "
 93 |     
 94 |     - name: Generate contract report
 95 |       run: |
 96 |         echo "# API Contract Report" > contract-report.md
 97 |         echo "Generated: $(date)" >> contract-report.md
 98 |         echo "" >> contract-report.md
 99 |         echo "## Critical Endpoints Validated" >> contract-report.md
100 |         echo "- ✅ POST /api/memories: Returns 200 with success field" >> contract-report.md
101 |         echo "- ✅ GET /api/health: Correct endpoint path used" >> contract-report.md
102 |         echo "- ✅ URL construction: Preserves /api base path" >> contract-report.md
103 |         echo "- ✅ Status codes: Handles HTTP 200 correctly" >> contract-report.md
104 |     
105 |     - name: Upload contract report
106 |       uses: actions/upload-artifact@v4
107 |       if: always()
108 |       with:
109 |         name: api-contract-report
110 |         path: contract-report.md
111 | 
112 |   quick-smoke-test:
113 |     runs-on: ubuntu-latest
114 |     
115 |     steps:
116 |     - name: Checkout code
117 |       uses: actions/checkout@v4
118 |       
119 |     - name: Setup Node.js
120 |       uses: actions/setup-node@v4
121 |       with:
122 |         node-version: '20.x'
123 |     
124 |     - name: Check JavaScript syntax
125 |       run: node -c examples/http-mcp-bridge.js
126 |     
127 |     - name: Test bridge instantiation
128 |       run: node -e "
129 |         const Bridge = require('./examples/http-mcp-bridge.js');
130 |         const bridge = new Bridge();
131 |         console.log('✅ Bridge can be instantiated');
132 |         "
133 |     
134 |     - name: Test URL construction
135 |       run: |
136 |         node -e "
137 |         const Bridge = require('./examples/http-mcp-bridge.js');
138 |         const bridge = new Bridge();
139 |         bridge.endpoint = 'https://test.local:8443/api';
140 |         const fullPath = '/memories';
141 |         let baseUrl;
142 |         if (bridge.endpoint.endsWith('/')) {
143 |           baseUrl = bridge.endpoint.slice(0, -1);
144 |         } else {
145 |           baseUrl = bridge.endpoint;
146 |         }
147 |         if (baseUrl + fullPath !== 'https://test.local:8443/api/memories') {
148 |           throw new Error('URL construction test failed');
149 |         }
150 |         console.log('✅ URL construction works correctly');
151 |         "
```

--------------------------------------------------------------------------------
/claude-hooks/tests/test-threading.json:
--------------------------------------------------------------------------------

```json
  1 | {
  2 |   "sessions": [
  3 |     {
  4 |       "id": "session-1",
  5 |       "startTime": "2025-10-03T13:00:20.984Z",
  6 |       "endTime": "2025-10-03T13:00:20.985Z",
  7 |       "projectContext": {
  8 |         "name": "mcp-memory-service",
  9 |         "type": "Multi-language Project",
 10 |         "languages": [
 11 |           "javascript",
 12 |           "python"
 13 |         ],
 14 |         "frameworks": [
 15 |           "node.js",
 16 |           "fastapi"
 17 |         ],
 18 |         "tools": [
 19 |           "git",
 20 |           "npm",
 21 |           "pip"
 22 |         ],
 23 |         "confidence": 0.95
 24 |       },
 25 |       "workingDirectory": "/test/directory",
 26 |       "initialTopics": [],
 27 |       "finalTopics": [],
 28 |       "memoriesLoaded": [],
 29 |       "memoriesCreated": [],
 30 |       "conversationSummary": "Implemented auth",
 31 |       "outcome": {
 32 |         "type": "completed",
 33 |         "summary": "Implemented auth"
 34 |       },
 35 |       "threadId": "thread-46e4b32e1e1d1fe9",
 36 |       "parentSessionId": null,
 37 |       "childSessionIds": [
 38 |         "session-2"
 39 |       ],
 40 |       "status": "completed"
 41 |     },
 42 |     {
 43 |       "id": "session-2",
 44 |       "startTime": "2025-10-03T13:00:20.985Z",
 45 |       "endTime": null,
 46 |       "projectContext": {
 47 |         "name": "mcp-memory-service",
 48 |         "type": "Multi-language Project",
 49 |         "languages": [
 50 |           "javascript",
 51 |           "python"
 52 |         ],
 53 |         "frameworks": [
 54 |           "node.js",
 55 |           "fastapi"
 56 |         ],
 57 |         "tools": [
 58 |           "git",
 59 |           "npm",
 60 |           "pip"
 61 |         ],
 62 |         "confidence": 0.95
 63 |       },
 64 |       "workingDirectory": "/test/directory",
 65 |       "initialTopics": [],
 66 |       "finalTopics": [],
 67 |       "memoriesLoaded": [],
 68 |       "memoriesCreated": [],
 69 |       "conversationSummary": null,
 70 |       "outcome": null,
 71 |       "threadId": "thread-46e4b32e1e1d1fe9",
 72 |       "parentSessionId": "session-1",
 73 |       "childSessionIds": [],
 74 |       "status": "active"
 75 |     }
 76 |   ],
 77 |   "conversationThreads": [
 78 |     {
 79 |       "id": "thread-3c54682968af57cc",
 80 |       "createdAt": "2025-08-20T11:38:33.002Z",
 81 |       "projectContext": {
 82 |         "name": "mcp-memory-service",
 83 |         "type": "Multi-language Project",
 84 |         "languages": [
 85 |           "javascript",
 86 |           "python"
 87 |         ],
 88 |         "frameworks": [
 89 |           "node.js",
 90 |           "fastapi"
 91 |         ],
 92 |         "tools": [
 93 |           "git",
 94 |           "npm",
 95 |           "pip"
 96 |         ],
 97 |         "confidence": 0.95
 98 |       },
 99 |       "sessionIds": [
100 |         "session-1"
101 |       ],
102 |       "topics": [],
103 |       "outcomes": [
104 |         {
105 |           "sessionId": "session-1",
106 |           "outcome": {
107 |             "type": "completed",
108 |             "summary": "Implemented auth"
109 |           },
110 |           "timestamp": "2025-08-20T11:38:33.003Z"
111 |         },
112 |         {
113 |           "sessionId": "session-1",
114 |           "outcome": {
115 |             "type": "completed",
116 |             "summary": "Implemented auth"
117 |           },
118 |           "timestamp": "2025-08-20T11:42:10.898Z"
119 |         },
120 |         {
121 |           "sessionId": "session-1",
122 |           "outcome": {
123 |             "type": "completed",
124 |             "summary": "Implemented auth"
125 |           },
126 |           "timestamp": "2025-08-20T11:43:24.009Z"
127 |         },
128 |         {
129 |           "sessionId": "session-1",
130 |           "outcome": {
131 |             "type": "completed",
132 |             "summary": "Implemented auth"
133 |           },
134 |           "timestamp": "2025-08-20T11:43:49.798Z"
135 |         },
136 |         {
137 |           "sessionId": "session-1",
138 |           "outcome": {
139 |             "type": "completed",
140 |             "summary": "Implemented auth"
141 |           },
142 |           "timestamp": "2025-08-20T11:44:35.339Z"
143 |         },
144 |         {
145 |           "sessionId": "session-1",
146 |           "outcome": {
147 |             "type": "completed",
148 |             "summary": "Implemented auth"
149 |           },
150 |           "timestamp": "2025-08-20T11:45:38.591Z"
151 |         }
152 |       ],
153 |       "status": "active",
154 |       "lastUpdated": "2025-08-20T11:45:38.591Z"
155 |     },
156 |     {
157 |       "id": "thread-46e4b32e1e1d1fe9",
158 |       "createdAt": "2025-10-03T13:00:20.984Z",
159 |       "projectContext": {
160 |         "name": "mcp-memory-service",
161 |         "type": "Multi-language Project",
162 |         "languages": [
163 |           "javascript",
164 |           "python"
165 |         ],
166 |         "frameworks": [
167 |           "node.js",
168 |           "fastapi"
169 |         ],
170 |         "tools": [
171 |           "git",
172 |           "npm",
173 |           "pip"
174 |         ],
175 |         "confidence": 0.95
176 |       },
177 |       "sessionIds": [
178 |         "session-1"
179 |       ],
180 |       "topics": [],
181 |       "outcomes": [
182 |         {
183 |           "sessionId": "session-1",
184 |           "outcome": {
185 |             "type": "completed",
186 |             "summary": "Implemented auth"
187 |           },
188 |           "timestamp": "2025-10-03T13:00:20.985Z"
189 |         }
190 |       ],
191 |       "status": "active",
192 |       "lastUpdated": "2025-10-03T13:00:20.985Z"
193 |     }
194 |   ],
195 |   "projectSessions": {
196 |     "mcp-memory-service": [
197 |       "session-1",
198 |       "session-2"
199 |     ]
200 |   },
201 |   "lastSaved": "2025-10-03T13:00:20.985Z"
202 | }
```

--------------------------------------------------------------------------------
/claude-hooks/config.json:
--------------------------------------------------------------------------------

```json
  1 | {
  2 |   "codeExecution": {
  3 |     "enabled": true,
  4 |     "timeout": 15000,
  5 |     "fallbackToMCP": true,
  6 |     "enableMetrics": true,
  7 |     "pythonPath": "python3"
  8 |   },
  9 |   "memoryService": {
 10 |     "protocol": "auto",
 11 |     "preferredProtocol": "http",
 12 |     "fallbackEnabled": true,
 13 |     "http": {
 14 |       "endpoint": "http://127.0.0.1:8000",
 15 |       "apiKey": "VhOGAoUOE5_BMzu-phDORdyXHNMcDRBxvndK_Uop",
 16 |       "healthCheckTimeout": 3000,
 17 |       "useDetailedHealthCheck": true
 18 |     },
 19 |     "mcp": {
 20 |       "serverCommand": [
 21 |         "uv",
 22 |         "run",
 23 |         "memory",
 24 |         "server",
 25 |         "-s",
 26 |         "hybrid"
 27 |       ],
 28 |       "serverWorkingDir": "/home/hkr/repositories/mcp-memory-service",
 29 |       "connectionTimeout": 2000,
 30 |       "toolCallTimeout": 3000
 31 |     },
 32 |     "defaultTags": [
 33 |       "claude-code",
 34 |       "auto-generated"
 35 |     ],
 36 |     "maxMemoriesPerSession": 8,
 37 |     "enableSessionConsolidation": true,
 38 |     "injectAfterCompacting": false,
 39 |     "recentFirstMode": true,
 40 |     "recentMemoryRatio": 0.6,
 41 |     "recentTimeWindow": "last week",
 42 |     "fallbackTimeWindow": "last 2 weeks",
 43 |     "showStorageSource": true,
 44 |     "sourceDisplayMode": "brief"
 45 |   },
 46 |   "projectDetection": {
 47 |     "gitRepository": true,
 48 |     "packageFiles": [
 49 |       "package.json",
 50 |       "pyproject.toml",
 51 |       "Cargo.toml",
 52 |       "go.mod",
 53 |       "pom.xml"
 54 |     ],
 55 |     "frameworkDetection": true,
 56 |     "languageDetection": true,
 57 |     "confidenceThreshold": 0.3
 58 |   },
 59 |   "memoryScoring": {
 60 |     "weights": {
 61 |       "timeDecay": 0.5,
 62 |       "tagRelevance": 0.2,
 63 |       "contentRelevance": 0.15,
 64 |       "contentQuality": 0.2,
 65 |       "conversationRelevance": 0.25
 66 |     },
 67 |     "minRelevanceScore": 0.4,
 68 |     "timeDecayRate": 0.15,
 69 |     "enableConversationContext": true,
 70 |     "autoCalibrate": true
 71 |   },
 72 |   "contextFormatting": {
 73 |     "includeProjectSummary": true,
 74 |     "includeRelevanceScores": false,
 75 |     "groupByCategory": true,
 76 |     "maxContentLength": 500,
 77 |     "maxContentLengthCLI": 400,
 78 |     "maxContentLengthCategorized": 350,
 79 |     "includeTimestamps": true,
 80 |     "truncationMode": "smart"
 81 |   },
 82 |   "sessionAnalysis": {
 83 |     "extractTopics": true,
 84 |     "extractDecisions": true,
 85 |     "extractInsights": true,
 86 |     "extractCodeChanges": true,
 87 |     "extractNextSteps": true,
 88 |     "minSessionLength": 100,
 89 |     "minConfidence": 0.1
 90 |   },
 91 |   "hooks": {
 92 |     "sessionStart": {
 93 |       "enabled": true,
 94 |       "timeout": 20000,
 95 |       "priority": "high"
 96 |     },
 97 |     "midConversation": {
 98 |       "timeout": 8000,
 99 |       "priority": "high"
100 |     },
101 |     "sessionEnd": {
102 |       "enabled": true,
103 |       "timeout": 15000,
104 |       "priority": "normal"
105 |     },
106 |     "topicChange": {
107 |       "enabled": false,
108 |       "timeout": 5000,
109 |       "priority": "low"
110 |     }
111 |   },
112 |   "gitAnalysis": {
113 |     "enabled": true,
114 |     "commitLookback": 14,
115 |     "maxCommits": 20,
116 |     "includeChangelog": true,
117 |     "maxGitMemories": 3,
118 |     "gitContextWeight": 1.2
119 |   },
120 |   "output": {
121 |     "verbose": true,
122 |     "showMemoryDetails": true,
123 |     "showProjectDetails": true,
124 |     "showScoringDetails": false,
125 |     "showRecencyInfo": true,
126 |     "showPhaseDetails": true,
127 |     "showGitAnalysis": true,
128 |     "cleanMode": false,
129 |     "style": "balanced",
130 |     "showScoringBreakdown": false,
131 |     "adaptiveTruncation": true,
132 |     "prioritySections": [
133 |       "recent-work",
134 |       "current-problems",
135 |       "key-decisions",
136 |       "additional-context"
137 |     ]
138 |   },
139 |   "contentLength": {
140 |     "manyMemories": 300,
141 |     "fewMemories": 500,
142 |     "veryFewMemories": 800
143 |   },
144 |   "naturalTriggers": {
145 |     "enabled": true,
146 |     "triggerThreshold": 0.6,
147 |     "cooldownPeriod": 30000,
148 |     "maxMemoriesPerTrigger": 5
149 |   },
150 |   "performance": {
151 |     "defaultProfile": "balanced",
152 |     "enableMonitoring": true,
153 |     "autoAdjust": true,
154 |     "profiles": {
155 |       "speed_focused": {
156 |         "maxLatency": 100,
157 |         "enabledTiers": [
158 |           "instant"
159 |         ],
160 |         "backgroundProcessing": false,
161 |         "degradeThreshold": 200,
162 |         "description": "Fastest response, minimal memory awareness"
163 |       },
164 |       "balanced": {
165 |         "maxLatency": 200,
166 |         "enabledTiers": [
167 |           "instant",
168 |           "fast"
169 |         ],
170 |         "backgroundProcessing": true,
171 |         "degradeThreshold": 400,
172 |         "description": "Moderate latency, smart memory triggers"
173 |       },
174 |       "memory_aware": {
175 |         "maxLatency": 500,
176 |         "enabledTiers": [
177 |           "instant",
178 |           "fast",
179 |           "intensive"
180 |         ],
181 |         "backgroundProcessing": true,
182 |         "degradeThreshold": 1000,
183 |         "description": "Full memory awareness, accept higher latency"
184 |       }
185 |     }
186 |   },
187 |   "conversationMonitor": {
188 |     "contextWindow": 10,
189 |     "enableCaching": true,
190 |     "maxCacheSize": 100
191 |   },
192 |   "patternDetector": {
193 |     "sensitivity": 0.7,
194 |     "adaptiveLearning": true,
195 |     "learningRate": 0.05
196 |   },
197 |   "logging": {
198 |     "level": "debug",
199 |     "enableDebug": true,
200 |     "logToFile": true,
201 |     "logFilePath": "./claude-hooks.log"
202 |   }
203 | }
```

--------------------------------------------------------------------------------
/claude_commands/memory-health.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Check Memory Service Health
  2 | 
  3 | I'll help you check the health and status of your MCP Memory Service, providing detailed diagnostics and statistics about your memory storage and service connectivity.
  4 | 
  5 | ## What I'll do:
  6 | 
  7 | 1. **Service Discovery**: I'll locate your MCP Memory Service using mDNS auto-discovery or configured endpoints.
  8 | 
  9 | 2. **Connectivity Test**: I'll verify that the service is running and accessible from Claude Code.
 10 | 
 11 | 3. **Health Assessment**: I'll check the database health, memory statistics, and service performance.
 12 | 
 13 | 4. **Storage Analysis**: I'll provide insights about your stored memories, database size, and usage patterns.
 14 | 
 15 | 5. **Troubleshooting**: If issues are found, I'll provide specific recommendations for resolution.
 16 | 
 17 | ## Usage Examples:
 18 | 
 19 | ```bash
 20 | claude /memory-health
 21 | claude /memory-health --detailed
 22 | claude /memory-health --test-operations
 23 | claude /memory-health --export-report
 24 | ```
 25 | 
 26 | ## Implementation:
 27 | 
 28 | I'll perform comprehensive health checks using your MCP Memory Service at `https://memory.local:8443/`:
 29 | 
 30 | 1. **Service Detection**: 
 31 |    - Connect to configured HTTPS endpoint
 32 |    - Check health endpoint at `/api/health/detailed`
 33 |    - Verify HTTPS connectivity with `-k` flag for self-signed certificates
 34 | 
 35 | 2. **Basic Health Check**:
 36 |    - Service responsiveness via `/api/health` endpoint
 37 |    - Detailed diagnostics via `/api/health/detailed`
 38 |    - Database connection status and embedding model availability
 39 | 
 40 | 3. **Detailed Diagnostics**:
 41 |    - Memory count and database statistics from API response
 42 |    - Storage backend type (SQLite-vec, ChromaDB, etc.)
 43 |    - Performance metrics, uptime, and response times
 44 |    - Disk usage warnings and system resource status
 45 | 
 46 | 4. **Operational Testing** (if requested):
 47 |    - Test memory storage and retrieval
 48 |    - Verify search functionality
 49 |    - Check tag operations
 50 | 
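As a sketch, the endpoint checks above look like this in Python. The base URL and paths are taken from this document; certificate verification is disabled to mirror `curl -k` for self-signed certificates. This is a hypothetical helper for illustration, not the service's own code:

```python
import json
import ssl
import urllib.request

BASE = "https://memory.local:8443"  # endpoint from this doc; adjust for your setup


def health_url(base: str, detailed: bool = False) -> str:
    """Build the health endpoint URL (/api/health or /api/health/detailed)."""
    return f"{base}/api/health{'/detailed' if detailed else ''}"


def fetch_health(detailed: bool = False, timeout: float = 5.0) -> dict:
    """Fetch the health report, skipping certificate checks like `curl -k`."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # accept the self-signed certificate
    with urllib.request.urlopen(health_url(BASE, detailed), context=ctx, timeout=timeout) as resp:
        return json.loads(resp.read())
```

Calling `fetch_health(detailed=True)` returns the same diagnostics payload the steps above describe.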
 51 | ## Health Report Includes:
 52 | 
 53 | ### Service Status
 54 | - **Running**: Whether the MCP Memory Service is active
 55 | - **Endpoint**: Service URL and port information
 56 | - **Response Time**: Average API response latency
 57 | - **Version**: Service version and backend type
 58 | - **Uptime**: How long the service has been running
 59 | 
 60 | ### Database Health
 61 | - **Backend Type**: ChromaDB, SQLite-vec, or other storage backend
 62 | - **Connection Status**: Database connectivity and integrity
 63 | - **Total Memories**: Number of stored memory entries
 64 | - **Database Size**: Storage space used by the database
 65 | - **Last Backup**: When backups were last created (if applicable)
 66 | 
 67 | ### Performance Metrics
 68 | - **Query Performance**: Average search and retrieval times
 69 | - **Embedding Model**: Model type and loading status
 70 | - **Memory Usage**: Service memory consumption
 71 | - **Cache Status**: Embedding and model cache statistics
 72 | 
 73 | ### Storage Statistics
 74 | - **Popular Tags**: Most frequently used tags
 75 | - **Memory Types**: Distribution of memory types (note, decision, etc.)
 76 | - **Recent Activity**: Recent storage and retrieval operations
 77 | - **Growth Trends**: Database growth over time
 78 | 
 79 | ## Troubleshooting Features:
 80 | 
 81 | ### Common Issues I'll Check:
 82 | - **Service Not Running**: Instructions to start the MCP Memory Service
 83 | - **Port Conflicts**: Detection of port usage conflicts
 84 | - **Database Corruption**: Database integrity verification
 85 | - **Permission Issues**: File system access and write permissions
 86 | - **Model Loading**: Embedding model download and loading status
 87 | 
 88 | ### Auto-Fix Capabilities:
 89 | - **Restart Service**: Attempt to restart a hung service
 90 | - **Clear Cache**: Clean corrupted cache files
 91 | - **Repair Database**: Basic database repair operations
 92 | - **Update Configuration**: Fix common configuration issues
 93 | 
 94 | ## Arguments:
 95 | 
 96 | - `$ARGUMENTS` - Optional specific health check focus
 97 | - `--detailed` - Show comprehensive diagnostics and statistics  
 98 | - `--test-operations` - Perform functional tests of store/retrieve operations
 99 | - `--check-backups` - Verify backup system health and recent backups
100 | - `--performance-test` - Run performance benchmarks
101 | - `--export-report` - Save health report to file for sharing
102 | - `--fix-issues` - Attempt automatic fixes for detected problems
103 | - `--quiet` - Show only critical issues (minimal output)
104 | 
105 | ## Advanced Diagnostics:
106 | 
107 | - **Network Connectivity**: Test mDNS discovery and HTTP endpoints
108 | - **Database Integrity**: Verify database consistency and repair if needed
109 | - **Model Health**: Check embedding model files and performance
110 | - **Configuration Validation**: Verify environment variables and settings
111 | - **Resource Usage**: Monitor CPU, memory, and disk usage
112 | 
113 | If critical issues are detected, I'll provide step-by-step resolution instructions and can attempt automatic fixes with your permission. For complex issues, I'll recommend specific diagnostic commands or log file locations to investigate further.
114 | 
115 | The health check helps ensure your memory service is running optimally and can identify potential issues before they impact your workflow.
```

--------------------------------------------------------------------------------
/selective_timestamp_recovery.py:
--------------------------------------------------------------------------------

```python
  1 | #!/usr/bin/env python3
  2 | """
  3 | Selective Timestamp Recovery Script
  4 | 
  5 | Merges backup timestamps with current database:
  6 | - Restores timestamps for 2,174 corrupted memories from Nov 5 backup
  7 | - Preserves 807 new memories created Nov 5-17 with their current timestamps
  8 | - Result: ALL 2,981 memories have correct timestamps!
  9 | """
 10 | 
 11 | import sqlite3
 12 | import sys
 13 | from datetime import datetime
 14 | 
 15 | DRY_RUN = '--apply' not in sys.argv
 16 | 
 17 | current_db = r'C:\Users\heinrich.krupp\AppData\Local\mcp-memory\backups\sqlite_vec.db'
 18 | backup_db = r'C:\Users\heinrich.krupp\AppData\Local\mcp-memory\backups\sqlite_vec.backup-20251105-114637.db'
 19 | 
 20 | def selective_recovery():
 21 |     print("="*80)
 22 |     print("SELECTIVE TIMESTAMP RECOVERY")
 23 |     print("="*80)
 24 |     print(f"Mode: {'DRY RUN (no changes)' if DRY_RUN else 'LIVE (applying changes)'}")
 25 |     print()
 26 | 
 27 |     # Open databases
 28 |     current = sqlite3.connect(current_db)
 29 |     backup = sqlite3.connect(backup_db)
 30 | 
 31 |     # Get common hashes
 32 |     cur_hashes = {h[0] for h in current.execute('SELECT content_hash FROM memories').fetchall()}
 33 |     bak_hashes = {h[0] for h in backup.execute('SELECT content_hash FROM memories').fetchall()}
 34 | 
 35 |     common = cur_hashes & bak_hashes
 36 |     only_current = cur_hashes - bak_hashes
 37 | 
 38 |     print(f"Analysis:")
 39 |     print(f"  - {len(common):4} memories to restore from backup")
 40 |     print(f"  - {len(only_current):4} new memories to keep as-is")
 41 |     print(f"  - {len(common) + len(only_current):4} total memories after recovery")
 42 |     print()
 43 | 
 44 |     if DRY_RUN:
 45 |         print("[DRY RUN] Pass --apply to make actual changes")
 46 |         print()
 47 | 
 48 |     # Restore timestamps for common memories
 49 |     restored_count = 0
 50 |     errors = 0
 51 | 
 52 |     print("Restoring timestamps...")
 53 |     for i, content_hash in enumerate(common, 1):
 54 |         try:
 55 |             # Get timestamps from backup
 56 |             bak_row = backup.execute('''
 57 |                 SELECT created_at, created_at_iso, updated_at, updated_at_iso
 58 |                 FROM memories WHERE content_hash = ?
 59 |             ''', (content_hash,)).fetchone()
 60 | 
 61 |             if not bak_row:
 62 |                 continue
 63 | 
 64 |             created_at, created_at_iso, updated_at, updated_at_iso = bak_row
 65 | 
 66 |             # Check if actually corrupted (created on Nov 17)
 67 |             cur_created_iso = current.execute(
 68 |                 'SELECT created_at_iso FROM memories WHERE content_hash = ?',
 69 |                 (content_hash,)
 70 |             ).fetchone()[0]
 71 | 
 72 |             if '2025-11-17' in cur_created_iso:
 73 |                 # Corrupted - restore from backup
 74 |                 if not DRY_RUN:
 75 |                     current.execute('''
 76 |                         UPDATE memories
 77 |                         SET created_at = ?, created_at_iso = ?,
 78 |                             updated_at = ?, updated_at_iso = ?
 79 |                         WHERE content_hash = ?
 80 |                     ''', (created_at, created_at_iso, updated_at, updated_at_iso, content_hash))
 81 | 
 82 |                 restored_count += 1
 83 | 
 84 |                 if restored_count <= 10:  # Show first 10
 85 |                     print(f"  {restored_count:4}. {content_hash[:8]}: {cur_created_iso} => {created_at_iso}")
 86 | 
 87 |             if i % 100 == 0:
 88 |                 print(f"  Progress: {i}/{len(common)} ({i*100//len(common)}%)")
 89 | 
 90 |         except Exception as e:
 91 |             errors += 1
 92 |             if errors <= 5:
 93 |                 print(f"  ERROR: {content_hash[:8]}: {e}")
 94 | 
 95 |     if restored_count > 10:
 96 |         print(f"  ... and {restored_count - 10} more")
 97 | 
 98 |     # Commit changes
 99 |     if not DRY_RUN:
100 |         current.commit()
101 |         print(f"\n[SUCCESS] Committed {restored_count} timestamp restorations")
102 |     else:
103 |         print(f"\n[DRY RUN] Would restore {restored_count} timestamps")
104 | 
105 |     print()
106 |     print("="*80)
107 |     print("RECOVERY COMPLETE!")
108 |     print("="*80)
109 |     print(f"  Restored:  {restored_count:4} memories from backup")
110 |     print(f"  Preserved: {len(only_current):4} new memories (Nov 5-17)")
111 |     print(f"  Errors:    {errors:4}")
112 |     print()
113 | 
114 |     if restored_count > 0:
115 |         print("Verification:")
116 |         cur = current.execute("SELECT COUNT(*) FROM memories WHERE created_at_iso LIKE '2025-11-17%'")
117 |         remaining_corrupt = cur.fetchone()[0]
118 |         print(f"  Memories still with Nov 17 timestamp: {remaining_corrupt}")
119 |         print(f"  (Should be ~{len(only_current)} - the genuinely new ones)")
120 | 
121 |     current.close()
122 |     backup.close()
123 | 
124 |     if DRY_RUN:
125 |         print()
126 |         print("[DRY RUN] No changes were made.")
127 |         print("    Run with --apply to actually fix the database:")
128 |         print(f"    python {sys.argv[0]} --apply")
129 |     else:
130 |         print()
131 |         print("[SUCCESS] Database has been updated!")
132 |         print("   Restart HTTP server to use the fixed database.")
133 | 
134 | if __name__ == '__main__':
135 |     try:
136 |         selective_recovery()
137 |     except Exception as e:
138 |         print(f"\n[ERROR] {e}")
139 |         import traceback
140 |         traceback.print_exc()
141 |         sys.exit(1)
142 | 
```

--------------------------------------------------------------------------------
/claude-hooks/tests/test-cross-session.json:
--------------------------------------------------------------------------------

```json
  1 | {
  2 |   "sessions": [
  3 |     {
  4 |       "id": "cross-session-1",
  5 |       "startTime": "2025-10-03T13:00:20.986Z",
  6 |       "endTime": "2025-10-03T13:00:20.986Z",
  7 |       "projectContext": {
  8 |         "name": "mcp-memory-service",
  9 |         "type": "Multi-language Project",
 10 |         "languages": [
 11 |           "javascript",
 12 |           "python"
 13 |         ],
 14 |         "frameworks": [
 15 |           "node.js",
 16 |           "fastapi"
 17 |         ],
 18 |         "tools": [
 19 |           "git",
 20 |           "npm",
 21 |           "pip"
 22 |         ],
 23 |         "confidence": 0.95
 24 |       },
 25 |       "initialTopics": [],
 26 |       "finalTopics": [
 27 |         "auth",
 28 |         "jwt"
 29 |       ],
 30 |       "memoriesLoaded": [],
 31 |       "memoriesCreated": [],
 32 |       "conversationSummary": "Implemented user authentication",
 33 |       "outcome": {
 34 |         "type": "implementation",
 35 |         "summary": "Implemented user authentication",
 36 |         "topics": [
 37 |           "auth",
 38 |           "jwt"
 39 |         ]
 40 |       },
 41 |       "threadId": "thread-ee93501cb0e678ad",
 42 |       "parentSessionId": null,
 43 |       "childSessionIds": [],
 44 |       "status": "completed"
 45 |     }
 46 |   ],
 47 |   "conversationThreads": [
 48 |     {
 49 |       "id": "thread-93a8463caef492e9",
 50 |       "createdAt": "2025-08-20T11:38:33.004Z",
 51 |       "projectContext": {
 52 |         "name": "mcp-memory-service",
 53 |         "type": "Multi-language Project",
 54 |         "languages": [
 55 |           "javascript",
 56 |           "python"
 57 |         ],
 58 |         "frameworks": [
 59 |           "node.js",
 60 |           "fastapi"
 61 |         ],
 62 |         "tools": [
 63 |           "git",
 64 |           "npm",
 65 |           "pip"
 66 |         ],
 67 |         "confidence": 0.95
 68 |       },
 69 |       "sessionIds": [
 70 |         "cross-session-1"
 71 |       ],
 72 |       "topics": [
 73 |         "auth",
 74 |         "jwt"
 75 |       ],
 76 |       "outcomes": [
 77 |         {
 78 |           "sessionId": "cross-session-1",
 79 |           "outcome": {
 80 |             "type": "implementation",
 81 |             "summary": "Implemented user authentication",
 82 |             "topics": [
 83 |               "auth",
 84 |               "jwt"
 85 |             ]
 86 |           },
 87 |           "timestamp": "2025-08-20T11:38:33.004Z"
 88 |         },
 89 |         {
 90 |           "sessionId": "cross-session-1",
 91 |           "outcome": {
 92 |             "type": "implementation",
 93 |             "summary": "Implemented user authentication",
 94 |             "topics": [
 95 |               "auth",
 96 |               "jwt"
 97 |             ]
 98 |           },
 99 |           "timestamp": "2025-08-20T11:42:10.899Z"
100 |         },
101 |         {
102 |           "sessionId": "cross-session-1",
103 |           "outcome": {
104 |             "type": "implementation",
105 |             "summary": "Implemented user authentication",
106 |             "topics": [
107 |               "auth",
108 |               "jwt"
109 |             ]
110 |           },
111 |           "timestamp": "2025-08-20T11:43:24.011Z"
112 |         },
113 |         {
114 |           "sessionId": "cross-session-1",
115 |           "outcome": {
116 |             "type": "implementation",
117 |             "summary": "Implemented user authentication",
118 |             "topics": [
119 |               "auth",
120 |               "jwt"
121 |             ]
122 |           },
123 |           "timestamp": "2025-08-20T11:43:49.800Z"
124 |         },
125 |         {
126 |           "sessionId": "cross-session-1",
127 |           "outcome": {
128 |             "type": "implementation",
129 |             "summary": "Implemented user authentication",
130 |             "topics": [
131 |               "auth",
132 |               "jwt"
133 |             ]
134 |           },
135 |           "timestamp": "2025-08-20T11:44:35.340Z"
136 |         },
137 |         {
138 |           "sessionId": "cross-session-1",
139 |           "outcome": {
140 |             "type": "implementation",
141 |             "summary": "Implemented user authentication",
142 |             "topics": [
143 |               "auth",
144 |               "jwt"
145 |             ]
146 |           },
147 |           "timestamp": "2025-08-20T11:45:38.592Z"
148 |         }
149 |       ],
150 |       "status": "active",
151 |       "lastUpdated": "2025-08-20T11:45:38.592Z"
152 |     },
153 |     {
154 |       "id": "thread-ee93501cb0e678ad",
155 |       "createdAt": "2025-10-03T13:00:20.986Z",
156 |       "projectContext": {
157 |         "name": "mcp-memory-service",
158 |         "type": "Multi-language Project",
159 |         "languages": [
160 |           "javascript",
161 |           "python"
162 |         ],
163 |         "frameworks": [
164 |           "node.js",
165 |           "fastapi"
166 |         ],
167 |         "tools": [
168 |           "git",
169 |           "npm",
170 |           "pip"
171 |         ],
172 |         "confidence": 0.95
173 |       },
174 |       "sessionIds": [
175 |         "cross-session-1"
176 |       ],
177 |       "topics": [
178 |         "auth",
179 |         "jwt"
180 |       ],
181 |       "outcomes": [
182 |         {
183 |           "sessionId": "cross-session-1",
184 |           "outcome": {
185 |             "type": "implementation",
186 |             "summary": "Implemented user authentication",
187 |             "topics": [
188 |               "auth",
189 |               "jwt"
190 |             ]
191 |           },
192 |           "timestamp": "2025-10-03T13:00:20.986Z"
193 |         }
194 |       ],
195 |       "status": "active",
196 |       "lastUpdated": "2025-10-03T13:00:20.986Z"
197 |     }
198 |   ],
199 |   "projectSessions": {
200 |     "mcp-memory-service": [
201 |       "cross-session-1"
202 |     ]
203 |   },
204 |   "lastSaved": "2025-10-03T13:00:20.986Z"
205 | }
```

--------------------------------------------------------------------------------
/docs/mastery/configuration-guide.md:
--------------------------------------------------------------------------------

```markdown
  1 | # MCP Memory Service — Configuration Guide
  2 | 
  3 | All configuration is driven via environment variables and sensible defaults resolved in `src/mcp_memory_service/config.py`.
  4 | 
  5 | ## Base Paths
  6 | 
  7 | - `MCP_MEMORY_BASE_DIR`: Root directory for service data. Defaults per-OS to an app-data directory and is created if missing.
  8 | - Derived paths (auto-created):
  9 |   - Chroma path: `${BASE_DIR}/chroma_db` unless overridden.
 10 |   - Backups path: `${BASE_DIR}/backups` unless overridden.
 11 | 
 12 | Overrides:
 13 | 
 14 | - `MCP_MEMORY_CHROMA_PATH` or `mcpMemoryChromaPath`: ChromaDB directory path.
 15 | - `MCP_MEMORY_BACKUPS_PATH` or `mcpMemoryBackupsPath`: Backups directory path.
 16 | 
 17 | ## Storage Backend Selection
 18 | 
 19 | - `MCP_MEMORY_STORAGE_BACKEND`: `sqlite_vec` (default), `chroma`, or `cloudflare`.
 20 |   - `sqlite-vec` aliases to `sqlite_vec`.
 21 |   - Unknown values default to `sqlite_vec` with a warning.
 22 | 
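A minimal sketch of that resolution logic, mirroring the rules above (illustrative only, not the actual `config.py` code):

```python
import os

_KNOWN_BACKENDS = {"sqlite_vec", "chroma", "cloudflare"}


def resolve_backend() -> str:
    """Resolve MCP_MEMORY_STORAGE_BACKEND per the rules documented above."""
    raw = os.environ.get("MCP_MEMORY_STORAGE_BACKEND", "sqlite_vec").strip().lower()
    if raw == "sqlite-vec":  # documented alias
        return "sqlite_vec"
    if raw in _KNOWN_BACKENDS:
        return raw
    return "sqlite_vec"  # unknown values fall back (the service logs a warning)
```
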
 23 | SQLite-vec options:
 24 | 
 25 | - `MCP_MEMORY_SQLITE_PATH` or `MCP_MEMORY_SQLITEVEC_PATH`: Path to `.db` file. Default `${BASE_DIR}/sqlite_vec.db`.
 26 | - `MCP_MEMORY_SQLITE_PRAGMAS`: Comma-separated list of custom pragmas, e.g. `busy_timeout=15000,cache_size=20000` (values recommended for concurrent access as of v8.9.0+).
 27 | 
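The comma-separated form splits naturally into pragma/value pairs; a sketch of such a parser (a hypothetical helper, not the service's actual implementation):

```python
def parse_pragmas(csv: str) -> dict:
    """Parse 'busy_timeout=15000,cache_size=20000' into {'busy_timeout': '15000', ...}."""
    pragmas = {}
    for item in (part.strip() for part in csv.split(",")):
        if not item:
            continue  # tolerate empty segments and trailing commas
        key, _, value = item.partition("=")
        pragmas[key.strip()] = value.strip()
    return pragmas
```
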
 28 | Chroma options:
 29 | 
 30 | - `MCP_MEMORY_CHROMADB_HOST`: Hostname for remote Chroma.
 31 | - `MCP_MEMORY_CHROMADB_PORT`: Port (default 8000).
 32 | - `MCP_MEMORY_CHROMADB_SSL`: `true|false` for HTTPS.
 33 | - `MCP_MEMORY_CHROMADB_API_KEY`: API key when remote.
 34 | - `MCP_MEMORY_COLLECTION_NAME`: Collection name (default `memory_collection`).
 35 | 
 36 | Cloudflare options (required unless otherwise noted):
 37 | 
 38 | - `CLOUDFLARE_API_TOKEN` (required)
 39 | - `CLOUDFLARE_ACCOUNT_ID` (required)
 40 | - `CLOUDFLARE_VECTORIZE_INDEX` (required)
 41 | - `CLOUDFLARE_D1_DATABASE_ID` (required)
 42 | - `CLOUDFLARE_R2_BUCKET` (optional, for large content)
 43 | - `CLOUDFLARE_EMBEDDING_MODEL` (default `@cf/baai/bge-base-en-v1.5`)
 44 | - `CLOUDFLARE_LARGE_CONTENT_THRESHOLD` (bytes; default 1,048,576)
 45 | - `CLOUDFLARE_MAX_RETRIES` (default 3)
 46 | - `CLOUDFLARE_BASE_DELAY` (seconds; default 1.0)
 47 | 
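`CLOUDFLARE_MAX_RETRIES` and `CLOUDFLARE_BASE_DELAY` suggest retry-with-backoff; a common exponential scheme is sketched below (an assumption for illustration — the backend may schedule retries differently):

```python
def retry_delays(max_retries: int = 3, base_delay: float = 1.0) -> list:
    """Delay (seconds) before each retry attempt: base_delay * 2**attempt."""
    return [base_delay * (2 ** attempt) for attempt in range(max_retries)]
```

With the defaults above this yields waits of 1, 2, and 4 seconds across the three retries.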
 48 | ## Embedding Model
 49 | 
 50 | - `MCP_EMBEDDING_MODEL`: Model name (default `all-MiniLM-L6-v2`).
 51 | - `MCP_MEMORY_USE_ONNX`: `true|false` toggle for ONNX path.
 52 | 
 53 | ## HTTP/HTTPS Interface
 54 | 
 55 | - `MCP_HTTP_ENABLED`: `true|false` to enable HTTP interface.
 56 | - `MCP_HTTP_HOST`: Bind address (default `0.0.0.0`).
 57 | - `MCP_HTTP_PORT`: Port (default `8000`).
 58 | - `MCP_CORS_ORIGINS`: Comma-separated origins (default `*`).
 59 | - `MCP_SSE_HEARTBEAT`: SSE heartbeat interval seconds (default 30).
 60 | - `MCP_API_KEY`: Optional API key for HTTP.
 61 | 
 62 | TLS:
 63 | 
 64 | - `MCP_HTTPS_ENABLED`: `true|false`.
 65 | - `MCP_SSL_CERT_FILE`, `MCP_SSL_KEY_FILE`: Certificate and key paths.
 66 | 
 67 | ## mDNS Service Discovery
 68 | 
 69 | - `MCP_MDNS_ENABLED`: `true|false` (default `true`).
 70 | - `MCP_MDNS_SERVICE_NAME`: Service display name (default `MCP Memory Service`).
 71 | - `MCP_MDNS_SERVICE_TYPE`: Service type (default `_mcp-memory._tcp.local.`).
 72 | - `MCP_MDNS_DISCOVERY_TIMEOUT`: Seconds to wait for discovery (default 5).
 73 | 
 74 | ## Consolidation (Optional)
 75 | 
 76 | - `MCP_CONSOLIDATION_ENABLED`: `true|false`.
 77 | - Archive location:
 78 |   - `MCP_CONSOLIDATION_ARCHIVE_PATH` or `MCP_MEMORY_ARCHIVE_PATH` (default `${BASE_DIR}/consolidation_archive`).
 79 | - Config knobs:
 80 |   - Decay: `MCP_DECAY_ENABLED`, retention by type: `MCP_RETENTION_CRITICAL`, `MCP_RETENTION_REFERENCE`, `MCP_RETENTION_STANDARD`, `MCP_RETENTION_TEMPORARY`.
 81 |   - Associations: `MCP_ASSOCIATIONS_ENABLED`, `MCP_ASSOCIATION_MIN_SIMILARITY`, `MCP_ASSOCIATION_MAX_SIMILARITY`, `MCP_ASSOCIATION_MAX_PAIRS`.
 82 |   - Clustering: `MCP_CLUSTERING_ENABLED`, `MCP_CLUSTERING_MIN_SIZE`, `MCP_CLUSTERING_ALGORITHM`.
 83 |   - Compression: `MCP_COMPRESSION_ENABLED`, `MCP_COMPRESSION_MAX_LENGTH`, `MCP_COMPRESSION_PRESERVE_ORIGINALS`.
 84 |   - Forgetting: `MCP_FORGETTING_ENABLED`, `MCP_FORGETTING_RELEVANCE_THRESHOLD`, `MCP_FORGETTING_ACCESS_THRESHOLD`.
 85 | - Scheduling (APScheduler-ready):
 86 |   - `MCP_SCHEDULE_DAILY` (default `02:00`), `MCP_SCHEDULE_WEEKLY` (default `SUN 03:00`), `MCP_SCHEDULE_MONTHLY` (default `01 04:00`), `MCP_SCHEDULE_QUARTERLY` (default `disabled`), `MCP_SCHEDULE_YEARLY` (default `disabled`).
 87 | 
 88 | ## Machine Identification
 89 | 
 90 | - `MCP_MEMORY_INCLUDE_HOSTNAME`: `true|false` to tag memories with `source:<hostname>` and include `hostname` metadata.
 91 | 
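A sketch of what that tagging produces (illustrative; the real logic lives in the service):

```python
import os
import socket


def machine_identity():
    """Return (extra_tags, extra_metadata) per MCP_MEMORY_INCLUDE_HOSTNAME above."""
    if os.environ.get("MCP_MEMORY_INCLUDE_HOSTNAME", "").strip().lower() == "true":
        host = socket.gethostname()
        return [f"source:{host}"], {"hostname": host}
    return [], {}
```
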
 92 | ## Logging and Performance
 93 | 
 94 | - `LOG_LEVEL`: Root logging level (default `WARNING`).
 95 | - `DEBUG_MODE`: When unset, the service raises log levels for performance-critical libs (`chromadb`, `sentence_transformers`, `transformers`, `torch`, `numpy`).
 96 | 
 97 | ## Recommended Defaults by Backend
 98 | 
 99 | - SQLite-vec:
100 |   - Defaults enable WAL, busy timeout, optimized cache; customize with `MCP_MEMORY_SQLITE_PRAGMAS`.
101 |   - For multi-client setups, the service auto-detects and may start/use an HTTP coordinator.
102 | - ChromaDB:
103 |   - HNSW space/ef/M values tuned for balanced accuracy and speed; migration messaging warns of deprecation and recommends moving to SQLite-vec.
104 | - Cloudflare:
105 |   - Ensure required variables are set or the process exits with a clear error and checklist.
106 | 
107 | 
```

--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/performance_issue.yml:
--------------------------------------------------------------------------------

```yaml
  1 | name: Performance Issue
  2 | description: Report slow operations, memory leaks, or performance degradation
  3 | title: "[Performance]: "
  4 | labels: ["performance", "triage"]
  5 | body:
  6 |   - type: markdown
  7 |     attributes:
  8 |       value: |
  9 |         Thank you for reporting a performance issue! Detailed metrics help us diagnose and optimize the system.
 10 | 
 11 |   - type: textarea
 12 |     id: description
 13 |     attributes:
 14 |       label: Performance Issue Description
 15 |       description: Describe the performance problem you're experiencing
 16 |       placeholder: Memory search is taking >5 seconds when...
 17 |     validations:
 18 |       required: true
 19 | 
 20 |   - type: dropdown
 21 |     id: operation
 22 |     attributes:
 23 |       label: Affected Operation
 24 |       description: Which operation is experiencing performance issues?
 25 |       options:
 26 |         - Memory Storage (store_memory)
 27 |         - Memory Retrieval (recall_memory, retrieve)
 28 |         - Search Operations (semantic/tag/time search)
 29 |         - Document Ingestion (PDF/DOCX processing)
 30 |         - Dashboard Loading (HTTP UI)
 31 |         - Server Startup/Initialization
 32 |         - Background Sync (hybrid backend)
 33 |         - Database Operations (general)
 34 |         - Other
 35 |     validations:
 36 |       required: true
 37 | 
 38 |   - type: textarea
 39 |     id: metrics
 40 |     attributes:
 41 |       label: Performance Metrics
 42 |       description: Provide timing information or benchmarks
 43 |       placeholder: |
 44 |         Current performance: 10 seconds
 45 |         Expected performance: <1 second
 46 |         Tested with: 1000 memories, 5 concurrent requests
 47 |       value: |
 48 |         - Operation time:
 49 |         - Memory count in database:
 50 |         - Concurrent operations:
 51 |         - CPU usage:
 52 |         - Memory usage:
 53 |     validations:
 54 |       required: true
 55 | 
 56 |   - type: textarea
 57 |     id: reproduce
 58 |     attributes:
 59 |       label: Steps to Reproduce
 60 |       description: How can we reproduce this performance issue?
 61 |       placeholder: |
 62 |         1. Insert 10,000 memories using...
 63 |         2. Run search query...
 64 |         3. Observe >5 second response time
 65 |       value: |
 66 |         1.
 67 |         2.
 68 |         3.
 69 |     validations:
 70 |       required: true
 71 | 
 72 |   - type: dropdown
 73 |     id: storage-backend
 74 |     attributes:
 75 |       label: Storage Backend
 76 |       description: Which backend shows the performance issue?
 77 |       options:
 78 |         - sqlite-vec (local)
 79 |         - cloudflare (remote)
 80 |         - hybrid (sqlite + cloudflare)
 81 |         - All backends
 82 |         - Unsure
 83 |     validations:
 84 |       required: true
 85 | 
 86 |   - type: input
 87 |     id: database-size
 88 |     attributes:
 89 |       label: Database Size
 90 |       description: Number of memories and approximate disk size
 91 |       placeholder: "5000 memories, 50MB database file"
 92 |     validations:
 93 |       required: true
 94 | 
 95 |   - type: dropdown
 96 |     id: trend
 97 |     attributes:
 98 |       label: Performance Trend
 99 |       description: Is this a new issue or has it gotten worse over time?
100 |       options:
101 |         - Always been slow
102 |         - Recently degraded (specify version)
103 |         - Gets worse with more data
104 |         - Intermittent (sometimes fast, sometimes slow)
105 |         - After specific change/upgrade
106 |     validations:
107 |       required: true
108 | 
109 |   - type: textarea
110 |     id: environment
111 |     attributes:
112 |       label: Environment Details
113 |       description: System specifications that might affect performance
114 |       placeholder: |
115 |         OS: macOS 14.1
116 |         CPU: M2 Max (12 cores)
117 |         RAM: 32GB
118 |         Disk: SSD (NVMe)
119 |         Python: 3.11.5
120 |         Version: v8.17.0
121 |       value: |
122 |         - OS:
123 |         - CPU:
124 |         - RAM:
125 |         - Disk Type:
126 |         - Python Version:
127 |         - MCP Version:
128 |     validations:
129 |       required: true
130 | 
131 |   - type: textarea
132 |     id: profiling
133 |     attributes:
134 |       label: Profiling Data (Optional)
135 |       description: |
136 |         If you've profiled the operation, include results:
137 |         - Python cProfile output
138 |         - Database query EXPLAIN QUERY PLAN
139 |         - Network latency measurements (for remote backends)
140 |       placeholder: Paste profiling data here
141 |       render: shell
142 | 
143 |   - type: textarea
144 |     id: logs
145 |     attributes:
146 |       label: Relevant Logs
147 |       description: Logs showing timing or performance warnings
148 |       placeholder: Paste logs with timestamps
149 |       render: shell
150 | 
151 |   - type: textarea
152 |     id: workaround
153 |     attributes:
154 |       label: Workaround
155 |       description: Have you found any temporary workarounds?
156 |       placeholder: |
157 |         Reducing batch size helps...
158 |         Switching to different backend improves...
159 | 
160 |   - type: checkboxes
161 |     id: checks
162 |     attributes:
163 |       label: Pre-submission Checklist
164 |       description: Please verify you've completed these steps
165 |       options:
166 |         - label: I've verified this is a performance regression (not expected behavior)
167 |           required: true
168 |         - label: I've included specific timing measurements (not just "it's slow")
169 |           required: true
170 |         - label: I've tested with the latest version to confirm the issue still exists
171 |           required: true
172 |         - label: I've described database size and environment specifications
173 |           required: true
174 | 
```

--------------------------------------------------------------------------------
/claude-hooks/test-recency-scoring.js:
--------------------------------------------------------------------------------

```javascript
  1 | #!/usr/bin/env node
  2 | 
  3 | /**
  4 |  * Test script to validate recency-focused scoring improvements
  5 |  */
  6 | 
  7 | const { scoreMemoryRelevance, calculateTimeDecay, calculateRecencyBonus } = require('./utilities/memory-scorer');
  8 | const config = require('./config.json');
  9 | 
 10 | // Test project context
 11 | const projectContext = {
 12 |     name: 'mcp-memory-service',
 13 |     language: 'Python',
 14 |     frameworks: ['FastAPI'],
 15 |     tools: ['pytest']
 16 | };
 17 | 
 18 | // Test memories with different ages
 19 | const testMemories = [
 20 |     {
 21 |         content: 'Fixed critical bug in HTTP protocol implementation for memory hooks',
 22 |         tags: ['mcp-memory-service', 'bug-fix', 'http-protocol'],
 23 |         memory_type: 'bug-fix',
 24 |         created_at_iso: new Date(Date.now() - 3 * 24 * 60 * 60 * 1000).toISOString() // 3 days ago
 25 |     },
 26 |     {
 27 |         content: 'Comprehensive README restructuring and organization completed successfully for MCP Memory Service project',
 28 |         tags: ['mcp-memory-service', 'claude-code-reference', 'documentation'],
 29 |         memory_type: 'reference',
 30 |         created_at_iso: new Date(Date.now() - 60 * 24 * 60 * 60 * 1000).toISOString() // 60 days ago
 31 |     },
 32 |     {
 33 |         content: 'Implemented dashboard dark mode with improved UX',
 34 |         tags: ['mcp-memory-service', 'feature', 'dashboard'],
 35 |         memory_type: 'feature',
 36 |         created_at_iso: new Date(Date.now() - 5 * 24 * 60 * 60 * 1000).toISOString() // 5 days ago
 37 |     },
 38 |     {
 39 |         content: 'CONTRIBUTING.md Structure - Created comprehensive contribution guidelines',
 40 |         tags: ['mcp-memory-service', 'claude-code-reference', 'documentation'],
 41 |         memory_type: 'reference',
 42 |         created_at_iso: new Date(Date.now() - 30 * 24 * 60 * 60 * 1000).toISOString() // 30 days ago
 43 |     },
 44 |     {
 45 |         content: 'Removed ChromaDB backend - major refactoring for v8.0',
 46 |         tags: ['mcp-memory-service', 'refactor', 'architecture'],
 47 |         memory_type: 'architecture',
 48 |         created_at_iso: new Date(Date.now() - 4 * 24 * 60 * 60 * 1000).toISOString() // 4 days ago
 49 |     }
 50 | ];
 51 | 
 52 | // Calculate daysAgo once for each memory (DRY principle)
 53 | const memoriesWithAge = testMemories.map(mem => ({
 54 |     ...mem,
 55 |     daysAgo: Math.floor((Date.now() - new Date(mem.created_at_iso)) / (1000 * 60 * 60 * 24))
 56 | }));
 57 | 
 58 | console.log('\n=== RECENCY SCORING TEST ===\n');
 59 | 
 60 | // Show decay and bonus calculations
 61 | console.log('📊 Time Decay and Recency Bonus Analysis:');
 62 | console.log('─'.repeat(80));
 63 | memoriesWithAge.forEach((mem, idx) => {
 64 |     const decayScore = calculateTimeDecay(mem.created_at_iso, config.memoryScoring.timeDecayRate); // Using decay rate from config
 65 |     const recencyBonus = calculateRecencyBonus(mem.created_at_iso);
 66 | 
 67 |     console.log(`Memory ${idx + 1}: ${mem.daysAgo} days old`);
 68 |     console.log(`  Time Decay (${config.memoryScoring.timeDecayRate} rate): ${decayScore.toFixed(3)}`);
 69 |     console.log(`  Recency Bonus: ${recencyBonus > 0 ? '+' + recencyBonus.toFixed(3) : '0.000'}`);
 70 |     console.log(`  Content: ${mem.content.substring(0, 60)}...`);
 71 |     console.log('');
 72 | });
 73 | 
 74 | // Score memories with new algorithm
 75 | console.log('\n📈 Final Scoring Results (New Algorithm):');
 76 | console.log('─'.repeat(80));
 77 | 
 78 | const scoredMemories = scoreMemoryRelevance(memoriesWithAge, projectContext, {
 79 |     verbose: false,
 80 |     weights: config.memoryScoring.weights,
 81 |     timeDecayRate: config.memoryScoring.timeDecayRate
 82 | });
 83 | 
 84 | scoredMemories.forEach((memory, index) => {
 85 |     console.log(`${index + 1}. Score: ${memory.relevanceScore.toFixed(3)} (${memory.daysAgo} days old)`);
 86 |     console.log(`   Content: ${memory.content.substring(0, 70)}...`);
 87 |     console.log(`   Breakdown:`);
 88 |     console.log(`     - Time Decay: ${memory.scoreBreakdown.timeDecay.toFixed(3)} (weight: ${config.memoryScoring.weights.timeDecay})`);
 89 |     console.log(`     - Tag Relevance: ${memory.scoreBreakdown.tagRelevance.toFixed(3)} (weight: ${config.memoryScoring.weights.tagRelevance})`);
 90 |     console.log(`     - Content Quality: ${memory.scoreBreakdown.contentQuality.toFixed(3)} (weight: ${config.memoryScoring.weights.contentQuality})`);
 91 |     console.log(`     - Recency Bonus: ${memory.scoreBreakdown.recencyBonus.toFixed(3)} (direct boost)`);
 92 |     console.log('');
 93 | });
 94 | 
 95 | console.log('\n✅ Test Summary:');
 96 | console.log('─'.repeat(80));
 97 | console.log('Expected Behavior:');
 98 | console.log('  - Recent memories (3-5 days old) should rank higher');
 99 | console.log('  - Recency bonus (+0.15 for <7 days, +0.10 for <14 days, +0.05 for <30 days)');
100 | console.log('  - Gentler time decay (0.05 rate vs old 0.1 rate)');
101 | console.log('  - Higher time weight (0.40 vs old 0.25)');
102 | console.log('  - Old memories with perfect tags should rank lower despite tag advantage\n');
103 | 
104 | // Check if recent memories are ranked higher
105 | const top3 = scoredMemories.slice(0, 3);
106 | const recentInTop3 = top3.filter(m => m.daysAgo <= 7).length;
107 | 
108 | if (recentInTop3 >= 2) {
109 |     console.log('✅ SUCCESS: At least 2 of top 3 memories are from the last week');
110 | } else {
111 |     console.log('❌ ISSUE: Recent memories are not prioritized as expected');
112 | }
113 | 
114 | console.log('');
115 | 
```
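The tiered recency bonus and decay rate that the test summary prints can be sketched language-independently. The real implementations live in `utilities/memory-scorer.js`; the following is only an illustrative Python port, assuming the thresholds shown above (+0.15 under 7 days, +0.10 under 14, +0.05 under 30) and an exponential decay of `exp(-rate * days)`:

```python
import math

def calculate_time_decay(days_old: int, decay_rate: float = 0.05) -> float:
    """Exponential decay; 0.05 is the gentler rate referenced in the test summary."""
    return math.exp(-decay_rate * days_old)

def calculate_recency_bonus(days_old: int) -> float:
    """Tiered bonus matching the thresholds the test expects."""
    if days_old < 7:
        return 0.15
    if days_old < 14:
        return 0.10
    if days_old < 30:
        return 0.05
    return 0.0

# A 3-day-old memory gets the full bonus; a 60-day-old one gets none.
print(calculate_recency_bonus(3), calculate_recency_bonus(60))  # → 0.15 0.0
```

With these numbers, a 3-day-old memory scores `exp(-0.15) ≈ 0.86` decay plus a 0.15 bonus, while a 60-day-old memory with perfect tags gets `exp(-3.0) ≈ 0.05` and no bonus, which is exactly the reordering the success check at the bottom of the script asserts.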

--------------------------------------------------------------------------------
/tests/integration/test_http_server_startup.py:
--------------------------------------------------------------------------------

```python
  1 | """
  2 | Integration tests for HTTP server startup.
  3 | 
  4 | These tests verify that the HTTP server can actually start and respond,
  5 | catching issues like import-time errors, syntax errors, and module loading problems.
  6 | 
  7 | Added to prevent production bugs like those in v8.12.0 where 3 critical bugs
  8 | made it past 55 unit tests because we had zero HTTP server integration tests.
  9 | """
 10 | 
 11 | import pytest
 12 | from fastapi.testclient import TestClient
 13 | 
 14 | 
 15 | def test_http_server_starts():
 16 |     """Test that server imports and starts without errors.
 17 | 
 18 |     This test catches:
 19 |     - Import-time evaluation errors (like get_storage() called at import)
 20 |     - Syntax errors in route handlers
 21 |     - Module loading failures
 22 |     """
 23 |     from mcp_memory_service.web.app import app
 24 | 
 25 |     client = TestClient(app)
 26 |     response = client.get("/api/health")
 27 | 
 28 |     assert response.status_code == 200
 29 |     data = response.json()
 30 |     assert data["status"] == "healthy"
 31 |     assert "version" in data
 32 | 
 33 | 
 34 | def test_server_modules_importable():
 35 |     """Test that all server modules can be imported without errors.
 36 | 
 37 |     This catches syntax errors and import-time failures in module code.
 38 |     """
 39 |     # Core dependencies module
 40 |     import mcp_memory_service.web.dependencies
 41 |     assert hasattr(mcp_memory_service.web.dependencies, 'get_storage')
 42 |     assert hasattr(mcp_memory_service.web.dependencies, 'get_memory_service')
 43 | 
 44 |     # API endpoint modules
 45 |     import mcp_memory_service.web.api.memories
 46 |     import mcp_memory_service.web.api.search
 47 |     import mcp_memory_service.web.api.health
 48 |     import mcp_memory_service.web.api.manage
 49 | 
 50 |     # Main app module
 51 |     import mcp_memory_service.web.app
 52 |     assert hasattr(mcp_memory_service.web.app, 'app')
 53 | 
 54 | 
 55 | def test_all_api_routes_registered():
 56 |     """Test that all expected API routes are registered.
 57 | 
 58 |     This ensures route registration didn't fail silently.
 59 |     """
 60 |     from mcp_memory_service.web.app import app
 61 | 
 62 |     # Get all registered routes
 63 |     routes = [route.path for route in app.routes]
 64 | 
 65 |     # Essential routes that should always be present
 66 |     essential_routes = [
 67 |         "/api/health",
 68 |         "/api/memories",
 69 |         "/api/search",
 70 |         "/api/tags",
 71 |     ]
 72 | 
 73 |     for route in essential_routes:
 74 |         assert any(r.startswith(route) for r in routes), f"Route {route} not registered"
 75 | 
 76 | 
 77 | def test_health_endpoint_responds():
 78 |     """Test that health endpoint returns valid response structure.
 79 | 
 80 |     This is our canary - if this fails, server is broken.
 81 |     """
 82 |     from mcp_memory_service.web.app import app
 83 | 
 84 |     client = TestClient(app)
 85 |     response = client.get("/api/health")
 86 | 
 87 |     assert response.status_code == 200
 88 |     data = response.json()
 89 | 
 90 |     # Verify response structure
 91 |     assert isinstance(data, dict)
 92 |     assert "status" in data
 93 |     assert "version" in data
 94 |     assert "timestamp" in data
 95 | 
 96 |     # Verify values are sensible
 97 |     assert data["status"] in ["healthy", "degraded"]
 98 |     assert isinstance(data["version"], str)
 99 |     assert len(data["version"]) > 0
100 | 
101 | 
102 | def test_cors_middleware_configured():
103 |     """Test that CORS middleware is properly configured.
104 | 
105 |     This prevents issues with web dashboard access.
106 |     """
107 |     from mcp_memory_service.web.app import app
108 | 
109 |     client = TestClient(app)
110 | 
111 |     # Test CORS with actual GET request (OPTIONS may not be supported on all endpoints)
112 |     response = client.get(
113 |         "/api/health",
114 |         headers={"Origin": "http://localhost:3000"}
115 |     )
116 | 
117 |     # Should have CORS headers (FastAPI's CORSMiddleware adds these)
118 |     assert response.status_code == 200
119 |     # Check for CORS headers in response
 120 |     assert "access-control-allow-origin" in response.headers, "CORS headers missing"
121 | 
122 | 
123 | def test_static_files_mounted():
124 |     """Test that static files (dashboard) are properly mounted."""
125 |     from mcp_memory_service.web.app import app
126 | 
127 |     client = TestClient(app)
128 | 
129 |     # Try to access root (should serve index.html)
130 |     response = client.get("/")
131 | 
132 |     # Should return HTML content (status 200) or redirect
133 |     assert response.status_code in [200, 307, 308]
134 | 
135 |     if response.status_code == 200:
136 |         assert "text/html" in response.headers.get("content-type", "")
137 | 
138 | 
139 | def test_server_handles_404():
140 |     """Test that server returns proper 404 for non-existent routes."""
141 |     from mcp_memory_service.web.app import app
142 | 
143 |     client = TestClient(app)
144 |     response = client.get("/api/nonexistent-route-that-should-not-exist")
145 | 
146 |     assert response.status_code == 404
147 | 
148 | 
149 | def test_server_handles_invalid_json():
150 |     """Test that server handles malformed JSON requests gracefully."""
151 |     from mcp_memory_service.web.app import app
152 | 
153 |     client = TestClient(app)
154 | 
155 |     # Send malformed JSON
156 |     response = client.post(
157 |         "/api/memories",
 158 |         data="{'this': 'is not valid json}",  # Single quotes and an unterminated string make this invalid JSON
159 |         headers={"Content-Type": "application/json"}
160 |     )
161 | 
162 |     # Should return 400 or 422, not 500
163 |     assert response.status_code in [400, 422]
164 | 
165 | 
166 | if __name__ == "__main__":
167 |     # Allow running tests directly for quick verification
168 |     pytest.main([__file__, "-v"])
169 | 
```
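The payload in `test_server_handles_invalid_json` fails JSON parsing because of its single quotes, not only the missing closing quote. A quick standalone stdlib check (not part of the suite) shows why the server should respond 4xx rather than 500:

```python
import json

payload = "{'this': 'is not valid json}"  # same body the test posts

try:
    json.loads(payload)
    parsed = True
except json.JSONDecodeError as exc:
    parsed = False
    print(f"rejected: {exc.msg}")

# FastAPI's request parsing rejects this the same way, so a well-behaved
# endpoint answers 400/422 instead of crashing with a 500.
assert parsed is False
```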

--------------------------------------------------------------------------------
/docs/ide-compatability.md:
--------------------------------------------------------------------------------

```markdown
  1 | ## IDE Compatibility
  2 | 
  3 | [![Works with Claude](https://img.shields.io/badge/Works%20with-Claude-blue)](https://claude.ai)
  4 | [![Works with Cursor](https://img.shields.io/badge/Works%20with-Cursor-orange)](https://cursor.sh)
  5 | [![Works with WindSurf](https://img.shields.io/badge/Works%20with-WindSurf-green)](https://codeium.com/windsurf)
  6 | [![Works with Cline](https://img.shields.io/badge/Works%20with-Cline-purple)](https://github.com/saoudrizwan/claude-dev)
  7 | [![Works with RooCode](https://img.shields.io/badge/Works%20with-RooCode-red)](https://roo.ai)
  8 | 
  9 | As of June 2025, MCP (Model Context Protocol) has become the standard for AI-IDE integrations. The MCP Memory Service is **fully compatible** with all major AI-powered development environments:
 10 | 
 11 | ### Supported IDEs
 12 | 
 13 | | IDE | MCP Support | Configuration Location | Notes |
 14 | |-----|------------|----------------------|--------|
 15 | | **Claude Desktop** | ✅ Full | `claude_desktop_config.json` | Official MCP support |
 16 | | **Claude Code** | ✅ Full | CLI configuration | Official MCP support |
 17 | | **Cursor** | ✅ Full | `.cursor/mcp.json` or global config | Supports stdio, SSE, HTTP |
 18 | | **WindSurf** | ✅ Full | MCP config file | Built-in server management |
 19 | | **Cline** | ✅ Full | VS Code MCP config | Can create/share MCP servers |
 20 | | **RooCode** | ✅ Full | IDE config | Full MCP client implementation |
 21 | | **VS Code** | ✅ Full | `.vscode/mcp.json` | Via MCP extension |
 22 | | **Zed** | ✅ Full | Built-in config | Native MCP support |
 23 | 
 24 | ### Quick Setup for Popular IDEs
 25 | 
 26 | #### Cursor
 27 | Add to `.cursor/mcp.json` in your project or global Cursor config:
 28 | 
 29 | ```json
 30 | {
 31 |   "mcpServers": {
 32 |     "memory": {
 33 |       "command": "uv",
 34 |       "args": [
 35 |         "--directory",
 36 |         "/path/to/mcp-memory-service",
 37 |         "run",
 38 |         "memory"
 39 |       ],
 40 |       "env": {
 41 |         "MCP_MEMORY_CHROMA_PATH": "/path/to/chroma_db",
 42 |         "MCP_MEMORY_BACKUPS_PATH": "/path/to/backups"
 43 |       }
 44 |     }
 45 |   }
 46 | }
 47 | ```
 48 | 
 49 | #### WindSurf
 50 | WindSurf offers the easiest setup with built-in server management. Add to your WindSurf MCP configuration:
 51 | 
 52 | ```json
 53 | {
 54 |   "mcpServers": {
 55 |     "memory": {
 56 |       "command": "uv",
 57 |       "args": ["--directory", "/path/to/mcp-memory-service", "run", "memory"],
 58 |       "env": {
 59 |         "MCP_MEMORY_CHROMA_PATH": "/path/to/chroma_db",
 60 |         "MCP_MEMORY_BACKUPS_PATH": "/path/to/backups"
 61 |       }
 62 |     }
 63 |   }
 64 | }
 65 | ```
 66 | 
 67 | #### Cline (VS Code)
 68 | 1. Open the Cline extension in VS Code
 69 | 2. Click the MCP Servers icon
 70 | 3. Click "Configure MCP Servers"
 71 | 4. Add the memory service configuration (same format as above)
 72 | 
 73 | #### RooCode
 74 | RooCode uses a similar configuration format. Refer to RooCode's MCP documentation for the exact config file location.
 75 | 
 76 | ### Working with Multiple MCP Servers
 77 | 
 78 | MCP servers are designed to be composable. You can use the Memory Service alongside other popular MCP servers:
 79 | 
 80 | ```json
 81 | {
 82 |   "mcpServers": {
 83 |     "memory": {
 84 |       "command": "uv",
 85 |       "args": ["--directory", "/path/to/mcp-memory-service", "run", "memory"]
 86 |     },
 87 |     "github": {
 88 |       "command": "npx",
 89 |       "args": ["-y", "@modelcontextprotocol/server-github"],
 90 |       "env": {
 91 |         "GITHUB_TOKEN": "your-github-token"
 92 |       }
 93 |     },
 94 |     "task-master": {
 95 |       "command": "npx",
 96 |       "args": ["-y", "task-master-mcp"]
 97 |     }
 98 |   }
 99 | }
100 | ```
101 | 
102 | ### Alternative Installation Methods
103 | 
104 | #### Using NPX (if published to npm)
105 | ```json
106 | {
107 |   "mcpServers": {
108 |     "memory": {
109 |       "command": "npx",
110 |       "args": ["-y", "@doobidoo/mcp-memory-service"]
111 |     }
112 |   }
113 | }
114 | ```
115 | 
116 | #### Using Python directly
117 | ```json
118 | {
119 |   "mcpServers": {
120 |     "memory": {
121 |       "command": "python",
122 |       "args": ["/path/to/mcp-memory-service/memory_wrapper.py"]
123 |     }
124 |   }
125 | }
126 | ```
127 | 
128 | ### Why Choose MCP Memory Service?
129 | 
130 | Unlike IDE-specific memory solutions, MCP Memory Service offers:
131 | 
132 | - **Cross-IDE Compatibility**: Your memories work across ALL supported IDEs
133 | - **Persistent Storage**: Memories survive IDE restarts and updates
134 | - **Semantic Search**: Find memories by meaning, not just keywords
135 | - **Natural Language Time Queries**: "What did I work on last week?"
136 | - **Tag-based Organization**: Organize memories with flexible tagging
137 | - **Cross-Platform**: Works on macOS, Windows, and Linux
138 | 
139 | ### Troubleshooting IDE Connections
140 | 
141 | If the memory service isn't connecting in your IDE:
142 | 
143 | 1. **Verify Installation**: Run `python scripts/test_installation.py`
144 | 2. **Check Logs**: Most IDEs show MCP server logs in their output panels
145 | 3. **Test Standalone**: Try running the server directly: `uv run memory`
146 | 4. **Path Issues**: Use absolute paths in your configuration
147 | 5. **Python Environment**: Ensure the IDE can access your Python environment
148 | 
149 | ### IDE-Specific Tips
150 | 
151 | **Cursor**: If using multiple MCP servers, be aware of Cursor's server limit. Prioritize based on your needs.
152 | 
153 | **WindSurf**: Take advantage of WindSurf's built-in server management UI for easier configuration.
154 | 
155 | **Cline**: Cline can display MCP server status - check for green indicators after configuration.
156 | 
157 | **VS Code with MCP Extension**: Install the official MCP extension from the marketplace for better integration.
158 | 
```
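All of the config snippets above share one shape: an `mcpServers` object mapping a server name to a `command` plus `args`. As a troubleshooting aid, a small hypothetical validator (function name and error messages are mine, not part of any MCP tooling) can catch the most common mistakes before the IDE does:

```python
import json

def validate_mcp_config(text: str) -> list[str]:
    """Return a list of problems found in an MCP server config (empty list = OK)."""
    try:
        cfg = json.loads(text)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc.msg}"]
    servers = cfg.get("mcpServers")
    if not isinstance(servers, dict) or not servers:
        return ["missing or empty 'mcpServers' object"]
    problems = []
    for name, spec in servers.items():
        if not isinstance(spec.get("command"), str):
            problems.append(f"{name}: 'command' must be a string")
        if not isinstance(spec.get("args", []), list):
            problems.append(f"{name}: 'args' must be a list")
    return problems

sample = '{"mcpServers": {"memory": {"command": "uv", "args": ["run", "memory"]}}}'
print(validate_mcp_config(sample))  # → []
```

Running it against a config with a comment, a trailing comma, or single quotes surfaces the JSON error directly, which is easier to read than an IDE silently ignoring the server.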
Page 6/47