This is page 2 of 35. Use http://codebase.md/doobidoo/mcp-memory-service?page={x} to view the full context.
# Directory Structure
```
├── .claude
│ ├── agents
│ │ ├── amp-bridge.md
│ │ ├── amp-pr-automator.md
│ │ ├── code-quality-guard.md
│ │ ├── gemini-pr-automator.md
│ │ └── github-release-manager.md
│ ├── settings.local.json.backup
│ └── settings.local.json.local
├── .commit-message
├── .dockerignore
├── .env.example
├── .env.sqlite.backup
├── .envnn#
├── .gitattributes
├── .github
│ ├── FUNDING.yml
│ ├── ISSUE_TEMPLATE
│ │ ├── bug_report.yml
│ │ ├── config.yml
│ │ ├── feature_request.yml
│ │ └── performance_issue.yml
│ ├── pull_request_template.md
│ └── workflows
│ ├── bridge-tests.yml
│ ├── CACHE_FIX.md
│ ├── claude-code-review.yml
│ ├── claude.yml
│ ├── cleanup-images.yml.disabled
│ ├── dev-setup-validation.yml
│ ├── docker-publish.yml
│ ├── LATEST_FIXES.md
│ ├── main-optimized.yml.disabled
│ ├── main.yml
│ ├── publish-and-test.yml
│ ├── README_OPTIMIZATION.md
│ ├── release-tag.yml.disabled
│ ├── release.yml
│ ├── roadmap-review-reminder.yml
│ ├── SECRET_CONDITIONAL_FIX.md
│ └── WORKFLOW_FIXES.md
├── .gitignore
├── .mcp.json.backup
├── .mcp.json.template
├── .pyscn
│ ├── .gitignore
│ └── reports
│ └── analyze_20251123_214224.html
├── AGENTS.md
├── archive
│ ├── deployment
│ │ ├── deploy_fastmcp_fixed.sh
│ │ ├── deploy_http_with_mcp.sh
│ │ └── deploy_mcp_v4.sh
│ ├── deployment-configs
│ │ ├── empty_config.yml
│ │ └── smithery.yaml
│ ├── development
│ │ └── test_fastmcp.py
│ ├── docs-removed-2025-08-23
│ │ ├── authentication.md
│ │ ├── claude_integration.md
│ │ ├── claude-code-compatibility.md
│ │ ├── claude-code-integration.md
│ │ ├── claude-code-quickstart.md
│ │ ├── claude-desktop-setup.md
│ │ ├── complete-setup-guide.md
│ │ ├── database-synchronization.md
│ │ ├── development
│ │ │ ├── autonomous-memory-consolidation.md
│ │ │ ├── CLEANUP_PLAN.md
│ │ │ ├── CLEANUP_README.md
│ │ │ ├── CLEANUP_SUMMARY.md
│ │ │ ├── dream-inspired-memory-consolidation.md
│ │ │ ├── hybrid-slm-memory-consolidation.md
│ │ │ ├── mcp-milestone.md
│ │ │ ├── multi-client-architecture.md
│ │ │ ├── test-results.md
│ │ │ └── TIMESTAMP_FIX_SUMMARY.md
│ │ ├── distributed-sync.md
│ │ ├── invocation_guide.md
│ │ ├── macos-intel.md
│ │ ├── master-guide.md
│ │ ├── mcp-client-configuration.md
│ │ ├── multi-client-server.md
│ │ ├── service-installation.md
│ │ ├── sessions
│ │ │ └── MCP_ENHANCEMENT_SESSION_MEMORY_v4.1.0.md
│ │ ├── UBUNTU_SETUP.md
│ │ ├── ubuntu.md
│ │ ├── windows-setup.md
│ │ └── windows.md
│ ├── docs-root-cleanup-2025-08-23
│ │ ├── AWESOME_LIST_SUBMISSION.md
│ │ ├── CLOUDFLARE_IMPLEMENTATION.md
│ │ ├── DOCUMENTATION_ANALYSIS.md
│ │ ├── DOCUMENTATION_CLEANUP_PLAN.md
│ │ ├── DOCUMENTATION_CONSOLIDATION_COMPLETE.md
│ │ ├── LITESTREAM_SETUP_GUIDE.md
│ │ ├── lm_studio_system_prompt.md
│ │ ├── PYTORCH_DOWNLOAD_FIX.md
│ │ └── README-ORIGINAL-BACKUP.md
│ ├── investigations
│ │ └── MACOS_HOOKS_INVESTIGATION.md
│ ├── litestream-configs-v6.3.0
│ │ ├── install_service.sh
│ │ ├── litestream_master_config_fixed.yml
│ │ ├── litestream_master_config.yml
│ │ ├── litestream_replica_config_fixed.yml
│ │ ├── litestream_replica_config.yml
│ │ ├── litestream_replica_simple.yml
│ │ ├── litestream-http.service
│ │ ├── litestream.service
│ │ └── requirements-cloudflare.txt
│ ├── release-notes
│ │ └── release-notes-v7.1.4.md
│ └── setup-development
│ ├── README.md
│ ├── setup_consolidation_mdns.sh
│ ├── STARTUP_SETUP_GUIDE.md
│ └── test_service.sh
├── CHANGELOG-HISTORIC.md
├── CHANGELOG.md
├── claude_commands
│ ├── memory-context.md
│ ├── memory-health.md
│ ├── memory-ingest-dir.md
│ ├── memory-ingest.md
│ ├── memory-recall.md
│ ├── memory-search.md
│ ├── memory-store.md
│ ├── README.md
│ └── session-start.md
├── claude-hooks
│ ├── config.json
│ ├── config.template.json
│ ├── CONFIGURATION.md
│ ├── core
│ │ ├── memory-retrieval.js
│ │ ├── mid-conversation.js
│ │ ├── session-end.js
│ │ ├── session-start.js
│ │ └── topic-change.js
│ ├── debug-pattern-test.js
│ ├── install_claude_hooks_windows.ps1
│ ├── install_hooks.py
│ ├── memory-mode-controller.js
│ ├── MIGRATION.md
│ ├── README-NATURAL-TRIGGERS.md
│ ├── README-phase2.md
│ ├── README.md
│ ├── simple-test.js
│ ├── statusline.sh
│ ├── test-adaptive-weights.js
│ ├── test-dual-protocol-hook.js
│ ├── test-mcp-hook.js
│ ├── test-natural-triggers.js
│ ├── test-recency-scoring.js
│ ├── tests
│ │ ├── integration-test.js
│ │ ├── phase2-integration-test.js
│ │ ├── test-code-execution.js
│ │ ├── test-cross-session.json
│ │ ├── test-session-tracking.json
│ │ └── test-threading.json
│ ├── utilities
│ │ ├── adaptive-pattern-detector.js
│ │ ├── context-formatter.js
│ │ ├── context-shift-detector.js
│ │ ├── conversation-analyzer.js
│ │ ├── dynamic-context-updater.js
│ │ ├── git-analyzer.js
│ │ ├── mcp-client.js
│ │ ├── memory-client.js
│ │ ├── memory-scorer.js
│ │ ├── performance-manager.js
│ │ ├── project-detector.js
│ │ ├── session-tracker.js
│ │ ├── tiered-conversation-monitor.js
│ │ └── version-checker.js
│ └── WINDOWS-SESSIONSTART-BUG.md
├── CLAUDE.md
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── Development-Sprint-November-2025.md
├── docs
│ ├── amp-cli-bridge.md
│ ├── api
│ │ ├── code-execution-interface.md
│ │ ├── memory-metadata-api.md
│ │ ├── PHASE1_IMPLEMENTATION_SUMMARY.md
│ │ ├── PHASE2_IMPLEMENTATION_SUMMARY.md
│ │ ├── PHASE2_REPORT.md
│ │ └── tag-standardization.md
│ ├── architecture
│ │ ├── search-enhancement-spec.md
│ │ └── search-examples.md
│ ├── architecture.md
│ ├── archive
│ │ └── obsolete-workflows
│ │ ├── load_memory_context.md
│ │ └── README.md
│ ├── assets
│ │ └── images
│ │ ├── dashboard-v3.3.0-preview.png
│ │ ├── memory-awareness-hooks-example.png
│ │ ├── project-infographic.svg
│ │ └── README.md
│ ├── CLAUDE_CODE_QUICK_REFERENCE.md
│ ├── cloudflare-setup.md
│ ├── deployment
│ │ ├── docker.md
│ │ ├── dual-service.md
│ │ ├── production-guide.md
│ │ └── systemd-service.md
│ ├── development
│ │ ├── ai-agent-instructions.md
│ │ ├── code-quality
│ │ │ ├── phase-2a-completion.md
│ │ │ ├── phase-2a-handle-get-prompt.md
│ │ │ ├── phase-2a-index.md
│ │ │ ├── phase-2a-install-package.md
│ │ │ └── phase-2b-session-summary.md
│ │ ├── code-quality-workflow.md
│ │ ├── dashboard-workflow.md
│ │ ├── issue-management.md
│ │ ├── pr-review-guide.md
│ │ ├── refactoring-notes.md
│ │ ├── release-checklist.md
│ │ └── todo-tracker.md
│ ├── docker-optimized-build.md
│ ├── document-ingestion.md
│ ├── DOCUMENTATION_AUDIT.md
│ ├── enhancement-roadmap-issue-14.md
│ ├── examples
│ │ ├── analysis-scripts.js
│ │ ├── maintenance-session-example.md
│ │ ├── memory-distribution-chart.jsx
│ │ └── tag-schema.json
│ ├── first-time-setup.md
│ ├── glama-deployment.md
│ ├── guides
│ │ ├── advanced-command-examples.md
│ │ ├── chromadb-migration.md
│ │ ├── commands-vs-mcp-server.md
│ │ ├── mcp-enhancements.md
│ │ ├── mdns-service-discovery.md
│ │ ├── memory-consolidation-guide.md
│ │ ├── migration.md
│ │ ├── scripts.md
│ │ └── STORAGE_BACKENDS.md
│ ├── HOOK_IMPROVEMENTS.md
│ ├── hooks
│ │ └── phase2-code-execution-migration.md
│ ├── http-server-management.md
│ ├── ide-compatability.md
│ ├── IMAGE_RETENTION_POLICY.md
│ ├── images
│ │ └── dashboard-placeholder.md
│ ├── implementation
│ │ ├── health_checks.md
│ │ └── performance.md
│ ├── IMPLEMENTATION_PLAN_HTTP_SSE.md
│ ├── integration
│ │ ├── homebrew.md
│ │ └── multi-client.md
│ ├── integrations
│ │ ├── gemini.md
│ │ ├── groq-bridge.md
│ │ ├── groq-integration-summary.md
│ │ └── groq-model-comparison.md
│ ├── integrations.md
│ ├── legacy
│ │ └── dual-protocol-hooks.md
│ ├── LM_STUDIO_COMPATIBILITY.md
│ ├── maintenance
│ │ └── memory-maintenance.md
│ ├── mastery
│ │ ├── api-reference.md
│ │ ├── architecture-overview.md
│ │ ├── configuration-guide.md
│ │ ├── local-setup-and-run.md
│ │ ├── testing-guide.md
│ │ └── troubleshooting.md
│ ├── migration
│ │ └── code-execution-api-quick-start.md
│ ├── natural-memory-triggers
│ │ ├── cli-reference.md
│ │ ├── installation-guide.md
│ │ └── performance-optimization.md
│ ├── oauth-setup.md
│ ├── pr-graphql-integration.md
│ ├── quick-setup-cloudflare-dual-environment.md
│ ├── README.md
│ ├── remote-configuration-wiki-section.md
│ ├── research
│ │ ├── code-execution-interface-implementation.md
│ │ └── code-execution-interface-summary.md
│ ├── ROADMAP.md
│ ├── sqlite-vec-backend.md
│ ├── statistics
│ │ ├── charts
│ │ │ ├── activity_patterns.png
│ │ │ ├── contributors.png
│ │ │ ├── growth_trajectory.png
│ │ │ ├── monthly_activity.png
│ │ │ └── october_sprint.png
│ │ ├── data
│ │ │ ├── activity_by_day.csv
│ │ │ ├── activity_by_hour.csv
│ │ │ ├── contributors.csv
│ │ │ └── monthly_activity.csv
│ │ ├── generate_charts.py
│ │ └── REPOSITORY_STATISTICS.md
│ ├── technical
│ │ ├── development.md
│ │ ├── memory-migration.md
│ │ ├── migration-log.md
│ │ ├── sqlite-vec-embedding-fixes.md
│ │ └── tag-storage.md
│ ├── testing
│ │ └── regression-tests.md
│ ├── testing-cloudflare-backend.md
│ ├── troubleshooting
│ │ ├── cloudflare-api-token-setup.md
│ │ ├── cloudflare-authentication.md
│ │ ├── general.md
│ │ ├── hooks-quick-reference.md
│ │ ├── pr162-schema-caching-issue.md
│ │ ├── session-end-hooks.md
│ │ └── sync-issues.md
│ └── tutorials
│ ├── advanced-techniques.md
│ ├── data-analysis.md
│ └── demo-session-walkthrough.md
├── examples
│ ├── claude_desktop_config_template.json
│ ├── claude_desktop_config_windows.json
│ ├── claude-desktop-http-config.json
│ ├── config
│ │ └── claude_desktop_config.json
│ ├── http-mcp-bridge.js
│ ├── memory_export_template.json
│ ├── README.md
│ ├── setup
│ │ └── setup_multi_client_complete.py
│ └── start_https_example.sh
├── install_service.py
├── install.py
├── LICENSE
├── NOTICE
├── pyproject.toml
├── pytest.ini
├── README.md
├── run_server.py
├── scripts
│ ├── .claude
│ │ └── settings.local.json
│ ├── archive
│ │ └── check_missing_timestamps.py
│ ├── backup
│ │ ├── backup_memories.py
│ │ ├── backup_sqlite_vec.sh
│ │ ├── export_distributable_memories.sh
│ │ └── restore_memories.py
│ ├── benchmarks
│ │ ├── benchmark_code_execution_api.py
│ │ ├── benchmark_hybrid_sync.py
│ │ └── benchmark_server_caching.py
│ ├── database
│ │ ├── analyze_sqlite_vec_db.py
│ │ ├── check_sqlite_vec_status.py
│ │ ├── db_health_check.py
│ │ └── simple_timestamp_check.py
│ ├── development
│ │ ├── debug_server_initialization.py
│ │ ├── find_orphaned_files.py
│ │ ├── fix_mdns.sh
│ │ ├── fix_sitecustomize.py
│ │ ├── remote_ingest.sh
│ │ ├── setup-git-merge-drivers.sh
│ │ ├── uv-lock-merge.sh
│ │ └── verify_hybrid_sync.py
│ ├── hooks
│ │ └── pre-commit
│ ├── installation
│ │ ├── install_linux_service.py
│ │ ├── install_macos_service.py
│ │ ├── install_uv.py
│ │ ├── install_windows_service.py
│ │ ├── install.py
│ │ ├── setup_backup_cron.sh
│ │ ├── setup_claude_mcp.sh
│ │ └── setup_cloudflare_resources.py
│ ├── linux
│ │ ├── service_status.sh
│ │ ├── start_service.sh
│ │ ├── stop_service.sh
│ │ ├── uninstall_service.sh
│ │ └── view_logs.sh
│ ├── maintenance
│ │ ├── assign_memory_types.py
│ │ ├── check_memory_types.py
│ │ ├── cleanup_corrupted_encoding.py
│ │ ├── cleanup_memories.py
│ │ ├── cleanup_organize.py
│ │ ├── consolidate_memory_types.py
│ │ ├── consolidation_mappings.json
│ │ ├── delete_orphaned_vectors_fixed.py
│ │ ├── fast_cleanup_duplicates_with_tracking.sh
│ │ ├── find_all_duplicates.py
│ │ ├── find_cloudflare_duplicates.py
│ │ ├── find_duplicates.py
│ │ ├── memory-types.md
│ │ ├── README.md
│ │ ├── recover_timestamps_from_cloudflare.py
│ │ ├── regenerate_embeddings.py
│ │ ├── repair_malformed_tags.py
│ │ ├── repair_memories.py
│ │ ├── repair_sqlite_vec_embeddings.py
│ │ ├── repair_zero_embeddings.py
│ │ ├── restore_from_json_export.py
│ │ └── scan_todos.sh
│ ├── migration
│ │ ├── cleanup_mcp_timestamps.py
│ │ ├── legacy
│ │ │ └── migrate_chroma_to_sqlite.py
│ │ ├── mcp-migration.py
│ │ ├── migrate_sqlite_vec_embeddings.py
│ │ ├── migrate_storage.py
│ │ ├── migrate_tags.py
│ │ ├── migrate_timestamps.py
│ │ ├── migrate_to_cloudflare.py
│ │ ├── migrate_to_sqlite_vec.py
│ │ ├── migrate_v5_enhanced.py
│ │ ├── TIMESTAMP_CLEANUP_README.md
│ │ └── verify_mcp_timestamps.py
│ ├── pr
│ │ ├── amp_collect_results.sh
│ │ ├── amp_detect_breaking_changes.sh
│ │ ├── amp_generate_tests.sh
│ │ ├── amp_pr_review.sh
│ │ ├── amp_quality_gate.sh
│ │ ├── amp_suggest_fixes.sh
│ │ ├── auto_review.sh
│ │ ├── detect_breaking_changes.sh
│ │ ├── generate_tests.sh
│ │ ├── lib
│ │ │ └── graphql_helpers.sh
│ │ ├── quality_gate.sh
│ │ ├── resolve_threads.sh
│ │ ├── run_pyscn_analysis.sh
│ │ ├── run_quality_checks.sh
│ │ ├── thread_status.sh
│ │ └── watch_reviews.sh
│ ├── quality
│ │ ├── fix_dead_code_install.sh
│ │ ├── phase1_dead_code_analysis.md
│ │ ├── phase2_complexity_analysis.md
│ │ ├── README_PHASE1.md
│ │ ├── README_PHASE2.md
│ │ ├── track_pyscn_metrics.sh
│ │ └── weekly_quality_review.sh
│ ├── README.md
│ ├── run
│ │ ├── run_mcp_memory.sh
│ │ ├── run-with-uv.sh
│ │ └── start_sqlite_vec.sh
│ ├── run_memory_server.py
│ ├── server
│ │ ├── check_http_server.py
│ │ ├── check_server_health.py
│ │ ├── memory_offline.py
│ │ ├── preload_models.py
│ │ ├── run_http_server.py
│ │ ├── run_memory_server.py
│ │ ├── start_http_server.bat
│ │ └── start_http_server.sh
│ ├── service
│ │ ├── deploy_dual_services.sh
│ │ ├── install_http_service.sh
│ │ ├── mcp-memory-http.service
│ │ ├── mcp-memory.service
│ │ ├── memory_service_manager.sh
│ │ ├── service_control.sh
│ │ ├── service_utils.py
│ │ └── update_service.sh
│ ├── sync
│ │ ├── check_drift.py
│ │ ├── claude_sync_commands.py
│ │ ├── export_memories.py
│ │ ├── import_memories.py
│ │ ├── litestream
│ │ │ ├── apply_local_changes.sh
│ │ │ ├── enhanced_memory_store.sh
│ │ │ ├── init_staging_db.sh
│ │ │ ├── io.litestream.replication.plist
│ │ │ ├── manual_sync.sh
│ │ │ ├── memory_sync.sh
│ │ │ ├── pull_remote_changes.sh
│ │ │ ├── push_to_remote.sh
│ │ │ ├── README.md
│ │ │ ├── resolve_conflicts.sh
│ │ │ ├── setup_local_litestream.sh
│ │ │ ├── setup_remote_litestream.sh
│ │ │ ├── staging_db_init.sql
│ │ │ ├── stash_local_changes.sh
│ │ │ ├── sync_from_remote_noconfig.sh
│ │ │ └── sync_from_remote.sh
│ │ ├── README.md
│ │ ├── safe_cloudflare_update.sh
│ │ ├── sync_memory_backends.py
│ │ └── sync_now.py
│ ├── testing
│ │ ├── run_complete_test.py
│ │ ├── run_memory_test.sh
│ │ ├── simple_test.py
│ │ ├── test_cleanup_logic.py
│ │ ├── test_cloudflare_backend.py
│ │ ├── test_docker_functionality.py
│ │ ├── test_installation.py
│ │ ├── test_mdns.py
│ │ ├── test_memory_api.py
│ │ ├── test_memory_simple.py
│ │ ├── test_migration.py
│ │ ├── test_search_api.py
│ │ ├── test_sqlite_vec_embeddings.py
│ │ ├── test_sse_events.py
│ │ ├── test-connection.py
│ │ └── test-hook.js
│ ├── utils
│ │ ├── claude_commands_utils.py
│ │ ├── generate_personalized_claude_md.sh
│ │ ├── groq
│ │ ├── groq_agent_bridge.py
│ │ ├── list-collections.py
│ │ ├── memory_wrapper_uv.py
│ │ ├── query_memories.py
│ │ ├── smithery_wrapper.py
│ │ ├── test_groq_bridge.sh
│ │ └── uv_wrapper.py
│ └── validation
│ ├── check_dev_setup.py
│ ├── check_documentation_links.py
│ ├── diagnose_backend_config.py
│ ├── validate_configuration_complete.py
│ ├── validate_memories.py
│ ├── validate_migration.py
│ ├── validate_timestamp_integrity.py
│ ├── verify_environment.py
│ ├── verify_pytorch_windows.py
│ └── verify_torch.py
├── SECURITY.md
├── selective_timestamp_recovery.py
├── SPONSORS.md
├── src
│ └── mcp_memory_service
│ ├── __init__.py
│ ├── api
│ │ ├── __init__.py
│ │ ├── client.py
│ │ ├── operations.py
│ │ ├── sync_wrapper.py
│ │ └── types.py
│ ├── backup
│ │ ├── __init__.py
│ │ └── scheduler.py
│ ├── cli
│ │ ├── __init__.py
│ │ ├── ingestion.py
│ │ ├── main.py
│ │ └── utils.py
│ ├── config.py
│ ├── consolidation
│ │ ├── __init__.py
│ │ ├── associations.py
│ │ ├── base.py
│ │ ├── clustering.py
│ │ ├── compression.py
│ │ ├── consolidator.py
│ │ ├── decay.py
│ │ ├── forgetting.py
│ │ ├── health.py
│ │ └── scheduler.py
│ ├── dependency_check.py
│ ├── discovery
│ │ ├── __init__.py
│ │ ├── client.py
│ │ └── mdns_service.py
│ ├── embeddings
│ │ ├── __init__.py
│ │ └── onnx_embeddings.py
│ ├── ingestion
│ │ ├── __init__.py
│ │ ├── base.py
│ │ ├── chunker.py
│ │ ├── csv_loader.py
│ │ ├── json_loader.py
│ │ ├── pdf_loader.py
│ │ ├── registry.py
│ │ ├── semtools_loader.py
│ │ └── text_loader.py
│ ├── lm_studio_compat.py
│ ├── mcp_server.py
│ ├── models
│ │ ├── __init__.py
│ │ └── memory.py
│ ├── server.py
│ ├── services
│ │ ├── __init__.py
│ │ └── memory_service.py
│ ├── storage
│ │ ├── __init__.py
│ │ ├── base.py
│ │ ├── cloudflare.py
│ │ ├── factory.py
│ │ ├── http_client.py
│ │ ├── hybrid.py
│ │ └── sqlite_vec.py
│ ├── sync
│ │ ├── __init__.py
│ │ ├── exporter.py
│ │ ├── importer.py
│ │ └── litestream_config.py
│ ├── utils
│ │ ├── __init__.py
│ │ ├── cache_manager.py
│ │ ├── content_splitter.py
│ │ ├── db_utils.py
│ │ ├── debug.py
│ │ ├── document_processing.py
│ │ ├── gpu_detection.py
│ │ ├── hashing.py
│ │ ├── http_server_manager.py
│ │ ├── port_detection.py
│ │ ├── system_detection.py
│ │ └── time_parser.py
│ └── web
│ ├── __init__.py
│ ├── api
│ │ ├── __init__.py
│ │ ├── analytics.py
│ │ ├── backup.py
│ │ ├── consolidation.py
│ │ ├── documents.py
│ │ ├── events.py
│ │ ├── health.py
│ │ ├── manage.py
│ │ ├── mcp.py
│ │ ├── memories.py
│ │ ├── search.py
│ │ └── sync.py
│ ├── app.py
│ ├── dependencies.py
│ ├── oauth
│ │ ├── __init__.py
│ │ ├── authorization.py
│ │ ├── discovery.py
│ │ ├── middleware.py
│ │ ├── models.py
│ │ ├── registration.py
│ │ └── storage.py
│ ├── sse.py
│ └── static
│ ├── app.js
│ ├── index.html
│ ├── README.md
│ ├── sse_test.html
│ └── style.css
├── start_http_debug.bat
├── start_http_server.sh
├── test_document.txt
├── test_version_checker.js
├── tests
│ ├── __init__.py
│ ├── api
│ │ ├── __init__.py
│ │ ├── test_compact_types.py
│ │ └── test_operations.py
│ ├── bridge
│ │ ├── mock_responses.js
│ │ ├── package-lock.json
│ │ ├── package.json
│ │ └── test_http_mcp_bridge.js
│ ├── conftest.py
│ ├── consolidation
│ │ ├── __init__.py
│ │ ├── conftest.py
│ │ ├── test_associations.py
│ │ ├── test_clustering.py
│ │ ├── test_compression.py
│ │ ├── test_consolidator.py
│ │ ├── test_decay.py
│ │ └── test_forgetting.py
│ ├── contracts
│ │ └── api-specification.yml
│ ├── integration
│ │ ├── package-lock.json
│ │ ├── package.json
│ │ ├── test_api_key_fallback.py
│ │ ├── test_api_memories_chronological.py
│ │ ├── test_api_tag_time_search.py
│ │ ├── test_api_with_memory_service.py
│ │ ├── test_bridge_integration.js
│ │ ├── test_cli_interfaces.py
│ │ ├── test_cloudflare_connection.py
│ │ ├── test_concurrent_clients.py
│ │ ├── test_data_serialization_consistency.py
│ │ ├── test_http_server_startup.py
│ │ ├── test_mcp_memory.py
│ │ ├── test_mdns_integration.py
│ │ ├── test_oauth_basic_auth.py
│ │ ├── test_oauth_flow.py
│ │ ├── test_server_handlers.py
│ │ └── test_store_memory.py
│ ├── performance
│ │ ├── test_background_sync.py
│ │ └── test_hybrid_live.py
│ ├── README.md
│ ├── smithery
│ │ └── test_smithery.py
│ ├── sqlite
│ │ └── simple_sqlite_vec_test.py
│ ├── test_client.py
│ ├── test_content_splitting.py
│ ├── test_database.py
│ ├── test_hybrid_cloudflare_limits.py
│ ├── test_hybrid_storage.py
│ ├── test_memory_ops.py
│ ├── test_semantic_search.py
│ ├── test_sqlite_vec_storage.py
│ ├── test_time_parser.py
│ ├── test_timestamp_preservation.py
│ ├── timestamp
│ │ ├── test_hook_vs_manual_storage.py
│ │ ├── test_issue99_final_validation.py
│ │ ├── test_search_retrieval_inconsistency.py
│ │ ├── test_timestamp_issue.py
│ │ └── test_timestamp_simple.py
│ └── unit
│ ├── conftest.py
│ ├── test_cloudflare_storage.py
│ ├── test_csv_loader.py
│ ├── test_fastapi_dependencies.py
│ ├── test_import.py
│ ├── test_json_loader.py
│ ├── test_mdns_simple.py
│ ├── test_mdns.py
│ ├── test_memory_service.py
│ ├── test_memory.py
│ ├── test_semtools_loader.py
│ ├── test_storage_interface_compatibility.py
│ └── test_tag_time_filtering.py
├── tools
│ ├── docker
│ │ ├── DEPRECATED.md
│ │ ├── docker-compose.http.yml
│ │ ├── docker-compose.pythonpath.yml
│ │ ├── docker-compose.standalone.yml
│ │ ├── docker-compose.uv.yml
│ │ ├── docker-compose.yml
│ │ ├── docker-entrypoint-persistent.sh
│ │ ├── docker-entrypoint-unified.sh
│ │ ├── docker-entrypoint.sh
│ │ ├── Dockerfile
│ │ ├── Dockerfile.glama
│ │ ├── Dockerfile.slim
│ │ ├── README.md
│ │ └── test-docker-modes.sh
│ └── README.md
└── uv.lock
```
# Files
--------------------------------------------------------------------------------
/src/mcp_memory_service/services/__init__.py:
--------------------------------------------------------------------------------
```python
"""
Services package for MCP Memory Service.
This package contains shared business logic services that provide
consistent behavior across different interfaces (API, MCP tools).
"""
from .memory_service import MemoryService, MemoryResult
__all__ = ["MemoryService", "MemoryResult"]
```
--------------------------------------------------------------------------------
/examples/claude-desktop-http-config.json:
--------------------------------------------------------------------------------
```json
{
  "mcpServers": {
    "memory": {
      "command": "node",
      "args": ["/path/to/mcp-memory-service/examples/http-mcp-bridge.js"],
      "env": {
        "MCP_MEMORY_HTTP_ENDPOINT": "http://your-server:8000/api",
        "MCP_MEMORY_API_KEY": "your-secure-api-key"
      }
    }
  }
}
```
--------------------------------------------------------------------------------
/archive/litestream-configs-v6.3.0/litestream_replica_config.yml:
--------------------------------------------------------------------------------
```yaml
# Litestream Replica Configuration for local macOS machine
# This configuration syncs from the remote master at narrowbox.local
dbs:
  - path: /Users/hkr/Library/Application Support/mcp-memory/sqlite_vec.db
    replicas:
      - type: file
        url: http://10.0.1.30:8080/mcp-memory
        sync-interval: 10s
```
--------------------------------------------------------------------------------
/src/mcp_memory_service/embeddings/__init__.py:
--------------------------------------------------------------------------------
```python
"""Embedding generation modules for MCP Memory Service."""
from .onnx_embeddings import (
    ONNXEmbeddingModel,
    get_onnx_embedding_model,
    ONNX_AVAILABLE,
    TOKENIZERS_AVAILABLE
)

__all__ = [
    'ONNXEmbeddingModel',
    'get_onnx_embedding_model',
    'ONNX_AVAILABLE',
    'TOKENIZERS_AVAILABLE'
]
```
--------------------------------------------------------------------------------
/examples/config/claude_desktop_config.json:
--------------------------------------------------------------------------------
```json
{
  "mcpServers": {
    "memory": {
      "command": "python",
      "args": [
        "-m",
        "mcp_memory_service.server"
      ],
      "env": {
        "MCP_MEMORY_STORAGE_BACKEND": "sqlite_vec",
        "MCP_MEMORY_BACKUPS_PATH": "C:\\Users\\heinrich.krupp\\AppData\\Local\\mcp-memory"
      }
    }
  }
}
```
--------------------------------------------------------------------------------
/archive/litestream-configs-v6.3.0/litestream_replica_config_fixed.yml:
--------------------------------------------------------------------------------
```yaml
# Litestream Replica Configuration for local macOS machine (FIXED)
# This configuration syncs from the remote master at 10.0.1.30:8080
dbs:
  - path: /Users/hkr/Library/Application Support/mcp-memory/sqlite_vec.db
    replicas:
      - name: "remote-master"
        type: "http"
        url: http://10.0.1.30:8080/mcp-memory
        sync-interval: 10s
```
--------------------------------------------------------------------------------
/tests/conftest.py:
--------------------------------------------------------------------------------
```python
import pytest
import os
import sys
import tempfile
import shutil

# Add src directory to Python path
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'src'))


@pytest.fixture
def temp_db_path():
    '''Create a temporary directory for database testing.'''
    temp_dir = tempfile.mkdtemp()
    yield temp_dir
    # Clean up after test
    shutil.rmtree(temp_dir)
```
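The fixture above pairs `tempfile.mkdtemp` with `shutil.rmtree` so each test gets a throwaway directory. Outside pytest, the same create/yield/cleanup lifecycle can be sketched as a plain context manager using only the standard library (`temp_db_dir` is an illustrative name, not part of this repo):

```python
import os
import shutil
import tempfile
from contextlib import contextmanager

@contextmanager
def temp_db_dir():
    """Yield a temporary directory, removing it when the block exits."""
    path = tempfile.mkdtemp()
    try:
        yield path
    finally:
        # Mirrors the fixture's post-yield cleanup
        shutil.rmtree(path)

with temp_db_dir() as d:
    assert os.path.isdir(d)
# the directory no longer exists here
```

This is equivalent to what pytest does with a yield-style fixture: code before `yield` is setup, code after (here, the `finally` clause) is teardown.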
--------------------------------------------------------------------------------
/scripts/testing/run_memory_test.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
set -e
# Activate virtual environment
source ./venv/bin/activate
# Set environment variables
export MCP_MEMORY_STORAGE_BACKEND="sqlite_vec"
export MCP_MEMORY_SQLITE_PATH="/Users/hkr/Library/Application Support/mcp-memory/sqlite_vec.db"
export MCP_MEMORY_BACKUPS_PATH="/Users/hkr/Library/Application Support/mcp-memory/backups"
export MCP_MEMORY_USE_ONNX="1"
# Run the memory server
python -m mcp_memory_service.server
```
--------------------------------------------------------------------------------
/scripts/service/update_service.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
echo "Updating MCP Memory Service configuration..."
# Copy the updated service file
sudo cp mcp-memory.service /etc/systemd/system/
# Set proper permissions
sudo chmod 644 /etc/systemd/system/mcp-memory.service
# Reload systemd daemon
sudo systemctl daemon-reload
echo "✅ Service updated successfully!"
echo ""
echo "Now try starting the service:"
echo " sudo systemctl start mcp-memory"
echo " sudo systemctl status mcp-memory"
```
--------------------------------------------------------------------------------
/scripts/pr/run_quality_checks.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
# scripts/pr/run_quality_checks.sh - Run quality checks on a PR
# Wrapper for quality_gate.sh to maintain consistent naming in workflows
#
# Usage: bash scripts/pr/run_quality_checks.sh <PR_NUMBER>
set -e
PR_NUMBER=$1
if [ -z "$PR_NUMBER" ]; then
    echo "Usage: $0 <PR_NUMBER>"
    exit 1
fi
# Get script directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Run quality gate checks
exec "$SCRIPT_DIR/quality_gate.sh" "$PR_NUMBER"
```
--------------------------------------------------------------------------------
/tests/bridge/package.json:
--------------------------------------------------------------------------------
```json
{
  "name": "mcp-bridge-tests",
  "version": "1.0.0",
  "description": "Unit tests for HTTP-MCP bridge",
  "main": "test_http_mcp_bridge.js",
  "scripts": {
    "test": "mocha test_http_mcp_bridge.js --reporter spec",
    "test:watch": "mocha test_http_mcp_bridge.js --reporter spec --watch"
  },
  "dependencies": {
    "mocha": "^10.0.0",
    "sinon": "^17.0.0"
  },
  "devDependencies": {},
  "keywords": ["mcp", "bridge", "testing"],
  "author": "",
  "license": "Apache-2.0"
}
```
--------------------------------------------------------------------------------
/scripts/development/setup-git-merge-drivers.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
# Setup script for git merge drivers
# Run this once after cloning the repository
echo "Setting up git merge drivers for uv.lock..."
# Configure the uv.lock merge driver
git config merge.uv-lock-merge.driver './scripts/uv-lock-merge.sh %O %A %B %L %P'
git config merge.uv-lock-merge.name 'UV lock file merge driver'
# Make the merge script executable
chmod +x scripts/uv-lock-merge.sh
echo "✓ Git merge drivers configured successfully!"
echo " uv.lock conflicts will now be resolved automatically"
```
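Registering the driver in git config is only half of the mechanism: a merge driver fires solely for paths that opt into it via `.gitattributes`. The repo's `.gitattributes` appears in the tree above but its contents are not shown on this page; the pairing entry would typically look like this (hypothetical, shown for illustration):

```
# .gitattributes — route uv.lock conflicts through the custom merge driver
uv.lock merge=uv-lock-merge
```

With both pieces in place, git invokes `uv-lock-merge.sh` with the ancestor, current, and other versions (`%O %A %B`) whenever `uv.lock` conflicts during a merge.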
--------------------------------------------------------------------------------
/archive/litestream-configs-v6.3.0/litestream_master_config.yml:
--------------------------------------------------------------------------------
```yaml
# Litestream Master Configuration for narrowbox.local
# This configuration sets up the remote server as the master database
dbs:
  - path: /home/user/.local/share/mcp-memory/sqlite_vec.db
    replicas:
      # Local file replica for serving via HTTP
      - type: file
        path: /var/www/litestream/mcp-memory
        sync-interval: 10s
      # Local backup
      - type: file
        path: /backup/litestream/mcp-memory
        sync-interval: 1m
    # Performance settings
    checkpoint-interval: 30s
    wal-retention: 10m
```
--------------------------------------------------------------------------------
/tests/integration/package.json:
--------------------------------------------------------------------------------
```json
{
  "name": "mcp-integration-tests",
  "version": "1.0.0",
  "description": "Integration tests for HTTP-MCP bridge",
  "main": "test_bridge_integration.js",
  "scripts": {
    "test": "mocha test_bridge_integration.js --reporter spec --timeout 10000",
    "test:watch": "mocha test_bridge_integration.js --reporter spec --timeout 10000 --watch"
  },
  "dependencies": {
    "mocha": "^10.0.0",
    "sinon": "^17.0.0"
  },
  "devDependencies": {},
  "keywords": ["mcp", "bridge", "integration", "testing"],
  "author": "",
  "license": "Apache-2.0"
}
```
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/config.yml:
--------------------------------------------------------------------------------
```yaml
blank_issues_enabled: false
contact_links:
  - name: 📚 Documentation & Wiki
    url: https://github.com/doobidoo/mcp-memory-service/wiki
    about: Check the Wiki for setup guides, troubleshooting, and advanced usage
  - name: 💬 GitHub Discussions
    url: https://github.com/doobidoo/mcp-memory-service/discussions
    about: Ask questions, share ideas, or discuss general topics with the community
  - name: 🔍 Search Existing Issues
    url: https://github.com/doobidoo/mcp-memory-service/issues
    about: Check if your issue has already been reported or solved
```
--------------------------------------------------------------------------------
/scripts/linux/uninstall_service.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
echo "This will uninstall MCP Memory Service."
read -p "Are you sure? (y/N): " confirm
if [[ ! "$confirm" =~ ^[Yy]$ ]]; then
    exit 0
fi
echo "Stopping service..."
systemctl --user stop mcp-memory 2>/dev/null
systemctl --user disable mcp-memory 2>/dev/null
echo "Removing service files..."
if [ -f "$HOME/.config/systemd/user/mcp-memory.service" ]; then
    rm -f "$HOME/.config/systemd/user/mcp-memory.service"
    systemctl --user daemon-reload
else
    sudo rm -f /etc/systemd/system/mcp-memory.service
    sudo systemctl daemon-reload
fi
echo "✅ Service uninstalled"
```
--------------------------------------------------------------------------------
/archive/litestream-configs-v6.3.0/litestream_master_config_fixed.yml:
--------------------------------------------------------------------------------
```yaml
# Litestream Master Configuration for narrowbox.local (FIXED)
# This configuration sets up the remote server as the master database
dbs:
  - path: /home/hkr/.local/share/mcp-memory/sqlite_vec.db
    replicas:
      # HTTP replica for serving to clients
      - name: "http-replica"
        type: file
        path: /var/www/litestream/mcp-memory
        sync-interval: 10s
      # Local backup
      - name: "backup-replica"
        type: file
        path: /backup/litestream/mcp-memory
        sync-interval: 1m
    # Performance settings
    checkpoint-interval: 30s
    wal-retention: 10m
```
--------------------------------------------------------------------------------
/start_http_server.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
export MCP_MEMORY_STORAGE_BACKEND=hybrid
export MCP_MEMORY_SQLITE_PATH="/Users/hkr/Library/Application Support/mcp-memory/sqlite_vec.db"
export MCP_HTTP_ENABLED=true
export MCP_OAUTH_ENABLED=false
export CLOUDFLARE_API_TOKEN="Y9qwW1rYkwiE63iWYASxnzfTQlIn-mtwCihRTwZa"
export CLOUDFLARE_ACCOUNT_ID="be0e35a26715043ef8df90253268c33f"
export CLOUDFLARE_D1_DATABASE_ID="f745e9b4-ba8e-4d47-b38f-12af91060d5a"
export CLOUDFLARE_VECTORIZE_INDEX="mcp-memory-index"
cd /Users/hkr/Documents/GitHub/mcp-memory-service
python -m uvicorn mcp_memory_service.web.app:app --host 127.0.0.1 --port 8889 --reload
```
--------------------------------------------------------------------------------
/tests/api/__init__.py:
--------------------------------------------------------------------------------
```python
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for code execution API."""
```
--------------------------------------------------------------------------------
/src/mcp_memory_service/web/api/__init__.py:
--------------------------------------------------------------------------------
```python
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
API routes for the HTTP interface.
"""
```
--------------------------------------------------------------------------------
/tools/docker/docker-compose.pythonpath.yml:
--------------------------------------------------------------------------------
```yaml
services:
  memory-service:
    image: python:3.10-slim
    working_dir: /app
    stdin_open: true
    tty: true
    ports:
      - "8000:8000"
    volumes:
      - .:/app
      - ${CHROMA_DB_PATH:-$HOME/mcp-memory/chroma_db}:/app/chroma_db
      - ${BACKUPS_PATH:-$HOME/mcp-memory/backups}:/app/backups
    environment:
      - MCP_MEMORY_CHROMA_PATH=/app/chroma_db
      - MCP_MEMORY_BACKUPS_PATH=/app/backups
      - LOG_LEVEL=INFO
      - MAX_RESULTS_PER_QUERY=10
      - SIMILARITY_THRESHOLD=0.7
      - PYTHONPATH=/app/src:/app
      - PYTHONUNBUFFERED=1
    restart: unless-stopped
    build:
      context: .
      dockerfile: Dockerfile
```
--------------------------------------------------------------------------------
/src/mcp_memory_service/models/__init__.py:
--------------------------------------------------------------------------------
```python
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .memory import Memory, MemoryQueryResult
__all__ = ['Memory', 'MemoryQueryResult']
```
--------------------------------------------------------------------------------
/tools/docker/docker-compose.uv.yml:
--------------------------------------------------------------------------------
```yaml
services:
  memory-service:
    image: python:3.10-slim
    working_dir: /app
    stdin_open: true
    tty: true
    ports:
      - "8000:8000"
    volumes:
      - .:/app
      - ${CHROMA_DB_PATH:-$HOME/mcp-memory/chroma_db}:/app/chroma_db
      - ${BACKUPS_PATH:-$HOME/mcp-memory/backups}:/app/backups
    environment:
      - MCP_MEMORY_CHROMA_PATH=/app/chroma_db
      - MCP_MEMORY_BACKUPS_PATH=/app/backups
      - LOG_LEVEL=INFO
      - MAX_RESULTS_PER_QUERY=10
      - SIMILARITY_THRESHOLD=0.7
      - PYTHONPATH=/app
      - PYTHONUNBUFFERED=1
      - UV_ACTIVE=1
      - CHROMA_TELEMETRY_IMPL=none
      - ANONYMIZED_TELEMETRY=false
    restart: unless-stopped
    build:
      context: .
      dockerfile: Dockerfile
```
--------------------------------------------------------------------------------
/scripts/sync/litestream/init_staging_db.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
# Initialize staging database for offline memory changes
STAGING_DB="/Users/hkr/Library/Application Support/mcp-memory/sqlite_vec_staging.db"
INIT_SQL="$(dirname "$0")/deployment/staging_db_init.sql"
echo "$(date): Initializing staging database..."
# Create directory if it doesn't exist
mkdir -p "$(dirname "$STAGING_DB")"
# Initialize database with schema
sqlite3 "$STAGING_DB" < "$INIT_SQL"
if [ $? -eq 0 ]; then
echo "$(date): Staging database initialized at: $STAGING_DB"
echo "$(date): Database size: $(du -h "$STAGING_DB" | cut -f1)"
else
echo "$(date): ERROR: Failed to initialize staging database"
exit 1
fi
# Set permissions
chmod 644 "$STAGING_DB"
echo "$(date): Staging database ready for use"
```
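The init script above pipes a schema file into `sqlite3` but only checks the exit code. A small helper can confirm that tables were actually created; this is an illustrative sketch only (the real staging schema lives in `staging_db_init.sql` and is not shown here, so the table names in the usage example are hypothetical):

```python
import sqlite3

def list_tables(db_path: str) -> list[str]:
    """Return the names of all user tables in a SQLite database."""
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
        ).fetchall()
    return [row[0] for row in rows]

if __name__ == "__main__":
    import sys
    tables = list_tables(sys.argv[1])
    print(f"{len(tables)} table(s): {', '.join(tables)}")
```

Run it against the staging database path after initialization to verify the schema landed, rather than trusting the exit code alone.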
--------------------------------------------------------------------------------
/src/mcp_memory_service/utils/__init__.py:
--------------------------------------------------------------------------------
```python
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .hashing import generate_content_hash
from .document_processing import create_memory_from_chunk, _process_and_store_chunk
__all__ = ['generate_content_hash', 'create_memory_from_chunk', '_process_and_store_chunk']
```
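`generate_content_hash` is re-exported here for deduplication of stored memories. A plausible minimal sketch of such a function follows; this is an assumption for illustration, and the actual implementation in `hashing.py` may normalize content differently:

```python
import hashlib

def content_hash(content: str) -> str:
    """Sketch of a content hash for deduplication: collapse whitespace,
    then SHA-256 the UTF-8 bytes. Illustrative only; the project's real
    generate_content_hash may use a different normalization."""
    normalized = " ".join(content.split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()
```

The key property is that semantically identical content maps to the same digest, so repeated stores can be detected cheaply by comparing hashes instead of full content.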
--------------------------------------------------------------------------------
/src/mcp_memory_service/backup/__init__.py:
--------------------------------------------------------------------------------
```python
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Automatic backup module for MCP Memory Service.
Provides scheduled backups and backup management functionality.
"""
from .scheduler import BackupScheduler, BackupService
__all__ = ['BackupScheduler', 'BackupService']
```
--------------------------------------------------------------------------------
/src/mcp_memory_service/cli/__init__.py:
--------------------------------------------------------------------------------
```python
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Command Line Interface for MCP Memory Service
Provides CLI commands for document ingestion, memory management, and database operations.
"""
from .ingestion import add_ingestion_commands
__all__ = ['add_ingestion_commands']
```
--------------------------------------------------------------------------------
/src/mcp_memory_service/web/__init__.py:
--------------------------------------------------------------------------------
```python
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Web interface for MCP Memory Service.
Provides HTTP REST API and Server-Sent Events (SSE) interface
using FastAPI and SQLite-vec backend.
"""
# Import version from main package to maintain consistency
from .. import __version__
```
--------------------------------------------------------------------------------
/tools/docker/docker-compose.standalone.yml:
--------------------------------------------------------------------------------
```yaml
services:
  memory-service:
    image: python:3.10-slim
    working_dir: /app
    stdin_open: true
    tty: true
    ports:
      - "8000:8000"
    volumes:
      - .:/app
      - ${CHROMA_DB_PATH:-$HOME/mcp-memory/chroma_db}:/app/chroma_db
      - ${BACKUPS_PATH:-$HOME/mcp-memory/backups}:/app/backups
    environment:
      - MCP_MEMORY_CHROMA_PATH=/app/chroma_db
      - MCP_MEMORY_BACKUPS_PATH=/app/backups
      - LOG_LEVEL=INFO
      - MAX_RESULTS_PER_QUERY=10
      - SIMILARITY_THRESHOLD=0.7
      - PYTHONPATH=/app
      - PYTHONUNBUFFERED=1
      - UV_ACTIVE=1
      - MCP_STANDALONE_MODE=1
      - CHROMA_TELEMETRY_IMPL=none
      - ANONYMIZED_TELEMETRY=false
    restart: unless-stopped
    build:
      context: .
      dockerfile: Dockerfile
    entrypoint: ["/usr/local/bin/docker-entrypoint-persistent.sh"]
```
--------------------------------------------------------------------------------
/.github/FUNDING.yml:
--------------------------------------------------------------------------------
```yaml
# These are supported funding model platforms
# github: doobidoo # Uncomment when enrolled in GitHub Sponsors
# patreon: # Replace with a single Patreon username
# open_collective: # Replace with a single Open Collective username
ko_fi: doobidoo # Replace with a single Ko-fi username
# tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
# community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
# liberapay: # Replace with a single Liberapay username
# issuehunt: # Replace with a single IssueHunt username
# lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
custom: ['https://www.buymeacoffee.com/doobidoo', 'https://paypal.me/heinrichkrupp1'] # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']
```
--------------------------------------------------------------------------------
/archive/deployment-configs/smithery.yaml:
--------------------------------------------------------------------------------
```yaml
# Smithery configuration file: https://smithery.ai/docs/config#smitheryyaml
startCommand:
  type: stdio
  configSchema:
    # JSON Schema defining the configuration options for the MCP.
    type: object
    required:
      - chromaDbPath
      - backupsPath
    properties:
      chromaDbPath:
        type: string
        description: Path to ChromaDB storage.
      backupsPath:
        type: string
        description: Path for backups.
  # A function that produces the CLI command to start the MCP on stdio.
  commandFunction: |-
    (config) => ({
      command: 'python',
      args: ['-m', 'mcp_memory_service.server'],
      env: {
        MCP_MEMORY_CHROMA_PATH: config.chromaDbPath,
        MCP_MEMORY_BACKUPS_PATH: config.backupsPath,
        PYTHONUNBUFFERED: '1',
        PYTORCH_ENABLE_MPS_FALLBACK: '1'
      }
    })
```
--------------------------------------------------------------------------------
/examples/claude_desktop_config_template.json:
--------------------------------------------------------------------------------
```json
{
"mcpServers": {
"memory": {
"_comment": "Recommended: Use Python module approach (most stable, no path dependencies)",
"command": "python",
"args": [
"-m",
"mcp_memory_service.server"
],
"_alternative_approaches": [
"Option 1 (UV): command='uv', args=['--directory', '${PROJECT_PATH}', 'run', 'memory', 'server']",
"Option 2 (New script path): command='python', args=['${PROJECT_PATH}/scripts/server/run_memory_server.py']",
"Option 3 (Legacy, shows migration notice): command='python', args=['${PROJECT_PATH}/scripts/run_memory_server.py']"
],
"env": {
"MCP_MEMORY_STORAGE_BACKEND": "sqlite_vec",
"MCP_MEMORY_BACKUPS_PATH": "${USER_DATA_PATH}/mcp-memory/backups",
"PYTORCH_ENABLE_MPS_FALLBACK": "1",
"PYTORCH_CUDA_ALLOC_CONF": "max_split_size_mb:128"
}
}
}
}
```
--------------------------------------------------------------------------------
/scripts/server/start_http_server.bat:
--------------------------------------------------------------------------------
```
@echo off
REM Start the MCP Memory Service HTTP server in the background on Windows
echo Starting MCP Memory Service HTTP server...
REM Check if server is already running
uv run python scripts\server\check_http_server.py -q
if %errorlevel% == 0 (
echo HTTP server is already running!
uv run python scripts\server\check_http_server.py
exit /b 0
)
REM Start the server in a new window
start "MCP Memory HTTP Server" uv run python scripts\server\run_http_server.py
REM Wait up to 5 seconds for the server to start
FOR /L %%i IN (1,1,5) DO (
timeout /t 1 /nobreak >nul
uv run python scripts\server\check_http_server.py -q
if %errorlevel% == 0 (
echo.
echo [OK] HTTP server started successfully!
uv run python scripts\server\check_http_server.py
goto :eof
)
)
echo.
echo [WARN] Server did not start within 5 seconds. Check the server window for errors.
```
--------------------------------------------------------------------------------
/scripts/sync/litestream/sync_from_remote.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
# Sync script to pull latest database from remote master
DB_PATH="/Users/hkr/Library/Application Support/mcp-memory/sqlite_vec.db"
REMOTE_URL="http://10.0.1.30:8080/mcp-memory"
BACKUP_PATH="/Users/hkr/Library/Application Support/mcp-memory/sqlite_vec.db.backup"
echo "$(date): Starting sync from remote master..."
# Create backup of current database
if [ -f "$DB_PATH" ]; then
cp "$DB_PATH" "$BACKUP_PATH"
echo "$(date): Created backup at $BACKUP_PATH"
fi
# Restore from remote
litestream restore -o "$DB_PATH" "$REMOTE_URL"
if [ $? -eq 0 ]; then
echo "$(date): Successfully synced database from remote master"
# Remove backup on success
rm -f "$BACKUP_PATH"
else
echo "$(date): ERROR: Failed to sync from remote master"
# Restore backup on failure
if [ -f "$BACKUP_PATH" ]; then
mv "$BACKUP_PATH" "$DB_PATH"
echo "$(date): Restored backup"
fi
exit 1
fi
```
--------------------------------------------------------------------------------
/src/mcp_memory_service/discovery/__init__.py:
--------------------------------------------------------------------------------
```python
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
mDNS service discovery module for MCP Memory Service.
This module provides mDNS service advertisement and discovery capabilities
for the MCP Memory Service HTTP/HTTPS interface.
"""
from .mdns_service import ServiceAdvertiser, ServiceDiscovery
from .client import DiscoveryClient
__all__ = ['ServiceAdvertiser', 'ServiceDiscovery', 'DiscoveryClient']
```
--------------------------------------------------------------------------------
/scripts/development/uv-lock-merge.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
# Git merge driver for uv.lock files
# Automatically resolves conflicts and regenerates the lock file
# Arguments from git:
# %O = ancestor's version
# %A = current version
# %B = other version
# %L = conflict marker length
# %P = path to file
ANCESTOR="$1"
CURRENT="$2"
OTHER="$3"
MARKER_LENGTH="$4"
FILE_PATH="$5"  # must not be named PATH: that would clobber the shell's command lookup path
echo "Auto-resolving uv.lock conflict by regenerating lock file..."
# Accept the incoming version first (this resolves the conflict)
cp "$OTHER" "$FILE_PATH"
# Check if uv is available
if command -v uv >/dev/null 2>&1; then
    echo "Running uv sync to regenerate lock file..."
    # Regenerate the lock file based on pyproject.toml
    if uv sync --quiet; then
        echo "✓ uv.lock regenerated successfully"
        exit 0
    else
        echo "⚠ Warning: uv sync failed, using incoming version"
        exit 0
    fi
else
    echo "⚠ Warning: uv not found, using incoming version"
    exit 0
fi
```
--------------------------------------------------------------------------------
/archive/deployment/deploy_fastmcp_fixed.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
echo "🚀 Deploying Fixed FastMCP Server v4.0.0-alpha.1..."
# Stop current service
echo "⏹️ Stopping current service..."
sudo systemctl stop mcp-memory
# Install the fixed FastMCP service configuration
echo "📝 Installing fixed FastMCP service configuration..."
sudo cp /tmp/fastmcp-server-fixed.service /etc/systemd/system/mcp-memory.service
# Reload systemd daemon
echo "🔄 Reloading systemd daemon..."
sudo systemctl daemon-reload
# Start the FastMCP server
echo "▶️ Starting FastMCP server..."
sudo systemctl start mcp-memory
# Wait a moment for startup
sleep 3
# Check status
echo "🔍 Checking service status..."
sudo systemctl status mcp-memory --no-pager
echo ""
echo "📊 Service logs (last 10 lines):"
sudo journalctl -u mcp-memory -n 10 --no-pager
echo ""
echo "✅ FastMCP Server deployment complete!"
echo "🔗 Native MCP Protocol should be available on port 8000"
echo "📋 Monitor logs: sudo journalctl -u mcp-memory -f"
```
--------------------------------------------------------------------------------
/archive/development/test_fastmcp.py:
--------------------------------------------------------------------------------
```python
#!/usr/bin/env python3
"""Simple test of FastMCP server structure for memory service."""
import sys
import os
from pathlib import Path
# Add src to path
sys.path.insert(0, 'src')
from mcp.server.fastmcp import FastMCP
# Create a simple FastMCP server for testing
mcp = FastMCP("Test Memory Service")
@mcp.tool()
def test_store_memory(content: str, tags: list[str] | None = None) -> dict:
"""Test memory storage function."""
return {
"success": True,
"message": f"Stored: {content}",
"tags": tags or []
}
@mcp.tool()
def test_health() -> dict:
"""Test health check."""
return {
"status": "healthy",
"version": "4.0.0-alpha.1"
}
if __name__ == "__main__":
print("FastMCP Memory Service Test")
print("Server configured with basic tools")
print("Available tools:")
print("- test_store_memory")
print("- test_health")
print("\nTo run server: mcp.run(transport='streamable-http')")
```
--------------------------------------------------------------------------------
/src/mcp_memory_service/sync/__init__.py:
--------------------------------------------------------------------------------
```python
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Database synchronization module for MCP Memory Service.
This module provides tools for synchronizing SQLite-vec databases across
multiple machines using JSON export/import and Litestream replication.
"""
from .exporter import MemoryExporter
from .importer import MemoryImporter
from .litestream_config import LitestreamManager
__all__ = ['MemoryExporter', 'MemoryImporter', 'LitestreamManager']
```
--------------------------------------------------------------------------------
/docs/images/dashboard-placeholder.md:
--------------------------------------------------------------------------------
```markdown
# Dashboard Screenshot Placeholder
This directory will contain screenshots of the MCP Memory Service dashboard.
## v3.3.0 Dashboard Features
The new dashboard includes:
- **Modern Design**: Gradient backgrounds with professional card layout
- **Live Statistics**: Real-time server metrics and memory counts
- **Interactive Endpoints**: Organized API documentation with hover effects
- **Tech Stack Badges**: Visual representation of FastAPI, SQLite-vec, PyTorch, etc.
- **Responsive Layout**: Works on desktop and mobile devices
- **Auto-Refresh**: Stats update every 30 seconds
## Access URLs
- Dashboard: http://localhost:8000
- mDNS: http://mcp-memory-service.local:8000
- API Docs: http://localhost:8000/api/docs
- ReDoc: http://localhost:8000/api/redoc
## Screenshot Instructions
To capture the dashboard:
1. Ensure the HTTP server is running
2. Open browser to http://localhost:8000
3. Wait for stats to load (shows actual memory count)
4. Take full-page screenshot
5. Save as `dashboard-v3.3.0.png` in this directory
```
--------------------------------------------------------------------------------
/tests/unit/test_import.py:
--------------------------------------------------------------------------------
```python
#!/usr/bin/env python3
"""
Test script to verify the memory service can be imported and run.
"""
import sys
import os
# Add the src directory to the Python path
script_dir = os.path.dirname(os.path.abspath(__file__))
src_dir = os.path.join(script_dir, "src")
sys.path.insert(0, src_dir)
try:
from mcp_memory_service.server import main
print("SUCCESS: Successfully imported mcp_memory_service.server.main")
# Test basic configuration
from mcp_memory_service.config import (
SERVER_NAME,
SERVER_VERSION,
STORAGE_BACKEND,
DATABASE_PATH
)
print(f"SUCCESS: Server name: {SERVER_NAME}")
print(f"SUCCESS: Server version: {SERVER_VERSION}")
print(f"SUCCESS: Storage backend: {STORAGE_BACKEND}")
print(f"SUCCESS: Database path: {DATABASE_PATH}")
print("SUCCESS: All imports successful - the memory service is ready to use!")
except ImportError as e:
print(f"ERROR: Import failed: {e}")
sys.exit(1)
except Exception as e:
print(f"ERROR: Error: {e}")
sys.exit(1)
```
--------------------------------------------------------------------------------
/archive/docs-removed-2025-08-23/development/CLEANUP_README.md:
--------------------------------------------------------------------------------
```markdown
# MCP-MEMORY-SERVICE Cleanup and Organization
This branch contains cleanup and reorganization changes for the MCP-MEMORY-SERVICE project.
## Changes Implemented
1. **Code Organization**
- Restructured test files into proper directories
- Organized documentation into a docs/ directory
- Archived old backup files
2. **Documentation Updates**
- Updated CHANGELOG.md with v1.2.0 entries
- Created comprehensive documentation structure
- Added READMEs for each directory
3. **Test Infrastructure**
- Created proper pytest configuration
- Added fixtures for common test scenarios
- Organized tests by type (unit, integration, performance)
## Running the Cleanup Script
To apply these changes, run:
```bash
cd C:\REPOSITORIES\mcp-memory-service
python scripts/cleanup_organize.py
```
## Testing on Different Hardware
After organization is complete, create a hardware testing branch:
```bash
git checkout -b test/hardware-validation
```
The changes have been tracked in the memory system with the tag `memory-driven-development`.
```
--------------------------------------------------------------------------------
/scripts/server/start_http_server.sh:
--------------------------------------------------------------------------------
```bash
#!/usr/bin/env bash
# Start the MCP Memory Service HTTP server in the background on Unix/macOS
set -e
echo "Starting MCP Memory Service HTTP server..."
# Check if server is already running
if uv run python scripts/server/check_http_server.py -q; then
echo "✅ HTTP server is already running!"
uv run python scripts/server/check_http_server.py -v
exit 0
fi
# Start the server in the background
nohup uv run python scripts/server/run_http_server.py > /tmp/mcp-http-server.log 2>&1 &
SERVER_PID=$!
echo "Server started with PID: $SERVER_PID"
echo "Logs available at: /tmp/mcp-http-server.log"
# Wait up to 5 seconds for the server to start
for i in {1..5}; do
if uv run python scripts/server/check_http_server.py -q; then
break
fi
sleep 1
done
# Check if it started successfully
if uv run python scripts/server/check_http_server.py -v; then
echo ""
echo "✅ HTTP server started successfully!"
echo "PID: $SERVER_PID"
else
echo ""
echo "⚠️ Server may still be starting... Check logs at /tmp/mcp-http-server.log"
fi
```
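The script above polls `check_http_server.py` once per second for up to five seconds. The same wait loop can be sketched in Python; note this sketch only polls for a TCP connection, whereas the real `check_http_server.py` presumably issues an actual HTTP health request:

```python
import socket
import time

def wait_for_server(host: str = "127.0.0.1", port: int = 8000,
                    timeout: float = 5.0) -> bool:
    """Poll until a TCP connection to (host, port) succeeds or the
    timeout elapses. Returns True if the server came up in time."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.5)  # not listening yet; retry shortly
    return False
```

Using `time.monotonic()` rather than wall-clock time keeps the deadline immune to system clock adjustments during the wait.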
--------------------------------------------------------------------------------
/claude-hooks/simple-test.js:
--------------------------------------------------------------------------------
```javascript
#!/usr/bin/env node
const { AdaptivePatternDetector } = require('./utilities/adaptive-pattern-detector');
async function simpleTest() {
const detector = new AdaptivePatternDetector({ sensitivity: 0.7 });
const testCases = [
{ message: "What did we decide about the authentication approach?", shouldTrigger: true },
{ message: "Remind me how we handled user sessions", shouldTrigger: true },
{ message: "Remember when we discussed the database schema?", shouldTrigger: true },
{ message: "Just implementing a new feature", shouldTrigger: false }
];
for (const testCase of testCases) {
const result = await detector.detectPatterns(testCase.message);
const actualTrigger = result.triggerRecommendation;
console.log(`Message: "${testCase.message}"`);
console.log(`Expected: ${testCase.shouldTrigger}, Actual: ${actualTrigger}`);
console.log(`Confidence: ${result.confidence}`);
console.log(`Matches: ${result.matches.length}`);
console.log('---');
}
}
simpleTest().catch(console.error);
```
--------------------------------------------------------------------------------
/scripts/sync/litestream/sync_from_remote_noconfig.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
# Sync script to pull latest database from remote master (no config file)
DB_PATH="/Users/hkr/Library/Application Support/mcp-memory/sqlite_vec.db"
REMOTE_URL="http://10.0.1.30:8080/mcp-memory"
BACKUP_PATH="/Users/hkr/Library/Application Support/mcp-memory/sqlite_vec.db.backup"
echo "$(date): Starting sync from remote master..."
# Create backup of current database
if [ -f "$DB_PATH" ]; then
cp "$DB_PATH" "$BACKUP_PATH"
echo "$(date): Created backup at $BACKUP_PATH"
fi
# Restore from remote (no config file)
litestream restore -o "$DB_PATH" "$REMOTE_URL"
if [ $? -eq 0 ]; then
echo "$(date): Successfully synced database from remote master"
# Remove backup on success
rm -f "$BACKUP_PATH"
# Show database info
echo "$(date): Database size: $(du -h "$DB_PATH" | cut -f1)"
echo "$(date): Database modified: $(stat -f "%Sm" "$DB_PATH")"
else
echo "$(date): ERROR: Failed to sync from remote master"
# Restore backup on failure
if [ -f "$BACKUP_PATH" ]; then
mv "$BACKUP_PATH" "$DB_PATH"
echo "$(date): Restored backup"
fi
exit 1
fi
```
--------------------------------------------------------------------------------
/scripts/development/fix_mdns.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
echo "=== Fixing mDNS Configuration ==="
echo "1. Stopping any conflicting processes..."
# Kill the old process that might be interfering
pkill -f "/home/hkr/repositories/mcp-memory-service/.venv/bin/memory"
echo "2. Stopping systemd service..."
sudo systemctl stop mcp-memory
echo "3. Updating systemd service configuration..."
sudo cp mcp-memory.service /etc/systemd/system/
sudo chmod 644 /etc/systemd/system/mcp-memory.service
echo "4. Reloading systemd daemon..."
sudo systemctl daemon-reload
echo "5. Starting service with new configuration..."
sudo systemctl start mcp-memory
echo "6. Checking service status..."
sudo systemctl status mcp-memory --no-pager -l
echo ""
echo "7. Testing mDNS resolution..."
sleep 3
echo "Checking avahi browse:"
avahi-browse -t _http._tcp | grep memory
echo ""
echo "Testing memory.local resolution:"
avahi-resolve-host-name memory.local
echo ""
echo "Testing HTTPS access:"
curl -k -s https://memory.local:8000/api/health --connect-timeout 5 || echo "HTTPS test failed"
echo ""
echo "=== Fix Complete ==="
echo "If memory.local resolves and HTTPS works, you're all set!"
```
--------------------------------------------------------------------------------
/archive/deployment/deploy_mcp_v4.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
# Deploy FastAPI MCP Server v4.0.0-alpha.1
echo "🚀 Deploying FastAPI MCP Server v4.0.0-alpha.1..."
# Stop current service
echo "⏹️ Stopping current HTTP API service..."
sudo systemctl stop mcp-memory
# Update systemd service file
echo "📝 Updating systemd service configuration..."
sudo cp /tmp/mcp-memory-v4.service /etc/systemd/system/mcp-memory.service
# Reload systemd daemon
echo "🔄 Reloading systemd daemon..."
sudo systemctl daemon-reload
# Start the new MCP server
echo "▶️ Starting FastAPI MCP server..."
sudo systemctl start mcp-memory
# Check status
echo "🔍 Checking service status..."
sudo systemctl status mcp-memory --no-pager
echo ""
echo "✅ FastAPI MCP Server v4.0.0-alpha.1 deployment complete!"
echo ""
echo "🌐 Service Access:"
echo " - MCP Protocol: Available on port 8000"
echo " - Health Check: curl http://localhost:8000/health"
echo " - Service Logs: sudo journalctl -u mcp-memory -f"
echo ""
echo "🔧 Service Management:"
echo " - Status: sudo systemctl status mcp-memory"
echo " - Stop: sudo systemctl stop mcp-memory"
echo " - Start: sudo systemctl start mcp-memory"
```
--------------------------------------------------------------------------------
/src/mcp_memory_service/storage/__init__.py:
--------------------------------------------------------------------------------
```python
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .base import MemoryStorage
# Conditional imports based on available dependencies
__all__ = ['MemoryStorage']
try:
from .sqlite_vec import SqliteVecMemoryStorage
__all__.append('SqliteVecMemoryStorage')
except ImportError:
SqliteVecMemoryStorage = None
try:
from .cloudflare import CloudflareStorage
__all__.append('CloudflareStorage')
except ImportError:
CloudflareStorage = None
try:
from .hybrid import HybridMemoryStorage
__all__.append('HybridMemoryStorage')
except ImportError:
HybridMemoryStorage = None
```
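The module above builds `__all__` dynamically so that backends whose dependencies are missing simply disappear from the public API instead of crashing the import. The same optional-dependency pattern, demonstrated self-contained with a stdlib module standing in for a backend's dependency (the stand-in names here are illustrative):

```python
# Backends register themselves only if their dependency imports cleanly.
available_backends = ["MemoryStorage"]

try:
    import sqlite3  # stands in for the sqlite-vec backend's dependency
    available_backends.append("SqliteVecMemoryStorage")
except ImportError:
    pass  # backend stays unavailable; callers check membership or None

try:
    import nonexistent_cloud_sdk  # a dependency that is not installed
    available_backends.append("CloudflareStorage")
except ImportError:
    pass
```

Callers can then feature-test with `'SqliteVecMemoryStorage' in available_backends` (or, in the real module, check whether the class is `None`) before instantiating a backend.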
--------------------------------------------------------------------------------
/scripts/backup/export_distributable_memories.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
# Export distributable reference memories for sharing across local network
# Usage: ./export_distributable_memories.sh [output_file]
OUTPUT_FILE="${1:-mcp_reference_memories_$(date +%Y%m%d).json}"
MCP_ENDPOINT="https://10.0.1.30:8443/mcp"
API_KEY="test-key-123"
echo "Exporting distributable reference memories..."
echo "Output file: $OUTPUT_FILE"
curl -k -s -X POST "$MCP_ENDPOINT" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $API_KEY" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
      "name": "search_by_tag",
      "arguments": {
        "tags": ["distributable-reference"]
      }
    }
  }' | jq -r '.result.content[0].text' > "$OUTPUT_FILE"
if [ $? -eq 0 ]; then
    echo "✅ Export completed: $OUTPUT_FILE"
    echo "📊 Memory count: $(jq '. | length' "$OUTPUT_FILE" 2>/dev/null || echo "Unknown")"
    echo ""
    echo "To import to another MCP Memory Service:"
    echo "1. Copy $OUTPUT_FILE to target machine"
    echo "2. Use store_memory calls for each entry"
    echo "3. Update CLAUDE.md with new memory hashes"
else
    echo "❌ Export failed"
    exit 1
fi
```
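The follow-up steps the script prints (replaying each exported entry with `store_memory` calls) can be sketched in Python. The `tools/call` envelope mirrors the `search_by_tag` request in the export above; the exact `store_memory` argument shape is an assumption, not a confirmed schema:

```python
import json

def build_store_memory_request(entry, request_id):
    """Build one JSON-RPC tools/call payload for an exported memory entry.

    Assumes each entry carries a 'content' key and optional 'tags';
    the real export format may differ.
    """
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "store_memory",
            "arguments": {
                "content": entry["content"],
                "metadata": {"tags": entry.get("tags", [])},
            },
        },
    }

entry = {"content": "Reference note", "tags": ["distributable-reference"]}
print(json.dumps(build_store_memory_request(entry, 1), indent=2))
```

Each payload would then be POSTed to `$MCP_ENDPOINT` with the same `Authorization: Bearer` header the export script uses.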
--------------------------------------------------------------------------------
/.github/workflows/release.yml:
--------------------------------------------------------------------------------
```yaml
name: Release (Manual)

on:
  workflow_dispatch:

jobs:
  release:
    runs-on: ubuntu-latest
    concurrency: release
    permissions:
      id-token: write
      contents: write
      actions: write
      pull-requests: write
      repository-projects: write

    steps:
      - uses: actions/checkout@v3  # TODO: upgrade to actions/checkout@v4
        with:
          fetch-depth: 0
          token: ${{ secrets.GITHUB_TOKEN }}

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.9'  # setup-python installs a separate interpreter from the one python-semantic-release uses, which caused the earlier version-mismatch error

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install build hatchling python-semantic-release

      - name: Verify build module installation
        run: python -m pip show build

      - name: Build package
        run: python -m build

      - name: Python Semantic Release
        uses: python-semantic-release/[email protected]
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          verbosity: 2
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```
--------------------------------------------------------------------------------
/scripts/installation/setup_backup_cron.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
# Setup automated backups for MCP Memory Service
# Creates cron jobs for regular SQLite-vec database backups
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# This installer lives in scripts/installation/; the backup script lives in scripts/backup/
BACKUP_SCRIPT="$SCRIPT_DIR/../backup/backup_sqlite_vec.sh"
# Check if backup script exists
if [[ ! -f "$BACKUP_SCRIPT" ]]; then
    echo "Error: Backup script not found at $BACKUP_SCRIPT"
    exit 1
fi

# Make sure backup script is executable
chmod +x "$BACKUP_SCRIPT"

# Create cron job entry
CRON_ENTRY="0 2 * * * $BACKUP_SCRIPT > /tmp/mcp-backup.log 2>&1"

# Check if cron job already exists
if crontab -l 2>/dev/null | grep -q "$BACKUP_SCRIPT"; then
    echo "Backup cron job already exists. Current crontab:"
    crontab -l | grep "$BACKUP_SCRIPT"
else
    # Add cron job
    (crontab -l 2>/dev/null || true; echo "$CRON_ENTRY") | crontab -
    echo "Added daily backup cron job:"
    echo "$CRON_ENTRY"
fi
echo ""
echo "Backup automation setup complete!"
echo "- Daily backups at 2:00 AM"
echo "- Backup script: $BACKUP_SCRIPT"
echo "- Log file: /tmp/mcp-backup.log"
echo ""
echo "To check cron jobs: crontab -l"
echo "To remove cron job: crontab -l | grep -v backup_sqlite_vec.sh | crontab -"
```
--------------------------------------------------------------------------------
/scripts/sync/litestream/setup_local_litestream.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
# Setup script for Litestream replica on local macOS machine
set -e
echo "🔧 Setting up Litestream replica on local macOS..."
# Copy configuration to system location
echo "⚙️ Installing Litestream configuration..."
sudo mkdir -p /usr/local/etc
sudo cp litestream_replica_config.yml /usr/local/etc/litestream.yml
# Create log directory
sudo mkdir -p /var/log
sudo touch /var/log/litestream.log
sudo chmod 644 /var/log/litestream.log
# Install LaunchDaemon
echo "🚀 Installing LaunchDaemon..."
sudo cp deployment/io.litestream.replication.plist /Library/LaunchDaemons/
# Set permissions
sudo chown root:wheel /Library/LaunchDaemons/io.litestream.replication.plist
sudo chmod 644 /Library/LaunchDaemons/io.litestream.replication.plist
echo "✅ Local Litestream setup completed!"
echo ""
echo "Next steps:"
echo "1. Load service: sudo launchctl load /Library/LaunchDaemons/io.litestream.replication.plist"
echo "2. Start service: sudo launchctl start io.litestream.replication"
echo "3. Check status: litestream replicas -config /usr/local/etc/litestream.yml"
echo ""
echo "⚠️ Before starting the replica service, make sure the master is running on narrowbox.local"
```
--------------------------------------------------------------------------------
/docs/technical/tag-storage.md:
--------------------------------------------------------------------------------
```markdown
# Tag Storage Procedure
## File Structure Overview
```
mcp_memory_service/
├── tests/
│ └── test_tag_storage.py # Integration tests
├── scripts/
│ ├── validate_memories.py # Validation script
│ └── migrate_tags.py # Migration script
```
## Execution Steps
1. **Run Initial Validation**
```bash
python scripts/validate_memories.py
```
- Generates validation report of current state
2. **Run Integration Tests**
```bash
python tests/test_tag_storage.py
```
- Verifies functionality
3. **Execute Migration**
```bash
python scripts/migrate_tags.py
```
The script will:
- Create a backup automatically
- Run validation check
- Ask for confirmation before proceeding
- Perform migration
- Verify the migration
4. **Post-Migration Validation**
```bash
python scripts/validate_memories.py
```
- Confirms successful migration
## Monitoring Requirements
- Keep backup files for at least 7 days
- Monitor logs for any tag-related errors
- Run validation script daily for the first week
- Check search functionality with various tag formats
## Rollback Process
If issues are detected, use:
```bash
python scripts/migrate_tags.py --rollback
```
```
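The migration step above rewrites stored tags into a consistent format. A hypothetical sketch of the kind of normalization `migrate_tags.py` performs (comma-separated string to a deduplicated list); the actual rules in the script may differ:

```python
def normalize_tags(raw):
    """Normalize a stored tag field to a deduplicated list of strings.

    Illustrative only: accepts None, a comma-separated string, or an
    iterable of tags, and drops empties and case-insensitive duplicates.
    """
    if raw is None:
        return []
    parts = raw.split(",") if isinstance(raw, str) else list(raw)
    seen, result = set(), []
    for tag in (p.strip() for p in parts):
        if tag and tag.lower() not in seen:
            seen.add(tag.lower())
            result.append(tag)
    return result

print(normalize_tags("alpha, beta,,alpha"))  # ['alpha', 'beta']
```

Running the validation script before and after migration (steps 1 and 4) is what confirms every stored value round-trips through a rule like this without loss.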
--------------------------------------------------------------------------------
/scripts/maintenance/check_memory_types.py:
--------------------------------------------------------------------------------
```python
#!/usr/bin/env python3
"""Quick script to check memory types in local database."""
import sqlite3
from pathlib import Path
# Windows database path
db_path = Path.home() / "AppData/Local/mcp-memory/sqlite_vec.db"
if not db_path.exists():
    print(f"❌ Database not found at: {db_path}")
    exit(1)

conn = sqlite3.connect(db_path)
cursor = conn.cursor()

# Get memory type distribution
cursor.execute("""
    SELECT memory_type, COUNT(*) as count
    FROM memories
    GROUP BY memory_type
    ORDER BY count DESC
""")
results = cursor.fetchall()
total = sum(count for _, count in results)

print("\nMemory Type Distribution")
print("=" * 60)
print(f"Total memories: {total:,}")
print(f"Unique types: {len(results)}\n")
print(f"{'Memory Type':<40} {'Count':>8} {'%':>6}")
print("-" * 60)

for memory_type, count in results[:30]:  # Show top 30
    pct = (count / total) * 100 if total > 0 else 0
    type_display = memory_type if memory_type else "(empty/NULL)"
    print(f"{type_display:<40} {count:>8,} {pct:>5.1f}%")

if len(results) > 30:
    remaining = len(results) - 30
    remaining_count = sum(count for _, count in results[30:])
    print(f"\n... and {remaining} more types ({remaining_count:,} memories)")

conn.close()
```
--------------------------------------------------------------------------------
/scripts/utils/list-collections.py:
--------------------------------------------------------------------------------
```python
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from chromadb import HttpClient
def list_collections():
    try:
        # Connect to local ChromaDB
        client = HttpClient(host='localhost', port=8000)

        # List all collections
        collections = client.list_collections()

        print("\nFound Collections:")
        print("------------------")
        for collection in collections:
            print(f"Name: {collection.name}")
            print(f"Metadata: {collection.metadata}")
            print(f"Count: {collection.count()}")
            print("------------------")
    except Exception as e:
        print(f"Error connecting to local ChromaDB: {str(e)}")

if __name__ == "__main__":
    list_collections()
```
--------------------------------------------------------------------------------
/tests/unit/conftest.py:
--------------------------------------------------------------------------------
```python
"""
Shared test fixtures and helpers for unit tests.
"""
import tempfile
from pathlib import Path
from typing import List, Any, Optional
async def extract_chunks_from_temp_file(
    loader: Any,
    filename: str,
    content: str,
    encoding: str = 'utf-8',
    **extract_kwargs
) -> List[Any]:
    """
    Helper to extract chunks from a temporary file.

    Args:
        loader: Loader instance (CSVLoader, JSONLoader, etc.)
        filename: Name of the temporary file to create
        content: Content to write to the file
        encoding: File encoding (default: utf-8)
        **extract_kwargs: Additional keyword arguments to pass to extract_chunks()

    Returns:
        List of extracted chunks

    Example:
        >>> loader = CSVLoader(chunk_size=1000, chunk_overlap=200)
        >>> chunks = await extract_chunks_from_temp_file(
        ...     loader,
        ...     "test.csv",
        ...     "name,age\\nJohn,25",
        ...     delimiter=','
        ... )
    """
    with tempfile.TemporaryDirectory() as tmpdir:
        file_path = Path(tmpdir) / filename
        file_path.write_text(content, encoding=encoding)

        chunks = []
        async for chunk in loader.extract_chunks(file_path, **extract_kwargs):
            chunks.append(chunk)
        return chunks
```
--------------------------------------------------------------------------------
/test_version_checker.js:
--------------------------------------------------------------------------------
```javascript
#!/usr/bin/env node
/**
* Test script for version-checker.js utility
*/
const { getVersionInfo, formatVersionDisplay } = require('./claude-hooks/utilities/version-checker');
const CONSOLE_COLORS = {
    RESET: '\x1b[0m',
    BRIGHT: '\x1b[1m',
    DIM: '\x1b[2m',
    CYAN: '\x1b[36m',
    GREEN: '\x1b[32m',
    YELLOW: '\x1b[33m',
    GRAY: '\x1b[90m',
    RED: '\x1b[31m'
};

async function test() {
    console.log('Testing version-checker utility...\n');

    const projectRoot = __dirname;

    // Test with PyPI check
    console.log('1. Testing with PyPI check enabled:');
    const versionInfo = await getVersionInfo(projectRoot, { checkPyPI: true, timeout: 3000 });
    console.log(' Raw version info:', JSON.stringify(versionInfo, null, 2));

    const display = formatVersionDisplay(versionInfo, CONSOLE_COLORS);
    console.log(' Formatted:', display);

    console.log('\n2. Testing without PyPI check:');
    const localOnly = await getVersionInfo(projectRoot, { checkPyPI: false });
    console.log(' Raw version info:', JSON.stringify(localOnly, null, 2));

    const localDisplay = formatVersionDisplay(localOnly, CONSOLE_COLORS);
    console.log(' Formatted:', localDisplay);

    console.log('\n✅ Test completed!');
}

test().catch(error => {
    console.error('❌ Test failed:', error);
    process.exit(1);
});
```
--------------------------------------------------------------------------------
/docs/deployment/production-guide.md:
--------------------------------------------------------------------------------
```markdown
# MCP Memory Service - Production Setup
## 🚀 Quick Start
This MCP Memory Service is configured with **consolidation system**, **mDNS auto-discovery**, **HTTPS**, and **automatic startup**.
### **Installation**
```bash
# 1. Install the service
bash install_service.sh
# 2. Update configuration (if needed)
./update_service.sh
# 3. Start the service
sudo systemctl start mcp-memory
```
### **Verification**
```bash
# Check service status
sudo systemctl status mcp-memory
# Test API health
curl -k https://localhost:8000/api/health
# Verify mDNS discovery
avahi-browse -t _mcp-memory._tcp
```
## 📋 **Service Details**
- **Service Name**: `memory._mcp-memory._tcp.local.`
- **HTTPS Address**: https://localhost:8000
- **API Key**: `mcp-0b1ccbde2197a08dcb12d41af4044be6`
- **Auto-Startup**: ✅ Enabled
- **Consolidation**: ✅ Active
- **mDNS Discovery**: ✅ Working
## 🛠️ **Management**
```bash
./service_control.sh start # Start service
./service_control.sh stop # Stop service
./service_control.sh status # Show status
./service_control.sh logs # View logs
./service_control.sh health # Test API
```
## 📖 **Documentation**
- **Complete Guide**: `COMPLETE_SETUP_GUIDE.md`
- **Service Files**: `mcp-memory.service`, management scripts
- **Archive**: `archive/setup-development/` (development files)
**✅ Ready for production use!**
```
--------------------------------------------------------------------------------
/claude-hooks/statusline.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
# Claude Code Status Line Script
# Displays session memory context in status line
# Format: 🧠 8 (5 recent) memories | 📊 12 commits
# Path to session cache file
CACHE_FILE="$HOME/.claude/hooks/utilities/session-cache.json"
# ANSI color codes for styling
CYAN='\033[36m'
GREEN='\033[32m'
GRAY='\033[90m'
RESET='\033[0m'
# Check if cache file exists
if [ ! -f "$CACHE_FILE" ]; then
    # No cache file - session not started yet or hook failed
    echo ""
    exit 0
fi

# Read cache file and extract data
MEMORIES=$(jq -r '.memoriesLoaded // 0' "$CACHE_FILE" 2>/dev/null)
RECENT=$(jq -r '.recentCount // 0' "$CACHE_FILE" 2>/dev/null)
GIT_COMMITS=$(jq -r '.gitCommits // 0' "$CACHE_FILE" 2>/dev/null)

# Handle jq errors
if [ $? -ne 0 ]; then
    echo ""
    exit 0
fi

# Build status line output
STATUS=""

# Memory section
if [ "$MEMORIES" -gt 0 ]; then
    if [ "$RECENT" -gt 0 ]; then
        STATUS="${CYAN}🧠 ${MEMORIES}${RESET} ${GREEN}(${RECENT} recent)${RESET} memories"
    else
        STATUS="${CYAN}🧠 ${MEMORIES}${RESET} memories"
    fi
fi

# Git section
if [ "$GIT_COMMITS" -gt 0 ]; then
    if [ -n "$STATUS" ]; then
        STATUS="${STATUS} ${GRAY}|${RESET} ${CYAN}📊 ${GIT_COMMITS} commits${RESET}"
    else
        STATUS="${CYAN}📊 ${GIT_COMMITS} commits${RESET}"
    fi
fi

# Output the status line (only the first line of output is displayed)
echo -e "$STATUS"
```
--------------------------------------------------------------------------------
/src/mcp_memory_service/web/oauth/__init__.py:
--------------------------------------------------------------------------------
```python
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
OAuth 2.1 Dynamic Client Registration implementation for MCP Memory Service.
Provides OAuth 2.1 DCR endpoints to enable Claude Code HTTP transport integration.
This module implements:
- RFC 8414: OAuth 2.0 Authorization Server Metadata
- RFC 7591: OAuth 2.0 Dynamic Client Registration Protocol
- OAuth 2.1 security requirements and best practices
Key features:
- Dynamic client registration for automated OAuth client setup
- JWT-based access tokens with proper validation
- Authorization code flow with PKCE support
- Client credentials flow for server-to-server authentication
- Comprehensive scope-based authorization
- Backward compatibility with existing API key authentication
"""
__all__ = [
    "discovery",
    "models",
    "registration",
    "authorization",
    "middleware",
    "storage"
]
```
--------------------------------------------------------------------------------
/scripts/sync/litestream/setup_remote_litestream.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
# Setup script for Litestream master on remote server (narrowbox.local)
set -e
echo "🔧 Setting up Litestream master on remote server..."
# Install Litestream
echo "📦 Installing Litestream..."
curl -LsS https://github.com/benbjohnson/litestream/releases/latest/download/litestream-linux-amd64.tar.gz | tar -xzf -
sudo mv litestream /usr/local/bin/
sudo chmod +x /usr/local/bin/litestream
# Create directories
echo "📁 Creating directories..."
sudo mkdir -p /var/www/litestream/mcp-memory
sudo mkdir -p /backup/litestream/mcp-memory
# Set permissions
sudo chown -R www-data:www-data /var/www/litestream
sudo chmod -R 755 /var/www/litestream
# Copy configuration
echo "⚙️ Installing Litestream configuration..."
sudo cp litestream_master_config.yml /etc/litestream.yml
# Install systemd services
echo "🚀 Installing systemd services..."
sudo cp litestream.service /etc/systemd/system/
sudo cp litestream-http.service /etc/systemd/system/
# Reload systemd and enable services
sudo systemctl daemon-reload
sudo systemctl enable litestream.service
sudo systemctl enable litestream-http.service
echo "✅ Remote Litestream setup completed!"
echo ""
echo "Next steps:"
echo "1. Start services: sudo systemctl start litestream litestream-http"
echo "2. Check status: sudo systemctl status litestream litestream-http"
echo "3. Verify HTTP endpoint: curl http://localhost:8080/mcp-memory/"
```
--------------------------------------------------------------------------------
/tools/docker/docker-compose.yml:
--------------------------------------------------------------------------------
```yaml
version: '3.8'
# Docker Compose configuration for MCP protocol mode
# For use with MCP clients (Claude Desktop, VS Code extension, etc.)
# For HTTP/API mode, use docker-compose.http.yml instead
services:
  mcp-memory-service:
    build:
      context: ../..
      dockerfile: tools/docker/Dockerfile

    # Required for MCP protocol communication
    stdin_open: true
    tty: true

    volumes:
      # Single data directory for all storage
      - ./data:/app/data
      # Model cache (prevents re-downloading models on each restart)
      # Uncomment the following line to persist Hugging Face models
      # - ${HOME}/.cache/huggingface:/root/.cache/huggingface

    environment:
      # Mode selection
      - MCP_MODE=mcp
      # Storage configuration
      - MCP_MEMORY_STORAGE_BACKEND=sqlite_vec
      - MCP_MEMORY_SQLITE_PATH=/app/data/sqlite_vec.db
      - MCP_MEMORY_BACKUPS_PATH=/app/data/backups
      # Performance tuning
      - LOG_LEVEL=${LOG_LEVEL:-INFO}
      - MAX_RESULTS_PER_QUERY=10
      - SIMILARITY_THRESHOLD=0.7
      # Python configuration
      - PYTHONUNBUFFERED=1
      - PYTHONPATH=/app/src
      # Offline mode (uncomment if models are pre-cached and network is restricted)
      # - HF_HUB_OFFLINE=1
      # - TRANSFORMERS_OFFLINE=1

    # Use the unified entrypoint
    entrypoint: ["/usr/local/bin/docker-entrypoint-unified.sh"]
    restart: unless-stopped
```
--------------------------------------------------------------------------------
/scripts/testing/test-connection.py:
--------------------------------------------------------------------------------
```python
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from chromadb import HttpClient
def test_connection(port=8000):
    try:
        # Try to connect to local ChromaDB
        client = HttpClient(host='localhost', port=port)

        # Try a simple operation
        heartbeat = client.heartbeat()
        print(f"Successfully connected to ChromaDB on port {port}")
        print(f"Heartbeat: {heartbeat}")

        # List collections
        collections = client.list_collections()
        print("\nFound collections:")
        for collection in collections:
            print(f"- {collection.name} (count: {collection.count()})")
    except Exception as e:
        print(f"Error connecting to ChromaDB on port {port}: {str(e)}")

if __name__ == "__main__":
    # Try default port
    test_connection()

    # If the above fails, you might want to try other common ports:
    # test_connection(8080)
    # test_connection(9000)
```
--------------------------------------------------------------------------------
/docs/ROADMAP.md:
--------------------------------------------------------------------------------
```markdown
# Development Roadmap
**The official roadmap has moved to the Wiki for easier maintenance and community collaboration.**
📖 **[View Development Roadmap on Wiki](https://github.com/doobidoo/mcp-memory-service/wiki/13-Development-Roadmap)**
The Wiki version includes:
- ✅ Completed milestones (v8.0-v8.38)
- 🎯 Current focus (v8.39-v9.0 - Q1 2026)
- 🚀 Future enhancements (Q2 2026+)
- 🎯 Medium term vision (Q3-Q4 2026)
- 🌟 Long-term aspirations (2027+)
- 📊 Success metrics and KPIs
- 🤝 Community contribution opportunities
## Why the Wiki?
The Wiki provides several advantages for roadmap documentation:
- ✅ **Easier Updates**: No PR required for roadmap changes
- ✅ **Better Navigation**: Integrated with other wiki guides
- ✅ **Community Collaboration**: Lower barrier for community input
- ✅ **Rich Formatting**: Enhanced markdown features
- ✅ **Cleaner Repository**: Reduces noise in commit history
## For Active Development Tracking
The roadmap on the Wiki tracks strategic direction. For day-to-day development:
- **[GitHub Projects](https://github.com/doobidoo/mcp-memory-service/projects)** - Sprint planning and task boards
- **[Open Issues](https://github.com/doobidoo/mcp-memory-service/issues)** - Bug reports and feature requests
- **[Pull Requests](https://github.com/doobidoo/mcp-memory-service/pulls)** - Active code changes
- **[CHANGELOG.md](../CHANGELOG.md)** - Release history and completed features
---
**Maintainer**: @doobidoo
**Last Updated**: November 26, 2025
```
--------------------------------------------------------------------------------
/scripts/installation/setup_claude_mcp.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
# Setup script for Claude Code MCP configuration
echo "🔧 Setting up MCP Memory Service for Claude Code..."
echo "=================================================="
# Get the absolute path to the repository root (this script lives in scripts/installation/)
REPO_PATH="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)"
VENV_PYTHON="$REPO_PATH/venv/bin/python"
echo "Repository path: $REPO_PATH"
echo "Python path: $VENV_PYTHON"
# Check if virtual environment exists
if [ ! -f "$VENV_PYTHON" ]; then
    echo "❌ Virtual environment not found at: $VENV_PYTHON"
    echo "Please run: python -m venv venv && source venv/bin/activate && pip install -r requirements.txt"
    exit 1
fi
# Create MCP configuration
cat > "$REPO_PATH/mcp_server_config.json" << EOF
{
  "mcpServers": {
    "memory": {
      "command": "$VENV_PYTHON",
      "args": ["-m", "src.mcp_memory_service.server"],
      "cwd": "$REPO_PATH",
      "env": {
        "MCP_MEMORY_STORAGE_BACKEND": "sqlite_vec",
        "PYTHONPATH": "$REPO_PATH/src"
      }
    }
  }
}
EOF
echo "✅ Created MCP configuration: $REPO_PATH/mcp_server_config.json"
echo ""
echo "📋 Manual Configuration Steps:"
echo "1. Copy the configuration below"
echo "2. Add it to your Claude Code MCP settings"
echo ""
echo "Configuration to add:"
echo "====================="
cat "$REPO_PATH/mcp_server_config.json"
echo ""
echo "🚀 Alternative: Start server manually and use Claude Code normally"
echo " cd $REPO_PATH"
echo " source venv/bin/activate"
echo " export MCP_MEMORY_STORAGE_BACKEND=sqlite_vec"
echo " python -m src.mcp_memory_service.server"
```
--------------------------------------------------------------------------------
/scripts/run_memory_server.py:
--------------------------------------------------------------------------------
```python
#!/usr/bin/env python3
"""
Backward compatibility redirect to new location (v6.17.0+).
This stub ensures existing Claude Desktop configurations continue working
after the v6.17.0 script reorganization. The actual script has moved to
scripts/server/run_memory_server.py.
For best stability, consider using one of these approaches instead:
1. python -m mcp_memory_service.server (recommended)
2. uv run memory server
3. scripts/server/run_memory_server.py (direct path)
"""
import sys
import os
# Add informational notice (not a warning to avoid alarming users)
print("[INFO] Note: scripts/run_memory_server.py has moved to scripts/server/run_memory_server.py", file=sys.stderr)
print("[INFO] Consider using 'python -m mcp_memory_service.server' for better stability", file=sys.stderr)
print("[INFO] See https://github.com/doobidoo/mcp-memory-service for migration guide", file=sys.stderr)
# Execute the relocated script
script_dir = os.path.dirname(os.path.abspath(__file__))
new_script = os.path.join(script_dir, "server", "run_memory_server.py")
if os.path.exists(new_script):
    # Preserve the original __file__ context for the new script
    global_vars = {
        '__file__': new_script,
        '__name__': '__main__',
        'sys': sys,
        'os': os
    }
    with open(new_script, 'r', encoding='utf-8') as f:
        exec(compile(f.read(), new_script, 'exec'), global_vars)
else:
    print(f"[ERROR] Could not find {new_script}", file=sys.stderr)
    print("[ERROR] Please ensure you have the complete mcp-memory-service repository", file=sys.stderr)
    sys.exit(1)
```
--------------------------------------------------------------------------------
/src/mcp_memory_service/ingestion/__init__.py:
--------------------------------------------------------------------------------
```python
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Document Ingestion Module
Provides functionality to side-load documents into the memory database,
supporting multiple formats including PDF, text, and structured data.
This module enables users to pre-populate the vector database with
documentation, knowledge bases, and other content for semantic retrieval.
"""
from .base import DocumentLoader, DocumentChunk, IngestionResult
from .chunker import TextChunker
from .registry import get_loader_for_file, register_loader, SUPPORTED_FORMATS, is_supported_file
# Import loaders to trigger registration
# Order matters! Import SemtoolsLoader first, then specialized loaders
# This allows specialized loaders to override if semtools is unavailable
from . import text_loader
from . import semtools_loader
from . import pdf_loader
from . import json_loader
from . import csv_loader
__all__ = [
    'DocumentLoader',
    'DocumentChunk',
    'IngestionResult',
    'TextChunker',
    'get_loader_for_file',
    'register_loader',
    'SUPPORTED_FORMATS',
    'is_supported_file'
]
```
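The import-order note above works because the registry maps file extensions to loaders and lets later registrations override earlier ones. A stand-alone sketch of that dispatch pattern, with hypothetical loader classes in place of the real ones:

```python
_LOADERS = {}

def register_loader(extension, loader_cls):
    """Map a file extension to a loader class; later calls override."""
    _LOADERS[extension.lower()] = loader_cls

def get_loader_for_file(filename):
    """Return the registered loader for a filename, or None."""
    ext = filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    return _LOADERS.get(ext)

class GenericLoader:      # stands in for a catch-all loader, registered first
    pass

class PdfLoader:          # stands in for a specialized loader
    pass

register_loader("pdf", GenericLoader)
register_loader("txt", GenericLoader)
register_loader("pdf", PdfLoader)   # specialized registration wins for .pdf

print(get_loader_for_file("manual.pdf").__name__)  # PdfLoader
```

Importing the generic loader module first and the specialized modules afterward reproduces exactly this override order.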
--------------------------------------------------------------------------------
/scripts/run/start_sqlite_vec.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
# Quick start script for MCP Memory Service with SQLite-vec backend
echo "🚀 Starting MCP Memory Service with SQLite-vec backend..."
echo "=================================================="
# Check if virtual environment exists
if [ ! -d "venv" ]; then
    echo "❌ Virtual environment not found. Please run setup first."
    exit 1
fi
# Activate virtual environment
echo "📦 Activating virtual environment..."
source venv/bin/activate
# Set SQLite-vec backend
echo "🔧 Configuring SQLite-vec backend..."
export MCP_MEMORY_STORAGE_BACKEND=sqlite_vec
# Display configuration
echo "✅ Configuration:"
echo " Backend: $MCP_MEMORY_STORAGE_BACKEND"
echo " Database: ~/.local/share/mcp-memory/sqlite_vec.db"
echo " Python: $(which python)"
# Check key dependencies
echo ""
echo "🧪 Checking dependencies..."
python -c "
import sqlite_vec
import sentence_transformers
import mcp
print(' ✅ sqlite-vec available')
print(' ✅ sentence-transformers available')
print(' ✅ mcp available')
"
echo ""
echo "🎯 Ready! The MCP Memory Service is configured for sqlite-vec."
echo ""
echo "To start the server:"
echo " python -m src.mcp_memory_service.server"
echo ""
echo "🧪 Testing server startup..."
timeout 3 python -m src.mcp_memory_service.server 2>/dev/null || echo "✅ Server can start successfully!"
echo ""
echo "For Claude Code integration:"
echo " - The service will automatically use sqlite-vec"
echo " - Memory database: ~/.local/share/mcp-memory/sqlite_vec.db"
echo " - 75% less memory usage vs ChromaDB"
echo ""
echo "To test the setup:"
echo " python simple_sqlite_vec_test.py"
```
--------------------------------------------------------------------------------
/claude-hooks/debug-pattern-test.js:
--------------------------------------------------------------------------------
```javascript
#!/usr/bin/env node
/**
* Debug Pattern Detection
*/
const { AdaptivePatternDetector } = require('./utilities/adaptive-pattern-detector');
async function debugPatternDetection() {
    console.log('🔍 Debugging Pattern Detection');
    console.log('═'.repeat(50));

    const detector = new AdaptivePatternDetector({ sensitivity: 0.7 });
    const testMessage = "What did we decide about the authentication approach?";

    console.log(`\nTesting message: "${testMessage}"`);
    const result = await detector.detectPatterns(testMessage);

    console.log('\nResults:');
    console.log('- Matches found:', result.matches.length);
    console.log('- Confidence:', result.confidence);
    console.log('- Processing tier:', result.processingTier);
    console.log('- Trigger recommendation:', result.triggerRecommendation);

    if (result.matches.length > 0) {
        console.log('\nMatches:');
        result.matches.forEach((match, i) => {
            console.log(` ${i + 1}. Category: ${match.category}`);
            console.log(` Pattern: ${match.pattern}`);
            console.log(` Confidence: ${match.confidence}`);
            console.log(` Type: ${match.type}`);
        });
    }

    // Test the instant patterns directly
    console.log('\n🔍 Testing Instant Patterns Directly');
    const instantMatches = detector.detectInstantPatterns(testMessage);
    console.log('Instant matches:', instantMatches.length);
    instantMatches.forEach((match, i) => {
        console.log(` ${i + 1}. ${match.category}: ${match.confidence}`);
    });
}

debugPatternDetection().catch(console.error);
```
--------------------------------------------------------------------------------
/docs/development/todo-tracker.md:
--------------------------------------------------------------------------------
```markdown
# TODO Tracker
**Last Updated:** 2025-11-08 10:25:25
**Scan Directory:** src
**Total TODOs:** 5
## Summary
| Priority | Count | Description |
|----------|-------|-------------|
| CRITICAL (P0) | 1 | Security, data corruption, blocking bugs |
| HIGH (P1) | 2 | Performance, user-facing, incomplete features |
| MEDIUM (P2) | 2 | Code quality, optimizations, technical debt |
| LOW (P3) | 0 | Documentation, cosmetic, nice-to-haves |
---
## CRITICAL (P0)
- `src/mcp_memory_service/web/api/analytics.py:625` - Period filtering is not implemented, leading to incorrect analytics data.
## HIGH (P1)
- `src/mcp_memory_service/storage/cloudflare.py:185` - Lack of a fallback for embedding generation makes the service vulnerable to external API failures.
- `src/mcp_memory_service/web/api/manage.py:231` - Inefficient queries can cause significant performance bottlenecks, especially with large datasets.
## MEDIUM (P2)
- `src/mcp_memory_service/web/api/documents.py:592` - Using a deprecated FastAPI event handler; should be migrated to the modern `lifespan` context manager to reduce technical debt.
- `src/mcp_memory_service/web/api/analytics.py:213` - The `storage.get_stats()` method is missing a data point, leading to API inconsistency.
## LOW (P3)
*(None in this list)*
---
## How to Address
1. **CRITICAL**: Address immediately, block releases if necessary
2. **HIGH**: Schedule for current/next sprint
3. **MEDIUM**: Add to backlog, address in refactoring sprints
4. **LOW**: Address opportunistically or when touching related code
## Updating This Tracker
Run: `bash scripts/maintenance/scan_todos.sh`
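The internals of `scan_todos.sh` are not shown here; as a rough illustration of what such a scanner does, the sketch below (all names hypothetical, the real script may use different markers) greps Python sources for `TODO` comments and maps an optional `(P0)`–`(P3)` annotation onto the priority buckets above, defaulting unannotated TODOs to MEDIUM:

```python
import re
from pathlib import Path

# Matches "TODO(P1): message" as well as a bare "TODO: message".
PRIORITY_RE = re.compile(r"TODO(?:\((P[0-3])\))?:?\s*(.*)")

def scan_todos(root):
    """Minimal illustrative scanner; returns (path, line, priority, text) tuples."""
    found = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            m = PRIORITY_RE.search(line)
            if m:
                # Unannotated TODOs default to P2 (MEDIUM).
                found.append((str(path), lineno, m.group(1) or "P2", m.group(2)))
    return found
```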
```
--------------------------------------------------------------------------------
/scripts/backup/backup_sqlite_vec.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
# SQLite-vec Database Backup Script
# Creates timestamped backups of the SQLite-vec database
set -e
# Configuration
MEMORY_DIR="${MCP_MEMORY_BASE_DIR:-$HOME/.local/share/mcp-memory}"
BACKUP_DIR="$MEMORY_DIR/backups"
DATABASE_FILE="$MEMORY_DIR/sqlite_vec.db"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_NAME="sqlite_backup_$TIMESTAMP"
BACKUP_PATH="$BACKUP_DIR/$BACKUP_NAME"
# Check if database exists
if [[ ! -f "$DATABASE_FILE" ]]; then
echo "Error: SQLite database not found at $DATABASE_FILE"
exit 1
fi
# Create backup directory
mkdir -p "$BACKUP_PATH"
# Copy database files (main, WAL, and SHM files)
echo "Creating backup: $BACKUP_NAME"
cp "$DATABASE_FILE" "$BACKUP_PATH/" 2>/dev/null || true
cp "${DATABASE_FILE}-wal" "$BACKUP_PATH/" 2>/dev/null || true
cp "${DATABASE_FILE}-shm" "$BACKUP_PATH/" 2>/dev/null || true
# Get backup size
BACKUP_SIZE=$(du -sh "$BACKUP_PATH" | cut -f1)
# Count files backed up
FILE_COUNT=$(find "$BACKUP_PATH" -type f | wc -l)
# Create backup metadata
cat > "$BACKUP_PATH/backup_info.json" << EOF
{
"backup_name": "$BACKUP_NAME",
"timestamp": "$TIMESTAMP",
"source_database": "$DATABASE_FILE",
"backup_path": "$BACKUP_PATH",
"backup_size": "$BACKUP_SIZE",
"files_count": $FILE_COUNT,
"backend": "sqlite_vec",
"created_at": "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
}
EOF
echo "Backup completed successfully:"
echo " Name: $BACKUP_NAME"
echo " Path: $BACKUP_PATH"
echo " Size: $BACKUP_SIZE"
echo " Files: $FILE_COUNT"
# Cleanup old backups (keep last 7 days)
find "$BACKUP_DIR" -name "sqlite_backup_*" -type d -mtime +7 -exec rm -rf {} \; 2>/dev/null || true
exit 0
```
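The `find ... -mtime +7` cleanup at the end of the script expresses a simple retention rule: drop `sqlite_backup_*` directories whose modification time is older than seven days. A hedged Python sketch of the same rule (`prune_old_backups` is a hypothetical helper, not part of the service):

```python
import shutil
import time
from pathlib import Path

def prune_old_backups(backup_dir, keep_days=7):
    """Remove sqlite_backup_* directories older than keep_days, mirroring the find command."""
    cutoff = time.time() - keep_days * 86400
    removed = []
    for path in Path(backup_dir).glob("sqlite_backup_*"):
        if path.is_dir() and path.stat().st_mtime < cutoff:
            shutil.rmtree(path, ignore_errors=True)  # best-effort, like `|| true`
            removed.append(path.name)
    return removed
```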
--------------------------------------------------------------------------------
/docs/legacy/dual-protocol-hooks.md:
--------------------------------------------------------------------------------
```markdown
# Dual Protocol Memory Hooks (Legacy)
> **Note**: This feature has been superseded by Natural Memory Triggers v7.1.3+. This documentation is kept for reference only.
**Dual Protocol Memory Hooks** (v7.0.0+) provide intelligent memory awareness with automatic protocol detection:
## Configuration
```json
{
"memoryService": {
"protocol": "auto",
"preferredProtocol": "mcp",
"fallbackEnabled": true,
"http": {
"endpoint": "https://localhost:8443",
"apiKey": "your-api-key",
"healthCheckTimeout": 3000,
"useDetailedHealthCheck": true
},
"mcp": {
"serverCommand": ["uv", "run", "memory", "server", "-s", "cloudflare"],
"serverWorkingDir": "/Users/yourname/path/to/mcp-memory-service",
"connectionTimeout": 5000,
"toolCallTimeout": 10000
}
}
}
```
## Protocol Options
- `"auto"`: Smart detection (MCP → HTTP → Environment fallback)
- `"http"`: HTTP-only mode (web server at localhost:8443)
- `"mcp"`: MCP-only mode (direct server process)
## Benefits
- **Reliability**: Multiple connection methods ensure hooks always work
- **Performance**: MCP direct for speed, HTTP for stability
- **Flexibility**: Works with local development or remote deployments
- **Compatibility**: Full backward compatibility with existing configurations
## Migration to Natural Memory Triggers
If you're using Dual Protocol Hooks, consider migrating to Natural Memory Triggers v7.1.3+ which offers:
- 85%+ trigger accuracy
- Multi-tier performance optimization
- CLI management system
- Git-aware context integration
- Adaptive learning
See main CLAUDE.md for migration instructions.
```
--------------------------------------------------------------------------------
/tools/docker/docker-entrypoint-persistent.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
# Docker entrypoint script for MCP Memory Service - Persistent mode
# This script keeps the container running even when there's no active MCP client
set -e
echo "[INFO] Starting MCP Memory Service in Docker container (persistent mode)"
# Function to handle signals
handle_signal() {
echo "[INFO] Received signal, shutting down..."
if [ -n "$SERVER_PID" ]; then
kill -TERM $SERVER_PID 2>/dev/null || true
fi
exit 0
}
# Set up signal handlers
trap handle_signal SIGTERM SIGINT
# Create named pipes for stdio communication
FIFO_DIR="/tmp/mcp-memory-fifo"
mkdir -p "$FIFO_DIR"
STDIN_FIFO="$FIFO_DIR/stdin"
STDOUT_FIFO="$FIFO_DIR/stdout"
# Remove old pipes if they exist
rm -f "$STDIN_FIFO" "$STDOUT_FIFO"
# Create new named pipes
mkfifo "$STDIN_FIFO"
mkfifo "$STDOUT_FIFO"
echo "[INFO] Created named pipes for stdio communication"
# Start the server in the background with the named pipes
if [ "${UV_ACTIVE}" = "1" ]; then
echo "[INFO] Running with UV wrapper (persistent mode)"
python -u uv_wrapper.py < "$STDIN_FIFO" > "$STDOUT_FIFO" 2>&1 &
else
echo "[INFO] Running directly with Python (persistent mode)"
python -u -m mcp_memory_service.server < "$STDIN_FIFO" > "$STDOUT_FIFO" 2>&1 &
fi
SERVER_PID=$!
echo "[INFO] Server started with PID: $SERVER_PID"
# Keep the stdin pipe open to prevent the server from exiting
exec 3> "$STDIN_FIFO"
# Monitor the server process
while true; do
if ! kill -0 $SERVER_PID 2>/dev/null; then
echo "[ERROR] Server process exited unexpectedly"
exit 1
fi
# Send a keep-alive message every 30 seconds
echo "" >&3
sleep 30
done
```
--------------------------------------------------------------------------------
/examples/claude_desktop_config_windows.json:
--------------------------------------------------------------------------------
```json
{
"_comment": "Windows-specific MCP Memory Service configuration for Claude Desktop",
"_instructions": [
"Replace 'YOUR_USERNAME' with your actual Windows username",
"Replace 'C:\\REPOSITORIES\\mcp-memory-service' with your actual repository path",
"Supported backends: sqlite_vec, cloudflare, hybrid (ChromaDB removed in v8.0.0)"
],
"mcpServers": {
"memory": {
"command": "python",
"args": [
"C:/REPOSITORIES/mcp-memory-service/scripts/memory_offline.py"
],
"env": {
"PYTHONPATH": "C://REPOSITORIES//mcp-memory-service",
"_comment_backend_choice": "Choose one of the backend configurations below",
"_comment_sqlite_vec": "=== SQLite-vec Backend (Recommended for local storage) ===",
"MCP_MEMORY_STORAGE_BACKEND": "sqlite_vec",
"MCP_MEMORY_SQLITE_PATH": "C:\\Users\\YOUR_USERNAME\\AppData\\Local\\mcp-memory\\memory_migrated.db",
"MCP_MEMORY_BACKUPS_PATH": "C:\\Users\\YOUR_USERNAME\\AppData\\Local\\mcp-memory\\backups",
"_comment_offline": "=== Offline Mode Configuration (prevents PyTorch downloads) ===",
"HF_HOME": "C:\\Users\\YOUR_USERNAME\\.cache\\huggingface",
"TRANSFORMERS_CACHE": "C:\\Users\\YOUR_USERNAME\\.cache\\huggingface\\transformers",
"SENTENCE_TRANSFORMERS_HOME": "C:\\Users\\YOUR_USERNAME\\.cache\\torch\\sentence_transformers",
"HF_HUB_OFFLINE": "1",
"TRANSFORMERS_OFFLINE": "1",
"_comment_performance": "=== Performance Settings ===",
"PYTORCH_ENABLE_MPS_FALLBACK": "1",
"PYTORCH_CUDA_ALLOC_CONF": "max_split_size_mb:128"
}
}
}
}
```
--------------------------------------------------------------------------------
/scripts/testing/simple_test.py:
--------------------------------------------------------------------------------
```python
#!/usr/bin/env python3
"""
Simple test to use Homebrew Python's sentence-transformers
"""
import os
import sys
import subprocess
# Set environment variables for testing
os.environ["MCP_MEMORY_STORAGE_BACKEND"] = "sqlite_vec"
os.environ["MCP_MEMORY_SQLITE_PATH"] = os.path.expanduser("~/Library/Application Support/mcp-memory/sqlite_vec.db")
os.environ["MCP_MEMORY_BACKUPS_PATH"] = os.path.expanduser("~/Library/Application Support/mcp-memory/backups")
os.environ["MCP_MEMORY_USE_ONNX"] = "1"
# Get the Homebrew Python path
result = subprocess.run(
['brew', '--prefix', 'pytorch'],
capture_output=True,
text=True,
check=True
)
pytorch_prefix = result.stdout.strip()
homebrew_python_path = f"{pytorch_prefix}/libexec/bin/python3"
print(f"Using Homebrew Python: {homebrew_python_path}")
# Run a simple test with the Homebrew Python
test_script = """
import torch
import sentence_transformers
import sys
print(f"Python: {sys.version}")
print(f"PyTorch: {torch.__version__}")
print(f"sentence-transformers: {sentence_transformers.__version__}")
# Load a model
model = sentence_transformers.SentenceTransformer('paraphrase-MiniLM-L3-v2')
print(f"Model loaded: {model}")
# Encode a test sentence
test_text = "This is a test sentence for encoding with Homebrew PyTorch"
embedding = model.encode([test_text])
print(f"Embedding shape: {embedding.shape}")
print("Test successful\!")
"""
# Run the test with Homebrew Python
result = subprocess.run(
[homebrew_python_path, "-c", test_script],
capture_output=True,
text=True
)
print("=== STDOUT ===")
print(result.stdout)
if result.stderr:
print("=== STDERR ===")
print(result.stderr)
```
--------------------------------------------------------------------------------
/scripts/utils/test_groq_bridge.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
# Test script for Groq bridge integration
# Demonstrates usage without requiring API key
set -e
echo "=== Groq Bridge Integration Test ==="
echo ""
# Check if groq package is installed
echo "1. Checking Python groq package..."
if python3 -c "import groq" 2>/dev/null; then
echo " ✓ groq package installed"
else
echo " ✗ groq package NOT installed"
echo ""
echo "To install: pip install groq"
echo "Or: uv pip install groq"
exit 1
fi
# Check if API key is set
echo ""
echo "2. Checking GROQ_API_KEY environment variable..."
if [ -z "$GROQ_API_KEY" ]; then
echo " ✗ GROQ_API_KEY not set"
echo ""
echo "To set: export GROQ_API_KEY='your-api-key-here'"
echo "Get your API key from: https://console.groq.com/keys"
echo ""
echo "Skipping API test (would require valid key)"
else
echo " ✓ GROQ_API_KEY configured"
# Test the bridge with a simple query
echo ""
echo "3. Testing Groq bridge with sample query..."
echo ""
python3 scripts/utils/groq_agent_bridge.py \
"Rate the complexity of this Python function on a scale of 1-10: def add(a, b): return a + b" \
--json
fi
echo ""
echo "=== Integration Test Complete ==="
echo ""
echo "Usage examples:"
echo ""
echo "# Complexity analysis"
echo "python scripts/utils/groq_agent_bridge.py \"Analyze complexity 1-10: \$(cat file.py)\""
echo ""
echo "# Security scan"
echo "python scripts/utils/groq_agent_bridge.py \"Check for security issues: \$(cat file.py)\" --json"
echo ""
echo "# With custom model and temperature"
echo "python scripts/utils/groq_agent_bridge.py \"Your prompt\" --model llama2-70b-4096 --temperature 0.3"
```
--------------------------------------------------------------------------------
/tools/docker/DEPRECATED.md:
--------------------------------------------------------------------------------
```markdown
# Deprecated Docker Files
The following Docker files are deprecated as of v5.0.4 and will be removed in v6.0.0:
## Deprecated Files
### 1. `docker-compose.standalone.yml`
- **Replaced by**: `docker-compose.http.yml`
- **Reason**: Confusing name, mixed ChromaDB/SQLite configs, incorrect entrypoint for HTTP mode
- **Migration**: Use `docker-compose.http.yml` for HTTP/API access
### 2. `docker-compose.uv.yml`
- **Replaced by**: UV is now built into the main Dockerfile
- **Reason**: UV support should be in the image, not a separate compose file
- **Migration**: UV is automatically available in all configurations
### 3. `docker-compose.pythonpath.yml`
- **Replaced by**: Fixed PYTHONPATH in main Dockerfile
- **Reason**: PYTHONPATH fix belongs in Dockerfile, not compose variant
- **Migration**: All compose files now have correct PYTHONPATH=/app/src
### 4. `docker-entrypoint-persistent.sh`
- **Replaced by**: `docker-entrypoint-unified.sh`
- **Reason**: Overcomplicated, doesn't support HTTP mode, named pipes unnecessary
- **Migration**: Use unified entrypoint with MCP_MODE environment variable
## New Simplified Structure
Use one of these two configurations:
1. **MCP Protocol Mode** (for Claude Desktop, VS Code):
```bash
docker-compose up -d
```
2. **HTTP/API Mode** (for web access, REST API):
```bash
docker-compose -f docker-compose.http.yml up -d
```
## Timeline
- **v5.0.4**: Files marked as deprecated, new structure introduced
- **v5.1.0**: Warning messages added when using deprecated files
- **v6.0.0**: Deprecated files removed
## Credits
Thanks to Joe Esposito for identifying the Docker setup issues that led to this simplification.
```
--------------------------------------------------------------------------------
/src/mcp_memory_service/utils/hashing.py:
--------------------------------------------------------------------------------
```python
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import hashlib
import json
from typing import Any, Dict, Optional
def generate_content_hash(content: str, metadata: Optional[Dict[str, Any]] = None) -> str:
"""
Generate a unique hash for content and metadata.
This improved version ensures consistent hashing by:
1. Normalizing content (strip whitespace, lowercase)
2. Sorting metadata keys
3. Using a consistent JSON serialization
"""
# Normalize content
normalized_content = content.strip().lower()
# Create hash content with normalized content
hash_content = normalized_content
# Add metadata if present
if metadata:
# Filter out timestamp and dynamic fields
static_metadata = {
k: v for k, v in metadata.items()
if k not in ['timestamp', 'content_hash', 'embedding']
}
if static_metadata:
# Sort keys and use consistent JSON serialization
hash_content += json.dumps(static_metadata, sort_keys=True, ensure_ascii=True)
# Generate hash
return hashlib.sha256(hash_content.encode('utf-8')).hexdigest()
```
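The three normalization guarantees in the docstring above can be checked directly. This usage sketch restates the function so it runs standalone, then asserts that whitespace, case, metadata key order, and dynamic fields like `timestamp` do not change the hash:

```python
import hashlib
import json

def generate_content_hash(content, metadata=None):
    # Mirrors the function above: normalize content, filter dynamic fields, sort keys.
    hash_content = content.strip().lower()
    if metadata:
        static = {k: v for k, v in metadata.items()
                  if k not in ('timestamp', 'content_hash', 'embedding')}
        if static:
            hash_content += json.dumps(static, sort_keys=True, ensure_ascii=True)
    return hashlib.sha256(hash_content.encode('utf-8')).hexdigest()

# Whitespace and case do not change the hash...
a = generate_content_hash("  Hello World  ")
b = generate_content_hash("hello world")
assert a == b and len(a) == 64

# ...and neither does metadata key order; 'timestamp' is filtered out entirely.
c = generate_content_hash("note", {"project": "x", "source": "m", "timestamp": 1})
d = generate_content_hash("note", {"source": "m", "project": "x"})
assert c == d
```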
--------------------------------------------------------------------------------
/src/mcp_memory_service/consolidation/__init__.py:
--------------------------------------------------------------------------------
```python
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Dream-inspired memory consolidation system.
This module implements autonomous memory consolidation inspired by human cognitive
processes during sleep cycles, featuring exponential decay scoring, creative
association discovery, semantic compression, and controlled forgetting.
"""
from .base import ConsolidationBase
from .decay import ExponentialDecayCalculator
from .associations import CreativeAssociationEngine
from .clustering import SemanticClusteringEngine
from .compression import SemanticCompressionEngine
from .forgetting import ControlledForgettingEngine
from .consolidator import DreamInspiredConsolidator
from .scheduler import ConsolidationScheduler
from .health import ConsolidationHealthMonitor, HealthStatus, HealthMetric, HealthAlert
__all__ = [
'ConsolidationBase',
'ExponentialDecayCalculator',
'CreativeAssociationEngine',
'SemanticClusteringEngine',
'SemanticCompressionEngine',
'ControlledForgettingEngine',
'DreamInspiredConsolidator',
'ConsolidationScheduler',
'ConsolidationHealthMonitor',
'HealthStatus',
'HealthMetric',
'HealthAlert'
]
```
--------------------------------------------------------------------------------
/claude-hooks/config.template.json:
--------------------------------------------------------------------------------
```json
{
"memoryService": {
"endpoint": "https://your-server:8443",
"apiKey": "your-api-key-here",
"defaultTags": ["claude-code", "auto-generated"],
"maxMemoriesPerSession": 8,
"enableSessionConsolidation": true
},
"projectDetection": {
"gitRepository": true,
"packageFiles": ["package.json", "pyproject.toml", "Cargo.toml", "go.mod", "pom.xml"],
"frameworkDetection": true,
"languageDetection": true,
"confidenceThreshold": 0.3
},
"memoryScoring": {
"weights": {
"timeDecay": 0.3,
"tagRelevance": 0.4,
"contentRelevance": 0.2,
"typeBonus": 0.1
},
"minRelevanceScore": 0.3,
"timeDecayRate": 0.1
},
"contextFormatting": {
"includeProjectSummary": true,
"includeRelevanceScores": false,
"groupByCategory": true,
"maxContentLength": 200,
"includeTimestamps": true
},
"sessionAnalysis": {
"extractTopics": true,
"extractDecisions": true,
"extractInsights": true,
"extractCodeChanges": true,
"extractNextSteps": true,
"minSessionLength": 100,
"minConfidence": 0.1
},
"hooks": {
"sessionStart": {
"enabled": true,
"timeout": 10000,
"priority": "high"
},
"sessionEnd": {
"enabled": true,
"timeout": 15000,
"priority": "normal"
},
"topicChange": {
"enabled": false,
"timeout": 5000,
"priority": "low"
}
},
"output": {
"verbose": true,
"showMemoryDetails": false,
"showProjectDetails": true,
"showScoringDetails": false,
"cleanMode": false
},
"logging": {
"level": "info",
"enableDebug": false,
"logToFile": false,
"logFilePath": "./claude-hooks.log"
}
}
```
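The `memoryScoring` weights in this template suggest a weighted-sum relevance score. The actual formula lives in the hooks' scoring code, so the following is only a sketch under that assumption: exponential time decay driven by `timeDecayRate`, with the four components combined using the template's weights and results filtered against `minRelevanceScore`:

```python
import math

# Values taken from the config template above.
WEIGHTS = {"timeDecay": 0.3, "tagRelevance": 0.4, "contentRelevance": 0.2, "typeBonus": 0.1}
TIME_DECAY_RATE = 0.1
MIN_RELEVANCE_SCORE = 0.3

def score_memory(age_days, tag_relevance, content_relevance, type_bonus):
    """Assumed shape: exponential decay on age, weighted sum over components in [0, 1]."""
    time_score = math.exp(-TIME_DECAY_RATE * age_days)
    return (WEIGHTS["timeDecay"] * time_score
            + WEIGHTS["tagRelevance"] * tag_relevance
            + WEIGHTS["contentRelevance"] * content_relevance
            + WEIGHTS["typeBonus"] * type_bonus)

fresh = score_memory(age_days=0, tag_relevance=1.0, content_relevance=0.5, type_bonus=0.0)
stale = score_memory(age_days=30, tag_relevance=1.0, content_relevance=0.5, type_bonus=0.0)
assert fresh > stale and fresh >= MIN_RELEVANCE_SCORE  # fresh = 0.3 + 0.4 + 0.1 = 0.8
```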
--------------------------------------------------------------------------------
/tools/docker/docker-entrypoint.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
# Docker entrypoint script for MCP Memory Service
set -e
echo "[INFO] Starting MCP Memory Service in Docker container"
# Function to handle signals
handle_signal() {
echo "[INFO] Received signal, shutting down..."
if [ -n "$SERVER_PID" ]; then
kill -TERM $SERVER_PID 2>/dev/null || true
fi
exit 0
}
# Set up signal handlers
trap handle_signal SIGTERM SIGINT
# Function to keep stdin alive
keep_stdin_alive() {
while true; do
# Send newline to stdin every 30 seconds to keep the pipe open
echo "" 2>/dev/null || break
sleep 30
done
}
# Check if running in standalone mode
if [ "${MCP_STANDALONE_MODE}" = "1" ]; then
echo "[INFO] Running in standalone mode"
exec /usr/local/bin/docker-entrypoint-persistent.sh "$@"
fi
# Check if UV_ACTIVE is set
if [ "${UV_ACTIVE}" = "1" ]; then
echo "[INFO] Running with UV wrapper"
# Start the keep-alive process in the background
keep_stdin_alive &
KEEPALIVE_PID=$!
# Run the server
python -u uv_wrapper.py "$@" &
SERVER_PID=$!
# Wait for the server process
wait $SERVER_PID
SERVER_EXIT_CODE=$?
# Clean up the keep-alive process
kill $KEEPALIVE_PID 2>/dev/null || true
exit $SERVER_EXIT_CODE
else
echo "[INFO] Running directly with Python"
# Start the keep-alive process in the background
keep_stdin_alive &
KEEPALIVE_PID=$!
# Run the server
python -u -m mcp_memory_service.server "$@" &
SERVER_PID=$!
# Wait for the server process
wait $SERVER_PID
SERVER_EXIT_CODE=$?
# Clean up the keep-alive process
kill $KEEPALIVE_PID 2>/dev/null || true
exit $SERVER_EXIT_CODE
fi
```
--------------------------------------------------------------------------------
/scripts/sync/litestream/resolve_conflicts.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
# Simple conflict resolution helper
STAGING_DB="/Users/hkr/Library/Application Support/mcp-memory/sqlite_vec_staging.db"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
if [ ! -f "$STAGING_DB" ]; then
echo -e "${RED}No staging database found${NC}"
exit 1
fi
# Get conflicts
CONFLICTS=$(sqlite3 "$STAGING_DB" "
SELECT id, content, staged_at, conflict_status
FROM staged_memories
WHERE conflict_status IN ('detected', 'push_failed')
ORDER BY staged_at DESC;
")
if [ -z "$CONFLICTS" ]; then
echo -e "${GREEN}No conflicts to resolve${NC}"
exit 0
fi
echo -e "${YELLOW}Found conflicts to resolve:${NC}"
echo ""
echo "$CONFLICTS" | while IFS='|' read -r id content staged_at status; do
echo -e "${RED}Conflict: $status${NC}"
echo -e "Content: ${content:0:80}..."
echo -e "Staged: $staged_at"
echo -e "ID: $id"
echo ""
echo "Actions:"
echo " 1. Keep and retry push"
echo " 2. Delete (abandon change)"
echo " 3. Skip for now"
echo ""
read -p "Choose action (1/2/3): " action
case $action in
1)
sqlite3 "$STAGING_DB" "
UPDATE staged_memories
SET conflict_status = 'none'
WHERE id = '$id';
"
echo -e "${GREEN}Marked for retry${NC}"
;;
2)
sqlite3 "$STAGING_DB" "DELETE FROM staged_memories WHERE id = '$id';"
echo -e "${YELLOW}Deleted${NC}"
;;
3)
echo -e "${YELLOW}Skipped${NC}"
;;
*)
echo -e "${YELLOW}Invalid choice, skipped${NC}"
;;
esac
echo ""
done
echo -e "${GREEN}Conflict resolution completed${NC}"
```
--------------------------------------------------------------------------------
/examples/memory_export_template.json:
--------------------------------------------------------------------------------
```json
{
"export_metadata": {
"source_machine": "example-hostname",
"export_timestamp": "2025-08-21T12:00:00.000000",
"total_memories": 3,
"database_path": "/path/to/sqlite_vec.db",
"platform": "Linux",
"python_version": "3.11.0",
"include_embeddings": false,
"filter_tags": null,
"exporter_version": "6.2.4"
},
"memories": [
{
"content": "MCP Memory Service is a Model Context Protocol server that provides semantic memory and persistent storage capabilities for Claude Desktop using SQLite-vec and sentence transformers.",
"content_hash": "example-hash-1234567890abcdef",
"tags": ["documentation", "project-overview"],
"created_at": 1692633600.0,
"updated_at": 1692633600.0,
"memory_type": "note",
"metadata": {
"source": "example-machine",
"project": "mcp-memory-service"
}
},
{
"content": "Key development commands: `uv run memory` to start server, `pytest tests/` for testing, `python install.py` for setup.",
"content_hash": "example-hash-abcdef1234567890",
"tags": ["commands", "development"],
"created_at": 1692634200.0,
"updated_at": 1692634200.0,
"memory_type": "reference",
"metadata": {
"source": "example-machine",
"category": "quick-reference"
}
},
{
"content": "SQLite-vec backend is now the default storage backend (v6.0+) offering fast performance and single-file database storage.",
"content_hash": "example-hash-fedcba0987654321",
"tags": ["architecture", "backend", "sqlite-vec"],
"created_at": 1692634800.0,
"updated_at": 1692634800.0,
"memory_type": "architectural-decision",
"metadata": {
"source": "example-machine",
"version": "v6.0.0"
}
}
]
}
```
--------------------------------------------------------------------------------
/docs/mastery/local-setup-and-run.md:
--------------------------------------------------------------------------------
```markdown
# MCP Memory Service — Local Setup and Run
Follow these steps to run the service locally, switch storage backends, and validate functionality.
## 1) Install Dependencies
Using uv (recommended):
```
uv sync
```
Using pip:
```
python -m venv .venv
source .venv/bin/activate # Windows: .venv\Scripts\activate
pip install -e .
```
If using SQLite-vec backend (recommended):
```
uv add sqlite-vec sentence-transformers torch
# or
pip install sqlite-vec sentence-transformers torch
```
## 2) Choose Storage Backend
SQLite-vec (default):
```
export MCP_MEMORY_STORAGE_BACKEND=sqlite_vec
# optional custom DB path
export MCP_MEMORY_SQLITE_PATH="$HOME/.local/share/mcp-memory/sqlite_vec.db"
```
ChromaDB (deprecated):
```
export MCP_MEMORY_STORAGE_BACKEND=chroma
export MCP_MEMORY_CHROMA_PATH="$HOME/.local/share/mcp-memory/chroma_db"
```
Cloudflare:
```
export MCP_MEMORY_STORAGE_BACKEND=cloudflare
export CLOUDFLARE_API_TOKEN=...
export CLOUDFLARE_ACCOUNT_ID=...
export CLOUDFLARE_VECTORIZE_INDEX=...
export CLOUDFLARE_D1_DATABASE_ID=...
```
## 3) Run the Server
Stdio MCP server (integrates with Claude Desktop):
```
uv run memory server
```
FastMCP HTTP server (for Claude Code / remote):
```
uv run mcp-memory-server
```
Configure Claude Desktop example (~/.claude/config.json):
```
{
"mcpServers": {
"memory": {
"command": "uv",
"args": ["--directory", "/path/to/mcp-memory-service", "run", "memory", "server"],
"env": { "MCP_MEMORY_STORAGE_BACKEND": "sqlite_vec" }
}
}
}
```
## 4) Verify Health and Basic Ops
CLI status:
```
uv run memory status
```
MCP tool flow (via client):
- store_memory → retrieve_memory → search_by_tag → delete_memory
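The tool flow above can be exercised against an in-memory stand-in. This is purely illustrative: real calls go through an MCP client, and `retrieve_memory` uses semantic similarity rather than the substring match used here.

```python
import hashlib

class InMemoryStore:
    """Illustrative stand-in for the MCP memory tool flow (not the real client)."""
    def __init__(self):
        self.memories = {}

    def store_memory(self, content, tags=()):
        h = hashlib.sha256(content.strip().lower().encode()).hexdigest()
        self.memories[h] = {"content": content, "tags": list(tags)}
        return h

    def retrieve_memory(self, query):
        # The real service ranks by embedding similarity; substring match here.
        return [m for m in self.memories.values() if query.lower() in m["content"].lower()]

    def search_by_tag(self, tag):
        return [m for m in self.memories.values() if tag in m["tags"]]

    def delete_memory(self, content_hash):
        return self.memories.pop(content_hash, None) is not None

store = InMemoryStore()
h = store.store_memory("Decided to use SQLite-vec as default backend", tags=["decision"])
assert store.retrieve_memory("sqlite-vec")
assert store.search_by_tag("decision")
assert store.delete_memory(h)
```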
## 5) Run Tests
```
pytest -q
# or
uv run pytest -q
```
See also: `docs/mastery/testing-guide.md` and `docs/sqlite-vec-backend.md`.
```
--------------------------------------------------------------------------------
/docs/integrations.md:
--------------------------------------------------------------------------------
```markdown
# MCP Memory Service Integrations
This document catalogs tools, utilities, and integrations that extend the functionality of the MCP Memory Service.
## Official Integrations
### [MCP Memory Dashboard](https://github.com/doobidoo/mcp-memory-dashboard) (still a work in progress)
A web-based dashboard for viewing, searching, and managing your MCP Memory Service data. The dashboard allows you to:
- Browse and search memories
- View memory metadata and tags
- Delete unwanted memories
- Perform semantic searches
- Monitor system health
## Community Integrations
### [Claude Memory Context](https://github.com/doobidoo/claude-memory-context)
A utility that enables Claude to start each conversation with awareness of the topics and important memories stored in your MCP Memory Service.
This tool:
- Queries your MCP memory service for recent and important memories
- Extracts topics and content summaries
- Formats this information into a structured context section
- Updates Claude project instructions automatically
The utility leverages Claude's project instructions feature without requiring any modifications to the MCP protocol. It can be automated to run periodically, ensuring Claude always has access to your latest memories.
See the [Claude Memory Context repository](https://github.com/doobidoo/claude-memory-context) for installation and usage instructions.
---
## Adding Your Integration
If you've built a tool or integration for the MCP Memory Service, we'd love to include it here. Please submit a pull request that adds your project to this document with:
1. The name of your integration (with link to repository)
2. A brief description (2-3 sentences)
3. A list of key features
4. Any installation notes or special requirements
All listed integrations should be functional, documented, and actively maintained.
```
--------------------------------------------------------------------------------
/tools/docker/docker-entrypoint-unified.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
# Unified Docker entrypoint script for MCP Memory Service
# Supports both MCP protocol mode and HTTP server mode
set -e
echo "[INFO] Starting MCP Memory Service in Docker container"
# Function to handle signals
handle_signal() {
echo "[INFO] Received signal, shutting down..."
if [ -n "$SERVER_PID" ]; then
kill -TERM $SERVER_PID 2>/dev/null || true
fi
exit 0
}
# Set up signal handlers
trap handle_signal SIGTERM SIGINT
# Determine mode based on environment variable
MODE="${MCP_MODE:-mcp}"
echo "[INFO] Running in $MODE mode"
if [ "$MODE" = "http" ] || [ "$MODE" = "api" ]; then
# HTTP Server Mode
echo "[INFO] Starting HTTP server with FastAPI/Uvicorn"
# Ensure we have the HTTP server file
if [ ! -f "/app/run_server.py" ]; then
echo "[ERROR] run_server.py not found. Please ensure it's copied in the Dockerfile"
exit 1
fi
# Start the HTTP server
exec python /app/run_server.py "$@"
elif [ "$MODE" = "mcp" ]; then
# MCP Protocol Mode (stdin/stdout)
echo "[INFO] Starting MCP protocol server (stdin/stdout communication)"
# Function to keep stdin alive
keep_stdin_alive() {
while true; do
# Send newline to stdin every 30 seconds to keep the pipe open
echo "" 2>/dev/null || break
sleep 30
done
}
# Start the keep-alive process in the background
keep_stdin_alive &
KEEPALIVE_PID=$!
# Run the MCP server
python -u -m mcp_memory_service.server "$@" &
SERVER_PID=$!
# Wait for the server process
wait $SERVER_PID
SERVER_EXIT_CODE=$?
# Clean up the keep-alive process
kill $KEEPALIVE_PID 2>/dev/null || true
exit $SERVER_EXIT_CODE
else
echo "[ERROR] Unknown mode: $MODE. Use 'mcp' for protocol mode or 'http' for API mode"
exit 1
fi
```
--------------------------------------------------------------------------------
/archive/setup-development/setup_consolidation_mdns.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
# Setup script for MCP Memory Service with Consolidation and mDNS
echo "Setting up MCP Memory Service with Consolidation and mDNS HTTPS..."
# Enable consolidation system
export MCP_CONSOLIDATION_ENABLED=true
# Configure consolidation settings
export MCP_DECAY_ENABLED=true
export MCP_RETENTION_CRITICAL=365
export MCP_RETENTION_REFERENCE=180
export MCP_RETENTION_STANDARD=30
export MCP_RETENTION_TEMPORARY=7
export MCP_ASSOCIATIONS_ENABLED=true
export MCP_ASSOCIATION_MIN_SIMILARITY=0.3
export MCP_ASSOCIATION_MAX_SIMILARITY=0.7
export MCP_ASSOCIATION_MAX_PAIRS=100
export MCP_CLUSTERING_ENABLED=true
export MCP_CLUSTERING_MIN_SIZE=5
export MCP_CLUSTERING_ALGORITHM=dbscan
export MCP_COMPRESSION_ENABLED=true
export MCP_COMPRESSION_MAX_LENGTH=500
export MCP_COMPRESSION_PRESERVE_ORIGINALS=true
export MCP_FORGETTING_ENABLED=true
export MCP_FORGETTING_RELEVANCE_THRESHOLD=0.1
export MCP_FORGETTING_ACCESS_THRESHOLD=90
# Set consolidation schedule (cron-like)
export MCP_SCHEDULE_DAILY="02:00"
export MCP_SCHEDULE_WEEKLY="SUN 03:00"
export MCP_SCHEDULE_MONTHLY="01 04:00"
# Configure mDNS multi-client server with HTTPS
export MCP_MDNS_ENABLED=true
export MCP_MDNS_SERVICE_NAME="memory"
export MCP_HTTPS_ENABLED=true
# HTTP server configuration
export MCP_HTTP_ENABLED=true
export MCP_HTTP_HOST=0.0.0.0
export MCP_HTTP_PORT=8000
# Storage backend
export MCP_MEMORY_STORAGE_BACKEND=sqlite_vec
# API security
export MCP_API_KEY="$(openssl rand -base64 32)"
echo "Configuration set! Environment variables:"
echo "- Consolidation enabled: $MCP_CONSOLIDATION_ENABLED"
echo "- mDNS enabled: $MCP_MDNS_ENABLED"
echo "- HTTPS enabled: $MCP_HTTPS_ENABLED"
echo "- Service name: $MCP_MDNS_SERVICE_NAME"
echo "- API Key generated: [SET]"
echo ""
echo "Starting MCP Memory Service HTTP server..."
# Activate virtual environment and start the server
source venv/bin/activate && python scripts/run_http_server.py
```
--------------------------------------------------------------------------------
/scripts/server/memory_offline.py:
--------------------------------------------------------------------------------
```python
#!/usr/bin/env python3
"""
Memory service launcher with forced offline mode.
This script sets offline mode BEFORE importing anything else.
"""
import os
import platform
import sys
def setup_offline_mode():
"""Setup offline mode environment variables BEFORE any imports."""
print("Setting up offline mode...", file=sys.stderr)
# Force offline mode
os.environ['HF_HUB_OFFLINE'] = '1'
os.environ['TRANSFORMERS_OFFLINE'] = '1'
# Configure cache paths for Windows
username = os.environ.get('USERNAME', os.environ.get('USER', ''))
if platform.system() == "Windows" and username:
hf_home = f"C:\\Users\\{username}\\.cache\\huggingface"
transformers_cache = f"C:\\Users\\{username}\\.cache\\huggingface\\transformers"
sentence_transformers_home = f"C:\\Users\\{username}\\.cache\\torch\\sentence_transformers"
else:
hf_home = os.path.expanduser("~/.cache/huggingface")
transformers_cache = os.path.expanduser("~/.cache/huggingface/transformers")
sentence_transformers_home = os.path.expanduser("~/.cache/torch/sentence_transformers")
# Set cache paths
os.environ['HF_HOME'] = hf_home
os.environ['TRANSFORMERS_CACHE'] = transformers_cache
os.environ['SENTENCE_TRANSFORMERS_HOME'] = sentence_transformers_home
print(f"HF_HUB_OFFLINE: {os.environ.get('HF_HUB_OFFLINE')}", file=sys.stderr)
print(f"HF_HOME: {os.environ.get('HF_HOME')}", file=sys.stderr)
# Add src to Python path
src_path = os.path.join(os.path.dirname(os.path.dirname(os.path.abspath(__file__))), 'src')
if src_path not in sys.path:
sys.path.insert(0, src_path)
if __name__ == "__main__":
# Setup offline mode FIRST
setup_offline_mode()
# Now import and run the memory server
print("Starting MCP Memory Service in offline mode...", file=sys.stderr)
from mcp_memory_service.server import main
main()
```
--------------------------------------------------------------------------------
/scripts/sync/litestream/staging_db_init.sql:
--------------------------------------------------------------------------------
```sql
-- Staging Database Schema for Offline Memory Changes
-- This database stores local changes when remote server is unavailable
-- Staged memories that need to be synchronized
CREATE TABLE IF NOT EXISTS staged_memories (
id TEXT PRIMARY KEY,
content TEXT NOT NULL,
content_hash TEXT NOT NULL,
tags TEXT, -- JSON array as string
metadata TEXT, -- JSON metadata as string
memory_type TEXT DEFAULT 'note',
operation TEXT NOT NULL CHECK (operation IN ('INSERT', 'UPDATE', 'DELETE')),
staged_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
original_created_at TIMESTAMP,
source_machine TEXT,
conflict_status TEXT DEFAULT 'none' CHECK (conflict_status IN ('none', 'detected', 'resolved'))
);
-- Sync status tracking
CREATE TABLE IF NOT EXISTS sync_status (
key TEXT PRIMARY KEY,
value TEXT,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
-- Index for performance
CREATE INDEX IF NOT EXISTS idx_staged_memories_hash ON staged_memories(content_hash);
CREATE INDEX IF NOT EXISTS idx_staged_memories_staged_at ON staged_memories(staged_at);
CREATE INDEX IF NOT EXISTS idx_staged_memories_operation ON staged_memories(operation);
-- Initialize sync status
INSERT OR REPLACE INTO sync_status (key, value) VALUES
('last_remote_sync', ''),
('last_local_sync', ''),
('staging_version', '1.0'),
('total_staged_changes', '0');
-- Triggers to maintain staged changes count
CREATE TRIGGER IF NOT EXISTS update_staged_count_insert
AFTER INSERT ON staged_memories
BEGIN
UPDATE sync_status
SET value = CAST((CAST(value AS INTEGER) + 1) AS TEXT),
updated_at = CURRENT_TIMESTAMP
WHERE key = 'total_staged_changes';
END;
CREATE TRIGGER IF NOT EXISTS update_staged_count_delete
AFTER DELETE ON staged_memories
BEGIN
UPDATE sync_status
SET value = CAST((CAST(value AS INTEGER) - 1) AS TEXT),
updated_at = CURRENT_TIMESTAMP
WHERE key = 'total_staged_changes';
END;
```
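The counter triggers above can be sanity-checked against a trimmed-down copy of the schema (illustrative only — this uses Python's built-in `sqlite3` with an in-memory database and keeps just the columns and triggers needed for the count):

```python
import sqlite3

# Minimal subset of the staging schema: the staged_memories table,
# the sync_status counter row, and the two count-maintenance triggers.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE staged_memories (
    id TEXT PRIMARY KEY,
    content TEXT NOT NULL,
    content_hash TEXT NOT NULL,
    operation TEXT NOT NULL CHECK (operation IN ('INSERT', 'UPDATE', 'DELETE'))
);
CREATE TABLE sync_status (
    key TEXT PRIMARY KEY,
    value TEXT,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
INSERT INTO sync_status (key, value) VALUES ('total_staged_changes', '0');
CREATE TRIGGER update_staged_count_insert AFTER INSERT ON staged_memories
BEGIN
    UPDATE sync_status
    SET value = CAST((CAST(value AS INTEGER) + 1) AS TEXT),
        updated_at = CURRENT_TIMESTAMP
    WHERE key = 'total_staged_changes';
END;
CREATE TRIGGER update_staged_count_delete AFTER DELETE ON staged_memories
BEGIN
    UPDATE sync_status
    SET value = CAST((CAST(value AS INTEGER) - 1) AS TEXT),
        updated_at = CURRENT_TIMESTAMP
    WHERE key = 'total_staged_changes';
END;
""")

# Stage two changes, then revert one; the counter should track both.
conn.execute("INSERT INTO staged_memories VALUES ('1', 'hello', 'h1', 'INSERT')")
conn.execute("INSERT INTO staged_memories VALUES ('2', 'world', 'h2', 'INSERT')")
conn.execute("DELETE FROM staged_memories WHERE id = '1'")
count = conn.execute(
    "SELECT value FROM sync_status WHERE key = 'total_staged_changes'"
).fetchone()[0]  # '1' (stored as TEXT, matching the schema)
```

Note that the count is stored as TEXT, so the triggers round-trip through `CAST` on every update; this matches the schema's string-typed `value` column.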
--------------------------------------------------------------------------------
/.github/workflows/claude.yml:
--------------------------------------------------------------------------------
```yaml
name: Claude Code
on:
issue_comment:
types: [created]
pull_request_review_comment:
types: [created]
issues:
types: [opened, assigned]
pull_request_review:
types: [submitted]
jobs:
claude:
if: |
(github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
(github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude')) ||
(github.event_name == 'pull_request_review' && contains(github.event.review.body, '@claude')) ||
(github.event_name == 'issues' && (contains(github.event.issue.body, '@claude') || contains(github.event.issue.title, '@claude')))
runs-on: ubuntu-latest
permissions:
contents: read
pull-requests: read
issues: read
id-token: write
actions: read # Required for Claude to read CI results on PRs
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 1
- name: Run Claude Code
id: claude
uses: anthropics/claude-code-action@v1
with:
claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
# This is an optional setting that allows Claude to read CI results on PRs
additional_permissions: |
actions: read
# Optional: Give a custom prompt to Claude. If this is not specified, Claude will perform the instructions specified in the comment that tagged it.
# prompt: 'Update the pull request description to include a summary of changes.'
# Optional: Add claude_args to customize behavior and configuration
# See https://github.com/anthropics/claude-code-action/blob/main/docs/usage.md
# or https://docs.claude.com/en/docs/claude-code/sdk#command-line for available options
# claude_args: '--model claude-opus-4-1-20250805 --allowed-tools Bash(gh pr:*)'
```
--------------------------------------------------------------------------------
/docs/guides/scripts.md:
--------------------------------------------------------------------------------
```markdown
# Scripts Documentation
This document provides an overview of the available scripts in the `scripts/` directory and their purposes.
## Essential Scripts
### Server Management
- `run_memory_server.py`: Main script to start the memory service server
```bash
python scripts/run_memory_server.py
```
### Environment Verification
- `verify_environment.py`: Verifies the installation environment and dependencies
```bash
python scripts/verify_environment.py
```
### Installation Testing
- `test_installation.py`: Tests the installation and basic functionality
```bash
python scripts/test_installation.py
```
### Memory Management
- `validate_memories.py`: Validates the integrity of stored memories
```bash
python scripts/validate_memories.py
```
- `repair_memories.py`: Repairs corrupted or invalid memories
```bash
python scripts/repair_memories.py
```
- `list-collections.py`: Lists all available memory collections
```bash
python scripts/list-collections.py
```
## Migration Scripts
- `mcp-migration.py`: Handles migration of MCP-related data
```bash
python scripts/mcp-migration.py
```
- `memory-migration.py`: Handles migration of memory data
```bash
python scripts/memory-migration.py
```
## Troubleshooting Scripts
- `verify_pytorch_windows.py`: Verifies PyTorch installation on Windows
```bash
python scripts/verify_pytorch_windows.py
```
- `verify_torch.py`: General PyTorch verification
```bash
python scripts/verify_torch.py
```
## Usage Notes
- Most scripts can be run directly with Python
- Some scripts may require specific environment variables to be set
- Always run verification scripts after installation or major updates
- Use migration scripts with caution and ensure backups are available
## Script Dependencies
- Python 3.10+
- Required packages listed in `requirements.txt`
- Some scripts may require additional dependencies listed in `requirements-migration.txt`
```
--------------------------------------------------------------------------------
/archive/setup-development/test_service.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
# Test script to debug service startup issues
echo "=== MCP Memory Service Debug Test ==="
# Set working directory
cd /home/hkr/repositories/mcp-memory-service
# Set environment variables (same as service)
export PATH=/home/hkr/repositories/mcp-memory-service/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
export PYTHONPATH=/home/hkr/repositories/mcp-memory-service/src
export MCP_CONSOLIDATION_ENABLED=true
export MCP_MDNS_ENABLED=true
export MCP_HTTPS_ENABLED=true
export MCP_MDNS_SERVICE_NAME="MCP Memory"
export MCP_HTTP_ENABLED=true
export MCP_HTTP_HOST=0.0.0.0
export MCP_HTTP_PORT=8000
export MCP_MEMORY_STORAGE_BACKEND=sqlite_vec
export MCP_API_KEY=mcp-0b1ccbde2197a08dcb12d41af4044be6
echo "Working directory: $(pwd)"
echo "Python executable: $(which python)"
echo "Virtual env Python: /home/hkr/repositories/mcp-memory-service/venv/bin/python"
# Check if venv Python exists
if [ -f "/home/hkr/repositories/mcp-memory-service/venv/bin/python" ]; then
echo "✅ Virtual environment Python exists"
else
echo "❌ Virtual environment Python missing!"
exit 1
fi
# Check if run_http_server.py exists
if [ -f "/home/hkr/repositories/mcp-memory-service/scripts/run_http_server.py" ]; then
echo "✅ Server script exists"
else
echo "❌ Server script missing!"
exit 1
fi
# Test Python import
echo "=== Testing Python imports ==="
/home/hkr/repositories/mcp-memory-service/venv/bin/python -c "
import sys
sys.path.insert(0, '/home/hkr/repositories/mcp-memory-service/src')
try:
from mcp_memory_service.web.app import app
print('✅ Web app import successful')
except Exception as e:
print(f'❌ Web app import failed: {e}')
sys.exit(1)
"
echo "=== Testing server startup (5 seconds) ==="
timeout 5s /home/hkr/repositories/mcp-memory-service/venv/bin/python /home/hkr/repositories/mcp-memory-service/scripts/run_http_server.py || echo "Server test completed"
echo "=== Debug test finished ==="
```
--------------------------------------------------------------------------------
/src/mcp_memory_service/web/dependencies.py:
--------------------------------------------------------------------------------
```python
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
FastAPI dependencies for the HTTP interface.
"""
import logging
from typing import Optional
from fastapi import HTTPException, Depends
from ..storage.base import MemoryStorage
from ..services.memory_service import MemoryService
logger = logging.getLogger(__name__)
# Global storage instance
_storage: Optional[MemoryStorage] = None
def set_storage(storage: MemoryStorage) -> None:
"""Set the global storage instance."""
global _storage
_storage = storage
def get_storage() -> MemoryStorage:
"""Get the global storage instance."""
if _storage is None:
raise HTTPException(status_code=503, detail="Storage not initialized")
return _storage
def get_memory_service(storage: MemoryStorage = Depends(get_storage)) -> MemoryService:
"""Get a MemoryService instance with the configured storage backend."""
return MemoryService(storage)
async def create_storage_backend() -> MemoryStorage:
"""
Create and initialize storage backend for web interface based on configuration.
Returns:
Initialized storage backend
"""
from ..config import DATABASE_PATH
from ..storage.factory import create_storage_instance
logger.info("Creating storage backend for web interface...")
# Use shared factory with DATABASE_PATH for web interface
return await create_storage_instance(DATABASE_PATH, server_type="http")
```
--------------------------------------------------------------------------------
/.github/workflows/claude-code-review.yml:
--------------------------------------------------------------------------------
```yaml
name: Claude Code Review
on:
pull_request:
types: [opened, synchronize]
# Optional: Only run on specific file changes
# paths:
# - "src/**/*.ts"
# - "src/**/*.tsx"
# - "src/**/*.js"
# - "src/**/*.jsx"
jobs:
claude-review:
# Optional: Filter by PR author
# if: |
# github.event.pull_request.user.login == 'external-contributor' ||
# github.event.pull_request.user.login == 'new-developer' ||
# github.event.pull_request.author_association == 'FIRST_TIME_CONTRIBUTOR'
runs-on: ubuntu-latest
permissions:
contents: read
pull-requests: read
issues: read
id-token: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 1
- name: Run Claude Code Review
id: claude-review
uses: anthropics/claude-code-action@v1
with:
claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
prompt: |
REPO: ${{ github.repository }}
PR NUMBER: ${{ github.event.pull_request.number }}
Please review this pull request and provide feedback on:
- Code quality and best practices
- Potential bugs or issues
- Performance considerations
- Security concerns
- Test coverage
Use the repository's CLAUDE.md for guidance on style and conventions. Be constructive and helpful in your feedback.
Use `gh pr comment` with your Bash tool to leave your review as a comment on the PR.
# See https://github.com/anthropics/claude-code-action/blob/main/docs/usage.md
# or https://docs.claude.com/en/docs/claude-code/sdk#command-line for available options
claude_args: '--allowed-tools "Bash(gh issue view:*),Bash(gh search:*),Bash(gh issue list:*),Bash(gh pr comment:*),Bash(gh pr diff:*),Bash(gh pr view:*),Bash(gh pr list:*)"'
```
--------------------------------------------------------------------------------
/claude-hooks/test-mcp-hook.js:
--------------------------------------------------------------------------------
```javascript
#!/usr/bin/env node
/**
* Test MCP-based Memory Hook
* Tests the updated session-start hook with MCP protocol
*/
const { onSessionStart } = require('./core/session-start.js');
// Test configuration
const testContext = {
workingDirectory: process.cwd(),
sessionId: 'mcp-test-session',
trigger: 'session-start',
userMessage: 'test memory hook with cloudflare backend',
injectSystemMessage: async (message) => {
console.log('\n' + '='.repeat(60));
console.log('🧠 MCP MEMORY CONTEXT INJECTION TEST');
console.log('='.repeat(60));
console.log(message);
console.log('='.repeat(60) + '\n');
return true;
}
};
async function testMCPHook() {
console.log('🔧 Testing MCP Memory Hook...');
console.log(`📂 Working Directory: ${process.cwd()}`);
console.log(`🔧 Testing with Cloudflare backend configuration\n`);
try {
await testContext.onSessionStart(testContext);
console.log('✅ MCP Hook test completed successfully');
} catch (error) {
console.error('❌ MCP Hook test failed:', error.message);
// Don't show full stack trace in test mode
if (process.env.DEBUG) {
console.error(error.stack);
}
// Test completed - hook should fail gracefully
console.log('✅ Hook failed gracefully as expected when MCP server unavailable');
}
}
// Resolve the session-start handler from the module's possible export shapes
const sessionStartModule = require('./core/session-start.js');
if (sessionStartModule.handler) {
testContext.onSessionStart = sessionStartModule.handler;
} else if (typeof sessionStartModule === 'function') {
testContext.onSessionStart = sessionStartModule;
} else {
// Try direct export
testContext.onSessionStart = sessionStartModule.onSessionStart || sessionStartModule.default;
}
if (!testContext.onSessionStart) {
console.error('❌ Could not find onSessionStart handler');
process.exit(1);
}
// Run the test
testMCPHook();
```
--------------------------------------------------------------------------------
/scripts/installation/install_uv.py:
--------------------------------------------------------------------------------
```python
#!/usr/bin/env python3
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Script to install UV package manager
"""
import os
import sys
import subprocess
import platform
def main():
print("Installing UV package manager...")
try:
# Install UV using pip
subprocess.check_call([
sys.executable, '-m', 'pip', 'install', 'uv'
])
print("UV installed successfully!")
print("You can now use UV for faster dependency management:")
print(" uv pip install -r requirements.txt")
# Create shortcut script
system = platform.system().lower()
if system == "windows":
# Create .bat file for Windows
with open("uv-run.bat", "w") as f:
f.write("@echo off\n")
f.write("python -m uv run memory %*\n")
print("Created uv-run.bat shortcut")
else:
# Create shell script for Unix-like systems
with open("uv-run.sh", "w") as f:
f.write("#!/bin/sh\n")
f.write("python -m uv run memory \"$@\"\n")
# Make it executable
try:
os.chmod("uv-run.sh", 0o755)
except OSError:
pass  # chmod can fail on some filesystems; the shortcut is still usable
print("Created uv-run.sh shortcut")
except subprocess.SubprocessError as e:
print(f"Error installing UV: {e}")
sys.exit(1)
if __name__ == "__main__":
main()
```
--------------------------------------------------------------------------------
/archive/litestream-configs-v6.3.0/install_service.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
# Install MCP Memory Service as a systemd service
echo "Installing MCP Memory Service as a systemd service..."
# Check if running as regular user (not root)
if [ "$EUID" -eq 0 ]; then
echo "Error: Do not run this script as root. Run as your regular user."
exit 1
fi
# Get current user and working directory
CURRENT_USER=$(whoami)
CURRENT_DIR=$(pwd)
SERVICE_FILE="deployment/mcp-memory.service"
echo "User: $CURRENT_USER"
echo "Working directory: $CURRENT_DIR"
# Check if service file exists
if [ ! -f "$SERVICE_FILE" ]; then
echo "Error: Service file $SERVICE_FILE not found!"
exit 1
fi
# Generate a unique API key
API_KEY="mcp-$(openssl rand -hex 16)"
echo "Generated API key: $API_KEY"
# Update the service file with the actual API key
sed -i "s/Environment=MCP_API_KEY=.*/Environment=MCP_API_KEY=$API_KEY/" "$SERVICE_FILE"
# Copy service file to systemd directory
echo "Installing systemd service file..."
sudo cp "$SERVICE_FILE" /etc/systemd/system/
# Set proper permissions
sudo chmod 644 /etc/systemd/system/mcp-memory.service
# Reload systemd daemon
echo "Reloading systemd daemon..."
sudo systemctl daemon-reload
# Enable the service to start on boot
echo "Enabling service for startup..."
sudo systemctl enable mcp-memory.service
echo ""
echo "✅ MCP Memory Service installed successfully!"
echo ""
echo "Commands to manage the service:"
echo " Start: sudo systemctl start mcp-memory"
echo " Stop: sudo systemctl stop mcp-memory"
echo " Status: sudo systemctl status mcp-memory"
echo " Logs: sudo journalctl -u mcp-memory -f"
echo " Disable: sudo systemctl disable mcp-memory"
echo ""
echo "The service will now start automatically on system boot."
echo "API Key: $API_KEY"
echo ""
echo "Service will be available at:"
echo " Dashboard: https://localhost:8000"
echo " API Docs: https://localhost:8000/api/docs"
echo " Health: https://localhost:8000/api/health"
echo ""
echo "To start the service now, run:"
echo " sudo systemctl start mcp-memory"
```
--------------------------------------------------------------------------------
/scripts/utils/query_memories.py:
--------------------------------------------------------------------------------
```python
#!/usr/bin/env python3
"""Query memories from the SQLite database"""
import sqlite3
import json
import sys
def query_memories(tag_filter=None, query_text=None, limit=5):
"""Query memories from the database"""
conn = sqlite3.connect('/home/hkr/.local/share/mcp-memory/sqlite_vec.db')
cursor = conn.cursor()
if tag_filter:
sql = "SELECT content, tags FROM memories WHERE tags LIKE ? LIMIT ?"
cursor.execute(sql, (f'%{tag_filter}%', limit))
elif query_text:
sql = "SELECT content, tags FROM memories WHERE content LIKE ? LIMIT ?"
cursor.execute(sql, (f'%{query_text}%', limit))
else:
sql = "SELECT content, tags FROM memories ORDER BY created_at DESC LIMIT ?"
cursor.execute(sql, (limit,))
results = []
for row in cursor.fetchall():
content = row[0]
try:
tags = json.loads(row[1]) if row[1] else []
except (json.JSONDecodeError, TypeError):
# Tags might be stored differently
tags = row[1].split(',') if row[1] and isinstance(row[1], str) else []
results.append({
'content': content,
'tags': tags
})
conn.close()
return results
if __name__ == "__main__":
# Get memories with specific tags
print("=== Searching for README sections ===\n")
# Search for readme content
memories = query_memories(tag_filter="readme", limit=10)
for i, memory in enumerate(memories, 1):
print(f"Memory {i}:")
print(f"Content (first 500 chars):\n{memory['content'][:500]}")
print(f"Tags: {', '.join(memory['tags'])}")
print("-" * 80)
print()
# Search for specific content
print("\n=== Searching for Installation content ===\n")
memories = query_memories(query_text="installation", limit=5)
for i, memory in enumerate(memories, 1):
print(f"Memory {i}:")
print(f"Content (first 500 chars):\n{memory['content'][:500]}")
print(f"Tags: {', '.join(memory['tags'])}")
print("-" * 80)
print()
```
--------------------------------------------------------------------------------
/archive/deployment/deploy_http_with_mcp.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
# Deploy HTTP Server with MCP endpoints (hybrid approach)
echo "🔄 Switching to HTTP server with MCP protocol support..."
# Create updated service file for hybrid approach
cat > /tmp/mcp-memory-hybrid.service << 'EOF'
[Unit]
Description=MCP Memory Service HTTP+MCP Hybrid v4.0.0-alpha.1
Documentation=https://github.com/doobidoo/mcp-memory-service
After=network.target network-online.target
Wants=network-online.target
[Service]
Type=simple
User=hkr
Group=hkr
WorkingDirectory=/home/hkr/repositories/mcp-memory-service
Environment=PATH=/home/hkr/repositories/mcp-memory-service/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Environment=PYTHONPATH=/home/hkr/repositories/mcp-memory-service/src
Environment=MCP_CONSOLIDATION_ENABLED=true
Environment=MCP_MDNS_ENABLED=true
Environment=MCP_HTTPS_ENABLED=false
Environment=MCP_MDNS_SERVICE_NAME="MCP Memory Service - Hybrid"
Environment=MCP_HTTP_ENABLED=true
Environment=MCP_HTTP_HOST=0.0.0.0
Environment=MCP_HTTP_PORT=8000
Environment=MCP_MEMORY_STORAGE_BACKEND=sqlite_vec
Environment=MCP_API_KEY=test-key-123
ExecStart=/home/hkr/repositories/mcp-memory-service/venv/bin/python /home/hkr/repositories/mcp-memory-service/scripts/run_http_server.py
Restart=always
RestartSec=10
StandardOutput=journal
StandardError=journal
SyslogIdentifier=mcp-memory-service
[Install]
WantedBy=multi-user.target
EOF
# Install the hybrid service configuration
echo "📝 Installing hybrid HTTP+MCP service configuration..."
sudo cp /tmp/mcp-memory-hybrid.service /etc/systemd/system/mcp-memory.service
# Reload and start
echo "🔄 Reloading systemd and starting hybrid service..."
sudo systemctl daemon-reload
sudo systemctl start mcp-memory
# Check status
echo "🔍 Checking service status..."
sudo systemctl status mcp-memory --no-pager
echo ""
echo "✅ HTTP server with MCP protocol support is now running!"
echo ""
echo "🌐 Available Services:"
echo " - HTTP API: http://localhost:8000/api/*"
echo " - Dashboard: http://localhost:8000/"
echo " - Health: http://localhost:8000/api/health"
echo ""
echo "🔧 Next: Add MCP protocol endpoints to the HTTP server"
```
--------------------------------------------------------------------------------
/tools/docker/docker-compose.http.yml:
--------------------------------------------------------------------------------
```yaml
version: '3.8'
# Docker Compose configuration for HTTP/API mode
# Usage: docker-compose -f docker-compose.http.yml up -d
services:
mcp-memory-service:
build:
context: ../..
dockerfile: tools/docker/Dockerfile
ports:
- "${HTTP_PORT:-8000}:8000" # Map to different port if needed
volumes:
# Single data directory for all storage
- ./data:/app/data
# Model cache (prevents re-downloading models on each restart)
# Uncomment the following line to persist Hugging Face models
# - ${HOME}/.cache/huggingface:/root/.cache/huggingface
# Optional: mount local config
# - ./config:/app/config:ro
environment:
# Mode selection
- MCP_MODE=http
# Storage configuration
- MCP_MEMORY_STORAGE_BACKEND=sqlite_vec
- MCP_MEMORY_SQLITE_PATH=/app/data/sqlite_vec.db
- MCP_MEMORY_BACKUPS_PATH=/app/data/backups
# HTTP configuration
- MCP_HTTP_PORT=8000
- MCP_HTTP_HOST=0.0.0.0
- MCP_API_KEY=${MCP_API_KEY:-your-secure-api-key-here}
# Optional: HTTPS configuration
# - MCP_HTTPS_ENABLED=true
# - MCP_HTTPS_PORT=8443
# - MCP_SSL_CERT_FILE=/app/certs/cert.pem
# - MCP_SSL_KEY_FILE=/app/certs/key.pem
# Performance tuning
- LOG_LEVEL=${LOG_LEVEL:-INFO}
- MAX_RESULTS_PER_QUERY=10
- SIMILARITY_THRESHOLD=0.7
# Python configuration
- PYTHONUNBUFFERED=1
- PYTHONPATH=/app/src
# Offline mode (uncomment if models are pre-cached and network is restricted)
# - HF_HUB_OFFLINE=1
# - TRANSFORMERS_OFFLINE=1
# Use the unified entrypoint
entrypoint: ["/usr/local/bin/docker-entrypoint-unified.sh"]
restart: unless-stopped
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8000/api/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
# Resource limits (optional, adjust as needed)
deploy:
resources:
limits:
cpus: '2.0'
memory: 2G
reservations:
cpus: '0.5'
memory: 512M
```
--------------------------------------------------------------------------------
/scripts/testing/test-hook.js:
--------------------------------------------------------------------------------
```javascript
#!/usr/bin/env node
/**
* Test script for the enhanced session-start hook
*/
const path = require('path');
// Import the enhanced hook
const sessionStartHook = require('../../claude-hooks/core/session-start.js');
async function testEnhancedHook() {
console.log('🧪 Testing Enhanced Session Start Hook\n');
// Mock context for testing
const mockContext = {
workingDirectory: process.cwd(),
sessionId: 'test-session-' + Date.now(),
trigger: 'session-start',
userMessage: 'Help me understand the memory service improvements',
injectSystemMessage: async (message) => {
console.log('\n🎯 INJECTED CONTEXT:');
console.log('═'.repeat(60));
console.log(message);
console.log('═'.repeat(60));
return true;
}
};
console.log(`📂 Testing in directory: ${mockContext.workingDirectory}`);
console.log(`🔍 Test query: "${mockContext.userMessage}"`);
console.log(`⚙️ Trigger: ${mockContext.trigger}\n`);
try {
// Execute the enhanced hook
await sessionStartHook.handler(mockContext);
console.log('\n✅ Hook execution completed successfully!');
console.log('\n📊 Expected improvements:');
console.log(' • Multi-phase memory retrieval (recent + important + fallback)');
console.log(' • Enhanced recency indicators (🕒 today, 📅 this week)');
console.log(' • Better semantic queries with git context');
console.log(' • Improved categorization with "Recent Work" section');
console.log(' • Configurable memory ratios and time windows');
} catch (error) {
console.error('❌ Hook execution failed:', error.message);
console.error('Stack trace:', error.stack);
}
}
// Run the test
if (require.main === module) {
testEnhancedHook()
.then(() => {
console.log('\n🎉 Test completed');
process.exit(0);
})
.catch(error => {
console.error('\n💥 Test failed:', error.message);
process.exit(1);
});
}
module.exports = { testEnhancedHook };
```
--------------------------------------------------------------------------------
/examples/start_https_example.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
# Example HTTPS startup script for MCP Memory Service
# Copy and customize this file for your deployment
#
# This example shows how to configure the MCP Memory Service with custom SSL certificates.
# For easy local development with trusted certificates, consider using mkcert:
# https://github.com/FiloSottile/mkcert
# Storage configuration
export MCP_MEMORY_STORAGE_BACKEND=sqlite_vec
# API authentication - CHANGE THIS TO A SECURE KEY!
# Generate a secure key with: openssl rand -base64 32
export MCP_API_KEY="your-secure-api-key-here"
# HTTPS configuration with custom certificates
export MCP_HTTPS_ENABLED=true
export MCP_HTTPS_PORT=8443
# SSL Certificate paths - UPDATE THESE PATHS TO YOUR CERTIFICATES
#
# For mkcert certificates (recommended for development):
# 1. Install mkcert: https://github.com/FiloSottile/mkcert#installation
# 2. Create local CA: mkcert -install
# 3. Generate certificate: mkcert your-domain.local localhost 127.0.0.1
# 4. Update paths below to point to generated certificate files
#
# Example paths:
# export MCP_SSL_CERT_FILE="/path/to/your-domain.local+2.pem"
# export MCP_SSL_KEY_FILE="/path/to/your-domain.local+2-key.pem"
#
# For production, use certificates from your certificate authority:
export MCP_SSL_CERT_FILE="/path/to/your/certificate.pem"
export MCP_SSL_KEY_FILE="/path/to/your/certificate-key.pem"
# Optional: Disable HTTP if only HTTPS is needed
export MCP_HTTP_ENABLED=false
export MCP_HTTP_PORT=8080
# mDNS service discovery
export MCP_MDNS_ENABLED=true
export MCP_MDNS_SERVICE_NAME="MCP Memory Service"
# Optional: Additional configuration
# export MCP_MEMORY_INCLUDE_HOSTNAME=true
# export MCP_CONSOLIDATION_ENABLED=false
echo "Starting MCP Memory Service with HTTPS on port $MCP_HTTPS_PORT"
echo "Certificate: $MCP_SSL_CERT_FILE"
echo "Private Key: $MCP_SSL_KEY_FILE"
# Change to script directory and start server
cd "$(dirname "$0")/.."
# Check if virtual environment exists
if [ ! -f ".venv/bin/python" ]; then
echo "Error: Virtual environment not found at .venv/"
echo "Please run: python -m venv .venv && source .venv/bin/activate && pip install -e ."
exit 1
fi
# Start the server
exec ./.venv/bin/python run_server.py
```
--------------------------------------------------------------------------------
/docs/document-ingestion.md:
--------------------------------------------------------------------------------
```markdown
# Document Ingestion (v7.6.0+)
Enhanced document parsing, with optional semtools integration for higher-quality extraction.
## Supported Formats
| Format | Native Parser | With Semtools | Quality |
|--------|--------------|---------------|---------|
| PDF | PyPDF2/pdfplumber | LlamaParse | Excellent (OCR, tables) |
| DOCX | Not supported | LlamaParse | Excellent |
| PPTX | Not supported | LlamaParse | Excellent |
| TXT/MD | Built-in | N/A | Perfect |
## Semtools Integration (Optional)
Install [semtools](https://github.com/run-llama/semtools) for enhanced document parsing:
```bash
# Install via npm (recommended)
npm i -g @llamaindex/semtools
# Or via cargo
cargo install semtools
# Optional: Configure LlamaParse API key for best quality
export LLAMAPARSE_API_KEY="your-api-key"
```
## Configuration
```bash
# Document chunking settings
export MCP_DOCUMENT_CHUNK_SIZE=1000 # Characters per chunk
export MCP_DOCUMENT_CHUNK_OVERLAP=200 # Overlap between chunks
# LlamaParse API key (optional, improves quality)
export LLAMAPARSE_API_KEY="llx-..."
```
## Usage Examples
```bash
# Ingest a single document
claude /memory-ingest document.pdf --tags documentation
# Ingest directory
claude /memory-ingest-dir ./docs --tags knowledge-base
# Via Python
from pathlib import Path
from mcp_memory_service.ingestion import get_loader_for_file
loader = get_loader_for_file(Path("document.pdf"))
async for chunk in loader.extract_chunks(Path("document.pdf")):
    await store_memory(chunk.content, tags=["doc"])
```
## Features
- **Automatic format detection** - Selects best loader for each file
- **Intelligent chunking** - Respects paragraph/sentence boundaries
- **Metadata enrichment** - Preserves file info, extraction method, page numbers
- **Graceful fallback** - Uses native parsers if semtools unavailable
- **Progress tracking** - Reports chunks processed during ingestion
## Performance Considerations
- LlamaParse provides superior quality but requires API key and internet connection
- Native parsers work offline but may have lower extraction quality for complex documents
- Chunk size affects retrieval granularity vs context completeness
- Larger overlap improves continuity but increases storage
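As a rough illustration of the chunk-size/overlap trade-off, here is a minimal character-based chunker mirroring the `MCP_DOCUMENT_CHUNK_SIZE` and `MCP_DOCUMENT_CHUNK_OVERLAP` settings above. The service's real chunker additionally respects paragraph and sentence boundaries, so treat this as a sketch only:

```python
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 200) -> list[str]:
    """Split text into chunk_size-character chunks, carrying `overlap` chars between chunks."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap  # each chunk starts `step` chars after the previous one
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks

# 2500 chars with 1000/200 settings -> 3 chunks: 0-1000, 800-1800, 1600-2500
print(len(chunk_text("x" * 2500, chunk_size=1000, overlap=200)))  # 3
```

Doubling the overlap here would shrink `step` and produce more, more-redundant chunks, which is exactly the continuity-vs-storage trade-off noted above.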
```
--------------------------------------------------------------------------------
/scripts/sync/litestream/manual_sync.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
# Manual sync using HTTP downloads (alternative to Litestream restore)
DB_PATH="/Users/hkr/Library/Application Support/mcp-memory/sqlite_vec.db"
REMOTE_BASE="http://narrowbox.local:8080/mcp-memory"
BACKUP_PATH="/Users/hkr/Library/Application Support/mcp-memory/sqlite_vec.db.backup"
TEMP_DIR="/tmp/litestream_manual_$$"
echo "$(date): Starting manual sync from remote master..."
# Create temporary directory
mkdir -p "$TEMP_DIR"
# Get the latest generation ID
GENERATION=$(curl -s "$REMOTE_BASE/generations/" | grep -o 'href="[^"]*/"' | sed 's/href="//;s/\/"//g' | head -1)
if [ -z "$GENERATION" ]; then
    echo "$(date): ERROR: Could not determine generation ID"
    exit 1
fi
echo "$(date): Found generation: $GENERATION"
# Get the latest snapshot
SNAPSHOT_URL="$REMOTE_BASE/generations/$GENERATION/snapshots/"
SNAPSHOT_FILE=$(curl -s "$SNAPSHOT_URL" | grep -o 'href="[^"]*\.snapshot\.lz4"' | sed 's/href="//;s/"//g' | tail -1)
if [ -z "$SNAPSHOT_FILE" ]; then
    echo "$(date): ERROR: Could not find snapshot file"
    rm -rf "$TEMP_DIR"
    exit 1
fi
echo "$(date): Downloading snapshot: $SNAPSHOT_FILE"
# Download and decompress snapshot
curl -s "$SNAPSHOT_URL$SNAPSHOT_FILE" -o "$TEMP_DIR/snapshot.lz4"
if command -v lz4 >/dev/null 2>&1; then
    # Use lz4 if available
    lz4 -d "$TEMP_DIR/snapshot.lz4" "$TEMP_DIR/database.db"
else
    echo "$(date): ERROR: lz4 command not found. Please install: brew install lz4"
    rm -rf "$TEMP_DIR"
    exit 1
fi
# Backup current database
if [ -f "$DB_PATH" ]; then
    cp "$DB_PATH" "$BACKUP_PATH"
    echo "$(date): Created backup at $BACKUP_PATH"
fi
# Replace with new database
cp "$TEMP_DIR/database.db" "$DB_PATH"
if [ $? -eq 0 ]; then
    echo "$(date): Successfully synced database from remote master"
    # Remove backup on success
    rm -f "$BACKUP_PATH"
    # Show database info
    echo "$(date): Database size: $(du -h "$DB_PATH" | cut -f1)"
    echo "$(date): Database modified: $(stat -f "%Sm" "$DB_PATH")"
else
    echo "$(date): ERROR: Failed to replace database"
    # Restore backup on failure
    if [ -f "$BACKUP_PATH" ]; then
        mv "$BACKUP_PATH" "$DB_PATH"
        echo "$(date): Restored backup"
    fi
    rm -rf "$TEMP_DIR"
    exit 1
fi
# Cleanup
rm -rf "$TEMP_DIR"
echo "$(date): Manual sync completed successfully"
```
--------------------------------------------------------------------------------
/docs/archive/obsolete-workflows/load_memory_context.md:
--------------------------------------------------------------------------------
```markdown
# Memory Context Loading Prompt
Use this prompt at the start of Claude Code sessions on machines in your local network:
---
## Prompt for Claude Code
```
Load MCP Memory Service context for this project. Before we begin working, please retrieve and incorporate all stored knowledge about this codebase from my local memory service:
**Memory Service Endpoint**: https://your-server-ip:8443/mcp
**Authorization**: Bearer your-api-key
Execute this command to load context:
```bash
curl -k -s -X POST https://your-server-ip:8443/mcp \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-api-key" \
  -d '{"jsonrpc": "2.0", "id": 1, "method": "tools/call", "params": {"name": "retrieve_memory", "arguments": {"query": "claude-code-reference distributable-reference", "limit": 20}}}' \
  | jq -r '.result.content[0].text'
```
This memory contains:
- Complete project structure and architecture
- All key development, testing, and deployment commands
- Environment variables and configuration patterns
- Recent changes including v5.0.2 ONNX implementation details
- Issue management approaches and current project status
- Testing practices and platform-specific optimizations
- Remote service deployment and health monitoring
After loading this context, you'll have comprehensive knowledge of the MCP Memory Service project equivalent to extensive codebase exploration, which will significantly reduce token usage and improve response accuracy.
Please confirm successful context loading and summarize the key project information you've retrieved.
```
---
## Alternative Short Prompt
For quick context loading:
```
Load project context from memory service: curl -k -s -X POST https://your-server-ip:8443/mcp -H "Content-Type: application/json" -H "Authorization: Bearer your-api-key" -d '{"jsonrpc": "2.0", "id": 1, "method": "tools/call", "params": {"name": "retrieve_memory", "arguments": {"query": "claude-code-reference", "limit": 20}}}' | jq -r '.result.content[0].text'
Incorporate this MCP Memory Service project knowledge before proceeding.
```
---
## Network Distribution
1. **Copy this prompt file** to other machines in your network
2. **Update IP address** if memory service moves
3. **Test connectivity** with: `curl -k -s https://your-server-ip:8443/api/health`
4. **Use at session start** for instant project context
This eliminates repetitive codebase discovery across all your development machines.
```
--------------------------------------------------------------------------------
/scripts/service/service_control.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
# MCP Memory Service Control Script
SERVICE_NAME="mcp-memory"
case "$1" in
    start)
        echo "Starting MCP Memory Service..."
        sudo systemctl start $SERVICE_NAME
        sleep 2
        sudo systemctl status $SERVICE_NAME --no-pager
        ;;
    stop)
        echo "Stopping MCP Memory Service..."
        sudo systemctl stop $SERVICE_NAME
        sudo systemctl status $SERVICE_NAME --no-pager
        ;;
    restart)
        echo "Restarting MCP Memory Service..."
        sudo systemctl restart $SERVICE_NAME
        sleep 2
        sudo systemctl status $SERVICE_NAME --no-pager
        ;;
    status)
        sudo systemctl status $SERVICE_NAME --no-pager
        ;;
    logs)
        echo "Showing recent logs (Ctrl+C to exit)..."
        sudo journalctl -u $SERVICE_NAME -f
        ;;
    health)
        echo "Checking service health..."
        curl -k -s https://localhost:8000/api/health | jq '.' 2>/dev/null || curl -k -s https://localhost:8000/api/health
        ;;
    enable)
        echo "Enabling service for startup..."
        sudo systemctl enable $SERVICE_NAME
        echo "Service will start automatically on boot"
        ;;
    disable)
        echo "Disabling service from startup..."
        sudo systemctl disable $SERVICE_NAME
        echo "Service will not start automatically on boot"
        ;;
    install)
        echo "Installing service..."
        ./install_service.sh
        ;;
    uninstall)
        echo "Uninstalling service..."
        sudo systemctl stop $SERVICE_NAME 2>/dev/null
        sudo systemctl disable $SERVICE_NAME 2>/dev/null
        sudo rm -f /etc/systemd/system/$SERVICE_NAME.service
        sudo systemctl daemon-reload
        echo "Service uninstalled"
        ;;
    *)
        echo "Usage: $0 {start|stop|restart|status|logs|health|enable|disable|install|uninstall}"
        echo ""
        echo "Commands:"
        echo "  start     - Start the service"
        echo "  stop      - Stop the service"
        echo "  restart   - Restart the service"
        echo "  status    - Show service status"
        echo "  logs      - Show live service logs"
        echo "  health    - Check API health endpoint"
        echo "  enable    - Enable service for startup"
        echo "  disable   - Disable service from startup"
        echo "  install   - Install the systemd service"
        echo "  uninstall - Remove the systemd service"
        exit 1
        ;;
esac
```
--------------------------------------------------------------------------------
/tests/smithery/test_smithery.py:
--------------------------------------------------------------------------------
```python
#!/usr/bin/env python3
"""
Test script to verify Smithery configuration works correctly.
This simulates how Smithery would invoke the service.
"""
import os
import sys
import subprocess
import tempfile
import json
def test_smithery_config():
    """Test the Smithery configuration by simulating the expected command."""
    print("Testing Smithery configuration...")
    # Create temporary paths for testing
    with tempfile.TemporaryDirectory() as temp_dir:
        chroma_path = os.path.join(temp_dir, "chroma_db")
        backups_path = os.path.join(temp_dir, "backups")
        # Create directories
        os.makedirs(chroma_path, exist_ok=True)
        os.makedirs(backups_path, exist_ok=True)
        # Set environment variables as Smithery would
        test_env = os.environ.copy()
        test_env.update({
            'MCP_MEMORY_CHROMA_PATH': chroma_path,
            'MCP_MEMORY_BACKUPS_PATH': backups_path,
            'PYTHONUNBUFFERED': '1',
            'PYTORCH_ENABLE_MPS_FALLBACK': '1'
        })
        # Command that Smithery would run
        cmd = [sys.executable, 'smithery_wrapper.py', '--version']
        print(f"Running command: {' '.join(cmd)}")
        print(f"Environment: {json.dumps({k: v for k, v in test_env.items() if k.startswith('MCP_') or k in ['PYTHONUNBUFFERED', 'PYTORCH_ENABLE_MPS_FALLBACK']}, indent=2)}")
        try:
            result = subprocess.run(
                cmd,
                env=test_env,
                capture_output=True,
                text=True,
                timeout=30
            )
            print(f"Return code: {result.returncode}")
            if result.stdout:
                print(f"STDOUT:\n{result.stdout}")
            if result.stderr:
                print(f"STDERR:\n{result.stderr}")
            if result.returncode == 0:
                print("✅ SUCCESS: Smithery configuration test passed!")
                return True
            else:
                print("❌ FAILED: Smithery configuration test failed!")
                return False
        except subprocess.TimeoutExpired:
            print("❌ FAILED: Command timed out")
            return False
        except Exception as e:
            print(f"❌ FAILED: Exception occurred: {e}")
            return False

if __name__ == "__main__":
    success = test_smithery_config()
    sys.exit(0 if success else 1)
```
--------------------------------------------------------------------------------
/docs/integrations/groq-bridge.md:
--------------------------------------------------------------------------------
```markdown
# Groq Agent Bridge - Requirements
Install the required package:
```bash
pip install groq
# or
uv pip install groq
```
Set up your environment:
```bash
export GROQ_API_KEY="your-api-key-here"
```
## Available Models
The Groq bridge supports multiple high-performance models:
| Model | Context | Best For | Speed |
|-------|---------|----------|-------|
| **llama-3.3-70b-versatile** | 128K | General purpose (default) | ~300ms |
| **moonshotai/kimi-k2-instruct** | 256K | Agentic coding, tool calling | ~200ms |
| **llama-3.1-8b-instant** | 128K | Fast, simple tasks | ~100ms |
**Kimi K2 Features:**
- 256K context window (largest on GroqCloud)
- 1 trillion parameters (32B activated)
- Excellent for front-end development and complex coding
- Superior agentic intelligence and tool calling
- 185 tokens/second throughput
## Usage Examples
### As a library from another AI agent:
```python
from groq_agent_bridge import GroqAgentBridge
# Initialize the bridge
bridge = GroqAgentBridge()
# Simple call
response = bridge.call_model_raw("Explain quantum computing in simple terms")
print(response)
# Advanced call with options
result = bridge.call_model(
    prompt="Generate Python code for a binary search tree",
    model="llama-3.3-70b-versatile",
    max_tokens=500,
    temperature=0.3,
    system_message="You are an expert Python programmer"
)
print(result)
```
### Command-line usage:
```bash
# Simple usage (uses default llama-3.3-70b-versatile)
./scripts/utils/groq "What is machine learning?"
# Use Kimi K2 for complex coding tasks
./scripts/utils/groq "Generate a React component with hooks" \
--model "moonshotai/kimi-k2-instruct"
# Fast simple queries with llama-3.1-8b-instant
./scripts/utils/groq "Rate complexity 1-10: def add(a,b): return a+b" \
--model "llama-3.1-8b-instant"
# Full options with default model
./scripts/utils/groq "Generate a SQL query" \
--model "llama-3.3-70b-versatile" \
--max-tokens 200 \
--temperature 0.5 \
--system "You are a database expert" \
--json
```
### Integration with bash scripts:
```bash
#!/bin/bash
export GROQ_API_KEY="your-key"
# Get response and save to file
python groq_agent_bridge.py "Write a haiku about code" --temperature 0.9 > response.txt
# JSON output for parsing
json_response=$(python groq_agent_bridge.py "Explain REST APIs" --json)
# Parse with jq or other tools
```
This provides a completely non-interactive way for other AI agents to call Groq's models!
```
--------------------------------------------------------------------------------
/src/mcp_memory_service/cli/utils.py:
--------------------------------------------------------------------------------
```python
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
CLI utilities for MCP Memory Service.
"""
import os
from typing import Optional
from ..storage.base import MemoryStorage
async def get_storage(backend: Optional[str] = None) -> MemoryStorage:
    """
    Get storage backend for CLI operations.

    Args:
        backend: Storage backend name ('sqlite_vec' or 'cloudflare')

    Returns:
        Initialized storage backend
    """
    # Determine backend
    if backend is None:
        backend = os.getenv('MCP_MEMORY_STORAGE_BACKEND', 'sqlite_vec').lower()
    backend = backend.lower()
    if backend in ('sqlite_vec', 'sqlite-vec'):
        from ..storage.sqlite_vec import SqliteVecMemoryStorage
        from ..config import SQLITE_VEC_PATH
        storage = SqliteVecMemoryStorage(SQLITE_VEC_PATH)
        await storage.initialize()
        return storage
    elif backend == 'cloudflare':
        from ..storage.cloudflare import CloudflareStorage
        from ..config import (
            CLOUDFLARE_API_TOKEN, CLOUDFLARE_ACCOUNT_ID,
            CLOUDFLARE_VECTORIZE_INDEX, CLOUDFLARE_D1_DATABASE_ID,
            CLOUDFLARE_R2_BUCKET, CLOUDFLARE_EMBEDDING_MODEL,
            CLOUDFLARE_LARGE_CONTENT_THRESHOLD, CLOUDFLARE_MAX_RETRIES,
            CLOUDFLARE_BASE_DELAY
        )
        storage = CloudflareStorage(
            api_token=CLOUDFLARE_API_TOKEN,
            account_id=CLOUDFLARE_ACCOUNT_ID,
            vectorize_index=CLOUDFLARE_VECTORIZE_INDEX,
            d1_database_id=CLOUDFLARE_D1_DATABASE_ID,
            r2_bucket=CLOUDFLARE_R2_BUCKET,
            embedding_model=CLOUDFLARE_EMBEDDING_MODEL,
            large_content_threshold=CLOUDFLARE_LARGE_CONTENT_THRESHOLD,
            max_retries=CLOUDFLARE_MAX_RETRIES,
            base_delay=CLOUDFLARE_BASE_DELAY
        )
        await storage.initialize()
        return storage
    else:
        raise ValueError(f"Unsupported storage backend: {backend} (supported: 'sqlite_vec', 'cloudflare')")
```
--------------------------------------------------------------------------------
/scripts/migration/TIMESTAMP_CLEANUP_README.md:
--------------------------------------------------------------------------------
```markdown
# MCP Memory Timestamp Cleanup Scripts
## Overview
These scripts help clean up the timestamp mess in your MCP Memory ChromaDB database where multiple timestamp formats and fields have accumulated over time.
## Files
1. **`verify_mcp_timestamps.py`** - Verification script to check current timestamp state
2. **`cleanup_mcp_timestamps.py`** - Migration script to fix timestamp issues
## The Problem
Your database has accumulated 8 different timestamp-related fields:
- `timestamp` (integer) - Original design
- `created_at` (float) - Duplicate data
- `created_at_iso` (string) - ISO format duplicate
- `timestamp_float` (float) - Another duplicate
- `timestamp_str` (string) - String format duplicate
- `updated_at` (float) - Update tracking
- `updated_at_iso` (string) - Update tracking in ISO
- `date` (generic) - Generic date field
This causes:
- 3x storage overhead for the same timestamp
- Confusion about which field to use
- Inconsistent data retrieval
## Usage
### Step 1: Verify Current State
```bash
python3 scripts/migrations/verify_mcp_timestamps.py
```
This will show:
- Total memories in database
- Distribution of timestamp fields
- Memories missing timestamps
- Sample values showing the redundancy
- Date ranges for each timestamp type
### Step 2: Run Migration
```bash
python3 scripts/migrations/cleanup_mcp_timestamps.py
```
The migration will:
1. **Create a backup** of your database
2. **Standardize** all timestamps to integer format in the `timestamp` field
3. **Remove** all redundant timestamp fields
4. **Ensure** all memories have valid timestamps
5. **Optimize** the database with VACUUM
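The heart of step 2 is collapsing the mixed formats above (integer, float, ISO string) into one integer epoch. A hedged sketch of that conversion follows; `normalize_timestamp` is an illustrative helper, not the actual migration code:

```python
from datetime import datetime, timezone

def normalize_timestamp(value) -> int:
    """Convert int/float epochs or ISO-8601 strings to a single integer epoch (UTC)."""
    if isinstance(value, bool):
        raise TypeError("booleans are not timestamps")
    if isinstance(value, (int, float)):
        return int(value)
    if isinstance(value, str):
        # fromisoformat handles "2024-01-15T10:30:00" and offset-aware variants;
        # map a trailing "Z" to an explicit UTC offset first
        dt = datetime.fromisoformat(value.replace("Z", "+00:00"))
        if dt.tzinfo is None:
            dt = dt.replace(tzinfo=timezone.utc)  # assume naive strings are UTC
        return int(dt.timestamp())
    raise TypeError(f"Unsupported timestamp type: {type(value).__name__}")

print(normalize_timestamp(1704067200.75))                # 1704067200
print(normalize_timestamp("2024-01-01T00:00:00+00:00"))  # 1704067200
```

Whatever the real script does internally, the invariant after migration is the same: every memory carries exactly one integer `timestamp`.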
### Step 3: Verify Results
```bash
python3 scripts/migrations/verify_mcp_timestamps.py
```
After migration, you should see:
- Only one timestamp field (`timestamp`)
- All memories have timestamps
- Clean data structure
## Safety
- The migration script **always creates a backup** before making changes
- Backup location: `/Users/hkr/Library/Application Support/mcp-memory/chroma_db/chroma.sqlite3.backup_YYYYMMDD_HHMMSS`
- If anything goes wrong, you can restore the backup
## Restoration (if needed)
If you need to restore from backup:
```bash
# Stop Claude Desktop first
cp "/path/to/backup" "/Users/hkr/Library/Application Support/mcp-memory/chroma_db/chroma.sqlite3"
```
## After Migration
Update your MCP Memory Service code to only use the `timestamp` field (integer format) for all timestamp operations. This prevents the issue from recurring.
```
--------------------------------------------------------------------------------
/src/mcp_memory_service/utils/http_server_manager.py:
--------------------------------------------------------------------------------
```python
"""HTTP Server Manager for MCP Memory Service multi-client coordination."""
import asyncio
import logging
import os
import subprocess
import sys
from pathlib import Path
from typing import Optional
logger = logging.getLogger(__name__)
async def auto_start_http_server_if_needed() -> bool:
    """
    Auto-start HTTP server if needed for multi-client coordination.

    Returns:
        bool: True if server was started or already running, False if failed
    """
    try:
        # Check if HTTP auto-start is enabled
        if os.getenv("MCP_MEMORY_HTTP_AUTO_START", "").lower() not in ("true", "1"):
            logger.debug("HTTP auto-start not enabled")
            return False
        # Check if server is already running
        from ..utils.port_detection import is_port_in_use
        port = int(os.getenv("MCP_HTTP_PORT", "8000"))
        if await is_port_in_use("localhost", port):
            logger.info(f"HTTP server already running on port {port}")
            return True
        # Try to start the HTTP server
        logger.info(f"Starting HTTP server on port {port}")
        # Get the repository root
        repo_root = Path(__file__).parent.parent.parent.parent
        # Start the HTTP server as a background process
        cmd = [
            sys.executable, "-m", "src.mcp_memory_service.app",
            "--port", str(port),
            "--host", "localhost"
        ]
        process = subprocess.Popen(
            cmd,
            cwd=repo_root,
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
            start_new_session=True
        )
        # Wait a moment and check if the process started successfully
        await asyncio.sleep(1)
        if process.poll() is None:  # Process is still running
            # Wait a bit more and check if port is now in use
            await asyncio.sleep(2)
            if await is_port_in_use("localhost", port):
                logger.info(f"Successfully started HTTP server on port {port}")
                return True
            else:
                logger.warning("HTTP server process started but port not in use")
                return False
        else:
            logger.warning(f"HTTP server process exited with code {process.returncode}")
            return False
    except Exception as e:
        logger.error(f"Failed to auto-start HTTP server: {e}")
        return False
```
--------------------------------------------------------------------------------
/archive/docs-removed-2025-08-23/claude_integration.md:
--------------------------------------------------------------------------------
```markdown
# MCP Memory Service - Development Guidelines
## Commands
- Run memory server: `python scripts/run_memory_server.py`
- Run tests: `pytest tests/`
- Run specific test: `pytest tests/test_memory_ops.py::test_store_memory -v`
- Check environment: `python scripts/verify_environment_enhanced.py`
- Windows installation: `python scripts/install_windows.py`
- Build package: `python -m build`
## Installation Guidelines
- Always install in a virtual environment: `python -m venv venv`
- Use `install.py` for cross-platform installation
- Windows requires special PyTorch installation with correct index URL:
```bash
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu118
```
- For recursion errors, run: `python scripts/fix_sitecustomize.py`
## Memory Service Invocation
- See the comprehensive [Invocation Guide](invocation_guide.md) for full details
- Key trigger phrases:
- **Storage**: "remember that", "remember this", "save to memory", "store in memory"
- **Retrieval**: "do you remember", "recall", "retrieve from memory", "search your memory for"
- **Tag-based**: "find memories with tag", "search for tag", "retrieve memories tagged"
- **Deletion**: "forget", "delete from memory", "remove from memory"
## Code Style
- Python 3.10+ with type hints
- Use dataclasses for models (see `models/memory.py`)
- Triple-quoted docstrings for modules and functions
- Async/await pattern for all I/O operations
- Error handling with specific exception types and informative messages
- Logging with appropriate levels for different severity
- Commit messages follow semantic release format: `type(scope): message`
## Project Structure
- `src/mcp_memory_service/` - Core package code
- `models/` - Data models
- `storage/` - Database abstraction
- `utils/` - Helper functions
- `server.py` - MCP protocol implementation
- `scripts/` - Utility scripts
- `memory_wrapper.py` - Windows wrapper script
- `install.py` - Cross-platform installation script
## Dependencies
- ChromaDB (0.5.23) for vector database
- sentence-transformers (>=2.2.2) for embeddings
- PyTorch (platform-specific installation)
- MCP protocol (>=1.0.0, <2.0.0) for client-server communication
## Troubleshooting
- For Windows installation issues, use `scripts/install_windows.py`
- Apple Silicon requires Python 3.10+ built for ARM64
- CUDA issues: verify with `torch.cuda.is_available()`
- For MCP protocol issues, check `server.py` for required methods
```
--------------------------------------------------------------------------------
/archive/investigations/MACOS_HOOKS_INVESTIGATION.md:
--------------------------------------------------------------------------------
```markdown
# macOS Memory Hooks Investigation
## Issue
Memory awareness hooks may work differently on macOS vs Linux when using MCP protocol.
## Current Linux Behavior (Manjaro)
- **Problem**: Hooks try to spawn duplicate MCP server via `MCPClient(serverCommand)`
- **Symptom**: Connection timeout when hooks execute
- **Root Cause**: Claude Code already has MCP server on stdio, can't have two servers on same streams
- **Current Workaround**: HTTP fallback (requires separate HTTP server on port 8443)
## Hypothesis: macOS May Work Differently
User reports hooks work on macOS without HTTP fallback. Possible reasons:
1. macOS Claude Code may provide hooks access to existing MCP connection
2. Different process/stdio handling on macOS vs Linux
3. `useExistingServer: true` config may actually work on macOS
## Investigation Needed (On MacBook)
### Test 1: MCP-Only Configuration
```json
{
"memoryService": {
"protocol": "mcp",
"preferredProtocol": "mcp",
"mcp": {
"useExistingServer": true,
"serverName": "memory"
}
}
}
```
**Expected on macOS (if hypothesis correct):**
- ✅ Hooks connect successfully
- ✅ No duplicate server spawned
- ✅ Memory context injected on session start
**Expected on Linux (current behavior):**
- ❌ Connection timeout
- ❌ Multiple server processes spawn
- ❌ Fallback to HTTP needed
### Test 2: Check Memory Client Behavior
1. Run hook manually: `node ~/.claude/hooks/core/session-start.js`
2. Check process list: Does it spawn new `memory server` process?
3. Monitor connection: Does it timeout or succeed?
### Test 3: Platform Comparison
```bash
# On macOS
ps aux | grep "memory server" # How many instances?
node ~/.claude/hooks/core/session-start.js # Does it work?
# On Linux (current)
ps aux | grep "memory server" # Multiple instances!
node ~/.claude/hooks/core/session-start.js # Times out!
```
## Files to Check
- `claude-hooks/utilities/memory-client.js` - MCP connection logic
- `claude-hooks/utilities/mcp-client.js` - Server spawning code
- `claude-hooks/install_hooks.py` - Config generation (line 268-273: useExistingServer)
## Next Steps
1. Test on MacBook with MCP-only config
2. If works on macOS: investigate platform-specific differences
3. Document proper cross-platform solution
4. Update hooks to work consistently on both platforms
## Current Status
- **Linux**: Requires HTTP fallback (confirmed working)
- **macOS**: TBD - needs verification
- **Goal**: Understand why different, achieve consistent behavior
---
Created: 2025-09-30
Platform: Linux (Manjaro)
Issue: Hooks/MCP connection conflict
```
--------------------------------------------------------------------------------
/scripts/service/deploy_dual_services.sh:
--------------------------------------------------------------------------------
```bash
#!/bin/bash
echo "🚀 Deploying Dual MCP Services with mDNS..."
echo " - FastMCP Server (port 8000) for Claude Code MCP clients"
echo " - HTTP Dashboard (port 8080) for web interface"
echo " - mDNS enabled for both services"
echo ""
# Stop existing services
echo "⏹️ Stopping existing services..."
sudo systemctl stop mcp-memory 2>/dev/null || true
sudo systemctl stop mcp-http-dashboard 2>/dev/null || true
# Install FastMCP service with mDNS
echo "📝 Installing FastMCP service (port 8000)..."
sudo cp /tmp/fastmcp-server-with-mdns.service /etc/systemd/system/mcp-memory.service
# Install HTTP Dashboard service
echo "📝 Installing HTTP Dashboard service (port 8080)..."
sudo cp /tmp/mcp-http-dashboard.service /etc/systemd/system/mcp-http-dashboard.service
# Reload systemd
echo "🔄 Reloading systemd daemon..."
sudo systemctl daemon-reload
# Enable both services
echo "🔛 Enabling both services for startup..."
sudo systemctl enable mcp-memory
sudo systemctl enable mcp-http-dashboard
# Start FastMCP service first
echo "▶️ Starting FastMCP server (port 8000)..."
sudo systemctl start mcp-memory
sleep 2
# Start HTTP Dashboard service
echo "▶️ Starting HTTP Dashboard (port 8080)..."
sudo systemctl start mcp-http-dashboard
sleep 2
# Check status of both services
echo ""
echo "🔍 Checking service status..."
echo ""
echo "=== FastMCP Server (port 8000) ==="
sudo systemctl status mcp-memory --no-pager
echo ""
echo "=== HTTP Dashboard (port 8080) ==="
sudo systemctl status mcp-http-dashboard --no-pager
echo ""
echo "📊 Port status:"
ss -tlnp | grep -E ":800[08]"
echo ""
echo "🌐 mDNS Services (if avahi is installed):"
avahi-browse -t _http._tcp 2>/dev/null | grep -E "(MCP|Memory)" || echo "No mDNS services found (avahi may not be installed)"
avahi-browse -t _mcp._tcp 2>/dev/null | grep -E "(MCP|Memory)" || echo "No MCP mDNS services found"
echo ""
echo "✅ Dual service deployment complete!"
echo ""
echo "🔗 Available Services:"
echo " - FastMCP Protocol: http://memory.local:8000/mcp (for Claude Code)"
echo " - HTTP Dashboard: http://memory.local:8080/ (for web access)"
echo " - API Endpoints: http://memory.local:8080/api/* (for curl/scripts)"
echo ""
echo "📋 Service Management:"
echo " - FastMCP logs: sudo journalctl -u mcp-memory -f"
echo " - Dashboard logs: sudo journalctl -u mcp-http-dashboard -f"
echo " - Stop FastMCP: sudo systemctl stop mcp-memory"
echo " - Stop Dashboard: sudo systemctl stop mcp-http-dashboard"
echo ""
echo "🔍 mDNS Discovery:"
echo " - Browse services: avahi-browse -t _http._tcp"
echo " - Browse MCP: avahi-browse -t _mcp._tcp"
```
--------------------------------------------------------------------------------
/archive/docs-root-cleanup-2025-08-23/PYTORCH_DOWNLOAD_FIX.md:
--------------------------------------------------------------------------------
```markdown
# PyTorch Download Issue - FIXED! 🎉
## Problem
Claude Desktop was downloading PyTorch models (230MB+) on every startup, even with offline environment variables set in the config.
## Root Cause
The issue was that **UV package manager isolation** prevented environment variables from being properly inherited, and model downloads happened before our offline configuration could take effect.
## Solution Applied
### 1. Created Offline Launcher Script
**File**: `scripts/memory_offline.py`
- Sets offline environment variables **before any imports**
- Configures cache paths for Windows
- Bypasses UV isolation by running Python directly
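The launcher pattern described above boils down to exporting the offline variables before any ML library import. A minimal sketch (the real `scripts/memory_offline.py` may differ in its cache-path handling):

```python
import os

# Must run before importing torch/transformers/sentence_transformers;
# once those libraries are imported, they may already have decided to
# reach the network for model files.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"
# Point HF_HOME at the existing cache so previously downloaded models are found
os.environ.setdefault("HF_HOME", os.path.expanduser("~/.cache/huggingface"))

print(os.environ["HF_HUB_OFFLINE"])  # 1

# Only now import and start the service (illustrative import path):
# from mcp_memory_service.server import main
# main()
```

Running this script directly with `python` instead of `uv run` is what sidesteps the UV isolation issue.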
### 2. Updated Claude Desktop Config
**Your config now uses**:
```json
{
"command": "python",
"args": ["C:/REPOSITORIES/mcp-memory-service/scripts/memory_offline.py"]
}
```
**Instead of**:
```json
{
"command": "uv",
"args": ["--directory", "...", "run", "memory"]
}
```
### 3. Added Code-Level Offline Setup
**File**: `src/mcp_memory_service/__init__.py`
- Added `setup_offline_mode()` function
- Runs immediately when module is imported
- Provides fallback offline configuration
## Test Results ✅
**Before Fix**:
```
2025-08-11T19:04:48.249Z [memory] [info] Message from client: {...}
Downloading torch (230.2MiB) ← PROBLEM
2025-08-11T19:05:48.151Z [memory] [info] Request timed out
```
**After Fix**:
```
Setting up offline mode...
HF_HUB_OFFLINE: 1
HF_HOME: C:\Users\heinrich.krupp\.cache\huggingface
Starting MCP Memory Service in offline mode...
[No download messages] ← FIXED!
```
## Files Modified
1. **Your Claude Desktop Config**: `%APPDATA%\Claude\claude_desktop_config.json`
- Changed from UV to direct Python execution
- Uses new offline launcher script
2. **New Offline Launcher**: `scripts/memory_offline.py`
- Forces offline mode before any ML library imports
- Configures Windows cache paths automatically
3. **Core Module Init**: `src/mcp_memory_service/__init__.py`
- Added offline mode setup as backup
- Runs on module import
4. **Sample Config**: `examples/claude_desktop_config_windows.json`
- Updated for other users
- Uses new launcher approach
## Impact
✅ **No more 230MB PyTorch downloads on startup**
✅ **Faster Claude Desktop initialization**
✅ **Uses existing cached models (434 memories preserved)**
✅ **SQLite-vec backend still working**
## For Other Users
Use the updated `examples/claude_desktop_config_windows.json` template and:
1. Replace `C:/REPOSITORIES/mcp-memory-service` with your path
2. Replace `YOUR_USERNAME` with your Windows username
3. Use `python` command with `scripts/memory_offline.py`
The stubborn PyTorch download issue is now **completely resolved**! 🎉
```
--------------------------------------------------------------------------------
/src/mcp_memory_service/__init__.py:
--------------------------------------------------------------------------------
```python
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""MCP Memory Service initialization."""
# CRITICAL: Set offline mode BEFORE any other imports to prevent model downloads
import os
import platform
# Force offline mode for HuggingFace models - this MUST be done before any ML library imports
def setup_offline_mode():
    """Setup offline mode environment variables to prevent model downloads."""
    # Set offline environment variables
    os.environ['HF_HUB_OFFLINE'] = '1'
    os.environ['TRANSFORMERS_OFFLINE'] = '1'

    # Configure cache paths
    username = os.environ.get('USERNAME', os.environ.get('USER', ''))
    if platform.system() == "Windows" and username:
        default_hf_home = f"C:\\Users\\{username}\\.cache\\huggingface"
        default_transformers_cache = f"C:\\Users\\{username}\\.cache\\huggingface\\transformers"
        default_sentence_transformers_home = f"C:\\Users\\{username}\\.cache\\torch\\sentence_transformers"
    else:
        default_hf_home = os.path.expanduser("~/.cache/huggingface")
        default_transformers_cache = os.path.expanduser("~/.cache/huggingface/transformers")
        default_sentence_transformers_home = os.path.expanduser("~/.cache/torch/sentence_transformers")

    # Set cache paths if not already set
    if 'HF_HOME' not in os.environ:
        os.environ['HF_HOME'] = default_hf_home
    if 'TRANSFORMERS_CACHE' not in os.environ:
        os.environ['TRANSFORMERS_CACHE'] = default_transformers_cache
    if 'SENTENCE_TRANSFORMERS_HOME' not in os.environ:
        os.environ['SENTENCE_TRANSFORMERS_HOME'] = default_sentence_transformers_home

# Setup offline mode immediately when this module is imported
setup_offline_mode()
__version__ = "8.42.0"
from .models import Memory, MemoryQueryResult
from .storage import MemoryStorage
from .utils import generate_content_hash
# Conditional imports
__all__ = [
    'Memory',
    'MemoryQueryResult',
    'MemoryStorage',
    'generate_content_hash'
]
# Import storage backends conditionally
try:
    from .storage import SqliteVecMemoryStorage
    __all__.append('SqliteVecMemoryStorage')
except ImportError:
    SqliteVecMemoryStorage = None
```
--------------------------------------------------------------------------------
/.github/workflows/CACHE_FIX.md:
--------------------------------------------------------------------------------
```markdown
# Python Cache Configuration Fix
## Issue Identified
**Date**: 2024-08-24
**Problem**: GitHub Actions workflows failing at Python setup step
### Root Cause
The `setup-python` action was configured with `cache: 'pip'` but couldn't find a `requirements.txt` file. The project uses `pyproject.toml` for dependency management instead.
### Error Message
```
Error: No file in /home/runner/work/mcp-memory-service/mcp-memory-service matched to [**/requirements.txt], make sure you have checked out the target repository
```
## Solution Applied
Added `cache-dependency-path: '**/pyproject.toml'` to all Python setup steps that use pip caching.
### Files Modified
#### 1. `.github/workflows/main-optimized.yml`
Fixed 2 instances:
- Lines 34-39: Release job Python setup
- Lines 112-117: Test job Python setup
#### 2. `.github/workflows/cleanup-images.yml`
Fixed 1 instance:
- Lines 95-100: Docker Hub cleanup job Python setup
### Before
```yaml
- name: Set up Python
  uses: actions/setup-python@v4
  with:
    python-version: '3.11'
    cache: 'pip'
    # ❌ Missing cache-dependency-path causes failure
```
### After
```yaml
- name: Set up Python
  uses: actions/setup-python@v4
  with:
    python-version: '3.11'
    cache: 'pip'
    cache-dependency-path: '**/pyproject.toml'
    # ✅ Explicitly tells setup-python where to find dependencies
```
## Benefits
1. **Immediate Fix**: Workflows will no longer fail at Python setup step
2. **Performance**: Dependencies are properly cached, reducing workflow execution time
3. **Compatibility**: Works with modern Python projects using `pyproject.toml` (PEP 621)
## Testing
All modified workflows have been validated:
- ✅ `main-optimized.yml` - Valid YAML syntax
- ✅ `cleanup-images.yml` - Valid YAML syntax
## Background
The `setup-python` action defaults to looking for `requirements.txt` when using pip cache. Since this project uses `pyproject.toml` for dependency management (following modern Python packaging standards), we need to explicitly specify the dependency file path.
This is a known limitation of the setup-python action, tracked upstream in:
- Issue #502: Cache pip dependencies from pyproject.toml file
- Issue #529: Change pip default cache path to include pyproject.toml
## Next Steps
After pushing these changes:
1. Workflows should complete successfully
2. Monitor the Python setup steps to confirm caching works
3. Check workflow execution time improvements from proper caching
## Alternative Solutions (Not Applied)
1. **Remove caching**: Simply remove `cache: 'pip'` line (would work but slower)
2. **Create requirements.txt**: Generate from pyproject.toml (adds maintenance burden)
3. **Use uv directly**: Since project uses uv for package management (more complex change)
Date: 2024-08-24
Status: Fixed and ready for deployment
```