This is page 4 of 47. Use http://codebase.md/doobidoo/mcp-memory-service?lines=true&page={x} to view the full context.

# Directory Structure

```
├── .claude
│   ├── agents
│   │   ├── amp-bridge.md
│   │   ├── amp-pr-automator.md
│   │   ├── code-quality-guard.md
│   │   ├── gemini-pr-automator.md
│   │   └── github-release-manager.md
│   ├── settings.local.json.backup
│   └── settings.local.json.local
├── .commit-message
├── .dockerignore
├── .env.example
├── .env.sqlite.backup
├── .envnn#
├── .gitattributes
├── .github
│   ├── FUNDING.yml
│   ├── ISSUE_TEMPLATE
│   │   ├── bug_report.yml
│   │   ├── config.yml
│   │   ├── feature_request.yml
│   │   └── performance_issue.yml
│   ├── pull_request_template.md
│   └── workflows
│       ├── bridge-tests.yml
│       ├── CACHE_FIX.md
│       ├── claude-code-review.yml
│       ├── claude.yml
│       ├── cleanup-images.yml.disabled
│       ├── dev-setup-validation.yml
│       ├── docker-publish.yml
│       ├── LATEST_FIXES.md
│       ├── main-optimized.yml.disabled
│       ├── main.yml
│       ├── publish-and-test.yml
│       ├── README_OPTIMIZATION.md
│       ├── release-tag.yml.disabled
│       ├── release.yml
│       ├── roadmap-review-reminder.yml
│       ├── SECRET_CONDITIONAL_FIX.md
│       └── WORKFLOW_FIXES.md
├── .gitignore
├── .mcp.json.backup
├── .mcp.json.template
├── .pyscn
│   ├── .gitignore
│   └── reports
│       └── analyze_20251123_214224.html
├── AGENTS.md
├── archive
│   ├── deployment
│   │   ├── deploy_fastmcp_fixed.sh
│   │   ├── deploy_http_with_mcp.sh
│   │   └── deploy_mcp_v4.sh
│   ├── deployment-configs
│   │   ├── empty_config.yml
│   │   └── smithery.yaml
│   ├── development
│   │   └── test_fastmcp.py
│   ├── docs-removed-2025-08-23
│   │   ├── authentication.md
│   │   ├── claude_integration.md
│   │   ├── claude-code-compatibility.md
│   │   ├── claude-code-integration.md
│   │   ├── claude-code-quickstart.md
│   │   ├── claude-desktop-setup.md
│   │   ├── complete-setup-guide.md
│   │   ├── database-synchronization.md
│   │   ├── development
│   │   │   ├── autonomous-memory-consolidation.md
│   │   │   ├── CLEANUP_PLAN.md
│   │   │   ├── CLEANUP_README.md
│   │   │   ├── CLEANUP_SUMMARY.md
│   │   │   ├── dream-inspired-memory-consolidation.md
│   │   │   ├── hybrid-slm-memory-consolidation.md
│   │   │   ├── mcp-milestone.md
│   │   │   ├── multi-client-architecture.md
│   │   │   ├── test-results.md
│   │   │   └── TIMESTAMP_FIX_SUMMARY.md
│   │   ├── distributed-sync.md
│   │   ├── invocation_guide.md
│   │   ├── macos-intel.md
│   │   ├── master-guide.md
│   │   ├── mcp-client-configuration.md
│   │   ├── multi-client-server.md
│   │   ├── service-installation.md
│   │   ├── sessions
│   │   │   └── MCP_ENHANCEMENT_SESSION_MEMORY_v4.1.0.md
│   │   ├── UBUNTU_SETUP.md
│   │   ├── ubuntu.md
│   │   ├── windows-setup.md
│   │   └── windows.md
│   ├── docs-root-cleanup-2025-08-23
│   │   ├── AWESOME_LIST_SUBMISSION.md
│   │   ├── CLOUDFLARE_IMPLEMENTATION.md
│   │   ├── DOCUMENTATION_ANALYSIS.md
│   │   ├── DOCUMENTATION_CLEANUP_PLAN.md
│   │   ├── DOCUMENTATION_CONSOLIDATION_COMPLETE.md
│   │   ├── LITESTREAM_SETUP_GUIDE.md
│   │   ├── lm_studio_system_prompt.md
│   │   ├── PYTORCH_DOWNLOAD_FIX.md
│   │   └── README-ORIGINAL-BACKUP.md
│   ├── investigations
│   │   └── MACOS_HOOKS_INVESTIGATION.md
│   ├── litestream-configs-v6.3.0
│   │   ├── install_service.sh
│   │   ├── litestream_master_config_fixed.yml
│   │   ├── litestream_master_config.yml
│   │   ├── litestream_replica_config_fixed.yml
│   │   ├── litestream_replica_config.yml
│   │   ├── litestream_replica_simple.yml
│   │   ├── litestream-http.service
│   │   ├── litestream.service
│   │   └── requirements-cloudflare.txt
│   ├── release-notes
│   │   └── release-notes-v7.1.4.md
│   └── setup-development
│       ├── README.md
│       ├── setup_consolidation_mdns.sh
│       ├── STARTUP_SETUP_GUIDE.md
│       └── test_service.sh
├── CHANGELOG-HISTORIC.md
├── CHANGELOG.md
├── claude_commands
│   ├── memory-context.md
│   ├── memory-health.md
│   ├── memory-ingest-dir.md
│   ├── memory-ingest.md
│   ├── memory-recall.md
│   ├── memory-search.md
│   ├── memory-store.md
│   ├── README.md
│   └── session-start.md
├── claude-hooks
│   ├── config.json
│   ├── config.template.json
│   ├── CONFIGURATION.md
│   ├── core
│   │   ├── memory-retrieval.js
│   │   ├── mid-conversation.js
│   │   ├── session-end.js
│   │   ├── session-start.js
│   │   └── topic-change.js
│   ├── debug-pattern-test.js
│   ├── install_claude_hooks_windows.ps1
│   ├── install_hooks.py
│   ├── memory-mode-controller.js
│   ├── MIGRATION.md
│   ├── README-NATURAL-TRIGGERS.md
│   ├── README-phase2.md
│   ├── README.md
│   ├── simple-test.js
│   ├── statusline.sh
│   ├── test-adaptive-weights.js
│   ├── test-dual-protocol-hook.js
│   ├── test-mcp-hook.js
│   ├── test-natural-triggers.js
│   ├── test-recency-scoring.js
│   ├── tests
│   │   ├── integration-test.js
│   │   ├── phase2-integration-test.js
│   │   ├── test-code-execution.js
│   │   ├── test-cross-session.json
│   │   ├── test-session-tracking.json
│   │   └── test-threading.json
│   ├── utilities
│   │   ├── adaptive-pattern-detector.js
│   │   ├── context-formatter.js
│   │   ├── context-shift-detector.js
│   │   ├── conversation-analyzer.js
│   │   ├── dynamic-context-updater.js
│   │   ├── git-analyzer.js
│   │   ├── mcp-client.js
│   │   ├── memory-client.js
│   │   ├── memory-scorer.js
│   │   ├── performance-manager.js
│   │   ├── project-detector.js
│   │   ├── session-tracker.js
│   │   ├── tiered-conversation-monitor.js
│   │   └── version-checker.js
│   └── WINDOWS-SESSIONSTART-BUG.md
├── CLAUDE.md
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── Development-Sprint-November-2025.md
├── docs
│   ├── amp-cli-bridge.md
│   ├── api
│   │   ├── code-execution-interface.md
│   │   ├── memory-metadata-api.md
│   │   ├── PHASE1_IMPLEMENTATION_SUMMARY.md
│   │   ├── PHASE2_IMPLEMENTATION_SUMMARY.md
│   │   ├── PHASE2_REPORT.md
│   │   └── tag-standardization.md
│   ├── architecture
│   │   ├── search-enhancement-spec.md
│   │   └── search-examples.md
│   ├── architecture.md
│   ├── archive
│   │   └── obsolete-workflows
│   │       ├── load_memory_context.md
│   │       └── README.md
│   ├── assets
│   │   └── images
│   │       ├── dashboard-v3.3.0-preview.png
│   │       ├── memory-awareness-hooks-example.png
│   │       ├── project-infographic.svg
│   │       └── README.md
│   ├── CLAUDE_CODE_QUICK_REFERENCE.md
│   ├── cloudflare-setup.md
│   ├── deployment
│   │   ├── docker.md
│   │   ├── dual-service.md
│   │   ├── production-guide.md
│   │   └── systemd-service.md
│   ├── development
│   │   ├── ai-agent-instructions.md
│   │   ├── code-quality
│   │   │   ├── phase-2a-completion.md
│   │   │   ├── phase-2a-handle-get-prompt.md
│   │   │   ├── phase-2a-index.md
│   │   │   ├── phase-2a-install-package.md
│   │   │   └── phase-2b-session-summary.md
│   │   ├── code-quality-workflow.md
│   │   ├── dashboard-workflow.md
│   │   ├── issue-management.md
│   │   ├── pr-review-guide.md
│   │   ├── refactoring-notes.md
│   │   ├── release-checklist.md
│   │   └── todo-tracker.md
│   ├── docker-optimized-build.md
│   ├── document-ingestion.md
│   ├── DOCUMENTATION_AUDIT.md
│   ├── enhancement-roadmap-issue-14.md
│   ├── examples
│   │   ├── analysis-scripts.js
│   │   ├── maintenance-session-example.md
│   │   ├── memory-distribution-chart.jsx
│   │   └── tag-schema.json
│   ├── first-time-setup.md
│   ├── glama-deployment.md
│   ├── guides
│   │   ├── advanced-command-examples.md
│   │   ├── chromadb-migration.md
│   │   ├── commands-vs-mcp-server.md
│   │   ├── mcp-enhancements.md
│   │   ├── mdns-service-discovery.md
│   │   ├── memory-consolidation-guide.md
│   │   ├── migration.md
│   │   ├── scripts.md
│   │   └── STORAGE_BACKENDS.md
│   ├── HOOK_IMPROVEMENTS.md
│   ├── hooks
│   │   └── phase2-code-execution-migration.md
│   ├── http-server-management.md
│   ├── ide-compatability.md
│   ├── IMAGE_RETENTION_POLICY.md
│   ├── images
│   │   └── dashboard-placeholder.md
│   ├── implementation
│   │   ├── health_checks.md
│   │   └── performance.md
│   ├── IMPLEMENTATION_PLAN_HTTP_SSE.md
│   ├── integration
│   │   ├── homebrew.md
│   │   └── multi-client.md
│   ├── integrations
│   │   ├── gemini.md
│   │   ├── groq-bridge.md
│   │   ├── groq-integration-summary.md
│   │   └── groq-model-comparison.md
│   ├── integrations.md
│   ├── legacy
│   │   └── dual-protocol-hooks.md
│   ├── LM_STUDIO_COMPATIBILITY.md
│   ├── maintenance
│   │   └── memory-maintenance.md
│   ├── mastery
│   │   ├── api-reference.md
│   │   ├── architecture-overview.md
│   │   ├── configuration-guide.md
│   │   ├── local-setup-and-run.md
│   │   ├── testing-guide.md
│   │   └── troubleshooting.md
│   ├── migration
│   │   └── code-execution-api-quick-start.md
│   ├── natural-memory-triggers
│   │   ├── cli-reference.md
│   │   ├── installation-guide.md
│   │   └── performance-optimization.md
│   ├── oauth-setup.md
│   ├── pr-graphql-integration.md
│   ├── quick-setup-cloudflare-dual-environment.md
│   ├── README.md
│   ├── remote-configuration-wiki-section.md
│   ├── research
│   │   ├── code-execution-interface-implementation.md
│   │   └── code-execution-interface-summary.md
│   ├── ROADMAP.md
│   ├── sqlite-vec-backend.md
│   ├── statistics
│   │   ├── charts
│   │   │   ├── activity_patterns.png
│   │   │   ├── contributors.png
│   │   │   ├── growth_trajectory.png
│   │   │   ├── monthly_activity.png
│   │   │   └── october_sprint.png
│   │   ├── data
│   │   │   ├── activity_by_day.csv
│   │   │   ├── activity_by_hour.csv
│   │   │   ├── contributors.csv
│   │   │   └── monthly_activity.csv
│   │   ├── generate_charts.py
│   │   └── REPOSITORY_STATISTICS.md
│   ├── technical
│   │   ├── development.md
│   │   ├── memory-migration.md
│   │   ├── migration-log.md
│   │   ├── sqlite-vec-embedding-fixes.md
│   │   └── tag-storage.md
│   ├── testing
│   │   └── regression-tests.md
│   ├── testing-cloudflare-backend.md
│   ├── troubleshooting
│   │   ├── cloudflare-api-token-setup.md
│   │   ├── cloudflare-authentication.md
│   │   ├── general.md
│   │   ├── hooks-quick-reference.md
│   │   ├── pr162-schema-caching-issue.md
│   │   ├── session-end-hooks.md
│   │   └── sync-issues.md
│   └── tutorials
│       ├── advanced-techniques.md
│       ├── data-analysis.md
│       └── demo-session-walkthrough.md
├── examples
│   ├── claude_desktop_config_template.json
│   ├── claude_desktop_config_windows.json
│   ├── claude-desktop-http-config.json
│   ├── config
│   │   └── claude_desktop_config.json
│   ├── http-mcp-bridge.js
│   ├── memory_export_template.json
│   ├── README.md
│   ├── setup
│   │   └── setup_multi_client_complete.py
│   └── start_https_example.sh
├── install_service.py
├── install.py
├── LICENSE
├── NOTICE
├── pyproject.toml
├── pytest.ini
├── README.md
├── run_server.py
├── scripts
│   ├── .claude
│   │   └── settings.local.json
│   ├── archive
│   │   └── check_missing_timestamps.py
│   ├── backup
│   │   ├── backup_memories.py
│   │   ├── backup_sqlite_vec.sh
│   │   ├── export_distributable_memories.sh
│   │   └── restore_memories.py
│   ├── benchmarks
│   │   ├── benchmark_code_execution_api.py
│   │   ├── benchmark_hybrid_sync.py
│   │   └── benchmark_server_caching.py
│   ├── database
│   │   ├── analyze_sqlite_vec_db.py
│   │   ├── check_sqlite_vec_status.py
│   │   ├── db_health_check.py
│   │   └── simple_timestamp_check.py
│   ├── development
│   │   ├── debug_server_initialization.py
│   │   ├── find_orphaned_files.py
│   │   ├── fix_mdns.sh
│   │   ├── fix_sitecustomize.py
│   │   ├── remote_ingest.sh
│   │   ├── setup-git-merge-drivers.sh
│   │   ├── uv-lock-merge.sh
│   │   └── verify_hybrid_sync.py
│   ├── hooks
│   │   └── pre-commit
│   ├── installation
│   │   ├── install_linux_service.py
│   │   ├── install_macos_service.py
│   │   ├── install_uv.py
│   │   ├── install_windows_service.py
│   │   ├── install.py
│   │   ├── setup_backup_cron.sh
│   │   ├── setup_claude_mcp.sh
│   │   └── setup_cloudflare_resources.py
│   ├── linux
│   │   ├── service_status.sh
│   │   ├── start_service.sh
│   │   ├── stop_service.sh
│   │   ├── uninstall_service.sh
│   │   └── view_logs.sh
│   ├── maintenance
│   │   ├── assign_memory_types.py
│   │   ├── check_memory_types.py
│   │   ├── cleanup_corrupted_encoding.py
│   │   ├── cleanup_memories.py
│   │   ├── cleanup_organize.py
│   │   ├── consolidate_memory_types.py
│   │   ├── consolidation_mappings.json
│   │   ├── delete_orphaned_vectors_fixed.py
│   │   ├── fast_cleanup_duplicates_with_tracking.sh
│   │   ├── find_all_duplicates.py
│   │   ├── find_cloudflare_duplicates.py
│   │   ├── find_duplicates.py
│   │   ├── memory-types.md
│   │   ├── README.md
│   │   ├── recover_timestamps_from_cloudflare.py
│   │   ├── regenerate_embeddings.py
│   │   ├── repair_malformed_tags.py
│   │   ├── repair_memories.py
│   │   ├── repair_sqlite_vec_embeddings.py
│   │   ├── repair_zero_embeddings.py
│   │   ├── restore_from_json_export.py
│   │   └── scan_todos.sh
│   ├── migration
│   │   ├── cleanup_mcp_timestamps.py
│   │   ├── legacy
│   │   │   └── migrate_chroma_to_sqlite.py
│   │   ├── mcp-migration.py
│   │   ├── migrate_sqlite_vec_embeddings.py
│   │   ├── migrate_storage.py
│   │   ├── migrate_tags.py
│   │   ├── migrate_timestamps.py
│   │   ├── migrate_to_cloudflare.py
│   │   ├── migrate_to_sqlite_vec.py
│   │   ├── migrate_v5_enhanced.py
│   │   ├── TIMESTAMP_CLEANUP_README.md
│   │   └── verify_mcp_timestamps.py
│   ├── pr
│   │   ├── amp_collect_results.sh
│   │   ├── amp_detect_breaking_changes.sh
│   │   ├── amp_generate_tests.sh
│   │   ├── amp_pr_review.sh
│   │   ├── amp_quality_gate.sh
│   │   ├── amp_suggest_fixes.sh
│   │   ├── auto_review.sh
│   │   ├── detect_breaking_changes.sh
│   │   ├── generate_tests.sh
│   │   ├── lib
│   │   │   └── graphql_helpers.sh
│   │   ├── quality_gate.sh
│   │   ├── resolve_threads.sh
│   │   ├── run_pyscn_analysis.sh
│   │   ├── run_quality_checks.sh
│   │   ├── thread_status.sh
│   │   └── watch_reviews.sh
│   ├── quality
│   │   ├── fix_dead_code_install.sh
│   │   ├── phase1_dead_code_analysis.md
│   │   ├── phase2_complexity_analysis.md
│   │   ├── README_PHASE1.md
│   │   ├── README_PHASE2.md
│   │   ├── track_pyscn_metrics.sh
│   │   └── weekly_quality_review.sh
│   ├── README.md
│   ├── run
│   │   ├── run_mcp_memory.sh
│   │   ├── run-with-uv.sh
│   │   └── start_sqlite_vec.sh
│   ├── run_memory_server.py
│   ├── server
│   │   ├── check_http_server.py
│   │   ├── check_server_health.py
│   │   ├── memory_offline.py
│   │   ├── preload_models.py
│   │   ├── run_http_server.py
│   │   ├── run_memory_server.py
│   │   ├── start_http_server.bat
│   │   └── start_http_server.sh
│   ├── service
│   │   ├── deploy_dual_services.sh
│   │   ├── install_http_service.sh
│   │   ├── mcp-memory-http.service
│   │   ├── mcp-memory.service
│   │   ├── memory_service_manager.sh
│   │   ├── service_control.sh
│   │   ├── service_utils.py
│   │   └── update_service.sh
│   ├── sync
│   │   ├── check_drift.py
│   │   ├── claude_sync_commands.py
│   │   ├── export_memories.py
│   │   ├── import_memories.py
│   │   ├── litestream
│   │   │   ├── apply_local_changes.sh
│   │   │   ├── enhanced_memory_store.sh
│   │   │   ├── init_staging_db.sh
│   │   │   ├── io.litestream.replication.plist
│   │   │   ├── manual_sync.sh
│   │   │   ├── memory_sync.sh
│   │   │   ├── pull_remote_changes.sh
│   │   │   ├── push_to_remote.sh
│   │   │   ├── README.md
│   │   │   ├── resolve_conflicts.sh
│   │   │   ├── setup_local_litestream.sh
│   │   │   ├── setup_remote_litestream.sh
│   │   │   ├── staging_db_init.sql
│   │   │   ├── stash_local_changes.sh
│   │   │   ├── sync_from_remote_noconfig.sh
│   │   │   └── sync_from_remote.sh
│   │   ├── README.md
│   │   ├── safe_cloudflare_update.sh
│   │   ├── sync_memory_backends.py
│   │   └── sync_now.py
│   ├── testing
│   │   ├── run_complete_test.py
│   │   ├── run_memory_test.sh
│   │   ├── simple_test.py
│   │   ├── test_cleanup_logic.py
│   │   ├── test_cloudflare_backend.py
│   │   ├── test_docker_functionality.py
│   │   ├── test_installation.py
│   │   ├── test_mdns.py
│   │   ├── test_memory_api.py
│   │   ├── test_memory_simple.py
│   │   ├── test_migration.py
│   │   ├── test_search_api.py
│   │   ├── test_sqlite_vec_embeddings.py
│   │   ├── test_sse_events.py
│   │   ├── test-connection.py
│   │   └── test-hook.js
│   ├── utils
│   │   ├── claude_commands_utils.py
│   │   ├── generate_personalized_claude_md.sh
│   │   ├── groq
│   │   ├── groq_agent_bridge.py
│   │   ├── list-collections.py
│   │   ├── memory_wrapper_uv.py
│   │   ├── query_memories.py
│   │   ├── smithery_wrapper.py
│   │   ├── test_groq_bridge.sh
│   │   └── uv_wrapper.py
│   └── validation
│       ├── check_dev_setup.py
│       ├── check_documentation_links.py
│       ├── diagnose_backend_config.py
│       ├── validate_configuration_complete.py
│       ├── validate_memories.py
│       ├── validate_migration.py
│       ├── validate_timestamp_integrity.py
│       ├── verify_environment.py
│       ├── verify_pytorch_windows.py
│       └── verify_torch.py
├── SECURITY.md
├── selective_timestamp_recovery.py
├── SPONSORS.md
├── src
│   └── mcp_memory_service
│       ├── __init__.py
│       ├── api
│       │   ├── __init__.py
│       │   ├── client.py
│       │   ├── operations.py
│       │   ├── sync_wrapper.py
│       │   └── types.py
│       ├── backup
│       │   ├── __init__.py
│       │   └── scheduler.py
│       ├── cli
│       │   ├── __init__.py
│       │   ├── ingestion.py
│       │   ├── main.py
│       │   └── utils.py
│       ├── config.py
│       ├── consolidation
│       │   ├── __init__.py
│       │   ├── associations.py
│       │   ├── base.py
│       │   ├── clustering.py
│       │   ├── compression.py
│       │   ├── consolidator.py
│       │   ├── decay.py
│       │   ├── forgetting.py
│       │   ├── health.py
│       │   └── scheduler.py
│       ├── dependency_check.py
│       ├── discovery
│       │   ├── __init__.py
│       │   ├── client.py
│       │   └── mdns_service.py
│       ├── embeddings
│       │   ├── __init__.py
│       │   └── onnx_embeddings.py
│       ├── ingestion
│       │   ├── __init__.py
│       │   ├── base.py
│       │   ├── chunker.py
│       │   ├── csv_loader.py
│       │   ├── json_loader.py
│       │   ├── pdf_loader.py
│       │   ├── registry.py
│       │   ├── semtools_loader.py
│       │   └── text_loader.py
│       ├── lm_studio_compat.py
│       ├── mcp_server.py
│       ├── models
│       │   ├── __init__.py
│       │   └── memory.py
│       ├── server.py
│       ├── services
│       │   ├── __init__.py
│       │   └── memory_service.py
│       ├── storage
│       │   ├── __init__.py
│       │   ├── base.py
│       │   ├── cloudflare.py
│       │   ├── factory.py
│       │   ├── http_client.py
│       │   ├── hybrid.py
│       │   └── sqlite_vec.py
│       ├── sync
│       │   ├── __init__.py
│       │   ├── exporter.py
│       │   ├── importer.py
│       │   └── litestream_config.py
│       ├── utils
│       │   ├── __init__.py
│       │   ├── cache_manager.py
│       │   ├── content_splitter.py
│       │   ├── db_utils.py
│       │   ├── debug.py
│       │   ├── document_processing.py
│       │   ├── gpu_detection.py
│       │   ├── hashing.py
│       │   ├── http_server_manager.py
│       │   ├── port_detection.py
│       │   ├── system_detection.py
│       │   └── time_parser.py
│       └── web
│           ├── __init__.py
│           ├── api
│           │   ├── __init__.py
│           │   ├── analytics.py
│           │   ├── backup.py
│           │   ├── consolidation.py
│           │   ├── documents.py
│           │   ├── events.py
│           │   ├── health.py
│           │   ├── manage.py
│           │   ├── mcp.py
│           │   ├── memories.py
│           │   ├── search.py
│           │   └── sync.py
│           ├── app.py
│           ├── dependencies.py
│           ├── oauth
│           │   ├── __init__.py
│           │   ├── authorization.py
│           │   ├── discovery.py
│           │   ├── middleware.py
│           │   ├── models.py
│           │   ├── registration.py
│           │   └── storage.py
│           ├── sse.py
│           └── static
│               ├── app.js
│               ├── index.html
│               ├── README.md
│               ├── sse_test.html
│               └── style.css
├── start_http_debug.bat
├── start_http_server.sh
├── test_document.txt
├── test_version_checker.js
├── tests
│   ├── __init__.py
│   ├── api
│   │   ├── __init__.py
│   │   ├── test_compact_types.py
│   │   └── test_operations.py
│   ├── bridge
│   │   ├── mock_responses.js
│   │   ├── package-lock.json
│   │   ├── package.json
│   │   └── test_http_mcp_bridge.js
│   ├── conftest.py
│   ├── consolidation
│   │   ├── __init__.py
│   │   ├── conftest.py
│   │   ├── test_associations.py
│   │   ├── test_clustering.py
│   │   ├── test_compression.py
│   │   ├── test_consolidator.py
│   │   ├── test_decay.py
│   │   └── test_forgetting.py
│   ├── contracts
│   │   └── api-specification.yml
│   ├── integration
│   │   ├── package-lock.json
│   │   ├── package.json
│   │   ├── test_api_key_fallback.py
│   │   ├── test_api_memories_chronological.py
│   │   ├── test_api_tag_time_search.py
│   │   ├── test_api_with_memory_service.py
│   │   ├── test_bridge_integration.js
│   │   ├── test_cli_interfaces.py
│   │   ├── test_cloudflare_connection.py
│   │   ├── test_concurrent_clients.py
│   │   ├── test_data_serialization_consistency.py
│   │   ├── test_http_server_startup.py
│   │   ├── test_mcp_memory.py
│   │   ├── test_mdns_integration.py
│   │   ├── test_oauth_basic_auth.py
│   │   ├── test_oauth_flow.py
│   │   ├── test_server_handlers.py
│   │   └── test_store_memory.py
│   ├── performance
│   │   ├── test_background_sync.py
│   │   └── test_hybrid_live.py
│   ├── README.md
│   ├── smithery
│   │   └── test_smithery.py
│   ├── sqlite
│   │   └── simple_sqlite_vec_test.py
│   ├── test_client.py
│   ├── test_content_splitting.py
│   ├── test_database.py
│   ├── test_hybrid_cloudflare_limits.py
│   ├── test_hybrid_storage.py
│   ├── test_memory_ops.py
│   ├── test_semantic_search.py
│   ├── test_sqlite_vec_storage.py
│   ├── test_time_parser.py
│   ├── test_timestamp_preservation.py
│   ├── timestamp
│   │   ├── test_hook_vs_manual_storage.py
│   │   ├── test_issue99_final_validation.py
│   │   ├── test_search_retrieval_inconsistency.py
│   │   ├── test_timestamp_issue.py
│   │   └── test_timestamp_simple.py
│   └── unit
│       ├── conftest.py
│       ├── test_cloudflare_storage.py
│       ├── test_csv_loader.py
│       ├── test_fastapi_dependencies.py
│       ├── test_import.py
│       ├── test_json_loader.py
│       ├── test_mdns_simple.py
│       ├── test_mdns.py
│       ├── test_memory_service.py
│       ├── test_memory.py
│       ├── test_semtools_loader.py
│       ├── test_storage_interface_compatibility.py
│       └── test_tag_time_filtering.py
├── tools
│   ├── docker
│   │   ├── DEPRECATED.md
│   │   ├── docker-compose.http.yml
│   │   ├── docker-compose.pythonpath.yml
│   │   ├── docker-compose.standalone.yml
│   │   ├── docker-compose.uv.yml
│   │   ├── docker-compose.yml
│   │   ├── docker-entrypoint-persistent.sh
│   │   ├── docker-entrypoint-unified.sh
│   │   ├── docker-entrypoint.sh
│   │   ├── Dockerfile
│   │   ├── Dockerfile.glama
│   │   ├── Dockerfile.slim
│   │   ├── README.md
│   │   └── test-docker-modes.sh
│   └── README.md
└── uv.lock
```

# Files

--------------------------------------------------------------------------------
/scripts/maintenance/find_all_duplicates.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python3
"""Find all near-duplicate memories across the database."""

import hashlib
import platform
import re
import sqlite3
from collections import defaultdict
from pathlib import Path

# Platform-specific database path
if platform.system() == "Darwin":  # macOS
    DB_PATH = Path.home() / "Library/Application Support/mcp-memory/sqlite_vec.db"
else:  # Linux/Windows
    DB_PATH = Path.home() / ".local/share/mcp-memory/sqlite_vec.db"


def normalize_content(content):
    """Normalize content by removing timestamps and session-specific data."""
    # Remove common timestamp patterns
    normalized = content
    normalized = re.sub(r'\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}Z', 'TIMESTAMP', normalized)
    normalized = re.sub(r'\*\*Date\*\*: \d{2,4}[./]\d{2}[./]\d{2,4}', '**Date**: DATE', normalized)
    normalized = re.sub(r'Timestamp: \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}', 'Timestamp: TIMESTAMP', normalized)
    return normalized.strip()


def content_hash(content):
    """Create a hash of normalized content."""
    normalized = normalize_content(content)
    return hashlib.md5(normalized.encode()).hexdigest()


def main():
    conn = sqlite3.connect(DB_PATH)
    cursor = conn.cursor()

    print("Analyzing memories for duplicates...")
    cursor.execute("SELECT content_hash, content, tags, created_at FROM memories ORDER BY created_at DESC")

    memories = cursor.fetchall()
    print(f"Total memories: {len(memories)}")

    # Group by normalized content
    content_groups = defaultdict(list)
    for mem_hash, content, tags, created_at in memories:
        norm_hash = content_hash(content)
        content_groups[norm_hash].append({
            'hash': mem_hash,
            'content': content[:200],  # First 200 chars
            'tags': tags,
            'created_at': created_at
        })

    # Find duplicates (groups with >1 memory)
    duplicates = {k: v for k, v in content_groups.items() if len(v) > 1}

    if not duplicates:
        print("✅ No duplicates found!")
        conn.close()
        return

    print(f"\n❌ Found {len(duplicates)} groups of duplicates:")

    total_duplicate_count = 0
    for i, (norm_hash, group) in enumerate(duplicates.items(), 1):
        count = len(group)
        total_duplicate_count += count - 1  # Keep one, delete the rest

        print(f"\n{i}. Group with {count} duplicates:")
        print(f"   Content preview: {group[0]['content'][:100]}...")
        print(f"   Tags: {group[0]['tags'][:80]}...")
        print(f"   Hash to keep: {group[0]['hash'][:16]}... (newest)")
        print(f"   Hashes to delete: {count - 1} older duplicates")

        if i >= 10:  # Show only the first 10 groups
            remaining = len(duplicates) - 10
            print(f"\n... and {remaining} more duplicate groups")
            break

    print("\n📊 Summary:")
    print(f"   Total duplicate groups: {len(duplicates)}")
    print(f"   Total memories to delete: {total_duplicate_count}")
    print(f"   Total memories after cleanup: {len(memories) - total_duplicate_count}")

    conn.close()


if __name__ == "__main__":
    main()
```
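The normalize-then-hash approach above can be exercised in isolation, without a database. This is a minimal sketch using the same regex idea and hashing scheme as the script; the sample records are hypothetical.

```python
# Demonstrates why two memories that differ only in an ISO timestamp
# collapse into one duplicate group after normalization.
import hashlib
import re
from collections import defaultdict

def normalize(content: str) -> str:
    # Replace ISO-8601 timestamps with a fixed token, as the script does
    return re.sub(r'\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}Z', 'TIMESTAMP', content).strip()

def norm_hash(content: str) -> str:
    return hashlib.md5(normalize(content).encode()).hexdigest()

records = [
    "Session saved at 2025-01-02T10:00:00.000Z",
    "Session saved at 2025-03-04T11:30:00.000Z",  # same text, different timestamp
    "Unrelated note",
]

groups = defaultdict(list)
for r in records:
    groups[norm_hash(r)].append(r)

duplicates = {h: g for h, g in groups.items() if len(g) > 1}
print(len(duplicates))  # 1: the two timestamped records hash identically
```

Note that normalization is lossy by design: it trades exactness for catching near-duplicates, so any pattern not covered by the regexes (e.g. relative dates) still produces distinct hashes.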

--------------------------------------------------------------------------------
/docs/integrations/gemini.md:
--------------------------------------------------------------------------------

```markdown
# Gemini Context: MCP Memory Service

## Project Overview

This project is a sophisticated, feature-rich MCP (Model Context Protocol) server that provides a persistent, semantic memory layer for AI assistants, particularly Claude Desktop. It is built with Python and leverages a variety of technologies to deliver a robust, performant memory service.

The core of the project is the `MemoryServer` class, which handles all MCP tool calls. It features a "dream-inspired" memory consolidation system that autonomously organizes and compresses memories over time. The server is built on top of FastAPI, providing a modern, asynchronous API.

The project offers two distinct storage backends, allowing users to choose the best fit for their needs:

*   **ChromaDB:** A feature-rich vector database that provides advanced search capabilities and is well suited to large memory collections.
*   **SQLite-vec:** A lightweight, file-based backend that uses the `sqlite-vec` extension for vector similarity search. This is a great option for resource-constrained environments.

The project also includes a comprehensive suite of scripts for installation, testing, and maintenance, as well as detailed documentation.

## Building and Running

### Installation

The project uses a custom installer that detects the user's hardware and selects the optimal configuration. To install, run the following commands:

```bash
# Clone the repository
git clone https://github.com/doobidoo/mcp-memory-service.git
cd mcp-memory-service

# Create and activate a virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate

# Run the intelligent installer
python install.py
```

### Running the Server

The server can be run in several ways, depending on the desired configuration. The primary entry point is the `mcp_memory_service.server:main` script, which can be executed as a Python module:

```bash
python -m mcp_memory_service.server
```

Alternatively, the `pyproject.toml` file defines a `memory` script that can be used to run the server:

```bash
memory
```

The server can also be run as a FastAPI application using `uvicorn`:

```bash
uvicorn mcp_memory_service.server:app --host 0.0.0.0 --port 8000
```

### Testing

The project includes a comprehensive test suite that can be run using `pytest`:

```bash
pytest tests/
```

## Development Conventions

*   **Python 3.10+:** The project requires Python 3.10 or higher.
*   **Type Hinting:** The codebase uses type hints extensively to improve code clarity and maintainability.
*   **Async/Await:** The project uses the `async/await` pattern for all I/O operations to ensure high performance and scalability.
*   **PEP 8:** The code follows the PEP 8 style guide.
*   **Dataclasses:** The project uses dataclasses for data models to reduce boilerplate code.
*   **Docstrings:** All modules and functions have triple-quoted docstrings that explain their purpose, arguments, and return values.
*   **Testing:** All new features should be accompanied by tests to ensure they work as expected and don't introduce regressions.
```

--------------------------------------------------------------------------------
/src/mcp_memory_service/web/api/events.py:
--------------------------------------------------------------------------------

```python
"""
Server-Sent Events endpoints for real-time updates.
"""

from fastapi import APIRouter, Request, Depends
from pydantic import BaseModel
from typing import Dict, Any, List, TYPE_CHECKING

from ...config import OAUTH_ENABLED
from ..sse import create_event_stream, sse_manager
from ..dependencies import get_storage

# OAuth authentication imports (conditional)
if OAUTH_ENABLED or TYPE_CHECKING:
    from ..oauth.middleware import require_read_access, AuthenticationResult
else:
    # Provide type stubs when OAuth is disabled
    AuthenticationResult = None
    require_read_access = None

router = APIRouter()


class ConnectionInfo(BaseModel):
    """Individual connection information."""
    connection_id: str
    client_ip: str
    user_agent: str
    connected_duration_seconds: float
    last_heartbeat_seconds_ago: float


class SSEStatsResponse(BaseModel):
    """Response model for SSE connection statistics."""
    total_connections: int
    heartbeat_interval: int
    connections: List[ConnectionInfo]


@router.get("/events")
async def events_endpoint(
    request: Request,
    user: AuthenticationResult = Depends(require_read_access) if OAUTH_ENABLED else None
):
    """
    Server-Sent Events endpoint for real-time updates.

    Provides a continuous stream of events including:
    - memory_stored: When new memories are added
    - memory_deleted: When memories are removed
    - search_completed: When searches finish
    - health_update: System status changes
    - heartbeat: Periodic keep-alive signals
    - connection_established: Welcome message
    """
    return await create_event_stream(request)


@router.get("/events/stats")
async def get_sse_stats(
    user: AuthenticationResult = Depends(require_read_access) if OAUTH_ENABLED else None
):
    """
    Get statistics about current SSE connections.

    Returns information about active connections, connection duration,
    and heartbeat status.
    """
    try:
        # Get raw stats first to debug the structure
        stats = sse_manager.get_connection_stats()

        # Validate structure and transform if needed
        connections = []
        for conn_data in stats.get('connections', []):
76 |             connections.append({
77 |                 "connection_id": conn_data.get("connection_id", "unknown"),
78 |                 "client_ip": conn_data.get("client_ip", "unknown"),
79 |                 "user_agent": conn_data.get("user_agent", "unknown"),
80 |                 "connected_duration_seconds": conn_data.get("connected_duration_seconds", 0.0),
81 |                 "last_heartbeat_seconds_ago": conn_data.get("last_heartbeat_seconds_ago", 0.0)
82 |             })
83 |         
84 |         return {
85 |             "total_connections": stats.get("total_connections", 0),
86 |             "heartbeat_interval": stats.get("heartbeat_interval", 30),
87 |             "connections": connections
88 |         }
89 |     except Exception as e:
90 |         import logging
91 |         logging.getLogger(__name__).error(f"Error getting SSE stats: {str(e)}")
92 |         # Return safe default stats if there's an error
93 |         return {
94 |             "total_connections": 0,
95 |             "heartbeat_interval": 30,
96 |             "connections": []
97 |         }
```
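For illustration, a client consuming this `/events` stream receives blank-line-separated `event:`/`data:` blocks. The event names below come from the endpoint's docstring; the parser itself is a hypothetical client-side sketch, not part of the service:

```python
import json

def parse_sse(raw: str) -> list[dict]:
    """Parse a raw SSE payload into a list of {event, data} dicts."""
    events = []
    for block in raw.strip().split("\n\n"):  # events are blank-line separated
        event, data = None, []
        for line in block.splitlines():
            if line.startswith("event:"):
                event = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data.append(line[len("data:"):].strip())
        if event:
            events.append({"event": event,
                           "data": json.loads("\n".join(data)) if data else None})
    return events

sample = 'event: heartbeat\ndata: {"interval": 30}\n\nevent: memory_stored\ndata: {"ok": true}\n'
print(parse_sse(sample))
```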

--------------------------------------------------------------------------------
/start_http_debug.bat:
--------------------------------------------------------------------------------

```
  1 | @echo off
  2 | REM MCP Memory Service HTTP Debug Mode Startup Script
  3 | REM This script starts the MCP Memory Service in HTTP mode for debugging and testing
  4 | 
  5 | echo ========================================
  6 | echo MCP Memory Service HTTP Debug Mode
  7 | echo Using uv for dependency management
  8 | echo ========================================
  9 | 
 10 | REM Check if uv is available
 11 | uv --version >nul 2>&1
 12 | if %errorlevel% neq 0 (
 13 |     echo ERROR: uv is not installed or not in PATH
 14 |     echo Please install uv: https://docs.astral.sh/uv/getting-started/installation/
 15 |     pause
 16 |     exit /b 1
 17 | )
 18 | 
 19 | echo uv version:
 20 | uv --version
 21 | 
 22 | REM Install dependencies using uv sync (recommended)
 23 | echo.
 24 | echo Installing dependencies...
 25 | echo This may take a few minutes on first run...
 26 | echo Installing core dependencies...
 27 | uv sync
 28 | 
 29 | REM Check if installation was successful
 30 | if %errorlevel% neq 0 (
 31 |     echo ERROR: Failed to install dependencies
 32 |     echo Please check the error messages above
 33 |     pause
 34 |     exit /b 1
 35 | )
 36 | 
 37 | echo Dependencies installed successfully!
 38 | 
 39 | REM Verify Python can import the service
 40 | echo.
 41 | echo Verifying installation...
 42 | uv run python -c "import sys; sys.path.insert(0, 'src'); import mcp_memory_service; print('MCP Memory Service imported successfully')"

 43 | if %errorlevel% neq 0 (
 44 |     echo ERROR: Failed to import MCP Memory Service
 45 |     echo Please check the error messages above
 46 |     pause
 47 |     exit /b 1
 48 | )
 49 | 
 50 | REM Set environment variables for HTTP mode
 51 | set MCP_MEMORY_STORAGE_BACKEND=sqlite_vec
 52 | set MCP_HTTP_ENABLED=true
 53 | set MCP_HTTP_PORT=8000
 54 | set MCP_HTTPS_ENABLED=false
 55 | set MCP_MDNS_ENABLED=true
 56 | set MCP_MDNS_SERVICE_NAME=MCP-Memory-Service-Debug
 57 | 
 58 | REM Fix Transformers cache warning
 59 | set HF_HOME=%USERPROFILE%\.cache\huggingface
 60 | set TRANSFORMERS_CACHE=%USERPROFILE%\.cache\huggingface\transformers
 61 | 
 62 | REM Optional: Set API key for security
 63 | REM To use authentication, set your own API key in the environment variable:
 64 | REM set MCP_API_KEY=your-secure-api-key-here
 65 | REM Or pass it when running this script: set MCP_API_KEY=mykey && start_http_debug.bat
 66 | if "%MCP_API_KEY%"=="" (
 67 |     echo WARNING: No API key set. Running without authentication.
 68 |     echo          To enable auth, set MCP_API_KEY environment variable.
 69 | )
 70 | 
 71 | REM Optional: Enable debug logging
 72 | set MCP_DEBUG=true
 73 | 
 74 | echo Configuration:
 75 | echo   Storage Backend: %MCP_MEMORY_STORAGE_BACKEND%
 76 | echo   HTTP Port: %MCP_HTTP_PORT%
 77 | echo   HTTPS Enabled: %MCP_HTTPS_ENABLED%
 78 | echo   mDNS Enabled: %MCP_MDNS_ENABLED%
 79 | echo   Service Name: %MCP_MDNS_SERVICE_NAME%
 80 | if "%MCP_API_KEY%"=="" (
 81 |     echo   API Key Set: No ^(WARNING: No authentication^)
 82 | ) else (
 83 |     echo   API Key Set: Yes
 84 | )
 85 | echo   Debug Mode: %MCP_DEBUG%
 86 | echo.
 87 | 
 88 | echo Service will be available at:
 89 | echo   HTTP: http://localhost:%MCP_HTTP_PORT%
 90 | echo   API: http://localhost:%MCP_HTTP_PORT%/api
 91 | echo   Health: http://localhost:%MCP_HTTP_PORT%/api/health
 92 | echo   Dashboard: http://localhost:%MCP_HTTP_PORT%/dashboard
 93 | echo.
 94 | echo Press Ctrl+C to stop the service
 95 | echo.
 96 | echo ========================================
 97 | echo Starting MCP Memory Service...
 98 | echo ========================================
 99 | 
100 | REM Start the service using Python directly (required for HTTP mode)
101 | echo Starting service with Python...
102 | echo Note: Using Python directly for HTTP server mode
103 | uv run python run_server.py
```

--------------------------------------------------------------------------------
/scripts/pr/amp_detect_breaking_changes.sh:
--------------------------------------------------------------------------------

```bash
 1 | #!/bin/bash
 2 | # scripts/pr/amp_detect_breaking_changes.sh - Detect breaking API changes using Amp CLI
 3 | #
 4 | # Usage: bash scripts/pr/amp_detect_breaking_changes.sh <BASE_BRANCH> <HEAD_BRANCH>
 5 | # Example: bash scripts/pr/amp_detect_breaking_changes.sh main feature/new-api
 6 | 
 7 | set -e
 8 | 
 9 | BASE_BRANCH=${1:-main}
10 | HEAD_BRANCH=${2:-$(git branch --show-current)}
11 | 
12 | echo "=== Amp CLI Breaking Change Detection ==="
13 | echo "Base Branch: $BASE_BRANCH"
14 | echo "Head Branch: $HEAD_BRANCH"
15 | echo ""
16 | 
17 | # Ensure Amp directories exist
18 | mkdir -p .claude/amp/prompts/pending
19 | mkdir -p .claude/amp/responses/ready
20 | 
21 | # Get API-related file changes
22 | echo "Analyzing API changes..."
23 | api_changes=$(git diff "origin/${BASE_BRANCH}...origin/${HEAD_BRANCH}" -- \

24 |     src/mcp_memory_service/tools.py \
25 |     src/mcp_memory_service/web/api/ \
26 |     2>/dev/null || echo "")
27 | 
28 | if [ -z "$api_changes" ]; then
29 |     echo "✅ No API changes detected"
30 |     exit 0
31 | fi
32 | 
33 | echo "API changes detected ($(echo "$api_changes" | wc -l) lines)"
34 | echo ""
35 | 
36 | # Generate UUID for breaking change analysis
37 | breaking_uuid=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)
38 | 
39 | echo "Creating Amp prompt for breaking change analysis..."
40 | 
41 | # Truncate large diffs to avoid token overflow
42 | api_changes_truncated=$(echo "$api_changes" | head -300)
43 | 
44 | # Create breaking change analysis prompt
45 | cat > .claude/amp/prompts/pending/breaking-${breaking_uuid}.json << EOF
46 | {
47 |   "id": "${breaking_uuid}",
48 |   "timestamp": "$(date -u +"%Y-%m-%dT%H:%M:%S.000Z")",
49 |   "prompt": "Analyze these API changes for breaking changes. A breaking change is:\n- Removed function/method/endpoint\n- Changed function signature (parameters removed/reordered)\n- Changed return type in incompatible way\n- Renamed public API\n- Changed HTTP endpoint path/method\n- Changed MCP tool schema (added required parameters, removed optional parameters, changed parameter types)\n\nReport ONLY breaking changes with severity (CRITICAL/HIGH/MEDIUM). If no breaking changes, respond: 'BREAKING_CHANGES_NONE'.\n\nOutput format:\nSeverity: [CRITICAL|HIGH|MEDIUM]\nType: [removal|signature-change|rename|schema-change]\nLocation: [file:function/endpoint]\nDetails: [explanation]\nMigration: [suggested migration path]\n\nAPI Changes:\n${api_changes_truncated}",
50 |   "context": {
51 |     "project": "mcp-memory-service",
52 |     "task": "breaking-change-detection",
53 |     "base_branch": "${BASE_BRANCH}",
54 |     "head_branch": "${HEAD_BRANCH}"
55 |   },
56 |   "options": {
57 |     "timeout": 120000,
58 |     "format": "text"
59 |   }
60 | }
61 | EOF
62 | 
63 | echo "✅ Created Amp prompt for breaking change analysis"
64 | echo ""
65 | echo "=== Run this Amp command ==="
66 | echo "amp @.claude/amp/prompts/pending/breaking-${breaking_uuid}.json"
67 | echo ""
68 | echo "=== Then collect the analysis ==="
69 | echo "bash scripts/pr/amp_collect_results.sh --timeout 120 --uuids '${breaking_uuid}'"
70 | echo ""
71 | 
72 | # Alternative: Direct analysis with custom result handler
73 | echo "=== Or use this one-liner for immediate analysis ==="
74 | echo "(amp @.claude/amp/prompts/pending/breaking-${breaking_uuid}.json > /tmp/amp-breaking.log 2>&1); sleep 5 && bash scripts/pr/amp_analyze_breaking_changes.sh '${breaking_uuid}'"
75 | echo ""
76 | 
77 | # Save UUID for later collection
78 | echo "${breaking_uuid}" > /tmp/amp_breaking_changes_uuid.txt
79 | echo "UUID saved to /tmp/amp_breaking_changes_uuid.txt"
80 | 
```

--------------------------------------------------------------------------------
/docs/HOOK_IMPROVEMENTS.md:
--------------------------------------------------------------------------------

```markdown
 1 | # Claude Code Session Hook Improvements
 2 | 
 3 | ## Overview
 4 | Enhanced the session start hook to prioritize recent memories and provide better context awareness for Claude Code sessions.
 5 | 
 6 | ## Key Improvements Made
 7 | 
 8 | ### 1. Multi-Phase Memory Retrieval
 9 | - **Phase 1**: Recent memories (last week) - 60% of available slots
10 | - **Phase 2**: Important tagged memories (architecture, decisions) - remaining slots
11 | - **Phase 3**: Fallback to general project context if needed
12 | 
13 | ### 2. Enhanced Recency Prioritization
14 | - Recent memories get higher priority in initial search
15 | - Time-based indicators: 🕒 today, 📅 this week, regular dates for older
16 | - Configurable time windows (`last-week`, `last-2-weeks`, `last-month`)
17 | 
18 | ### 3. Better Memory Categorization
19 | - New "Recent Work" category for memories from last 7 days
20 | - Improved categorization: Recent → Decisions → Architecture → Insights → Features → Context
21 | - Visual indicators for recency in CLI output
22 | 
23 | ### 4. Enhanced Semantic Queries  
24 | - Git context integration (branch, recent commits)
25 | - Framework and language context in queries
26 | - User message context when available
27 | 
28 | ### 5. Improved Configuration
29 | ```json
30 | {
31 |   "memoryService": {
32 |     "recentFirstMode": true,           // Enable multi-phase retrieval
33 |     "recentMemoryRatio": 0.6,          // 60% for recent memories
34 |     "recentTimeWindow": "last-week",   // Time window for recent search
35 |     "fallbackTimeWindow": "last-month" // Fallback time window
36 |   },
37 |   "output": {
38 |     "showMemoryDetails": true,         // Show detailed memory info
39 |     "showRecencyInfo": true,           // Show recency indicators
40 |     "showPhaseDetails": true           // Show search phase details
41 |   }
42 | }
43 | ```
44 | 
45 | ### 6. Better Visual Feedback
46 | - Phase-by-phase search reporting
47 | - Recency indicators in memory display
48 | - Enhanced scoring display with time flags
49 | - Better deduplication reporting
50 | 
51 | ## Expected Impact
52 | 
53 | ### Before
54 | - Single query for all memories
55 | - No recency prioritization
56 | - Limited context in queries
57 | - Basic categorization
58 | - Truncated output
59 | 
60 | ### After  
61 | - Multi-phase approach prioritizing recent memories
62 | - Smart time-based retrieval
63 | - Git and framework-aware queries
64 | - Enhanced categorization with "Recent Work"
65 | - Full context display with recency indicators
66 | 
67 | ## Usage
68 | 
69 | The improvements are **backward compatible** - existing installations will automatically use the enhanced system. To disable, set:
70 | 
71 | ```json
72 | {
73 |   "memoryService": {
74 |     "recentFirstMode": false
75 |   }
76 | }
77 | ```
78 | 
79 | ## Files Modified
80 | 
81 | 1. `claude-hooks/core/session-start.js` - Multi-phase retrieval logic
82 | 2. `claude-hooks/utilities/context-formatter.js` - Enhanced display and categorization  
83 | 3. `claude-hooks/config.json` - New configuration options
84 | 4. `test-hook.js` - Test script for validation
85 | 
86 | ## Testing
87 | 
88 | Run `node test-hook.js` to test the enhanced hook with mock context. The test demonstrates:
89 | - Project detection and context building
90 | - Multi-phase memory retrieval
91 | - Enhanced categorization and display
92 | - Git context integration
93 | - Configurable time windows
94 | 
95 | ## Result
96 | 
97 | Session hooks now provide more relevant, recent context while maintaining access to important historical decisions and architecture information. Users get better continuity with their recent work while preserving long-term project memory.
```
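The 60/40 slot split described in the configuration above can be modeled in a few lines (a simplified sketch of the allocation, not the actual hook code, which is JavaScript):

```python
def allocate_slots(total_slots: int, recent_ratio: float = 0.6) -> tuple[int, int]:
    """Split available memory slots between recent and important memories."""
    recent = int(total_slots * recent_ratio)   # Phase 1: recent memories (last week)
    important = total_slots - recent           # Phase 2: tagged/important memories
    return recent, important

print(allocate_slots(10))  # → (6, 4)
```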

--------------------------------------------------------------------------------
/claude_commands/session-start.md:
--------------------------------------------------------------------------------

```markdown
 1 | # Display Session Memory Context
 2 | 
 3 | Run the session-start memory awareness hook manually to display relevant memories, project context, and git analysis.
 4 | 
 5 | ## What this does:
 6 | 
 7 | Executes the session-start.js hook to:
 8 | 1. **Load Project Context**: Detect current project and framework
 9 | 2. **Analyze Git History**: Review recent commits and changes
10 | 3. **Retrieve Relevant Memories**: Find memories related to current project
11 | 4. **Display Memory Context**: Show categorized memories:
12 |    - 🔥 Recent Work
13 |    - ⚠️ Current Problems
14 |    - 📋 Additional Context
15 | 
16 | ## Usage:
17 | 
18 | ```bash
19 | claude /session-start
20 | ```
21 | 
22 | ## Windows Compatibility:
23 | 
24 | This command is specifically designed as a **Windows workaround** for the SessionStart hook bug (#160).
25 | 
26 | On Windows, SessionStart hooks cause Claude Code to hang indefinitely. This slash command provides the same functionality but can be triggered manually when you start a new session.
27 | 
28 | **Works on all platforms**: Windows, macOS, Linux
29 | 
30 | ## When to use:
31 | 
32 | - At the start of each coding session
33 | - When switching projects or contexts
34 | - After compacting conversations to refresh memory context
35 | - When you need to see what memories are available
36 | 
37 | ## What you'll see:
38 | 
39 | ```
40 | 🧠 Memory Hook → Initializing session awareness...
41 | 📂 Project: mcp-memory-service
42 | 💾 Storage: sqlite-vec (Connected) • 1968 memories • 15.37MB
43 | 📊 Git Context → 10 commits, 3 changelog entries
44 | 
45 | 📚 Memory Search → Found 4 relevant memories (2 recent)
46 | 
47 | ┌─ 🧠 Injected Memory Context → mcp-memory-service, FastAPI, Python
48 | │
49 | ├─ 🔥 Recent Work:
50 | │  ├─ MCP Memory Service v8.6... 📅 6d ago
51 | │  └─ Session Summary - mcp-memory-service... 📅 6d ago
52 | │
53 | ├─ ⚠️ Current Problems:
54 | │  └─ Dream-Inspired Memory Consolidation... 📅 Oct 22
55 | │
56 | └─ 📋 Additional Context:
57 |    └─ MCP Memory Service v8.5... 📅 Oct 22
58 | ```
59 | 
60 | ## Alternative: Automatic Mid-Conversation Hook
61 | 
62 | Your UserPromptSubmit hook already runs automatically and retrieves memories when appropriate patterns are detected. This command is for when you want to **explicitly see** the memory context at session start.
63 | 
64 | ## Technical Details:
65 | 
66 | - Runs: `node ~/.claude/hooks/core/session-start.js`
67 | - HTTP endpoint: http://127.0.0.1:8000
68 | - Protocol: HTTP (MCP fallback if HTTP unavailable)
69 | - Performance: <2 seconds typical execution time
70 | 
71 | ## Troubleshooting:
72 | 
73 | ### Command not found
74 | - Ensure hooks are installed: `ls ~/.claude/hooks/core/session-start.js`
75 | - Reinstall: `cd claude-hooks && python install_hooks.py --basic`
76 | 
77 | ### No memories displayed
78 | - Check HTTP server is running: `curl http://127.0.0.1:8000/api/health`
79 | - Verify hooks config: `cat ~/.claude/hooks/config.json`
80 | - Check endpoint matches: Should be `http://127.0.0.1:8000`
81 | 
82 | ### Error: Cannot find module
83 | - **Windows**: Ensure path is quoted properly in hooks config
84 | - Check Node.js installed: `node --version`
85 | - Verify hook file exists at expected location
86 | 
87 | ## Related:
88 | 
89 | - **GitHub Issue**: [#160 - Windows SessionStart hook bug](https://github.com/doobidoo/mcp-memory-service/issues/160)
90 | - **Technical Analysis**: `claude-hooks/WINDOWS-SESSIONSTART-BUG.md`
91 | - **Hook Documentation**: `claude-hooks/README.md`
92 | 
93 | ---
94 | 
95 | **For Windows Users**: This is the **recommended workaround** for session initialization until the SessionStart hook bug is fixed in Claude Code core.
96 | 
```

--------------------------------------------------------------------------------
/scripts/maintenance/repair_memories.py:
--------------------------------------------------------------------------------

```python
 1 | # Copyright 2024 Heinrich Krupp
 2 | #
 3 | # Licensed under the Apache License, Version 2.0 (the "License");
 4 | # you may not use this file except in compliance with the License.
 5 | # You may obtain a copy of the License at
 6 | #
 7 | #     http://www.apache.org/licenses/LICENSE-2.0
 8 | #
 9 | # Unless required by applicable law or agreed to in writing, software
10 | # distributed under the License is distributed on an "AS IS" BASIS,
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | # See the License for the specific language governing permissions and
13 | # limitations under the License.
14 | 
15 | # scripts/repair_memories.py
16 | 
17 | import asyncio
18 | import json
19 | import logging
20 | from mcp_memory_service.storage.chroma import ChromaMemoryStorage
21 | from mcp_memory_service.utils.hashing import generate_content_hash
22 | import argparse
23 | 
24 | logger = logging.getLogger(__name__)
25 | 
26 | async def repair_missing_hashes(storage):
27 |     """Repair memories missing content_hash by generating new ones"""
28 |     results = storage.collection.get(
29 |         include=["metadatas", "documents"]
30 |     )
31 |     
32 |     fixed_count = 0
33 |     for i, meta in enumerate(results["metadatas"]):
34 |         memory_id = results["ids"][i]
35 |         
36 |         if "content_hash" not in meta:
37 |             try:
38 |                 # Generate hash from content and metadata
39 |                 content = results["documents"][i]
40 |                 # Create a copy of metadata without the content_hash field
41 |                 meta_for_hash = {k: v for k, v in meta.items() if k != "content_hash"}
42 |                 new_hash = generate_content_hash(content, meta_for_hash)
43 |                 
44 |                 # Update metadata with new hash
45 |                 new_meta = meta.copy()
46 |                 new_meta["content_hash"] = new_hash
47 |                 
48 |                 # Update the memory
49 |                 storage.collection.update(
50 |                     ids=[memory_id],
51 |                     metadatas=[new_meta]
52 |                 )
53 |                 
54 |                 logger.info(f"Fixed memory {memory_id} with new hash: {new_hash}")
55 |                 fixed_count += 1
56 |                 
57 |             except Exception as e:
58 |                 logger.error(f"Error fixing memory {memory_id}: {str(e)}")
59 |     
60 |     return fixed_count
61 | 
62 | async def main():
63 |     import os, sys  # needed for the env lookup and stderr logging below
64 |     log_level = os.getenv('LOG_LEVEL', 'ERROR').upper()
65 |     logging.basicConfig(level=getattr(logging, log_level, logging.ERROR),
66 |                         format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
67 |                         stream=sys.stderr)
68 | 
69 |     
70 |     parser = argparse.ArgumentParser(description='Repair memories with missing content hashes')
71 |     parser.add_argument('--db-path', required=True, help='Path to ChromaDB database')
72 |     args = parser.parse_args()
73 |     
74 |     logger.info(f"Connecting to database at: {args.db_path}")
75 |     storage = ChromaMemoryStorage(args.db_path)
76 |     
77 |     logger.info("Starting repair process...")
78 |     fixed_count = await repair_missing_hashes(storage)
79 |     logger.info(f"Repair completed. Fixed {fixed_count} memories")
80 |     
81 |     # Run validation again to confirm fixes
82 |     logger.info("Running validation to confirm fixes...")
83 |     from validate_memories import run_validation_report
84 |     report = await run_validation_report(storage)
85 |     print("\nPost-repair validation report:")
86 |     print(report)
87 | 
88 | if __name__ == "__main__":
89 |     asyncio.run(main())
```
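The repair above regenerates hashes via `generate_content_hash(content, meta)`. Its actual implementation may differ; as an assumption for illustration, a deterministic content hash would combine the content with its metadata, with keys sorted so key order does not change the digest:

```python
import hashlib
import json

def content_hash(content: str, metadata: dict) -> str:
    """SHA-256 over content plus key-sorted metadata (illustrative only)."""
    payload = content + json.dumps(metadata, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

h1 = content_hash("hello", {"b": 2, "a": 1})
h2 = content_hash("hello", {"a": 1, "b": 2})
print(h1 == h2)  # → True: metadata key order does not affect the digest
```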

--------------------------------------------------------------------------------
/tests/test_database.py:
--------------------------------------------------------------------------------

```python
  1 | """
  2 | MCP Memory Service
  3 | Copyright (c) 2024 Heinrich Krupp
  4 | Licensed under the MIT License. See LICENSE file in the project root for full license text.
  5 | 
  6 | Test database operations of the MCP Memory Service.
  7 | """
  8 | 
  9 | import pytest
 10 | import pytest_asyncio
 11 | import asyncio
 12 | import os
 13 | from mcp.server import Server
 14 | from mcp.server.models import InitializationOptions
 15 | 
 16 | @pytest_asyncio.fixture
 17 | async def memory_server():
 18 |     """Create a test instance of the memory server."""
 19 |     server = Server("test-memory")
 20 |     await server.initialize(InitializationOptions(
 21 |         server_name="test-memory",
 22 |         server_version="0.1.0"
 23 |     ))
 24 |     yield server
 25 |     await server.shutdown()
 26 | 
 27 | @pytest.mark.asyncio
 28 | async def test_create_backup(memory_server):
 29 |     """Test database backup creation."""
 30 |     # Store some test data
 31 |     await memory_server.store_memory(
 32 |         content="Test memory for backup"
 33 |     )
 34 |     
 35 |     # Create backup
 36 |     backup_response = await memory_server.create_backup()
 37 |     
 38 |     assert backup_response.get("success") is True
 39 |     assert backup_response.get("backup_path") is not None
 40 |     assert os.path.exists(backup_response.get("backup_path"))
 41 | 
 42 | @pytest.mark.asyncio
 43 | async def test_database_health(memory_server):
 44 |     """Test database health check functionality."""
 45 |     health_status = await memory_server.check_database_health()
 46 |     
 47 |     assert health_status is not None
 48 |     assert "status" in health_status
 49 |     assert "memory_count" in health_status
 50 |     assert "database_size" in health_status
 51 | 
 52 | @pytest.mark.asyncio
 53 | async def test_optimize_database(memory_server):
 54 |     """Test database optimization."""
 55 |     # Store multiple memories to trigger optimization
 56 |     for i in range(10):
 57 |         await memory_server.store_memory(
 58 |             content=f"Test memory {i}"
 59 |         )
 60 |     
 61 |     # Run optimization
 62 |     optimize_response = await memory_server.optimize_db()
 63 |     
 64 |     assert optimize_response.get("success") is True
 65 |     assert "optimized_size" in optimize_response
 66 | 
 67 | @pytest.mark.asyncio
 68 | async def test_cleanup_duplicates(memory_server):
 69 |     """Test duplicate memory cleanup."""
 70 |     # Store duplicate memories
 71 |     duplicate_content = "This is a duplicate memory"
 72 |     await memory_server.store_memory(content=duplicate_content)
 73 |     await memory_server.store_memory(content=duplicate_content)
 74 |     
 75 |     # Clean up duplicates
 76 |     cleanup_response = await memory_server.cleanup_duplicates()
 77 |     
 78 |     assert cleanup_response.get("success") is True
 79 |     assert cleanup_response.get("duplicates_removed") >= 1
 80 |     
 81 |     # Verify only one copy remains
 82 |     results = await memory_server.exact_match_retrieve(
 83 |         content=duplicate_content
 84 |     )
 85 |     assert len(results) == 1
 86 | 
 87 | @pytest.mark.asyncio
 88 | async def test_database_persistence(memory_server):
 89 |     """Test database persistence across server restarts."""
 90 |     test_content = "Persistent memory test"
 91 |     
 92 |     # Store memory
 93 |     await memory_server.store_memory(content=test_content)
 94 |     
 95 |     # Simulate server restart
 96 |     await memory_server.shutdown()
 97 |     await memory_server.initialize(InitializationOptions(
 98 |         server_name="test-memory",
 99 |         server_version="0.1.0"
100 |     ))
101 |     
102 |     # Verify memory persists
103 |     results = await memory_server.exact_match_retrieve(
104 |         content=test_content
105 |     )
106 |     assert len(results) == 1
107 |     assert results[0] == test_content
```

--------------------------------------------------------------------------------
/claude_commands/memory-ingest-dir.md:
--------------------------------------------------------------------------------

```markdown
  1 | # memory-ingest-dir
  2 | 
  3 | Batch ingest all supported documents from a directory into the MCP Memory Service database.
  4 | 
  5 | ## Usage
  6 | 
  7 | ```
  8 | claude /memory-ingest-dir <directory_path> [--tags TAG1,TAG2] [--recursive] [--file-extensions EXT1,EXT2] [--chunk-size SIZE] [--chunk-overlap SIZE] [--max-files COUNT]
  9 | ```
 10 | 
 11 | ## Parameters
 12 | 
 13 | - `directory_path`: Path to the directory containing documents to ingest (required)
 14 | - `--tags`: Comma-separated list of tags to apply to all memories created
 15 | - `--recursive`: Process subdirectories recursively (default: true)
 16 | - `--file-extensions`: Comma-separated list of file extensions to process (default: pdf,txt,md,json)
 17 | - `--chunk-size`: Target size for text chunks in characters (default: 1000)
 18 | - `--chunk-overlap`: Characters to overlap between chunks (default: 200)
 19 | - `--max-files`: Maximum number of files to process (default: 100)
 20 | 
 21 | ## Supported Formats
 22 | 
 23 | - PDF files (.pdf)
 24 | - Text files (.txt, .md, .markdown, .rst)
 25 | - JSON files (.json)
 26 | 
 27 | ## Implementation
 28 | 
 29 | I need to upload multiple documents from a directory to the MCP Memory Service HTTP API endpoint.
 30 | 
 31 | First, let me check if the service is running and find all supported files in the directory:
 32 | 
 33 | ```bash
 34 | # Check if the service is running
 35 | curl -s http://localhost:8080/api/health || curl -s http://localhost:8443/api/health || echo "Service not running"
 36 | 
 37 | # Find supported files in the directory
 38 | find "$DIRECTORY_PATH" -type f \( -iname "*.pdf" -o -iname "*.txt" -o -iname "*.md" -o -iname "*.json" \) | head -n $MAX_FILES
 39 | ```
 40 | 
 41 | Then I'll upload the files in batch:
 42 | 
 43 | ```bash
 44 | # Create a temporary script to upload files
 45 | UPLOAD_SCRIPT=$(mktemp)
 46 | cat > "$UPLOAD_SCRIPT" << 'EOF'
 47 | #!/bin/bash
 48 | TAGS="$1"
 49 | CHUNK_SIZE="$2"
 50 | CHUNK_OVERLAP="$3"
 51 | MAX_FILES="$4"
 52 | shift 4
 53 | FILES=("$@")
 54 | 
 55 | for file in "${FILES[@]}"; do
 56 |   echo "Uploading: $file"
 57 |   curl -X POST "http://localhost:8080/api/documents/upload" \
 58 |     -F "file=@$file" \
 59 |     -F "tags=$TAGS" \
 60 |     -F "chunk_size=$CHUNK_SIZE" \
 61 |     -F "chunk_overlap=$CHUNK_OVERLAP" \
 62 |     -F "memory_type=document"
 63 |   echo ""
 64 | done
 65 | EOF
 66 | 
 67 | chmod +x "$UPLOAD_SCRIPT"
 68 | ```
 69 | 
 70 | ## Examples
 71 | 
 72 | ```
 73 | # Ingest all PDFs from a directory
 74 | claude /memory-ingest-dir ./docs --file-extensions pdf --tags documentation
 75 | 
 76 | # Recursively ingest from knowledge base
 77 | claude /memory-ingest-dir ./knowledge-base --recursive --tags knowledge,reference
 78 | 
 79 | # Limit processing to specific formats
 80 | claude /memory-ingest-dir ./articles --file-extensions md,txt --max-files 50 --tags articles
 81 | ```
 82 | 
 83 | ## Actual Execution Steps
 84 | 
 85 | When you run this command, I will:
 86 | 
 87 | 1. **Scan the directory** for supported file types (PDF, TXT, MD, JSON)
 88 | 2. **Apply filtering** based on file extensions and max files limit
 89 | 3. **Validate the service** is running and accessible
 90 | 4. **Upload files in batch** using the documents API endpoint
 91 | 5. **Monitor progress** for each file and show real-time updates
 92 | 6. **Report results** including total chunks created and any errors
 93 | 
 94 | All documents will be processed with consistent tagging and chunking parameters.
 95 | 
 96 | ## Notes
 97 | 
 98 | - Files are uploaded one at a time by the helper script above; each is chunked independently by the service
 99 | - Progress is displayed with file counts and chunk statistics
100 | - Each document gets processed independently - failures in one don't stop others
101 | - Automatic tagging includes source directory and file type information
102 | - Large directories may take time - consider using --max-files for testing
103 | 
```
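The `--chunk-size`/`--chunk-overlap` parameters above correspond to a sliding-window split; a minimal sketch of that idea (not the service's actual chunker):

```python
def chunk_text(text: str, size: int = 1000, overlap: int = 200) -> list[str]:
    """Split text into windows of `size` chars, each overlapping the previous by `overlap`."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

print(chunk_text("abcdefghij", size=4, overlap=2))  # → ['abcd', 'cdef', 'efgh', 'ghij']
```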

--------------------------------------------------------------------------------
/tests/timestamp/test_timestamp_simple.py:
--------------------------------------------------------------------------------

```python
 1 | #!/usr/bin/env python3
 2 | """Test script to debug timestamp issues in recall functionality."""
 3 | 
 4 | import time
 5 | from datetime import datetime, timedelta
 6 | 
 7 | def test_timestamp_precision():
 8 |     """Test timestamp storage and retrieval issues."""
 9 |     
10 |     print("=== Testing Timestamp Precision Issue ===")
11 |     
12 |     # Test 1: Precision loss when converting float to int
13 |     print("\n1. Testing precision loss:")
14 |     current_time = time.time()
15 |     print(f"Current time (float): {current_time}")
16 |     print(f"Current time (int): {int(current_time)}")
17 |     print(f"Difference: {current_time - int(current_time)} seconds")
18 |     
19 |     # Test 2: Edge case demonstration
20 |     print("\n2. Testing edge case with timestamps:")
21 |     
22 |     # Create timestamps for yesterday at midnight
23 |     today = datetime.now().replace(hour=0, minute=0, second=0, microsecond=0)
24 |     yesterday_start = (today - timedelta(days=1)).timestamp()
25 |     yesterday_end = (today - timedelta(microseconds=1)).timestamp()
26 |     
27 |     print(f"\nYesterday range:")
28 |     print(f"  Start (float): {yesterday_start} ({datetime.fromtimestamp(yesterday_start)})")
29 |     print(f"  End (float): {yesterday_end} ({datetime.fromtimestamp(yesterday_end)})")
30 |     print(f"  Start (int): {int(yesterday_start)}")
31 |     print(f"  End (int): {int(yesterday_end)}")
32 |     
33 |     # Test a memory created at various times yesterday
34 |     test_times = [
35 |         ("00:00:00.5", yesterday_start + 0.5),
36 |         ("00:00:30", yesterday_start + 30),
37 |         ("12:00:00", yesterday_start + 12*3600),
38 |         ("23:59:59.5", yesterday_end - 0.5)
39 |     ]
40 |     
41 |     print("\n3. Testing memory inclusion with float vs int comparison:")
42 |     for time_desc, timestamp in test_times:
43 |         print(f"\n  Memory at {time_desc} (timestamp: {timestamp}):")
44 |         
45 |         # Float comparison
46 |         float_included = (timestamp >= yesterday_start and timestamp <= yesterday_end)
47 |         print(f"    Float comparison: {float_included}")
48 |         
49 |         # Int comparison (current implementation)
50 |         int_included = (int(timestamp) >= int(yesterday_start) and int(timestamp) <= int(yesterday_end))
51 |         print(f"    Int comparison: {int_included}")
52 |         
53 |         if float_included != int_included:
54 |             print(f"    ⚠️  MISMATCH! Memory would be {'excluded' if float_included else 'included'} incorrectly!")
55 |     
56 |     # Test 4: Demonstrate the issue with ChromaDB filtering
57 |     print("\n4. ChromaDB filter comparison issue:")
58 |     print("  ChromaDB uses integer comparisons for numeric fields.")
59 |     print("  When we store timestamp as int(created_at), we lose sub-second precision.")
60 |     print("  This can cause memories to be excluded from time-based queries.")
61 |     
62 |     # Example of the fix
63 |     print("\n5. Proposed fix:")
64 |     print("  Option 1: Store timestamp as float in metadata (if ChromaDB supports it)")
65 |     print("  Option 2: Store timestamp with higher precision (e.g., milliseconds as int)")
66 |     print("  Option 3: Use ISO string timestamps for filtering")
67 |     
68 |     # Test millisecond precision
69 |     print("\n6. Testing millisecond precision:")
70 |     current_ms = int(current_time * 1000)
71 |     print(f"  Current time in ms: {current_ms}")
72 |     print(f"  Reconstructed time: {current_ms / 1000}")
73 |     print(f"  Precision preserved: {abs((current_ms / 1000) - current_time) < 0.001}")
74 | 
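# A minimal sketch of Option 2 above (millisecond-precision integer storage).
# These helper names are illustrative, not part of the service API.
def to_storage_ts(ts: float) -> int:
    """Store a float epoch timestamp as integer milliseconds."""
    return int(round(ts * 1000))

def from_storage_ts(ms: int) -> float:
    """Recover the float epoch timestamp from millisecond storage."""
    return ms / 1000.0
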
75 | if __name__ == "__main__":
76 |     test_timestamp_precision()
77 | 
```

--------------------------------------------------------------------------------
/run_server.py:
--------------------------------------------------------------------------------

```python
 1 | #!/usr/bin/env python3
 2 | """Run the MCP Memory Service with HTTP/HTTPS/mDNS support via FastAPI."""
 3 | 
 4 | import os
 5 | import sys
 6 | import uvicorn
 7 | import logging
 8 | 
 9 | # Add src to path
10 | sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'src'))
11 | 
12 | # Set up logging
13 | logging.basicConfig(level=logging.INFO)
14 | logger = logging.getLogger(__name__)
15 | 
16 | if __name__ == "__main__":
17 |     # Log configuration
18 |     logger.info("Starting MCP Memory Service FastAPI server with the following configuration:")
19 |     logger.info(f"  Storage Backend: {os.environ.get('MCP_MEMORY_STORAGE_BACKEND', 'sqlite_vec')}")
20 |     logger.info(f"  HTTP Port: {os.environ.get('MCP_HTTP_PORT', '8000')}")
21 |     logger.info(f"  HTTPS Enabled: {os.environ.get('MCP_HTTPS_ENABLED', 'false')}")
22 |     logger.info(f"  HTTPS Port: {os.environ.get('MCP_HTTPS_PORT', '8443')}")
23 |     logger.info(f"  mDNS Enabled: {os.environ.get('MCP_MDNS_ENABLED', 'false')}")
24 |     logger.info(f"  API Key Set: {'Yes' if os.environ.get('MCP_API_KEY') else 'No'}")
25 |     
26 |     http_port = int(os.environ.get('MCP_HTTP_PORT', 8000))
27 |     
28 |     # Check if HTTPS is enabled
29 |     if os.environ.get('MCP_HTTPS_ENABLED', 'false').lower() == 'true':
30 |         https_port = int(os.environ.get('MCP_HTTPS_PORT', 8443))
31 |         
32 |         # Check for environment variable certificates first
33 |         cert_file = os.environ.get('MCP_SSL_CERT_FILE')
34 |         key_file = os.environ.get('MCP_SSL_KEY_FILE')
35 |         
36 |         if cert_file and key_file:
37 |             # Use provided certificates
38 |             if not os.path.exists(cert_file):
39 |                 logger.error(f"Certificate file not found: {cert_file}")
40 |                 sys.exit(1)
41 |             if not os.path.exists(key_file):
42 |                 logger.error(f"Key file not found: {key_file}")
43 |                 sys.exit(1)
44 |             logger.info(f"Using provided certificates: {cert_file}")
45 |         else:
46 |             # Generate self-signed certificate if needed
47 |             cert_dir = os.path.expanduser("~/.mcp_memory_certs")
48 |             os.makedirs(cert_dir, exist_ok=True)
49 |             cert_file = os.path.join(cert_dir, "cert.pem")
50 |             key_file = os.path.join(cert_dir, "key.pem")
51 |             
52 |             if not os.path.exists(cert_file) or not os.path.exists(key_file):
53 |                 logger.info("Generating self-signed certificate for HTTPS...")
54 |                 import subprocess
55 |                 subprocess.run([
56 |                     "openssl", "req", "-x509", "-newkey", "rsa:4096",
57 |                     "-keyout", key_file, "-out", cert_file,
58 |                     "-days", "365", "-nodes",
59 |                     "-subj", "/C=US/ST=State/L=City/O=MCP/CN=localhost"
60 |                 ], check=True)
61 |                 logger.info(f"Certificate generated at {cert_dir}")
62 |         
63 |         # Run with HTTPS
64 |         logger.info(f"Starting HTTPS server on port {https_port}")
65 |         uvicorn.run(
66 |             "mcp_memory_service.web.app:app",
67 |             host="0.0.0.0",
68 |             port=https_port,
69 |             ssl_keyfile=key_file,
70 |             ssl_certfile=cert_file,
71 |             reload=False,
72 |             log_level="info"
73 |         )
74 |     else:
75 |         # Run HTTP only
76 |         logger.info(f"Starting HTTP server on port {http_port}")
77 |         uvicorn.run(
78 |             "mcp_memory_service.web.app:app",
79 |             host="0.0.0.0",
80 |             port=http_port,
81 |             reload=False,
82 |             log_level="info"
83 |         )
```

--------------------------------------------------------------------------------
/tools/docker/Dockerfile:
--------------------------------------------------------------------------------

```dockerfile
 1 | ## Slim (CPU-only) Docker image optimized for sqlite-vec + ONNX
 2 | ## Consolidated as the primary Dockerfile to avoid confusion.
 3 | FROM python:3.12-slim
 4 | 
 5 | # Build arguments for conditional features
 6 | ARG SKIP_MODEL_DOWNLOAD=false
 7 | ARG PLATFORM=linux/amd64
 8 | ARG INSTALL_EXTRA="[sqlite]"
 9 | ARG FORCE_CPU_PYTORCH=false
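
# Example builds (illustrative; run from the repository root so the COPY
# paths below resolve):
#   docker build -f tools/docker/Dockerfile -t mcp-memory-service .
#   docker build -f tools/docker/Dockerfile \
#     --build-arg SKIP_MODEL_DOWNLOAD=true --build-arg FORCE_CPU_PYTORCH=true \
#     -t mcp-memory-service:ci .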
10 | 
11 | # Set environment variables
12 | ENV PYTHONUNBUFFERED=1 \
13 |     MCP_MEMORY_STORAGE_BACKEND=sqlite_vec \
14 |     MCP_MEMORY_SQLITE_PATH=/app/sqlite_db \
15 |     MCP_MEMORY_BACKUPS_PATH=/app/backups \
16 |     PYTHONPATH=/app/src \
17 |     DOCKER_CONTAINER=1 \
18 |     CHROMA_TELEMETRY_IMPL=none \
19 |     ANONYMIZED_TELEMETRY=false \
20 |     HF_HUB_DISABLE_TELEMETRY=1
21 | 
22 | # Set the working directory
23 | WORKDIR /app
24 | 
25 | # Minimal system packages with security updates
26 | RUN apt-get update && \
27 |     apt-get install -y --no-install-recommends \
28 |     curl \
29 |     bash \
30 |     && apt-get upgrade -y \
31 |     && rm -rf /var/lib/apt/lists/* \
32 |     && apt-get clean
33 | 
34 | # Copy essential files
35 | COPY pyproject.toml .
36 | COPY uv.lock .
37 | COPY README.md .
38 | COPY scripts/installation/install_uv.py .
39 | 
40 | # Install UV
41 | RUN python install_uv.py
42 | 
43 | # Create directories for data persistence
44 | RUN mkdir -p /app/sqlite_db /app/backups
45 | 
46 | # Copy source code
47 | COPY src/ /app/src/
48 | COPY run_server.py ./
49 | 
 50 | # Copy utility scripts (COPY is unconditional; these files must exist in the build context)
51 | COPY scripts/utils/uv_wrapper.py ./uv_wrapper.py
52 | COPY scripts/utils/memory_wrapper_uv.py ./memory_wrapper_uv.py
53 | 
54 | # Copy Docker entrypoint scripts
55 | COPY tools/docker/docker-entrypoint.sh /usr/local/bin/
56 | COPY tools/docker/docker-entrypoint-persistent.sh /usr/local/bin/
57 | COPY tools/docker/docker-entrypoint-unified.sh /usr/local/bin/
58 | 
59 | # Install the package with UV (configurable dependency group)
60 | # Use CPU-only PyTorch by default to save disk space in CI/test environments
61 | RUN if [ "$FORCE_CPU_PYTORCH" = "true" ] || [ "$INSTALL_EXTRA" = "[sqlite]" ]; then \
62 |         echo "Installing CPU-only PyTorch to save disk space..."; \
63 |         python -m uv pip install torch --index-url https://download.pytorch.org/whl/cpu; \
64 |         python -m uv pip install -e .${INSTALL_EXTRA}; \
65 |     else \
66 |         python -m uv pip install -e .${INSTALL_EXTRA}; \
67 |     fi
68 | 
69 | # Conditionally pre-download ONNX models for lightweight embedding
70 | RUN if [ "$SKIP_MODEL_DOWNLOAD" != "true" ]; then \
71 |         echo "Pre-downloading ONNX embedding models..." && \
 72 |         python -c "import onnxruntime as ort; \
 73 |             print('ONNX runtime available for lightweight embeddings'); \
 74 |             print('ONNX models will be downloaded at runtime as needed')" \
 75 |             || { echo 'Warning: ONNX runtime not available'; \
 76 |                  echo 'Embedding functionality may be limited'; }; \
79 |     else \
80 |         echo "Skipping model download (SKIP_MODEL_DOWNLOAD=true)"; \
81 |     fi
82 | 
83 | # Configure stdio for MCP communication and make entrypoints executable
84 | RUN chmod a+rw /dev/stdin /dev/stdout /dev/stderr && \
85 |     chmod +x /usr/local/bin/docker-entrypoint.sh && \
86 |     chmod +x /usr/local/bin/docker-entrypoint-persistent.sh && \
87 |     chmod +x /usr/local/bin/docker-entrypoint-unified.sh
88 | 
89 | # Add volume mount points for data persistence
90 | VOLUME ["/app/sqlite_db", "/app/backups"]
91 | 
92 | # Expose the port (if needed)
93 | EXPOSE 8000
94 | 
95 | # Use the unified entrypoint script by default
96 | # Can be overridden with docker-entrypoint.sh for backward compatibility
97 | ENTRYPOINT ["/usr/local/bin/docker-entrypoint-unified.sh"]
98 | 
```

--------------------------------------------------------------------------------
/archive/docs-removed-2025-08-23/claude-desktop-setup.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Claude Desktop Setup Guide - Windows
  2 | 
  3 | This guide helps you configure the MCP Memory Service to work with Claude Desktop on Windows without repeated PyTorch downloads.
  4 | 
  5 | ## Problem and Solution
  6 | 
  7 | **Issue**: Claude Desktop was downloading PyTorch models (230MB+) on every startup due to missing offline configuration.
  8 | 
  9 | **Solution**: Added offline mode environment variables to your Claude Desktop config to use cached models.
 10 | 
 11 | ## What Was Fixed
 12 | 
 13 | ✅ **Your Claude Desktop Config Updated**:
 14 | - Added offline mode environment variables (`HF_HUB_OFFLINE=1`, `TRANSFORMERS_OFFLINE=1`)
 15 | - Added cache path configurations 
 16 | - Kept your existing SQLite-vec backend setup
 17 | 
 18 | ✅ **Verified Components Working**:
 19 | - SQLite-vec database: 434 memories accessible ✅
 20 | - sentence-transformers: Loading models without downloads ✅
 21 | - Offline mode: No network requests when properly configured ✅
 22 | 
 23 | ## Current Working Configuration
 24 | 
 25 | Your active config at `%APPDATA%\Claude\claude_desktop_config.json` now has:
 26 | 
 27 | - **Backend**: SQLite-vec (single database file)
 28 | - **Database**: `memory_migrated.db` with 434 memories
 29 | - **Offline Mode**: Enabled to prevent downloads
 30 | - **UV Package Manager**: For better dependency management
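
An illustrative excerpt of the relevant config section (paths and the server name vary per machine; see `examples/claude_desktop_config_windows.json` for the full template):

```json
{
  "mcpServers": {
    "memory": {
      "command": "uv",
      "args": ["--directory", "C:\\path\\to\\mcp-memory-service", "run", "memory"],
      "env": {
        "MCP_MEMORY_STORAGE_BACKEND": "sqlite_vec",
        "HF_HUB_OFFLINE": "1",
        "TRANSFORMERS_OFFLINE": "1"
      }
    }
  }
}
```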
 31 | 
 32 | ## For Other Users
 33 | 
 34 | See `examples/claude_desktop_config_windows.json` for an anonymized template with:
 35 | - SQLite-vec backend configuration (recommended)
 36 | - ChromaDB alternative configuration  
 37 | - Offline mode settings
 38 | - Performance optimizations
 39 | 
 40 | Replace `YOUR_USERNAME` with your actual Windows username.
 41 | 
 42 | ## Key Changes Made
 43 | 
 44 | ### 1. Config Template Updates
 45 | - Removed `PYTHONNOUSERSITE=1`, `PIP_NO_DEPENDENCIES=1`, `PIP_NO_INSTALL=1`
 46 | - These were blocking access to globally installed packages
 47 | 
 48 | ### 2. Server Path Detection
 49 | Enhanced `src/mcp_memory_service/server.py`:
 50 | - Better virtual environment detection
 51 | - Windows-specific path handling
 52 | - Global site-packages access when not blocked
 53 | 
 54 | ### 3. Dependency Checking
 55 | Improved `src/mcp_memory_service/dependency_check.py`:
 56 | - Enhanced model cache detection for Windows
 57 | - Better first-run detection logic
 58 | - Multiple cache location checking
 59 | 
 60 | ### 4. Storage Backend Fixes
 61 | Updated both ChromaDB and SQLite-vec storage:
 62 | - Fixed hardcoded Linux paths
 63 | - Added offline mode configuration
 64 | - Better cache path detection
 65 | 
 66 | ## Verification
 67 | 
 68 | After updating your Claude Desktop config:
 69 | 
 70 | 1. **Restart Claude Desktop** completely
 71 | 2. **Check the logs** - you should see:
 72 |    ```
 73 |    ✅ All dependencies are installed
 74 |    DEBUG: Found cached model in C:\Users\[username]\.cache\huggingface\hub
 75 |    ```
 76 | 3. **No more downloads** - The 230MB PyTorch download should not occur
 77 | 
 78 | ## Testing
 79 | 
 80 | You can test the server directly:
 81 | ```bash
 82 | python scripts/run_memory_server.py --debug
 83 | ```
 84 | 
 85 | You should see dependency checking passes and models load from cache.
 86 | 
 87 | ## Troubleshooting
 88 | 
 89 | If you still see downloads:
 90 | 1. Verify your username in the config paths
 91 | 2. Check that models exist in `%USERPROFILE%\.cache\huggingface\hub`
 92 | 3. Ensure Claude Desktop has been fully restarted
 93 | 
 94 | ## Files Modified
 95 | 
 96 | - `examples/claude_desktop_config_template.json` - Removed blocking env vars
 97 | - `examples/claude_desktop_config_windows.json` - New Windows-specific config
 98 | - `src/mcp_memory_service/server.py` - Enhanced path detection
 99 | - `src/mcp_memory_service/dependency_check.py` - Better cache detection
100 | - `src/mcp_memory_service/storage/sqlite_vec.py` - Fixed hardcoded paths
101 | - `src/mcp_memory_service/storage/chroma.py` - Added offline mode support
```

--------------------------------------------------------------------------------
/archive/docs-removed-2025-08-23/development/TIMESTAMP_FIX_SUMMARY.md:
--------------------------------------------------------------------------------

```markdown
 1 | # Timestamp Recall Fix Summary
 2 | 
 3 | ## Issue Description
 4 | The memory recall functionality in mcp-memory-service was experiencing issues with timestamp-based queries. Memories stored with precise timestamps (including sub-second precision) were being retrieved incorrectly or not at all when using time-based recall functions.
 5 | 
 6 | ## Root Cause
 7 | The issue was caused by timestamps being converted to integers at multiple points in the codebase:
 8 | 
 9 | 1. **Storage**: In `ChromaMemoryStorage._optimize_metadata_for_chroma()`, timestamps were being stored as `int(memory.created_at)`
10 | 2. **Querying**: In the `recall()` method, timestamp comparisons were using `int(start_timestamp)` and `int(end_timestamp)`
11 | 3. **Memory Model**: In `Memory.to_dict()`, the timestamp field was being converted to `int(self.created_at)`
12 | 
13 | This integer conversion caused loss of sub-second precision, making all memories within the same second indistinguishable by timestamp.
14 | 
15 | ## Changes Made
16 | 
17 | ### 1. Fixed Timestamp Storage (chroma.py)
18 | Changed line 949 in `_optimize_metadata_for_chroma()`:
19 | ```python
20 | # Before:
21 | "timestamp": int(memory.created_at),
22 | 
23 | # After:
24 | "timestamp": float(memory.created_at),  # Changed from int() to float()
25 | ```
26 | 
27 | ### 2. Fixed Timestamp Queries (chroma.py)
28 | Changed lines 739 and 743 in the `recall()` method:
29 | ```python
30 | # Before:
31 | where_clause["$and"].append({"timestamp": {"$gte": int(start_timestamp)}})
32 | where_clause["$and"].append({"timestamp": {"$lte": int(end_timestamp)}})
33 | 
34 | # After:
35 | where_clause["$and"].append({"timestamp": {"$gte": float(start_timestamp)}})
36 | where_clause["$and"].append({"timestamp": {"$lte": float(end_timestamp)}})
37 | ```
38 | 
39 | ### 3. Fixed Memory Model (memory.py)
40 | Changed line 161 in `Memory.to_dict()`:
41 | ```python
42 | # Before:
43 | "timestamp": int(self.created_at),  # Legacy timestamp (int)
44 | 
45 | # After:
46 | "timestamp": float(self.created_at),  # Changed from int() to preserve precision
47 | ```
48 | 
49 | ### 4. Fixed Date Parsing Order (time_parser.py)
50 | Moved the full ISO date pattern check before the specific date pattern check to prevent "2024-06-15" from being incorrectly parsed as "24-06-15".
51 | 
52 | ## Tests Added
53 | 
54 | ### 1. Timestamp Recall Tests (`tests/test_timestamp_recall.py`)
55 | - Tests for timestamp precision storage
56 | - Tests for natural language time parsing
57 | - Tests for various recall scenarios (yesterday, last week, specific dates)
58 | - Tests for combined semantic and time-based recall
59 | - Edge case tests
60 | 
61 | ### 2. Time Parser Tests (`tests/test_time_parser.py`)
62 | - Comprehensive tests for all supported time expressions
63 | - Tests for relative dates (yesterday, 3 days ago, last week)
64 | - Tests for specific date formats (MM/DD/YYYY, YYYY-MM-DD)
65 | - Tests for seasons, holidays, and named periods
66 | - Tests for time extraction from queries
67 | 
68 | ## Verification
69 | The fix has been verified with:
70 | 1. Unit tests covering individual components
71 | 2. Integration tests demonstrating end-to-end functionality
72 | 3. Precision tests showing that sub-second timestamps are now preserved
73 | 
74 | ## Impact
75 | - Memories can now be recalled with precise timestamp filtering
76 | - Sub-second precision is maintained throughout storage and retrieval
77 | - Natural language time expressions work correctly
78 | - No breaking changes to existing functionality
79 | 
80 | ## Recommendations
81 | 1. Consider adding database migration for existing memories with integer timestamps
82 | 2. Monitor performance impact of float vs integer comparisons in large datasets
83 | 3. Add documentation about supported time expressions for users
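
For recommendation 1, the per-record conversion itself is simple. A minimal sketch of the upgrade logic (fetching records from ChromaDB and writing the updated metadata back is left to the storage layer):

```python
def migrate_timestamp(metadata: dict) -> dict:
    """Upgrade a legacy integer 'timestamp' field to float, preserving its value."""
    ts = metadata.get("timestamp")
    if isinstance(ts, int) and not isinstance(ts, bool):
        metadata["timestamp"] = float(ts)
    return metadata
```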
84 | 
```

--------------------------------------------------------------------------------
/.github/workflows/LATEST_FIXES.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Latest Workflow Fixes (2024-08-24)
  2 | 
  3 | ## Issues Resolved
  4 | 
  5 | ### 1. Workflow Conflicts
  6 | **Problem**: Both `main.yml` and `main-optimized.yml` running simultaneously with same triggers, concurrency groups, and job names.
  7 | 
  8 | **Solution**: 
  9 | - Temporarily disabled `main.yml` → `main.yml.disabled`
 10 | - Allows `main-optimized.yml` to run without conflicts
 11 | 
 12 | ### 2. Matrix Strategy Failures
 13 | **Problem**: Matrix jobs failing fast, stopping entire workflow on single failure.
 14 | 
 15 | **Solutions Applied**:
 16 | - Added `fail-fast: false` to both test and publish-docker matrix strategies
 17 | - Allows other matrix combinations to continue even if one fails
 18 | - Improved fault tolerance
 19 | 
 20 | ### 3. Missing Debugging Information
 21 | **Problem**: Workflow failures lacked context about what exactly was failing.
 22 | 
 23 | **Solutions Applied**:
 24 | - Added comprehensive debugging steps to test jobs
 25 | - Environment information logging (Python version, disk space, etc.)
 26 | - File system validation before operations
 27 | - Progress indicators with emojis for better visibility
 28 | 
 29 | ### 4. Poor Error Handling
 30 | **Problem**: Jobs failed completely on minor issues, preventing workflow completion.
 31 | 
 32 | **Solutions Applied**:
 33 | - Added `continue-on-error: true` to optional steps
 34 | - Improved conditional logic for Docker Hub authentication
 35 | - Better fallback handling for missing test directories
 36 | - More informative error messages
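
For example, an optional step can be marked so that its failure does not fail the job (step name and script are hypothetical):

```yaml
- name: Optional diagnostics
  continue-on-error: true   # failure here will not fail the job
  run: ./scripts/collect-diagnostics.sh
```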
 37 | 
 38 | ### 5. Dependency Issues
 39 | **Problem**: Jobs failing due to missing files, credentials, or dependencies.
 40 | 
 41 | **Solutions Applied**:
 42 | - Added pre-flight checks for required files (Dockerfile, src/, pyproject.toml)
 43 | - Enhanced credential validation
 44 | - Created fallback test when test directory missing
 45 | - Improved job dependency conditions
 46 | 
 47 | ## Specific Changes Made
 48 | 
 49 | ### main-optimized.yml
 50 | ```yaml
 51 | # Added debugging
 52 | - name: Debug environment
 53 |   run: |
 54 |     echo "🐍 Python version: $(python --version)"
 55 |     # ... more debug info
 56 | 
 57 | # Fixed matrix strategies  
 58 | strategy:
 59 |   fail-fast: false  # ← Key addition
 60 |   matrix:
 61 |     # ... existing matrix
 62 | 
 63 | # Enhanced test steps with validation
 64 | - name: Run unit tests
 65 |   if: matrix.test-type == 'unit'
 66 |   run: |
 67 |     echo "🧪 Starting unit tests..."
 68 |     # ... detailed steps with error handling
 69 | 
 70 | # Improved Docker build validation
 71 | - name: Check Docker requirements
 72 |   run: |
 73 |     echo "🐳 Checking Docker build requirements..."
 74 |     # ... file validation
 75 | ```
 76 | 
 77 | ### File Changes
 78 | - `main.yml` → `main.yml.disabled` (temporary)
 79 | - Enhanced error handling in both workflows
 80 | - Added comprehensive debugging throughout
 81 | 
 82 | ## Expected Improvements
 83 | 
 84 | 1. **Workflow Stability**: No more conflicts between competing workflows
 85 | 2. **Better Diagnostics**: Clear logging shows where issues occur
 86 | 3. **Fault Tolerance**: Individual job failures don't stop entire workflow
 87 | 4. **Graceful Degradation**: Missing credentials/dependencies handled elegantly
 88 | 5. **Developer Experience**: Emojis and clear messages improve log readability
 89 | 
 90 | ## Testing Strategy
 91 | 
 92 | 1. **Immediate**: Push changes trigger main-optimized.yml only
 93 | 2. **Monitor**: Watch for improved error messages and debugging info
 94 | 3. **Validate**: Ensure matrix jobs complete independently
 95 | 4. **Rollback**: Original main.yml available if needed
 96 | 
 97 | ## Success Metrics
 98 | 
 99 | - ✅ Workflows complete without conflicts
100 | - ✅ Matrix jobs show individual results
101 | - ✅ Clear error messages when issues occur  
102 | - ✅ Graceful handling of missing credentials
103 | - ✅ Debugging information helps troubleshoot future issues
104 | 
105 | Date: 2024-08-24  
106 | Status: Applied and ready for testing
```

--------------------------------------------------------------------------------
/archive/docs-removed-2025-08-23/development/CLEANUP_SUMMARY.md:
--------------------------------------------------------------------------------

```markdown
  1 | # 🧹 MCP-Memory-Service Cleanup Summary
  2 | 
  3 | **Date:** June 7, 2025  
  4 | **Operation:** Artifact Cleanup & Reorganization
  5 | 
  6 | ## 📊 **Cleanup Statistics**
  7 | 
  8 | ### **Files Archived:**
  9 | - **Memory Service**: 11 test/debug files → `/archive/`
 10 | - **Dashboard**: 37 test/debug files → `/archive/test-files/`
 11 | - **Total**: 48 obsolete artifacts safely preserved
 12 | 
 13 | ### **Backups Created:**
 14 | - `mcp-memory-service-backup-20250607-0705.tar.gz` (193MB)
 15 | - `mcp-memory-dashboard-backup-20250607-0706.tar.gz` (215MB)
 16 | 
 17 | ## 🗂️ **Files Moved to Archive**
 18 | 
 19 | ### **Memory Service (`/archive/`):**
 20 | ```
 21 | alternative_test_server.py
 22 | compatibility_test_server.py
 23 | diagnose_mcp.py
 24 | fixed_memory_server.py
 25 | memory_wrapper.py
 26 | memory_wrapper_uv.py
 27 | minimal_uv_server.py
 28 | simplified_memory_server.py
 29 | test_client.py
 30 | ultimate_protocol_debug.py
 31 | uv_wrapper.py
 32 | ```
 33 | 
 34 | ### **Dashboard (`/archive/test-files/`):**
 35 | ```
 36 | All test_*.js files (20+ files)
 37 | All test_*.py files (5+ files)  
 38 | All test_*.sh files (5+ files)
 39 | *_minimal_server.py files
 40 | investigation.js & investigation_report.json
 41 | comprehensive_*test* files
 42 | final_*test* files
 43 | ```
 44 | 
 45 | ### **Dashboard (`/archive/`):**
 46 | ```
 47 | CLAUDE_INTEGRATION_TEST.md
 48 | INTEGRATION_ACTION_PLAN.md
 49 | RESTORATION_COMPLETE.md
 50 | investigation.js
 51 | investigation_report.json
 52 | ultimate_investigation.js
 53 | ultimate_investigation.sh
 54 | ```
 55 | 
 56 | ## ✅ **Post-Cleanup Verification**
 57 | 
 58 | ### **Memory Service Status:**
 59 | - ✅ Database Health: HEALTHY
 60 | - ✅ Total Memories: 164 (increased from previous 162)
 61 | - ✅ Storage: 8.36 MB
 62 | - ✅ Dashboard Integration: WORKING
 63 | - ✅ Core Operations: ALL FUNCTIONAL
 64 | 
 65 | ### **Tests Performed:**
 66 | 1. Database health check ✅
 67 | 2. Dashboard health check ✅  
 68 | 3. Memory storage operation ✅
 69 | 4. Memory retrieval operation ✅
 70 | 
 71 | ## 🎯 **Production Files Preserved**
 72 | 
 73 | ### **Memory Service Core:**
 74 | - `src/mcp_memory_service/server.py` - Main server
 75 | - `src/mcp_memory_service/server copy.py` - **CRITICAL BACKUP**
 76 | - All core implementation files
 77 | - Configuration files (pyproject.toml, etc.)
 78 | - Documentation (README.md, CHANGELOG.md)
 79 | 
 80 | ### **Dashboard Core:**
 81 | - `src/` directory - Main dashboard implementation
 82 | - Configuration files (package.json, vite.config.ts, etc.)
 83 | - Build scripts and deployment files
 84 | 
 85 | ## 📁 **Directory Structure (Cleaned)**
 86 | 
 87 | ### **Memory Service:**
 88 | ```
 89 | mcp-memory-service/
 90 | ├── src/mcp_memory_service/    # Core implementation
 91 | ├── scripts/                   # Utility scripts
 92 | ├── tests/                     # Test suite
 93 | ├── archive/                   # Archived test artifacts
 94 | ├── pyproject.toml            # Project config
 95 | ├── requirements.txt          # Dependencies
 96 | └── README.md                 # Documentation
 97 | ```
 98 | 
 99 | ### **Dashboard:**
100 | ```
101 | mcp-memory-dashboard/
102 | ├── src/                      # Core dashboard
103 | ├── dist/                     # Built files
104 | ├── archive/                  # Archived test artifacts
105 | ├── package.json             # Project config
106 | ├── vite.config.ts           # Build config
107 | └── README.md                # Documentation
108 | ```
109 | 
110 | ## 🔒 **Safety Measures**
111 | 
112 | 1. **Full backups created** before any file operations
113 | 2. **Archives created** instead of deletion (nothing lost)
114 | 3. **Critical files preserved** (especially `server copy.py`)
115 | 4. **Functionality verified** after cleanup
116 | 5. **Production code untouched**
117 | 
118 | ## 📝 **Next Steps**
119 | 
120 | 1. ✅ Memory service cleanup complete
121 | 2. 🔄 Dashboard integration testing (next phase)
122 | 3. 🎯 Focus on remaining dashboard issues
123 | 4. 📊 Performance optimization if needed
124 | 
125 | ---
126 | 
127 | **Result: Clean, organized codebase with all production functionality intact! 🚀**
128 | 
```

--------------------------------------------------------------------------------
/docs/glama-deployment.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Glama Deployment Guide
  2 | 
  3 | This guide provides instructions for deploying the MCP Memory Service on the Glama platform.
  4 | 
  5 | ## Overview
  6 | 
  7 | The MCP Memory Service is now available on Glama at: https://glama.ai/mcp/servers/bzvl3lz34o
  8 | 
  9 | Glama is a directory for MCP servers that provides easy discovery and deployment options for users.
 10 | 
 11 | ## Docker Configuration for Glama
 12 | 
 13 | ### Primary Dockerfile
 14 | 
 15 | The repository includes an optimized Dockerfile specifically for Glama deployment:
 16 | - `Dockerfile` - Main production Dockerfile
 17 | - `Dockerfile.glama` - Glama-optimized version with enhanced labels and health checks
 18 | 
 19 | ### Key Features
 20 | 
 21 | 1. **Multi-platform Support**: Works on x86_64 and ARM64 architectures
 22 | 2. **Health Checks**: Built-in health monitoring for container status
 23 | 3. **Data Persistence**: Proper volume configuration for ChromaDB and backups
 24 | 4. **Environment Configuration**: Pre-configured for optimal performance
 25 | 5. **Security**: Minimal attack surface with slim Python base image
 26 | 
 27 | ### Quick Start from Glama
 28 | 
 29 | Users can deploy the service using:
 30 | 
 31 | ```bash
 32 | # Using the Glama-provided configuration
 33 | docker run -d -p 8000:8000 \
 34 |   -v $(pwd)/data/chroma_db:/app/chroma_db \
 35 |   -v $(pwd)/data/backups:/app/backups \
 36 |   doobidoo/mcp-memory-service:latest
 37 | ```
 38 | 
 39 | ### Environment Variables
 40 | 
 41 | The following environment variables are pre-configured:
 42 | 
 43 | | Variable | Value | Purpose |
 44 | |----------|-------|---------|
 45 | | `MCP_MEMORY_CHROMA_PATH` | `/app/chroma_db` | ChromaDB storage location |
 46 | | `MCP_MEMORY_BACKUPS_PATH` | `/app/backups` | Backup storage location |
 47 | | `DOCKER_CONTAINER` | `1` | Indicates Docker environment |
 48 | | `CHROMA_TELEMETRY_IMPL` | `none` | Disables ChromaDB telemetry |
 49 | | `PYTORCH_ENABLE_MPS_FALLBACK` | `1` | Enables MPS fallback for Apple Silicon |
 50 | 
 51 | ### Standalone Mode
 52 | 
 53 | For deployment without an active MCP client, use:
 54 | 
 55 | ```bash
 56 | docker run -d -p 8000:8000 \
 57 |   -e MCP_STANDALONE_MODE=1 \
 58 |   -v $(pwd)/data/chroma_db:/app/chroma_db \
 59 |   -v $(pwd)/data/backups:/app/backups \
 60 |   doobidoo/mcp-memory-service:latest
 61 | ```
 62 | 
 63 | ## Glama Platform Integration
 64 | 
 65 | ### Server Verification
 66 | 
 67 | The Dockerfile passes all Glama server checks:
 68 | - ✅ Valid Dockerfile syntax
 69 | - ✅ Proper base image
 70 | - ✅ Security best practices
 71 | - ✅ Health check implementation
 72 | - ✅ Volume configuration
 73 | - ✅ Port exposure
 74 | 
 75 | ### User Experience
 76 | 
 77 | Glama users benefit from:
 78 | 1. **One-click deployment** from the Glama interface
 79 | 2. **Pre-configured settings** for immediate use
 80 | 3. **Documentation integration** with setup instructions
 81 | 4. **Community feedback** and ratings
 82 | 5. **Version tracking** and update notifications
 83 | 
 84 | ### Monitoring and Health
 85 | 
 86 | The Docker image includes health checks that verify:
 87 | - Python environment is working
 88 | - MCP Memory Service can be imported
 89 | - Dependencies are properly loaded
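
An illustrative form of such a check (the exact directive in `Dockerfile.glama` may differ):

```dockerfile
HEALTHCHECK --interval=30s --timeout=10s --retries=3 \
  CMD python -c "import mcp_memory_service" || exit 1
```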
 90 | 
 91 | ## Maintenance
 92 | 
 93 | ### Updates
 94 | 
 95 | The Glama listing is automatically updated when:
 96 | 1. New versions are tagged in the GitHub repository
 97 | 2. Docker images are published to Docker Hub
 98 | 3. Documentation is updated
 99 | 
100 | ### Support
101 | 
102 | For Glama-specific issues:
103 | 1. Check the Glama platform documentation
104 | 2. Verify Docker configuration
105 | 3. Review container logs for errors
106 | 4. Test with standalone mode for debugging
107 | 
108 | ## Contributing
109 | 
110 | To improve the Glama integration:
111 | 1. Test the deployment on different platforms
112 | 2. Provide feedback on the installation experience
113 | 3. Suggest improvements to the Docker configuration
114 | 4. Report any platform-specific issues
115 | 
116 | The goal is to make the MCP Memory Service as accessible as possible to the 60k+ monthly Glama users.
```

--------------------------------------------------------------------------------
/scripts/testing/test_memory_simple.py:
--------------------------------------------------------------------------------

```python
  1 | #!/usr/bin/env python3
  2 | # Copyright 2024 Heinrich Krupp
  3 | #
  4 | # Licensed under the Apache License, Version 2.0 (the "License");
  5 | # you may not use this file except in compliance with the License.
  6 | # You may obtain a copy of the License at
  7 | #
  8 | #     http://www.apache.org/licenses/LICENSE-2.0
  9 | #
 10 | # Unless required by applicable law or agreed to in writing, software
 11 | # distributed under the License is distributed on an "AS IS" BASIS,
 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 13 | # See the License for the specific language governing permissions and
 14 | # limitations under the License.
 15 | 
 16 | """Simple test for memory CRUD operations."""
 17 | 
 18 | import requests
 19 | import json
 20 | import time
 21 | 
 22 | BASE_URL = "http://localhost:8000"
 23 | 
 24 | def test_memory_crud():
 25 |     """Test the complete CRUD workflow for memories."""
 26 |     
 27 |     print("Testing Memory CRUD Operations")
 28 |     print("=" * 40)
 29 |     
 30 |     # Test 1: Health check
 31 |     print("\n[1] Health check...")
 32 |     try:
 33 |         resp = requests.get(f"{BASE_URL}/api/health", timeout=5)
 34 |         if resp.status_code == 200:
 35 |             print("[PASS] Server is healthy")
 36 |         else:
 37 |             print(f"[FAIL] Health check failed: {resp.status_code}")
 38 |             return
 39 |     except Exception as e:
 40 |         print(f"[FAIL] Cannot connect: {e}")
 41 |         return
 42 |     
 43 |     # Test 2: Store memory
 44 |     print("\n[2] Storing memory...")
 45 |     test_memory = {
 46 |         "content": "Test memory for API verification",
 47 |         "tags": ["test", "api"],
 48 |         "memory_type": "test",
 49 |         "metadata": {"timestamp": time.time()}
 50 |     }
 51 |     
 52 |     try:
 53 |         resp = requests.post(
 54 |             f"{BASE_URL}/api/memories",
 55 |             json=test_memory,
 56 |             headers={"Content-Type": "application/json"},
 57 |             timeout=10
 58 |         )
 59 |         
 60 |         if resp.status_code == 200:
 61 |             result = resp.json()
 62 |             if result["success"]:
 63 |                 content_hash = result["content_hash"]
 64 |                 print(f"[PASS] Memory stored: {content_hash[:12]}...")
 65 |             else:
 66 |                 print(f"[FAIL] Storage failed: {result['message']}")
 67 |                 return
 68 |         else:
 69 |             print(f"[FAIL] Storage failed: {resp.status_code}")
 70 |             print(f"Error: {resp.text}")
 71 |             return
 72 |     except Exception as e:
 73 |         print(f"[FAIL] Storage error: {e}")
 74 |         return
 75 |     
 76 |     # Test 3: List memories
 77 |     print("\n[3] Listing memories...")
 78 |     try:
 79 |         resp = requests.get(f"{BASE_URL}/api/memories?page=1&page_size=5", timeout=10)
 80 |         if resp.status_code == 200:
 81 |             result = resp.json()
 82 |             print(f"[PASS] Found {len(result['memories'])} memories")
 83 |             print(f"Total: {result['total']}")
 84 |         else:
 85 |             print(f"[FAIL] Listing failed: {resp.status_code}")
 86 |     except Exception as e:
 87 |         print(f"[FAIL] Listing error: {e}")
 88 |     
 89 |     # Test 4: Delete memory
 90 |     print("\n[4] Deleting memory...")
 91 |     try:
 92 |         resp = requests.delete(f"{BASE_URL}/api/memories/{content_hash}", timeout=10)
 93 |         if resp.status_code == 200:
 94 |             result = resp.json()
 95 |             if result["success"]:
 96 |                 print("[PASS] Memory deleted")
 97 |             else:
 98 |                 print(f"[FAIL] Deletion failed: {result['message']}")
 99 |         else:
100 |             print(f"[FAIL] Deletion failed: {resp.status_code}")
101 |     except Exception as e:
102 |         print(f"[FAIL] Deletion error: {e}")
103 |     
104 |     print("\n" + "=" * 40)
105 |     print("CRUD testing completed!")
106 | 
107 | if __name__ == "__main__":
108 |     test_memory_crud()
```

--------------------------------------------------------------------------------
/src/mcp_memory_service/web/oauth/discovery.py:
--------------------------------------------------------------------------------

```python
 1 | # Copyright 2024 Heinrich Krupp
 2 | #
 3 | # Licensed under the Apache License, Version 2.0 (the "License");
 4 | # you may not use this file except in compliance with the License.
 5 | # You may obtain a copy of the License at
 6 | #
 7 | #     http://www.apache.org/licenses/LICENSE-2.0
 8 | #
 9 | # Unless required by applicable law or agreed to in writing, software
10 | # distributed under the License is distributed on an "AS IS" BASIS,
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | # See the License for the specific language governing permissions and
13 | # limitations under the License.
14 | 
15 | """
16 | OAuth 2.1 Discovery endpoints for MCP Memory Service.
17 | 
18 | Implements .well-known endpoints required for OAuth 2.1 Dynamic Client Registration.
19 | """
20 | 
21 | import logging
22 | from fastapi import APIRouter
23 | from ...config import OAUTH_ISSUER, get_jwt_algorithm
24 | from .models import OAuthServerMetadata
25 | 
26 | logger = logging.getLogger(__name__)
27 | 
28 | router = APIRouter()
29 | 
30 | 
31 | 
32 | @router.get("/.well-known/oauth-authorization-server/mcp")
33 | async def oauth_authorization_server_metadata() -> OAuthServerMetadata:
34 |     """
35 |     OAuth 2.1 Authorization Server Metadata endpoint.
36 | 
37 |     Returns metadata about the OAuth 2.1 authorization server as specified
38 |     in RFC 8414. This endpoint is required for OAuth 2.1 Dynamic Client Registration.
39 |     """
40 |     logger.info("OAuth authorization server metadata requested")
41 | 
42 |     # Use OAUTH_ISSUER consistently for both issuer field and endpoint URLs
43 |     # This ensures URL consistency across discovery and JWT token validation
44 |     algorithm = get_jwt_algorithm()
45 |     metadata = OAuthServerMetadata(
46 |         issuer=OAUTH_ISSUER,
47 |         authorization_endpoint=f"{OAUTH_ISSUER}/oauth/authorize",
48 |         token_endpoint=f"{OAUTH_ISSUER}/oauth/token",
49 |         registration_endpoint=f"{OAUTH_ISSUER}/oauth/register",
50 |         grant_types_supported=["authorization_code", "client_credentials"],
51 |         response_types_supported=["code"],
52 |         token_endpoint_auth_methods_supported=["client_secret_basic", "client_secret_post"],
53 |         scopes_supported=["read", "write", "admin"],
54 |         id_token_signing_alg_values_supported=[algorithm]
55 |     )
56 | 
57 |     logger.debug(f"Returning OAuth metadata: issuer={metadata.issuer}")
58 |     return metadata
59 | 
60 | 
61 | @router.get("/.well-known/openid-configuration/mcp")
62 | async def openid_configuration() -> OAuthServerMetadata:
63 |     """
64 |     OpenID Connect Discovery endpoint.
65 | 
66 |     Some OAuth 2.1 clients may also check this endpoint for compatibility.
67 |     For now, we return the same metadata as the OAuth authorization server.
68 |     """
69 |     logger.info("OpenID Connect configuration requested")
70 | 
71 |     # Return the same metadata as OAuth authorization server for compatibility
72 |     return await oauth_authorization_server_metadata()
73 | 
74 | 
75 | @router.get("/.well-known/oauth-authorization-server")
76 | async def oauth_authorization_server_metadata_generic() -> OAuthServerMetadata:
77 |     """
78 |     Generic OAuth 2.1 Authorization Server Metadata endpoint.
79 | 
80 |     Fallback endpoint for clients that don't append the /mcp suffix.
81 |     """
82 |     logger.info("Generic OAuth authorization server metadata requested")
83 |     return await oauth_authorization_server_metadata()
84 | 
85 | 
86 | @router.get("/.well-known/openid-configuration")
87 | async def openid_configuration_generic() -> OAuthServerMetadata:
88 |     """
89 |     Generic OpenID Connect Discovery endpoint.
90 | 
91 |     Fallback endpoint for clients that don't append the /mcp suffix.
92 |     """
93 |     logger.info("Generic OpenID Connect configuration requested")
94 |     return await oauth_authorization_server_metadata()
```

--------------------------------------------------------------------------------
/docs/mastery/api-reference.md:
--------------------------------------------------------------------------------

```markdown
 1 | # MCP Memory Service — API Reference
 2 | 
 3 | This document catalogs available APIs exposed via the MCP servers and summarizes request and response patterns.
 4 | 
 5 | ## MCP (FastMCP HTTP) Tools
 6 | 
 7 | Defined in `src/mcp_memory_service/mcp_server.py` using `@mcp.tool()`:
 8 | 
 9 | - `store_memory(content, tags=None, memory_type="note", metadata=None, client_hostname=None)`
10 |   - Stores a new memory; tags and metadata optional. If `INCLUDE_HOSTNAME=true`, a `source:<hostname>` tag and `hostname` metadata are added.
11 |   - Response: `{ success: bool, message: str, content_hash: str }`.
12 | 
13 | - `retrieve_memory(query, n_results=5, min_similarity=0.0)`
14 |   - Semantic search by query; returns up to `n_results` matching memories.
15 |   - Response: `{ memories: [{ content, content_hash, tags, memory_type, created_at, similarity_score }...], query, total_results }`.
16 | 
17 | - `search_by_tag(tags, match_all=False)`
18 |   - Search by a tag or list of tags. `match_all=true` requires all tags; otherwise any.
19 |   - Response: `{ memories: [{ content, content_hash, tags, memory_type, created_at }...], search_tags: [...], match_all, total_results }`.
20 | 
21 | - `delete_memory(content_hash)`
22 |   - Deletes a memory by its content hash.
23 |   - Response: `{ success: bool, message: str, content_hash }`.
24 | 
25 | - `check_database_health()`
26 |   - Health and status of the configured backend.
27 |   - Response: `{ status: "healthy"|"error", backend, statistics: { total_memories, total_tags, storage_size, last_backup }, timestamp? }`.
28 | 
29 | Transport: `mcp.run("streamable-http")`, default host `0.0.0.0`, default port `8000` or `MCP_SERVER_PORT`/`MCP_SERVER_HOST`.
30 | 
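The bind address can be resolved from the environment like so (a sketch; only the variable names and defaults come from the line above):

```python
import os

def server_bind_address() -> tuple:
    """Resolve host/port for the streamable-http transport from the environment."""
    host = os.environ.get("MCP_SERVER_HOST", "0.0.0.0")
    port = int(os.environ.get("MCP_SERVER_PORT", "8000"))
    return host, port
```
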
31 | ## MCP (stdio) Server Tools and Prompts
32 | 
33 | Defined in `src/mcp_memory_service/server.py` using `mcp.server.Server`. Exposes a broader set of tools/prompts beyond the core FastMCP tools above.
34 | 
35 | Highlights:
36 | 
37 | - Core memory ops: store, retrieve/search, search_by_tag(s), delete, delete_by_tag, cleanup_duplicates, update_memory_metadata, time-based recall.
38 | - Analysis/export: knowledge_analysis, knowledge_export (supports `format: json|markdown|text`, optional filters).
39 | - Maintenance: memory_cleanup (duplicate detection heuristics), health/stats, tag listing.
40 | - Consolidation (optional): association, clustering, compression, forgetting tasks and schedulers when enabled.
41 | 
42 | Note: The stdio server dynamically picks storage mode for multi-client scenarios (direct SQLite-vec with WAL vs. HTTP coordination), suppresses stdout for Claude Desktop, and prints richer diagnostics for LM Studio.
43 | 
44 | ## HTTP Interface
45 | 
 46 | - For FastMCP, HTTP transport is used to carry the MCP protocol; endpoints are handled by the FastMCP layer and are not intended as a REST API surface.
47 | - A dedicated HTTP API and dashboard exist under `src/mcp_memory_service/web/` in some distributions. In this repo version, coordination HTTP is internal and the recommended external interface is MCP.
48 | 
49 | ## Error Model and Logging
50 | 
51 | - MCP tool errors are surfaced as `{ success: false, message: <details> }` or include `error` fields.
52 | - Logging routes WARNING+ to stderr (Claude Desktop strict mode), info/debug to stdout only for LM Studio; set `LOG_LEVEL` for verbosity.
53 | 
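A client-side sketch of consuming this error model (the helper and exception names are illustrative):

```python
class ToolError(RuntimeError):
    """Raised when an MCP tool response reports failure."""

def unwrap_tool_response(resp: dict) -> dict:
    # Tools signal failure via success=false with a message, or an explicit error field.
    if resp.get("success") is False or "error" in resp:
        raise ToolError(str(resp.get("message") or resp.get("error")))
    return resp
```
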
54 | ## Examples
55 | 
56 | Store memory:
57 | 
58 | ```
59 | tool: store_memory
60 | args: { "content": "Refactored auth flow to use OAuth 2.1", "tags": ["auth", "refactor"], "memory_type": "note" }
61 | ```
62 | 
63 | Retrieve by query:
64 | 
65 | ```
66 | tool: retrieve_memory
67 | args: { "query": "OAuth refactor", "n_results": 5 }
68 | ```
69 | 
70 | Search by tags:
71 | 
72 | ```
73 | tool: search_by_tag
74 | args: { "tags": ["auth", "refactor"], "match_all": true }
75 | ```
76 | 
77 | Delete by hash:
78 | 
79 | ```
80 | tool: delete_memory
81 | args: { "content_hash": "<hash>" }
82 | ```
83 | 
84 | 
```

--------------------------------------------------------------------------------
/scripts/pr/detect_breaking_changes.sh:
--------------------------------------------------------------------------------

```bash
  1 | #!/bin/bash
  2 | # scripts/pr/detect_breaking_changes.sh - Analyze API changes for breaking changes
  3 | #
  4 | # Usage: bash scripts/pr/detect_breaking_changes.sh <BASE_BRANCH> [HEAD_BRANCH]
  5 | # Example: bash scripts/pr/detect_breaking_changes.sh main feature/new-api
  6 | 
  7 | set -e
  8 | 
  9 | BASE_BRANCH=${1:-main}
 10 | HEAD_BRANCH=${2:-$(git branch --show-current)}
 11 | 
 12 | if [ -z "$BASE_BRANCH" ]; then
 13 |     echo "Usage: $0 <BASE_BRANCH> [HEAD_BRANCH]"
 14 |     echo "Example: $0 main feature/new-api"
 15 |     exit 1
 16 | fi
 17 | 
 18 | if ! command -v gemini &> /dev/null; then
 19 |     echo "Error: Gemini CLI is not installed"
 20 |     exit 1
 21 | fi
 22 | 
 23 | echo "=== Breaking Change Detection ==="
 24 | echo "Base branch: $BASE_BRANCH"
 25 | echo "Head branch: $HEAD_BRANCH"
 26 | echo ""
 27 | 
 28 | # Get API-related file changes
 29 | echo "Analyzing API changes..."
 30 | api_changes=$(git diff "$BASE_BRANCH...$HEAD_BRANCH" -- \
 31 |     src/mcp_memory_service/tools.py \
 32 |     src/mcp_memory_service/web/api/ \
 33 |     src/mcp_memory_service/storage/base.py \
 34 |     2>/dev/null || echo "")
 35 | 
 36 | if [ -z "$api_changes" ]; then
 37 |     echo "✅ No API changes detected"
 38 |     echo ""
 39 |     echo "Checked paths:"
 40 |     echo "- src/mcp_memory_service/tools.py (MCP tools)"
 41 |     echo "- src/mcp_memory_service/web/api/ (Web API endpoints)"
 42 |     echo "- src/mcp_memory_service/storage/base.py (Storage interface)"
 43 |     exit 0
 44 | fi
 45 | 
 46 | echo "API changes detected. Analyzing for breaking changes..."
 47 | echo ""
 48 | 
 49 | # Check diff size and warn if large
 50 | diff_lines=$(echo "$api_changes" | wc -l)
 51 | if [ "$diff_lines" -gt 200 ]; then
 52 |     echo "⚠️  Warning: Large diff ($diff_lines lines) - analysis may miss changes beyond model context window"
 53 |     echo "   Consider reviewing the full diff manually for breaking changes"
 54 | fi
 55 | 
 56 | # Analyze with Gemini (full diff, not truncated)
 57 | result=$(gemini "Analyze these API changes for BREAKING CHANGES ONLY.
 58 | 
 59 | A breaking change is:
 60 | 1. **Removed** function, method, class, or HTTP endpoint
 61 | 2. **Changed function signature**: parameters removed, reordered, or made required
 62 | 3. **Changed return type**: incompatible return value structure
 63 | 4. **Renamed public API**: function, class, endpoint renamed without alias
 64 | 5. **Changed HTTP endpoint**: path or method changed
 65 | 6. **Removed configuration option**: environment variable or config field removed
 66 | 
 67 | NON-BREAKING changes (ignore these):
 68 | - Added new functions/endpoints (backward compatible)
 69 | - Added optional parameters with defaults
 70 | - Improved documentation
 71 | - Internal implementation changes
 72 | - Refactoring that preserves public interface
 73 | 
 74 | For each breaking change, provide:
 75 | - Severity: CRITICAL (data loss/security) / HIGH (blocks upgrade) / MEDIUM (migration effort)
 76 | - Type: Removed / Signature Changed / Renamed / etc.
 77 | - Location: File and function/endpoint name
 78 | - Impact: What breaks for users
 79 | - Migration: How users should adapt
 80 | 
 81 | API Changes:
 82 | \`\`\`diff
 83 | $api_changes
 84 | \`\`\`
 85 | 
 86 | Output format:
 87 | If breaking changes found:
 88 | ## BREAKING CHANGES DETECTED
 89 | 
 90 | ### [SEVERITY] Type: Location
 91 | **Impact:** <description>
 92 | **Migration:** <instructions>
 93 | 
 94 | If no breaking changes:
 95 | No breaking changes detected.")
 96 | 
 97 | echo "$result"
 98 | echo ""
 99 | 
100 | # Check severity
101 | if echo "$result" | grep -qi "CRITICAL"; then
102 |     echo "🔴 CRITICAL breaking changes detected!"
103 |     exit 3
104 | elif echo "$result" | grep -qi "HIGH"; then
105 |     echo "🟠 HIGH severity breaking changes detected!"
106 |     exit 2
107 | elif echo "$result" | grep -qi "MEDIUM"; then
108 |     echo "🟡 MEDIUM severity breaking changes detected"
109 |     exit 1
110 | elif echo "$result" | grep -qi "breaking"; then
111 |     echo "⚠️  Breaking changes detected (unspecified severity)"
112 |     exit 1
113 | else
114 |     echo "✅ No breaking changes detected"
115 |     exit 0
116 | fi
117 | 
```

--------------------------------------------------------------------------------
/claude_commands/memory-recall.md:
--------------------------------------------------------------------------------

```markdown
 1 | # Recall Memories by Time and Context
 2 | 
 3 | I'll help you retrieve memories from your MCP Memory Service using natural language time expressions and contextual queries. This command excels at finding past conversations, decisions, and notes based on when they occurred.
 4 | 
 5 | ## What I'll do:
 6 | 
 7 | 1. **Parse Time Expressions**: I'll interpret natural language time queries like:
 8 |    - "yesterday", "last week", "two months ago"
 9 |    - "last Tuesday", "this morning", "last summer"
10 |    - "before the database migration", "since we started using SQLite"
11 | 
12 | 2. **Context-Aware Search**: I'll consider the current project context to find relevant memories related to your current work.
13 | 
14 | 3. **Smart Filtering**: I'll automatically filter results to show the most relevant memories first, considering:
15 |    - Temporal relevance to your query
16 |    - Project and directory context matching
17 |    - Semantic similarity to current work
18 | 
19 | 4. **Present Results**: I'll format the retrieved memories with clear context about when they were created and why they're relevant.
20 | 
21 | ## Usage Examples:
22 | 
23 | ```bash
24 | claude /memory-recall "what did we decide about the database last week?"
25 | claude /memory-recall "yesterday's architectural decisions"
26 | claude /memory-recall "memories from when we were working on the mDNS feature"
27 | claude /memory-recall --project "mcp-memory-service" "last month's progress"
28 | ```
29 | 
30 | ## Implementation:
31 | 
32 | I'll connect to your MCP Memory Service at `https://memory.local:8443/` and use its API endpoints. The recall process involves:
33 | 
34 | 1. **Query Processing**: Parse the natural language time expression and extract context clues
35 | 2. **Memory Retrieval**: Use the appropriate API endpoints:
36 |    - `POST /api/search/by-time` - Natural language time-based queries
37 |    - `POST /api/search` - Semantic search for context-based recall
38 |    - `GET /api/memories` - List memories with pagination and filtering
39 |    - `GET /api/memories/{hash}` - Retrieve specific memory by hash
40 | 3. **Context Matching**: Filter results based on current project and directory context
41 | 4. **Relevance Scoring**: Use similarity scores from the API responses
42 | 5. **Result Presentation**: Format memories with timestamps, tags, and relevance context
43 | 
 44 | All requests use curl with the `-k` flag for HTTPS and proper JSON formatting.
45 | 
46 | For each recalled memory, I'll show:
47 | - **Content**: The actual memory content
48 | - **Created**: When the memory was stored
49 | - **Tags**: Associated tags and categories
50 | - **Context**: Project and session context when stored
51 | - **Relevance**: Why this memory matches your query
52 | 
53 | ## Time Expression Examples:
54 | 
 55 | - **Relative**: "yesterday", "last week", "two days ago", "this month"
 56 | - **Seasonal**: "last summer", "this winter", "spring 2024"
 57 | - **Event-based**: "before the refactor", "since we switched to SQLite", "during the testing phase"
 58 | - **Specific**: "January 15th", "last Tuesday morning", "end of last month"
 59 | 
 60 | **Note**: Some expressions like "last hour" may not be supported by the time parser. Standard expressions like "today", "yesterday", "last week" work reliably.
61 | 
62 | ## Arguments:
63 | 
64 | - `$ARGUMENTS` - The time-based query, with optional flags:
65 |   - `--limit N` - Maximum number of memories to retrieve (default: 10)
66 |   - `--project "name"` - Filter by specific project
67 |   - `--tags "tag1,tag2"` - Additional tag filtering
68 |   - `--type "note|decision|task"` - Filter by memory type
69 |   - `--include-context` - Show full session context for each memory
70 | 
71 | If no memories are found for the specified time period, I'll suggest broadening the search or checking if the MCP Memory Service contains data for that timeframe.
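
The flag handling above can be sketched as a small parser that separates options from the free-text query (field names are illustrative, not the command's actual implementation):

```python
import shlex

def parse_recall_args(arguments: str) -> dict:
    """Separate the documented flags from the free-text time query (illustrative)."""
    opts = {"limit": 10, "project": None, "tags": [], "type": None,
            "include_context": False, "query": ""}
    words = []
    tokens = iter(shlex.split(arguments))
    for tok in tokens:
        if tok == "--limit":
            opts["limit"] = int(next(tokens))
        elif tok == "--project":
            opts["project"] = next(tokens)
        elif tok == "--tags":
            opts["tags"] = next(tokens).split(",")
        elif tok == "--type":
            opts["type"] = next(tokens)
        elif tok == "--include-context":
            opts["include_context"] = True
        else:
            words.append(tok)
    opts["query"] = " ".join(words)
    return opts
```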
```

--------------------------------------------------------------------------------
/tests/test_memory_ops.py:
--------------------------------------------------------------------------------

```python
  1 | """
  2 | MCP Memory Service
  3 | Copyright (c) 2024 Heinrich Krupp
  4 | Licensed under the MIT License. See LICENSE file in the project root for full license text.
  5 | """
  6 | """
  7 | Test core memory operations of the MCP Memory Service.
  8 | """
  9 | import pytest
 10 | import pytest_asyncio
 11 | import asyncio
 12 | from mcp.server import Server
 13 | from mcp.server.models import InitializationOptions
 14 | 
 15 | @pytest_asyncio.fixture
 16 | async def memory_server():
 17 |     """Create a test instance of the memory server."""
 18 |     server = Server("test-memory")
 19 |     # Initialize with test configuration
 20 |     await server.initialize(InitializationOptions(
 21 |         server_name="test-memory",
 22 |         server_version="0.1.0"
 23 |     ))
 24 |     yield server
 25 |     # Cleanup after tests
 26 |     await server.shutdown()
 27 | 
 28 | @pytest.mark.asyncio
 29 | async def test_store_memory(memory_server):
 30 |     """Test storing new memory entries."""
 31 |     test_content = "The capital of France is Paris"
 32 |     test_metadata = {
 33 |         "tags": ["geography", "cities", "europe"],
 34 |         "type": "fact"
 35 |     }
 36 |     
 37 |     response = await memory_server.store_memory(
 38 |         content=test_content,
 39 |         metadata=test_metadata
 40 |     )
 41 |     
 42 |     assert response is not None
 43 |     # Add more specific assertions based on expected response format
 44 | 
 45 | @pytest.mark.asyncio
 46 | async def test_retrieve_memory(memory_server):
 47 |     """Test retrieving memories using semantic search."""
 48 |     # First store some test data
 49 |     test_memories = [
 50 |         "The capital of France is Paris",
 51 |         "London is the capital of England",
 52 |         "Berlin is the capital of Germany"
 53 |     ]
 54 |     
 55 |     for memory in test_memories:
 56 |         await memory_server.store_memory(content=memory)
 57 |     
 58 |     # Test retrieval
 59 |     query = "What is the capital of France?"
 60 |     results = await memory_server.retrieve_memory(
 61 |         query=query,
 62 |         n_results=1
 63 |     )
 64 |     
 65 |     assert results is not None
 66 |     assert len(results) == 1
 67 |     assert "Paris" in results[0]  # The most relevant result should mention Paris
 68 | 
 69 | @pytest.mark.asyncio
 70 | async def test_search_by_tag(memory_server):
 71 |     """Test retrieving memories by tags."""
 72 |     # Store memory with tags
 73 |     await memory_server.store_memory(
 74 |         content="Paris is beautiful in spring",
 75 |         metadata={"tags": ["travel", "cities", "europe"]}
 76 |     )
 77 |     
 78 |     # Search by tags
 79 |     results = await memory_server.search_by_tag(
 80 |         tags=["travel", "europe"]
 81 |     )
 82 |     
 83 |     assert results is not None
 84 |     assert len(results) > 0
 85 |     assert "Paris" in results[0]
 86 | 
 87 | @pytest.mark.asyncio
 88 | async def test_delete_memory(memory_server):
 89 |     """Test deleting specific memories."""
 90 |     # Store a memory and get its hash
 91 |     content = "Memory to be deleted"
 92 |     response = await memory_server.store_memory(content=content)
 93 |     content_hash = response.get("hash")
 94 |     
 95 |     # Delete the memory
 96 |     delete_response = await memory_server.delete_memory(
 97 |         content_hash=content_hash
 98 |     )
 99 |     
100 |     assert delete_response.get("success") is True
101 |     
102 |     # Verify memory is deleted
103 |     results = await memory_server.exact_match_retrieve(content=content)
104 |     assert len(results) == 0
105 | 
106 | @pytest.mark.asyncio
107 | async def test_memory_with_empty_content(memory_server):
108 |     """Test handling of empty or invalid content."""
109 |     with pytest.raises(ValueError):
110 |         await memory_server.store_memory(content="")
111 | 
112 | @pytest.mark.asyncio
113 | async def test_memory_with_invalid_tags(memory_server):
114 |     """Test handling of invalid tags metadata."""
115 |     with pytest.raises(ValueError):
116 |         await memory_server.store_memory(
117 |             content="Test content",
118 |             metadata={"tags": "invalid"}  # Should be a list
119 |         )
```

--------------------------------------------------------------------------------
/claude_commands/memory-store.md:
--------------------------------------------------------------------------------

```markdown
 1 | # Store Memory with Context
 2 | 
 3 | I'll help you store information in your MCP Memory Service with proper context and tagging. This command captures the current session context and stores it as a persistent memory that can be recalled later.
 4 | 
 5 | ## What I'll do:
 6 | 
 7 | 1. **Detect Current Context**: I'll analyze the current working directory, recent files, and conversation context to understand what we're working on.
 8 | 
 9 | 2. **Capture Memory Content**: I'll take the provided information or current session summary and prepare it for storage.
10 | 
11 | 3. **Add Smart Tags**: I'll automatically generate relevant tags based on:
12 |    - Machine hostname (source identifier)
13 |    - Current project directory name
14 |    - Programming languages detected
15 |    - File types and patterns
16 |    - Any explicit tags you provide
17 | 
18 | 4. **Store with Metadata**: I'll include useful metadata like:
19 |    - Machine hostname for source tracking
20 |    - Timestamp and session context
21 |    - Project path and git repository info
22 |    - File associations and dependencies
23 | 
24 | ## Usage Examples:
25 | 
26 | ```bash
27 | claude /memory-store "We decided to use SQLite-vec instead of ChromaDB for better performance"
28 | claude /memory-store --tags "decision,architecture" "Database backend choice rationale"
29 | claude /memory-store --type "note" "Remember to update the Docker configuration after the database change"
30 | ```
31 | 
32 | ## Implementation:
33 | 
34 | I'll use a **hybrid remote-first approach** with local fallback for reliability:
35 | 
36 | ### Primary: Remote API Storage
37 | - **Try remote first**: `https://narrowbox.local:8443/api/memories` 
38 | - **Real-time sync**: Changes immediately available across all clients
39 | - **Single source of truth**: Consolidated database on remote server
40 | 
41 | ### Fallback: Local Staging
42 | - **If remote fails**: Store locally in staging database for later sync
43 | - **Offline capability**: Continue working when remote is unreachable  
44 | - **Auto-sync**: Changes pushed to remote when connectivity returns
45 | 
46 | ### Smart Sync Workflow
47 | ```
48 | 1. Try remote API directly (fastest path)
49 | 2. If offline/failed: Stage locally + notify user  
50 | 3. On reconnect: ./sync/memory_sync.sh automatically syncs
51 | 4. Conflict resolution: Remote wins, with user notification
52 | ```
53 | 
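The workflow above can be sketched as follows. The endpoint and the `X-Client-Hostname` header come from this document; the payload fields, staging schema, and function name are illustrative, and TLS verification is left at library defaults here (the real commands use curl's `-k` for self-signed certificates):

```python
import json
import socket
import sqlite3
import urllib.error
import urllib.request

REMOTE_API = "https://narrowbox.local:8443/api/memories"  # primary endpoint above

def store_with_fallback(content: str, tags=None, staging_db="staging.db") -> str:
    """Try the remote API first; stage the payload locally if it is unreachable."""
    payload = json.dumps({"content": content, "tags": tags or []}).encode()
    request = urllib.request.Request(
        REMOTE_API, data=payload,
        headers={"Content-Type": "application/json",
                 "X-Client-Hostname": socket.gethostname()})
    try:
        with urllib.request.urlopen(request, timeout=5):
            return "remote"
    except (urllib.error.URLError, OSError):
        # Offline or unreachable: stage locally for a later memory_sync.sh run
        con = sqlite3.connect(staging_db)
        con.execute("CREATE TABLE IF NOT EXISTS staged (payload TEXT)")
        con.execute("INSERT INTO staged (payload) VALUES (?)", (payload.decode(),))
        con.commit()
        con.close()
        return "staged"
```
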
54 | The content will be stored with automatic context detection:
55 | - **Machine Context**: Hostname automatically added as tag (e.g., "source:your-machine-name")
56 | - **Project Context**: Current directory, git repository, recent commits
57 | - **Session Context**: Current conversation topics and decisions
58 | - **Technical Context**: Programming language, frameworks, and tools in use
59 | - **Temporal Context**: Date, time, and relationship to recent activities
60 | 
61 | ### Service Endpoints:
62 | - **Primary API**: `https://narrowbox.local:8443/api/memories`
63 | - **Sync Status**: Use `./sync/memory_sync.sh status` to check pending changes
64 | - **Manual Sync**: Use `./sync/memory_sync.sh sync` for full synchronization
65 | 
 66 | I'll use the correct curl syntax with the `-k` flag for HTTPS, proper JSON payload formatting, and automatic client hostname detection using the `X-Client-Hostname` header.
67 | 
68 | ## Arguments:
69 | 
70 | - `$ARGUMENTS` - The content to store as memory, or additional flags:
71 |   - `--tags "tag1,tag2"` - Explicit tags to add
72 |   - `--type "note|decision|task|reference"` - Memory type classification
73 |   - `--project "name"` - Override project name detection
74 |   - `--private` - Mark as private/sensitive content
75 | 
76 | I'll store the memory automatically without asking for confirmation. The memory will be saved immediately using proper JSON formatting with the curl command. You'll receive a brief confirmation showing the content hash and applied tags after successful storage.
```

--------------------------------------------------------------------------------
/scripts/maintenance/delete_orphaned_vectors_fixed.py:
--------------------------------------------------------------------------------

```python
  1 | #!/usr/bin/env python3
  2 | """
  3 | Delete orphaned vectors from Cloudflare Vectorize using correct endpoint.
  4 | 
  5 | Uses /delete_by_ids (underscores, not hyphens) with proper JSON payload format.
  6 | """
  7 | 
  8 | import asyncio
  9 | import os
 10 | import sys
 11 | from pathlib import Path
 12 | 
 13 | 
 14 | async def main():
 15 |     # Set OAuth to false to avoid validation issues
 16 |     os.environ['MCP_OAUTH_ENABLED'] = 'false'
 17 | 
 18 |     # Import after setting environment
 19 |     from mcp_memory_service.storage.cloudflare import CloudflareStorage
 20 |     from mcp_memory_service.config import (
 21 |         CLOUDFLARE_API_TOKEN, CLOUDFLARE_ACCOUNT_ID,
 22 |         CLOUDFLARE_VECTORIZE_INDEX, CLOUDFLARE_D1_DATABASE_ID,
 23 |         EMBEDDING_MODEL_NAME
 24 |     )
 25 | 
 26 |     # Read vector IDs from the completed hash file
 27 |     hash_file = Path.home() / "cloudflare_d1_cleanup_completed.txt"
 28 | 
 29 |     if not hash_file.exists():
 30 |         print(f"❌ Error: Completed hash file not found: {hash_file}")
 31 |         print(f"   The D1 cleanup must be run first")
 32 |         sys.exit(1)
 33 | 
 34 |     print(f"📄 Reading vector IDs from: {hash_file}")
 35 | 
 36 |     with open(hash_file) as f:
 37 |         vector_ids = [line.strip() for line in f if line.strip()]
 38 | 
 39 |     if not vector_ids:
 40 |         print(f"✅ No vector IDs to delete (file is empty)")
 41 |         sys.exit(0)
 42 | 
 43 |     print(f"📋 Found {len(vector_ids)} orphaned vectors to delete")
 44 |     print(f"🔗 Connecting to Cloudflare...\n")
 45 | 
 46 |     # Initialize Cloudflare storage
 47 |     cloudflare = CloudflareStorage(
 48 |         api_token=CLOUDFLARE_API_TOKEN,
 49 |         account_id=CLOUDFLARE_ACCOUNT_ID,
 50 |         vectorize_index=CLOUDFLARE_VECTORIZE_INDEX,
 51 |         d1_database_id=CLOUDFLARE_D1_DATABASE_ID,
 52 |         embedding_model=EMBEDDING_MODEL_NAME
 53 |     )
 54 | 
 55 |     await cloudflare.initialize()
 56 | 
 57 |     print(f"✅ Connected to Cloudflare")
 58 |     print(f"🗑️  Deleting {len(vector_ids)} vectors using correct /delete_by_ids endpoint...\n")
 59 | 
 60 |     deleted = 0
 61 |     failed = []
 62 | 
 63 |     # Batch delete in groups of 100 (API recommended batch size)
 64 |     batch_size = 100
 65 |     total_batches = (len(vector_ids) + batch_size - 1) // batch_size
 66 | 
 67 |     for batch_num, i in enumerate(range(0, len(vector_ids), batch_size), 1):
 68 |         batch = vector_ids[i:i+batch_size]
 69 | 
 70 |         try:
 71 |             # Use the public API method for better encapsulation
 72 |             result = await cloudflare.delete_vectors_by_ids(batch)
 73 | 
 74 |             if result.get("success"):
 75 |                 deleted += len(batch)
 76 |                 mutation_id = result.get("result", {}).get("mutationId", "N/A")
 77 |                 print(f"Batch {batch_num}/{total_batches}: ✓ Deleted {len(batch)} vectors (mutation: {mutation_id[:16]}...)")
 78 |             else:
 79 |                 failed.extend(batch)
 80 |                 print(f"Batch {batch_num}/{total_batches}: ✗ Failed - {result.get('errors', 'Unknown error')}")
 81 | 
 82 |         except Exception as e:
 83 |             failed.extend(batch)
 84 |             print(f"Batch {batch_num}/{total_batches}: ✗ Exception - {str(e)[:100]}")
 85 | 
 86 |     # Final summary
 87 |     print(f"\n{'='*60}")
 88 |     print(f"📊 Vector Cleanup Summary")
 89 |     print(f"{'='*60}")
 90 |     print(f"✅ Successfully deleted: {deleted}/{len(vector_ids)}")
 91 |     print(f"✗  Failed: {len(failed)}/{len(vector_ids)}")
 92 |     print(f"{'='*60}\n")
 93 | 
 94 |     if deleted > 0:
 95 |         print(f"🎉 Vector cleanup complete!")
 96 |         print(f"📋 {deleted} orphaned vectors removed from Vectorize")
 97 |         print(f"⏱️  Note: Deletions are asynchronous and may take a few seconds to propagate\n")
 98 | 
 99 |     if failed:
100 |         print(f"⚠️  {len(failed)} vectors failed to delete")
101 |         print(f"   You may need to retry these manually\n")
102 | 
103 |     return 0 if len(failed) == 0 else 1
104 | 
105 | 
106 | if __name__ == "__main__":
107 |     sys.exit(asyncio.run(main()))
108 | 
```
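The batch arithmetic in the loop above (fixed-size slices plus ceiling division for the batch count) can be sketched on its own; the `chunks` helper name is mine, not from the script:

```python
def chunks(items, size=100):
    """Yield successive fixed-size batches, mirroring the deletion loop."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

ids = [f"vec-{n}" for n in range(250)]
batches = list(chunks(ids, 100))
# Ceiling division gives the batch count without importing math
total = (len(ids) + 100 - 1) // 100
```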

--------------------------------------------------------------------------------
/scripts/validation/verify_torch.py:
--------------------------------------------------------------------------------

```python
  1 | #!/usr/bin/env python3
  2 | # Copyright 2024 Heinrich Krupp
  3 | #
  4 | # Licensed under the Apache License, Version 2.0 (the "License");
  5 | # you may not use this file except in compliance with the License.
  6 | # You may obtain a copy of the License at
  7 | #
  8 | #     http://www.apache.org/licenses/LICENSE-2.0
  9 | #
 10 | # Unless required by applicable law or agreed to in writing, software
 11 | # distributed under the License is distributed on an "AS IS" BASIS,
 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 13 | # See the License for the specific language governing permissions and
 14 | # limitations under the License.
 15 | 
 16 | """
 17 | Verify PyTorch installation and functionality.
 18 | This script attempts to import PyTorch and run basic operations.
 19 | """
 20 | import os
 21 | import sys
 22 | import sysconfig
 23 | # Disable sitecustomize.py and other import hooks to prevent recursion issues
 24 | os.environ["PYTHONNOUSERSITE"] = "1"  # Disable user site-packages
 25 | os.environ["PYTHONPATH"] = ""  # Clear PYTHONPATH
 26 | 
 27 | def print_info(text):
 28 |     """Print formatted info text."""
 29 |     print(f"[INFO] {text}")
 30 | 
 31 | def print_error(text):
 32 |     """Print formatted error text."""
 33 |     print(f"[ERROR] {text}")
 34 | 
 35 | def print_success(text):
 36 |     """Print formatted success text."""
 37 |     print(f"[SUCCESS] {text}")
 38 | 
 39 | def print_warning(text):
 40 |     """Print formatted warning text."""
 41 |     print(f"[WARNING] {text}")
 42 | 
 43 | def verify_torch():
 44 |     """Verify PyTorch installation and functionality."""
 45 |     print_info("Verifying PyTorch installation")
 46 |     
 47 |     # Resolve site-packages portably (sys.prefix/'Lib' exists only on Windows)
 48 |     import sysconfig; site_packages = sysconfig.get_paths()['purelib']
 49 |     if site_packages not in sys.path:
 50 |         sys.path.insert(0, site_packages)
 51 |     
 52 |     # Print sys.path for debugging
 53 |     print_info("Python path:")
 54 |     for path in sys.path:
 55 |         print(f"  - {path}")
 56 |     
 57 |     # Try to import torch
 58 |     try:
 59 |         print_info("Attempting to import torch")
 60 |         import torch
 61 |         print_success(f"PyTorch is installed (version {torch.__version__})")
 62 |         print_info(f"PyTorch location: {torch.__file__}")
 63 |         
 64 |         # Check if CUDA is available
 65 |         if torch.cuda.is_available():
 66 |             print_success(f"CUDA is available (version {torch.version.cuda})")
 67 |             print_info(f"GPU: {torch.cuda.get_device_name(0)}")
 68 |             
 69 |             # Test a simple CUDA operation
 70 |             try:
 71 |                 x = torch.rand(5, 3).cuda()
 72 |                 y = torch.rand(5, 3).cuda()
 73 |                 z = x + y
 74 |                 print_success("Basic CUDA tensor operations work correctly")
 75 |             except Exception as e:
 76 |                 print_warning(f"CUDA tensor operations failed: {e}")
 77 |                 print_warning("Falling back to CPU mode")
 78 |         else:
 79 |             print_info("CUDA is not available, using CPU-only mode")
 80 |         
 81 |         # Test a simple tensor operation
 82 |         try:
 83 |             x = torch.rand(5, 3)
 84 |             y = torch.rand(5, 3)
 85 |             z = x + y
 86 |             print_success("Basic tensor operations work correctly")
 87 |         except Exception as e:
 88 |             print_error(f"Failed to perform basic tensor operations: {e}")
 89 |             return False
 90 |         
 91 |         return True
 92 |     except ImportError as e:
 93 |         print_error(f"PyTorch is not installed: {e}")
 94 |         return False
 95 |     except Exception as e:
 96 |         print_error(f"Error checking PyTorch installation: {e}")
 97 |         import traceback
 98 |         traceback.print_exc()
 99 |         return False
100 | 
101 | def main():
102 |     """Main function."""
103 |     if verify_torch():
104 |         print_success("PyTorch verification completed successfully")
105 |     else:
106 |         print_error("PyTorch verification failed")
107 |         sys.exit(1)
108 | 
109 | if __name__ == "__main__":
110 |     main()
```
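The verification pattern this script uses — attempt an operation and report success or the failure reason rather than crashing — generalizes to a small probe helper. This is a minimal sketch of that pattern, independent of PyTorch; the `probe` name is illustrative:

```python
def probe(label, fn):
    """Run a check, reporting success or the failure reason instead of raising."""
    try:
        fn()
        return (label, True, "")
    except Exception as exc:  # broad on purpose: any failure means "capability absent"
        return (label, False, str(exc))

results = [
    probe("addition", lambda: 1 + 1),
    probe("bad import", lambda: __import__("definitely_not_a_real_module")),
]
```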

--------------------------------------------------------------------------------
/scripts/migration/migrate_to_sqlite_vec.py:
--------------------------------------------------------------------------------

```python
  1 | #!/usr/bin/env python3
  2 | """
  3 | Simple migration script to help users migrate from ChromaDB to sqlite-vec.
  4 | This provides an easy way to switch to the lighter sqlite-vec backend.
  5 | """
  6 | 
  7 | import os
  8 | import sys
  9 | import asyncio
 10 | from pathlib import Path
 11 | 
 12 | # Add the parent scripts/ directory to the path so migrate_storage can be imported
 13 | sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..'))
 14 | 
 15 | from migrate_storage import MigrationTool
 16 | 
 17 | async def main():
 18 |     """Simple migration from ChromaDB to sqlite-vec with sensible defaults."""
 19 |     print("🔄 MCP Memory Service - Migrate to SQLite-vec")
 20 |     print("=" * 50)
 21 |     
 22 |     # Get default paths
 23 |     home = Path.home()
 24 |     if sys.platform == 'darwin':  # macOS
 25 |         base_dir = home / 'Library' / 'Application Support' / 'mcp-memory'
 26 |     elif sys.platform == 'win32':  # Windows
 27 |         base_dir = Path(os.getenv('LOCALAPPDATA', '')) / 'mcp-memory'
 28 |     else:  # Linux
 29 |         base_dir = home / '.local' / 'share' / 'mcp-memory'
 30 |     
 31 |     chroma_path = base_dir / 'chroma_db'
 32 |     sqlite_path = base_dir / 'sqlite_vec.db'
 33 |     backup_path = base_dir / 'migration_backup.json'
 34 |     
 35 |     print(f"📁 Source (ChromaDB): {chroma_path}")
 36 |     print(f"📁 Target (SQLite-vec): {sqlite_path}")
 37 |     print(f"💾 Backup: {backup_path}")
 38 |     print()
 39 |     
 40 |     # Check if source exists
 41 |     if not chroma_path.exists():
 42 |         print(f"❌ ChromaDB path not found: {chroma_path}")
 43 |         print("💡 Make sure you have some memories stored first.")
 44 |         return 1
 45 |     
 46 |     # Check if target already exists
 47 |     if sqlite_path.exists():
 48 |         response = input("⚠️  SQLite-vec database already exists. Overwrite? (y/N): ")
 49 |         if response.lower() != 'y':
 50 |             print("❌ Migration cancelled")
 51 |             return 1
 52 |     
 53 |     # Confirm migration
 54 |     print("🚀 Ready to migrate!")
 55 |     print("   This will:")
 56 |     print("   - Export all memories from ChromaDB")
 57 |     print("   - Create a backup file")
 58 |     print("   - Import memories to SQLite-vec")
 59 |     print()
 60 |     
 61 |     response = input("Continue? (Y/n): ")
 62 |     if response.lower() == 'n':
 63 |         print("❌ Migration cancelled")
 64 |         return 1
 65 |     
 66 |     # Perform migration
 67 |     migration_tool = MigrationTool()
 68 |     
 69 |     try:
 70 |         success = await migration_tool.migrate(
 71 |             from_backend='chroma',
 72 |             to_backend='sqlite_vec',
 73 |             source_path=str(chroma_path),
 74 |             target_path=str(sqlite_path),
 75 |             create_backup=True,
 76 |             backup_path=str(backup_path)
 77 |         )
 78 |         
 79 |         if success:
 80 |             print()
 81 |             print("✅ Migration completed successfully!")
 82 |             print()
 83 |             print("📝 Next steps:")
 84 |             print("   1. Set environment variable: MCP_MEMORY_STORAGE_BACKEND=sqlite_vec")
 85 |             print("   2. Restart your MCP client (Claude Desktop)")
 86 |             print("   3. Test that your memories are accessible")
 87 |             print()
 88 |             print("🔧 Environment variable examples:")
 89 |             print("   # Bash/Zsh:")
 90 |             print("   export MCP_MEMORY_STORAGE_BACKEND=sqlite_vec")
 91 |             print()
 92 |             print("   # Windows Command Prompt:")
 93 |             print("   set MCP_MEMORY_STORAGE_BACKEND=sqlite_vec")
 94 |             print()
 95 |             print("   # Windows PowerShell:")
 96 |             print("   $env:MCP_MEMORY_STORAGE_BACKEND='sqlite_vec'")
 97 |             print()
 98 |             print(f"💾 Backup available at: {backup_path}")
 99 |             return 0
100 |         else:
101 |             print("❌ Migration failed. Check logs for details.")
102 |             return 1
103 |             
104 |     except Exception as e:
105 |         print(f"❌ Migration failed: {e}")
106 |         return 1
107 | 
108 | if __name__ == "__main__":
109 |     sys.exit(asyncio.run(main()))
```
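The per-platform default directory logic above can be isolated into a pure function for testing; `default_base_dir` is an illustrative name, not part of the migration tool:

```python
from pathlib import Path

def default_base_dir(platform, home=None, local_appdata=""):
    """Mirror the per-platform default data directory chosen by the script."""
    home = home or Path.home()
    if platform == "darwin":   # macOS
        return home / "Library" / "Application Support" / "mcp-memory"
    if platform == "win32":    # Windows
        return Path(local_appdata) / "mcp-memory"
    return home / ".local" / "share" / "mcp-memory"  # Linux and others
```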

--------------------------------------------------------------------------------
/docs/development/code-quality/phase-2a-index.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Phase 2a Refactoring - Complete Documentation Index
  2 | 
  3 | **Status:** ✅ COMPLETE  
  4 | **Date:** November 24, 2025  
  5 | **Issue:** #246 - Code Quality Phase 2
  6 | 
  7 | ---
  8 | 
  9 | ## 📋 Documentation Files
 10 | 
 11 | ### 1. PHASE_2A_COMPLETION_REPORT.md
 12 | **Comprehensive completion report with full metrics**
 13 | 
 14 | - Executive summary of achievements
 15 | - Detailed before/after analysis for each function
 16 | - Quality improvements across all dimensions
 17 | - Test suite status verification
 18 | - Lessons learned and recommendations
 19 | - 433 lines of detailed analysis
 20 | 
 21 | **Read this for:** Complete project overview and detailed metrics
 22 | 
 23 | ### 2. REFACTORING_HANDLE_GET_PROMPT.md
 24 | **Function #6 refactoring specification - Latest completion**
 25 | 
 26 | - Function complexity reduction: 33 → 6 (82%)
 27 | - 5 specialized prompt handlers documented
 28 | - Design rationale and strategy
 29 | - Testing recommendations
 30 | - Code review checklist
 31 | - 194 lines of detailed specification
 32 | 
 33 | **Read this for:** In-depth look at the final refactoring completed
 34 | 
 35 | ---
 36 | 
 37 | ## 🔧 Code Changes
 38 | 
 39 | ### Modified Files
 40 | 
 41 | **src/mcp_memory_service/server.py**
 42 | - Refactored `handle_get_prompt()` method
 43 | - Created 5 new helper methods:
 44 |   - `_prompt_memory_review()`
 45 |   - `_prompt_memory_analysis()`
 46 |   - `_prompt_knowledge_export()`
 47 |   - `_prompt_memory_cleanup()`
 48 |   - `_prompt_learning_session()`
 49 | 
 50 | **src/mcp_memory_service/mcp_server.py**
 51 | - Fixed test collection error
 52 | - Added graceful FastMCP fallback
 53 | - `_DummyFastMCP` class for compatibility
 54 | 
 55 | ---
 56 | 
 57 | ## 📊 Summary Metrics
 58 | 
 59 | | Metric | Value |
 60 | |--------|-------|
 61 | | Functions Refactored | 6 of 27 (22%) |
 62 | | Average Complexity Reduction | 77% |
 63 | | Peak Complexity Reduction | 87% (62 → 8) |
 64 | | Tests Passing | 431 |
 65 | | Backward Compatibility | 100% |
 66 | | Health Score Improvement | 73/100 (target: 80/100) |
 67 | 
 68 | ---
 69 | 
 70 | ## ✅ Functions Completed
 71 | 
 72 | 1. **install.py::main()** - 62 → 8 (87% ↓)
 73 | 2. **sqlite_vec.py::initialize()** - Nesting 10 → 3 (70% ↓)
 74 | 3. **config.py::__main__()** - 42 (validated extraction)
 75 | 4. **oauth/authorization.py::token()** - 35 → 8 (77% ↓)
 76 | 5. **install_package()** - 33 → 7 (78% ↓)
 77 | 6. **handle_get_prompt()** - 33 → 6 (82% ↓) ⭐
 78 | 
 79 | ---
 80 | 
 81 | ## 🔗 Related Resources
 82 | 
 83 | - **GitHub Issue:** [#246 - Code Quality Phase 2](https://github.com/doobidoo/mcp-memory-service/issues/246)
 84 | - **Issue Comment:** [Phase 2a Progress Update](https://github.com/doobidoo/mcp-memory-service/issues/246#issuecomment-3572351946)
 85 | 
 86 | ---
 87 | 
 88 | ## 📈 Next Phases
 89 | 
 90 | ### Phase 2a Continuation
 91 | - 21 remaining high-complexity functions
 92 | - Estimated: 2-3 release cycles
 93 | - Apply same successful patterns
 94 | 
 95 | ### Phase 2b
 96 | - Code duplication consolidation
 97 | - 14 duplicate groups → reduce to <3%
 98 | - Estimated: 1-2 release cycles
 99 | 
100 | ### Phase 2c
101 | - Architecture compliance violations
102 | - 95.8% → 100% compliance
103 | - Estimated: 1 release cycle
104 | 
105 | ---
106 | 
107 | ## 🎯 How to Use This Documentation
108 | 
109 | **For Code Review:**
110 | 1. Start with PHASE_2A_COMPLETION_REPORT.md for overview
111 | 2. Review REFACTORING_HANDLE_GET_PROMPT.md for detailed design
112 | 3. Check git commits for actual code changes
113 | 
 114 | **For Continuation (Phase 2a):**
 115 | 1. Review quality improvements in PHASE_2A_COMPLETION_REPORT.md
 116 | 2. Follow the same patterns: dispatcher + specialized handlers
 117 | 3. Apply the extract-method pattern to reduce nesting
 118 | 4. Ensure backward compatibility is maintained
119 | 
120 | **For Future Refactoring:**
121 | - Use dispatcher pattern for multi-branch logic
122 | - Extract methods for nesting depth >3
123 | - Maintain single responsibility principle
124 | - Always keep backward compatibility
125 | 
126 | ---
127 | 
128 | ## 🚀 Key Achievements
129 | 
130 | ✅ 77% average complexity reduction  
131 | ✅ 100% backward compatibility  
132 | ✅ 431 tests passing  
133 | ✅ Clear path for Phase 2b & 2c  
134 | ✅ Comprehensive documentation  
135 | ✅ Ready for review and merge  
136 | 
137 | ---
138 | 
139 | **Last Updated:** November 24, 2025  
140 | **Status:** COMPLETE AND VERIFIED
141 | 
```
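The dispatcher-plus-handlers pattern recommended above can be sketched as follows; the handler names are illustrative, not the actual `_prompt_*` methods:

```python
def handle_review(args):
    """One small, independently testable branch of the old if/elif chain."""
    return f"review: {args}"

def handle_cleanup(args):
    return f"cleanup: {args}"

# Dispatch table: a dict lookup replaces the multi-branch conditional,
# the pattern behind the complexity reductions reported above.
HANDLERS = {
    "memory-review": handle_review,
    "memory-cleanup": handle_cleanup,
}

def handle_prompt(name, args):
    handler = HANDLERS.get(name)
    if handler is None:
        raise ValueError(f"Unknown prompt: {name}")
    return handler(args)
```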

--------------------------------------------------------------------------------
/scripts/testing/run_complete_test.py:
--------------------------------------------------------------------------------

```python
  1 | #!/usr/bin/env python3
  2 | # Copyright 2024 Heinrich Krupp
  3 | #
  4 | # Licensed under the Apache License, Version 2.0 (the "License");
  5 | # you may not use this file except in compliance with the License.
  6 | # You may obtain a copy of the License at
  7 | #
  8 | #     http://www.apache.org/licenses/LICENSE-2.0
  9 | #
 10 | # Unless required by applicable law or agreed to in writing, software
 11 | # distributed under the License is distributed on an "AS IS" BASIS,
 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 13 | # See the License for the specific language governing permissions and
 14 | # limitations under the License.
 15 | 
 16 | """
 17 | Complete test suite for the HTTP/SSE + SQLite-vec implementation.
 18 | Runs all tests in sequence to validate the entire system.
 19 | """
 20 | 
 21 | import subprocess
 22 | import sys
 23 | import time
 24 | import requests
 25 | from pathlib import Path
 26 | 
 27 | def check_server_health():
 28 |     """Check if the server is running and healthy."""
 29 |     try:
 30 |         response = requests.get("http://localhost:8000/api/health", timeout=5)
 31 |         return response.status_code == 200
 32 |     except requests.RequestException:
 33 |         return False
 34 | 
 35 | def run_test_script(script_name, description):
 36 |     """Run a test script and return success status."""
 37 |     print(f"\n{'='*60}")
 38 |     print(f"🧪 {description}")
 39 |     print('='*60)
 40 |     
 41 |     try:
 42 |         # Run the test script
 43 |         result = subprocess.run([
 44 |             sys.executable, 
 45 |             str(Path(__file__).parent / script_name)
 46 |         ], capture_output=True, text=True, timeout=60)
 47 |         
 48 |         if result.returncode == 0:
 49 |             print("✅ Test PASSED")
 50 |             if result.stdout:
 51 |                 print("Output:", result.stdout[-500:])  # Last 500 chars
 52 |             return True
 53 |         else:
 54 |             print("❌ Test FAILED")
 55 |             print("Error:", result.stderr)
 56 |             return False
 57 |             
 58 |     except subprocess.TimeoutExpired:
 59 |         print("⏰ Test TIMED OUT")
 60 |         return False
 61 |     except Exception as e:
 62 |         print(f"❌ Test ERROR: {e}")
 63 |         return False
 64 | 
 65 | def main():
 66 |     """Run the complete test suite."""
 67 |     print("🚀 MCP Memory Service - Complete Test Suite")
 68 |     print("=" * 60)
 69 |     
 70 |     # Check if server is running
 71 |     if not check_server_health():
 72 |         print("❌ Server is not running or not healthy!")
 73 |         print("💡 Please start the server first:")
 74 |         print("   python scripts/run_http_server.py")
 75 |         return 1
 76 |     
 77 |     print("✅ Server is healthy and ready for testing")
 78 |     
 79 |     # Test suite configuration
 80 |     tests = [
 81 |         ("test_memory_simple.py", "Memory CRUD Operations Test"),
 82 |         ("test_search_api.py", "Search API Functionality Test"),
 83 |         ("test_sse_events.py", "Real-time SSE Events Test"),
 84 |     ]
 85 |     
 86 |     results = []
 87 |     
 88 |     # Run each test
 89 |     for script, description in tests:
 90 |         success = run_test_script(script, description)
 91 |         results.append((description, success))
 92 |         
 93 |         if success:
 94 |             print(f"✅ {description} - PASSED")
 95 |         else:
 96 |             print(f"❌ {description} - FAILED")
 97 |         
 98 |         # Brief pause between tests
 99 |         time.sleep(2)
100 |     
101 |     # Summary
102 |     print(f"\n{'='*60}")
103 |     print("📊 TEST SUMMARY")
104 |     print('='*60)
105 |     
106 |     passed = sum(1 for _, success in results if success)
107 |     total = len(results)
108 |     
109 |     for description, success in results:
110 |         status = "✅ PASS" if success else "❌ FAIL"
111 |         print(f"{status} {description}")
112 |     
113 |     print(f"\nResults: {passed}/{total} tests passed")
114 |     
115 |     if passed == total:
116 |         print("\n🎉 ALL TESTS PASSED! System is working perfectly!")
117 |         return 0
118 |     else:
119 |         print(f"\n⚠️  {total - passed} tests failed. Check the logs above.")
120 |         return 1
121 | 
122 | if __name__ == "__main__":
123 |     sys.exit(main())
```
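The harness pattern used by `run_test_script` — run a child interpreter, capture its output, and report pass/fail — can be reduced to a minimal sketch (`run_check` is an illustrative name):

```python
import subprocess
import sys

def run_check(code, timeout=30):
    """Run a snippet in a child interpreter, as run_test_script does for scripts."""
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, timeout=timeout,
    )
    return result.returncode == 0, result.stdout

ok, out = run_check("print('ok')")
```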

--------------------------------------------------------------------------------
/tests/integration/test_store_memory.py:
--------------------------------------------------------------------------------

```python
  1 | #!/usr/bin/env python3
  2 | """
  3 | Test script to store a memory in the MCP Memory Service.
  4 | """
  5 | import asyncio
  6 | import json
  7 | import os
  8 | import sys
  9 | 
 10 | # Import MCP client
 11 | try:
 12 |     from mcp import ClientSession
 13 |     from mcp.client.stdio import stdio_client, StdioServerParameters
 14 | except ImportError as e:
 15 |     print(f"MCP client not found: {e}")
 16 |     print("Install with: pip install mcp")
 17 |     sys.exit(1)
 18 | 
 19 | async def store_memory():
 20 |     """Store a test memory."""
 21 |     try:
 22 |         # Configure MCP server connection
 23 |         server_params = StdioServerParameters(
 24 |             command="uv",
 25 |             args=["run", "memory", "server"],
 26 |             env=None
 27 |         )
 28 | 
 29 |         # Connect to memory service using stdio_client
 30 |         async with stdio_client(server_params) as (read, write):
 31 |             async with ClientSession(read, write) as session:
 32 |                 # Initialize the session
 33 |                 await session.initialize()
 34 |                 print("Connected to memory service!")
 35 | 
 36 |                 # List available tools
 37 |                 tools_response = await session.list_tools()
 38 |                 print(f"Found {len(tools_response.tools)} tools")
 39 | 
 40 |                 # Check if store_memory tool exists
 41 |                 if not any(tool.name == "store_memory" for tool in tools_response.tools):
 42 |                     print("ERROR: store_memory tool not found")
 43 |                     return
 44 | 
 45 |                 # Create a test memory
 46 |                 memory_data = {
 47 |                     "content": "This is a test memory created by the test_store_memory.py script.",
 48 |                     "metadata": {
 49 |                         "tags": "test,example,python",  # Comma-separated string format
 50 |                         "type": "note"
 51 |                     }
 52 |                 }
 53 | 
 54 |                 # Store the memory
 55 |                 print(f"\nStoring test memory: {memory_data['content']}")
 56 |                 result = await session.call_tool("store_memory", memory_data)
 57 | 
 58 |                 # Print result
 59 |                 if result:
 60 |                     print("\nResult:")
 61 |                     for content_item in result.content:
 62 |                         if hasattr(content_item, 'text'):
 63 |                             print(content_item.text)
 64 |                 else:
 65 |                     print("No result returned")
 66 | 
 67 |                 # Try to retrieve the memory
 68 |                 print("\nRetrieving memory...")
 69 |                 retrieve_result = await session.call_tool("retrieve_memory", {"query": "test memory", "n_results": 5})
 70 | 
 71 |                 # Print result
 72 |                 if retrieve_result:
 73 |                     print("\nRetrieve Result:")
 74 |                     for content_item in retrieve_result.content:
 75 |                         if hasattr(content_item, 'text'):
 76 |                             print(content_item.text)
 77 |                 else:
 78 |                     print("No retrieve result returned")
 79 | 
 80 |                 # Check database health
 81 |                 print("\nChecking database health...")
 82 |                 health_result = await session.call_tool("check_database_health", {})
 83 | 
 84 |                 # Print result
 85 |                 if health_result:
 86 |                     print("\nHealth Check Result:")
 87 |                     for content_item in health_result.content:
 88 |                         if hasattr(content_item, 'text'):
 89 |                             print(content_item.text)
 90 |                 else:
 91 |                     print("No health check result returned")
 92 | 
 93 |     except Exception as e:
 94 |         print(f"ERROR: {str(e)}")
 95 |         import traceback
 96 |         traceback.print_exc()
 97 | 
 98 | async def main():
 99 |     """Main function."""
100 |     print("=== MCP Memory Service Test: Store Memory ===\n")
101 |     await store_memory()
102 | 
103 | if __name__ == "__main__":
104 |     asyncio.run(main())
105 | 
```
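Since the tool expects tags as a comma-separated string, a small normalizer that accepts either form can be handy; `normalize_tags` is an illustrative helper, not part of the service API:

```python
def normalize_tags(tags):
    """Accept a list or a comma-separated string; return a clean list of tags."""
    if isinstance(tags, str):
        tags = tags.split(",")
    return [t.strip() for t in tags if t.strip()]
```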

--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------

```toml
  1 | [build-system]
  2 | requires = ["hatchling", "python-semantic-release", "build"]
  3 | build-backend = "hatchling.build"
  4 | 
  5 | [project]
  6 | name = "mcp-memory-service"
  7 | version = "8.42.0"
  8 | description = "Universal MCP memory service with semantic search, multi-client support, and autonomous consolidation for Claude Desktop, VS Code, and 13+ AI applications"
  9 | readme = "README.md"
 10 | requires-python = ">=3.10"
 11 | keywords = [
 12 |     "mcp", "model-context-protocol", "claude-desktop", "semantic-memory", 
 13 |     "vector-database", "ai-assistant", "sqlite-vec", "multi-client",
 14 |     "semantic-search", "memory-consolidation", "ai-productivity", "vs-code",
 15 |     "cursor", "continue", "fastapi", "developer-tools", "cross-platform"
 16 | ]
 17 | classifiers = [
 18 |     "Development Status :: 5 - Production/Stable",
 19 |     "Intended Audience :: Developers",
 20 |     "Topic :: Software Development :: Libraries :: Python Modules",
 21 |     "Topic :: Scientific/Engineering :: Artificial Intelligence",
 22 |     "Topic :: Database :: Database Engines/Servers",
 23 |     "License :: OSI Approved :: Apache Software License",
 24 |     "Programming Language :: Python :: 3",
 25 |     "Programming Language :: Python :: 3.10",
 26 |     "Programming Language :: Python :: 3.11", 
 27 |     "Programming Language :: Python :: 3.12",
 28 |     "Operating System :: OS Independent",
 29 |     "Environment :: Console",
 30 |     "Framework :: FastAPI"
 31 | ]
 32 | authors = [
 33 |     { name = "Heinrich Krupp", email = "[email protected]" }
 34 | ]
 35 | license = { text = "Apache-2.0" }
 36 | dependencies = [
 37 |     "tokenizers==0.20.3",
 38 |     "mcp>=1.0.0,<2.0.0",
 39 |     "sqlite-vec>=0.1.0",
 40 |     "build>=0.10.0",
 41 |     "aiohttp>=3.8.0",
 42 |     "fastapi>=0.115.0",
 43 |     "uvicorn>=0.30.0",
 44 |     "python-multipart>=0.0.9",
 45 |     "sse-starlette>=2.1.0",
 46 |     "aiofiles>=23.2.1",
 47 |     "psutil>=5.9.0",
 48 |     "zeroconf>=0.130.0",
 49 |     "pypdf2>=3.0.0",
 50 |     "chardet>=5.0.0",
 51 |     "click>=8.0.0",
 52 |     "httpx>=0.24.0",
 53 |     "authlib>=1.2.0",
 54 |     "python-jose[cryptography]>=3.3.0",
 55 |     "sentence-transformers>=2.2.2",
 56 |     "torch>=2.0.0",
 57 |     "typing-extensions>=4.0.0; python_version < '3.11'",
 58 |     "apscheduler>=3.11.0",
 59 | ]
 60 | 
 61 | [project.optional-dependencies]
 62 | # Machine learning dependencies for semantic search and embeddings
 63 | ml = [
 64 |     "sentence-transformers>=2.2.2",
 65 |     "torch>=2.0.0"
 66 | ]
 67 | # SQLite-vec with lightweight ONNX embeddings (recommended for most users)
 68 | sqlite = [
 69 |     "onnxruntime>=1.14.1"
 70 | ]
 71 | # SQLite-vec with full ML capabilities (for advanced features)
 72 | sqlite-ml = [
 73 |     "mcp-memory-service[sqlite,ml]"
 74 | ]
 75 | # Full installation including all optional dependencies
 76 | full = [
 77 |     "mcp-memory-service[sqlite,ml]"
 78 | ]
 79 | 
 80 | [project.scripts]
 81 | memory = "mcp_memory_service.cli.main:main"
 82 | memory-server = "mcp_memory_service.cli.main:memory_server_main"
 83 | mcp-memory-server = "mcp_memory_service.mcp_server:main"
 84 | 
 85 | [tool.hatch.build.targets.wheel]
 86 | packages = ["src/mcp_memory_service"]
 87 | 
 88 | [tool.hatch.version]
 89 | path = "src/mcp_memory_service/__init__.py"
 90 | 
 91 | [tool.semantic_release]
 92 | version_variable = [
 93 |     "src/mcp_memory_service/__init__.py:__version__",
 94 |     "pyproject.toml:version"
 95 | ]
 96 | branch = "main"
 97 | changelog_file = "CHANGELOG.md"
 98 | build_command = "pip install build && python -m build"
 99 | build_command_env = []
100 | dist_path = "dist/"
101 | upload_to_pypi = true
102 | upload_to_release = true
103 | commit_message = "chore(release): bump version to {version}"
104 | 
105 | [tool.semantic_release.commit_parser_options]
106 | allowed_tags = [
107 |     "build",
108 |     "chore",
109 |     "ci",
110 |     "docs",
111 |     "feat",
112 |     "fix",
113 |     "perf",
114 |     "style",
115 |     "refactor",
116 |     "test"
117 | ]
118 | minor_tags = ["feat"]
119 | patch_tags = ["fix", "perf"]
120 | 
121 | [tool.semantic_release.changelog]
122 | template_dir = "templates"
123 | changelog_sections = [
124 |     ["feat", "Features"],
125 |     ["fix", "Bug Fixes"],
126 |     ["perf", "Performance"],
127 |     ["refactor", "Code Refactoring"],
128 |     ["test", "Tests"]
129 | ]
130 | 
```
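Because `[tool.semantic_release].version_variable` lists two locations (`pyproject.toml` and `__init__.py`), a quick consistency check can catch a missed bump; this is a minimal sketch using inline stand-ins for the two files:

```python
import re

# Inline stand-ins for the contents of pyproject.toml and __init__.py
PYPROJECT = 'version = "8.42.0"'
INIT_PY = '__version__ = "8.42.0"'

def extract(text):
    """Pull the first X.Y.Z version string out of a file's text."""
    match = re.search(r'"(\d+\.\d+\.\d+)"', text)
    return match.group(1) if match else None

in_sync = extract(PYPROJECT) == extract(INIT_PY)
```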

--------------------------------------------------------------------------------
/archive/setup-development/STARTUP_SETUP_GUIDE.md:
--------------------------------------------------------------------------------

```markdown
  1 | # MCP Memory Service Auto-Startup Setup Guide
  2 | 
  3 | ## ✅ Files Created:
  4 | - `mcp-memory.service` - Systemd service configuration
  5 | - `install_service.sh` - Installation script  
  6 | - `service_control.sh` - Service management script
  7 | - `STARTUP_SETUP_GUIDE.md` - This guide
  8 | 
  9 | ## 🚀 Manual Installation Steps:
 10 | 
 11 | ### 1. Install the systemd service:
 12 | ```bash
 13 | # Run the installation script (requires sudo password)
 14 | sudo bash install_service.sh
 15 | ```
 16 | 
 17 | ### 2. Start the service immediately:
 18 | ```bash
 19 | sudo systemctl start mcp-memory
 20 | ```
 21 | 
 22 | ### 3. Check service status:
 23 | ```bash
 24 | sudo systemctl status mcp-memory
 25 | ```
 26 | 
 27 | ### 4. View service logs:
 28 | ```bash
 29 | sudo journalctl -u mcp-memory -f
 30 | ```
 31 | 
 32 | ## 🛠️ Service Management Commands:
 33 | 
 34 | ### Using the control script:
 35 | ```bash
 36 | ./service_control.sh start     # Start service
 37 | ./service_control.sh stop      # Stop service  
 38 | ./service_control.sh restart   # Restart service
 39 | ./service_control.sh status    # Show status
 40 | ./service_control.sh logs      # View live logs
 41 | ./service_control.sh health    # Test API health
 42 | ./service_control.sh enable    # Enable startup
 43 | ./service_control.sh disable   # Disable startup
 44 | ```
 45 | 
 46 | ### Using systemctl directly:
 47 | ```bash
 48 | sudo systemctl start mcp-memory      # Start now
 49 | sudo systemctl stop mcp-memory       # Stop now
 50 | sudo systemctl restart mcp-memory    # Restart now
 51 | sudo systemctl status mcp-memory     # Check status
 52 | sudo systemctl enable mcp-memory     # Enable startup (already done)
 53 | sudo systemctl disable mcp-memory    # Disable startup
 54 | sudo journalctl -u mcp-memory -f     # Live logs
 55 | ```
 56 | 
 57 | ## 📋 Service Configuration:
 58 | 
 59 | ### Generated API Key:
 60 | ```
 61 | mcp-83c9840168aac025986cc4bc29e411bb
 62 | ```
 63 | 
 64 | ### Service Details:
 65 | - **Service Name**: `mcp-memory.service`
 66 | - **User**: hkr
 67 | - **Working Directory**: `/home/hkr/repositories/mcp-memory-service`
 68 | - **Auto-restart**: Yes (on failure)
 69 | - **Startup**: Enabled (starts on boot)
 70 | 
 71 | ### Environment Variables:
 72 | - `MCP_CONSOLIDATION_ENABLED=true`
 73 | - `MCP_MDNS_ENABLED=true`
 74 | - `MCP_HTTPS_ENABLED=true`
 75 | - `MCP_MDNS_SERVICE_NAME="MCP Memory"`
 76 | - `MCP_HTTP_HOST=0.0.0.0`
 77 | - `MCP_HTTP_PORT=8000`
 78 | - `MCP_MEMORY_STORAGE_BACKEND=sqlite_vec`
 79 | 
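For reference, the service details and environment variables above suggest a unit roughly like the following. This is a hypothetical sketch, not the actual file: the real unit is generated by `install_service.sh`, and its exact contents may differ.

```ini
# Hypothetical sketch of /etc/systemd/system/mcp-memory.service
[Unit]
Description=MCP Memory Service
After=network.target

[Service]
Type=simple
User=hkr
WorkingDirectory=/home/hkr/repositories/mcp-memory-service
Environment=MCP_HTTP_HOST=0.0.0.0
Environment=MCP_HTTP_PORT=8000
Environment=MCP_MEMORY_STORAGE_BACKEND=sqlite_vec
ExecStart=/home/hkr/repositories/mcp-memory-service/venv/bin/python scripts/run_http_server.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```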
 80 | ## 🌐 Access Points:
 81 | 
 82 | Once running, the service will be available at:
 83 | - **Dashboard**: https://localhost:8000
 84 | - **API Documentation**: https://localhost:8000/api/docs
 85 | - **Health Check**: https://localhost:8000/api/health
 86 | - **SSE Events**: https://localhost:8000/api/events
 87 | - **mDNS Name**: `MCP Memory._mcp-memory._tcp.local.`
 88 | 
 89 | ## 🔧 Troubleshooting:
 90 | 
 91 | ### If service fails to start:
 92 | ```bash
 93 | # Check detailed logs
 94 | sudo journalctl -u mcp-memory --no-pager
 95 | 
 96 | # Check if virtual environment exists
 97 | ls -la /home/hkr/repositories/mcp-memory-service/venv/
 98 | 
 99 | # Test manual startup
100 | cd /home/hkr/repositories/mcp-memory-service
101 | source venv/bin/activate
102 | python scripts/run_http_server.py
103 | ```
104 | 
105 | ### If port 8000 is in use:
106 | ```bash
107 | # Check what's using port 8000
108 | sudo ss -tlnp | grep :8000
109 | 
110 | # Or change port in service file
111 | sudo nano /etc/systemd/system/mcp-memory.service
112 | # Edit: Environment=MCP_HTTP_PORT=8001
113 | sudo systemctl daemon-reload
114 | sudo systemctl restart mcp-memory
115 | ```
116 | 
117 | ## 🗑️ Uninstallation:
118 | 
119 | To remove the service:
120 | ```bash
121 | ./service_control.sh uninstall
122 | ```
123 | 
124 | Or manually:
125 | ```bash
126 | sudo systemctl stop mcp-memory
127 | sudo systemctl disable mcp-memory
128 | sudo rm /etc/systemd/system/mcp-memory.service
129 | sudo systemctl daemon-reload
130 | ```
131 | 
132 | ## ✅ Success Verification:
133 | 
134 | After installation, verify everything works:
135 | ```bash
136 | # 1. Check service is running
137 | sudo systemctl status mcp-memory
138 | 
139 | # 2. Test API health
140 | curl -k https://localhost:8000/api/health
141 | 
142 | # 3. Check mDNS discovery
143 | avahi-browse -t _mcp-memory._tcp
144 | 
145 | # 4. View live logs
146 | sudo journalctl -u mcp-memory -f
147 | ```
148 | 
149 | The service will now start automatically on every system boot! 🎉
```

--------------------------------------------------------------------------------
/archive/docs-removed-2025-08-23/claude-code-compatibility.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Claude Code Compatibility Guide
  2 | 
  3 | ## Overview
  4 | 
  5 | The MCP Memory Service FastAPI server v4.0.0-alpha.1 implements the official MCP protocol but has specific compatibility considerations with Claude Code's SSE client implementation.
  6 | 
  7 | ## Current Status
  8 | 
  9 | ### ✅ Working MCP Clients
 10 | - **Standard MCP Libraries**: Python `mcp` package, JavaScript MCP SDK
 11 | - **Claude Desktop**: Works with proper MCP configuration
 12 | - **Custom MCP Clients**: Any client implementing standard MCP protocol
 13 | - **HTTP API**: Full REST API access via port 8080
 14 | 
 15 | ### ❌ Claude Code SSE Client Issue
 16 | 
 17 | **Problem**: Claude Code's SSE client has specific header and protocol requirements that don't match the FastMCP server implementation.
 18 | 
 19 | **Technical Details**:
 20 | - FastMCP server requires `Accept: application/json, text/event-stream` headers
 21 | - Claude Code's SSE client doesn't send the required header combination
 22 | - Server correctly rejects invalid SSE connections with proper error messages
 23 | 
 24 | **Error Symptoms**:
 25 | ```bash
 26 | claude mcp list
 27 | # Output: memory: http://10.0.1.30:8000/mcp (SSE) - ✗ Failed to connect
 28 | ```
 29 | 
 30 | ## Workarounds for Claude Code Users
 31 | 
 32 | ### Option 1: Use HTTP Dashboard
 33 | ```bash
 34 | # Access memory service via web interface
 35 | open http://memory.local:8080/
 36 | 
 37 | # Use API endpoints directly
 38 | curl -X POST http://memory.local:8080/api/memories \
 39 |   -H "Content-Type: application/json" \
 40 |   -d '{"content": "My memory", "tags": ["important"]}'
 41 | ```
 42 | 
 43 | ### Option 2: Use Claude Commands (Recommended)
 44 | ```bash
 45 | # Install Claude Code commands (bypass MCP entirely)
 46 | python install.py --install-claude-commands
 47 | 
 48 | # Use conversational memory commands
 49 | claude /memory-store "Important information"
 50 | claude /memory-recall "what did we discuss?"
 51 | claude /memory-search --tags "project,architecture"
 52 | ```
 53 | 
 54 | ### Option 3: Use Alternative MCP Client
 55 | ```python
 56 | # Python example with standard MCP client
 57 | import asyncio
 58 | from mcp import ClientSession
 59 | from mcp.client.stdio import stdio_client
 60 | 
 61 | async def test_memory():
 62 |     # This works with standard MCP protocol
 63 |     # Implementation details for your specific needs
 64 |     pass
 65 | ```
 66 | 
 67 | ## Technical Investigation Results
 68 | 
 69 | ### Server Verification ✅
 70 | ```bash
 71 | # Server correctly implements MCP protocol
 72 | curl -H "Accept: text/event-stream, application/json" \
 73 |      -H "Content-Type: application/json" \
 74 |      -X POST http://memory.local:8000/mcp \
 75 |      -d '{"jsonrpc":"2.0","id":"test","method":"tools/list","params":{}}'
 76 | # Result: 200 OK, SSE stream established
 77 | ```
 78 | 
 79 | ### Claude Code Client Issue ❌
 80 | ```bash
 81 | # Claude Code client fails header negotiation
 82 | # Missing required Accept header combination
 83 | # Connection rejected with 406 Not Acceptable
 84 | ```
 85 | 
 86 | ## Development Roadmap
 87 | 
 88 | ### Short Term (Next Release)
 89 | - [ ] Investigate Claude Code's exact SSE client requirements
 90 | - [ ] Consider server-side compatibility layer
 91 | - [ ] Expand client compatibility testing
 92 | 
 93 | ### Medium Term 
 94 | - [ ] Custom SSE implementation for Claude Code compatibility
 95 | - [ ] Alternative transport protocols (WebSocket, HTTP long-polling)
 96 | - [ ] Client library development
 97 | 
 98 | ### Long Term
 99 | - [ ] Collaborate with Claude Code team on SSE standardization
100 | - [ ] MCP protocol enhancement proposals
101 | - [ ] Universal client compatibility layer
102 | 
103 | ## Conclusion
104 | 
105 | The FastAPI MCP migration successfully achieved its primary goals:
106 | - ✅ SSL connectivity issues resolved
107 | - ✅ Standard MCP protocol compliance
108 | - ✅ Production-ready architecture
109 | 
110 | The Claude Code compatibility issue is a client-specific limitation that doesn't impact the core migration success. Users have multiple viable workarounds available.
111 | 
112 | ## Support
113 | 
114 | - **HTTP Dashboard**: http://memory.local:8080/
115 | - **Documentation**: See `DUAL_SERVICE_DEPLOYMENT.md`
116 | - **Issues**: Report at https://github.com/doobidoo/mcp-memory-service/issues
117 | - **Claude Commands**: See `docs/guides/claude-code-quickstart.md`
```

--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/feature_request.yml:
--------------------------------------------------------------------------------

```yaml
  1 | name: Feature Request
  2 | description: Suggest a new feature or enhancement
  3 | title: "[Feature]: "
  4 | labels: ["enhancement", "triage"]
  5 | body:
  6 |   - type: markdown
  7 |     attributes:
  8 |       value: |
  9 |         Thank you for suggesting a feature! Please provide details about your use case and proposed solution.
 10 | 
 11 |   - type: textarea
 12 |     id: problem
 13 |     attributes:
 14 |       label: Problem or Use Case
 15 |       description: What problem does this feature solve? What are you trying to accomplish?
 16 |       placeholder: |
 17 |         I'm trying to... but currently the system doesn't support...
 18 |         This would help with...
 19 |     validations:
 20 |       required: true
 21 | 
 22 |   - type: textarea
 23 |     id: solution
 24 |     attributes:
 25 |       label: Proposed Solution
 26 |       description: How would you like this feature to work?
 27 |       placeholder: |
 28 |         Add a new MCP tool that allows...
 29 |         The API endpoint should accept...
 30 |         When the user runs...
 31 |     validations:
 32 |       required: true
 33 | 
 34 |   - type: textarea
 35 |     id: alternatives
 36 |     attributes:
 37 |       label: Alternatives Considered
 38 |       description: What other approaches have you considered? How do you currently work around this?
 39 |       placeholder: |
 40 |         I've tried using... but it doesn't work because...
 41 |         Other projects solve this by...
 42 | 
 43 |   - type: dropdown
 44 |     id: component
 45 |     attributes:
 46 |       label: Component Affected
 47 |       description: Which part of the system would this feature affect?
 48 |       options:
 49 |         - Storage Backend (sqlite/cloudflare/hybrid)
 50 |         - MCP Tools (memory operations)
 51 |         - HTTP API (dashboard/REST endpoints)
 52 |         - Document Ingestion (PDF/DOCX/PPTX)
 53 |         - Claude Code Integration (hooks/commands)
 54 |         - Configuration/Setup
 55 |         - Documentation
 56 |         - Testing/CI
 57 |         - Other
 58 |     validations:
 59 |       required: true
 60 | 
 61 |   - type: dropdown
 62 |     id: priority
 63 |     attributes:
 64 |       label: Priority
 65 |       description: How important is this feature to your workflow?
 66 |       options:
 67 |         - Critical (blocking my work)
 68 |         - High (significant improvement)
 69 |         - Medium (nice to have)
 70 |         - Low (future consideration)
 71 |     validations:
 72 |       required: true
 73 | 
 74 |   - type: textarea
 75 |     id: examples
 76 |     attributes:
 77 |       label: Examples or Mockups
 78 |       description: |
 79 |         Provide examples of how this would work:
 80 |         - API request/response examples
 81 |         - CLI command examples
 82 |         - UI mockups (for dashboard features)
 83 |         - Code snippets
 84 |       placeholder: |
 85 |         # Example usage
 86 |         claude /memory-export --format json --tags important
 87 | 
 88 |         # Expected output
 89 |         {"memories": [...], "count": 42}
 90 |       render: shell
 91 | 
 92 |   - type: textarea
 93 |     id: impact
 94 |     attributes:
 95 |       label: Impact on Existing Functionality
 96 |       description: Would this change affect existing features or require breaking changes?
 97 |       placeholder: |
 98 |         This would require...
 99 |         Existing users would need to...
100 |         Backward compatibility...
101 | 
102 |   - type: textarea
103 |     id: similar
104 |     attributes:
105 |       label: Similar Features in Other Projects
106 |       description: Are there similar features in other projects we can learn from?
107 |       placeholder: |
108 |         Project X implements this as...
109 |         Library Y has a similar API that works like...
110 | 
111 |   - type: checkboxes
112 |     id: checks
113 |     attributes:
114 |       label: Pre-submission Checklist
115 |       description: Please verify you've completed these steps
116 |       options:
117 |         - label: I've searched existing issues and feature requests
118 |           required: true
119 |         - label: I've described a specific use case (not just "it would be nice")
120 |           required: true
121 |         - label: I've considered the impact on existing functionality
122 |           required: true
123 |         - label: I'm willing to help test this feature once implemented
124 |           required: false
125 | 
```

--------------------------------------------------------------------------------
/scripts/validation/check_dev_setup.py:
--------------------------------------------------------------------------------

```python
  1 | #!/usr/bin/env python3
  2 | """
  3 | Verify development environment setup for MCP Memory Service.
  4 | Detects common issues like stale venv packages vs updated source code.
  5 | 
  6 | Usage:
  7 |     python scripts/validation/check_dev_setup.py
  8 | 
  9 | Exit codes:
 10 |     0 - Development environment is correctly configured
 11 |     1 - Critical issues detected (editable install missing or version mismatch)
 12 | """
 13 | import sys
 14 | import os
 15 | from pathlib import Path
 16 | 
 17 | def check_editable_install():
 18 |     """Check if package is installed in editable mode."""
 19 |     try:
 20 |         import mcp_memory_service
 21 |         package_location = Path(mcp_memory_service.__file__).parent
 22 | 
 23 |         # Check if location is in source directory (editable) or site-packages
 24 |         if 'site-packages' in str(package_location):
 25 |             return False, str(package_location)
 26 |         else:
 27 |             return True, str(package_location)
 28 |     except ImportError:
 29 |         return None, "Package not installed"
 30 | 
 31 | def check_version_match():
 32 |     """Check if installed version matches source code version."""
 33 |     # Read source version
 34 |     repo_root = Path(__file__).parent.parent.parent
 35 |     init_file = repo_root / "src" / "mcp_memory_service" / "__init__.py"
 36 |     source_version = None
 37 | 
 38 |     if not init_file.exists():
 39 |         return None, "Unknown", "Source file not found"
 40 | 
 41 |     with open(init_file) as f:
 42 |         for line in f:
 43 |             if line.startswith('__version__'):
 44 |                 source_version = line.split('=')[1].strip().strip('"\'')
 45 |                 break
 46 | 
 47 |     # Get installed version
 48 |     try:
 49 |         import mcp_memory_service
 50 |         installed_version = mcp_memory_service.__version__
 51 |     except ImportError:
 52 |         return None, source_version, "Not installed"
 53 | 
 54 |     if source_version is None:
 55 |         return None, "Unknown", installed_version
 56 | 
 57 |     return source_version == installed_version, source_version, installed_version
 58 | 
 59 | def main():
 60 |     print("=" * 70)
 61 |     print("MCP Memory Service - Development Environment Check")
 62 |     print("=" * 70)
 63 | 
 64 |     has_error = False
 65 | 
 66 |     # Check 1: Editable install
 67 |     print("\n[1/2] Checking installation mode...")
 68 |     is_editable, location = check_editable_install()
 69 | 
 70 |     if is_editable is None:
 71 |         print("  ❌ CRITICAL: Package not installed")
 72 |         print(f"     Location: {location}")
 73 |         print("\n  Fix: pip install -e .")
 74 |         has_error = True
 75 |     elif not is_editable:
 76 |         print("  ⚠️  WARNING: Package installed in site-packages (not editable)")
 77 |         print(f"     Location: {location}")
 78 |         print("\n  This means source code changes won't take effect!")
 79 |         print("  Fix: pip uninstall mcp-memory-service && pip install -e .")
 80 |         has_error = True
 81 |     else:
 82 |         print("  ✅ OK: Editable install detected")
 83 |         print(f"     Location: {location}")
 84 | 
 85 |     # Check 2: Version match
 86 |     print("\n[2/2] Checking version consistency...")
 87 |     versions_match, source_ver, installed_ver = check_version_match()
 88 | 
 89 |     if versions_match is None:
 90 |         print("  ⚠️  WARNING: Could not determine versions")
 91 |         print(f"     Source:    {source_ver}")
 92 |         print(f"     Installed: {installed_ver}")
 93 |     elif not versions_match:
 94 |         print("  ❌ CRITICAL: Version mismatch detected!")
 95 |         print(f"     Source code: v{source_ver}")
 96 |         print(f"     Installed:   v{installed_ver}")
 97 |         print("\n  This is the 'stale venv' issue!")
 98 |         print("  Fix: pip install -e . --force-reinstall")
 99 |         has_error = True
100 |     else:
101 |         print(f"  ✅ OK: Versions match (v{source_ver})")
102 | 
103 |     print("\n" + "=" * 70)
104 |     if has_error:
105 |         print("❌ Development environment has CRITICAL issues!")
106 |         print("=" * 70)
107 |         sys.exit(1)
108 |     else:
109 |         print("✅ Development environment is correctly configured!")
110 |         print("=" * 70)
111 |         sys.exit(0)
112 | 
113 | if __name__ == "__main__":
114 |     main()
115 | 
```

--------------------------------------------------------------------------------
/docs/amp-cli-bridge.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Amp CLI Bridge (Semi-Automated Workflow)
  2 | 
  3 | **Purpose**: Leverage Amp CLI capabilities (research, code analysis, web search) from Claude Code without consuming Claude Code credits, using a semi-automated file-based workflow.
  4 | 
  5 | ## Quick Start
  6 | 
  7 | **1. Claude Code creates prompt**:
  8 | ```
  9 | You: "Use @agent-amp-bridge to research TypeScript 5.0 features"
 10 | Claude: [Creates prompt file and shows command]
 11 | ```
 12 | 
 13 | **2. Run the command shown**:
 14 | ```bash
 15 | amp @.claude/amp/prompts/pending/research-xyz.json
 16 | ```
 17 | 
 18 | **3. Amp processes and writes response automatically**
 19 | 
 20 | **4. Claude Code continues automatically**
 21 | 
 22 | ## Architecture
 23 | 
 24 | ```
 25 | Claude Code (@agent-amp-bridge) → .claude/amp/prompts/pending/{uuid}.json
 26 |                                             ↓
 27 |                           You run: amp @prompts/pending/{uuid}.json
 28 |                                             ↓
 29 |                           Amp writes: responses/ready/{uuid}.json
 30 |                                             ↓
 31 |                    Claude Code reads response ← Workflow continues
 32 | ```
 33 | 
 34 | ## File Structure
 35 | 
 36 | ```
 37 | .claude/amp/
 38 | ├── prompts/
 39 | │   └── pending/        # Prompts waiting for you to process
 40 | ├── responses/
 41 | │   ├── ready/          # Responses written by Amp
 42 | │   └── consumed/       # Archive of processed responses
 43 | └── README.md           # Documentation
 44 | ```
 45 | 
 46 | ## Message Format
 47 | 
 48 | **Prompt** (`.claude/amp/prompts/pending/{uuid}.json`):
 49 | ```json
 50 | {
 51 |   "id": "550e8400-e29b-41d4-a716-446655440000",
 52 |   "timestamp": "2025-11-04T20:00:00.000Z",
 53 |   "prompt": "Research async/await best practices in Python",
 54 |   "context": {
 55 |     "project": "mcp-memory-service",
 56 |     "cwd": "/path/to/project"
 57 |   },
 58 |   "options": {
 59 |     "timeout": 300000,
 60 |     "format": "markdown"
 61 |   }
 62 | }
 63 | ```
 64 | 
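A prompt file in this format can be produced with stdlib Python alone. `create_prompt` below is a hypothetical helper (not part of the bridge) showing the shape the agent writes; field names mirror the schema above.

```python
# Hypothetical helper illustrating the prompt schema -- not shipped with the bridge.
import json
import uuid
from datetime import datetime, timezone
from pathlib import Path

def create_prompt(prompt_text, pending_dir=".claude/amp/prompts/pending", **context):
    """Write a prompt file in the format above and return its path."""
    pid = str(uuid.uuid4())
    message = {
        "id": pid,
        "timestamp": datetime.now(timezone.utc)
            .isoformat(timespec="milliseconds").replace("+00:00", "Z"),
        "prompt": prompt_text,
        "context": context,
        "options": {"timeout": 300000, "format": "markdown"},
    }
    path = Path(pending_dir) / f"{pid}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(message, indent=2))
    return path
```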
 65 | **Response** (`.claude/amp/responses/ready/{uuid}.json`):
 66 | ```json
 67 | {
 68 |   "id": "550e8400-e29b-41d4-a716-446655440000",
 69 |   "timestamp": "2025-11-04T20:05:00.000Z",
 70 |   "success": true,
 71 |   "output": "## Async/Await Best Practices\n\n...",
 72 |   "error": null,
 73 |   "duration": 300000
 74 | }
 75 | ```
 76 | 
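On the Claude Code side, waiting for the response reduces to polling `responses/ready/` for the matching id. `wait_for_response` is a hypothetical stdlib sketch of that loop (the actual agent's implementation may differ):

```python
# Hypothetical sketch of the response-polling loop -- not the agent's actual code.
import json
import time
from pathlib import Path

def wait_for_response(response_dir, prompt_id, timeout=300.0, poll=1.0):
    """Poll response_dir until {prompt_id}.json appears, then parse and return it."""
    path = Path(response_dir) / f"{prompt_id}.json"
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if path.exists():
            return json.loads(path.read_text())
        time.sleep(poll)
    raise TimeoutError(f"no response for {prompt_id} within {timeout}s")
```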
 77 | ## Configuration
 78 | 
 79 | **File**: `.claude/amp/config.json` (the comments below are explanatory only; the file on disk must be plain JSON without them)
 80 | 
 81 | ```jsonc
 82 | {
 83 |   "pollInterval": 1000,      // Check for new prompts every 1s
 84 |   "timeout": 300000,          // 5 minute timeout per prompt
 85 |   "debug": false,             // Enable debug logging
 86 |   "ampCommand": "amp"         // Amp CLI command
 87 | }
 88 | ```
 89 | 
 90 | ## Use Cases
 91 | 
 92 | - Web Research: "Research latest React 18 features"
 93 | - Code Analysis: "Analyze our storage backend architecture"
 94 | - Documentation: "Generate API docs for MCP tools"
 95 | - Code Generation: "Create TypeScript type definitions"
 96 | - Best Practices: "Find OAuth 2.1 security recommendations"
 97 | 
 98 | ## Manual Inspection (Optional)
 99 | 
100 | ```bash
101 | # List pending prompts
102 | ls -lt .claude/amp/prompts/pending/
103 | 
104 | # View prompt content
105 | cat .claude/amp/prompts/pending/{uuid}.json | jq -r '.prompt'
106 | ```
107 | 
108 | ## Troubleshooting
109 | 
110 | **Amp CLI credit errors:**
111 | ```bash
112 | # Test if Amp is authenticated
113 | amp
114 | 
115 | # If credits exhausted, check status
116 | # https://ampcode.com/settings
117 | ```
118 | 
119 | **Response not appearing:**
120 | ```bash
121 | # Verify Amp wrote the file
122 | ls -lt .claude/amp/responses/ready/
123 | ```
124 | 
125 | **Permission issues:**
126 | ```bash
127 | # Ensure directories exist
128 | ls -la .claude/amp/
129 | 
130 | # Check write permissions
131 | touch .claude/amp/responses/ready/test.json && rm .claude/amp/responses/ready/test.json
132 | ```
133 | 
134 | ## Benefits
135 | 
136 | - Zero Claude Code Credits: Uses your separate Amp session
137 | - Uses Free Tier: Works with Amp's free tier (when credits available)
138 | - Simple Workflow: No background processes
139 | - Full Control: You decide when/what to process
140 | - Fault Tolerant: File-based queue survives crashes
141 | - Audit Trail: All prompts/responses saved
142 | - Reusable: Can replay prompts or review past responses
143 | 
144 | ## Limitations
145 | 
146 | - Manual Step Required: You must run the `amp @` command
147 | - Amp Credits: Still consumes Amp API credits
148 | - Semi-Async: Claude Code waits for you to process
149 | - Best for Research: Optimized for async research tasks, not real-time chat
150 | 
```