This is page 4 of 62. Use http://codebase.md/doobidoo/mcp-memory-service?lines=true&page={x} to view the full context.

# Directory Structure

```
├── .claude
│   ├── agents
│   │   ├── amp-bridge.md
│   │   ├── amp-pr-automator.md
│   │   ├── code-quality-guard.md
│   │   ├── gemini-pr-automator.md
│   │   └── github-release-manager.md
│   ├── commands
│   │   ├── README.md
│   │   ├── refactor-function
│   │   ├── refactor-function-prod
│   │   └── refactor-function.md
│   ├── consolidation-fix-handoff.md
│   ├── consolidation-hang-fix-summary.md
│   ├── directives
│   │   ├── agents.md
│   │   ├── code-quality-workflow.md
│   │   ├── consolidation-details.md
│   │   ├── development-setup.md
│   │   ├── hooks-configuration.md
│   │   ├── memory-first.md
│   │   ├── memory-tagging.md
│   │   ├── pr-workflow.md
│   │   ├── quality-system-details.md
│   │   ├── README.md
│   │   ├── refactoring-checklist.md
│   │   ├── storage-backends.md
│   │   └── version-management.md
│   ├── prompts
│   │   └── hybrid-cleanup-integration.md
│   ├── settings.local.json.backup
│   └── settings.local.json.local
├── .commit-message
├── .coveragerc
├── .dockerignore
├── .env.example
├── .env.sqlite.backup
├── .envnn#
├── .gitattributes
├── .github
│   ├── FUNDING.yml
│   ├── ISSUE_TEMPLATE
│   │   ├── bug_report.yml
│   │   ├── config.yml
│   │   ├── feature_request.yml
│   │   └── performance_issue.yml
│   ├── pull_request_template.md
│   └── workflows
│       ├── bridge-tests.yml
│       ├── CACHE_FIX.md
│       ├── claude-branch-automation.yml
│       ├── claude-code-review.yml
│       ├── claude.yml
│       ├── cleanup-images.yml.disabled
│       ├── dev-setup-validation.yml
│       ├── docker-publish.yml
│       ├── dockerfile-lint.yml
│       ├── LATEST_FIXES.md
│       ├── main-optimized.yml.disabled
│       ├── main.yml
│       ├── publish-and-test.yml
│       ├── publish-dual.yml
│       ├── README_OPTIMIZATION.md
│       ├── release-tag.yml.disabled
│       ├── release.yml
│       ├── roadmap-review-reminder.yml
│       ├── SECRET_CONDITIONAL_FIX.md
│       └── WORKFLOW_FIXES.md
├── .gitignore
├── .mcp.json.backup
├── .mcp.json.template
├── .metrics
│   ├── baseline_cc_install_hooks.txt
│   ├── baseline_mi_install_hooks.txt
│   ├── baseline_nesting_install_hooks.txt
│   ├── BASELINE_REPORT.md
│   ├── COMPLEXITY_COMPARISON.txt
│   ├── QUICK_REFERENCE.txt
│   ├── README.md
│   ├── REFACTORED_BASELINE.md
│   ├── REFACTORING_COMPLETION_REPORT.md
│   └── TRACKING_TABLE.md
├── .pyscn
│   ├── .gitignore
│   └── reports
│       └── analyze_20251123_214224.html
├── AGENTS.md
├── ai-optimized-tool-descriptions.py
├── archive
│   ├── deployment
│   │   ├── deploy_fastmcp_fixed.sh
│   │   ├── deploy_http_with_mcp.sh
│   │   └── deploy_mcp_v4.sh
│   ├── deployment-configs
│   │   ├── empty_config.yml
│   │   └── smithery.yaml
│   ├── development
│   │   └── test_fastmcp.py
│   ├── docs-removed-2025-08-23
│   │   ├── authentication.md
│   │   ├── claude_integration.md
│   │   ├── claude-code-compatibility.md
│   │   ├── claude-code-integration.md
│   │   ├── claude-code-quickstart.md
│   │   ├── claude-desktop-setup.md
│   │   ├── complete-setup-guide.md
│   │   ├── database-synchronization.md
│   │   ├── development
│   │   │   ├── autonomous-memory-consolidation.md
│   │   │   ├── CLEANUP_PLAN.md
│   │   │   ├── CLEANUP_README.md
│   │   │   ├── CLEANUP_SUMMARY.md
│   │   │   ├── dream-inspired-memory-consolidation.md
│   │   │   ├── hybrid-slm-memory-consolidation.md
│   │   │   ├── mcp-milestone.md
│   │   │   ├── multi-client-architecture.md
│   │   │   ├── test-results.md
│   │   │   └── TIMESTAMP_FIX_SUMMARY.md
│   │   ├── distributed-sync.md
│   │   ├── invocation_guide.md
│   │   ├── macos-intel.md
│   │   ├── master-guide.md
│   │   ├── mcp-client-configuration.md
│   │   ├── multi-client-server.md
│   │   ├── service-installation.md
│   │   ├── sessions
│   │   │   └── MCP_ENHANCEMENT_SESSION_MEMORY_v4.1.0.md
│   │   ├── UBUNTU_SETUP.md
│   │   ├── ubuntu.md
│   │   ├── windows-setup.md
│   │   └── windows.md
│   ├── docs-root-cleanup-2025-08-23
│   │   ├── AWESOME_LIST_SUBMISSION.md
│   │   ├── CLOUDFLARE_IMPLEMENTATION.md
│   │   ├── DOCUMENTATION_ANALYSIS.md
│   │   ├── DOCUMENTATION_CLEANUP_PLAN.md
│   │   ├── DOCUMENTATION_CONSOLIDATION_COMPLETE.md
│   │   ├── LITESTREAM_SETUP_GUIDE.md
│   │   ├── lm_studio_system_prompt.md
│   │   ├── PYTORCH_DOWNLOAD_FIX.md
│   │   └── README-ORIGINAL-BACKUP.md
│   ├── investigations
│   │   └── MACOS_HOOKS_INVESTIGATION.md
│   ├── litestream-configs-v6.3.0
│   │   ├── install_service.sh
│   │   ├── litestream_master_config_fixed.yml
│   │   ├── litestream_master_config.yml
│   │   ├── litestream_replica_config_fixed.yml
│   │   ├── litestream_replica_config.yml
│   │   ├── litestream_replica_simple.yml
│   │   ├── litestream-http.service
│   │   ├── litestream.service
│   │   └── requirements-cloudflare.txt
│   ├── release-notes
│   │   └── release-notes-v7.1.4.md
│   └── setup-development
│       ├── README.md
│       ├── setup_consolidation_mdns.sh
│       ├── STARTUP_SETUP_GUIDE.md
│       └── test_service.sh
├── CHANGELOG-HISTORIC.md
├── CHANGELOG.md
├── claude_commands
│   ├── memory-context.md
│   ├── memory-health.md
│   ├── memory-ingest-dir.md
│   ├── memory-ingest.md
│   ├── memory-recall.md
│   ├── memory-search.md
│   ├── memory-store.md
│   ├── README.md
│   └── session-start.md
├── claude-hooks
│   ├── config.json
│   ├── config.template.json
│   ├── CONFIGURATION.md
│   ├── core
│   │   ├── auto-capture-hook.js
│   │   ├── auto-capture-hook.ps1
│   │   ├── memory-retrieval.js
│   │   ├── mid-conversation.js
│   │   ├── permission-request.js
│   │   ├── session-end.js
│   │   ├── session-start.js
│   │   └── topic-change.js
│   ├── debug-pattern-test.js
│   ├── install_claude_hooks_windows.ps1
│   ├── install_hooks.py
│   ├── memory-mode-controller.js
│   ├── MIGRATION.md
│   ├── README-AUTO-CAPTURE.md
│   ├── README-NATURAL-TRIGGERS.md
│   ├── README-PERMISSION-REQUEST.md
│   ├── README-phase2.md
│   ├── README.md
│   ├── simple-test.js
│   ├── statusline.sh
│   ├── test-adaptive-weights.js
│   ├── test-dual-protocol-hook.js
│   ├── test-mcp-hook.js
│   ├── test-natural-triggers.js
│   ├── test-recency-scoring.js
│   ├── tests
│   │   ├── integration-test.js
│   │   ├── phase2-integration-test.js
│   │   ├── test-code-execution.js
│   │   ├── test-cross-session.json
│   │   ├── test-permission-request.js
│   │   ├── test-session-tracking.json
│   │   └── test-threading.json
│   ├── utilities
│   │   ├── adaptive-pattern-detector.js
│   │   ├── auto-capture-patterns.js
│   │   ├── context-formatter.js
│   │   ├── context-shift-detector.js
│   │   ├── conversation-analyzer.js
│   │   ├── dynamic-context-updater.js
│   │   ├── git-analyzer.js
│   │   ├── mcp-client.js
│   │   ├── memory-client.js
│   │   ├── memory-scorer.js
│   │   ├── performance-manager.js
│   │   ├── project-detector.js
│   │   ├── session-cache.json
│   │   ├── session-tracker.js
│   │   ├── tiered-conversation-monitor.js
│   │   ├── user-override-detector.js
│   │   └── version-checker.js
│   └── WINDOWS-SESSIONSTART-BUG.md
├── CLAUDE.md
├── CODE_OF_CONDUCT.md
├── COMMIT_MESSAGE.md
├── CONTRIBUTING.md
├── Development-Sprint-November-2025.md
├── docs
│   ├── amp-cli-bridge.md
│   ├── api
│   │   ├── code-execution-interface.md
│   │   ├── memory-metadata-api.md
│   │   ├── PHASE1_IMPLEMENTATION_SUMMARY.md
│   │   ├── PHASE2_IMPLEMENTATION_SUMMARY.md
│   │   ├── PHASE2_REPORT.md
│   │   └── tag-standardization.md
│   ├── architecture
│   │   ├── graph-database-design.md
│   │   ├── search-enhancement-spec.md
│   │   └── search-examples.md
│   ├── architecture.md
│   ├── archive
│   │   └── obsolete-workflows
│   │       ├── load_memory_context.md
│   │       └── README.md
│   ├── assets
│   │   └── images
│   │       ├── dashboard-v3.3.0-preview.png
│   │       ├── memory-awareness-hooks-example.png
│   │       ├── project-infographic.svg
│   │       └── README.md
│   ├── CLAUDE_CODE_QUICK_REFERENCE.md
│   ├── cloudflare-setup.md
│   ├── demo-recording-script.md
│   ├── deployment
│   │   ├── docker.md
│   │   ├── dual-service.md
│   │   ├── production-guide.md
│   │   └── systemd-service.md
│   ├── development
│   │   ├── ai-agent-instructions.md
│   │   ├── code-quality
│   │   │   ├── phase-2a-completion.md
│   │   │   ├── phase-2a-handle-get-prompt.md
│   │   │   ├── phase-2a-index.md
│   │   │   ├── phase-2a-install-package.md
│   │   │   └── phase-2b-session-summary.md
│   │   ├── code-quality-workflow.md
│   │   ├── dashboard-workflow.md
│   │   ├── issue-management.md
│   │   ├── pr-280-post-mortem.md
│   │   ├── pr-review-guide.md
│   │   ├── refactoring-notes.md
│   │   ├── release-checklist.md
│   │   └── todo-tracker.md
│   ├── docker-optimized-build.md
│   ├── document-ingestion.md
│   ├── DOCUMENTATION_AUDIT.md
│   ├── enhancement-roadmap-issue-14.md
│   ├── examples
│   │   ├── analysis-scripts.js
│   │   ├── maintenance-session-example.md
│   │   ├── memory-distribution-chart.jsx
│   │   ├── quality-system-configs.md
│   │   └── tag-schema.json
│   ├── features
│   │   └── association-quality-boost.md
│   ├── first-time-setup.md
│   ├── glama-deployment.md
│   ├── guides
│   │   ├── advanced-command-examples.md
│   │   ├── chromadb-migration.md
│   │   ├── commands-vs-mcp-server.md
│   │   ├── mcp-enhancements.md
│   │   ├── mdns-service-discovery.md
│   │   ├── memory-consolidation-guide.md
│   │   ├── memory-quality-guide.md
│   │   ├── migration.md
│   │   ├── scripts.md
│   │   └── STORAGE_BACKENDS.md
│   ├── HOOK_IMPROVEMENTS.md
│   ├── hooks
│   │   └── phase2-code-execution-migration.md
│   ├── http-server-management.md
│   ├── ide-compatability.md
│   ├── IMAGE_RETENTION_POLICY.md
│   ├── images
│   │   ├── dashboard-placeholder.md
│   │   └── update-restart-demo.png
│   ├── implementation
│   │   ├── health_checks.md
│   │   └── performance.md
│   ├── IMPLEMENTATION_PLAN_HTTP_SSE.md
│   ├── integration
│   │   ├── homebrew.md
│   │   └── multi-client.md
│   ├── integrations
│   │   ├── gemini.md
│   │   ├── groq-bridge.md
│   │   ├── groq-integration-summary.md
│   │   └── groq-model-comparison.md
│   ├── integrations.md
│   ├── legacy
│   │   └── dual-protocol-hooks.md
│   ├── LIGHTWEIGHT_ONNX_SETUP.md
│   ├── LM_STUDIO_COMPATIBILITY.md
│   ├── maintenance
│   │   └── memory-maintenance.md
│   ├── mastery
│   │   ├── api-reference.md
│   │   ├── architecture-overview.md
│   │   ├── configuration-guide.md
│   │   ├── local-setup-and-run.md
│   │   ├── testing-guide.md
│   │   └── troubleshooting.md
│   ├── migration
│   │   ├── code-execution-api-quick-start.md
│   │   └── graph-migration-guide.md
│   ├── natural-memory-triggers
│   │   ├── cli-reference.md
│   │   ├── installation-guide.md
│   │   └── performance-optimization.md
│   ├── oauth-setup.md
│   ├── pr-graphql-integration.md
│   ├── quality-system-ui-implementation.md
│   ├── quick-setup-cloudflare-dual-environment.md
│   ├── README.md
│   ├── refactoring
│   │   └── phase-3-3-analysis.md
│   ├── releases
│   │   └── v8.72.0-testing.md
│   ├── remote-configuration-wiki-section.md
│   ├── research
│   │   ├── code-execution-interface-implementation.md
│   │   └── code-execution-interface-summary.md
│   ├── ROADMAP.md
│   ├── sqlite-vec-backend.md
│   ├── statistics
│   │   ├── charts
│   │   │   ├── activity_patterns.png
│   │   │   ├── contributors.png
│   │   │   ├── growth_trajectory.png
│   │   │   ├── monthly_activity.png
│   │   │   └── october_sprint.png
│   │   ├── data
│   │   │   ├── activity_by_day.csv
│   │   │   ├── activity_by_hour.csv
│   │   │   ├── contributors.csv
│   │   │   └── monthly_activity.csv
│   │   ├── generate_charts.py
│   │   └── REPOSITORY_STATISTICS.md
│   ├── technical
│   │   ├── development.md
│   │   ├── memory-migration.md
│   │   ├── migration-log.md
│   │   ├── sqlite-vec-embedding-fixes.md
│   │   └── tag-storage.md
│   ├── testing
│   │   └── regression-tests.md
│   ├── testing-cloudflare-backend.md
│   ├── troubleshooting
│   │   ├── cloudflare-api-token-setup.md
│   │   ├── cloudflare-authentication.md
│   │   ├── database-transfer-migration.md
│   │   ├── general.md
│   │   ├── hooks-quick-reference.md
│   │   ├── memory-management.md
│   │   ├── pr162-schema-caching-issue.md
│   │   ├── session-end-hooks.md
│   │   └── sync-issues.md
│   ├── tutorials
│   │   ├── advanced-techniques.md
│   │   ├── data-analysis.md
│   │   └── demo-session-walkthrough.md
│   ├── wiki-documentation-plan.md
│   └── wiki-Graph-Database-Architecture.md
├── examples
│   ├── claude_desktop_config_template.json
│   ├── claude_desktop_config_windows.json
│   ├── claude-desktop-http-config.json
│   ├── config
│   │   └── claude_desktop_config.json
│   ├── http-mcp-bridge.js
│   ├── memory_export_template.json
│   ├── README.md
│   ├── setup
│   │   └── setup_multi_client_complete.py
│   └── start_https_example.sh
├── IMPLEMENTATION_SUMMARY.md
├── install_service.py
├── install.py
├── LICENSE
├── NOTICE
├── PR_DESCRIPTION.md
├── pyproject-lite.toml
├── pyproject.toml
├── pytest.ini
├── README.md
├── release-notes-v8.61.0.md
├── run_server.py
├── scripts
│   ├── .claude
│   │   └── settings.local.json
│   ├── archive
│   │   └── check_missing_timestamps.py
│   ├── backup
│   │   ├── backup_memories.py
│   │   ├── backup_sqlite_vec.sh
│   │   ├── export_distributable_memories.sh
│   │   └── restore_memories.py
│   ├── benchmarks
│   │   ├── benchmark_code_execution_api.py
│   │   ├── benchmark_hybrid_sync.py
│   │   └── benchmark_server_caching.py
│   ├── ci
│   │   ├── check_dockerfile_args.sh
│   │   └── validate_imports.sh
│   ├── database
│   │   ├── analyze_sqlite_vec_db.py
│   │   ├── check_sqlite_vec_status.py
│   │   ├── db_health_check.py
│   │   └── simple_timestamp_check.py
│   ├── development
│   │   ├── debug_server_initialization.py
│   │   ├── find_orphaned_files.py
│   │   ├── fix_mdns.sh
│   │   ├── fix_sitecustomize.py
│   │   ├── remote_ingest.sh
│   │   ├── setup-git-merge-drivers.sh
│   │   ├── uv-lock-merge.sh
│   │   └── verify_hybrid_sync.py
│   ├── hooks
│   │   └── pre-commit
│   ├── installation
│   │   ├── install_linux_service.py
│   │   ├── install_macos_service.py
│   │   ├── install_uv.py
│   │   ├── install_windows_service.py
│   │   ├── install.py
│   │   ├── setup_backup_cron.sh
│   │   ├── setup_claude_mcp.sh
│   │   └── setup_cloudflare_resources.py
│   ├── linux
│   │   ├── service_status.sh
│   │   ├── start_service.sh
│   │   ├── stop_service.sh
│   │   ├── uninstall_service.sh
│   │   └── view_logs.sh
│   ├── maintenance
│   │   ├── add_project_tags.py
│   │   ├── apply_quality_boost_retroactively.py
│   │   ├── assign_memory_types.py
│   │   ├── auto_retag_memory_merge.py
│   │   ├── auto_retag_memory.py
│   │   ├── backfill_graph_table.py
│   │   ├── check_memory_types.py
│   │   ├── cleanup_association_memories_hybrid.py
│   │   ├── cleanup_association_memories.py
│   │   ├── cleanup_corrupted_encoding.py
│   │   ├── cleanup_low_quality.py
│   │   ├── cleanup_memories.py
│   │   ├── cleanup_organize.py
│   │   ├── consolidate_memory_types.py
│   │   ├── consolidation_mappings.json
│   │   ├── delete_orphaned_vectors_fixed.py
│   │   ├── delete_test_memories.py
│   │   ├── fast_cleanup_duplicates_with_tracking.sh
│   │   ├── find_all_duplicates.py
│   │   ├── find_cloudflare_duplicates.py
│   │   ├── find_duplicates.py
│   │   ├── memory-types.md
│   │   ├── README.md
│   │   ├── recover_timestamps_from_cloudflare.py
│   │   ├── regenerate_embeddings.py
│   │   ├── repair_malformed_tags.py
│   │   ├── repair_memories.py
│   │   ├── repair_sqlite_vec_embeddings.py
│   │   ├── repair_zero_embeddings.py
│   │   ├── restore_from_json_export.py
│   │   ├── retag_valuable_memories.py
│   │   ├── scan_todos.sh
│   │   ├── soft_delete_test_memories.py
│   │   └── sync_status.py
│   ├── migration
│   │   ├── cleanup_mcp_timestamps.py
│   │   ├── legacy
│   │   │   └── migrate_chroma_to_sqlite.py
│   │   ├── mcp-migration.py
│   │   ├── migrate_sqlite_vec_embeddings.py
│   │   ├── migrate_storage.py
│   │   ├── migrate_tags.py
│   │   ├── migrate_timestamps.py
│   │   ├── migrate_to_cloudflare.py
│   │   ├── migrate_to_sqlite_vec.py
│   │   ├── migrate_v5_enhanced.py
│   │   ├── TIMESTAMP_CLEANUP_README.md
│   │   └── verify_mcp_timestamps.py
│   ├── pr
│   │   ├── amp_collect_results.sh
│   │   ├── amp_detect_breaking_changes.sh
│   │   ├── amp_generate_tests.sh
│   │   ├── amp_pr_review.sh
│   │   ├── amp_quality_gate.sh
│   │   ├── amp_suggest_fixes.sh
│   │   ├── auto_review.sh
│   │   ├── detect_breaking_changes.sh
│   │   ├── generate_tests.sh
│   │   ├── lib
│   │   │   └── graphql_helpers.sh
│   │   ├── pre_pr_check.sh
│   │   ├── quality_gate.sh
│   │   ├── resolve_threads.sh
│   │   ├── run_pyscn_analysis.sh
│   │   ├── run_quality_checks_on_files.sh
│   │   ├── run_quality_checks.sh
│   │   ├── thread_status.sh
│   │   └── watch_reviews.sh
│   ├── quality
│   │   ├── bulk_evaluate_onnx.py
│   │   ├── check_test_scores.py
│   │   ├── debug_deberta_scoring.py
│   │   ├── export_deberta_onnx.py
│   │   ├── fix_dead_code_install.sh
│   │   ├── migrate_to_deberta.py
│   │   ├── phase1_dead_code_analysis.md
│   │   ├── phase2_complexity_analysis.md
│   │   ├── README_PHASE1.md
│   │   ├── README_PHASE2.md
│   │   ├── rescore_deberta.py
│   │   ├── rescore_fallback.py
│   │   ├── reset_onnx_scores.py
│   │   ├── track_pyscn_metrics.sh
│   │   └── weekly_quality_review.sh
│   ├── README.md
│   ├── run
│   │   ├── memory_wrapper_cleanup.ps1
│   │   ├── memory_wrapper_cleanup.py
│   │   ├── memory_wrapper_cleanup.sh
│   │   ├── README_CLEANUP_WRAPPER.md
│   │   ├── run_mcp_memory.sh
│   │   ├── run-with-uv.sh
│   │   └── start_sqlite_vec.sh
│   ├── run_memory_server.py
│   ├── server
│   │   ├── check_http_server.py
│   │   ├── check_server_health.py
│   │   ├── memory_offline.py
│   │   ├── preload_models.py
│   │   ├── run_http_server.py
│   │   ├── run_memory_server.py
│   │   ├── start_http_server.bat
│   │   └── start_http_server.sh
│   ├── service
│   │   ├── deploy_dual_services.sh
│   │   ├── http_server_manager.sh
│   │   ├── install_http_service.sh
│   │   ├── mcp-memory-http.service
│   │   ├── mcp-memory.service
│   │   ├── memory_service_manager.sh
│   │   ├── service_control.sh
│   │   ├── service_utils.py
│   │   ├── update_service.sh
│   │   └── windows
│   │       ├── add_watchdog_trigger.ps1
│   │       ├── install_scheduled_task.ps1
│   │       ├── manage_service.ps1
│   │       ├── run_http_server_background.ps1
│   │       ├── uninstall_scheduled_task.ps1
│   │       └── update_and_restart.ps1
│   ├── setup-lightweight.sh
│   ├── sync
│   │   ├── check_drift.py
│   │   ├── claude_sync_commands.py
│   │   ├── export_memories.py
│   │   ├── import_memories.py
│   │   ├── litestream
│   │   │   ├── apply_local_changes.sh
│   │   │   ├── enhanced_memory_store.sh
│   │   │   ├── init_staging_db.sh
│   │   │   ├── io.litestream.replication.plist
│   │   │   ├── manual_sync.sh
│   │   │   ├── memory_sync.sh
│   │   │   ├── pull_remote_changes.sh
│   │   │   ├── push_to_remote.sh
│   │   │   ├── README.md
│   │   │   ├── resolve_conflicts.sh
│   │   │   ├── setup_local_litestream.sh
│   │   │   ├── setup_remote_litestream.sh
│   │   │   ├── staging_db_init.sql
│   │   │   ├── stash_local_changes.sh
│   │   │   ├── sync_from_remote_noconfig.sh
│   │   │   └── sync_from_remote.sh
│   │   ├── README.md
│   │   ├── safe_cloudflare_update.sh
│   │   ├── sync_memory_backends.py
│   │   └── sync_now.py
│   ├── testing
│   │   ├── run_complete_test.py
│   │   ├── run_memory_test.sh
│   │   ├── simple_test.py
│   │   ├── test_cleanup_logic.py
│   │   ├── test_cloudflare_backend.py
│   │   ├── test_docker_functionality.py
│   │   ├── test_installation.py
│   │   ├── test_mdns.py
│   │   ├── test_memory_api.py
│   │   ├── test_memory_simple.py
│   │   ├── test_migration.py
│   │   ├── test_search_api.py
│   │   ├── test_sqlite_vec_embeddings.py
│   │   ├── test_sse_events.py
│   │   ├── test-connection.py
│   │   └── test-hook.js
│   ├── update_and_restart.sh
│   ├── utils
│   │   ├── claude_commands_utils.py
│   │   ├── detect_platform.py
│   │   ├── generate_personalized_claude_md.sh
│   │   ├── groq
│   │   ├── groq_agent_bridge.py
│   │   ├── list-collections.py
│   │   ├── memory_wrapper_uv.py
│   │   ├── query_memories.py
│   │   ├── README_detect_platform.md
│   │   ├── smithery_wrapper.py
│   │   ├── test_groq_bridge.sh
│   │   └── uv_wrapper.py
│   └── validation
│       ├── check_dev_setup.py
│       ├── check_documentation_links.py
│       ├── check_handler_coverage.py
│       ├── diagnose_backend_config.py
│       ├── validate_configuration_complete.py
│       ├── validate_graph_tools.py
│       ├── validate_memories.py
│       ├── validate_migration.py
│       ├── validate_timestamp_integrity.py
│       ├── verify_environment.py
│       ├── verify_pytorch_windows.py
│       └── verify_torch.py
├── SECURITY.md
├── selective_timestamp_recovery.py
├── SPONSORS.md
├── src
│   └── mcp_memory_service
│       ├── __init__.py
│       ├── _version.py
│       ├── api
│       │   ├── __init__.py
│       │   ├── client.py
│       │   ├── operations.py
│       │   ├── sync_wrapper.py
│       │   └── types.py
│       ├── backup
│       │   ├── __init__.py
│       │   └── scheduler.py
│       ├── cli
│       │   ├── __init__.py
│       │   ├── ingestion.py
│       │   ├── main.py
│       │   └── utils.py
│       ├── config.py
│       ├── consolidation
│       │   ├── __init__.py
│       │   ├── associations.py
│       │   ├── base.py
│       │   ├── clustering.py
│       │   ├── compression.py
│       │   ├── consolidator.py
│       │   ├── decay.py
│       │   ├── forgetting.py
│       │   ├── health.py
│       │   └── scheduler.py
│       ├── dependency_check.py
│       ├── discovery
│       │   ├── __init__.py
│       │   ├── client.py
│       │   └── mdns_service.py
│       ├── embeddings
│       │   ├── __init__.py
│       │   └── onnx_embeddings.py
│       ├── ingestion
│       │   ├── __init__.py
│       │   ├── base.py
│       │   ├── chunker.py
│       │   ├── csv_loader.py
│       │   ├── json_loader.py
│       │   ├── pdf_loader.py
│       │   ├── registry.py
│       │   ├── semtools_loader.py
│       │   └── text_loader.py
│       ├── lm_studio_compat.py
│       ├── mcp_server.py
│       ├── models
│       │   ├── __init__.py
│       │   └── memory.py
│       ├── quality
│       │   ├── __init__.py
│       │   ├── ai_evaluator.py
│       │   ├── async_scorer.py
│       │   ├── config.py
│       │   ├── implicit_signals.py
│       │   ├── metadata_codec.py
│       │   ├── onnx_ranker.py
│       │   └── scorer.py
│       ├── server
│       │   ├── __init__.py
│       │   ├── __main__.py
│       │   ├── cache_manager.py
│       │   ├── client_detection.py
│       │   ├── environment.py
│       │   ├── handlers
│       │   │   ├── __init__.py
│       │   │   ├── consolidation.py
│       │   │   ├── documents.py
│       │   │   ├── graph.py
│       │   │   ├── memory.py
│       │   │   ├── quality.py
│       │   │   └── utility.py
│       │   └── logging_config.py
│       ├── server_impl.py
│       ├── services
│       │   ├── __init__.py
│       │   └── memory_service.py
│       ├── storage
│       │   ├── __init__.py
│       │   ├── base.py
│       │   ├── cloudflare.py
│       │   ├── factory.py
│       │   ├── graph.py
│       │   ├── http_client.py
│       │   ├── hybrid.py
│       │   ├── migrations
│       │   │   └── 008_add_graph_table.sql
│       │   └── sqlite_vec.py
│       ├── sync
│       │   ├── __init__.py
│       │   ├── exporter.py
│       │   ├── importer.py
│       │   └── litestream_config.py
│       ├── utils
│       │   ├── __init__.py
│       │   ├── cache_manager.py
│       │   ├── content_splitter.py
│       │   ├── db_utils.py
│       │   ├── debug.py
│       │   ├── directory_ingestion.py
│       │   ├── document_processing.py
│       │   ├── gpu_detection.py
│       │   ├── hashing.py
│       │   ├── health_check.py
│       │   ├── http_server_manager.py
│       │   ├── port_detection.py
│       │   ├── quality_analytics.py
│       │   ├── startup_orchestrator.py
│       │   ├── system_detection.py
│       │   └── time_parser.py
│       └── web
│           ├── __init__.py
│           ├── api
│           │   ├── __init__.py
│           │   ├── analytics.py
│           │   ├── backup.py
│           │   ├── consolidation.py
│           │   ├── documents.py
│           │   ├── events.py
│           │   ├── health.py
│           │   ├── manage.py
│           │   ├── mcp.py
│           │   ├── memories.py
│           │   ├── quality.py
│           │   ├── search.py
│           │   └── sync.py
│           ├── app.py
│           ├── dependencies.py
│           ├── oauth
│           │   ├── __init__.py
│           │   ├── authorization.py
│           │   ├── discovery.py
│           │   ├── middleware.py
│           │   ├── models.py
│           │   ├── registration.py
│           │   └── storage.py
│           ├── sse.py
│           └── static
│               ├── app.js
│               ├── i18n
│               │   ├── de.json
│               │   ├── en.json
│               │   ├── es.json
│               │   ├── fr.json
│               │   ├── ja.json
│               │   ├── ko.json
│               │   └── zh.json
│               ├── index.html
│               ├── README.md
│               ├── sse_test.html
│               └── style.css
├── start_http_debug.bat
├── start_http_server.sh
├── test_document.txt
├── test_version_checker.js
├── TESTING_NOTES.md
├── tests
│   ├── __init__.py
│   ├── api
│   │   ├── __init__.py
│   │   ├── test_compact_types.py
│   │   └── test_operations.py
│   ├── bridge
│   │   ├── mock_responses.js
│   │   ├── package-lock.json
│   │   ├── package.json
│   │   └── test_http_mcp_bridge.js
│   ├── conftest.py
│   ├── consolidation
│   │   ├── __init__.py
│   │   ├── conftest.py
│   │   ├── test_associations.py
│   │   ├── test_clustering.py
│   │   ├── test_compression.py
│   │   ├── test_consolidator.py
│   │   ├── test_decay.py
│   │   ├── test_forgetting.py
│   │   └── test_graph_modes.py
│   ├── contracts
│   │   └── api-specification.yml
│   ├── integration
│   │   ├── conftest.py
│   │   ├── HANDLER_COVERAGE_REPORT.md
│   │   ├── package-lock.json
│   │   ├── package.json
│   │   ├── test_all_memory_handlers.py
│   │   ├── test_api_key_fallback.py
│   │   ├── test_api_memories_chronological.py
│   │   ├── test_api_tag_time_search.py
│   │   ├── test_api_with_memory_service.py
│   │   ├── test_bridge_integration.js
│   │   ├── test_cli_interfaces.py
│   │   ├── test_cloudflare_connection.py
│   │   ├── test_concurrent_clients.py
│   │   ├── test_data_serialization_consistency.py
│   │   ├── test_http_server_startup.py
│   │   ├── test_mcp_memory.py
│   │   ├── test_mdns_integration.py
│   │   ├── test_oauth_basic_auth.py
│   │   ├── test_oauth_flow.py
│   │   ├── test_server_handlers.py
│   │   └── test_store_memory.py
│   ├── performance
│   │   ├── test_background_sync.py
│   │   └── test_hybrid_live.py
│   ├── README.md
│   ├── smithery
│   │   └── test_smithery.py
│   ├── sqlite
│   │   └── simple_sqlite_vec_test.py
│   ├── storage
│   │   ├── conftest.py
│   │   └── test_graph_storage.py
│   ├── test_client.py
│   ├── test_content_splitting.py
│   ├── test_database.py
│   ├── test_deberta_quality.py
│   ├── test_fallback_quality.py
│   ├── test_graph_traversal.py
│   ├── test_hybrid_cloudflare_limits.py
│   ├── test_hybrid_storage.py
│   ├── test_lightweight_onnx.py
│   ├── test_memory_ops.py
│   ├── test_memory_wrapper_cleanup.py
│   ├── test_quality_integration.py
│   ├── test_quality_system.py
│   ├── test_semantic_search.py
│   ├── test_sqlite_vec_storage.py
│   ├── test_time_parser.py
│   ├── test_timestamp_preservation.py
│   ├── timestamp
│   │   ├── test_hook_vs_manual_storage.py
│   │   ├── test_issue99_final_validation.py
│   │   ├── test_search_retrieval_inconsistency.py
│   │   ├── test_timestamp_issue.py
│   │   └── test_timestamp_simple.py
│   └── unit
│       ├── conftest.py
│       ├── test_cloudflare_storage.py
│       ├── test_csv_loader.py
│       ├── test_fastapi_dependencies.py
│       ├── test_import.py
│       ├── test_imports.py
│       ├── test_json_loader.py
│       ├── test_mdns_simple.py
│       ├── test_mdns.py
│       ├── test_memory_service.py
│       ├── test_memory.py
│       ├── test_semtools_loader.py
│       ├── test_storage_interface_compatibility.py
│       ├── test_tag_time_filtering.py
│       └── test_uv_no_pip_installer_fallback.py
├── tools
│   ├── docker
│   │   ├── DEPRECATED.md
│   │   ├── docker-compose.http.yml
│   │   ├── docker-compose.pythonpath.yml
│   │   ├── docker-compose.standalone.yml
│   │   ├── docker-compose.uv.yml
│   │   ├── docker-compose.yml
│   │   ├── docker-entrypoint-persistent.sh
│   │   ├── docker-entrypoint-unified.sh
│   │   ├── docker-entrypoint.sh
│   │   ├── Dockerfile
│   │   ├── Dockerfile.glama
│   │   ├── Dockerfile.slim
│   │   ├── README.md
│   │   └── test-docker-modes.sh
│   └── README.md
├── uv.lock
└── verify_compression.sh
```

# Files

--------------------------------------------------------------------------------
/scripts/utils/README_detect_platform.md:
--------------------------------------------------------------------------------

```markdown
# Platform Detection Helper

## Overview

`detect_platform.py` provides unified hardware and OS detection for bash scripts, using the same `gpu_detection.py` module as `install.py` for consistency.

## Usage

```bash
# Run detection
python scripts/utils/detect_platform.py

# Output (JSON):
{
  "os": "darwin",
  "arch": "arm64",
  "is_arm": true,
  "is_x86": false,
  "accelerator": "mps",
  "has_cuda": false,
  "has_rocm": false,
  "has_mps": true,
  "has_directml": false,
  "cuda_version": null,
  "rocm_version": null,
  "directml_version": null,
  "pytorch_index_url": "",
  "needs_directml": false
}
```

## Supported Platforms

| Platform | Detection | PyTorch Index |
|----------|-----------|---------------|
| **Apple Silicon (M1/M2/M3)** | MPS via system_profiler | Default PyPI (MPS built-in) |
| **NVIDIA GPU (CUDA)** | nvcc version | cu121/cu118/cu102 |
| **AMD GPU (ROCm)** | rocminfo | rocm5.6 |
| **Windows DirectML** | torch-directml import | CPU + directml package |
| **CPU-only** | Fallback | CPU index |

## Integration with update_and_restart.sh

The script automatically:
1. Detects the hardware platform (MPS/CUDA/ROCm/DirectML/CPU)
2. Selects the optimal PyTorch index URL
3. Installs the DirectML package if needed (Windows)
4. Falls back to basic detection if the Python helper is unavailable

## Benefits vs. Old Logic

**Old (Bash-only):**
- ❌ Only detected macOS vs. Linux with nvidia-smi
- ❌ Treated all macOS as CPU-only (performance loss on M-series)
- ❌ No ROCm, DirectML, or MPS support

**New (Python-based):**
- ✅ Detects MPS, CUDA, ROCm, DirectML, CPU
- ✅ Consistent with install.py logic
- ✅ Optimal PyTorch selection per platform
- ✅ Graceful fallback to old logic if detection fails

## Example Output (macOS M2)

```bash
▶  Installing dependencies (editable mode)...
ℹ  Existing venv Python version: 3.13
ℹ  Installing with venv pip (this may take 1-2 minutes)...
ℹ  Apple Silicon MPS detected - using MPS-optimized PyTorch
```

## Example Output (Linux with NVIDIA)

```bash
▶  Installing dependencies (editable mode)...
ℹ  CUDA detected (12.1) - using optimized PyTorch
  Installing with: --extra-index-url https://download.pytorch.org/whl/cu121
```

## Maintenance

The detection logic is centralized in `src/mcp_memory_service/utils/gpu_detection.py`. Updates to that module automatically benefit both `install.py` and `update_and_restart.sh`.
```
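
For orientation, a minimal sketch of consuming the JSON above from Python rather than bash. The field names (`accelerator`, `pytorch_index_url`, `needs_directml`) come from the output documented in the README; the pip invocation it builds is illustrative only, not the actual logic in `update_and_restart.sh`.

```python
"""Sketch: read detect_platform.py output and pick PyTorch install args."""
import json
import subprocess
import sys

result = subprocess.run(
    [sys.executable, "scripts/utils/detect_platform.py"],
    capture_output=True, text=True, check=True,
)
platform = json.loads(result.stdout)

# Assemble a hypothetical pip command from the detected accelerator.
pip_args = ["pip", "install", "torch"]
if platform["pytorch_index_url"]:
    pip_args += ["--extra-index-url", platform["pytorch_index_url"]]
if platform["needs_directml"]:
    pip_args.append("torch-directml")  # Windows DirectML path

print(f"Accelerator: {platform['accelerator']}")
print("Would run:", " ".join(pip_args))
```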

--------------------------------------------------------------------------------
/claude-hooks/utilities/user-override-detector.js:
--------------------------------------------------------------------------------

```javascript
/**
 * User Override Detector
 * Shared module for consistent #skip/#remember handling across all hooks
 *
 * Usage:
 *   const { detectUserOverrides } = require('./user-override-detector');
 *   const overrides = detectUserOverrides(userMessage);
 *   if (overrides.forceSkip) return;
 */

// User override patterns (case-insensitive, word boundary)
const USER_OVERRIDES = {
    forceRemember: /#remember\b/i,
    forceSkip: /#skip\b/i
};

/**
 * Detect user overrides in a message
 * @param {string} userMessage - The user's message text
 * @returns {Object} Override detection result
 */
function detectUserOverrides(userMessage) {
    if (!userMessage || typeof userMessage !== 'string') {
        return { forceRemember: false, forceSkip: false };
    }

    return {
        forceRemember: USER_OVERRIDES.forceRemember.test(userMessage),
        forceSkip: USER_OVERRIDES.forceSkip.test(userMessage)
    };
}

/**
 * Extract user message from various context formats
 * Handles different hook context structures
 * @param {Object} context - Hook context object
 * @returns {string|null} Extracted user message or null
 */
function extractUserMessage(context) {
    if (!context) return null;

    // Direct userMessage property
    if (context.userMessage) {
        return typeof context.userMessage === 'string'
            ? context.userMessage
            : null;
    }

    // From transcript (last user message)
    if (context.transcript_path) {
        // Transcript extraction should be done by caller
        return null;
    }

    // From message property
    if (context.message) {
        return typeof context.message === 'string'
            ? context.message
            : null;
    }

    return null;
}

/**
 * Console output for override actions
 */
const OVERRIDE_MESSAGES = {
    skip: '\x1b[33m\u23ed\ufe0f  Memory Hook\x1b[0m \x1b[2m\u2192\x1b[0m Skipped by user override (#skip)',
    remember: '\x1b[36m\ud83d\udcbe Memory Hook\x1b[0m \x1b[2m\u2192\x1b[0m Force triggered by user override (#remember)'
};

/**
 * Log override action to console
 * @param {'skip'|'remember'} action - The override action
 */
function logOverride(action) {
    if (OVERRIDE_MESSAGES[action]) {
        console.log(OVERRIDE_MESSAGES[action]);
    }
}

module.exports = {
    detectUserOverrides,
    extractUserMessage,
    logOverride,
    USER_OVERRIDES,
    OVERRIDE_MESSAGES
};
```

--------------------------------------------------------------------------------
/scripts/service/windows/add_watchdog_trigger.ps1:
--------------------------------------------------------------------------------

```
#Requires -Version 5.1
<#
.SYNOPSIS
    Adds a repeating watchdog trigger to the MCP Memory HTTP Server task.

.DESCRIPTION
    Modifies the scheduled task to run every N minutes, ensuring the server
    automatically restarts if it crashes between logins.

.PARAMETER IntervalMinutes
    How often to check (default: 5 minutes).

.EXAMPLE
    .\add_watchdog_trigger.ps1
    Adds a 5-minute watchdog trigger.

.EXAMPLE
    .\add_watchdog_trigger.ps1 -IntervalMinutes 10
    Adds a 10-minute watchdog trigger.
#>

param(
    [int]$IntervalMinutes = 5
)

$ErrorActionPreference = "Stop"
$TaskName = "MCPMemoryHTTPServer"

Write-Host ""
Write-Host "Adding Watchdog Trigger to $TaskName" -ForegroundColor Cyan
Write-Host "=====================================" -ForegroundColor Cyan
Write-Host ""

# Check if task exists
$Task = Get-ScheduledTask -TaskName $TaskName -ErrorAction SilentlyContinue
if (-not $Task) {
    Write-Host "[ERROR] Task '$TaskName' not found. Run install_scheduled_task.ps1 first." -ForegroundColor Red
    exit 1
}

Write-Host "[INFO] Current triggers:"
$Task.Triggers | ForEach-Object {
    Write-Host "  - $($_.CimClass.CimClassName)"
}

# Create new repeating trigger
Write-Host ""
Write-Host "[INFO] Adding repeating trigger (every $IntervalMinutes minutes)..." -ForegroundColor Yellow

# Note: RepetitionDuration must be finite but long (9999 days = ~27 years)
$RepetitionTrigger = New-ScheduledTaskTrigger -Once -At (Get-Date) `
    -RepetitionInterval (New-TimeSpan -Minutes $IntervalMinutes) `
    -RepetitionDuration (New-TimeSpan -Days 9999)

# Get existing triggers and add new one
$ExistingTriggers = @($Task.Triggers)
$AllTriggers = $ExistingTriggers + @($RepetitionTrigger)

# Update task
Set-ScheduledTask -TaskName $TaskName -Trigger $AllTriggers | Out-Null

Write-Host "[SUCCESS] Watchdog trigger added!" -ForegroundColor Green
Write-Host ""
Write-Host "Configuration:" -ForegroundColor Cyan
Write-Host "  - Check interval: Every $IntervalMinutes minutes"
Write-Host "  - Behavior: If server already running, exits immediately"
Write-Host "  - Behavior: If server not running, starts it"
Write-Host ""

# Show updated triggers
$UpdatedTask = Get-ScheduledTask -TaskName $TaskName
Write-Host "Updated triggers:" -ForegroundColor Cyan
$UpdatedTask.Triggers | ForEach-Object {
    $Type = $_.CimClass.CimClassName -replace 'MSFT_Task', '' -replace 'Trigger', ''
    Write-Host "  - $Type"
}
Write-Host ""
```

--------------------------------------------------------------------------------
/docs/archive/obsolete-workflows/load_memory_context.md:
--------------------------------------------------------------------------------

```markdown
# Memory Context Loading Prompt

Use this prompt at the start of Claude Code sessions on machines in your local network:

---

## Prompt for Claude Code

```
Load MCP Memory Service context for this project. Before we begin working, please retrieve and incorporate all stored knowledge about this codebase from my local memory service:

**Memory Service Endpoint**: https://your-server-ip:8443/mcp
**Authorization**: Bearer your-api-key

Execute this command to load context:
```bash
curl -k -s -X POST https://your-server-ip:8443/mcp \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-api-key" \
  -d '{"jsonrpc": "2.0", "id": 1, "method": "tools/call", "params": {"name": "retrieve_memory", "arguments": {"query": "claude-code-reference distributable-reference", "limit": 20}}}' \
  | jq -r '.result.content[0].text'
```

This memory contains:
- Complete project structure and architecture
- All key development, testing, and deployment commands
- Environment variables and configuration patterns
- Recent changes including v5.0.2 ONNX implementation details
- Issue management approaches and current project status
- Testing practices and platform-specific optimizations
- Remote service deployment and health monitoring

After loading this context, you'll have comprehensive knowledge of the MCP Memory Service project equivalent to extensive codebase exploration, which will significantly reduce token usage and improve response accuracy.

Please confirm successful context loading and summarize the key project information you've retrieved.
```

---

## Alternative Short Prompt

For quick context loading:

```
Load project context from memory service: curl -k -s -X POST https://your-server-ip:8443/mcp -H "Content-Type: application/json" -H "Authorization: Bearer your-api-key" -d '{"jsonrpc": "2.0", "id": 1, "method": "tools/call", "params": {"name": "retrieve_memory", "arguments": {"query": "claude-code-reference", "limit": 20}}}' | jq -r '.result.content[0].text'

Incorporate this MCP Memory Service project knowledge before proceeding.
```

---

## Network Distribution

1. **Copy this prompt file** to other machines in your network
2. **Update the IP address** if the memory service moves
3. **Test connectivity** with: `curl -k -s https://your-server-ip:8443/api/health`
4. **Use at session start** for instant project context

This eliminates repetitive codebase discovery across all your development machines.

--------------------------------------------------------------------------------
/scripts/service/service_control.sh:
--------------------------------------------------------------------------------

```bash
#!/bin/bash

# MCP Memory Service Control Script
SERVICE_NAME="mcp-memory"

case "$1" in
    start)
        echo "Starting MCP Memory Service..."
        sudo systemctl start $SERVICE_NAME
        sleep 2
        sudo systemctl status $SERVICE_NAME --no-pager
        ;;
    stop)
        echo "Stopping MCP Memory Service..."
        sudo systemctl stop $SERVICE_NAME
        sudo systemctl status $SERVICE_NAME --no-pager
        ;;
    restart)
        echo "Restarting MCP Memory Service..."
        sudo systemctl restart $SERVICE_NAME
        sleep 2
        sudo systemctl status $SERVICE_NAME --no-pager
        ;;
    status)
        sudo systemctl status $SERVICE_NAME --no-pager
        ;;
    logs)
        echo "Showing recent logs (Ctrl+C to exit)..."
        sudo journalctl -u $SERVICE_NAME -f
        ;;
    health)
        echo "Checking service health..."
        curl -k -s https://localhost:8000/api/health | jq '.' 2>/dev/null || curl -k -s https://localhost:8000/api/health
        ;;
    enable)
        echo "Enabling service for startup..."
        sudo systemctl enable $SERVICE_NAME
        echo "Service will start automatically on boot"
        ;;
    disable)
        echo "Disabling service from startup..."
        sudo systemctl disable $SERVICE_NAME
        echo "Service will not start automatically on boot"
        ;;
    install)
        echo "Installing service..."
        ./install_service.sh
        ;;
    uninstall)
        echo "Uninstalling service..."
        sudo systemctl stop $SERVICE_NAME 2>/dev/null
        sudo systemctl disable $SERVICE_NAME 2>/dev/null
        sudo rm -f /etc/systemd/system/$SERVICE_NAME.service
        sudo systemctl daemon-reload
        echo "Service uninstalled"
        ;;
    *)
        echo "Usage: $0 {start|stop|restart|status|logs|health|enable|disable|install|uninstall}"
        echo ""
        echo "Commands:"
        echo "  start     - Start the service"
        echo "  stop      - Stop the service"
        echo "  restart   - Restart the service"
        echo "  status    - Show service status"
        echo "  logs      - Show live service logs"
        echo "  health    - Check API health endpoint"
        echo "  enable    - Enable service for startup"
        echo "  disable   - Disable service from startup"
        echo "  install   - Install the systemd service"
        echo "  uninstall - Remove the systemd service"
        exit 1
        ;;
esac
```

--------------------------------------------------------------------------------
/tests/smithery/test_smithery.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python3
"""
Test script to verify Smithery configuration works correctly.
This simulates how Smithery would invoke the service.
"""
import os
import sys
import subprocess
import tempfile
import json

def test_smithery_config():
    """Test the Smithery configuration by simulating the expected command."""
    print("Testing Smithery configuration...")

    # Create temporary paths for testing
    with tempfile.TemporaryDirectory() as temp_dir:
        chroma_path = os.path.join(temp_dir, "chroma_db")
        backups_path = os.path.join(temp_dir, "backups")

        # Create directories
        os.makedirs(chroma_path, exist_ok=True)
        os.makedirs(backups_path, exist_ok=True)

        # Set environment variables as Smithery would
        test_env = os.environ.copy()
        test_env.update({
            'MCP_MEMORY_CHROMA_PATH': chroma_path,
            'MCP_MEMORY_BACKUPS_PATH': backups_path,
            'PYTHONUNBUFFERED': '1',
            'PYTORCH_ENABLE_MPS_FALLBACK': '1'
        })

        # Command that Smithery would run
        cmd = [sys.executable, 'smithery_wrapper.py', '--version']

        print(f"Running command: {' '.join(cmd)}")
        print(f"Environment: {json.dumps({k: v for k, v in test_env.items() if k.startswith('MCP_') or k in ['PYTHONUNBUFFERED', 'PYTORCH_ENABLE_MPS_FALLBACK']}, indent=2)}")

        try:
            result = subprocess.run(
                cmd,
                env=test_env,
                capture_output=True,
                text=True,
                timeout=30
            )

            print(f"Return code: {result.returncode}")
            if result.stdout:
                print(f"STDOUT:\n{result.stdout}")
            if result.stderr:
                print(f"STDERR:\n{result.stderr}")

            if result.returncode == 0:
                print("✅ SUCCESS: Smithery configuration test passed!")
                return True
            else:
                print("❌ FAILED: Smithery configuration test failed!")
                return False

        except subprocess.TimeoutExpired:
            print("❌ FAILED: Command timed out")
            return False
        except Exception as e:
            print(f"❌ FAILED: Exception occurred: {e}")
            return False

if __name__ == "__main__":
    success = test_smithery_config()
    sys.exit(0 if success else 1)
```

--------------------------------------------------------------------------------
/docs/integrations/groq-bridge.md:
--------------------------------------------------------------------------------

```markdown
# Groq Agent Bridge - Requirements

Install the required package:

```bash
pip install groq
# or
uv pip install groq
```

Set up your environment:

```bash
export GROQ_API_KEY="your-api-key-here"
```

## Available Models

The Groq bridge supports multiple high-performance models:

| Model | Context | Best For | Speed |
|-------|---------|----------|-------|
| **llama-3.3-70b-versatile** | 128K | General purpose (default) | ~300ms |
| **moonshotai/kimi-k2-instruct** | 256K | Agentic coding, tool calling | ~200ms |
| **llama-3.1-8b-instant** | 128K | Fast, simple tasks | ~100ms |

**Kimi K2 Features:**
- 256K context window (largest on GroqCloud)
- 1 trillion parameters (32B activated)
- Excellent for front-end development and complex coding
- Superior agentic intelligence and tool calling
- 185 tokens/second throughput

## Usage Examples

### As a library from another AI agent:

```python
from groq_agent_bridge import GroqAgentBridge

# Initialize the bridge
bridge = GroqAgentBridge()

# Simple call
response = bridge.call_model_raw("Explain quantum computing in simple terms")
print(response)

# Advanced call with options
result = bridge.call_model(
    prompt="Generate Python code for a binary search tree",
    model="llama-3.3-70b-versatile",
    max_tokens=500,
    temperature=0.3,
    system_message="You are an expert Python programmer"
)
print(result)
```

### Command-line usage:

```bash
# Simple usage (uses default llama-3.3-70b-versatile)
./scripts/utils/groq "What is machine learning?"

# Use Kimi K2 for complex coding tasks
./scripts/utils/groq "Generate a React component with hooks" \
  --model "moonshotai/kimi-k2-instruct"

# Fast simple queries with llama-3.1-8b-instant
./scripts/utils/groq "Rate complexity 1-10: def add(a,b): return a+b" \
  --model "llama-3.1-8b-instant"

# Full options with default model
./scripts/utils/groq "Generate a SQL query" \
  --model "llama-3.3-70b-versatile" \
  --max-tokens 200 \
  --temperature 0.5 \
  --system "You are a database expert" \
  --json
```

### Integration with bash scripts:

```bash
#!/bin/bash
export GROQ_API_KEY="your-key"

# Get response and save to file
python groq_agent_bridge.py "Write a haiku about code" --temperature 0.9 > response.txt

# JSON output for parsing
json_response=$(python groq_agent_bridge.py "Explain REST APIs" --json)
# Parse with jq or other tools
```

This provides a completely non-interactive way for other AI agents to call Groq's models!
```

--------------------------------------------------------------------------------
/src/mcp_memory_service/server/__init__.py:
--------------------------------------------------------------------------------

```python
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
Server package for MCP Memory Service.

Modular server components for better maintainability:
- client_detection: MCP client detection (Claude Desktop, LM Studio, etc.)
- logging_config: Client-aware logging configuration
- environment: Python path setup, version checks, performance config
- cache_manager: Global caching for storage and service instances
"""

# Client Detection
from .client_detection import MCP_CLIENT, detect_mcp_client

# Logging Configuration
from .logging_config import DualStreamHandler, configure_logging, logger

# Environment Configuration
from .environment import (
    setup_python_paths,
    check_uv_environment,
    check_version_consistency,
    configure_environment,
    configure_performance_environment
)

# Cache Management
from .cache_manager import (
    _STORAGE_CACHE,
    _MEMORY_SERVICE_CACHE,
    _CACHE_LOCK,
    _CACHE_STATS,
    _get_cache_lock,
    _get_or_create_memory_service,
    _log_cache_performance
)

# Backward compatibility: Import main functions from server_impl.py
# server_impl.py (formerly server.py) contains main() and async_main()
# We re-export them for backward compatibility: from mcp_memory_service.server import main
from ..server_impl import main, async_main, MemoryServer

__all__ = [
    # Client Detection
    'MCP_CLIENT',
    'detect_mcp_client',

    # Logging
    'DualStreamHandler',
    'configure_logging',
    'logger',

    # Environment
    'setup_python_paths',
    'check_uv_environment',
    'check_version_consistency',
    'configure_environment',
    'configure_performance_environment',

    # Cache
    '_STORAGE_CACHE',
    '_MEMORY_SERVICE_CACHE',
    '_CACHE_LOCK',
    '_CACHE_STATS',
    '_get_cache_lock',
    '_get_or_create_memory_service',
    '_log_cache_performance',

    # Entry points and core classes (for backward compatibility)
    'main',
    'async_main',
    'MemoryServer',
]
```
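
The re-export at the bottom of this package means both import paths resolve to the same objects. A quick sketch (assuming the package is installed; the identity check follows directly from the re-export, it is not a separate API guarantee):

```python
# Both paths resolve to the same implementation in server_impl.py.
from mcp_memory_service.server import main, MemoryServer
from mcp_memory_service.server_impl import main as impl_main

assert main is impl_main  # the package re-exports, it does not wrap
```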

--------------------------------------------------------------------------------
/src/mcp_memory_service/cli/utils.py:
--------------------------------------------------------------------------------

```python
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
CLI utilities for MCP Memory Service.
"""

import os
from typing import Optional

from ..storage.base import MemoryStorage


async def get_storage(backend: Optional[str] = None) -> MemoryStorage:
    """
    Get storage backend for CLI operations.

    Args:
        backend: Storage backend name ('sqlite_vec', 'cloudflare', or 'hybrid')

    Returns:
        Initialized storage backend
    """
    # Determine backend
    if backend is None:
        backend = os.getenv('MCP_MEMORY_STORAGE_BACKEND', 'sqlite_vec').lower()

    backend = backend.lower()

    if backend in ('sqlite_vec', 'sqlite-vec'):
        from ..storage.sqlite_vec import SqliteVecMemoryStorage
        from ..config import SQLITE_VEC_PATH
        storage = SqliteVecMemoryStorage(SQLITE_VEC_PATH)
        await storage.initialize()
        return storage
    elif backend == 'cloudflare':
        from ..storage.cloudflare import CloudflareStorage
        from ..config import (
            CLOUDFLARE_API_TOKEN, CLOUDFLARE_ACCOUNT_ID,
            CLOUDFLARE_VECTORIZE_INDEX, CLOUDFLARE_D1_DATABASE_ID,
            CLOUDFLARE_R2_BUCKET, CLOUDFLARE_EMBEDDING_MODEL,
            CLOUDFLARE_LARGE_CONTENT_THRESHOLD, CLOUDFLARE_MAX_RETRIES,
            CLOUDFLARE_BASE_DELAY
        )
        storage = CloudflareStorage(
            api_token=CLOUDFLARE_API_TOKEN,
            account_id=CLOUDFLARE_ACCOUNT_ID,
            vectorize_index=CLOUDFLARE_VECTORIZE_INDEX,
            d1_database_id=CLOUDFLARE_D1_DATABASE_ID,
            r2_bucket=CLOUDFLARE_R2_BUCKET,
            embedding_model=CLOUDFLARE_EMBEDDING_MODEL,
            large_content_threshold=CLOUDFLARE_LARGE_CONTENT_THRESHOLD,
            max_retries=CLOUDFLARE_MAX_RETRIES,
            base_delay=CLOUDFLARE_BASE_DELAY
        )
        await storage.initialize()
        return storage
    else:
        raise ValueError(f"Unsupported storage backend: {backend}")
```

--------------------------------------------------------------------------------
/scripts/migration/TIMESTAMP_CLEANUP_README.md:
--------------------------------------------------------------------------------

```markdown
 1 | # MCP Memory Timestamp Cleanup Scripts
 2 | 
 3 | ## Overview
 4 | 
 5 | These scripts help clean up the timestamp mess in your MCP Memory ChromaDB database where multiple timestamp formats and fields have accumulated over time.
 6 | 
 7 | ## Files
 8 | 
 9 | 1. **`verify_mcp_timestamps.py`** - Verification script to check current timestamp state
10 | 2. **`cleanup_mcp_timestamps.py`** - Migration script to fix timestamp issues
11 | 
12 | ## The Problem
13 | 
14 | Your database has accumulated 8 different timestamp-related fields:
15 | - `timestamp` (integer) - Original design
16 | - `created_at` (float) - Duplicate data
17 | - `created_at_iso` (string) - ISO format duplicate
18 | - `timestamp_float` (float) - Another duplicate
19 | - `timestamp_str` (string) - String format duplicate
20 | - `updated_at` (float) - Update tracking
21 | - `updated_at_iso` (string) - Update tracking in ISO
22 | - `date` (generic) - Generic date field
23 | 
24 | This causes:
25 | - 3x storage overhead for the same timestamp
26 | - Confusion about which field to use
27 | - Inconsistent data retrieval
28 | 
29 | ## Usage
30 | 
31 | ### Step 1: Verify Current State
32 | 
33 | ```bash
34 | python3 scripts/migrations/verify_mcp_timestamps.py
35 | ```
36 | 
37 | This will show:
38 | - Total memories in database
39 | - Distribution of timestamp fields
40 | - Memories missing timestamps
41 | - Sample values showing the redundancy
42 | - Date ranges for each timestamp type
43 | 
44 | ### Step 2: Run Migration
45 | 
46 | ```bash
47 | python3 scripts/migrations/cleanup_mcp_timestamps.py
48 | ```
49 | 
50 | The migration will:
51 | 1. **Create a backup** of your database
52 | 2. **Standardize** all timestamps to integer format in the `timestamp` field
53 | 3. **Remove** all redundant timestamp fields
54 | 4. **Ensure** all memories have valid timestamps
55 | 5. **Optimize** the database with VACUUM
56 | 
57 | ### Step 3: Verify Results
58 | 
59 | ```bash
60 | python3 scripts/migrations/verify_mcp_timestamps.py
61 | ```
62 | 
63 | After migration, you should see:
64 | - Only one timestamp field (`timestamp`)
65 | - All memories have timestamps
66 | - Clean data structure
67 | 
68 | ## Safety
69 | 
70 | - The migration script **always creates a backup** before making changes
71 | - Backup location: `/Users/hkr/Library/Application Support/mcp-memory/chroma_db/chroma.sqlite3.backup_YYYYMMDD_HHMMSS`
72 | - If anything goes wrong, you can restore the backup
73 | 
74 | ## Restoration (if needed)
75 | 
76 | If you need to restore from backup:
77 | 
78 | ```bash
79 | # Stop Claude Desktop first
80 | cp "/path/to/backup" "/Users/hkr/Library/Application Support/mcp-memory/chroma_db/chroma.sqlite3"
81 | ```
82 | 
83 | ## After Migration
84 | 
85 | Update your MCP Memory Service code to only use the `timestamp` field (integer format) for all timestamp operations. This prevents the issue from recurring.
86 | 
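87 | ## Normalization Logic (Reference Sketch)
88 | 
89 | For reference, the core operation the migration performs is collapsing whichever legacy field is present into a single integer epoch value. The sketch below is illustrative only; the actual script operates on the ChromaDB schema directly:
90 | 
91 | ```python
92 | from datetime import datetime
93 | 
94 | LEGACY_FIELDS = ["timestamp", "created_at", "created_at_iso",
95 |                  "timestamp_float", "timestamp_str", "date"]
96 | 
97 | def normalize_timestamp(meta: dict) -> int:
98 |     """Collapse any legacy timestamp field into one integer epoch value."""
99 |     for field in LEGACY_FIELDS:
100 |         value = meta.get(field)
101 |         if value is None:
102 |             continue
103 |         if isinstance(value, (int, float)):
104 |             return int(value)
105 |         try:  # ISO strings such as "2024-08-11T19:04:48"
106 |             return int(datetime.fromisoformat(str(value)).timestamp())
107 |         except ValueError:
108 |             continue
109 |     return int(datetime.now().timestamp())  # last resort: stamp with now
110 | ```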
```

--------------------------------------------------------------------------------
/src/mcp_memory_service/utils/http_server_manager.py:
--------------------------------------------------------------------------------

```python
 1 | """HTTP Server Manager for MCP Memory Service multi-client coordination."""
 2 | 
 3 | import asyncio
 4 | import logging
 5 | import os
 6 | import subprocess
 7 | import sys
 8 | from pathlib import Path
 9 | from typing import Optional
10 | 
11 | logger = logging.getLogger(__name__)
12 | 
13 | 
14 | async def auto_start_http_server_if_needed() -> bool:
15 |     """
16 |     Auto-start HTTP server if needed for multi-client coordination.
17 |     
18 |     Returns:
19 |         bool: True if server was started or already running, False if failed
20 |     """
21 |     try:
22 |         # Check if HTTP auto-start is enabled
23 |         if os.getenv("MCP_MEMORY_HTTP_AUTO_START", "").lower() not in ("true", "1"):
24 |             logger.debug("HTTP auto-start not enabled")
25 |             return False
26 |             
27 |         # Check if server is already running
28 |         from ..utils.port_detection import is_port_in_use
29 |         port = int(os.getenv("MCP_HTTP_PORT", "8000"))
30 |         
31 |         if await is_port_in_use("localhost", port):
32 |             logger.info(f"HTTP server already running on port {port}")
33 |             return True
34 |             
35 |         # Try to start the HTTP server
36 |         logger.info(f"Starting HTTP server on port {port}")
37 |         
38 |         # Get the repository root
39 |         repo_root = Path(__file__).parent.parent.parent.parent
40 |         
41 |         # Start the HTTP server as a background process
42 |         cmd = [
43 |             sys.executable, "-m", "src.mcp_memory_service.app",
44 |             "--port", str(port),
45 |             "--host", "localhost"
46 |         ]
47 |         
48 |         process = subprocess.Popen(
49 |             cmd,
50 |             cwd=repo_root,
51 |             stdout=subprocess.DEVNULL,
52 |             stderr=subprocess.DEVNULL,
53 |             start_new_session=True
54 |         )
55 |         
56 |         # Wait a moment and check if the process started successfully
57 |         await asyncio.sleep(1)
58 |         
59 |         if process.poll() is None:  # Process is still running
60 |             # Wait a bit more and check if port is now in use
61 |             await asyncio.sleep(2)
62 |             if await is_port_in_use("localhost", port):
63 |                 logger.info(f"Successfully started HTTP server on port {port}")
64 |                 return True
65 |             else:
66 |                 logger.warning("HTTP server process started but port not in use")
67 |                 return False
68 |         else:
69 |             logger.warning(f"HTTP server process exited with code {process.returncode}")
70 |             return False
71 |             
72 |     except Exception as e:
73 |         logger.error(f"Failed to auto-start HTTP server: {e}")
74 |         return False
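75 | 
76 | 
77 | # Usage sketch (illustrative): call from server startup after opting in via env:
78 | #
79 | #     os.environ.setdefault("MCP_MEMORY_HTTP_AUTO_START", "true")
80 | #     started = await auto_start_http_server_if_needed()
81 | #     if not started:
82 | #         logger.info("Continuing without a shared HTTP server")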
```

--------------------------------------------------------------------------------
/tests/conftest.py:
--------------------------------------------------------------------------------

```python
 1 | import pytest
 2 | import os
 3 | import sys
 4 | import tempfile
 5 | import shutil
 6 | import uuid
 7 | from typing import Callable, Optional, List
 8 | 
 9 | # Add src directory to Python path
10 | sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(__file__)), 'src'))
11 | 
12 | # Reserved tag for test memories - enables automatic cleanup
13 | TEST_MEMORY_TAG = "__test__"
14 | 
15 | @pytest.fixture
16 | def temp_db_path():
17 |     """Create a temporary directory for database testing."""
18 |     temp_dir = tempfile.mkdtemp()
19 |     yield temp_dir
20 |     # Clean up after test
21 |     shutil.rmtree(temp_dir)
22 | 
23 | @pytest.fixture
24 | def unique_content() -> Callable[[str], str]:
25 |     """
26 |     Generate unique test content to avoid duplicate content errors.
27 | 
28 |     Usage:
29 |         def test_example(unique_content):
30 |             content = unique_content("Test memory about authentication")
31 |             hash1 = store(content, tags=["test"])
32 | 
33 |     Returns:
34 |         A function that takes a base string and returns a unique version.
35 |     """
36 |     def _generator(base: str = "test") -> str:
37 |         return f"{base} [{uuid.uuid4()}]"
38 |     return _generator
39 | 
40 | 
41 | @pytest.fixture
42 | def test_store():
43 |     """
44 |     Store function that auto-tags memories with TEST_MEMORY_TAG for cleanup.
45 | 
46 |     All memories created with this fixture will be automatically deleted
47 |     at the end of the test session via pytest_sessionfinish hook.
48 | 
49 |     Usage:
50 |         def test_example(test_store, unique_content):
51 |             hash1 = test_store(unique_content("Test memory"), tags=["auth"])
52 |             # Memory will have tags: ["__test__", "auth"]
53 |     """
54 |     from mcp_memory_service.api import store
55 | 
56 |     def _store(content: str, tags: Optional[List[str]] = None, **kwargs):
57 |         all_tags = [TEST_MEMORY_TAG] + (tags or [])
58 |         return store(content, tags=all_tags, **kwargs)
59 | 
60 |     return _store
61 | 
62 | 
63 | def pytest_sessionfinish(session, exitstatus):
64 |     """
65 |     Cleanup all test memories at end of test session.
66 | 
67 |     Deletes all memories tagged with TEST_MEMORY_TAG to prevent
68 |     test data from polluting the production database.
69 |     """
70 |     try:
71 |         from mcp_memory_service.api import delete_by_tag
72 |         result = delete_by_tag([TEST_MEMORY_TAG])
73 |         deleted = result.get('deleted', 0) if isinstance(result, dict) else 0
74 |         if deleted > 0:
75 |             print(f"\n[Test Cleanup] Deleted {deleted} test memories tagged with '{TEST_MEMORY_TAG}'")
76 |     except Exception as e:
77 |         # Don't fail the test session if cleanup fails
78 |         print(f"\n[Test Cleanup] Warning: Could not cleanup test memories: {e}")
79 | 
```

--------------------------------------------------------------------------------
/archive/docs-removed-2025-08-23/claude_integration.md:
--------------------------------------------------------------------------------

```markdown
 1 | # MCP Memory Service - Development Guidelines
 2 | 
 3 | ## Commands
 4 | - Run memory server: `python scripts/run_memory_server.py`
 5 | - Run tests: `pytest tests/`
 6 | - Run specific test: `pytest tests/test_memory_ops.py::test_store_memory -v`
 7 | - Check environment: `python scripts/verify_environment_enhanced.py`
 8 | - Windows installation: `python scripts/install_windows.py`
 9 | - Build package: `python -m build`
10 | 
11 | ## Installation Guidelines
12 | - Always install in a virtual environment: `python -m venv venv`
13 | - Use `install.py` for cross-platform installation
14 | - Windows requires special PyTorch installation with correct index URL:
15 |   ```bash
16 |   pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu118
17 |   ```
18 | - For recursion errors, run: `python scripts/fix_sitecustomize.py`
19 | 
20 | ## Memory Service Invocation
21 | - See the comprehensive [Invocation Guide](invocation_guide.md) for full details
22 | - Key trigger phrases:
23 |   - **Storage**: "remember that", "remember this", "save to memory", "store in memory"
24 |   - **Retrieval**: "do you remember", "recall", "retrieve from memory", "search your memory for"
25 |   - **Tag-based**: "find memories with tag", "search for tag", "retrieve memories tagged"
26 |   - **Deletion**: "forget", "delete from memory", "remove from memory"
27 | 
28 | ## Code Style
29 | - Python 3.10+ with type hints
30 | - Use dataclasses for models (see `models/memory.py`)
31 | - Triple-quoted docstrings for modules and functions
32 | - Async/await pattern for all I/O operations
33 | - Error handling with specific exception types and informative messages
34 | - Logging with appropriate levels for different severity
35 | - Commit messages follow semantic release format: `type(scope): message`
36 | 
37 | ## Project Structure
38 | - `src/mcp_memory_service/` - Core package code
39 |   - `models/` - Data models
40 |   - `storage/` - Database abstraction
41 |   - `utils/` - Helper functions
42 |   - `server.py` - MCP protocol implementation
43 | - `scripts/` - Utility scripts
44 | - `memory_wrapper.py` - Windows wrapper script
45 | - `install.py` - Cross-platform installation script
46 | 
47 | ## Dependencies
48 | - ChromaDB (0.5.23) for vector database
49 | - sentence-transformers (>=2.2.2) for embeddings
50 | - PyTorch (platform-specific installation)
51 | - MCP protocol (>=1.0.0, <2.0.0) for client-server communication
52 | 
53 | ## Troubleshooting
54 | - For Windows installation issues, use `scripts/install_windows.py`
55 | - Apple Silicon requires Python 3.10+ built for ARM64
56 | - CUDA issues: verify with `torch.cuda.is_available()`
57 | - For MCP protocol issues, check `server.py` for required methods
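58 | 
59 | ## Model Example (Sketch)
60 | 
61 | For orientation, models follow the dataclass style below (field names are illustrative; see `models/memory.py` for the actual definition):
62 | 
63 | ```python
64 | from dataclasses import dataclass, field
65 | 
66 | @dataclass
67 | class Memory:
68 |     content: str
69 |     content_hash: str
70 |     tags: list[str] = field(default_factory=list)
71 |     memory_type: str = "note"
72 | ```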
```

--------------------------------------------------------------------------------
/.claude/directives/quality-system-details.md:
--------------------------------------------------------------------------------

```markdown
 1 | # Memory Quality System - Detailed Reference
 2 | 
 3 | **Quick Summary for CLAUDE.md**: See main file for architecture overview. This file contains implementation details, configuration options, and troubleshooting.
 4 | 
 5 | ## Complete Configuration Options
 6 | 
 7 | ```bash
 8 | # Quality System (Local-First Defaults)
 9 | MCP_QUALITY_SYSTEM_ENABLED=true         # Default: enabled
10 | MCP_QUALITY_AI_PROVIDER=local           # local|groq|gemini|auto|none
11 | MCP_QUALITY_LOCAL_MODEL=nvidia-quality-classifier-deberta  # Default v8.49.0+
12 | MCP_QUALITY_LOCAL_DEVICE=auto           # auto|cpu|cuda|mps|directml
13 | 
14 | # Legacy model (backward compatible, not recommended)
15 | # MCP_QUALITY_LOCAL_MODEL=ms-marco-MiniLM-L-6-v2
16 | 
17 | # Quality-Boosted Search (Recommended with DeBERTa)
18 | MCP_QUALITY_BOOST_ENABLED=true          # More accurate with DeBERTa
19 | MCP_QUALITY_BOOST_WEIGHT=0.3            # 0.3 = 30% quality, 70% semantic
20 | 
21 | # Quality-Based Retention
22 | MCP_QUALITY_RETENTION_HIGH=365          # Days for quality ≥0.7
23 | MCP_QUALITY_RETENTION_MEDIUM=180        # Days for 0.5-0.7
24 | MCP_QUALITY_RETENTION_LOW_MIN=30        # Min days for <0.5
25 | ```
26 | 
27 | ## MCP Tools
28 | 
29 | - `rate_memory(content_hash, rating, feedback)` - Manual quality rating (-1/0/1)
30 | - `get_memory_quality(content_hash)` - Retrieve quality metrics
31 | - `analyze_quality_distribution(min_quality, max_quality)` - System-wide analytics
32 | - `retrieve_with_quality_boost(query, n_results, quality_weight)` - Quality-boosted search
33 | 
34 | ## Migration from MS-MARCO to DeBERTa
35 | 
36 | **Why Migrate:**
37 | - ✅ Eliminates self-matching bias (no query needed)
38 | - ✅ Uniform distribution (mean 0.60-0.70 vs 0.469)
39 | - ✅ Fewer false positives (<5% perfect scores vs 20%)
40 | - ✅ Absolute quality assessment vs relative ranking
41 | 
42 | **Migration Guide**: See [docs/guides/memory-quality-guide.md](../../docs/guides/memory-quality-guide.md#migration-from-ms-marco-to-deberta)
43 | 
44 | ## Success Metrics (Phase 1 - v8.48.3)
45 | 
46 | **Achieved:**
47 | - ✅ <100ms search latency with quality boost (45ms avg, +17% overhead)
48 | - ✅ $0 monthly cost (local SLM default)
49 | - ✅ 75% local SLM usage (3,570 of 4,762 memories)
50 | - ✅ 95% quality score coverage
51 | 
52 | **Challenges:**
53 | - ⚠️ Average score 0.469 (target: 0.6+)
54 | - ⚠️ Self-matching bias ~25%
55 | - ⚠️ Quality boost minimal ranking improvement (0-3%)
56 | 
57 | **Next Phase**: See [Issue #268](https://github.com/doobidoo/mcp-memory-service/issues/268)
58 | 
59 | ## Troubleshooting
60 | 
61 | See [docs/guides/memory-quality-guide.md](../../docs/guides/memory-quality-guide.md) for:
62 | - Model download issues
63 | - Performance tuning
64 | - Quality score interpretation
65 | - User feedback integration
66 | 
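67 | ## Boost Weight Semantics (Sketch)
68 | 
69 | The boost weight blends the two scores linearly. A minimal sketch of the assumed formula (see `retrieve_with_quality_boost` for the authoritative implementation):
70 | 
71 | ```python
72 | def boosted_score(semantic: float, quality: float, weight: float = 0.3) -> float:
73 |     """Blend semantic similarity with quality score; both in [0, 1]."""
74 |     return (1 - weight) * semantic + weight * quality
75 | ```
76 | 
77 | With the default `weight=0.3`, a memory scoring 0.8 semantic and 0.5 quality ranks at 0.71.
78 | 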
```

--------------------------------------------------------------------------------
/src/mcp_memory_service/server/client_detection.py:
--------------------------------------------------------------------------------

```python
 1 | # Copyright 2024 Heinrich Krupp
 2 | #
 3 | # Licensed under the Apache License, Version 2.0 (the "License");
 4 | # you may not use this file except in compliance with the License.
 5 | # You may obtain a copy of the License at
 6 | #
 7 | #     http://www.apache.org/licenses/LICENSE-2.0
 8 | #
 9 | # Unless required by applicable law or agreed to in writing, software
10 | # distributed under the License is distributed on an "AS IS" BASIS,
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | # See the License for the specific language governing permissions and
13 | # limitations under the License.
14 | 
15 | """
16 | Client detection module for MCP Memory Service.
17 | 
18 | Detects which MCP client is running the server (Claude Desktop, LM Studio, etc.)
19 | and provides environment-aware behavior adjustments.
20 | """
21 | 
22 | import os
23 | import logging
24 | import psutil
25 | 
26 | logger = logging.getLogger(__name__)
27 | 
28 | 
29 | def detect_mcp_client():
30 |     """Detect which MCP client is running this server."""
31 |     try:
32 |         # Get the parent process (the MCP client)
33 |         current_process = psutil.Process()
34 |         parent = current_process.parent()
35 | 
36 |         if parent:
37 |             parent_name = parent.name().lower()
38 |             try:
39 |                 parent_exe = parent.exe()
40 |             except (psutil.AccessDenied, psutil.ZombieProcess, OSError):
41 |                 parent_exe = ""  # exe() may be inaccessible for privileged/zombie parents
42 |             # Check for Claude Desktop
43 |             if 'claude' in parent_name or 'claude' in parent_exe.lower():
44 |                 return 'claude_desktop'
45 | 
46 |             # Check for LM Studio
47 |             if 'lmstudio' in parent_name or 'lm-studio' in parent_name or 'lmstudio' in parent_exe.lower():
48 |                 return 'lm_studio'
49 | 
50 |             # Check command line for additional clues
51 |             try:
52 |                 cmdline = parent.cmdline()
53 |                 cmdline_str = ' '.join(cmdline).lower()
54 |                 if 'claude' in cmdline_str:
55 |                     return 'claude_desktop'
56 |                 if 'lmstudio' in cmdline_str or 'lm-studio' in cmdline_str:
57 |                     return 'lm_studio'
58 |             except (psutil.AccessDenied, psutil.ZombieProcess, OSError, IndexError, AttributeError) as e:
59 |                 logger.debug(f"Could not detect client from process: {e}")
60 | 
61 |         # Fallback: check environment variables
62 |         if os.getenv('CLAUDE_DESKTOP'):
63 |             return 'claude_desktop'
64 |         if os.getenv('LM_STUDIO'):
65 |             return 'lm_studio'
66 | 
67 |         # Default to Claude Desktop for strict JSON compliance
68 |         return 'claude_desktop'
69 | 
70 |     except Exception:
71 |         # If detection fails, default to Claude Desktop (strict mode)
72 |         return 'claude_desktop'
73 | 
74 | 
75 | # Detect the current MCP client
76 | MCP_CLIENT = detect_mcp_client()
77 | 
```

--------------------------------------------------------------------------------
/archive/investigations/MACOS_HOOKS_INVESTIGATION.md:
--------------------------------------------------------------------------------

```markdown
 1 | # macOS Memory Hooks Investigation
 2 | 
 3 | ## Issue
 4 | Memory awareness hooks may work differently on macOS vs Linux when using MCP protocol.
 5 | 
 6 | ## Current Linux Behavior (Manjaro)
 7 | - **Problem**: Hooks try to spawn duplicate MCP server via `MCPClient(serverCommand)`
 8 | - **Symptom**: Connection timeout when hooks execute
 9 | - **Root Cause**: Claude Code already has MCP server on stdio, can't have two servers on same streams
10 | - **Current Workaround**: HTTP fallback (requires separate HTTP server on port 8443)
11 | 
12 | ## Hypothesis: macOS May Work Differently
13 | User reports hooks work on macOS without HTTP fallback. Possible reasons:
14 | 1. macOS Claude Code may provide hooks access to existing MCP connection
15 | 2. Different process/stdio handling on macOS vs Linux
16 | 3. `useExistingServer: true` config may actually work on macOS
17 | 
18 | ## Investigation Needed (On MacBook)
19 | 
20 | ### Test 1: MCP-Only Configuration
21 | ```json
22 | {
23 |   "memoryService": {
24 |     "protocol": "mcp",
25 |     "preferredProtocol": "mcp",
26 |     "mcp": {
27 |       "useExistingServer": true,
28 |       "serverName": "memory"
29 |     }
30 |   }
31 | }
32 | ```
33 | 
34 | **Expected on macOS (if hypothesis correct):**
35 | - ✅ Hooks connect successfully
36 | - ✅ No duplicate server spawned
37 | - ✅ Memory context injected on session start
38 | 
39 | **Expected on Linux (current behavior):**
40 | - ❌ Connection timeout
41 | - ❌ Multiple server processes spawn
42 | - ❌ Fallback to HTTP needed
43 | 
44 | ### Test 2: Check Memory Client Behavior
45 | 1. Run hook manually: `node ~/.claude/hooks/core/session-start.js`
46 | 2. Check process list: Does it spawn new `memory server` process?
47 | 3. Monitor connection: Does it timeout or succeed?
48 | 
49 | ### Test 3: Platform Comparison
50 | ```bash
51 | # On macOS
52 | ps aux | grep "memory server"  # How many instances?
53 | node ~/.claude/hooks/core/session-start.js  # Does it work?
54 | 
55 | # On Linux (current)
56 | ps aux | grep "memory server"  # Multiple instances!
57 | node ~/.claude/hooks/core/session-start.js  # Times out!
58 | ```
59 | 
60 | ## Files to Check
61 | - `claude-hooks/utilities/memory-client.js` - MCP connection logic
62 | - `claude-hooks/utilities/mcp-client.js` - Server spawning code
63 | - `claude-hooks/install_hooks.py` - Config generation (line 268-273: useExistingServer)
64 | 
65 | ## Next Steps
66 | 1. Test on MacBook with MCP-only config
67 | 2. If works on macOS: investigate platform-specific differences
68 | 3. Document proper cross-platform solution
69 | 4. Update hooks to work consistently on both platforms
70 | 
71 | ## Current Status
72 | - **Linux**: Requires HTTP fallback (confirmed working)
73 | - **macOS**: TBD - needs verification
74 | - **Goal**: Understand why different, achieve consistent behavior
75 | 
76 | ---
77 | Created: 2025-09-30
78 | Platform: Linux (Manjaro)
79 | Issue: Hooks/MCP connection conflict
80 | 
```

--------------------------------------------------------------------------------
/scripts/service/deploy_dual_services.sh:
--------------------------------------------------------------------------------

```bash
 1 | #!/bin/bash
 2 | 
 3 | echo "🚀 Deploying Dual MCP Services with mDNS..."
 4 | echo "   - FastMCP Server (port 8000) for Claude Code MCP clients"
 5 | echo "   - HTTP Dashboard (port 8080) for web interface"
 6 | echo "   - mDNS enabled for both services"
 7 | echo ""
 8 | 
 9 | # Stop existing services
10 | echo "⏹️ Stopping existing services..."
11 | sudo systemctl stop mcp-memory 2>/dev/null || true
12 | sudo systemctl stop mcp-http-dashboard 2>/dev/null || true
13 | 
14 | # Install FastMCP service with mDNS
15 | echo "📝 Installing FastMCP service (port 8000)..."
16 | sudo cp /tmp/fastmcp-server-with-mdns.service /etc/systemd/system/mcp-memory.service
17 | 
18 | # Install HTTP Dashboard service
19 | echo "📝 Installing HTTP Dashboard service (port 8080)..."
20 | sudo cp /tmp/mcp-http-dashboard.service /etc/systemd/system/mcp-http-dashboard.service
21 | 
22 | # Reload systemd
23 | echo "🔄 Reloading systemd daemon..."
24 | sudo systemctl daemon-reload
25 | 
26 | # Enable both services
27 | echo "🔛 Enabling both services for startup..."
28 | sudo systemctl enable mcp-memory
29 | sudo systemctl enable mcp-http-dashboard
30 | 
31 | # Start FastMCP service first
32 | echo "▶️ Starting FastMCP server (port 8000)..."
33 | sudo systemctl start mcp-memory
34 | sleep 2
35 | 
36 | # Start HTTP Dashboard service
37 | echo "▶️ Starting HTTP Dashboard (port 8080)..."
38 | sudo systemctl start mcp-http-dashboard
39 | sleep 2
40 | 
41 | # Check status of both services
42 | echo ""
43 | echo "🔍 Checking service status..."
44 | echo ""
45 | echo "=== FastMCP Server (port 8000) ==="
46 | sudo systemctl status mcp-memory --no-pager
47 | echo ""
48 | echo "=== HTTP Dashboard (port 8080) ==="
49 | sudo systemctl status mcp-http-dashboard --no-pager
50 | 
51 | echo ""
52 | echo "📊 Port status:"
53 | ss -tlnp | grep -E ':(8000|8080)'
54 | 
55 | echo ""
56 | echo "🌐 mDNS Services (if avahi is installed):"
57 | avahi-browse -t _http._tcp 2>/dev/null | grep -E "(MCP|Memory)" || echo "No mDNS services found (avahi may not be installed)"
58 | avahi-browse -t _mcp._tcp 2>/dev/null | grep -E "(MCP|Memory)" || echo "No MCP mDNS services found"
59 | 
60 | echo ""
61 | echo "✅ Dual service deployment complete!"
62 | echo ""
63 | echo "🔗 Available Services:"
64 | echo "   - FastMCP Protocol: http://memory.local:8000/mcp (for Claude Code)"
65 | echo "   - HTTP Dashboard:   http://memory.local:8080/ (for web access)"
66 | echo "   - API Endpoints:    http://memory.local:8080/api/* (for curl/scripts)"
67 | echo ""
68 | echo "📋 Service Management:"
69 | echo "   - FastMCP logs:     sudo journalctl -u mcp-memory -f"
70 | echo "   - Dashboard logs:   sudo journalctl -u mcp-http-dashboard -f"
71 | echo "   - Stop FastMCP:     sudo systemctl stop mcp-memory"
72 | echo "   - Stop Dashboard:   sudo systemctl stop mcp-http-dashboard"
73 | echo ""
74 | echo "🔍 mDNS Discovery:"
75 | echo "   - Browse services:  avahi-browse -t _http._tcp"
76 | echo "   - Browse MCP:       avahi-browse -t _mcp._tcp"
```

--------------------------------------------------------------------------------
/archive/docs-root-cleanup-2025-08-23/PYTORCH_DOWNLOAD_FIX.md:
--------------------------------------------------------------------------------

```markdown
 1 | # PyTorch Download Issue - FIXED! 🎉
 2 | 
 3 | ## Problem
 4 | Claude Desktop was downloading PyTorch models (230MB+) on every startup, even with offline environment variables set in the config.
 5 | 
 6 | ## Root Cause
 7 | The issue was that **UV package manager isolation** prevented environment variables from being properly inherited, and model downloads happened before our offline configuration could take effect.
 8 | 
 9 | ## Solution Applied
10 | 
11 | ### 1. Created Offline Launcher Script
12 | **File**: `scripts/memory_offline.py`
13 | - Sets offline environment variables **before any imports**
14 | - Configures cache paths for Windows
15 | - Bypasses UV isolation by running Python directly
16 | 
17 | ### 2. Updated Claude Desktop Config
18 | **Your config now uses**:
19 | ```json
20 | {
21 |   "command": "python",
22 |   "args": ["C:/REPOSITORIES/mcp-memory-service/scripts/memory_offline.py"]
23 | }
24 | ```
25 | 
26 | **Instead of**:
27 | ```json
28 | {
29 |   "command": "uv", 
30 |   "args": ["--directory", "...", "run", "memory"]
31 | }
32 | ```
33 | 
34 | ### 3. Added Code-Level Offline Setup
35 | **File**: `src/mcp_memory_service/__init__.py`
36 | - Added `setup_offline_mode()` function
37 | - Runs immediately when module is imported
38 | - Provides fallback offline configuration
39 | 
40 | ## Test Results ✅
41 | 
42 | **Before Fix**:
43 | ```
44 | 2025-08-11T19:04:48.249Z [memory] [info] Message from client: {...}
45 | Downloading torch (230.2MiB)  ← PROBLEM
46 | 2025-08-11T19:05:48.151Z [memory] [info] Request timed out
47 | ```
48 | 
49 | **After Fix**:
50 | ```
51 | Setting up offline mode...
52 | HF_HUB_OFFLINE: 1
53 | HF_HOME: C:\Users\heinrich.krupp\.cache\huggingface  
54 | Starting MCP Memory Service in offline mode...
55 | [No download messages] ← FIXED!
56 | ```
57 | 
58 | ## Files Modified
59 | 
60 | 1. **Your Claude Desktop Config**: `%APPDATA%\Claude\claude_desktop_config.json`
61 |    - Changed from UV to direct Python execution
62 |    - Uses new offline launcher script
63 | 
64 | 2. **New Offline Launcher**: `scripts/memory_offline.py`
65 |    - Forces offline mode before any ML library imports
66 |    - Configures Windows cache paths automatically
67 | 
68 | 3. **Core Module Init**: `src/mcp_memory_service/__init__.py`
69 |    - Added offline mode setup as backup
70 |    - Runs on module import
71 | 
72 | 4. **Sample Config**: `examples/claude_desktop_config_windows.json`
73 |    - Updated for other users
74 |    - Uses new launcher approach
75 | 
76 | ## Impact
77 | 
78 | ✅ **No more 230MB PyTorch downloads on startup**
79 | ✅ **Faster Claude Desktop initialization**
80 | ✅ **Uses existing cached models (434 memories preserved)**
81 | ✅ **SQLite-vec backend still working**
82 | 
83 | ## For Other Users
84 | 
85 | Use the updated `examples/claude_desktop_config_windows.json` template and:
86 | 1. Replace `C:/REPOSITORIES/mcp-memory-service` with your path
87 | 2. Replace `YOUR_USERNAME` with your Windows username
88 | 3. Use `python` command with `scripts/memory_offline.py`
89 | 
90 | The stubborn PyTorch download issue is now **completely resolved**! 🎉
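91 | 
92 | ## Offline Setup (Sketch)
93 | 
94 | The essence of the fix is setting the offline variables before any ML import runs. A minimal sketch (variable names match the test output above; see `scripts/memory_offline.py` for the full launcher):
95 | 
96 | ```python
97 | import os
98 | 
99 | def setup_offline_mode():
100 |     """Force cached-model usage; must run before importing torch/transformers."""
101 |     os.environ.setdefault("HF_HUB_OFFLINE", "1")
102 |     os.environ.setdefault("TRANSFORMERS_OFFLINE", "1")
103 | 
104 | setup_offline_mode()  # call at the very top of the entry point
105 | ```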
```

--------------------------------------------------------------------------------
/.github/workflows/CACHE_FIX.md:
--------------------------------------------------------------------------------

```markdown
 1 | # Python Cache Configuration Fix
 2 | 
 3 | ## Issue Identified
 4 | **Date**: 2024-08-24
 5 | **Problem**: GitHub Actions workflows failing at Python setup step
 6 | 
 7 | ### Root Cause
 8 | The `setup-python` action was configured with `cache: 'pip'` but couldn't find a `requirements.txt` file. The project uses `pyproject.toml` for dependency management instead.
 9 | 
10 | ### Error Message
11 | ```
12 | Error: No file in /home/runner/work/mcp-memory-service/mcp-memory-service matched to [**/requirements.txt], make sure you have checked out the target repository
13 | ```
14 | 
15 | ## Solution Applied
16 | 
17 | Added `cache-dependency-path: '**/pyproject.toml'` to all Python setup steps that use pip caching.
18 | 
19 | ### Files Modified
20 | 
21 | #### 1. `.github/workflows/main-optimized.yml`
22 | Fixed 2 instances:
23 | - Line 34-39: Release job Python setup
24 | - Line 112-117: Test job Python setup
25 | 
26 | #### 2. `.github/workflows/cleanup-images.yml`
27 | Fixed 1 instance:
28 | - Line 95-100: Docker Hub cleanup job Python setup
29 | 
30 | ### Before
31 | ```yaml
32 | - name: Set up Python
33 |   uses: actions/setup-python@v4
34 |   with:
35 |     python-version: '3.11'
36 |     cache: 'pip'
37 |     # ❌ Missing cache-dependency-path causes failure
38 | ```
39 | 
40 | ### After
41 | ```yaml
42 | - name: Set up Python
43 |   uses: actions/setup-python@v4
44 |   with:
45 |     python-version: '3.11'
46 |     cache: 'pip'
47 |     cache-dependency-path: '**/pyproject.toml'
48 |     # ✅ Explicitly tells setup-python where to find dependencies
49 | ```
50 | 
51 | ## Benefits
52 | 
53 | 1. **Immediate Fix**: Workflows will no longer fail at Python setup step
54 | 2. **Performance**: Dependencies are properly cached, reducing workflow execution time
55 | 3. **Compatibility**: Works with modern Python projects using `pyproject.toml` (PEP 621)
56 | 
57 | ## Testing
58 | 
59 | All modified workflows have been validated:
60 | - ✅ `main-optimized.yml` - Valid YAML syntax
61 | - ✅ `cleanup-images.yml` - Valid YAML syntax
62 | 
63 | ## Background
64 | 
65 | The `setup-python` action defaults to looking for `requirements.txt` when using pip cache. Since this project uses `pyproject.toml` for dependency management (following modern Python packaging standards), we need to explicitly specify the dependency file path.
66 | 
67 | This is a known issue in the setup-python action:
68 | - Issue #502: Cache pip dependencies from pyproject.toml file
69 | - Issue #529: Change pip default cache path to include pyproject.toml
70 | 
71 | ## Next Steps
72 | 
73 | After pushing these changes:
74 | 1. Workflows should complete successfully
75 | 2. Monitor the Python setup steps to confirm caching works
76 | 3. Check workflow execution time improvements from proper caching
77 | 
78 | ## Alternative Solutions (Not Applied)
79 | 
80 | 1. **Remove caching**: Simply remove `cache: 'pip'` line (would work but slower)
81 | 2. **Create requirements.txt**: Generate from pyproject.toml (adds maintenance burden)
82 | 3. **Use uv directly**: Since project uses uv for package management (more complex change)
83 | 
84 | Date: 2024-08-24
85 | Status: Fixed and ready for deployment
```

--------------------------------------------------------------------------------
/scripts/pr/amp_suggest_fixes.sh:
--------------------------------------------------------------------------------

```bash
 1 | #!/bin/bash
 2 | # scripts/pr/amp_suggest_fixes.sh - Generate fix suggestions using Amp CLI
 3 | #
 4 | # Usage: bash scripts/pr/amp_suggest_fixes.sh <PR_NUMBER>
 5 | # Example: bash scripts/pr/amp_suggest_fixes.sh 215
 6 | 
 7 | set -e
 8 | 
 9 | PR_NUMBER=$1
10 | 
11 | if [ -z "$PR_NUMBER" ]; then
12 |     echo "Usage: $0 <PR_NUMBER>"
13 |     exit 1
14 | fi
15 | 
16 | if ! command -v gh &> /dev/null; then
17 |     echo "Error: GitHub CLI (gh) is not installed"
18 |     exit 1
19 | fi
20 | 
21 | echo "=== Amp CLI Fix Suggestions for PR #$PR_NUMBER ==="
22 | echo ""
23 | 
24 | # Ensure Amp directories exist
25 | mkdir -p .claude/amp/prompts/pending
26 | mkdir -p .claude/amp/responses/ready
27 | 
28 | # Get repository
29 | REPO=$(gh repo view --json nameWithOwner -q .nameWithOwner 2>/dev/null || echo "doobidoo/mcp-memory-service")
30 | 
31 | # Fetch review comments
32 | echo "Fetching review comments from PR #$PR_NUMBER..."
33 | review_comments=$(gh api "repos/$REPO/pulls/$PR_NUMBER/comments" | \
34 |     jq -r '[.[] | select(.user.login | test("bot|gemini|claude"))] | .[] | "- \(.path):\(.line) - \(.body[0:200])"' | \
35 |     head -50)
36 | 
37 | if [ -z "$review_comments" ]; then
38 |     echo "No review comments found."
39 |     exit 0
40 | fi
41 | 
42 | echo "Review Comments:"
43 | echo "$review_comments"
44 | echo ""
45 | 
46 | # Get PR diff
47 | echo "Fetching PR diff..."
48 | pr_diff=$(gh pr diff $PR_NUMBER | head -500)  # Limit to 500 lines to avoid token overflow
49 | 
50 | # Generate UUID for fix suggestions task
51 | fixes_uuid=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)
52 | 
53 | echo "Creating Amp prompt for fix suggestions..."
54 | 
55 | # Create fix suggestions prompt
56 | # Build JSON with jq so quotes/newlines in comments and diff are escaped safely
57 | jq -n \
58 |   --arg id "${fixes_uuid}" \
59 |   --arg ts "$(date -u +"%Y-%m-%dT%H:%M:%S.000Z")" \
60 |   --arg pr "${PR_NUMBER}" \
61 |   --arg prompt "Analyze these code review comments and suggest specific fixes. DO NOT auto-apply changes. Output format: For each issue, provide: 1) File path, 2) Issue description, 3) Suggested fix (code snippet or explanation), 4) Rationale. Focus on safe, non-breaking changes (formatting, type hints, error handling, variable naming, import organization).
62 | 
63 | Review comments:
64 | ${review_comments}
65 | 
66 | PR diff (current code):
67 | ${pr_diff}
68 | 
69 | Provide actionable fix suggestions in markdown format." \
70 |   '{id: $id, timestamp: $ts, prompt: $prompt, context: {project: "mcp-memory-service", task: "fix-suggestions", pr_number: $pr}, options: {timeout: 180000, format: "markdown"}}' \
71 |   > ".claude/amp/prompts/pending/fixes-${fixes_uuid}.json"
72 | 
73 | echo "✅ Created Amp prompt for fix suggestions"
74 | echo ""
75 | echo "=== Run this Amp command ==="
76 | echo "amp @.claude/amp/prompts/pending/fixes-${fixes_uuid}.json"
77 | echo ""
78 | echo "=== Then collect the suggestions ==="
79 | echo "bash scripts/pr/amp_collect_results.sh --timeout 180 --uuids '${fixes_uuid}'"
80 | echo ""
81 | 
82 | # Save UUID for later collection
83 | echo "${fixes_uuid}" > /tmp/amp_fix_suggestions_uuid_${PR_NUMBER}.txt
84 | 
85 | echo "UUID saved to /tmp/amp_fix_suggestions_uuid_${PR_NUMBER}.txt for result collection"
86 | 
```

--------------------------------------------------------------------------------
/docs/mastery/troubleshooting.md:
--------------------------------------------------------------------------------

```markdown
 1 | # MCP Memory Service — Troubleshooting Guide
 2 | 
 3 | Common issues and proven fixes when running locally or in CI.
 4 | 
 5 | ## sqlite-vec Extension Loading Fails
 6 | 
 7 | Symptoms:
 8 | 
 9 | - Errors like: `SQLite extension loading not supported` or `enable_load_extension not available`.
10 | - `Failed to load sqlite-vec extension`.
11 | 
12 | Causes:
13 | 
14 | - Python’s `sqlite3` was not compiled with loadable-extension support (the macOS system Python is a common culprit).
15 | 
16 | Fixes:
17 | 
18 | - macOS:
19 |   - `brew install python` and use Homebrew Python.
20 |   - Or install via pyenv with extensions: `PYTHON_CONFIGURE_OPTS='--enable-loadable-sqlite-extensions' pyenv install 3.12.x`.
21 | - Linux:
22 |   - Install dev headers: `apt install python3-dev sqlite3` and ensure Python was built with `--enable-loadable-sqlite-extensions`.
23 | - Windows:
24 |   - Prefer official python.org installer or conda distribution.
25 | - Alternative: switch backend: `export MCP_MEMORY_STORAGE_BACKEND=chromadb` (see migration notes).
26 | 
27 | ## `sentence-transformers`/`torch` Not Available
28 | 
29 | Symptoms:
30 | 
31 | - Warnings about no embedding model; semantic search returns empty.
32 | 
33 | Fixes:
34 | 
35 | - Install ML deps: `pip install sentence-transformers torch` (or `uv add` equivalents).
36 | - In constrained environments, tag-based and metadata operations work without embeddings; semantic search becomes available once the ML deps are installed.
37 | 
38 | ## First-Run Model Downloads
39 | 
40 | Symptoms:
41 | 
42 | - Warnings like: `Using TRANSFORMERS_CACHE is deprecated` or `No snapshots directory`.
43 | 
44 | Status:
45 | 
46 | - Expected on first run while downloading `all-MiniLM-L6-v2` (~25MB). Subsequent runs use cache.
47 | 
48 | ## Cloudflare Backend Fails on Boot
49 | 
50 | Symptoms:
51 | 
52 | - Immediate exit with `Missing required environment variables for Cloudflare backend`.
53 | 
54 | Fixes:
55 | 
56 | - Set all required envs: `CLOUDFLARE_API_TOKEN`, `CLOUDFLARE_ACCOUNT_ID`, `CLOUDFLARE_VECTORIZE_INDEX`, `CLOUDFLARE_D1_DATABASE_ID`. Optional: `CLOUDFLARE_R2_BUCKET`.
57 | - Validate resources via Wrangler or dashboard; see `docs/cloudflare-setup.md`.
58 | 
59 | ## Port/Coordination Conflicts
60 | 
61 | Symptoms:
62 | 
63 | - Multi-client mode cannot start HTTP server, or falls back to direct mode.
64 | 
65 | Status/Fixes:
66 | 
67 | - The server auto-detects: `http_client` (connect), `http_server` (start), else `direct` (WAL). If the coordination port is in use by another service, expect direct fallback; adjust port or stop the conflicting service.
68 | 
69 | ## File Permission or Path Errors
70 | 
71 | Symptoms:
72 | 
73 | - Path write tests failing under `BASE_DIR` or backup directories.
74 | 
75 | Fixes:
76 | 
77 | - Ensure `MCP_MEMORY_BASE_DIR` points to a writable location; the service validates and creates directories and test-writes `.write_test` files with retries.
78 | 
79 | ## Slow Queries or High CPU
80 | 
81 | Checklist:
82 | 
83 | - Ensure embeddings are available and model loaded once (warmup).
84 | - For low RAM or Windows CUDA:
85 |   - `PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128`
86 |   - Reduce model cache sizes; see `configure_environment()` in `server.py`.
87 | - Tune SQLite pragmas via `MCP_MEMORY_SQLITE_PRAGMAS`.
88 | 
89 | 
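90 | For example (illustrative values; the variable takes comma-separated `PRAGMA=value` pairs): `export MCP_MEMORY_SQLITE_PRAGMAS="busy_timeout=15000,cache_size=20000"`.
91 | 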
```

--------------------------------------------------------------------------------
/scripts/server/check_http_server.py:
--------------------------------------------------------------------------------

```python
 1 | #!/usr/bin/env python3
 2 | """
 3 | Check if the MCP Memory Service HTTP server is running.
 4 | 
 5 | This script checks if the HTTP server is accessible and provides
 6 | helpful feedback to users about how to start it if it's not running.
 7 | """
 8 | 
 9 | import sys
10 | import os
11 | from urllib.request import urlopen, Request
12 | from urllib.error import URLError, HTTPError
13 | import json
14 | import ssl
15 | 
16 | 
17 | def check_http_server(verbose: bool = False) -> bool:
18 |     """
19 |     Check if the HTTP server is running.
20 | 
21 |     Args:
22 |         verbose: If True, print detailed status messages
23 | 
24 |     Returns:
25 |         bool: True if server is running, False otherwise
26 |     """
27 |     # Determine the endpoint from environment
28 |     https_enabled = os.getenv('MCP_HTTPS_ENABLED', 'false').lower() == 'true'
29 |     http_port = int(os.getenv('MCP_HTTP_PORT', '8000'))
30 |     https_port = int(os.getenv('MCP_HTTPS_PORT', '8443'))
31 | 
32 |     if https_enabled:
33 |         endpoint = f"https://localhost:{https_port}/api/health"
34 |     else:
35 |         endpoint = f"http://localhost:{http_port}/api/health"
36 | 
37 |     try:
38 |         # Create SSL context that doesn't verify certificates (for self-signed certs)
39 |         ctx = ssl.create_default_context()
40 |         ctx.check_hostname = False
41 |         ctx.verify_mode = ssl.CERT_NONE
42 | 
43 |         req = Request(endpoint)
44 |         with urlopen(req, timeout=3, context=ctx) as response:
45 |             if response.status == 200:
46 |                 data = json.loads(response.read().decode('utf-8'))
47 |                 if verbose:
48 |                     print("[OK] HTTP server is running")
49 |                     print(f"   Version: {data.get('version', 'unknown')}")
50 |                     print(f"   Endpoint: {endpoint}")
51 |                     print(f"   Status: {data.get('status', 'unknown')}")
52 |                 return True
53 |             else:
54 |                 if verbose:
55 |                     print(f"[WARN] HTTP server responded with status {response.status}")
56 |                 return False
57 |     except (URLError, HTTPError, json.JSONDecodeError) as e:
58 |         if verbose:
59 |             print("[ERROR] HTTP server is NOT running")
60 |             print(f"\nTo start the HTTP server, run:")
61 |             print(f"   uv run python scripts/server/run_http_server.py")
62 |             print(f"\n   Or for HTTPS:")
63 |             print(f"   MCP_HTTPS_ENABLED=true uv run python scripts/server/run_http_server.py")
64 |             print(f"\nError: {str(e)}")
65 |         return False
66 | 
67 | 
68 | def main():
69 |     """Main entry point for CLI usage."""
70 |     import argparse
71 | 
72 |     parser = argparse.ArgumentParser(
73 |         description="Check if MCP Memory Service HTTP server is running"
74 |     )
75 |     parser.add_argument(
76 |         "-q", "--quiet",
77 |         action="store_true",
78 |         help="Only return exit code (0=running, 1=not running), no output."
79 |     )
80 | 
81 |     args = parser.parse_args()
82 | 
83 |     is_running = check_http_server(verbose=not args.quiet)
84 |     sys.exit(0 if is_running else 1)
85 | 
86 | 
87 | if __name__ == "__main__":
88 |     main()
89 | 
```

--------------------------------------------------------------------------------
/claude_commands/memory-ingest.md:
--------------------------------------------------------------------------------

```markdown
 1 | # memory-ingest
 2 | 
 3 | Ingest a document file into the MCP Memory Service database.
 4 | 
 5 | ## Usage
 6 | 
 7 | ```
 8 | claude /memory-ingest <file_path> [--tags TAG1,TAG2] [--chunk-size SIZE] [--chunk-overlap OVERLAP] [--memory-type TYPE]
 9 | ```
10 | 
11 | ## Parameters
12 | 
13 | - `file_path`: Path to the document file to ingest (required)
14 | - `--tags`: Comma-separated list of tags to apply to all memories created from this document
15 | - `--chunk-size`: Target size for text chunks in characters (default: 1000)
16 | - `--chunk-overlap`: Characters to overlap between chunks (default: 200)
17 | - `--memory-type`: Type label for created memories (default: "document")
18 | 
19 | ## Supported Formats
20 | 
21 | - PDF files (.pdf)
22 | - Text files (.txt, .md, .markdown, .rst)
23 | - JSON files (.json)
24 | 
25 | ## Implementation
26 | 
27 | I need to upload the document to the MCP Memory Service HTTP API endpoint and monitor the progress.
28 | 
29 | First, let me check if the service is running and get the correct endpoint:
30 | 
31 | ```bash
32 | # Check if the service is running on default port
33 | curl -s http://localhost:8080/api/health || echo "Service not running on 8080"
34 | 
35 | # Or check common alternative ports
36 | curl -s http://localhost:8443/api/health || echo "Service not running on 8443"
37 | ```
38 | 
39 | Assuming the service is running (adjust the URL as needed), I'll upload the document:
40 | 
41 | ```bash
42 | # Upload the document with specified parameters
43 | curl -X POST "http://localhost:8080/api/documents/upload" \
44 |   -F "file=@$FILE_PATH" \
45 |   -F "tags=$TAGS" \
46 |   -F "chunk_size=$CHUNK_SIZE" \
47 |   -F "chunk_overlap=$CHUNK_OVERLAP" \
48 |   -F "memory_type=$MEMORY_TYPE"
49 | 
50 | ```
51 | 
52 | Then I'll monitor the upload progress:
53 | 
54 | ```bash
55 | # Monitor progress (replace UPLOAD_ID with the ID from the upload response)
56 | curl -s "http://localhost:8080/api/documents/status/UPLOAD_ID"
57 | ```
58 | 
59 | ## Examples
60 | 
61 | ```
62 | # Ingest a PDF with tags
63 | claude /memory-ingest manual.pdf --tags documentation,reference
64 | 
65 | # Ingest a markdown file with custom chunking
66 | claude /memory-ingest README.md --chunk-size 1500 --chunk-overlap 300 --tags project,readme
67 | 
68 | # Ingest a document as reference material
69 | claude /memory-ingest api-docs.json --tags api,reference --memory-type reference
70 | ```
71 | 
72 | ## Actual Execution Steps
73 | 
74 | When you run this command, I will:
75 | 
76 | 1. **Validate the file exists** and check if it's a supported format
77 | 2. **Determine the service endpoint** (try localhost:8080, then 8443)
78 | 3. **Upload the file** using the documents API endpoint with your specified parameters
79 | 4. **Monitor progress** and show real-time updates
80 | 5. **Report results** including chunks created and any errors
81 | 
82 | The document will be automatically parsed, chunked, and stored as searchable memories in your MCP Memory Service database.
83 | 
84 | ## Notes
85 | 
86 | - The document will be automatically parsed and chunked for optimal retrieval
87 | - Each chunk becomes a separate memory entry with semantic embeddings
88 | - Progress will be displayed during ingestion
89 | - Failed chunks will be reported but won't stop the overall process
90 | 
```

--------------------------------------------------------------------------------
/archive/docs-removed-2025-08-23/development/mcp-milestone.md:
--------------------------------------------------------------------------------

```markdown
 1 | # MCP Memory Service v4.0.0-beta.1 - Major Milestone Achievement
 2 | 
 3 | **Date**: August 4, 2025  
 4 | **Status**: 🚀 **Mission Accomplished**
 5 | 
 6 | ## Project Evolution Complete
 7 | 
 8 | Successfully transitioned MCP Memory Service from experimental local-only service to **production-ready remote memory infrastructure** with native MCP protocol support.
 9 | 
10 | ## Technical Achievements
11 | 
12 | ### 1. Release Management ✅
13 | - **v4.0.0-beta.1** beta release completed
14 | - Fixed Docker CI/CD workflows (main.yml and publish-and-test.yml)
15 | - GitHub Release created with comprehensive notes
16 | - Repository cleanup (3 obsolete branches removed)
17 | 
18 | ### 2. GitHub Issues Resolved ✅
19 | - **Issue #71**: Remote Memory Service access - **FULLY RESOLVED** via FastAPI MCP integration
20 | - **Issue #72**: Node.js Bridge SSL issues - **SUPERSEDED** (bridge deprecated in favor of native protocol)
21 | 
22 | ### 3. MCP Protocol Compliance ✅
23 | Applied critical refactorings from fellow AI Coder:
24 | - **Flexible ID Validation**: `Optional[Union[str, int]]` supporting both string and integer IDs
25 | - **Dual Route Handling**: Both `/mcp` and `/mcp/` endpoints to prevent 307 redirects
26 | - **Content Hash Generation**: Proper `generate_content_hash()` implementation
27 | 
28 | ### 4. Infrastructure Deployment ✅
29 | - **Remote Server**: Successfully deployed at `your-server-ip:8000`
30 | - **Backend**: SQLite-vec (1.7MB database, 384-dimensional embeddings)
31 | - **Model**: all-MiniLM-L6-v2 loaded and operational
32 | - **Existing Data**: 65 memories already stored
33 | - **API Coverage**: Full MCP protocol + REST API + Dashboard
34 | 
35 | ## Strategic Impact
36 | 
37 | This represents the **successful completion of architectural evolution** from:
38 | - ❌ Local-only experimental service
39 | - ✅ Production-ready remote memory infrastructure
40 | 
41 | **Key Benefits Achieved**:
42 | 1. **Cross-Device Access**: Claude Code can connect from any device
43 | 2. **Protocol Compliance**: Standard MCP JSON-RPC 2.0 implementation
44 | 3. **Scalable Architecture**: Dual-service design (HTTP + MCP)
45 | 4. **Robust CI/CD**: Automated testing and deployment pipeline
46 | 
47 | ## Verification
48 | 
49 | **MCP Protocol Test Results**:
50 | ```bash
51 | # Health check successful
52 | curl -X POST http://your-server-ip:8000/mcp \
53 |   -H "Content-Type: application/json" \
54 |   -d '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"check_database_health"}}'
55 | 
56 | # Response: {"status":"healthy","statistics":{"total_memories":65,"embedding_model":"all-MiniLM-L6-v2"}}
57 | ```
58 | 
59 | **Available Endpoints**:
60 | - 🔧 **MCP Protocol**: `http://your-server-ip:8000/mcp`
61 | - 📊 **Dashboard**: `http://your-server-ip:8000/`  
62 | - 📚 **API Docs**: `http://your-server-ip:8000/api/docs`
63 | 
64 | ## Next Steps
65 | 
66 | - Monitor beta feedback for v4.0.0 stable release
67 | - Continue remote memory service operation
68 | - Support Claude Code integrations across devices
69 | 
70 | ---
71 | 
72 | **This milestone marks the successful transformation of MCP Memory Service into a fully operational, remotely accessible, protocol-compliant memory infrastructure ready for production use.** 🎉
```

--------------------------------------------------------------------------------
/pyproject-lite.toml:
--------------------------------------------------------------------------------

```toml
 1 | [build-system]
 2 | requires = ["hatchling", "python-semantic-release", "build"]
 3 | build-backend = "hatchling.build"
 4 | 
 5 | [project]
 6 | name = "mcp-memory-service-lite"
 7 | version = "8.76.0"
 8 | description = "Lightweight MCP memory service with ONNX embeddings - no PyTorch required. 80% smaller install size."
 9 | readme = "README.md"
10 | requires-python = ">=3.10"
11 | keywords = [
12 |     "mcp", "model-context-protocol", "claude-desktop", "semantic-memory",
13 |     "vector-database", "ai-assistant", "sqlite-vec", "multi-client",
14 |     "semantic-search", "memory-consolidation", "ai-productivity", "vs-code",
15 |     "cursor", "continue", "fastapi", "developer-tools", "cross-platform",
16 |     "lightweight", "onnx"
17 | ]
18 | classifiers = [
19 |     "Development Status :: 5 - Production/Stable",
20 |     "Intended Audience :: Developers",
21 |     "Topic :: Software Development :: Libraries :: Python Modules",
22 |     "Topic :: Scientific/Engineering :: Artificial Intelligence",
23 |     "Topic :: Database :: Database Engines/Servers",
24 |     "License :: OSI Approved :: Apache Software License",
25 |     "Programming Language :: Python :: 3",
26 |     "Programming Language :: Python :: 3.10",
27 |     "Programming Language :: Python :: 3.11",
28 |     "Programming Language :: Python :: 3.12",
29 |     "Operating System :: OS Independent",
30 |     "Environment :: Console",
31 |     "Framework :: FastAPI"
32 | ]
33 | authors = [
34 |     { name = "Heinrich Krupp", email = "[email protected]" }
35 | ]
36 | maintainers = [
37 |     { name = "Sundeep G", email = "[email protected]" }
38 | ]
39 | license = { text = "Apache-2.0" }
40 | dependencies = [
41 |     "tokenizers==0.20.3",
42 |     "mcp>=1.8.0,<2.0.0",
43 |     "python-dotenv>=1.0.0",
44 |     "sqlite-vec>=0.1.0",
45 |     "build>=0.10.0",
46 |     "aiohttp>=3.8.0",
47 |     "fastapi>=0.115.0",
48 |     "uvicorn>=0.30.0",
49 |     "python-multipart>=0.0.9",
50 |     "sse-starlette>=2.1.0",
51 |     "aiofiles>=23.2.1",
52 |     "psutil>=5.9.0",
53 |     "zeroconf>=0.130.0",
54 |     "pypdf2>=3.0.0",
55 |     "chardet>=5.0.0",
56 |     "click>=8.0.0",
57 |     "httpx>=0.24.0",
58 |     "authlib>=1.2.0",
59 |     "python-jose[cryptography]>=3.3.0",
60 |     "onnxruntime>=1.14.1",
61 |     "typing-extensions>=4.0.0; python_version < '3.11'",
62 |     "apscheduler>=3.11.0",
63 | ]
64 | 
65 | [project.optional-dependencies]
66 | # Machine learning dependencies for full torch-based embeddings
67 | ml = [
68 |     "sentence-transformers>=2.2.2",
69 |     "torch>=2.0.0"
70 | ]
71 | # Full installation (lite + ml)
72 | full = [
73 |     "mcp-memory-service-lite[ml]"
74 | ]
75 | 
76 | [project.scripts]
77 | memory = "mcp_memory_service.cli.main:main"
78 | memory-server = "mcp_memory_service.cli.main:memory_server_main"
79 | mcp-memory-server = "mcp_memory_service.mcp_server:main"
80 | 
81 | [project.urls]
82 | Homepage = "https://github.com/doobidoo/mcp-memory-service"
83 | Documentation = "https://github.com/doobidoo/mcp-memory-service#readme"
84 | Repository = "https://github.com/doobidoo/mcp-memory-service"
85 | Issues = "https://github.com/doobidoo/mcp-memory-service/issues"
86 | 
87 | [tool.hatch.build.targets.wheel]
88 | packages = ["src/mcp_memory_service"]
89 | 
90 | [tool.hatch.version]
91 | path = "src/mcp_memory_service/__init__.py"
92 | 
```

--------------------------------------------------------------------------------
/scripts/maintenance/memory-types.md:
--------------------------------------------------------------------------------

```markdown
 1 | # Memory Type Taxonomy (Updated Nov 2025)
 2 | 
 3 | Database consolidated from 342 fragmented types to 128 organized types. Use these **24 core types** for all new memories.
 4 | 
 5 | ## Content Types
 6 | - `note` - General notes, observations, summaries
 7 | - `reference` - Reference materials, knowledge base entries
 8 | - `document` - Formal documents, code snippets
 9 | - `guide` - How-to guides, tutorials, troubleshooting guides
10 | 
11 | ## Activity Types
12 | - `session` - Work sessions, development sessions
13 | - `implementation` - Implementation work, integrations
14 | - `analysis` - Analysis, reports, investigations
15 | - `troubleshooting` - Problem-solving, debugging
16 | - `test` - Testing activities, validation
17 | 
18 | ## Artifact Types
19 | - `fix` - Bug fixes, corrections
20 | - `feature` - New features, enhancements
21 | - `release` - Releases, release notes
22 | - `deployment` - Deployments, deployment records
23 | 
24 | ## Progress Types
25 | - `milestone` - Milestones, completions, achievements
26 | - `status` - Status updates, progress reports
27 | 
28 | ## Infrastructure Types
29 | - `configuration` - Configurations, setups, settings
30 | - `infrastructure` - Infrastructure changes, system updates
31 | - `process` - Processes, workflows, procedures
32 | - `security` - Security-related memories
33 | - `architecture` - Architecture decisions, design patterns
34 | 
35 | ## Other Types
36 | - `documentation` - Documentation artifacts
37 | - `solution` - Solutions, resolutions
38 | - `achievement` - Accomplishments, successes
39 | 
40 | ## Usage Guidelines
41 | 
42 | ### Avoid Creating New Type Variations
43 | 
44 | **DON'T** create variations like:
45 | - `bug-fix`, `bugfix`, `technical-fix` → Use `fix`
46 | - `technical-solution`, `project-solution` → Use `solution`
47 | - `project-implementation` → Use `implementation`
48 | - `technical-note` → Use `note`
49 | 
50 | ### Avoid Redundant Prefixes
51 | 
52 | Remove unnecessary qualifiers:
53 | - `project-*` → Use base type
54 | - `technical-*` → Use base type
55 | - `development-*` → Use base type
56 | 
57 | ### Cleanup Commands
58 | 
59 | ```bash
60 | # Preview type consolidation
61 | python scripts/maintenance/consolidate_memory_types.py --dry-run
62 | 
63 | # Execute type consolidation
64 | python scripts/maintenance/consolidate_memory_types.py
65 | 
66 | # Check type distribution
67 | python scripts/maintenance/check_memory_types.py
68 | 
69 | # Assign types to untyped memories
70 | python scripts/maintenance/assign_memory_types.py --dry-run
71 | python scripts/maintenance/assign_memory_types.py
72 | ```
73 | 
74 | ## Consolidation Rules
75 | 
76 | The consolidation script applies these transformations:
77 | 
78 | 1. **Fix variants** → `fix`: bug-fix, bugfix, technical-fix, etc.
79 | 2. **Implementation variants** → `implementation`: integrations, project-implementation, etc.
80 | 3. **Solution variants** → `solution`: technical-solution, project-solution, etc.
81 | 4. **Note variants** → `note`: technical-note, development-note, etc.
82 | 5. **Remove redundant prefixes**: project-, technical-, development-
83 | 
84 | ## Benefits of Standardization
85 | 
86 | - Improved search and retrieval accuracy
87 | - Better tag-based filtering
88 | - Reduced database fragmentation
89 | - Easier memory type analytics
90 | - Consistent memory organization
91 | 
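92 | ## Storing with a Core Type (Sketch)
93 | 
94 | An illustrative call (assumes the Python API's `store()` accepts a `memory_type` keyword; verify against `mcp_memory_service.api`):
95 | 
96 | ```python
97 | from mcp_memory_service.api import store
98 | 
99 | store("Fixed race condition in session-start hook",
100 |       tags=["hooks", "concurrency"],
101 |       memory_type="fix")  # core type; not "bug-fix" or "technical-fix"
102 | ```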
```

--------------------------------------------------------------------------------
/tests/test_semantic_search.py:
--------------------------------------------------------------------------------

```python
 1 | """
 2 | MCP Memory Service
 3 | Copyright (c) 2024 Heinrich Krupp
 4 | Licensed under the MIT License. See LICENSE file in the project root for full license text.
 5 | """
 6 | """
 7 | Test semantic search functionality of the MCP Memory Service.
 8 | """
 9 | import pytest
10 | import pytest_asyncio
11 | import asyncio
12 | from mcp_memory_service.server import MemoryServer
13 | 
14 | @pytest_asyncio.fixture
15 | async def memory_server():
16 |     """Create a test instance of the memory server."""
17 |     server = MemoryServer()
18 |     # MemoryServer initializes itself, no initialize() call needed
19 |     yield server
20 |     # No cleanup needed
21 | 
22 | @pytest.mark.asyncio
23 | async def test_semantic_similarity(memory_server):
24 |     """Test semantic similarity scoring."""
25 |     # Store related memories
26 |     memories = [
27 |         "The quick brown fox jumps over the lazy dog",
28 |         "A fast auburn fox leaps above a sleepy canine",
29 |         "A cat chases a mouse"
30 |     ]
31 |     
32 |     for memory in memories:
33 |         await memory_server.store_memory(content=memory)
34 |     
35 |     # Test semantic retrieval
36 |     query = "swift red fox jumping over sleeping dog"
37 |     results = await memory_server.debug_retrieve(
38 |         query=query,
39 |         n_results=2,
40 |         similarity_threshold=0.0  # Get all results with scores
41 |     )
42 |     
43 |     # First two results should be the fox-related memories
44 |     assert len(results) >= 2
45 |     assert all("fox" in result for result in results[:2])
46 |     
47 | @pytest.mark.asyncio
48 | async def test_similarity_threshold(memory_server):
49 |     """Test similarity threshold filtering."""
50 |     await memory_server.store_memory(
51 |         content="Python is a programming language"
52 |     )
53 |     
54 |     # This query is semantically unrelated
55 |     results = await memory_server.debug_retrieve(
56 |         query="Recipe for chocolate cake",
57 |         similarity_threshold=0.8
58 |     )
59 |     
60 |     assert len(results) == 0  # No results above threshold
61 | 
62 | @pytest.mark.asyncio
63 | async def test_exact_match(memory_server):
64 |     """Test exact match retrieval."""
65 |     test_content = "This is an exact match test"
66 |     await memory_server.store_memory(content=test_content)
67 |     
68 |     results = await memory_server.exact_match_retrieve(
69 |         content=test_content
70 |     )
71 |     
72 |     assert len(results) == 1
73 |     assert results[0] == test_content
74 | 
75 | @pytest.mark.asyncio
76 | async def test_semantic_ordering(memory_server):
77 |     """Test that results are ordered by semantic similarity."""
78 |     # Store memories with varying relevance
79 |     memories = [
80 |         "Machine learning is a subset of artificial intelligence",
81 |         "Deep learning uses neural networks",
82 |         "A bicycle has two wheels"
83 |     ]
84 |     
85 |     for memory in memories:
86 |         await memory_server.store_memory(content=memory)
87 |     
88 |     query = "What is AI and machine learning?"
89 |     results = await memory_server.debug_retrieve(
90 |         query=query,
91 |         n_results=3,
92 |         similarity_threshold=0.0
93 |     )
94 |     
95 |     # Check ordering
96 |     assert "machine learning" in results[0].lower()
97 |     assert "bicycle" not in results[0].lower()
```
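
These tests ultimately exercise embedding cosine similarity. A standalone sketch of the underlying comparison, using the same `all-MiniLM-L6-v2` model the service preloads (the 0.8 threshold mirrors `test_similarity_threshold`; exact scores vary with model version):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

stored = model.encode("Python is a programming language", convert_to_tensor=True)
related = model.encode("What languages are used to write software?", convert_to_tensor=True)
unrelated = model.encode("Recipe for chocolate cake", convert_to_tensor=True)

# The related query scores noticeably higher than the unrelated one;
# only pairs above the similarity threshold would be returned.
print(util.cos_sim(stored, related).item())    # comparatively high
print(util.cos_sim(stored, unrelated).item())  # well below a 0.8 threshold
```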

--------------------------------------------------------------------------------
/tools/docker/test-docker-modes.sh:
--------------------------------------------------------------------------------

```bash
  1 | #!/bin/bash
  2 | # Test script to verify both Docker modes work correctly
  3 | 
  4 | set -e
  5 | 
  6 | echo "==================================="
  7 | echo "Docker Setup Test Script"
  8 | echo "==================================="
  9 | 
 10 | # Colors for output
 11 | RED='\033[0;31m'
 12 | GREEN='\033[0;32m'
 13 | YELLOW='\033[1;33m'
 14 | NC='\033[0m' # No Color
 15 | 
 16 | # Function to print colored output
 17 | print_status() {
 18 |     if [ $1 -eq 0 ]; then
 19 |         echo -e "${GREEN}✓${NC} $2"
 20 |     else
 21 |         echo -e "${RED}✗${NC} $2"
 22 |         return 1
 23 |     fi
 24 | }
 25 | 
 26 | # Change to docker directory
 27 | cd "$(dirname "$0")"
 28 | 
 29 | echo ""
 30 | echo "1. Building Docker image..."
 31 | docker-compose build --quiet
 32 | print_status $? "Docker image built successfully"
 33 | 
 34 | echo ""
 35 | echo "2. Testing MCP Protocol Mode..."
 36 | echo "   Starting container in MCP mode..."
 37 | docker-compose up -d
 38 | sleep 5
 39 | 
 40 | # Check if container is running
 41 | docker-compose ps | grep -q "Up"
 42 | print_status $? "MCP mode container is running"
 43 | 
 44 | # Check logs for correct mode
 45 | docker-compose logs 2>&1 | grep -q "Running in mcp mode"
 46 | print_status $? "Container started in MCP mode"
 47 | 
 48 | # Stop MCP mode
 49 | docker-compose down
 50 | echo ""
 51 | 
 52 | echo "3. Testing HTTP API Mode..."
 53 | echo "   Starting container in HTTP mode..."
 54 | docker-compose -f docker-compose.http.yml up -d
 55 | sleep 10
 56 | 
 57 | # Check if container is running
 58 | docker-compose -f docker-compose.http.yml ps | grep -q "Up"
 59 | print_status $? "HTTP mode container is running"
 60 | 
 61 | # Check logs for Uvicorn
 62 | docker-compose -f docker-compose.http.yml logs 2>&1 | grep -q "Uvicorn\|FastAPI\|HTTP"
 63 | print_status $? "HTTP server started (Uvicorn/FastAPI)"
 64 | 
 65 | # Test health endpoint
 66 | HTTP_RESPONSE=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:8000/api/health 2>/dev/null || echo "000")
 67 | if [ "$HTTP_RESPONSE" = "200" ]; then
 68 |     print_status 0 "Health endpoint responding (HTTP $HTTP_RESPONSE)"
 69 | else
 70 |     print_status 1 "Health endpoint not responding (HTTP $HTTP_RESPONSE)"
 71 | fi
 72 | 
 73 | # Test with API key
 74 | API_TEST=$(curl -s -X POST http://localhost:8000/api/memories \
 75 |   -H "Content-Type: application/json" \
 76 |   -H "Authorization: Bearer your-secure-api-key-here" \
 77 |   -d '{"content": "Docker test memory", "tags": ["test"]}' 2>/dev/null | grep -q "success\|unauthorized" && echo "ok" || echo "fail")
 78 | 
 79 | if [ "$API_TEST" = "ok" ]; then
 80 |     print_status 0 "API endpoint accessible"
 81 | else
 82 |     print_status 1 "API endpoint not accessible"
 83 | fi
 84 | 
 85 | # Stop HTTP mode
 86 | docker-compose -f docker-compose.http.yml down
 87 | 
 88 | echo ""
 89 | echo "==================================="
 90 | echo "Test Summary:"
 91 | echo "==================================="
 92 | echo -e "${GREEN}✓${NC} All critical fixes from Joe applied:"
 93 | echo "  - PYTHONPATH=/app/src"
 94 | echo "  - run_server.py copied"
 95 | echo "  - Embedding models pre-downloaded"
 96 | echo ""
 97 | echo -e "${GREEN}✓${NC} Simplified Docker structure:"
 98 | echo "  - Unified entrypoint for both modes"
 99 | echo "  - Clear MCP vs HTTP separation"
100 | echo "  - Single Dockerfile for all modes"
101 | echo ""
102 | echo -e "${YELLOW}Note:${NC} Deprecated files marked in DEPRECATED.md"
103 | echo ""
104 | echo "Docker setup is ready for use!"
```

--------------------------------------------------------------------------------
/scripts/server/preload_models.py:
--------------------------------------------------------------------------------

```python
 1 | #!/usr/bin/env python3
 2 | """
 3 | Pre-load sentence-transformers models to avoid startup delays.
 4 | 
 5 | This script downloads and caches the default embedding models used by
 6 | MCP Memory Service so they don't need to be downloaded during server startup,
 7 | which can cause timeout errors in Claude Desktop.
 8 | """
 9 | 
10 | import sys
11 | import os
12 | 
13 | def preload_sentence_transformers():
14 |     """Pre-load the default sentence-transformers model."""
15 |     try:
16 |         print("[INFO] Pre-loading sentence-transformers models...")
17 |         from sentence_transformers import SentenceTransformer
18 |         
19 |         # Default model used by the memory service
20 |         model_name = "all-MiniLM-L6-v2"
21 |         print(f"[INFO] Downloading and caching model: {model_name}")
22 |         
23 |         model = SentenceTransformer(model_name)
24 |         print(f"[OK] Model loaded successfully on device: {model.device}")
25 |         
26 |         # Test the model to ensure it works
27 |         print("[INFO] Testing model functionality...")
28 |         test_text = "This is a test sentence for embedding."
29 |         embedding = model.encode(test_text)
30 |         print(f"[OK] Model test successful - embedding shape: {embedding.shape}")
31 |         
32 |         return True
33 |         
34 |     except ImportError:
35 |         print("[WARNING] sentence-transformers not available - skipping model preload")
36 |         return False
37 |     except Exception as e:
38 |         print(f"[ERROR] Error preloading model: {e}")
39 |         return False
40 | 
41 | def check_cache_status():
42 |     """Check if models are already cached."""
43 |     cache_locations = [
44 |         os.path.expanduser("~/.cache/huggingface/hub"),
45 |         os.path.expanduser("~/.cache/torch/sentence_transformers"),
46 |     ]
47 |     
48 |     for cache_dir in cache_locations:
49 |         if os.path.exists(cache_dir):
50 |             try:
51 |                 contents = os.listdir(cache_dir)
52 |                 for item in contents:
53 |                     if 'sentence-transformers' in item.lower() or 'minilm' in item.lower():
54 |                         print(f"[OK] Found cached model: {item}")
55 |                         return True
56 |             except (OSError, PermissionError):
57 |                 continue
58 |     
59 |     print("[INFO] No cached models found")
60 |     return False
61 | 
62 | def main():
63 |     print("MCP Memory Service - Model Preloader")
64 |     print("=" * 50)
65 |     
66 |     # Check current cache status
67 |     print("\n[1] Checking cache status...")
68 |     models_cached = check_cache_status()
69 |     
70 |     if models_cached:
71 |         print("[OK] Models are already cached - no download needed")
72 |         return True
73 |     
74 |     # Preload models
75 |     print("\n[2] Preloading models...")
76 |     success = preload_sentence_transformers()
77 |     
78 |     if success:
79 |         print("\n[OK] Model preloading complete!")
80 |         print("[INFO] MCP Memory Service should now start without downloading models")
81 |     else:
82 |         print("\n[WARNING] Model preloading failed - server may need to download during startup")
83 |         
84 |     return success
85 | 
86 | if __name__ == "__main__":
87 |     success = main()
88 |     sys.exit(0 if success else 1)
```
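
If the default cache location is unsuitable (read-only home directory, Docker build layers), the cache can be redirected before the model is loaded. A sketch assuming the standard Hugging Face `HF_HOME` environment variable, which `sentence-transformers` honors for its hub downloads:

```python
import os

# Must be set before sentence_transformers / huggingface_hub are imported.
os.environ["HF_HOME"] = "/opt/model-cache"  # illustrative path

from sentence_transformers import SentenceTransformer

# Downloads and caches under /opt/model-cache instead of ~/.cache/huggingface.
model = SentenceTransformer("all-MiniLM-L6-v2")
```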

--------------------------------------------------------------------------------
/docs/mastery/testing-guide.md:
--------------------------------------------------------------------------------

```markdown
 1 | # MCP Memory Service — Testing Guide
 2 | 
 3 | This guide explains how to run, understand, and extend the test suites.
 4 | 
 5 | ## Prerequisites
 6 | 
 7 | - Python ≥ 3.10 (3.12 recommended; 3.13 may lack prebuilt `sqlite-vec` wheels).
 8 | - Install dependencies (uv recommended):
 9 |   - `uv sync` (respects `pyproject.toml` and `uv.lock`), or
10 |   - `pip install -e .` plus extras as needed.
11 | - For SQLite-vec tests:
12 |   - `sqlite-vec` and `sentence-transformers`/`torch` should be installed.
13 |   - On some OS/Python combinations, sqlite extension loading must be supported (see Troubleshooting).
14 | 
15 | ## Test Layout
16 | 
17 | - `tests/README.md`: overview.
18 | - Categories:
19 |   - Unit: `tests/unit/` (e.g., tags, mdns, cloudflare stubs).
20 |   - Integration: `tests/integration/` (cross-component flows).
21 |   - Performance: `tests/performance/`.
22 |   - Backend-specific: top-level tests like `test_sqlite_vec_storage.py`, `test_time_parser.py`, `test_memory_ops.py`.
23 | 
24 | ## Running Tests
25 | 
26 | Run all:
27 | 
28 | ```bash
29 | pytest
30 | ```
31 | 
32 | Category:
33 | 
34 | ```bash
35 | pytest tests/unit/
36 | pytest tests/integration/
37 | pytest tests/performance/
38 | ```
39 | 
40 | Single file or test:
41 | 
42 | ```bash
43 | pytest tests/test_sqlite_vec_storage.py::TestSqliteVecStorage::test_store_memory -q
44 | ```
45 | 
46 | With uv:
47 | 
48 | ```bash
49 | uv run pytest -q
50 | ```
51 | 
52 | ## Important Behaviors and Skips
53 | 
54 | - SQLite-vec tests are marked to skip when `sqlite-vec` is unavailable:
55 |   - See `pytestmark = pytest.mark.skipif(not SQLITE_VEC_AVAILABLE, ...)` in `tests/test_sqlite_vec_storage.py`.
56 | - Some tests simulate no-embedding scenarios by patching `SENTENCE_TRANSFORMERS_AVAILABLE=False` to validate fallback code paths.
57 | - Temp directories isolate database files; connections are closed in teardown.
58 | 
59 | ## Coverage of Key Areas
60 | 
61 | - Storage CRUD and vector search (`test_sqlite_vec_storage.py`).
62 | - Time parsing and timestamp recall (`test_time_parser.py`, `test_timestamp_recall.py`).
63 | - Tag and metadata semantics (`test_tag_storage.py`, `unit/test_tags.py`).
64 | - Health checks and database init (`test_database.py`).
65 | - Cloudflare adapters have unit-level coverage stubbing network (`unit/test_cloudflare_storage.py`).
66 | 
67 | ## Writing New Tests
68 | 
69 | - Prefer async `pytest.mark.asyncio` for storage APIs.
70 | - Use `tempfile.mkdtemp()` for per-test DB paths.
71 | - Use `src.mcp_memory_service.models.memory.Memory` and `generate_content_hash` helpers.
72 | - For backend-specific behavior, keep tests colocated with backend tests and gate with availability flags.
73 | - For MCP tool surface tests, prefer FastMCP server (`mcp_server.py`) in isolated processes or with `lifespan` context.
74 | 
75 | ## Local MCP/Service Tests
76 | 
77 | Run stdio server:
78 | 
79 | ```bash
80 | uv run memory server
81 | ```
82 | 
83 | Run FastMCP HTTP server:
84 | 
85 | ```bash
86 | uv run mcp-memory-server
87 | ```
88 | 
89 | Use any MCP client (Claude Desktop/Code) and exercise tools: store, retrieve, search_by_tag, delete, health.
90 | 
91 | ## Debugging and Logs
92 | 
93 | - Set `LOG_LEVEL=INFO` for more verbosity.
94 | - For Claude Desktop: stdout is suppressed to preserve JSON; inspect stderr/warnings. LM Studio prints diagnostics to stdout.
95 | - Common sqlite-vec errors print actionable remediation (see Troubleshooting).
96 | 
97 | 
```
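
Putting the "Writing New Tests" conventions together, a skeleton for a new backend test might look like the sketch below. The `Memory` keyword arguments and the `store`/`retrieve` signatures are modeled on the backend tests referenced above but are unverified assumptions, as is the `make_storage` fixture and the module path of `generate_content_hash`:

```python
import tempfile
from pathlib import Path

import pytest

from mcp_memory_service.models.memory import Memory
# Helper module path is an assumption -- check the utils package.
from mcp_memory_service.utils.hashing import generate_content_hash


@pytest.mark.asyncio
async def test_store_and_recall(make_storage):
    # 'make_storage' is a hypothetical factory fixture that opens the
    # backend under test at the given path and closes it on teardown.
    db_path = Path(tempfile.mkdtemp()) / "test.db"   # per-test DB isolation
    storage = await make_storage(db_path)

    content = "example memory"
    memory = Memory(content=content, content_hash=generate_content_hash(content))
    await storage.store(memory)

    results = await storage.retrieve("example memory", n_results=1)
    assert results
```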

--------------------------------------------------------------------------------
/scripts/service/install_http_service.sh:
--------------------------------------------------------------------------------

```bash
  1 | #!/bin/bash
  2 | # Install MCP Memory HTTP Service for systemd
  3 | 
  4 | set -e
  5 | 
  6 | SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
  7 | PROJECT_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
  8 | SERVICE_FILE="$SCRIPT_DIR/mcp-memory-http.service"
  9 | SERVICE_NAME="mcp-memory-http.service"
 10 | 
 11 | echo "MCP Memory HTTP Service Installation"
 12 | echo "===================================="
 13 | echo ""
 14 | 
 15 | # Check if service file exists
 16 | if [ ! -f "$SERVICE_FILE" ]; then
 17 |     echo "❌ Service file not found: $SERVICE_FILE"
 18 |     exit 1
 19 | fi
 20 | 
 21 | # Check if .env exists
 22 | if [ ! -f "$PROJECT_DIR/.env" ]; then
 23 |     echo "❌ .env file not found: $PROJECT_DIR/.env"
 24 |     echo "Please create .env file with your configuration"
 25 |     exit 1
 26 | fi
 27 | 
 28 | # Check if venv exists
 29 | if [ ! -d "$PROJECT_DIR/venv" ]; then
 30 |     echo "❌ Virtual environment not found: $PROJECT_DIR/venv"
 31 |     echo "Please run: python -m venv venv && source venv/bin/activate && pip install -e ."
 32 |     exit 1
 33 | fi
 34 | 
 35 | # Install as user service (recommended) or system service
 36 | echo "Installation Options:"
 37 | echo "1. User service (recommended) - runs as your user, no sudo needed"
 38 | echo "2. System service - runs at boot, requires sudo"
 39 | read -p "Select [1/2]: " choice
 40 | 
 41 | case $choice in
 42 |     1)
 43 |         # User service
 44 |         SERVICE_DIR="$HOME/.config/systemd/user"
 45 |         mkdir -p "$SERVICE_DIR"
 46 | 
 47 |         echo "Installing user service to: $SERVICE_DIR/$SERVICE_NAME"
 48 |         cp "$SERVICE_FILE" "$SERVICE_DIR/$SERVICE_NAME"
 49 | 
 50 |         # Reload systemd
 51 |         systemctl --user daemon-reload
 52 | 
 53 |         echo ""
 54 |         echo "✅ Service installed successfully!"
 55 |         echo ""
 56 |         echo "To start the service:"
 57 |         echo "  systemctl --user start $SERVICE_NAME"
 58 |         echo ""
 59 |         echo "To enable auto-start on login:"
 60 |         echo "  systemctl --user enable $SERVICE_NAME"
 61 |         echo "  loginctl enable-linger $USER  # Required for auto-start"
 62 |         echo ""
 63 |         echo "To check status:"
 64 |         echo "  systemctl --user status $SERVICE_NAME"
 65 |         echo ""
 66 |         echo "To view logs:"
 67 |         echo "  journalctl --user -u $SERVICE_NAME -f"
 68 |         ;;
 69 | 
 70 |     2)
 71 |         # System service
 72 |         if [ "$EUID" -ne 0 ]; then
 73 |             echo "❌ System service installation requires sudo"
 74 |             echo "Please run: sudo $0"
 75 |             exit 1
 76 |         fi
 77 | 
 78 |         SERVICE_DIR="/etc/systemd/system"
 79 |         echo "Installing system service to: $SERVICE_DIR/$SERVICE_NAME"
 80 |         cp "$SERVICE_FILE" "$SERVICE_DIR/$SERVICE_NAME"
 81 | 
 82 |         # Reload systemd
 83 |         systemctl daemon-reload
 84 | 
 85 |         echo ""
 86 |         echo "✅ Service installed successfully!"
 87 |         echo ""
 88 |         echo "To start the service:"
 89 |         echo "  sudo systemctl start $SERVICE_NAME"
 90 |         echo ""
 91 |         echo "To enable auto-start on boot:"
 92 |         echo "  sudo systemctl enable $SERVICE_NAME"
 93 |         echo ""
 94 |         echo "To check status:"
 95 |         echo "  sudo systemctl status $SERVICE_NAME"
 96 |         echo ""
 97 |         echo "To view logs:"
 98 |         echo "  sudo journalctl -u $SERVICE_NAME -f"
 99 |         ;;
100 | 
101 |     *)
102 |         echo "❌ Invalid choice"
103 |         exit 1
104 |         ;;
105 | esac
106 | 
```

--------------------------------------------------------------------------------
/.github/pull_request_template.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Pull Request
  2 | 
  3 | ## Description
  4 | 
  5 | <!-- Provide a clear and concise description of the changes -->
  6 | 
  7 | ## Motivation
  8 | 
  9 | <!-- Explain why these changes are needed and what problem they solve -->
 10 | 
 11 | ## Type of Change
 12 | 
 13 | <!-- Check all that apply -->
 14 | 
 15 | - [ ] 🐛 Bug fix (non-breaking change that fixes an issue)
 16 | - [ ] ✨ New feature (non-breaking change that adds functionality)
 17 | - [ ] 💥 Breaking change (fix or feature that would cause existing functionality to not work as expected)
 18 | - [ ] 📝 Documentation update
 19 | - [ ] 🧪 Test improvement
 20 | - [ ] ♻️ Code refactoring (no functional changes)
 21 | - [ ] ⚡ Performance improvement
 22 | - [ ] 🔧 Configuration change
 23 | - [ ] 🎨 UI/UX improvement
 24 | 
 25 | ## Changes
 26 | 
 27 | <!-- List the specific changes made in this PR -->
 28 | 
 29 | -
 30 | -
 31 | -
 32 | 
 33 | **Breaking Changes** (if any):
 34 | <!-- Describe any breaking changes and migration steps -->
 35 | 
 36 | -
 37 | 
 38 | ## Testing
 39 | 
 40 | ### How Has This Been Tested?
 41 | 
 42 | <!-- Describe the tests you ran to verify your changes -->
 43 | 
 44 | - [ ] Unit tests
 45 | - [ ] Integration tests
 46 | - [ ] Manual testing
 47 | - [ ] MCP Inspector validation
 48 | 
 49 | **Test Configuration**:
 50 | - Python version:
 51 | - OS:
 52 | - Storage backend:
 53 | - Installation method:
 54 | 
 55 | ### Test Coverage
 56 | 
 57 | <!-- Describe the test coverage added or modified -->
 58 | 
 59 | - [ ] Added new tests
 60 | - [ ] Updated existing tests
 61 | - [ ] Test coverage maintained/improved
 62 | 
 63 | ## Related Issues
 64 | 
 65 | <!-- Link related issues using keywords: Fixes #123, Closes #456, Relates to #789 -->
 66 | 
 67 | Fixes #
 68 | Closes #
 69 | Relates to #
 70 | 
 71 | ## Screenshots
 72 | 
 73 | <!-- If applicable, add screenshots to help explain your changes -->
 74 | 
 75 | ## Documentation
 76 | 
 77 | <!-- Check all that apply -->
 78 | 
 79 | - [ ] Updated README.md
 80 | - [ ] Updated CLAUDE.md
 81 | - [ ] Updated AGENTS.md
 82 | - [ ] Updated CHANGELOG.md
 83 | - [ ] Updated Wiki pages
 84 | - [ ] Updated code comments/docstrings
 85 | - [ ] Added API documentation
 86 | - [ ] No documentation needed
 87 | 
 88 | ## Pre-submission Checklist
 89 | 
 90 | <!-- Check all boxes before submitting -->
 91 | 
 92 | - [ ] ✅ My code follows the project's coding standards (PEP 8, type hints)
 93 | - [ ] ✅ I have performed a self-review of my code
 94 | - [ ] ✅ I have commented my code, particularly in hard-to-understand areas
 95 | - [ ] ✅ I have made corresponding changes to the documentation
 96 | - [ ] ✅ My changes generate no new warnings
 97 | - [ ] ✅ I have added tests that prove my fix is effective or that my feature works
 98 | - [ ] ✅ New and existing unit tests pass locally with my changes
 99 | - [ ] ✅ Any dependent changes have been merged and published
100 | - [ ] ✅ I have updated CHANGELOG.md following [Keep a Changelog](https://keepachangelog.com/) format
101 | - [ ] ✅ I have checked that no sensitive data is exposed (API keys, tokens, passwords)
102 | - [ ] ✅ I have verified this works with all supported storage backends (if applicable)
103 | 
104 | ## Additional Notes
105 | 
106 | <!-- Any additional information, context, or notes for reviewers -->
107 | 
108 | ---
109 | 
110 | **For Reviewers**:
111 | - Review checklist: See [PR Review Guide](https://github.com/doobidoo/mcp-memory-service/wiki/PR-Review-Guide)
112 | - Consider testing with Gemini Code Assist for comprehensive review
113 | - Verify CHANGELOG.md entry is present and correctly formatted
114 | - Check documentation accuracy and completeness
115 | 
```

--------------------------------------------------------------------------------
/.github/workflows/WORKFLOW_FIXES.md:
--------------------------------------------------------------------------------

```markdown
 1 | # Workflow Fixes Applied
 2 | 
 3 | ## Issues Identified and Fixed
 4 | 
 5 | ### 1. Cleanup Images Workflow (`cleanup-images.yml`)
 6 | 
 7 | **Issues:**
 8 | - Referenced non-existent workflows in `workflow_run` trigger
 9 | - Used incorrect action versions (`@v5` instead of `@v4`)
10 | - Incorrect account type (`org` instead of `personal`)
11 | - Missing error handling for optional steps
12 | - No validation for Docker Hub credentials
13 | 
14 | **Fixes Applied:**
15 | - Updated workflow references to match actual workflow names
16 | - Downgraded action versions to stable versions (`@v4`, `@v1`)
17 | - Changed account type to `personal` for personal GitHub account
18 | - Added `continue-on-error: true` for optional cleanup steps
19 | - Added credential validation and conditional Docker Hub cleanup
20 | - Added informative messages when cleanup is skipped
21 | 
22 | ### 2. Main Optimized Workflow (`main-optimized.yml`)
23 | 
24 | **Issues:**
25 | - Complex matrix strategy with indirect secret access
26 | - No handling for missing Docker Hub credentials
27 | - Potential authentication failures for Docker registries
28 | 
29 | **Fixes Applied:**
30 | - Simplified login steps with explicit registry conditions
31 | - Added conditional Docker Hub login based on credential availability
32 | - Added skip message when Docker Hub credentials are missing
33 | - Improved error handling for registry authentication
34 | 
35 | ## Changes Made
36 | 
37 | ### cleanup-images.yml
38 | ```yaml
39 | # Before
40 | workflow_run:
41 |   workflows: ["Release (Tags) - Optimized", "Main CI/CD Pipeline - Optimized"]
42 | 
43 | uses: actions/delete-package-versions@v5
44 | account-type: org
45 | 
46 | # After
47 | workflow_run:
48 |   workflows: ["Main CI/CD Pipeline", "Docker Publish (Tags)", "Publish and Test (Tags)"]
49 | 
50 | uses: actions/delete-package-versions@v4
51 | account-type: personal
52 | continue-on-error: true
53 | ```
54 | 
55 | ### main-optimized.yml
56 | ```yaml
57 | # Before
58 | username: ${{ matrix.username_secret == '_github_actor' && github.actor || secrets[matrix.username_secret] }}
59 | 
60 | # After
61 | - name: Log in to Docker Hub
62 |   if: matrix.registry == 'docker.io' && secrets.DOCKER_USERNAME && secrets.DOCKER_PASSWORD
63 | - name: Log in to GitHub Container Registry
64 |   if: matrix.registry == 'ghcr.io'
65 | ```
66 | 
67 | ## Safety Improvements
68 | 
69 | 1. **Graceful Degradation**: Workflows now continue even if optional steps fail
70 | 2. **Credential Validation**: Proper checking for required secrets before use
71 | 3. **Clear Messaging**: Users are informed when steps are skipped
72 | 4. **Error Isolation**: Failures in one cleanup job don't affect others
73 | 
74 | ## Testing Recommendations
75 | 
76 | 1. **Manual Trigger Test**: Test cleanup workflow with dry-run mode
77 | 2. **Credential Scenarios**: Test with and without Docker Hub credentials
78 | 3. **Registry Cleanup**: Verify GHCR cleanup works independently
79 | 4. **Workflow Dependencies**: Ensure workflow triggers work correctly
80 | 
81 | ## Expected Behavior
82 | 
83 | - **With Full Credentials**: Both GHCR and Docker Hub cleanup run
84 | - **Without Docker Credentials**: Only GHCR cleanup runs, Docker Hub skipped
85 | - **Action Failures**: Individual cleanup steps may fail but workflow continues
86 | - **No Images to Clean**: Workflows complete successfully with no actions
87 | 
88 | Date: 2024-08-24
89 | Status: Applied and ready for testing
```

--------------------------------------------------------------------------------
/scripts/sync/claude_sync_commands.py:
--------------------------------------------------------------------------------

```python
 1 | #!/usr/bin/env python3
 2 | # Copyright 2024 Heinrich Krupp
 3 | #
 4 | # Licensed under the Apache License, Version 2.0 (the "License");
 5 | # you may not use this file except in compliance with the License.
 6 | # You may obtain a copy of the License at
 7 | #
 8 | #     http://www.apache.org/licenses/LICENSE-2.0
 9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 | 
16 | """
17 | Claude command wrapper for memory sync operations.
18 | Provides convenient commands for managing dual memory backends.
19 | """
20 | import sys
21 | import asyncio
22 | import subprocess
23 | from pathlib import Path
24 | 
25 | SYNC_SCRIPT = Path(__file__).parent / "sync_memory_backends.py"
26 | 
27 | def run_sync_command(args):
28 |     """Run the sync script with given arguments."""
29 |     cmd = [sys.executable, str(SYNC_SCRIPT)] + args
30 |     result = subprocess.run(cmd, capture_output=True, text=True)
31 | 
32 |     if result.stdout:
33 |         print(result.stdout.strip())
34 |     if result.stderr:
35 |         print(result.stderr.strip(), file=sys.stderr)
36 | 
37 |     return result.returncode
38 | 
39 | def memory_sync_status():
40 |     """Show memory sync status."""
41 |     return run_sync_command(['--status'])
42 | 
43 | def memory_sync_backup():
44 |     """Backup Cloudflare memories to SQLite-vec."""
45 |     print("Backing up Cloudflare memories to SQLite-vec...")
46 |     return run_sync_command(['--direction', 'cf-to-sqlite'])
47 | 
48 | def memory_sync_restore():
49 |     """Restore SQLite-vec memories to Cloudflare."""
50 |     print("Restoring SQLite-vec memories to Cloudflare...")
51 |     return run_sync_command(['--direction', 'sqlite-to-cf'])
52 | 
53 | def memory_sync_bidirectional():
54 |     """Perform bidirectional sync."""
55 |     print("Performing bidirectional sync...")
56 |     return run_sync_command(['--direction', 'bidirectional'])
57 | 
58 | def memory_sync_dry_run():
59 |     """Show what would be synced without making changes."""
60 |     print("Dry run - showing what would be synced:")
61 |     return run_sync_command(['--dry-run'])
62 | 
63 | def show_usage():
64 |     """Show usage information."""
65 |     print("Usage: python claude_sync_commands.py <command>")
66 |     print("Commands:")
67 |     print("  status      - Show sync status")
68 |     print("  backup      - Backup Cloudflare → SQLite-vec")
69 |     print("  restore     - Restore SQLite-vec → Cloudflare")
70 |     print("  sync        - Bidirectional sync")
71 |     print("  dry-run     - Show what would be synced")
72 | 
73 | if __name__ == "__main__":
74 |     # Dictionary-based command dispatch for better scalability
75 |     commands = {
76 |         "status": memory_sync_status,
77 |         "backup": memory_sync_backup,
78 |         "restore": memory_sync_restore,
79 |         "sync": memory_sync_bidirectional,
80 |         "dry-run": memory_sync_dry_run,
81 |     }
82 | 
83 |     if len(sys.argv) < 2:
84 |         show_usage()
85 |         sys.exit(1)
86 | 
87 |     command = sys.argv[1]
88 | 
89 |     if command in commands:
90 |         sys.exit(commands[command]())
91 |     else:
92 |         print(f"Unknown command: {command}")
93 |         show_usage()
94 |         sys.exit(1)
```

--------------------------------------------------------------------------------
/scripts/utils/memory_wrapper_uv.py:
--------------------------------------------------------------------------------

```python
 1 | #!/usr/bin/env python3
 2 | """
 3 | UV-specific memory wrapper for MCP Memory Service
 4 | This wrapper is specifically designed for UV-based installations.
 5 | """
 6 | import os
 7 | import sys
 8 | import subprocess
 9 | import traceback
10 | 
11 | # Set environment variables for better cross-platform compatibility
12 | os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
13 | os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
14 | 
15 | def print_info(text):
16 |     """Print formatted info text."""
17 |     print(f"[INFO] {text}", file=sys.stderr, flush=True)
18 | 
19 | def print_error(text):
20 |     """Print formatted error text."""
21 |     print(f"[ERROR] {text}", file=sys.stderr, flush=True)
22 | 
23 | def print_success(text):
24 |     """Print formatted success text."""
25 |     print(f"[SUCCESS] {text}", file=sys.stderr, flush=True)
26 | 
27 | def main():
28 |     """Main entry point for UV-based memory service."""
29 |     print_info("Starting MCP Memory Service with UV...")
30 |     
31 |     # Set ChromaDB path if provided via environment variables
32 |     if "MCP_MEMORY_CHROMA_PATH" in os.environ:
33 |         print_info(f"Using ChromaDB path: {os.environ['MCP_MEMORY_CHROMA_PATH']}")
34 |     
35 |     # Set backups path if provided via environment variables
36 |     if "MCP_MEMORY_BACKUPS_PATH" in os.environ:
37 |         print_info(f"Using backups path: {os.environ['MCP_MEMORY_BACKUPS_PATH']}")
38 |     
39 |     try:
40 |         # Use UV to run the memory service
41 |         cmd = [sys.executable, '-m', 'uv', 'run', 'memory']
42 |         cmd.extend(sys.argv[1:])  # Pass through any additional arguments
43 |         
44 |         print_info(f"Running command: {' '.join(cmd)}")
45 |         
46 |         # Execute the command
47 |         result = subprocess.run(cmd, check=True)
48 |         sys.exit(result.returncode)
49 |         
50 |     except subprocess.SubprocessError as e:
51 |         print_error(f"UV run failed: {e}")
52 |         print_info("Falling back to direct module execution...")
53 |         
54 |         # Fallback to direct execution
55 |         try:
56 |             # Add the source directory to path (repo root is two levels above scripts/utils)
57 |             script_dir = os.path.dirname(os.path.abspath(__file__))
58 |             src_dir = os.path.abspath(os.path.join(script_dir, "..", "..", "src"))
59 |             
60 |             if os.path.exists(src_dir):
61 |                 sys.path.insert(0, src_dir)
62 |             
63 |             # Import and run the server
64 |             from mcp_memory_service.server import main as server_main
65 |             server_main()
66 |             
67 |         except ImportError as import_error:
68 |             print_error(f"Failed to import memory service: {import_error}")
69 |             sys.exit(1)
70 |         except Exception as fallback_error:
71 |             print_error(f"Fallback execution failed: {fallback_error}")
72 |             traceback.print_exc(file=sys.stderr)
73 |             sys.exit(1)
74 |     
75 |     except Exception as e:
76 |         print_error(f"Error running memory service: {e}")
77 |         traceback.print_exc(file=sys.stderr)
78 |         sys.exit(1)
79 | 
80 | if __name__ == "__main__":
81 |     try:
82 |         main()
83 |     except KeyboardInterrupt:
84 |         print_info("Shutting down gracefully...")
85 |         sys.exit(0)
86 |     except Exception as e:
87 |         print_error(f"Unhandled exception: {e}")
88 |         traceback.print_exc(file=sys.stderr)
89 |         sys.exit(1)
90 | 
```

--------------------------------------------------------------------------------
/scripts/maintenance/find_all_duplicates.py:
--------------------------------------------------------------------------------

```python
 1 | #!/usr/bin/env python3
 2 | """Find all near-duplicate memories across the database."""
 3 | 
 4 | import sqlite3
 5 | from pathlib import Path
 6 | from collections import defaultdict
 7 | import hashlib
 8 | 
 9 | import platform
10 | 
11 | # Platform-specific database path
12 | if platform.system() == "Darwin":  # macOS
13 |     DB_PATH = Path.home() / "Library/Application Support/mcp-memory/sqlite_vec.db"
14 | else:  # Linux/Windows
15 |     DB_PATH = Path.home() / ".local/share/mcp-memory/sqlite_vec.db"
16 | 
17 | def normalize_content(content):
18 |     """Normalize content by removing timestamps and session-specific data."""
19 |     # Remove common timestamp patterns
20 |     import re
21 |     normalized = content
22 |     normalized = re.sub(r'\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}Z', 'TIMESTAMP', normalized)
23 |     normalized = re.sub(r'\*\*Date\*\*: \d{2,4}[./]\d{2}[./]\d{2,4}', '**Date**: DATE', normalized)
24 |     normalized = re.sub(r'Timestamp: \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}', 'Timestamp: TIMESTAMP', normalized)
25 | 
26 |     return normalized.strip()
27 | 
28 | def content_hash(content):
29 |     """Create a hash of normalized content."""
30 |     normalized = normalize_content(content)
31 |     return hashlib.md5(normalized.encode()).hexdigest()
32 | 
33 | def main():
34 |     conn = sqlite3.connect(DB_PATH)
35 |     cursor = conn.cursor()
36 | 
37 |     print("Analyzing memories for duplicates...")
38 |     cursor.execute("SELECT content_hash, content, tags, created_at FROM memories ORDER BY created_at DESC")
39 | 
40 |     memories = cursor.fetchall()
41 |     print(f"Total memories: {len(memories)}")
42 | 
43 |     # Group by normalized content
44 |     content_groups = defaultdict(list)
45 |     for mem_hash, content, tags, created_at in memories:
46 |         norm_hash = content_hash(content)
47 |         content_groups[norm_hash].append({
48 |             'hash': mem_hash,
49 |             'content': content[:200],  # First 200 chars
50 |             'tags': tags,
51 |             'created_at': created_at
52 |         })
53 | 
54 |     # Find duplicates (groups with >1 memory)
55 |     duplicates = {k: v for k, v in content_groups.items() if len(v) > 1}
56 | 
57 |     if not duplicates:
58 |         print("✅ No duplicates found!")
59 |         conn.close()
60 |         return
61 | 
62 |     print(f"\n❌ Found {len(duplicates)} groups of duplicates:")
63 | 
64 |     total_duplicate_count = 0
65 |     for i, (norm_hash, group) in enumerate(duplicates.items(), 1):
66 |         count = len(group)
67 |         total_duplicate_count += count - 1  # Keep one, delete rest
68 | 
69 |         print(f"\n{i}. Group with {count} duplicates:")
70 |         print(f"   Content preview: {group[0]['content'][:100]}...")
71 |         print(f"   Tags: {group[0]['tags'][:80]}...")
72 |         print(f"   Hashes to keep: {group[0]['hash'][:16]}... (newest)")
73 |         print(f"   Hashes to delete: {count-1} older duplicates")
74 | 
75 |         if i >= 10:  # Show only first 10 groups
76 |             remaining = len(duplicates) - 10
77 |             print(f"\n... and {remaining} more duplicate groups")
78 |             break
79 | 
80 |     print(f"\n📊 Summary:")
81 |     print(f"   Total duplicate groups: {len(duplicates)}")
82 |     print(f"   Total memories to delete: {total_duplicate_count}")
83 |     print(f"   Total memories after cleanup: {len(memories) - total_duplicate_count}")
84 | 
85 |     conn.close()
86 | 
87 | if __name__ == "__main__":
88 |     main()
89 | 
```
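
The normalization step is what lets byte-different memories collapse into one duplicate group. A quick sanity check of that behavior, assuming the two helpers above are in scope (e.g., pasted into the same module or a REPL):

```python
a = "Backup completed. Timestamp: 2025-01-03 10:15:42"
b = "Backup completed. Timestamp: 2025-02-11 08:01:07"

# Different raw strings normalize to the same text, so they hash into
# one duplicate group even though their bytes differ.
assert normalize_content(a) == normalize_content(b)
assert content_hash(a) == content_hash(b)
```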

--------------------------------------------------------------------------------
/.claude/directives/refactoring-checklist.md:
--------------------------------------------------------------------------------

```markdown
  1 | # Refactoring Safety Checklist
  2 | 
  3 | ## ⚠️ MANDATORY for All Code Moves/Extractions
  4 | 
  5 | When extracting, moving, or refactoring code, follow this checklist to avoid production issues.
  6 | 
  7 | **Learned from**: Issues #299 (import errors), #300 (response format mismatch)
  8 | 
  9 | ---
 10 | 
 11 | ## 1. ✅ Import Path Validation
 12 | 
 13 | **Problem**: Relative imports break when code moves (`..services` → `...services`)
 14 | 
 15 | **Checklist**:
 16 | - [ ] Validate relative imports from new location
 17 | - [ ] Run `bash scripts/ci/validate_imports.sh` before commit
 18 | - [ ] Test actual imports (no mocks allowed for validation)
 19 | 
 20 | **Why**: 82% of import errors are undetected until production
 21 | 
 22 | ---
 23 | 
 24 | ## 2. ✅ Response Format Compatibility
 25 | 
 26 | **Problem**: Handler expects `result["message"]` but service returns `result["success"]`
 27 | 
 28 | **Checklist**:
 29 | - [ ] Handler matches service response keys (`success`/`error`, not `message`)
 30 | - [ ] Test both success AND error paths
 31 | - [ ] Check for KeyError risks in all code paths
 32 | 
 33 | **Why**: Response format mismatches cause runtime crashes
 34 | 
 35 | ---
 36 | 
 37 | ## 3. ✅ Integration Tests for ALL Extracted Functions
 38 | 
 39 | **Problem**: 82% of handlers had zero integration tests (3/17 tested)
 40 | 
 41 | **Checklist**:
 42 | - [ ] Create integration tests BEFORE committing refactoring
 43 | - [ ] 100% handler coverage required (17/17 handlers)
 44 | - [ ] Run `python scripts/validation/check_handler_coverage.py`
 45 | 
 46 | **Why**: Unit tests alone don't catch integration issues
 47 | 
 48 | ---
 49 | 
 50 | ## 4. ✅ Coverage Validation
 51 | 
 52 | **Problem**: Refactoring can inadvertently reduce test coverage
 53 | 
 54 | **Checklist**:
 55 | - [ ] Run `pytest --cov=src/mcp_memory_service --cov-fail-under=80`
 56 | - [ ] Coverage must not decrease (delta ≥ 0%)
 57 | - [ ] Add tests for new code before committing
 58 | 
 59 | **Why**: Coverage gate prevents untested code from merging
 60 | 
 61 | ---
 62 | 
 63 | ## 5. ✅ Pre-Commit Validation
 64 | 
 65 | **Run these commands before EVERY refactoring commit:**
 66 | 
 67 | ```bash
 68 | # 1. Import validation
 69 | bash scripts/ci/validate_imports.sh
 70 | 
 71 | # 2. Handler coverage check
 72 | python scripts/validation/check_handler_coverage.py
 73 | 
 74 | # 3. Coverage gate
 75 | pytest tests/ --cov=src --cov-fail-under=80
 76 | ```
 77 | 
 78 | **All must pass** before git commit.
 79 | 
 80 | ---
 81 | 
 82 | ## 6. ✅ Commit Strategy
 83 | 
 84 | **Problem**: Batching multiple extractions makes errors hard to trace
 85 | 
 86 | **Checklist**:
 87 | - [ ] Commit incrementally (one extraction per commit)
 88 | - [ ] Each commit has passing tests + coverage ≥80%
 89 | - [ ] Never batch multiple extractions in one commit
 90 | 
 91 | **Why**: Incremental commits = easy rollback if issues found
 92 | 
 93 | ---
 94 | 
 95 | ## Quick Command Reference
 96 | 
 97 | ```bash
 98 | # Full pre-refactoring validation
 99 | bash scripts/ci/validate_imports.sh && \
100 | python scripts/validation/check_handler_coverage.py && \
101 | pytest tests/ --cov=src --cov-fail-under=80
102 | 
103 | # If all pass → safe to commit
104 | git commit -m "refactor: extracted X function"
105 | ```
106 | 
107 | ---
108 | 
109 | ## Historical Context
110 | 
111 | **Issue #299**: Import error `..services` → `...services` undetected until production
112 | **Issue #300**: Response format mismatch `result["message"]` vs `result["success"]`
113 | **Root Cause**: 82% of handlers had zero integration tests (3/17 tested)
114 | **Solution**: 9-check pre-PR validation + 100% handler coverage requirement
115 | 
116 | **Prevention is better than debugging in production.**
117 | 
```
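
A concrete illustration of check 1: the same symbol needs a different number of leading dots depending on the module's package depth, which is why moved code should be re-imported and smoke-tested rather than eyeballed. The module paths below are illustrative, not taken from this repository:

```python
# Before: handler lives at src/mcp_memory_service/web/handlers.py
#   -> services is one package up:       from ..services import memory_service
# After a move to src/mcp_memory_service/web/api/handlers.py
#   -> services is two packages up, so the old import raises ImportError:
#                                        from ...services import memory_service

import importlib

# Illustrative module path -- substitute the module you just moved.
MODULE = "mcp_memory_service.web.api.handlers"

try:
    importlib.import_module(MODULE)
    print(f"OK: {MODULE} imports cleanly")
except ImportError as exc:  # e.g. a '..services' that should now be '...services'
    print(f"FAIL: {MODULE}: {exc}")
```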

--------------------------------------------------------------------------------
/.github/workflows/publish-dual.yml:
--------------------------------------------------------------------------------

```yaml
  1 | name: Publish Dual Distribution (Main + Lite)
  2 | 
  3 | # This workflow publishes both mcp-memory-service (full) and mcp-memory-service-lite (lightweight)
  4 | # Both packages share the same codebase but have different dependencies
  5 | 
  6 | on:
  7 |   release:
  8 |     types: [published]
  9 |   workflow_dispatch:
 10 |     inputs:
 11 |       version:
 12 |         description: 'Version to publish (e.g., 8.75.1)'
 13 |         required: true
 14 |         type: string
 15 | 
 16 | jobs:
 17 |   publish-main:
 18 |     name: Publish Main Package (with PyTorch)
 19 |     runs-on: ubuntu-latest
 20 |     permissions:
 21 |       id-token: write
 22 |       contents: read
 23 | 
 24 |     steps:
 25 |     - uses: actions/checkout@v4
 26 |       with:
 27 |         fetch-depth: 0
 28 | 
 29 |     - name: Set up Python
 30 |       uses: actions/setup-python@v5
 31 |       with:
 32 |         python-version: '3.11'
 33 | 
 34 |     - name: Install build dependencies
 35 |       run: |
 36 |         python -m pip install --upgrade pip
 37 |         python -m pip install build hatchling twine
 38 | 
 39 |     - name: Build main package
 40 |       run: |
 41 |         echo "Building mcp-memory-service (full version with PyTorch)"
 42 |         python -m build
 43 | 
 44 |     - name: Publish to PyPI
 45 |       env:
 46 |         TWINE_USERNAME: __token__
 47 |         TWINE_PASSWORD: ${{ secrets.PYPI_TOKEN }}
 48 |       run: |
 49 |         python -m twine upload dist/* --skip-existing
 50 | 
 51 |   publish-lite:
 52 |     name: Publish Lite Package (ONNX only)
 53 |     runs-on: ubuntu-latest
 54 |     permissions:
 55 |       id-token: write
 56 |       contents: read
 57 | 
 58 |     steps:
 59 |     - uses: actions/checkout@v4
 60 |       with:
 61 |         fetch-depth: 0
 62 | 
 63 |     - name: Set up Python
 64 |       uses: actions/setup-python@v5
 65 |       with:
 66 |         python-version: '3.11'
 67 | 
 68 |     - name: Install build dependencies
 69 |       run: |
 70 |         python -m pip install --upgrade pip
 71 |         python -m pip install build hatchling twine
 72 | 
 73 |     - name: Prepare lite package
 74 |       run: |
 75 |         echo "Preparing mcp-memory-service-lite (lightweight ONNX version)"
 76 |         cp pyproject.toml pyproject-main.toml
 77 |         cp pyproject-lite.toml pyproject.toml
 78 | 
 79 |     - name: Build lite package
 80 |       run: |
 81 |         python -m build
 82 | 
 83 |     - name: Publish lite to PyPI
 84 |       env:
 85 |         TWINE_USERNAME: __token__
 86 |         TWINE_PASSWORD: ${{ secrets.PYPI_TOKEN }}
 87 |       run: |
 88 |         python -m twine upload dist/* --skip-existing
 89 | 
 90 |     - name: Clean build artifacts
 91 |       run: rm -rf dist/ build/ *.egg-info
 92 | 
 93 |     - name: Restore original pyproject.toml
 94 |       run: |
 95 |         mv pyproject-main.toml pyproject.toml
 96 | 
 97 |   verify-packages:
 98 |     name: Verify Both Packages
 99 |     runs-on: ubuntu-latest
100 |     needs: [publish-main, publish-lite]
101 | 
102 |     steps:
103 |     - name: Set up Python
104 |       uses: actions/setup-python@v5
105 |       with:
106 |         python-version: '3.11'
107 | 
108 |     - name: Wait for PyPI to update
109 |       run: sleep 60
110 | 
111 |     - name: Test main package install
112 |       run: |
113 |         python -m venv /tmp/test-main
114 |         /tmp/test-main/bin/pip install mcp-memory-service
115 |         /tmp/test-main/bin/python -c "import mcp_memory_service; print('Main package OK')"
116 | 
117 |     - name: Test lite package install
118 |       run: |
119 |         python -m venv /tmp/test-lite
120 |         /tmp/test-lite/bin/pip install mcp-memory-service-lite
121 |         /tmp/test-lite/bin/python -c "import mcp_memory_service; print('Lite package OK')"
122 | 
```

--------------------------------------------------------------------------------
/docs/integrations/gemini.md:
--------------------------------------------------------------------------------

```markdown
 1 | # Gemini Context: MCP Memory Service
 2 | 
 3 | ## Project Overview
 4 | 
 5 | This project is a sophisticated and feature-rich MCP (Model Context Protocol) server designed to provide a persistent, semantic memory layer for AI assistants, particularly "Claude Desktop". It's built with Python and leverages a variety of technologies to deliver a robust and performant memory service.
 6 | 
 7 | The core of the project is the `MemoryServer` class, which handles all MCP tool calls. It features a "dream-inspired" memory consolidation system that autonomously organizes and compresses memories over time. The server is built on top of FastAPI, providing a modern and asynchronous API.
 8 | 
 9 | The project offers two distinct storage backends, allowing users to choose the best fit for their needs:
10 | 
11 | *   **ChromaDB:** A feature-rich vector database that provides advanced search capabilities and is well-suited for large memory collections.
12 | *   **SQLite-vec:** A lightweight, file-based backend that uses the `sqlite-vec` extension for vector similarity search. This is a great option for resource-constrained environments.
13 | 
14 | The project also includes a comprehensive suite of scripts for installation, testing, and maintenance, as well as detailed documentation.
15 | 
16 | ## Building and Running
17 | 
18 | ### Installation
19 | 
20 | The project uses a custom installer that intelligently detects the user's hardware and selects the optimal configuration. To install the project, run the following commands:
21 | 
22 | ```bash
23 | # Clone the repository
24 | git clone https://github.com/doobidoo/mcp-memory-service.git
25 | cd mcp-memory-service
26 | 
27 | # Create and activate a virtual environment
28 | python -m venv venv
29 | source venv/bin/activate # On Windows: venv\Scripts\activate
30 | 
31 | # Run the intelligent installer
32 | python install.py
33 | ```
34 | 
35 | ### Running the Server
36 | 
37 | The server can be run in several ways, depending on the desired configuration. The primary entry point is the `mcp_memory_service.server:main` script, which can be executed as a Python module:
38 | 
39 | ```bash
40 | python -m mcp_memory_service.server
41 | ```
42 | 
43 | Alternatively, the `pyproject.toml` file defines a `memory` script that can be used to run the server:
44 | 
45 | ```bash
46 | memory
47 | ```
48 | 
49 | The server can also be run as a FastAPI application using `uvicorn`:
50 | 
51 | ```bash
52 | uvicorn mcp_memory_service.server:app --host 0.0.0.0 --port 8000
53 | ```
54 | 
55 | ### Testing
56 | 
57 | The project includes a comprehensive test suite that can be run using `pytest`:
58 | 
59 | ```bash
60 | pytest tests/
61 | ```
62 | 
63 | ## Development Conventions
64 | 
65 | *   **Python 3.10+:** The project requires Python 3.10 or higher.
66 | *   **Type Hinting:** The codebase uses type hints extensively to improve code clarity and maintainability.
67 | *   **Async/Await:** The project uses the `async/await` pattern for all I/O operations to ensure high performance and scalability.
68 | *   **PEP 8:** The code follows the PEP 8 style guide.
69 | *   **Dataclasses:** The project uses dataclasses for data models to reduce boilerplate code.
70 | *   **Docstrings:** All modules and functions have triple-quoted docstrings that explain their purpose, arguments, and return values.
71 | *   **Testing:** All new features should be accompanied by tests to ensure they work as expected and don't introduce regressions.
72 | 
```

--------------------------------------------------------------------------------
/scripts/quality/check_test_scores.py:
--------------------------------------------------------------------------------

```python
 1 | #!/usr/bin/env python3
 2 | """Check actual scores for test cases to calibrate expectations."""
 3 | import sys
 4 | from pathlib import Path
 5 | 
 6 | sys.path.insert(0, str(Path(__file__).parent.parent.parent / "src"))
 7 | 
 8 | from mcp_memory_service.quality.onnx_ranker import ONNXRankerModel
 9 | 
10 | def main():
11 |     print("Initializing DeBERTa model...")
12 |     model = ONNXRankerModel(model_name="nvidia-quality-classifier-deberta", device="cpu")
13 | 
14 |     # Test cases from test_deberta_absolute_quality_scoring
15 |     print("\n" + "="*80)
16 |     print("TEST 1: test_deberta_absolute_quality_scoring")
17 |     print("="*80)
18 | 
19 |     high_quality_1 = "Implement caching layer for API responses with Redis backend. Use TTL of 1 hour for user data."
20 |     high_quality_2 = "Fix bug in user authentication flow. Added proper session validation and error handling."
21 |     low_quality = "TODO: check this"
22 | 
23 |     score1 = model.score_quality("", high_quality_1)
24 |     score2 = model.score_quality("", high_quality_2)
25 |     score3 = model.score_quality("", low_quality)
26 | 
27 |     print(f"High quality 1: {score1:.4f} (expected ≥0.6)")
28 |     print(f"High quality 2: {score2:.4f} (expected ≥0.6)")
29 |     print(f"Low quality:    {score3:.4f} (expected <0.4)")
30 | 
31 |     # Test cases from test_deberta_3class_output_mapping
32 |     print("\n" + "="*80)
33 |     print("TEST 2: test_deberta_3class_output_mapping")
34 |     print("="*80)
35 | 
36 |     excellent = "The implementation uses a sophisticated multi-tier architecture with semantic analysis, pattern matching, and adaptive learning algorithms to optimize retrieval accuracy."
37 |     average = "The code does some processing and returns a result."
38 |     poor = "stuff things maybe later TODO"
39 | 
40 |     score_excellent = model.score_quality("", excellent)
41 |     score_average = model.score_quality("", average)
42 |     score_poor = model.score_quality("", poor)
43 | 
44 |     print(f"Excellent: {score_excellent:.4f}")
45 |     print(f"Average:   {score_average:.4f}")
46 |     print(f"Poor:      {score_poor:.4f}")
47 |     print(f"Range:     {max(score_excellent, score_average, score_poor) - min(score_excellent, score_average, score_poor):.4f} (expected >0.2)")
48 |     print(f"Ordered correctly: {score_excellent > score_average > score_poor}")
49 | 
50 |     # Additional realistic test cases
51 |     print("\n" + "="*80)
52 |     print("ADDITIONAL TESTS: Realistic memory content")
53 |     print("="*80)
54 | 
55 |     tests = [
56 |         ("Configured CI/CD pipeline with GitHub Actions. Set up automated testing, linting, and deployment to production on merge to main branch. Added branch protection rules.", "DevOps implementation"),
57 |         ("Refactored authentication module to use JWT tokens instead of session cookies. Implemented refresh token rotation and secure token storage in httpOnly cookies.", "Security improvement"),
58 |         ("Updated README with installation instructions", "Documentation update"),
59 |         ("Meeting notes: discussed project timeline", "Generic notes"),
60 |         ("TODO", "Minimal content"),
61 |         ("test", "Single word"),
62 |     ]
63 | 
64 |     for content, label in tests:
65 |         score = model.score_quality("", content)
66 |         print(f"{score:.4f} - {label}")
67 |         print(f"        Content: {content[:60]}...")
68 | 
69 | if __name__ == "__main__":
70 |     main()
71 | 
```

--------------------------------------------------------------------------------
/src/mcp_memory_service/server/logging_config.py:
--------------------------------------------------------------------------------

```python
 1 | # Copyright 2024 Heinrich Krupp
 2 | #
 3 | # Licensed under the Apache License, Version 2.0 (the "License");
 4 | # you may not use this file except in compliance with the License.
 5 | # You may obtain a copy of the License at
 6 | #
 7 | #     http://www.apache.org/licenses/LICENSE-2.0
 8 | #
 9 | # Unless required by applicable law or agreed to in writing, software
10 | # distributed under the License is distributed on an "AS IS" BASIS,
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | # See the License for the specific language governing permissions and
13 | # limitations under the License.
14 | 
15 | """
16 | Logging configuration module for MCP Memory Service.
17 | 
18 | Provides client-aware logging that adjusts output behavior based on MCP client type.
19 | Claude Desktop requires strict JSON mode (stderr only), while LM Studio supports
20 | dual-stream output (stdout for INFO/DEBUG, stderr for WARNING+).
21 | """
22 | 
23 | import sys
24 | import os
25 | import logging
26 | from .client_detection import MCP_CLIENT
27 | 
28 | # Custom logging handler that routes INFO/DEBUG to stdout, WARNING/ERROR to stderr
29 | class DualStreamHandler(logging.Handler):
30 |     """Client-aware handler that adjusts logging behavior based on MCP client."""
31 | 
32 |     def __init__(self, client_type='claude_desktop'):
33 |         super().__init__()
34 |         self.client_type = client_type
35 |         self.stdout_handler = logging.StreamHandler(sys.stdout)
36 |         self.stderr_handler = logging.StreamHandler(sys.stderr)
37 | 
38 |         # Set the same formatter for both handlers
39 |         formatter = logging.Formatter('%(levelname)s:%(name)s:%(message)s')
40 |         self.stdout_handler.setFormatter(formatter)
41 |         self.stderr_handler.setFormatter(formatter)
42 | 
43 |     def emit(self, record):
44 |         """Route log records based on client type and level."""
45 |         # For Claude Desktop: strict JSON mode - suppress most output, route everything to stderr
46 |         if self.client_type == 'claude_desktop':
47 |             # Only emit WARNING and above to stderr to maintain JSON protocol
48 |             if record.levelno >= logging.WARNING:
49 |                 self.stderr_handler.emit(record)
50 |             # Suppress INFO/DEBUG for Claude Desktop to prevent JSON parsing errors
51 |             return
52 | 
53 |         # For LM Studio: enhanced mode with dual-stream
54 |         if record.levelno >= logging.WARNING:  # WARNING, ERROR, CRITICAL
55 |             self.stderr_handler.emit(record)
56 |         else:  # DEBUG, INFO
57 |             self.stdout_handler.emit(record)
58 | 
59 | def configure_logging():
60 |     """Configure root logger with client-aware handler."""
61 |     # Configure logging with client-aware handler BEFORE any imports that use logging
62 |     log_level = os.getenv('LOG_LEVEL', 'WARNING').upper()  # Default to WARNING for performance
63 |     root_logger = logging.getLogger()
64 |     root_logger.setLevel(getattr(logging, log_level, logging.WARNING))
65 | 
66 |     # Remove any existing handlers to avoid duplicates
67 |     for handler in root_logger.handlers[:]:
68 |         root_logger.removeHandler(handler)
69 | 
70 |     # Add our custom client-aware handler
71 |     client_aware_handler = DualStreamHandler(client_type=MCP_CLIENT)
72 |     root_logger.addHandler(client_aware_handler)
73 | 
74 |     return logging.getLogger(__name__)
75 | 
76 | # Auto-configure on import
77 | logger = configure_logging()
78 | 
```
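
A minimal demonstration of the routing behavior using the handler class above, standalone and bypassing the auto-configuration. The `"lm_studio"` value is an assumption about what client detection reports; per `emit()`, any value other than `'claude_desktop'` selects dual-stream mode:

```python
import logging

# Importing the module runs configure_logging() on the root logger as a side effect.
from mcp_memory_service.server.logging_config import DualStreamHandler

log = logging.getLogger("demo")
log.setLevel(logging.DEBUG)
log.propagate = False  # keep the root logger's handler from double-emitting
log.addHandler(DualStreamHandler(client_type="lm_studio"))

log.info("routed to stdout (dual-stream mode)")
log.warning("routed to stderr (always)")

# With client_type='claude_desktop', the INFO record is dropped entirely
# so stdout stays valid JSON for the MCP protocol.
```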

--------------------------------------------------------------------------------
/src/mcp_memory_service/web/api/events.py:
--------------------------------------------------------------------------------

```python
 1 | """
 2 | Server-Sent Events endpoints for real-time updates.
 3 | """
 4 | 
 5 | from fastapi import APIRouter, Request, Depends
 6 | from pydantic import BaseModel
 7 | from typing import Dict, Any, List, TYPE_CHECKING
 8 | 
 9 | from ...config import OAUTH_ENABLED
10 | from ..sse import create_event_stream, sse_manager
11 | from ..dependencies import get_storage
12 | 
13 | # OAuth authentication imports (conditional)
14 | if OAUTH_ENABLED or TYPE_CHECKING:
15 |     from ..oauth.middleware import require_read_access, AuthenticationResult
16 | else:
17 |     # Provide type stubs when OAuth is disabled
18 |     AuthenticationResult = None
19 |     require_read_access = None
20 | 
21 | router = APIRouter()
22 | 
23 | 
24 | class ConnectionInfo(BaseModel):
25 |     """Individual connection information."""
26 |     connection_id: str
27 |     client_ip: str
28 |     user_agent: str
29 |     connected_duration_seconds: float
30 |     last_heartbeat_seconds_ago: float
31 | 
32 | 
33 | class SSEStatsResponse(BaseModel):
34 |     """Response model for SSE connection statistics."""
35 |     total_connections: int
36 |     heartbeat_interval: int
37 |     connections: List[ConnectionInfo]
38 | 
39 | 
40 | @router.get("/events")
41 | async def events_endpoint(
42 |     request: Request,
43 |     user: AuthenticationResult = Depends(require_read_access) if OAUTH_ENABLED else None
44 | ):
45 |     """
46 |     Server-Sent Events endpoint for real-time updates.
47 |     
48 |     Provides a continuous stream of events including:
49 |     - memory_stored: When new memories are added
50 |     - memory_deleted: When memories are removed
51 |     - search_completed: When searches finish
52 |     - health_update: System status changes
53 |     - heartbeat: Periodic keep-alive signals
54 |     - connection_established: Welcome message
55 |     """
56 |     return await create_event_stream(request)
57 | 
58 | 
59 | @router.get("/events/stats")
60 | async def get_sse_stats(
61 |     user: AuthenticationResult = Depends(require_read_access) if OAUTH_ENABLED else None
62 | ):
63 |     """
64 |     Get statistics about current SSE connections.
65 |     
66 |     Returns information about active connections, connection duration,
67 |     and heartbeat status.
68 |     """
69 |     try:
70 |         # Fetch raw stats from the SSE manager, then normalize the structure
71 |         stats = sse_manager.get_connection_stats()
72 |         
73 |         # Validate structure and transform if needed
74 |         connections = []
75 |         for conn_data in stats.get('connections', []):
76 |             connections.append({
77 |                 "connection_id": conn_data.get("connection_id", "unknown"),
78 |                 "client_ip": conn_data.get("client_ip", "unknown"),
79 |                 "user_agent": conn_data.get("user_agent", "unknown"),
80 |                 "connected_duration_seconds": conn_data.get("connected_duration_seconds", 0.0),
81 |                 "last_heartbeat_seconds_ago": conn_data.get("last_heartbeat_seconds_ago", 0.0)
82 |             })
83 |         
84 |         return {
85 |             "total_connections": stats.get("total_connections", 0),
86 |             "heartbeat_interval": stats.get("heartbeat_interval", 30),
87 |             "connections": connections
88 |         }
89 |     except Exception as e:
90 |         import logging
91 |         logging.getLogger(__name__).error(f"Error getting SSE stats: {str(e)}")
92 |         # Return safe default stats if there's an error
93 |         return {
94 |             "total_connections": 0,
95 |             "heartbeat_interval": 30,
96 |             "connections": []
97 |         }
```
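
Assuming the router above is mounted under `/api` on a local debug server (port 8000, no API key) — the base URL is an assumption, not something defined in this module — a minimal client sketch for both endpoints might look like this:

```python
import json
import requests

BASE = "http://localhost:8000/api"  # assumed mount point; adjust to your deployment

# One-shot call to the stats endpoint.
stats = requests.get(f"{BASE}/events/stats", timeout=5).json()
print(f"{stats['total_connections']} active SSE connection(s)")

# Stream events until interrupted. SSE frames arrive as "event:"/"data:"
# lines separated by blank lines; this loop parses only the data payloads
# and assumes they are JSON, as SSE data frames typically are here.
with requests.get(f"{BASE}/events", stream=True, timeout=(5, None)) as resp:
    for line in resp.iter_lines(decode_unicode=True):
        if line and line.startswith("data:"):
            payload = json.loads(line[len("data:"):].strip())
            print("event payload:", payload)
```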

--------------------------------------------------------------------------------
/start_http_debug.bat:
--------------------------------------------------------------------------------

```
  1 | @echo off
  2 | REM MCP Memory Service HTTP Debug Mode Startup Script
  3 | REM This script starts the MCP Memory Service in HTTP mode for debugging and testing
  4 | 
  5 | echo ========================================
  6 | echo MCP Memory Service HTTP Debug Mode
  7 | echo Using uv for dependency management
  8 | echo ========================================
  9 | 
 10 | REM Check if uv is available
 11 | uv --version >nul 2>&1
 12 | if %errorlevel% neq 0 (
 13 |     echo ERROR: uv is not installed or not in PATH
 14 |     echo Please install uv: https://docs.astral.sh/uv/getting-started/installation/
 15 |     pause
 16 |     exit /b 1
 17 | )
 18 | 
 19 | echo uv version:
 20 | uv --version
 21 | 
 22 | REM Install dependencies using uv sync (recommended)
 23 | echo.
 24 | echo Installing dependencies...
 25 | echo This may take a few minutes on first run...
 26 | echo Installing core dependencies...
 27 | uv sync
 28 | 
 29 | REM Check if installation was successful
 30 | if %errorlevel% neq 0 (
 31 |     echo ERROR: Failed to install dependencies
 32 |     echo Please check the error messages above
 33 |     pause
 34 |     exit /b 1
 35 | )
 36 | 
 37 | echo Dependencies installed successfully!
 38 | 
 39 | REM Verify Python can import the service
 40 | echo.
 41 | echo Verifying installation...
 42 | uv run python -c "import sys; sys.path.insert(0, 'src'); import mcp_memory_service; print('✓ MCP Memory Service imported successfully')"
 43 | if %errorlevel% neq 0 (
 44 |     echo ERROR: Failed to import MCP Memory Service
 45 |     echo Please check the error messages above
 46 |     pause
 47 |     exit /b 1
 48 | )
 49 | 
 50 | REM Set environment variables for HTTP mode
 51 | set MCP_MEMORY_STORAGE_BACKEND=sqlite_vec
 52 | set MCP_HTTP_ENABLED=true
 53 | set MCP_HTTP_PORT=8000
 54 | set MCP_HTTPS_ENABLED=false
 55 | set MCP_MDNS_ENABLED=true
 56 | set MCP_MDNS_SERVICE_NAME=MCP-Memory-Service-Debug
 57 | 
 58 | REM Fix Transformers cache warning
 59 | set HF_HOME=%USERPROFILE%\.cache\huggingface
 60 | set TRANSFORMERS_CACHE=%USERPROFILE%\.cache\huggingface\transformers
 61 | 
 62 | REM Optional: Set API key for security
 63 | REM To use authentication, set your own API key in the environment variable:
 64 | REM set MCP_API_KEY=your-secure-api-key-here
 65 | REM Or pass it when running this script: set MCP_API_KEY=mykey && start_http_debug.bat
 66 | if "%MCP_API_KEY%"=="" (
 67 |     echo WARNING: No API key set. Running without authentication.
 68 |     echo          To enable auth, set MCP_API_KEY environment variable.
 69 | )
 70 | 
 71 | REM Optional: Enable debug logging
 72 | set MCP_DEBUG=true
 73 | 
 74 | echo Configuration:
 75 | echo   Storage Backend: %MCP_MEMORY_STORAGE_BACKEND%
 76 | echo   HTTP Port: %MCP_HTTP_PORT%
 77 | echo   HTTPS Enabled: %MCP_HTTPS_ENABLED%
 78 | echo   mDNS Enabled: %MCP_MDNS_ENABLED%
 79 | echo   Service Name: %MCP_MDNS_SERVICE_NAME%
 80 | if "%MCP_API_KEY%"=="" (
 81 |     echo   API Key Set: No ^(WARNING: No authentication^)
 82 | ) else (
 83 |     echo   API Key Set: Yes
 84 | )
 85 | echo   Debug Mode: %MCP_DEBUG%
 86 | echo.
 87 | 
 88 | echo Service will be available at:
 89 | echo   HTTP: http://localhost:%MCP_HTTP_PORT%
 90 | echo   API: http://localhost:%MCP_HTTP_PORT%/api
 91 | echo   Health: http://localhost:%MCP_HTTP_PORT%/api/health
 92 | echo   Dashboard: http://localhost:%MCP_HTTP_PORT%/dashboard
 93 | echo.
 94 | echo Press Ctrl+C to stop the service
 95 | echo.
 96 | echo ========================================
 97 | echo Starting MCP Memory Service...
 98 | echo ========================================
 99 | 
100 | REM Start the service using Python directly (required for HTTP mode)
101 | echo Starting service with Python...
102 | echo Note: Using Python directly for HTTP server mode
103 | uv run python run_server.py
```
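
Once the script is running, a small readiness probe can confirm the health endpoint is answering before tests are pointed at it. This is a hypothetical helper, assuming the default port 8000 set above and the `requests` package:

```python
import time
import requests

HEALTH_URL = "http://localhost:8000/api/health"  # matches MCP_HTTP_PORT above

# Poll until the debug server answers, or give up after ~30 seconds.
for attempt in range(30):
    try:
        resp = requests.get(HEALTH_URL, timeout=2)
        if resp.ok:
            print(f"Service healthy after {attempt + 1} attempt(s): {resp.json()}")
            break
    except requests.ConnectionError:
        pass  # server not up yet
    time.sleep(1)
else:
    raise SystemExit("Service did not become healthy within 30 seconds")
```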

--------------------------------------------------------------------------------
/scripts/pr/amp_detect_breaking_changes.sh:
--------------------------------------------------------------------------------

```bash
 1 | #!/bin/bash
 2 | # scripts/pr/amp_detect_breaking_changes.sh - Detect breaking API changes using Amp CLI
 3 | #
 4 | # Usage: bash scripts/pr/amp_detect_breaking_changes.sh <BASE_BRANCH> <HEAD_BRANCH>
 5 | # Example: bash scripts/pr/amp_detect_breaking_changes.sh main feature/new-api
 6 | 
 7 | set -e
 8 | 
 9 | BASE_BRANCH=${1:-main}
10 | HEAD_BRANCH=${2:-$(git branch --show-current)}
11 | 
12 | echo "=== Amp CLI Breaking Change Detection ==="
13 | echo "Base Branch: $BASE_BRANCH"
14 | echo "Head Branch: $HEAD_BRANCH"
15 | echo ""
16 | 
17 | # Ensure Amp directories exist
18 | mkdir -p .claude/amp/prompts/pending
19 | mkdir -p .claude/amp/responses/ready
20 | 
21 | # Get API-related file changes
22 | echo "Analyzing API changes..."
23 | api_changes=$(git diff "origin/${BASE_BRANCH}...origin/${HEAD_BRANCH}" -- \
24 |     src/mcp_memory_service/tools.py \
25 |     src/mcp_memory_service/web/api/ \
26 |     2>/dev/null || echo "")
27 | 
28 | if [ -z "$api_changes" ]; then
29 |     echo "✅ No API changes detected"
30 |     exit 0
31 | fi
32 | 
33 | echo "API changes detected ($(echo "$api_changes" | wc -l) lines)"
34 | echo ""
35 | 
36 | # Generate UUID for breaking change analysis
37 | breaking_uuid=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)
38 | 
39 | echo "Creating Amp prompt for breaking change analysis..."
40 | 
41 | # Truncate large diffs to avoid token overflow
42 | api_changes_truncated=$(echo "$api_changes" | head -300)
43 | 
44 | # Create breaking change analysis prompt
45 | cat > .claude/amp/prompts/pending/breaking-${breaking_uuid}.json << EOF
46 | {
47 |   "id": "${breaking_uuid}",
48 |   "timestamp": "$(date -u +"%Y-%m-%dT%H:%M:%S.000Z")",
49 |   "prompt": "Analyze these API changes for breaking changes. A breaking change is:\n- Removed function/method/endpoint\n- Changed function signature (parameters removed/reordered)\n- Changed return type in incompatible way\n- Renamed public API\n- Changed HTTP endpoint path/method\n- Changed MCP tool schema (added required parameters, removed optional parameters, changed parameter types)\n\nReport ONLY breaking changes with severity (CRITICAL/HIGH/MEDIUM). If no breaking changes, respond: 'BREAKING_CHANGES_NONE'.\n\nOutput format:\nSeverity: [CRITICAL|HIGH|MEDIUM]\nType: [removal|signature-change|rename|schema-change]\nLocation: [file:function/endpoint]\nDetails: [explanation]\nMigration: [suggested migration path]\n\nAPI Changes:\n${api_changes_truncated}",
50 |   "context": {
51 |     "project": "mcp-memory-service",
52 |     "task": "breaking-change-detection",
53 |     "base_branch": "${BASE_BRANCH}",
54 |     "head_branch": "${HEAD_BRANCH}"
55 |   },
56 |   "options": {
57 |     "timeout": 120000,
58 |     "format": "text"
59 |   }
60 | }
61 | EOF
62 | 
63 | echo "✅ Created Amp prompt for breaking change analysis"
64 | echo ""
65 | echo "=== Run this Amp command ==="
66 | echo "amp @.claude/amp/prompts/pending/breaking-${breaking_uuid}.json"
67 | echo ""
68 | echo "=== Then collect the analysis ==="
69 | echo "bash scripts/pr/amp_collect_results.sh --timeout 120 --uuids '${breaking_uuid}'"
70 | echo ""
71 | 
72 | # Alternative: Direct analysis with custom result handler
73 | echo "=== Or use this one-liner for immediate analysis ==="
74 | echo "(amp @.claude/amp/prompts/pending/breaking-${breaking_uuid}.json > /tmp/amp-breaking.log 2>&1); sleep 5 && bash scripts/pr/amp_analyze_breaking_changes.sh '${breaking_uuid}'"
75 | echo ""
76 | 
77 | # Save UUID for later collection
78 | echo "${breaking_uuid}" > /tmp/amp_breaking_changes_uuid.txt
79 | echo "UUID saved to /tmp/amp_breaking_changes_uuid.txt"
80 | 
```
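
The heredoc above is the heart of the script. A minimal Python sketch of the same prompt-file construction shows the expected JSON shape; the `write_breaking_change_prompt` helper and its abbreviated prompt text are illustrative, not part of the repository:

```python
import json
import uuid
from datetime import datetime, timezone
from pathlib import Path

def write_breaking_change_prompt(api_diff: str, base: str, head: str) -> Path:
    """Mirror the shell heredoc: write a pending Amp prompt and return its path."""
    prompt_id = str(uuid.uuid4())
    prompt = {
        "id": prompt_id,
        "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z"),
        "prompt": (
            "Analyze these API changes for breaking changes. "
            "If none, respond: 'BREAKING_CHANGES_NONE'.\n\nAPI Changes:\n"
            + "\n".join(api_diff.splitlines()[:300])  # same 300-line truncation
        ),
        "context": {
            "project": "mcp-memory-service",
            "task": "breaking-change-detection",
            "base_branch": base,
            "head_branch": head,
        },
        "options": {"timeout": 120000, "format": "text"},
    }
    out_dir = Path(".claude/amp/prompts/pending")
    out_dir.mkdir(parents=True, exist_ok=True)
    out_path = out_dir / f"breaking-{prompt_id}.json"
    out_path.write_text(json.dumps(prompt, indent=2))
    return out_path
```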

--------------------------------------------------------------------------------
/docs/HOOK_IMPROVEMENTS.md:
--------------------------------------------------------------------------------

```markdown
 1 | # Claude Code Session Hook Improvements
 2 | 
 3 | ## Overview
 4 | Enhanced the session start hook to prioritize recent memories and provide better context awareness for Claude Code sessions.
 5 | 
 6 | ## Key Improvements Made
 7 | 
 8 | ### 1. Multi-Phase Memory Retrieval
 9 | - **Phase 1**: Recent memories (last week) - 60% of available slots
10 | - **Phase 2**: Important tagged memories (architecture, decisions) - remaining slots
11 | - **Phase 3**: Fallback to general project context if needed
12 | 
13 | ### 2. Enhanced Recency Prioritization
14 | - Recent memories get higher priority in initial search
14 | - Time-based indicators: 🕒 for today, 📅 for this week, plain dates for older memories
16 | - Configurable time windows (`last-week`, `last-2-weeks`, `last-month`)
17 | 
18 | ### 3. Better Memory Categorization
19 | - New "Recent Work" category for memories from last 7 days
20 | - Improved categorization: Recent → Decisions → Architecture → Insights → Features → Context
21 | - Visual indicators for recency in CLI output
22 | 
23 | ### 4. Enhanced Semantic Queries  
24 | - Git context integration (branch, recent commits)
25 | - Framework and language context in queries
26 | - User message context when available
27 | 
28 | ### 5. Improved Configuration
29 | ```json
30 | {
31 |   "memoryService": {
32 |     "recentFirstMode": true,           // Enable multi-phase retrieval
33 |     "recentMemoryRatio": 0.6,          // 60% for recent memories
34 |     "recentTimeWindow": "last-week",   // Time window for recent search
35 |     "fallbackTimeWindow": "last-month" // Fallback time window
36 |   },
37 |   "output": {
38 |     "showMemoryDetails": true,         // Show detailed memory info
39 |     "showRecencyInfo": true,           // Show recency indicators
40 |     "showPhaseDetails": true           // Show search phase details
41 |   }
42 | }
43 | ```
44 | 
45 | ### 6. Better Visual Feedback
46 | - Phase-by-phase search reporting
47 | - Recency indicators in memory display
48 | - Enhanced scoring display with time flags
49 | - Better deduplication reporting
50 | 
51 | ## Expected Impact
52 | 
53 | ### Before
54 | - Single query for all memories
55 | - No recency prioritization
56 | - Limited context in queries
57 | - Basic categorization
58 | - Truncated output
59 | 
60 | ### After  
61 | - Multi-phase approach prioritizing recent memories
62 | - Smart time-based retrieval
63 | - Git and framework-aware queries
64 | - Enhanced categorization with "Recent Work"
65 | - Full context display with recency indicators
66 | 
67 | ## Usage
68 | 
69 | The improvements are **backward compatible** - existing installations will automatically use the enhanced system. To disable, set:
70 | 
71 | ```json
72 | {
73 |   "memoryService": {
74 |     "recentFirstMode": false
75 |   }
76 | }
77 | ```
78 | 
79 | ## Files Modified
80 | 
81 | 1. `claude-hooks/core/session-start.js` - Multi-phase retrieval logic
82 | 2. `claude-hooks/utilities/context-formatter.js` - Enhanced display and categorization  
83 | 3. `claude-hooks/config.json` - New configuration options
84 | 4. `test-hook.js` - Test script for validation
85 | 
86 | ## Testing
87 | 
88 | Run `node test-hook.js` to test the enhanced hook with mock context. The test demonstrates:
89 | - Project detection and context building
90 | - Multi-phase memory retrieval
91 | - Enhanced categorization and display
92 | - Git context integration
93 | - Configurable time windows
94 | 
95 | ## Result
96 | 
97 | Session hooks now provide more relevant, recent context while maintaining access to important historical decisions and architecture information. Users get better continuity with their recent work while preserving long-term project memory.
```
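
As a worked example of the `recentMemoryRatio` setting documented above, a small sketch shows how the 60/40 slot split could be computed. The `allocate_slots` helper is hypothetical; the real logic lives in `claude-hooks/core/session-start.js`:

```python
import math

def allocate_slots(total_slots: int, recent_ratio: float = 0.6) -> tuple[int, int]:
    """Split available memory slots between recent and important memories."""
    recent = math.ceil(total_slots * recent_ratio)  # Phase 1: recent window
    important = total_slots - recent                # Phase 2: tagged memories
    return recent, important

# With 8 context slots and the default 0.6 ratio: 5 recent, 3 important.
print(allocate_slots(8))  # (5, 3)
```

Rounding up for the recent share means ties favor recency, which matches the hook's recent-first design.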