This is page 3 of 35. Use http://codebase.md/doobidoo/mcp-memory-service?page={x} to view the full context.

# Directory Structure

```
├── .claude
│   ├── agents
│   │   ├── amp-bridge.md
│   │   ├── amp-pr-automator.md
│   │   ├── code-quality-guard.md
│   │   ├── gemini-pr-automator.md
│   │   └── github-release-manager.md
│   ├── settings.local.json.backup
│   └── settings.local.json.local
├── .commit-message
├── .dockerignore
├── .env.example
├── .env.sqlite.backup
├── .envnn#
├── .gitattributes
├── .github
│   ├── FUNDING.yml
│   ├── ISSUE_TEMPLATE
│   │   ├── bug_report.yml
│   │   ├── config.yml
│   │   ├── feature_request.yml
│   │   └── performance_issue.yml
│   ├── pull_request_template.md
│   └── workflows
│       ├── bridge-tests.yml
│       ├── CACHE_FIX.md
│       ├── claude-code-review.yml
│       ├── claude.yml
│       ├── cleanup-images.yml.disabled
│       ├── dev-setup-validation.yml
│       ├── docker-publish.yml
│       ├── LATEST_FIXES.md
│       ├── main-optimized.yml.disabled
│       ├── main.yml
│       ├── publish-and-test.yml
│       ├── README_OPTIMIZATION.md
│       ├── release-tag.yml.disabled
│       ├── release.yml
│       ├── roadmap-review-reminder.yml
│       ├── SECRET_CONDITIONAL_FIX.md
│       └── WORKFLOW_FIXES.md
├── .gitignore
├── .mcp.json.backup
├── .mcp.json.template
├── .pyscn
│   ├── .gitignore
│   └── reports
│       └── analyze_20251123_214224.html
├── AGENTS.md
├── archive
│   ├── deployment
│   │   ├── deploy_fastmcp_fixed.sh
│   │   ├── deploy_http_with_mcp.sh
│   │   └── deploy_mcp_v4.sh
│   ├── deployment-configs
│   │   ├── empty_config.yml
│   │   └── smithery.yaml
│   ├── development
│   │   └── test_fastmcp.py
│   ├── docs-removed-2025-08-23
│   │   ├── authentication.md
│   │   ├── claude_integration.md
│   │   ├── claude-code-compatibility.md
│   │   ├── claude-code-integration.md
│   │   ├── claude-code-quickstart.md
│   │   ├── claude-desktop-setup.md
│   │   ├── complete-setup-guide.md
│   │   ├── database-synchronization.md
│   │   ├── development
│   │   │   ├── autonomous-memory-consolidation.md
│   │   │   ├── CLEANUP_PLAN.md
│   │   │   ├── CLEANUP_README.md
│   │   │   ├── CLEANUP_SUMMARY.md
│   │   │   ├── dream-inspired-memory-consolidation.md
│   │   │   ├── hybrid-slm-memory-consolidation.md
│   │   │   ├── mcp-milestone.md
│   │   │   ├── multi-client-architecture.md
│   │   │   ├── test-results.md
│   │   │   └── TIMESTAMP_FIX_SUMMARY.md
│   │   ├── distributed-sync.md
│   │   ├── invocation_guide.md
│   │   ├── macos-intel.md
│   │   ├── master-guide.md
│   │   ├── mcp-client-configuration.md
│   │   ├── multi-client-server.md
│   │   ├── service-installation.md
│   │   ├── sessions
│   │   │   └── MCP_ENHANCEMENT_SESSION_MEMORY_v4.1.0.md
│   │   ├── UBUNTU_SETUP.md
│   │   ├── ubuntu.md
│   │   ├── windows-setup.md
│   │   └── windows.md
│   ├── docs-root-cleanup-2025-08-23
│   │   ├── AWESOME_LIST_SUBMISSION.md
│   │   ├── CLOUDFLARE_IMPLEMENTATION.md
│   │   ├── DOCUMENTATION_ANALYSIS.md
│   │   ├── DOCUMENTATION_CLEANUP_PLAN.md
│   │   ├── DOCUMENTATION_CONSOLIDATION_COMPLETE.md
│   │   ├── LITESTREAM_SETUP_GUIDE.md
│   │   ├── lm_studio_system_prompt.md
│   │   ├── PYTORCH_DOWNLOAD_FIX.md
│   │   └── README-ORIGINAL-BACKUP.md
│   ├── investigations
│   │   └── MACOS_HOOKS_INVESTIGATION.md
│   ├── litestream-configs-v6.3.0
│   │   ├── install_service.sh
│   │   ├── litestream_master_config_fixed.yml
│   │   ├── litestream_master_config.yml
│   │   ├── litestream_replica_config_fixed.yml
│   │   ├── litestream_replica_config.yml
│   │   ├── litestream_replica_simple.yml
│   │   ├── litestream-http.service
│   │   ├── litestream.service
│   │   └── requirements-cloudflare.txt
│   ├── release-notes
│   │   └── release-notes-v7.1.4.md
│   └── setup-development
│       ├── README.md
│       ├── setup_consolidation_mdns.sh
│       ├── STARTUP_SETUP_GUIDE.md
│       └── test_service.sh
├── CHANGELOG-HISTORIC.md
├── CHANGELOG.md
├── claude_commands
│   ├── memory-context.md
│   ├── memory-health.md
│   ├── memory-ingest-dir.md
│   ├── memory-ingest.md
│   ├── memory-recall.md
│   ├── memory-search.md
│   ├── memory-store.md
│   ├── README.md
│   └── session-start.md
├── claude-hooks
│   ├── config.json
│   ├── config.template.json
│   ├── CONFIGURATION.md
│   ├── core
│   │   ├── memory-retrieval.js
│   │   ├── mid-conversation.js
│   │   ├── session-end.js
│   │   ├── session-start.js
│   │   └── topic-change.js
│   ├── debug-pattern-test.js
│   ├── install_claude_hooks_windows.ps1
│   ├── install_hooks.py
│   ├── memory-mode-controller.js
│   ├── MIGRATION.md
│   ├── README-NATURAL-TRIGGERS.md
│   ├── README-phase2.md
│   ├── README.md
│   ├── simple-test.js
│   ├── statusline.sh
│   ├── test-adaptive-weights.js
│   ├── test-dual-protocol-hook.js
│   ├── test-mcp-hook.js
│   ├── test-natural-triggers.js
│   ├── test-recency-scoring.js
│   ├── tests
│   │   ├── integration-test.js
│   │   ├── phase2-integration-test.js
│   │   ├── test-code-execution.js
│   │   ├── test-cross-session.json
│   │   ├── test-session-tracking.json
│   │   └── test-threading.json
│   ├── utilities
│   │   ├── adaptive-pattern-detector.js
│   │   ├── context-formatter.js
│   │   ├── context-shift-detector.js
│   │   ├── conversation-analyzer.js
│   │   ├── dynamic-context-updater.js
│   │   ├── git-analyzer.js
│   │   ├── mcp-client.js
│   │   ├── memory-client.js
│   │   ├── memory-scorer.js
│   │   ├── performance-manager.js
│   │   ├── project-detector.js
│   │   ├── session-tracker.js
│   │   ├── tiered-conversation-monitor.js
│   │   └── version-checker.js
│   └── WINDOWS-SESSIONSTART-BUG.md
├── CLAUDE.md
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── Development-Sprint-November-2025.md
├── docs
│   ├── amp-cli-bridge.md
│   ├── api
│   │   ├── code-execution-interface.md
│   │   ├── memory-metadata-api.md
│   │   ├── PHASE1_IMPLEMENTATION_SUMMARY.md
│   │   ├── PHASE2_IMPLEMENTATION_SUMMARY.md
│   │   ├── PHASE2_REPORT.md
│   │   └── tag-standardization.md
│   ├── architecture
│   │   ├── search-enhancement-spec.md
│   │   └── search-examples.md
│   ├── architecture.md
│   ├── archive
│   │   └── obsolete-workflows
│   │       ├── load_memory_context.md
│   │       └── README.md
│   ├── assets
│   │   └── images
│   │       ├── dashboard-v3.3.0-preview.png
│   │       ├── memory-awareness-hooks-example.png
│   │       ├── project-infographic.svg
│   │       └── README.md
│   ├── CLAUDE_CODE_QUICK_REFERENCE.md
│   ├── cloudflare-setup.md
│   ├── deployment
│   │   ├── docker.md
│   │   ├── dual-service.md
│   │   ├── production-guide.md
│   │   └── systemd-service.md
│   ├── development
│   │   ├── ai-agent-instructions.md
│   │   ├── code-quality
│   │   │   ├── phase-2a-completion.md
│   │   │   ├── phase-2a-handle-get-prompt.md
│   │   │   ├── phase-2a-index.md
│   │   │   ├── phase-2a-install-package.md
│   │   │   └── phase-2b-session-summary.md
│   │   ├── code-quality-workflow.md
│   │   ├── dashboard-workflow.md
│   │   ├── issue-management.md
│   │   ├── pr-review-guide.md
│   │   ├── refactoring-notes.md
│   │   ├── release-checklist.md
│   │   └── todo-tracker.md
│   ├── docker-optimized-build.md
│   ├── document-ingestion.md
│   ├── DOCUMENTATION_AUDIT.md
│   ├── enhancement-roadmap-issue-14.md
│   ├── examples
│   │   ├── analysis-scripts.js
│   │   ├── maintenance-session-example.md
│   │   ├── memory-distribution-chart.jsx
│   │   └── tag-schema.json
│   ├── first-time-setup.md
│   ├── glama-deployment.md
│   ├── guides
│   │   ├── advanced-command-examples.md
│   │   ├── chromadb-migration.md
│   │   ├── commands-vs-mcp-server.md
│   │   ├── mcp-enhancements.md
│   │   ├── mdns-service-discovery.md
│   │   ├── memory-consolidation-guide.md
│   │   ├── migration.md
│   │   ├── scripts.md
│   │   └── STORAGE_BACKENDS.md
│   ├── HOOK_IMPROVEMENTS.md
│   ├── hooks
│   │   └── phase2-code-execution-migration.md
│   ├── http-server-management.md
│   ├── ide-compatability.md
│   ├── IMAGE_RETENTION_POLICY.md
│   ├── images
│   │   └── dashboard-placeholder.md
│   ├── implementation
│   │   ├── health_checks.md
│   │   └── performance.md
│   ├── IMPLEMENTATION_PLAN_HTTP_SSE.md
│   ├── integration
│   │   ├── homebrew.md
│   │   └── multi-client.md
│   ├── integrations
│   │   ├── gemini.md
│   │   ├── groq-bridge.md
│   │   ├── groq-integration-summary.md
│   │   └── groq-model-comparison.md
│   ├── integrations.md
│   ├── legacy
│   │   └── dual-protocol-hooks.md
│   ├── LM_STUDIO_COMPATIBILITY.md
│   ├── maintenance
│   │   └── memory-maintenance.md
│   ├── mastery
│   │   ├── api-reference.md
│   │   ├── architecture-overview.md
│   │   ├── configuration-guide.md
│   │   ├── local-setup-and-run.md
│   │   ├── testing-guide.md
│   │   └── troubleshooting.md
│   ├── migration
│   │   └── code-execution-api-quick-start.md
│   ├── natural-memory-triggers
│   │   ├── cli-reference.md
│   │   ├── installation-guide.md
│   │   └── performance-optimization.md
│   ├── oauth-setup.md
│   ├── pr-graphql-integration.md
│   ├── quick-setup-cloudflare-dual-environment.md
│   ├── README.md
│   ├── remote-configuration-wiki-section.md
│   ├── research
│   │   ├── code-execution-interface-implementation.md
│   │   └── code-execution-interface-summary.md
│   ├── ROADMAP.md
│   ├── sqlite-vec-backend.md
│   ├── statistics
│   │   ├── charts
│   │   │   ├── activity_patterns.png
│   │   │   ├── contributors.png
│   │   │   ├── growth_trajectory.png
│   │   │   ├── monthly_activity.png
│   │   │   └── october_sprint.png
│   │   ├── data
│   │   │   ├── activity_by_day.csv
│   │   │   ├── activity_by_hour.csv
│   │   │   ├── contributors.csv
│   │   │   └── monthly_activity.csv
│   │   ├── generate_charts.py
│   │   └── REPOSITORY_STATISTICS.md
│   ├── technical
│   │   ├── development.md
│   │   ├── memory-migration.md
│   │   ├── migration-log.md
│   │   ├── sqlite-vec-embedding-fixes.md
│   │   └── tag-storage.md
│   ├── testing
│   │   └── regression-tests.md
│   ├── testing-cloudflare-backend.md
│   ├── troubleshooting
│   │   ├── cloudflare-api-token-setup.md
│   │   ├── cloudflare-authentication.md
│   │   ├── general.md
│   │   ├── hooks-quick-reference.md
│   │   ├── pr162-schema-caching-issue.md
│   │   ├── session-end-hooks.md
│   │   └── sync-issues.md
│   └── tutorials
│       ├── advanced-techniques.md
│       ├── data-analysis.md
│       └── demo-session-walkthrough.md
├── examples
│   ├── claude_desktop_config_template.json
│   ├── claude_desktop_config_windows.json
│   ├── claude-desktop-http-config.json
│   ├── config
│   │   └── claude_desktop_config.json
│   ├── http-mcp-bridge.js
│   ├── memory_export_template.json
│   ├── README.md
│   ├── setup
│   │   └── setup_multi_client_complete.py
│   └── start_https_example.sh
├── install_service.py
├── install.py
├── LICENSE
├── NOTICE
├── pyproject.toml
├── pytest.ini
├── README.md
├── run_server.py
├── scripts
│   ├── .claude
│   │   └── settings.local.json
│   ├── archive
│   │   └── check_missing_timestamps.py
│   ├── backup
│   │   ├── backup_memories.py
│   │   ├── backup_sqlite_vec.sh
│   │   ├── export_distributable_memories.sh
│   │   └── restore_memories.py
│   ├── benchmarks
│   │   ├── benchmark_code_execution_api.py
│   │   ├── benchmark_hybrid_sync.py
│   │   └── benchmark_server_caching.py
│   ├── database
│   │   ├── analyze_sqlite_vec_db.py
│   │   ├── check_sqlite_vec_status.py
│   │   ├── db_health_check.py
│   │   └── simple_timestamp_check.py
│   ├── development
│   │   ├── debug_server_initialization.py
│   │   ├── find_orphaned_files.py
│   │   ├── fix_mdns.sh
│   │   ├── fix_sitecustomize.py
│   │   ├── remote_ingest.sh
│   │   ├── setup-git-merge-drivers.sh
│   │   ├── uv-lock-merge.sh
│   │   └── verify_hybrid_sync.py
│   ├── hooks
│   │   └── pre-commit
│   ├── installation
│   │   ├── install_linux_service.py
│   │   ├── install_macos_service.py
│   │   ├── install_uv.py
│   │   ├── install_windows_service.py
│   │   ├── install.py
│   │   ├── setup_backup_cron.sh
│   │   ├── setup_claude_mcp.sh
│   │   └── setup_cloudflare_resources.py
│   ├── linux
│   │   ├── service_status.sh
│   │   ├── start_service.sh
│   │   ├── stop_service.sh
│   │   ├── uninstall_service.sh
│   │   └── view_logs.sh
│   ├── maintenance
│   │   ├── assign_memory_types.py
│   │   ├── check_memory_types.py
│   │   ├── cleanup_corrupted_encoding.py
│   │   ├── cleanup_memories.py
│   │   ├── cleanup_organize.py
│   │   ├── consolidate_memory_types.py
│   │   ├── consolidation_mappings.json
│   │   ├── delete_orphaned_vectors_fixed.py
│   │   ├── fast_cleanup_duplicates_with_tracking.sh
│   │   ├── find_all_duplicates.py
│   │   ├── find_cloudflare_duplicates.py
│   │   ├── find_duplicates.py
│   │   ├── memory-types.md
│   │   ├── README.md
│   │   ├── recover_timestamps_from_cloudflare.py
│   │   ├── regenerate_embeddings.py
│   │   ├── repair_malformed_tags.py
│   │   ├── repair_memories.py
│   │   ├── repair_sqlite_vec_embeddings.py
│   │   ├── repair_zero_embeddings.py
│   │   ├── restore_from_json_export.py
│   │   └── scan_todos.sh
│   ├── migration
│   │   ├── cleanup_mcp_timestamps.py
│   │   ├── legacy
│   │   │   └── migrate_chroma_to_sqlite.py
│   │   ├── mcp-migration.py
│   │   ├── migrate_sqlite_vec_embeddings.py
│   │   ├── migrate_storage.py
│   │   ├── migrate_tags.py
│   │   ├── migrate_timestamps.py
│   │   ├── migrate_to_cloudflare.py
│   │   ├── migrate_to_sqlite_vec.py
│   │   ├── migrate_v5_enhanced.py
│   │   ├── TIMESTAMP_CLEANUP_README.md
│   │   └── verify_mcp_timestamps.py
│   ├── pr
│   │   ├── amp_collect_results.sh
│   │   ├── amp_detect_breaking_changes.sh
│   │   ├── amp_generate_tests.sh
│   │   ├── amp_pr_review.sh
│   │   ├── amp_quality_gate.sh
│   │   ├── amp_suggest_fixes.sh
│   │   ├── auto_review.sh
│   │   ├── detect_breaking_changes.sh
│   │   ├── generate_tests.sh
│   │   ├── lib
│   │   │   └── graphql_helpers.sh
│   │   ├── quality_gate.sh
│   │   ├── resolve_threads.sh
│   │   ├── run_pyscn_analysis.sh
│   │   ├── run_quality_checks.sh
│   │   ├── thread_status.sh
│   │   └── watch_reviews.sh
│   ├── quality
│   │   ├── fix_dead_code_install.sh
│   │   ├── phase1_dead_code_analysis.md
│   │   ├── phase2_complexity_analysis.md
│   │   ├── README_PHASE1.md
│   │   ├── README_PHASE2.md
│   │   ├── track_pyscn_metrics.sh
│   │   └── weekly_quality_review.sh
│   ├── README.md
│   ├── run
│   │   ├── run_mcp_memory.sh
│   │   ├── run-with-uv.sh
│   │   └── start_sqlite_vec.sh
│   ├── run_memory_server.py
│   ├── server
│   │   ├── check_http_server.py
│   │   ├── check_server_health.py
│   │   ├── memory_offline.py
│   │   ├── preload_models.py
│   │   ├── run_http_server.py
│   │   ├── run_memory_server.py
│   │   ├── start_http_server.bat
│   │   └── start_http_server.sh
│   ├── service
│   │   ├── deploy_dual_services.sh
│   │   ├── install_http_service.sh
│   │   ├── mcp-memory-http.service
│   │   ├── mcp-memory.service
│   │   ├── memory_service_manager.sh
│   │   ├── service_control.sh
│   │   ├── service_utils.py
│   │   └── update_service.sh
│   ├── sync
│   │   ├── check_drift.py
│   │   ├── claude_sync_commands.py
│   │   ├── export_memories.py
│   │   ├── import_memories.py
│   │   ├── litestream
│   │   │   ├── apply_local_changes.sh
│   │   │   ├── enhanced_memory_store.sh
│   │   │   ├── init_staging_db.sh
│   │   │   ├── io.litestream.replication.plist
│   │   │   ├── manual_sync.sh
│   │   │   ├── memory_sync.sh
│   │   │   ├── pull_remote_changes.sh
│   │   │   ├── push_to_remote.sh
│   │   │   ├── README.md
│   │   │   ├── resolve_conflicts.sh
│   │   │   ├── setup_local_litestream.sh
│   │   │   ├── setup_remote_litestream.sh
│   │   │   ├── staging_db_init.sql
│   │   │   ├── stash_local_changes.sh
│   │   │   ├── sync_from_remote_noconfig.sh
│   │   │   └── sync_from_remote.sh
│   │   ├── README.md
│   │   ├── safe_cloudflare_update.sh
│   │   ├── sync_memory_backends.py
│   │   └── sync_now.py
│   ├── testing
│   │   ├── run_complete_test.py
│   │   ├── run_memory_test.sh
│   │   ├── simple_test.py
│   │   ├── test_cleanup_logic.py
│   │   ├── test_cloudflare_backend.py
│   │   ├── test_docker_functionality.py
│   │   ├── test_installation.py
│   │   ├── test_mdns.py
│   │   ├── test_memory_api.py
│   │   ├── test_memory_simple.py
│   │   ├── test_migration.py
│   │   ├── test_search_api.py
│   │   ├── test_sqlite_vec_embeddings.py
│   │   ├── test_sse_events.py
│   │   ├── test-connection.py
│   │   └── test-hook.js
│   ├── utils
│   │   ├── claude_commands_utils.py
│   │   ├── generate_personalized_claude_md.sh
│   │   ├── groq
│   │   ├── groq_agent_bridge.py
│   │   ├── list-collections.py
│   │   ├── memory_wrapper_uv.py
│   │   ├── query_memories.py
│   │   ├── smithery_wrapper.py
│   │   ├── test_groq_bridge.sh
│   │   └── uv_wrapper.py
│   └── validation
│       ├── check_dev_setup.py
│       ├── check_documentation_links.py
│       ├── diagnose_backend_config.py
│       ├── validate_configuration_complete.py
│       ├── validate_memories.py
│       ├── validate_migration.py
│       ├── validate_timestamp_integrity.py
│       ├── verify_environment.py
│       ├── verify_pytorch_windows.py
│       └── verify_torch.py
├── SECURITY.md
├── selective_timestamp_recovery.py
├── SPONSORS.md
├── src
│   └── mcp_memory_service
│       ├── __init__.py
│       ├── api
│       │   ├── __init__.py
│       │   ├── client.py
│       │   ├── operations.py
│       │   ├── sync_wrapper.py
│       │   └── types.py
│       ├── backup
│       │   ├── __init__.py
│       │   └── scheduler.py
│       ├── cli
│       │   ├── __init__.py
│       │   ├── ingestion.py
│       │   ├── main.py
│       │   └── utils.py
│       ├── config.py
│       ├── consolidation
│       │   ├── __init__.py
│       │   ├── associations.py
│       │   ├── base.py
│       │   ├── clustering.py
│       │   ├── compression.py
│       │   ├── consolidator.py
│       │   ├── decay.py
│       │   ├── forgetting.py
│       │   ├── health.py
│       │   └── scheduler.py
│       ├── dependency_check.py
│       ├── discovery
│       │   ├── __init__.py
│       │   ├── client.py
│       │   └── mdns_service.py
│       ├── embeddings
│       │   ├── __init__.py
│       │   └── onnx_embeddings.py
│       ├── ingestion
│       │   ├── __init__.py
│       │   ├── base.py
│       │   ├── chunker.py
│       │   ├── csv_loader.py
│       │   ├── json_loader.py
│       │   ├── pdf_loader.py
│       │   ├── registry.py
│       │   ├── semtools_loader.py
│       │   └── text_loader.py
│       ├── lm_studio_compat.py
│       ├── mcp_server.py
│       ├── models
│       │   ├── __init__.py
│       │   └── memory.py
│       ├── server.py
│       ├── services
│       │   ├── __init__.py
│       │   └── memory_service.py
│       ├── storage
│       │   ├── __init__.py
│       │   ├── base.py
│       │   ├── cloudflare.py
│       │   ├── factory.py
│       │   ├── http_client.py
│       │   ├── hybrid.py
│       │   └── sqlite_vec.py
│       ├── sync
│       │   ├── __init__.py
│       │   ├── exporter.py
│       │   ├── importer.py
│       │   └── litestream_config.py
│       ├── utils
│       │   ├── __init__.py
│       │   ├── cache_manager.py
│       │   ├── content_splitter.py
│       │   ├── db_utils.py
│       │   ├── debug.py
│       │   ├── document_processing.py
│       │   ├── gpu_detection.py
│       │   ├── hashing.py
│       │   ├── http_server_manager.py
│       │   ├── port_detection.py
│       │   ├── system_detection.py
│       │   └── time_parser.py
│       └── web
│           ├── __init__.py
│           ├── api
│           │   ├── __init__.py
│           │   ├── analytics.py
│           │   ├── backup.py
│           │   ├── consolidation.py
│           │   ├── documents.py
│           │   ├── events.py
│           │   ├── health.py
│           │   ├── manage.py
│           │   ├── mcp.py
│           │   ├── memories.py
│           │   ├── search.py
│           │   └── sync.py
│           ├── app.py
│           ├── dependencies.py
│           ├── oauth
│           │   ├── __init__.py
│           │   ├── authorization.py
│           │   ├── discovery.py
│           │   ├── middleware.py
│           │   ├── models.py
│           │   ├── registration.py
│           │   └── storage.py
│           ├── sse.py
│           └── static
│               ├── app.js
│               ├── index.html
│               ├── README.md
│               ├── sse_test.html
│               └── style.css
├── start_http_debug.bat
├── start_http_server.sh
├── test_document.txt
├── test_version_checker.js
├── tests
│   ├── __init__.py
│   ├── api
│   │   ├── __init__.py
│   │   ├── test_compact_types.py
│   │   └── test_operations.py
│   ├── bridge
│   │   ├── mock_responses.js
│   │   ├── package-lock.json
│   │   ├── package.json
│   │   └── test_http_mcp_bridge.js
│   ├── conftest.py
│   ├── consolidation
│   │   ├── __init__.py
│   │   ├── conftest.py
│   │   ├── test_associations.py
│   │   ├── test_clustering.py
│   │   ├── test_compression.py
│   │   ├── test_consolidator.py
│   │   ├── test_decay.py
│   │   └── test_forgetting.py
│   ├── contracts
│   │   └── api-specification.yml
│   ├── integration
│   │   ├── package-lock.json
│   │   ├── package.json
│   │   ├── test_api_key_fallback.py
│   │   ├── test_api_memories_chronological.py
│   │   ├── test_api_tag_time_search.py
│   │   ├── test_api_with_memory_service.py
│   │   ├── test_bridge_integration.js
│   │   ├── test_cli_interfaces.py
│   │   ├── test_cloudflare_connection.py
│   │   ├── test_concurrent_clients.py
│   │   ├── test_data_serialization_consistency.py
│   │   ├── test_http_server_startup.py
│   │   ├── test_mcp_memory.py
│   │   ├── test_mdns_integration.py
│   │   ├── test_oauth_basic_auth.py
│   │   ├── test_oauth_flow.py
│   │   ├── test_server_handlers.py
│   │   └── test_store_memory.py
│   ├── performance
│   │   ├── test_background_sync.py
│   │   └── test_hybrid_live.py
│   ├── README.md
│   ├── smithery
│   │   └── test_smithery.py
│   ├── sqlite
│   │   └── simple_sqlite_vec_test.py
│   ├── test_client.py
│   ├── test_content_splitting.py
│   ├── test_database.py
│   ├── test_hybrid_cloudflare_limits.py
│   ├── test_hybrid_storage.py
│   ├── test_memory_ops.py
│   ├── test_semantic_search.py
│   ├── test_sqlite_vec_storage.py
│   ├── test_time_parser.py
│   ├── test_timestamp_preservation.py
│   ├── timestamp
│   │   ├── test_hook_vs_manual_storage.py
│   │   ├── test_issue99_final_validation.py
│   │   ├── test_search_retrieval_inconsistency.py
│   │   ├── test_timestamp_issue.py
│   │   └── test_timestamp_simple.py
│   └── unit
│       ├── conftest.py
│       ├── test_cloudflare_storage.py
│       ├── test_csv_loader.py
│       ├── test_fastapi_dependencies.py
│       ├── test_import.py
│       ├── test_json_loader.py
│       ├── test_mdns_simple.py
│       ├── test_mdns.py
│       ├── test_memory_service.py
│       ├── test_memory.py
│       ├── test_semtools_loader.py
│       ├── test_storage_interface_compatibility.py
│       └── test_tag_time_filtering.py
├── tools
│   ├── docker
│   │   ├── DEPRECATED.md
│   │   ├── docker-compose.http.yml
│   │   ├── docker-compose.pythonpath.yml
│   │   ├── docker-compose.standalone.yml
│   │   ├── docker-compose.uv.yml
│   │   ├── docker-compose.yml
│   │   ├── docker-entrypoint-persistent.sh
│   │   ├── docker-entrypoint-unified.sh
│   │   ├── docker-entrypoint.sh
│   │   ├── Dockerfile
│   │   ├── Dockerfile.glama
│   │   ├── Dockerfile.slim
│   │   ├── README.md
│   │   └── test-docker-modes.sh
│   └── README.md
└── uv.lock
```

# Files

--------------------------------------------------------------------------------
/scripts/pr/amp_suggest_fixes.sh:
--------------------------------------------------------------------------------

```bash
#!/bin/bash
# scripts/pr/amp_suggest_fixes.sh - Generate fix suggestions using Amp CLI
#
# Usage: bash scripts/pr/amp_suggest_fixes.sh <PR_NUMBER>
# Example: bash scripts/pr/amp_suggest_fixes.sh 215

set -e

PR_NUMBER=$1

if [ -z "$PR_NUMBER" ]; then
    echo "Usage: $0 <PR_NUMBER>"
    exit 1
fi

if ! command -v gh &> /dev/null; then
    echo "Error: GitHub CLI (gh) is not installed"
    exit 1
fi

echo "=== Amp CLI Fix Suggestions for PR #$PR_NUMBER ==="
echo ""

# Ensure Amp directories exist
mkdir -p .claude/amp/prompts/pending
mkdir -p .claude/amp/responses/ready

# Get repository
REPO=$(gh repo view --json nameWithOwner -q .nameWithOwner 2>/dev/null || echo "doobidoo/mcp-memory-service")

# Fetch review comments
echo "Fetching review comments from PR #$PR_NUMBER..."
review_comments=$(gh api "repos/$REPO/pulls/$PR_NUMBER/comments" | \
    jq -r '[.[] | select(.user.login | test("bot|gemini|claude"))] | .[] | "- \(.path):\(.line) - \(.body[0:200])"' | \
    head -50)

if [ -z "$review_comments" ]; then
    echo "No review comments found."
    exit 0
fi

echo "Review Comments:"
echo "$review_comments"
echo ""

# Get PR diff
echo "Fetching PR diff..."
pr_diff=$(gh pr diff "$PR_NUMBER" | head -500)  # Limit to 500 lines to avoid token overflow

# Generate UUID for fix suggestions task
fixes_uuid=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)

echo "Creating Amp prompt for fix suggestions..."

# Create fix suggestions prompt. Built with jq (already required above) so the
# multi-line review comments and diff are JSON-escaped; a plain heredoc would
# embed raw newlines inside the "prompt" string and produce invalid JSON.
jq -n \
    --arg id "${fixes_uuid}" \
    --arg ts "$(date -u +"%Y-%m-%dT%H:%M:%S.000Z")" \
    --arg pr "${PR_NUMBER}" \
    --arg prompt "Analyze these code review comments and suggest specific fixes. DO NOT auto-apply changes. Output format: For each issue, provide: 1) File path, 2) Issue description, 3) Suggested fix (code snippet or explanation), 4) Rationale. Focus on safe, non-breaking changes (formatting, type hints, error handling, variable naming, import organization).

Review comments:
${review_comments}

PR diff (current code):
${pr_diff}

Provide actionable fix suggestions in markdown format." \
    '{
        id: $id,
        timestamp: $ts,
        prompt: $prompt,
        context: {project: "mcp-memory-service", task: "fix-suggestions", pr_number: $pr},
        options: {timeout: 180000, format: "markdown"}
    }' > ".claude/amp/prompts/pending/fixes-${fixes_uuid}.json"

echo "✅ Created Amp prompt for fix suggestions"
echo ""
echo "=== Run this Amp command ==="
echo "amp @.claude/amp/prompts/pending/fixes-${fixes_uuid}.json"
echo ""
echo "=== Then collect the suggestions ==="
echo "bash scripts/pr/amp_collect_results.sh --timeout 180 --uuids '${fixes_uuid}'"
echo ""

# Save UUID for later collection
echo "${fixes_uuid}" > /tmp/amp_fix_suggestions_uuid_${PR_NUMBER}.txt

echo "UUID saved to /tmp/amp_fix_suggestions_uuid_${PR_NUMBER}.txt for result collection"

```

--------------------------------------------------------------------------------
/docs/mastery/troubleshooting.md:
--------------------------------------------------------------------------------

```markdown
# MCP Memory Service — Troubleshooting Guide

Common issues and proven fixes when running locally or in CI.

## sqlite-vec Extension Loading Fails

Symptoms:

- Errors like: `SQLite extension loading not supported` or `enable_load_extension not available`.
- `Failed to load sqlite-vec extension`.

Causes:

- Python’s `sqlite3` not compiled with loadable extensions (the macOS system Python is a common culprit).

Fixes:

- macOS:
  - `brew install python` and use Homebrew Python.
  - Or install via pyenv with extensions: `PYTHON_CONFIGURE_OPTS='--enable-loadable-sqlite-extensions' pyenv install 3.12.x`.
- Linux:
  - Install dev headers: `apt install python3-dev sqlite3` and ensure Python was built with `--enable-loadable-sqlite-extensions`.
- Windows:
  - Prefer official python.org installer or conda distribution.
- Alternative: switch backend: `export MCP_MEMORY_STORAGE_BACKEND=chromadb` (see migration notes).
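
A quick way to check whether your interpreter supports loadable extensions (a one-off diagnostic, not part of the service):

```bash
# Prints True if this Python build can load SQLite extensions
python3 -c "import sqlite3; c = sqlite3.connect(':memory:'); print(hasattr(c, 'enable_load_extension'))"
```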

## `sentence-transformers`/`torch` Not Available

Symptoms:

- Warnings about no embedding model; semantic search returns empty.

Fixes:

- Install ML deps: `pip install sentence-transformers torch` (or `uv add` equivalents).
- In constrained environments, tag-based and metadata operations work without embeddings; semantic search becomes available once the ML dependencies are installed.

## First-Run Model Downloads

Symptoms:

- Warnings like: `Using TRANSFORMERS_CACHE is deprecated` or `No snapshots directory`.

Status:

- Expected on first run while downloading `all-MiniLM-L6-v2` (~25MB). Subsequent runs use cache.
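
To avoid the first-run delay entirely, warm the cache ahead of time with the preloader shipped in this repository (see `scripts/server/preload_models.py`, dumped later on this page):

```bash
python scripts/server/preload_models.py
```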

## Cloudflare Backend Fails on Boot

Symptoms:

- Immediate exit with `Missing required environment variables for Cloudflare backend`.

Fixes:

- Set all required envs: `CLOUDFLARE_API_TOKEN`, `CLOUDFLARE_ACCOUNT_ID`, `CLOUDFLARE_VECTORIZE_INDEX`, `CLOUDFLARE_D1_DATABASE_ID`. Optional: `CLOUDFLARE_R2_BUCKET`.
- Validate resources via Wrangler or dashboard; see `docs/cloudflare-setup.md`.
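
For example, with placeholder values (see `docs/cloudflare-setup.md` for obtaining the real ones):

```bash
export CLOUDFLARE_API_TOKEN="<api-token>"
export CLOUDFLARE_ACCOUNT_ID="<account-id>"
export CLOUDFLARE_VECTORIZE_INDEX="<vectorize-index-name>"
export CLOUDFLARE_D1_DATABASE_ID="<d1-database-id>"
export CLOUDFLARE_R2_BUCKET="<r2-bucket>"  # optional
```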

## Port/Coordination Conflicts

Symptoms:

- Multi-client mode cannot start HTTP server, or falls back to direct mode.

Status/Fixes:

- The server auto-detects its mode: `http_client` (connect to an existing server), `http_server` (start one), else `direct` (WAL). If the coordination port is held by another service, expect the direct fallback; change the port or stop the conflicting service.
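
To check whether a coordination HTTP server is already answering, the bundled checker can be used (see `scripts/server/check_http_server.py`, dumped later on this page):

```bash
# Exit code 0 means a server responded on the configured port
python scripts/server/check_http_server.py --quiet && echo "HTTP server detected"
```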

## File Permission or Path Errors

Symptoms:

- Path write tests failing under `BASE_DIR` or backup directories.

Fixes:

- Ensure `MCP_MEMORY_BASE_DIR` points to a writable location; the service validates and creates directories and test-writes `.write_test` files with retries.
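
For example (the path below is illustrative; any writable directory works):

```bash
export MCP_MEMORY_BASE_DIR="$HOME/.local/share/mcp-memory"
```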

## Slow Queries or High CPU

Checklist:

- Ensure embeddings are available and model loaded once (warmup).
- For low RAM or Windows CUDA:
  - `PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128`
  - Reduce model cache sizes; see `configure_environment()` in `server.py`.
- Tune SQLite pragmas via `MCP_MEMORY_SQLITE_PRAGMAS`.
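
The right pragma values are workload-dependent; as an illustration (assuming a comma-separated `key=value` format for the variable):

```bash
export MCP_MEMORY_SQLITE_PRAGMAS="busy_timeout=15000,cache_size=20000"
```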


```

--------------------------------------------------------------------------------
/scripts/server/check_http_server.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python3
"""
Check if the MCP Memory Service HTTP server is running.

This script checks if the HTTP server is accessible and provides
helpful feedback to users about how to start it if it's not running.
"""

import sys
import os
from urllib.request import urlopen, Request
from urllib.error import URLError, HTTPError
import json
import ssl


def check_http_server(verbose: bool = False) -> bool:
    """
    Check if the HTTP server is running.

    Args:
        verbose: If True, print detailed status messages

    Returns:
        bool: True if server is running, False otherwise
    """
    # Determine the endpoint from environment
    https_enabled = os.getenv('MCP_HTTPS_ENABLED', 'false').lower() == 'true'
    http_port = int(os.getenv('MCP_HTTP_PORT', '8000'))
    https_port = int(os.getenv('MCP_HTTPS_PORT', '8443'))

    if https_enabled:
        endpoint = f"https://localhost:{https_port}/api/health"
    else:
        endpoint = f"http://localhost:{http_port}/api/health"

    try:
        # Create SSL context that doesn't verify certificates (for self-signed certs)
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE

        req = Request(endpoint)
        with urlopen(req, timeout=3, context=ctx) as response:
            if response.status == 200:
                data = json.loads(response.read().decode('utf-8'))
                if verbose:
                    print("[OK] HTTP server is running")
                    print(f"   Version: {data.get('version', 'unknown')}")
                    print(f"   Endpoint: {endpoint}")
                    print(f"   Status: {data.get('status', 'unknown')}")
                return True
            else:
                if verbose:
                    print(f"[WARN] HTTP server responded with status {response.status}")
                return False
    except (URLError, HTTPError, json.JSONDecodeError) as e:
        if verbose:
            print("[ERROR] HTTP server is NOT running")
            print(f"\nTo start the HTTP server, run:")
            print(f"   uv run python scripts/server/run_http_server.py")
            print(f"\n   Or for HTTPS:")
            print(f"   MCP_HTTPS_ENABLED=true uv run python scripts/server/run_http_server.py")
            print(f"\nError: {str(e)}")
        return False


def main():
    """Main entry point for CLI usage."""
    import argparse

    parser = argparse.ArgumentParser(
        description="Check if MCP Memory Service HTTP server is running"
    )
    parser.add_argument(
        "-q", "--quiet",
        action="store_true",
        help="Only return exit code (0=running, 1=not running), no output."
    )

    args = parser.parse_args()

    is_running = check_http_server(verbose=not args.quiet)
    sys.exit(0 if is_running else 1)


if __name__ == "__main__":
    main()

```

--------------------------------------------------------------------------------
/claude_commands/memory-ingest.md:
--------------------------------------------------------------------------------

```markdown
# memory-ingest

Ingest a document file into the MCP Memory Service database.

## Usage

```
claude /memory-ingest <file_path> [--tags TAG1,TAG2] [--chunk-size SIZE] [--chunk-overlap OVERLAP] [--memory-type TYPE]
```

## Parameters

- `file_path`: Path to the document file to ingest (required)
- `--tags`: Comma-separated list of tags to apply to all memories created from this document
- `--chunk-size`: Target size for text chunks in characters (default: 1000)
- `--chunk-overlap`: Characters to overlap between chunks (default: 200)
- `--memory-type`: Type label for created memories (default: "document")

## Supported Formats

- PDF files (.pdf)
- Text files (.txt, .md, .markdown, .rst)
- JSON files (.json)

## Implementation

I need to upload the document to the MCP Memory Service HTTP API endpoint and monitor the progress.

First, let me check if the service is running and get the correct endpoint:

```bash
# Check if the service is running on the default port
curl -s http://localhost:8080/api/health || echo "Service not running on 8080"

# Or check common alternative ports (-k accepts self-signed HTTPS certificates)
curl -s http://localhost:8000/api/health || echo "Service not running on 8000"
curl -sk https://localhost:8443/api/health || echo "Service not running on 8443"
```

Assuming the service is running (adjust the URL as needed), I'll upload the document:

```bash
# Upload the document with specified parameters
curl -X POST "http://localhost:8080/api/documents/upload" \\
  -F "file=@$FILE_PATH" \\
  -F "tags=$TAGS" \\
  -F "chunk_size=$CHUNK_SIZE" \\
  -F "chunk_overlap=$CHUNK_OVERLAP" \\
  -F "memory_type=$MEMORY_TYPE" \\
  
```

Then I'll monitor the upload progress:

```bash
# Monitor progress (replace UPLOAD_ID with the ID from the upload response)
curl -s "http://localhost:8080/api/documents/status/UPLOAD_ID"
```
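
A simple polling sketch until the response shows a terminal state (the loop just reprints the raw status JSON; the response fields are backend-defined):

```bash
# Poll the status endpoint every 2 seconds (Ctrl-C to stop)
while sleep 2; do
  curl -s "http://localhost:8080/api/documents/status/UPLOAD_ID"
  echo ""
done
```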

## Examples

```
# Ingest a PDF with tags
claude /memory-ingest manual.pdf --tags documentation,reference

# Ingest a markdown file with custom chunking
claude /memory-ingest README.md --chunk-size 1500 --chunk-overlap 300 --tags project,readme

# Ingest a document as reference material
claude /memory-ingest api-docs.json --tags api,reference --memory-type reference
```

## Actual Execution Steps

When you run this command, I will:

1. **Validate the file exists** and check if it's a supported format
2. **Determine the service endpoint** (try localhost:8080, then 8000; use HTTPS for 8443)
3. **Upload the file** using the documents API endpoint with your specified parameters
4. **Monitor progress** and show real-time updates
5. **Report results** including chunks created and any errors

The document will be automatically parsed, chunked, and stored as searchable memories in your MCP Memory Service database.

## Notes

- The document will be automatically parsed and chunked for optimal retrieval
- Each chunk becomes a separate memory entry with semantic embeddings
- Progress will be displayed during ingestion
- Failed chunks will be reported but won't stop the overall process

```

--------------------------------------------------------------------------------
/archive/docs-removed-2025-08-23/development/mcp-milestone.md:
--------------------------------------------------------------------------------

```markdown
# MCP Memory Service v4.0.0-beta.1 - Major Milestone Achievement

**Date**: August 4, 2025  
**Status**: 🚀 **Mission Accomplished**

## Project Evolution Complete

Successfully transitioned MCP Memory Service from experimental local-only service to **production-ready remote memory infrastructure** with native MCP protocol support.

## Technical Achievements

### 1. Release Management ✅
- **v4.0.0-beta.1** beta release completed
- Fixed Docker CI/CD workflows (main.yml and publish-and-test.yml)
- GitHub Release created with comprehensive notes
- Repository cleanup (3 obsolete branches removed)

### 2. GitHub Issues Resolved ✅
- **Issue #71**: Remote Memory Service access - **FULLY RESOLVED** via FastAPI MCP integration
- **Issue #72**: Node.js Bridge SSL issues - **SUPERSEDED** (bridge deprecated in favor of native protocol)

### 3. MCP Protocol Compliance ✅
Applied critical refactorings from fellow AI Coder:
- **Flexible ID Validation**: `Optional[Union[str, int]]` supporting both string and integer IDs
- **Dual Route Handling**: Both `/mcp` and `/mcp/` endpoints to prevent 307 redirects
- **Content Hash Generation**: Proper `generate_content_hash()` implementation

### 4. Infrastructure Deployment ✅
- **Remote Server**: Successfully deployed at `your-server-ip:8000`
- **Backend**: SQLite-vec (1.7MB database, 384-dimensional embeddings)
- **Model**: all-MiniLM-L6-v2 loaded and operational
- **Existing Data**: 65 memories already stored
- **API Coverage**: Full MCP protocol + REST API + Dashboard

## Strategic Impact

This represents the **successful completion of architectural evolution** from:
- ❌ Local-only experimental service
- ✅ Production-ready remote memory infrastructure

**Key Benefits Achieved**:
1. **Cross-Device Access**: Claude Code can connect from any device
2. **Protocol Compliance**: Standard MCP JSON-RPC 2.0 implementation
3. **Scalable Architecture**: Dual-service design (HTTP + MCP)
4. **Robust CI/CD**: Automated testing and deployment pipeline

## Verification

**MCP Protocol Test Results**:
```bash
# Health check successful
curl -X POST http://your-server-ip:8000/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"check_database_health"}}'

# Response: {"status":"healthy","statistics":{"total_memories":65,"embedding_model":"all-MiniLM-L6-v2"}}
```

**Available Endpoints**:
- 🔧 **MCP Protocol**: `http://your-server-ip:8000/mcp`
- 📊 **Dashboard**: `http://your-server-ip:8000/`  
- 📚 **API Docs**: `http://your-server-ip:8000/api/docs`

## Next Steps

- Monitor beta feedback for v4.0.0 stable release
- Continue remote memory service operation
- Support Claude Code integrations across devices

---

**This milestone marks the successful transformation of MCP Memory Service into a fully operational, remotely accessible, protocol-compliant memory infrastructure ready for production use.** 🎉
```

--------------------------------------------------------------------------------
/scripts/maintenance/memory-types.md:
--------------------------------------------------------------------------------

```markdown
# Memory Type Taxonomy (Updated Nov 2025)

Database consolidated from 342 fragmented types to 128 organized types. Use these **24 core types** for all new memories.

## Content Types
- `note` - General notes, observations, summaries
- `reference` - Reference materials, knowledge base entries
- `document` - Formal documents, code snippets
- `guide` - How-to guides, tutorials, troubleshooting guides

## Activity Types
- `session` - Work sessions, development sessions
- `implementation` - Implementation work, integrations
- `analysis` - Analysis, reports, investigations
- `troubleshooting` - Problem-solving, debugging
- `test` - Testing activities, validation

## Artifact Types
- `fix` - Bug fixes, corrections
- `feature` - New features, enhancements
- `release` - Releases, release notes
- `deployment` - Deployments, deployment records

## Progress Types
- `milestone` - Milestones, completions, achievements
- `status` - Status updates, progress reports

## Infrastructure Types
- `configuration` - Configurations, setups, settings
- `infrastructure` - Infrastructure changes, system updates
- `process` - Processes, workflows, procedures
- `security` - Security-related memories
- `architecture` - Architecture decisions, design patterns

## Other Types
- `documentation` - Documentation artifacts
- `solution` - Solutions, resolutions
- `achievement` - Accomplishments, successes

## Usage Guidelines

### Avoid Creating New Type Variations

**DON'T** create variations like:
- `bug-fix`, `bugfix`, `technical-fix` → Use `fix`
- `technical-solution`, `project-solution` → Use `solution`
- `project-implementation` → Use `implementation`
- `technical-note` → Use `note`

### Avoid Redundant Prefixes

Remove unnecessary qualifiers:
- `project-*` → Use base type
- `technical-*` → Use base type
- `development-*` → Use base type

### Cleanup Commands

```bash
# Preview type consolidation
python scripts/maintenance/consolidate_memory_types.py --dry-run

# Execute type consolidation
python scripts/maintenance/consolidate_memory_types.py

# Check type distribution
python scripts/maintenance/check_memory_types.py

# Assign types to untyped memories
python scripts/maintenance/assign_memory_types.py --dry-run
python scripts/maintenance/assign_memory_types.py
```

## Consolidation Rules

The consolidation script applies these transformations:

1. **Fix variants** → `fix`: bug-fix, bugfix, technical-fix, etc.
2. **Implementation variants** → `implementation`: integrations, project-implementation, etc.
3. **Solution variants** → `solution`: technical-solution, project-solution, etc.
4. **Note variants** → `note`: technical-note, development-note, etc.
5. **Remove redundant prefixes**: project-, technical-, development-
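
To inspect the exact mapping table the script applies, pretty-print it with the stdlib JSON tool:

```bash
python -m json.tool scripts/maintenance/consolidation_mappings.json
```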

## Benefits of Standardization

- Improved search and retrieval accuracy
- Better tag-based filtering
- Reduced database fragmentation
- Easier memory type analytics
- Consistent memory organization

```

--------------------------------------------------------------------------------
/tools/docker/test-docker-modes.sh:
--------------------------------------------------------------------------------

```bash
#!/bin/bash
# Test script to verify both Docker modes work correctly

set -e

echo "==================================="
echo "Docker Setup Test Script"
echo "==================================="

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

# Function to print colored output
print_status() {
    if [ $1 -eq 0 ]; then
        echo -e "${GREEN}✓${NC} $2"
    else
        echo -e "${RED}✗${NC} $2"
        return 1
    fi
}

# Change to docker directory
cd "$(dirname "$0")"

echo ""
echo "1. Building Docker image..."
docker-compose build --quiet
print_status $? "Docker image built successfully"

echo ""
echo "2. Testing MCP Protocol Mode..."
echo "   Starting container in MCP mode..."
docker-compose up -d
sleep 5

# Check if container is running
docker-compose ps | grep -q "Up"
print_status $? "MCP mode container is running"

# Check logs for correct mode
docker-compose logs 2>&1 | grep -q "Running in mcp mode"
print_status $? "Container started in MCP mode"

# Stop MCP mode
docker-compose down
echo ""

echo "3. Testing HTTP API Mode..."
echo "   Starting container in HTTP mode..."
docker-compose -f docker-compose.http.yml up -d
sleep 10

# Check if container is running
docker-compose -f docker-compose.http.yml ps | grep -q "Up"
print_status $? "HTTP mode container is running"

# Check logs for Uvicorn
docker-compose -f docker-compose.http.yml logs 2>&1 | grep -q "Uvicorn\|FastAPI\|HTTP"
print_status $? "HTTP server started (Uvicorn/FastAPI)"

# Test health endpoint
HTTP_RESPONSE=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:8000/api/health 2>/dev/null || echo "000")
if [ "$HTTP_RESPONSE" = "200" ]; then
    print_status 0 "Health endpoint responding (HTTP $HTTP_RESPONSE)"
else
    print_status 1 "Health endpoint not responding (HTTP $HTTP_RESPONSE)"
fi

# Test with API key
API_TEST=$(curl -s -X POST http://localhost:8000/api/memories \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-secure-api-key-here" \
  -d '{"content": "Docker test memory", "tags": ["test"]}' 2>/dev/null | grep -q "success\|unauthorized" && echo "ok" || echo "fail")

if [ "$API_TEST" = "ok" ]; then
    print_status 0 "API endpoint accessible"
else
    print_status 1 "API endpoint not accessible"
fi

# Stop HTTP mode
docker-compose -f docker-compose.http.yml down

echo ""
echo "==================================="
echo "Test Summary:"
echo "==================================="
echo -e "${GREEN}✓${NC} All critical fixes from Joe applied:"
echo "  - PYTHONPATH=/app/src"
echo "  - run_server.py copied"
echo "  - Embedding models pre-downloaded"
echo ""
echo -e "${GREEN}✓${NC} Simplified Docker structure:"
echo "  - Unified entrypoint for both modes"
echo "  - Clear MCP vs HTTP separation"
echo "  - Single Dockerfile for all modes"
echo ""
echo -e "${YELLOW}Note:${NC} Deprecated files marked in DEPRECATED.md"
echo ""
echo "Docker setup is ready for use!"
```

--------------------------------------------------------------------------------
/scripts/server/preload_models.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python3
"""
Pre-load sentence-transformers models to avoid startup delays.

This script downloads and caches the default embedding models used by
MCP Memory Service so they don't need to be downloaded during server startup,
which can cause timeout errors in Claude Desktop.
"""

import sys
import os

def preload_sentence_transformers():
    """Pre-load the default sentence-transformers model."""
    try:
        print("[INFO] Pre-loading sentence-transformers models...")
        from sentence_transformers import SentenceTransformer
        
        # Default model used by the memory service
        model_name = "all-MiniLM-L6-v2"
        print(f"[INFO] Downloading and caching model: {model_name}")
        
        model = SentenceTransformer(model_name)
        print(f"[OK] Model loaded successfully on device: {model.device}")
        
        # Test the model to ensure it works
        print("[INFO] Testing model functionality...")
        test_text = "This is a test sentence for embedding."
        embedding = model.encode(test_text)
        print(f"[OK] Model test successful - embedding shape: {embedding.shape}")
        
        return True
        
    except ImportError:
        print("[WARNING] sentence-transformers not available - skipping model preload")
        return False
    except Exception as e:
        print(f"[ERROR] Error preloading model: {e}")
        return False

def check_cache_status():
    """Check if models are already cached."""
    cache_locations = [
        os.path.expanduser("~/.cache/huggingface/hub"),
        os.path.expanduser("~/.cache/torch/sentence_transformers"),
    ]
    
    for cache_dir in cache_locations:
        if os.path.exists(cache_dir):
            try:
                contents = os.listdir(cache_dir)
                for item in contents:
                    if 'sentence-transformers' in item.lower() or 'minilm' in item.lower():
                        print(f"[OK] Found cached model: {item}")
                        return True
            except (OSError, PermissionError):
                continue
    
    print("[INFO] No cached models found")
    return False

def main():
    print("MCP Memory Service - Model Preloader")
    print("=" * 50)
    
    # Check current cache status
    print("\n[1] Checking cache status...")
    models_cached = check_cache_status()
    
    if models_cached:
        print("[OK] Models are already cached - no download needed")
        return True
    
    # Preload models
    print("\n[2] Preloading models...")
    success = preload_sentence_transformers()
    
    if success:
        print("\n[OK] Model preloading complete!")
        print("[INFO] MCP Memory Service should now start without downloading models")
    else:
        print("\n[WARNING] Model preloading failed - server may need to download during startup")
        
    return success

if __name__ == "__main__":
    success = main()
    sys.exit(0 if success else 1)
```

--------------------------------------------------------------------------------
/docs/mastery/testing-guide.md:
--------------------------------------------------------------------------------

```markdown
# MCP Memory Service — Testing Guide

This guide explains how to run, understand, and extend the test suites.

## Prerequisites

- Python ≥ 3.10 (3.12 recommended; 3.13 may lack prebuilt `sqlite-vec` wheels).
- Install dependencies (uv recommended):
  - `uv sync` (respects `pyproject.toml` and `uv.lock`), or
  - `pip install -e .` plus extras as needed.
- For SQLite-vec tests:
  - `sqlite-vec` and `sentence-transformers`/`torch` should be installed.
  - On some OS/Python combinations, sqlite extension loading must be supported (see Troubleshooting).

## Test Layout

- `tests/README.md`: overview.
- Categories:
  - Unit: `tests/unit/` (e.g., tags, mdns, cloudflare stubs).
  - Integration: `tests/integration/` (cross-component flows).
  - Performance: `tests/performance/`.
  - Backend-specific: top-level tests like `test_sqlite_vec_storage.py`, `test_time_parser.py`, `test_memory_ops.py`.

## Running Tests

Run all:

```
pytest
```

Category:

```
pytest tests/unit/
pytest tests/integration/
pytest tests/performance/
```

Single file or test:

```
pytest tests/test_sqlite_vec_storage.py::TestSqliteVecStorage::test_store_memory -q
```

With uv:

```
uv run pytest -q
```

## Important Behaviors and Skips

- SQLite-vec tests are marked to skip when `sqlite-vec` is unavailable:
  - See `pytestmark = pytest.mark.skipif(not SQLITE_VEC_AVAILABLE, ...)` in `tests/test_sqlite_vec_storage.py`.
- Some tests simulate no-embedding scenarios by patching `SENTENCE_TRANSFORMERS_AVAILABLE=False` to validate fallback code paths.
- Temp directories isolate database files; connections are closed in teardown.
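
To see which tests were skipped and why (helpful when `sqlite-vec` or the ML dependencies are unavailable):

```
pytest -rs
```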

## Coverage of Key Areas

- Storage CRUD and vector search (`test_sqlite_vec_storage.py`).
- Time parsing and timestamp recall (`test_time_parser.py`, `test_timestamp_recall.py`).
- Tag and metadata semantics (`test_tag_storage.py`, `unit/test_tags.py`).
- Health checks and database init (`test_database.py`).
- Cloudflare adapters have unit-level coverage stubbing network (`unit/test_cloudflare_storage.py`).

## Writing New Tests

- Prefer async `pytest.mark.asyncio` for storage APIs.
- Use `tempfile.mkdtemp()` for per-test DB paths.
- Use `src.mcp_memory_service.models.memory.Memory` and `generate_content_hash` helpers.
- For backend-specific behavior, keep tests colocated with backend tests and gate with availability flags.
- For MCP tool surface tests, prefer FastMCP server (`mcp_server.py`) in isolated processes or with `lifespan` context.

## Local MCP/Service Tests

Run stdio server:

```
uv run memory server
```

Run FastMCP HTTP server:

```
uv run mcp-memory-server
```

Use any MCP client (Claude Desktop/Code) and exercise tools: store, retrieve, search_by_tag, delete, health.
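
For a scripted smoke test against the HTTP MCP endpoint, the JSON-RPC health call shown elsewhere in this repo works well (adjust host and port to your setup):

```
curl -X POST http://localhost:8000/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"check_database_health"}}'
```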

## Debugging and Logs

- Set `LOG_LEVEL=INFO` for more verbosity.
- For Claude Desktop: stdout is suppressed to preserve JSON; inspect stderr/warnings. LM Studio prints diagnostics to stdout.
- Common sqlite-vec errors print actionable remediation (see Troubleshooting).


```

--------------------------------------------------------------------------------
/scripts/service/install_http_service.sh:
--------------------------------------------------------------------------------

```bash
#!/bin/bash
# Install MCP Memory HTTP Service for systemd

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
SERVICE_FILE="$SCRIPT_DIR/mcp-memory-http.service"
SERVICE_NAME="mcp-memory-http.service"

echo "MCP Memory HTTP Service Installation"
echo "===================================="
echo ""

# Check if service file exists
if [ ! -f "$SERVICE_FILE" ]; then
    echo "❌ Service file not found: $SERVICE_FILE"
    exit 1
fi

# Check if .env exists
if [ ! -f "$PROJECT_DIR/.env" ]; then
    echo "❌ .env file not found: $PROJECT_DIR/.env"
    echo "Please create .env file with your configuration"
    exit 1
fi

# Check if venv exists
if [ ! -d "$PROJECT_DIR/venv" ]; then
    echo "❌ Virtual environment not found: $PROJECT_DIR/venv"
    echo "Please run: python -m venv venv && source venv/bin/activate && pip install -e ."
    exit 1
fi

# Install as user service (recommended) or system service
echo "Installation Options:"
echo "1. User service (recommended) - runs as your user, no sudo needed"
echo "2. System service - runs at boot, requires sudo"
read -p "Select [1/2]: " choice

case $choice in
    1)
        # User service
        SERVICE_DIR="$HOME/.config/systemd/user"
        mkdir -p "$SERVICE_DIR"

        echo "Installing user service to: $SERVICE_DIR/$SERVICE_NAME"
        cp "$SERVICE_FILE" "$SERVICE_DIR/$SERVICE_NAME"

        # Reload systemd
        systemctl --user daemon-reload

        echo ""
        echo "✅ Service installed successfully!"
        echo ""
        echo "To start the service:"
        echo "  systemctl --user start $SERVICE_NAME"
        echo ""
        echo "To enable auto-start on login:"
        echo "  systemctl --user enable $SERVICE_NAME"
        echo "  loginctl enable-linger $USER  # Required for auto-start"
        echo ""
        echo "To check status:"
        echo "  systemctl --user status $SERVICE_NAME"
        echo ""
        echo "To view logs:"
        echo "  journalctl --user -u $SERVICE_NAME -f"
        ;;

    2)
        # System service
        if [ "$EUID" -ne 0 ]; then
            echo "❌ System service installation requires sudo"
            echo "Please run: sudo $0"
            exit 1
        fi

        SERVICE_DIR="/etc/systemd/system"
        echo "Installing system service to: $SERVICE_DIR/$SERVICE_NAME"
        cp "$SERVICE_FILE" "$SERVICE_DIR/$SERVICE_NAME"

        # Reload systemd
        systemctl daemon-reload

        echo ""
        echo "✅ Service installed successfully!"
        echo ""
        echo "To start the service:"
        echo "  sudo systemctl start $SERVICE_NAME"
        echo ""
        echo "To enable auto-start on boot:"
        echo "  sudo systemctl enable $SERVICE_NAME"
        echo ""
        echo "To check status:"
        echo "  sudo systemctl status $SERVICE_NAME"
        echo ""
        echo "To view logs:"
        echo "  sudo journalctl -u $SERVICE_NAME -f"
        ;;

    *)
        echo "❌ Invalid choice"
        exit 1
        ;;
esac

```

--------------------------------------------------------------------------------
/.github/pull_request_template.md:
--------------------------------------------------------------------------------

```markdown
# Pull Request

## Description

<!-- Provide a clear and concise description of the changes -->

## Motivation

<!-- Explain why these changes are needed and what problem they solve -->

## Type of Change

<!-- Check all that apply -->

- [ ] 🐛 Bug fix (non-breaking change that fixes an issue)
- [ ] ✨ New feature (non-breaking change that adds functionality)
- [ ] 💥 Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] 📝 Documentation update
- [ ] 🧪 Test improvement
- [ ] ♻️ Code refactoring (no functional changes)
- [ ] ⚡ Performance improvement
- [ ] 🔧 Configuration change
- [ ] 🎨 UI/UX improvement

## Changes

<!-- List the specific changes made in this PR -->

-
-
-

**Breaking Changes** (if any):
<!-- Describe any breaking changes and migration steps -->

-

## Testing

### How Has This Been Tested?

<!-- Describe the tests you ran to verify your changes -->

- [ ] Unit tests
- [ ] Integration tests
- [ ] Manual testing
- [ ] MCP Inspector validation

**Test Configuration**:
- Python version:
- OS:
- Storage backend:
- Installation method:

### Test Coverage

<!-- Describe the test coverage added or modified -->

- [ ] Added new tests
- [ ] Updated existing tests
- [ ] Test coverage maintained/improved

## Related Issues

<!-- Link related issues using keywords: Fixes #123, Closes #456, Relates to #789 -->

Fixes #
Closes #
Relates to #

## Screenshots

<!-- If applicable, add screenshots to help explain your changes -->

## Documentation

<!-- Check all that apply -->

- [ ] Updated README.md
- [ ] Updated CLAUDE.md
- [ ] Updated AGENTS.md
- [ ] Updated CHANGELOG.md
- [ ] Updated Wiki pages
- [ ] Updated code comments/docstrings
- [ ] Added API documentation
- [ ] No documentation needed

## Pre-submission Checklist

<!-- Check all boxes before submitting -->

- [ ] ✅ My code follows the project's coding standards (PEP 8, type hints)
- [ ] ✅ I have performed a self-review of my code
- [ ] ✅ I have commented my code, particularly in hard-to-understand areas
- [ ] ✅ I have made corresponding changes to the documentation
- [ ] ✅ My changes generate no new warnings
- [ ] ✅ I have added tests that prove my fix is effective or that my feature works
- [ ] ✅ New and existing unit tests pass locally with my changes
- [ ] ✅ Any dependent changes have been merged and published
- [ ] ✅ I have updated CHANGELOG.md following [Keep a Changelog](https://keepachangelog.com/) format
- [ ] ✅ I have checked that no sensitive data is exposed (API keys, tokens, passwords)
- [ ] ✅ I have verified this works with all supported storage backends (if applicable)

## Additional Notes

<!-- Any additional information, context, or notes for reviewers -->

---

**For Reviewers**:
- Review checklist: See [PR Review Guide](https://github.com/doobidoo/mcp-memory-service/wiki/PR-Review-Guide)
- Consider testing with Gemini Code Assist for comprehensive review
- Verify CHANGELOG.md entry is present and correctly formatted
- Check documentation accuracy and completeness

```

--------------------------------------------------------------------------------
/.github/workflows/WORKFLOW_FIXES.md:
--------------------------------------------------------------------------------

```markdown
# Workflow Fixes Applied

## Issues Identified and Fixed

### 1. Cleanup Images Workflow (`cleanup-images.yml`)

**Issues:**
- Referenced non-existent workflows in `workflow_run` trigger
- Used incorrect action versions (`@v5` instead of `@v4`)
- Incorrect account type (`org` instead of `personal`)
- Missing error handling for optional steps
- No validation for Docker Hub credentials

**Fixes Applied:**
- Updated workflow references to match actual workflow names
- Downgraded action versions to stable versions (`@v4`, `@v1`)
- Changed account type to `personal` for personal GitHub account
- Added `continue-on-error: true` for optional cleanup steps
- Added credential validation and conditional Docker Hub cleanup
- Added informative messages when cleanup is skipped

### 2. Main Optimized Workflow (`main-optimized.yml`)

**Issues:**
- Complex matrix strategy with indirect secret access
- No handling for missing Docker Hub credentials
- Potential authentication failures for Docker registries

**Fixes Applied:**
- Simplified login steps with explicit registry conditions
- Added conditional Docker Hub login based on credential availability
- Added skip message when Docker Hub credentials are missing
- Improved error handling for registry authentication

## Changes Made

### cleanup-images.yml
```yaml
# Before
workflow_run:
  workflows: ["Release (Tags) - Optimized", "Main CI/CD Pipeline - Optimized"]

uses: actions/delete-package-versions@v5
account-type: org

# After
workflow_run:
  workflows: ["Main CI/CD Pipeline", "Docker Publish (Tags)", "Publish and Test (Tags)"]

uses: actions/delete-package-versions@v4
account-type: personal
continue-on-error: true
```

### main-optimized.yml
```yaml
# Before
username: ${{ matrix.username_secret == '_github_actor' && github.actor || secrets[matrix.username_secret] }}

# After
- name: Log in to Docker Hub
  if: matrix.registry == 'docker.io' && secrets.DOCKER_USERNAME && secrets.DOCKER_PASSWORD
- name: Log in to GitHub Container Registry
  if: matrix.registry == 'ghcr.io'
```

## Safety Improvements

1. **Graceful Degradation**: Workflows now continue even if optional steps fail
2. **Credential Validation**: Proper checking for required secrets before use
3. **Clear Messaging**: Users are informed when steps are skipped
4. **Error Isolation**: Failures in one cleanup job don't affect others

## Testing Recommendations

1. **Manual Trigger Test**: Test cleanup workflow with dry-run mode
2. **Credential Scenarios**: Test with and without Docker Hub credentials
3. **Registry Cleanup**: Verify GHCR cleanup works independently
4. **Workflow Dependencies**: Ensure workflow triggers work correctly

## Expected Behavior

- **With Full Credentials**: Both GHCR and Docker Hub cleanup run
- **Without Docker Credentials**: Only GHCR cleanup runs, Docker Hub skipped
- **Action Failures**: Individual cleanup steps may fail but workflow continues
- **No Images to Clean**: Workflows complete successfully with no actions

Date: 2024-08-24
Status: Applied and ready for testing
```

--------------------------------------------------------------------------------
/tests/test_semantic_search.py:
--------------------------------------------------------------------------------

```python
"""
MCP Memory Service
Copyright (c) 2024 Heinrich Krupp
Licensed under the MIT License. See LICENSE file in the project root for full license text.
"""
"""
Test semantic search functionality of the MCP Memory Service.
"""
import pytest
import pytest_asyncio
import asyncio
from mcp.server import Server
from mcp.server.models import InitializationOptions

@pytest_asyncio.fixture
async def memory_server():
    """Create a test instance of the memory server."""
    server = Server("test-memory")
    await server.initialize(InitializationOptions(
        server_name="test-memory",
        server_version="0.1.0"
    ))
    yield server
    await server.shutdown()

@pytest.mark.asyncio
async def test_semantic_similarity(memory_server):
    """Test semantic similarity scoring."""
    # Store related memories
    memories = [
        "The quick brown fox jumps over the lazy dog",
        "A fast auburn fox leaps above a sleepy canine",
        "A cat chases a mouse"
    ]
    
    for memory in memories:
        await memory_server.store_memory(content=memory)
    
    # Test semantic retrieval
    query = "swift red fox jumping over sleeping dog"
    results = await memory_server.debug_retrieve(
        query=query,
        n_results=2,
        similarity_threshold=0.0  # Get all results with scores
    )
    
    # First two results should be the fox-related memories
    assert len(results) >= 2
    assert all("fox" in result for result in results[:2])
    
@pytest.mark.asyncio
async def test_similarity_threshold(memory_server):
    """Test similarity threshold filtering."""
    await memory_server.store_memory(
        content="Python is a programming language"
    )
    
    # This query is semantically unrelated
    results = await memory_server.debug_retrieve(
        query="Recipe for chocolate cake",
        similarity_threshold=0.8
    )
    
    assert len(results) == 0  # No results above threshold

@pytest.mark.asyncio
async def test_exact_match(memory_server):
    """Test exact match retrieval."""
    test_content = "This is an exact match test"
    await memory_server.store_memory(content=test_content)
    
    results = await memory_server.exact_match_retrieve(
        content=test_content
    )
    
    assert len(results) == 1
    assert results[0] == test_content

@pytest.mark.asyncio
async def test_semantic_ordering(memory_server):
    """Test that results are ordered by semantic similarity."""
    # Store memories with varying relevance
    memories = [
        "Machine learning is a subset of artificial intelligence",
        "Deep learning uses neural networks",
        "A bicycle has two wheels"
    ]
    
    for memory in memories:
        await memory_server.store_memory(content=memory)
    
    query = "What is AI and machine learning?"
    results = await memory_server.debug_retrieve(
        query=query,
        n_results=3,
        similarity_threshold=0.0
    )
    
    # Check ordering
    assert "machine learning" in results[0].lower()
    assert "bicycle" not in results[0].lower()
```

--------------------------------------------------------------------------------
/scripts/sync/claude_sync_commands.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python3
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
Claude command wrapper for memory sync operations.
Provides convenient commands for managing dual memory backends.
"""
import sys
import subprocess
from pathlib import Path

SYNC_SCRIPT = Path(__file__).parent / "sync_memory_backends.py"

def run_sync_command(args):
    """Run the sync script with given arguments."""
    cmd = [sys.executable, str(SYNC_SCRIPT)] + args
    result = subprocess.run(cmd, capture_output=True, text=True)

    if result.stdout:
        print(result.stdout.strip())
    if result.stderr:
        print(result.stderr.strip(), file=sys.stderr)

    return result.returncode

def memory_sync_status():
    """Show memory sync status."""
    return run_sync_command(['--status'])

def memory_sync_backup():
    """Backup Cloudflare memories to SQLite-vec."""
    print("Backing up Cloudflare memories to SQLite-vec...")
    return run_sync_command(['--direction', 'cf-to-sqlite'])

def memory_sync_restore():
    """Restore SQLite-vec memories to Cloudflare."""
    print("Restoring SQLite-vec memories to Cloudflare...")
    return run_sync_command(['--direction', 'sqlite-to-cf'])

def memory_sync_bidirectional():
    """Perform bidirectional sync."""
    print("Performing bidirectional sync...")
    return run_sync_command(['--direction', 'bidirectional'])

def memory_sync_dry_run():
    """Show what would be synced without making changes."""
    print("Dry run - showing what would be synced:")
    return run_sync_command(['--dry-run'])

def show_usage():
    """Show usage information."""
    print("Usage: python claude_sync_commands.py <command>")
    print("Commands:")
    print("  status      - Show sync status")
    print("  backup      - Backup Cloudflare → SQLite-vec")
    print("  restore     - Restore SQLite-vec → Cloudflare")
    print("  sync        - Bidirectional sync")
    print("  dry-run     - Show what would be synced")

if __name__ == "__main__":
    # Dictionary-based command dispatch for better scalability
    commands = {
        "status": memory_sync_status,
        "backup": memory_sync_backup,
        "restore": memory_sync_restore,
        "sync": memory_sync_bidirectional,
        "dry-run": memory_sync_dry_run,
    }

    if len(sys.argv) < 2:
        show_usage()
        sys.exit(1)

    command = sys.argv[1]

    if command in commands:
        sys.exit(commands[command]())
    else:
        print(f"Unknown command: {command}")
        show_usage()
        sys.exit(1)
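
# Example invocation (run from the repository root):
#   python scripts/sync/claude_sync_commands.py dry-run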
```

--------------------------------------------------------------------------------
/scripts/utils/memory_wrapper_uv.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python3
"""
UV-specific memory wrapper for MCP Memory Service
This wrapper is specifically designed for UV-based installations.
"""
import os
import sys
import subprocess
import traceback

# Set environment variables for better cross-platform compatibility
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

def print_info(text):
    """Print formatted info text."""
    print(f"[INFO] {text}", file=sys.stderr, flush=True)

def print_error(text):
    """Print formatted error text."""
    print(f"[ERROR] {text}", file=sys.stderr, flush=True)

def print_success(text):
    """Print formatted success text."""
    print(f"[SUCCESS] {text}", file=sys.stderr, flush=True)

def main():
    """Main entry point for UV-based memory service."""
    print_info("Starting MCP Memory Service with UV...")
    
    # Set ChromaDB path if provided via environment variables
    if "MCP_MEMORY_CHROMA_PATH" in os.environ:
        print_info(f"Using ChromaDB path: {os.environ['MCP_MEMORY_CHROMA_PATH']}")
    
    # Set backups path if provided via environment variables
    if "MCP_MEMORY_BACKUPS_PATH" in os.environ:
        print_info(f"Using backups path: {os.environ['MCP_MEMORY_BACKUPS_PATH']}")
    
    try:
        # Use UV to run the memory service
        cmd = [sys.executable, '-m', 'uv', 'run', 'memory']
        cmd.extend(sys.argv[1:])  # Pass through any additional arguments
        
        print_info(f"Running command: {' '.join(cmd)}")
        
        # Execute the command
        result = subprocess.run(cmd, check=True)
        sys.exit(result.returncode)
        
    except subprocess.SubprocessError as e:
        print_error(f"UV run failed: {e}")
        print_info("Falling back to direct module execution...")
        
        # Fallback to direct execution
        try:
            # Add the source directory to path
            script_dir = os.path.dirname(os.path.abspath(__file__))
            src_dir = os.path.join(script_dir, "src")
            
            if os.path.exists(src_dir):
                sys.path.insert(0, src_dir)
            
            # Import and run the server
            from mcp_memory_service.server import main as server_main
            server_main()
            
        except ImportError as import_error:
            print_error(f"Failed to import memory service: {import_error}")
            sys.exit(1)
        except Exception as fallback_error:
            print_error(f"Fallback execution failed: {fallback_error}")
            traceback.print_exc(file=sys.stderr)
            sys.exit(1)
    
    except Exception as e:
        print_error(f"Error running memory service: {e}")
        traceback.print_exc(file=sys.stderr)
        sys.exit(1)

if __name__ == "__main__":
    try:
        main()
    except KeyboardInterrupt:
        print_info("Shutting down gracefully...")
        sys.exit(0)
    except Exception as e:
        print_error(f"Unhandled exception: {e}")
        traceback.print_exc(file=sys.stderr)
        sys.exit(1)

```

--------------------------------------------------------------------------------
/scripts/maintenance/find_all_duplicates.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python3
"""Find all near-duplicate memories across the database."""

import hashlib
import platform
import re
import sqlite3
from collections import defaultdict
from pathlib import Path

# Platform-specific database path
if platform.system() == "Darwin":  # macOS
    DB_PATH = Path.home() / "Library/Application Support/mcp-memory/sqlite_vec.db"
else:  # Linux/Windows
    DB_PATH = Path.home() / ".local/share/mcp-memory/sqlite_vec.db"

def normalize_content(content):
    """Normalize content by removing timestamps and session-specific data."""
    # Remove common timestamp patterns
    normalized = content
    normalized = re.sub(r'\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}Z', 'TIMESTAMP', normalized)
    normalized = re.sub(r'\*\*Date\*\*: \d{2,4}[./]\d{2}[./]\d{2,4}', '**Date**: DATE', normalized)
    normalized = re.sub(r'Timestamp: \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}', 'Timestamp: TIMESTAMP', normalized)

    return normalized.strip()

def content_hash(content):
    """Create a hash of normalized content."""
    normalized = normalize_content(content)
    return hashlib.md5(normalized.encode()).hexdigest()

def main():
    conn = sqlite3.connect(DB_PATH)
    cursor = conn.cursor()

    print("Analyzing memories for duplicates...")
    cursor.execute("SELECT content_hash, content, tags, created_at FROM memories ORDER BY created_at DESC")

    memories = cursor.fetchall()
    print(f"Total memories: {len(memories)}")

    # Group by normalized content
    content_groups = defaultdict(list)
    for mem_hash, content, tags, created_at in memories:
        norm_hash = content_hash(content)
        content_groups[norm_hash].append({
            'hash': mem_hash,
            'content': content[:200],  # First 200 chars
            'tags': tags,
            'created_at': created_at
        })

    # Find duplicates (groups with >1 memory)
    duplicates = {k: v for k, v in content_groups.items() if len(v) > 1}

    if not duplicates:
        print("✅ No duplicates found!")
        conn.close()
        return

    print(f"\n❌ Found {len(duplicates)} groups of duplicates:")

    total_duplicate_count = 0
    for i, (norm_hash, group) in enumerate(duplicates.items(), 1):
        count = len(group)
        total_duplicate_count += count - 1  # Keep one, delete rest

        print(f"\n{i}. Group with {count} duplicates:")
        print(f"   Content preview: {group[0]['content'][:100]}...")
        print(f"   Tags: {group[0]['tags'][:80]}...")
        print(f"   Hashes to keep: {group[0]['hash'][:16]}... (newest)")
        print(f"   Hashes to delete: {count-1} older duplicates")

        if i >= 10:  # Show only first 10 groups
            remaining = len(duplicates) - 10
            print(f"\n... and {remaining} more duplicate groups")
            break

    print(f"\n📊 Summary:")
    print(f"   Total duplicate groups: {len(duplicates)}")
    print(f"   Total memories to delete: {total_duplicate_count}")
    print(f"   Total memories after cleanup: {len(memories) - total_duplicate_count}")

    conn.close()

if __name__ == "__main__":
    main()
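
# Hypothetical follow-up (not implemented here): delete the older duplicates
# in each group, keeping the newest entry (index 0, since rows were fetched
# newest-first above). Review the report before running anything like this:
#
#   for group in duplicates.values():
#       for dup in group[1:]:
#           cursor.execute("DELETE FROM memories WHERE content_hash = ?", (dup['hash'],))
#   conn.commit()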

```

--------------------------------------------------------------------------------
/docs/integrations/gemini.md:
--------------------------------------------------------------------------------

```markdown
# Gemini Context: MCP Memory Service

## Project Overview

This project is a sophisticated and feature-rich MCP (Model Context Protocol) server designed to provide a persistent, semantic memory layer for AI assistants, particularly Claude Desktop. It's built with Python and leverages a variety of technologies to deliver a robust and performant memory service.

The core of the project is the `MemoryServer` class, which handles all MCP tool calls. It features a "dream-inspired" memory consolidation system that autonomously organizes and compresses memories over time. The server is built on top of FastAPI, providing a modern and asynchronous API.

The project offers two distinct storage backends, allowing users to choose the best fit for their needs:

*   **ChromaDB:** A feature-rich vector database that provides advanced search capabilities and is well-suited for large memory collections.
*   **SQLite-vec:** A lightweight, file-based backend that uses the `sqlite-vec` extension for vector similarity search. This is a great option for resource-constrained environments.

The project also includes a comprehensive suite of scripts for installation, testing, and maintenance, as well as detailed documentation.

## Building and Running

### Installation

The project uses a custom installer that intelligently detects the user's hardware and selects the optimal configuration. To install the project, run the following commands:

```bash
# Clone the repository
git clone https://github.com/doobidoo/mcp-memory-service.git
cd mcp-memory-service

# Create and activate a virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate

# Run the intelligent installer
python install.py
```

### Running the Server

The server can be run in several ways, depending on the desired configuration. The primary entry point is the `mcp_memory_service.server:main` script, which can be executed as a Python module:

```bash
python -m mcp_memory_service.server
```

Alternatively, the `pyproject.toml` file defines a `memory` script that can be used to run the server:

```bash
memory
```

The server can also be run as a FastAPI application using `uvicorn`:

```bash
uvicorn mcp_memory_service.web.app:app --host 0.0.0.0 --port 8000
```

### Testing

The project includes a comprehensive test suite that can be run using `pytest`:

```bash
pytest tests/
```

## Development Conventions

*   **Python 3.10+:** The project requires Python 3.10 or higher.
*   **Type Hinting:** The codebase uses type hints extensively to improve code clarity and maintainability.
*   **Async/Await:** The project uses the `async/await` pattern for all I/O operations to ensure high performance and scalability.
*   **PEP 8:** The code follows the PEP 8 style guide.
*   **Dataclasses:** The project uses dataclasses for data models to reduce boilerplate code.
*   **Docstrings:** All modules and functions have triple-quoted docstrings that explain their purpose, arguments, and return values.
*   **Testing:** All new features should be accompanied by tests to ensure they work as expected and don't introduce regressions.
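
For illustration, a minimal sketch (not taken from the codebase) that combines several of these conventions:

```python
from dataclasses import dataclass

@dataclass
class ScoredMemory:
    """A retrieved memory with its similarity score."""
    content: str
    score: float

async def top_memories(storage, query: str, n: int = 5) -> list[ScoredMemory]:
    """Return the n memories most similar to the query (assumed storage API)."""
    results = await storage.search(query, n_results=n)
    return [ScoredMemory(content=r["content"], score=r["score"]) for r in results]
```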

```

--------------------------------------------------------------------------------
/src/mcp_memory_service/web/api/events.py:
--------------------------------------------------------------------------------

```python
"""
Server-Sent Events endpoints for real-time updates.
"""

from fastapi import APIRouter, Request, Depends
from pydantic import BaseModel
from typing import Dict, Any, List, TYPE_CHECKING

from ...config import OAUTH_ENABLED
from ..sse import create_event_stream, sse_manager
from ..dependencies import get_storage

# OAuth authentication imports (conditional)
if OAUTH_ENABLED or TYPE_CHECKING:
    from ..oauth.middleware import require_read_access, AuthenticationResult
else:
    # Provide type stubs when OAuth is disabled
    AuthenticationResult = None
    require_read_access = None

router = APIRouter()


class ConnectionInfo(BaseModel):
    """Individual connection information."""
    connection_id: str
    client_ip: str
    user_agent: str
    connected_duration_seconds: float
    last_heartbeat_seconds_ago: float


class SSEStatsResponse(BaseModel):
    """Response model for SSE connection statistics."""
    total_connections: int
    heartbeat_interval: int
    connections: List[ConnectionInfo]


@router.get("/events")
async def events_endpoint(
    request: Request,
    user: AuthenticationResult = Depends(require_read_access) if OAUTH_ENABLED else None
):
    """
    Server-Sent Events endpoint for real-time updates.
    
    Provides a continuous stream of events including:
    - memory_stored: When new memories are added
    - memory_deleted: When memories are removed
    - search_completed: When searches finish
    - health_update: System status changes
    - heartbeat: Periodic keep-alive signals
    - connection_established: Welcome message
    """
    return await create_event_stream(request)


@router.get("/events/stats")
async def get_sse_stats(
    user: AuthenticationResult = Depends(require_read_access) if OAUTH_ENABLED else None
):
    """
    Get statistics about current SSE connections.
    
    Returns information about active connections, connection duration,
    and heartbeat status.
    """
    try:
        # Get raw stats first to debug the structure
        stats = sse_manager.get_connection_stats()
        
        # Validate structure and transform if needed
        connections = []
        for conn_data in stats.get('connections', []):
            connections.append({
                "connection_id": conn_data.get("connection_id", "unknown"),
                "client_ip": conn_data.get("client_ip", "unknown"),
                "user_agent": conn_data.get("user_agent", "unknown"),
                "connected_duration_seconds": conn_data.get("connected_duration_seconds", 0.0),
                "last_heartbeat_seconds_ago": conn_data.get("last_heartbeat_seconds_ago", 0.0)
            })
        
        return {
            "total_connections": stats.get("total_connections", 0),
            "heartbeat_interval": stats.get("heartbeat_interval", 30),
            "connections": connections
        }
    except Exception as e:
        import logging
        logging.getLogger(__name__).error(f"Error getting SSE stats: {str(e)}")
        # Return safe default stats if there's an error
        return {
            "total_connections": 0,
            "heartbeat_interval": 30,
            "connections": []
        }
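
# Hypothetical client sketch (not part of this module): consuming the SSE
# stream with httpx, assuming the server runs on localhost:8000.
#
#   import httpx
#   with httpx.stream("GET", "http://127.0.0.1:8000/api/events", timeout=None) as resp:
#       for line in resp.iter_lines():
#           if line.startswith("data:"):
#               print(line.removeprefix("data:").strip())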
```

--------------------------------------------------------------------------------
/start_http_debug.bat:
--------------------------------------------------------------------------------

```
@echo off
REM MCP Memory Service HTTP Debug Mode Startup Script
REM This script starts the MCP Memory Service in HTTP mode for debugging and testing

echo ========================================
echo MCP Memory Service HTTP Debug Mode
echo Using uv for dependency management
echo ========================================

REM Check if uv is available
uv --version >nul 2>&1
if %errorlevel% neq 0 (
    echo ERROR: uv is not installed or not in PATH
    echo Please install uv: https://docs.astral.sh/uv/getting-started/installation/
    pause
    exit /b 1
)

echo uv version:
uv --version

REM Install dependencies using uv sync (recommended)
echo.
echo Installing dependencies...
echo This may take a few minutes on first run...
echo Installing core dependencies...
uv sync

REM Check if installation was successful
if %errorlevel% neq 0 (
    echo ERROR: Failed to install dependencies
    echo Please check the error messages above
    pause
    exit /b 1
)

echo Dependencies installed successfully!

REM Verify Python can import the service
echo.
echo Verifying installation...
python -c "import sys; sys.path.insert(0, 'src'); import mcp_memory_service; print('✓ MCP Memory Service imported successfully')"
if %errorlevel% neq 0 (
    echo ERROR: Failed to import MCP Memory Service
    echo Please check the error messages above
    pause
    exit /b 1
)

REM Set environment variables for HTTP mode
set MCP_MEMORY_STORAGE_BACKEND=sqlite_vec
set MCP_HTTP_ENABLED=true
set MCP_HTTP_PORT=8000
set MCP_HTTPS_ENABLED=false
set MCP_MDNS_ENABLED=true
set MCP_MDNS_SERVICE_NAME=MCP-Memory-Service-Debug

REM Fix Transformers cache warning
set HF_HOME=%USERPROFILE%\.cache\huggingface
set TRANSFORMERS_CACHE=%USERPROFILE%\.cache\huggingface\transformers

REM Optional: Set API key for security
REM To use authentication, set your own API key in the environment variable:
REM set MCP_API_KEY=your-secure-api-key-here
REM Or pass it when running this script: set MCP_API_KEY=mykey && start_http_debug.bat
if "%MCP_API_KEY%"=="" (
    echo WARNING: No API key set. Running without authentication.
    echo          To enable auth, set MCP_API_KEY environment variable.
)

REM Optional: Enable debug logging
set MCP_DEBUG=true

echo Configuration:
echo   Storage Backend: %MCP_MEMORY_STORAGE_BACKEND%
echo   HTTP Port: %MCP_HTTP_PORT%
echo   HTTPS Enabled: %MCP_HTTPS_ENABLED%
echo   mDNS Enabled: %MCP_MDNS_ENABLED%
echo   Service Name: %MCP_MDNS_SERVICE_NAME%
if "%MCP_API_KEY%"=="" (
    echo   API Key Set: No ^(WARNING: No authentication^)
) else (
    echo   API Key Set: Yes
)
echo   Debug Mode: %MCP_DEBUG%
echo.

echo Service will be available at:
echo   HTTP: http://localhost:%MCP_HTTP_PORT%
echo   API: http://localhost:%MCP_HTTP_PORT%/api
echo   Health: http://localhost:%MCP_HTTP_PORT%/api/health
echo   Dashboard: http://localhost:%MCP_HTTP_PORT%/dashboard
echo.
echo Press Ctrl+C to stop the service
echo.
echo ========================================
echo Starting MCP Memory Service...
echo ========================================

REM Start the service using Python directly (required for HTTP mode)
echo Starting service with Python...
echo Note: Using Python directly for HTTP server mode
uv run python run_server.py
```

--------------------------------------------------------------------------------
/scripts/pr/amp_detect_breaking_changes.sh:
--------------------------------------------------------------------------------

```bash
#!/bin/bash
# scripts/pr/amp_detect_breaking_changes.sh - Detect breaking API changes using Amp CLI
#
# Usage: bash scripts/pr/amp_detect_breaking_changes.sh <BASE_BRANCH> <HEAD_BRANCH>
# Example: bash scripts/pr/amp_detect_breaking_changes.sh main feature/new-api

set -e

BASE_BRANCH=${1:-main}
HEAD_BRANCH=${2:-$(git branch --show-current)}

echo "=== Amp CLI Breaking Change Detection ==="
echo "Base Branch: $BASE_BRANCH"
echo "Head Branch: $HEAD_BRANCH"
echo ""

# Ensure Amp directories exist
mkdir -p .claude/amp/prompts/pending
mkdir -p .claude/amp/responses/ready

# Get API-related file changes
echo "Analyzing API changes..."
api_changes=$(git diff "origin/${BASE_BRANCH}...origin/${HEAD_BRANCH}" -- \
    src/mcp_memory_service/tools.py \
    src/mcp_memory_service/web/api/ \
    2>/dev/null || echo "")

if [ -z "$api_changes" ]; then
    echo "✅ No API changes detected"
    exit 0
fi

echo "API changes detected ($(echo "$api_changes" | wc -l) lines)"
echo ""

# Generate UUID for breaking change analysis
breaking_uuid=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)

echo "Creating Amp prompt for breaking change analysis..."

# Truncate large diffs to avoid token overflow, then JSON-escape the result
# (requires jq) so embedded quotes and newlines don't break the generated JSON
api_changes_truncated=$(echo "$api_changes" | head -300)
api_changes_escaped=$(printf '%s' "$api_changes_truncated" | jq -Rs . | sed 's/^"//;s/"$//')

# Create breaking change analysis prompt
cat > .claude/amp/prompts/pending/breaking-${breaking_uuid}.json << EOF
{
  "id": "${breaking_uuid}",
  "timestamp": "$(date -u +"%Y-%m-%dT%H:%M:%S.000Z")",
  "prompt": "Analyze these API changes for breaking changes. A breaking change is:\n- Removed function/method/endpoint\n- Changed function signature (parameters removed/reordered)\n- Changed return type in incompatible way\n- Renamed public API\n- Changed HTTP endpoint path/method\n- Changed MCP tool schema (added required parameters, removed optional parameters, changed parameter types)\n\nReport ONLY breaking changes with severity (CRITICAL/HIGH/MEDIUM). If no breaking changes, respond: 'BREAKING_CHANGES_NONE'.\n\nOutput format:\nSeverity: [CRITICAL|HIGH|MEDIUM]\nType: [removal|signature-change|rename|schema-change]\nLocation: [file:function/endpoint]\nDetails: [explanation]\nMigration: [suggested migration path]\n\nAPI Changes:\n${api_changes_truncated}",
  "context": {
    "project": "mcp-memory-service",
    "task": "breaking-change-detection",
    "base_branch": "${BASE_BRANCH}",
    "head_branch": "${HEAD_BRANCH}"
  },
  "options": {
    "timeout": 120000,
    "format": "text"
  }
}
EOF

echo "✅ Created Amp prompt for breaking change analysis"
echo ""
echo "=== Run this Amp command ==="
echo "amp @.claude/amp/prompts/pending/breaking-${breaking_uuid}.json"
echo ""
echo "=== Then collect the analysis ==="
echo "bash scripts/pr/amp_collect_results.sh --timeout 120 --uuids '${breaking_uuid}'"
echo ""

# Alternative: Direct analysis with custom result handler
echo "=== Or use this one-liner for immediate analysis ==="
echo "(amp @.claude/amp/prompts/pending/breaking-${breaking_uuid}.json > /tmp/amp-breaking.log 2>&1); sleep 5 && bash scripts/pr/amp_analyze_breaking_changes.sh '${breaking_uuid}'"
echo ""

# Save UUID for later collection
echo "${breaking_uuid}" > /tmp/amp_breaking_changes_uuid.txt
echo "UUID saved to /tmp/amp_breaking_changes_uuid.txt"

```

--------------------------------------------------------------------------------
/docs/HOOK_IMPROVEMENTS.md:
--------------------------------------------------------------------------------

```markdown
# Claude Code Session Hook Improvements

## Overview
Enhanced the session start hook to prioritize recent memories and provide better context awareness for Claude Code sessions.

## Key Improvements Made

### 1. Multi-Phase Memory Retrieval
- **Phase 1**: Recent memories (last week) - 60% of available slots
- **Phase 2**: Important tagged memories (architecture, decisions) - remaining slots
- **Phase 3**: Fallback to general project context if needed
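
A minimal sketch of the slot-allocation idea (illustrative Python; the actual hooks are JavaScript in `claude-hooks/core/session-start.js`):

```python
# Hypothetical sketch: split the available context slots between recent and
# important memories, falling back to a wider time window if both come up empty.
def allocate_slots(total_slots: int, recent_ratio: float = 0.6) -> tuple[int, int]:
    recent = round(total_slots * recent_ratio)  # Phase 1 share
    return recent, total_slots - recent         # Phase 2 share

def retrieve_context(search, total_slots: int = 8):
    recent_n, important_n = allocate_slots(total_slots)
    results = search(time_window="last-week", limit=recent_n)      # Phase 1
    results += search(tags=["architecture", "decisions"],          # Phase 2
                      limit=important_n, exclude=results)
    if not results:                                                # Phase 3
        results = search(time_window="last-month", limit=total_slots)
    return results
```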

### 2. Enhanced Recency Prioritization
- Recent memories get higher priority in initial search
- Time-based indicators: 🕒 today, 📅 this week, regular dates for older
- Configurable time windows (`last-week`, `last-2-weeks`, `last-month`)

### 3. Better Memory Categorization
- New "Recent Work" category for memories from last 7 days
- Improved categorization: Recent → Decisions → Architecture → Insights → Features → Context
- Visual indicators for recency in CLI output

### 4. Enhanced Semantic Queries  
- Git context integration (branch, recent commits)
- Framework and language context in queries
- User message context when available

### 5. Improved Configuration
```json
{
  "memoryService": {
    "recentFirstMode": true,           // Enable multi-phase retrieval
    "recentMemoryRatio": 0.6,          // 60% for recent memories
    "recentTimeWindow": "last-week",   // Time window for recent search
    "fallbackTimeWindow": "last-month" // Fallback time window
  },
  "output": {
    "showMemoryDetails": true,         // Show detailed memory info
    "showRecencyInfo": true,           // Show recency indicators
    "showPhaseDetails": true           // Show search phase details
  }
}
```

### 6. Better Visual Feedback
- Phase-by-phase search reporting
- Recency indicators in memory display
- Enhanced scoring display with time flags
- Better deduplication reporting

## Expected Impact

### Before
- Single query for all memories
- No recency prioritization
- Limited context in queries
- Basic categorization
- Truncated output

### After  
- Multi-phase approach prioritizing recent memories
- Smart time-based retrieval
- Git and framework-aware queries
- Enhanced categorization with "Recent Work"
- Full context display with recency indicators

## Usage

The improvements are **backward compatible** - existing installations will automatically use the enhanced system. To disable, set:

```json
{
  "memoryService": {
    "recentFirstMode": false
  }
}
```

## Files Modified

1. `claude-hooks/core/session-start.js` - Multi-phase retrieval logic
2. `claude-hooks/utilities/context-formatter.js` - Enhanced display and categorization  
3. `claude-hooks/config.json` - New configuration options
4. `test-hook.js` - Test script for validation

## Testing

Run `node test-hook.js` to test the enhanced hook with mock context. The test demonstrates:
- Project detection and context building
- Multi-phase memory retrieval
- Enhanced categorization and display
- Git context integration
- Configurable time windows

## Result

Session hooks now provide more relevant, recent context while maintaining access to important historical decisions and architecture information. Users get better continuity with their recent work while preserving long-term project memory.
```

--------------------------------------------------------------------------------
/claude_commands/session-start.md:
--------------------------------------------------------------------------------

```markdown
# Display Session Memory Context

Run the session-start memory awareness hook manually to display relevant memories, project context, and git analysis.

## What this does:

Executes the session-start.js hook to:
1. **Load Project Context**: Detect current project and framework
2. **Analyze Git History**: Review recent commits and changes
3. **Retrieve Relevant Memories**: Find memories related to current project
4. **Display Memory Context**: Show categorized memories:
   - 🔥 Recent Work
   - ⚠️ Current Problems
   - 📋 Additional Context

## Usage:

```bash
claude /session-start
```

## Windows Compatibility:

This command is specifically designed as a **Windows workaround** for the SessionStart hook bug (#160).

On Windows, SessionStart hooks cause Claude Code to hang indefinitely. This slash command provides the same functionality but can be triggered manually when you start a new session.

**Works on all platforms**: Windows, macOS, Linux

## When to use:

- At the start of each coding session
- When switching projects or contexts
- After compacting conversations to refresh memory context
- When you need to see what memories are available

## What you'll see:

```
🧠 Memory Hook → Initializing session awareness...
📂 Project: mcp-memory-service
💾 Storage: sqlite-vec (Connected) • 1968 memories • 15.37MB
📊 Git Context → 10 commits, 3 changelog entries

📚 Memory Search → Found 4 relevant memories (2 recent)

┌─ 🧠 Injected Memory Context → mcp-memory-service, FastAPI, Python
│
├─ 🔥 Recent Work:
│  ├─ MCP Memory Service v8.6... 📅 6d ago
│  └─ Session Summary - mcp-memory-service... 📅 6d ago
│
├─ ⚠️ Current Problems:
│  └─ Dream-Inspired Memory Consolidation... 📅 Oct 22
│
└─ 📋 Additional Context:
   └─ MCP Memory Service v8.5... 📅 Oct 22
```

## Alternative: Automatic Mid-Conversation Hook

Your UserPromptSubmit hook already runs automatically and retrieves memories when appropriate patterns are detected. This command is for when you want to **explicitly see** the memory context at session start.

## Technical Details:

- Runs: `node ~/.claude/hooks/core/session-start.js`
- HTTP endpoint: http://127.0.0.1:8000
- Protocol: HTTP (MCP fallback if HTTP unavailable)
- Performance: <2 seconds typical execution time

## Troubleshooting:

### Command not found
- Ensure hooks are installed: `ls ~/.claude/hooks/core/session-start.js`
- Reinstall: `cd claude-hooks && python install_hooks.py --basic`

### No memories displayed
- Check HTTP server is running: `curl http://127.0.0.1:8000/api/health`
- Verify hooks config: `cat ~/.claude/hooks/config.json`
- Check endpoint matches: Should be `http://127.0.0.1:8000`

### Error: Cannot find module
- **Windows**: Ensure path is quoted properly in hooks config
- Check Node.js installed: `node --version`
- Verify hook file exists at expected location

## Related:

- **GitHub Issue**: [#160 - Windows SessionStart hook bug](https://github.com/doobidoo/mcp-memory-service/issues/160)
- **Technical Analysis**: `claude-hooks/WINDOWS-SESSIONSTART-BUG.md`
- **Hook Documentation**: `claude-hooks/README.md`

---

**For Windows Users**: This is the **recommended workaround** for session initialization until the SessionStart hook bug is fixed in Claude Code core.

```

--------------------------------------------------------------------------------
/scripts/maintenance/repair_memories.py:
--------------------------------------------------------------------------------

```python
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# scripts/repair_memories.py

import argparse
import asyncio
import logging
import os
import sys

from mcp_memory_service.storage.chroma import ChromaMemoryStorage
from mcp_memory_service.utils.hashing import generate_content_hash

logger = logging.getLogger(__name__)

async def repair_missing_hashes(storage):
    """Repair memories missing content_hash by generating new ones"""
    results = storage.collection.get(
        include=["metadatas", "documents"]
    )
    
    fixed_count = 0
    for i, meta in enumerate(results["metadatas"]):
        memory_id = results["ids"][i]
        
        if "content_hash" not in meta:
            try:
                # Generate hash from content and metadata
                content = results["documents"][i]
                # Create a copy of metadata without the content_hash field
                meta_for_hash = {k: v for k, v in meta.items() if k != "content_hash"}
                new_hash = generate_content_hash(content, meta_for_hash)
                
                # Update metadata with new hash
                new_meta = meta.copy()
                new_meta["content_hash"] = new_hash
                
                # Update the memory
                storage.collection.update(
                    ids=[memory_id],
                    metadatas=[new_meta]
                )
                
                logger.info(f"Fixed memory {memory_id} with new hash: {new_hash}")
                fixed_count += 1
                
            except Exception as e:
                logger.error(f"Error fixing memory {memory_id}: {str(e)}")
    
    return fixed_count

async def main():
    log_level = os.getenv('LOG_LEVEL', 'ERROR').upper()
    logging.basicConfig(
        level=getattr(logging, log_level, logging.ERROR),
        format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
        stream=sys.stderr
    )
    
    parser = argparse.ArgumentParser(description='Repair memories with missing content hashes')
    parser.add_argument('--db-path', required=True, help='Path to ChromaDB database')
    args = parser.parse_args()
    
    logger.info(f"Connecting to database at: {args.db_path}")
    storage = ChromaMemoryStorage(args.db_path)
    
    logger.info("Starting repair process...")
    fixed_count = await repair_missing_hashes(storage)
    logger.info(f"Repair completed. Fixed {fixed_count} memories")
    
    # Run validation again to confirm fixes
    logger.info("Running validation to confirm fixes...")
    from validate_memories import run_validation_report
    report = await run_validation_report(storage)
    print("\nPost-repair validation report:")
    print(report)

if __name__ == "__main__":
    asyncio.run(main())
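
# Example invocation (hypothetical database path, adjust to your setup):
#   python scripts/maintenance/repair_memories.py --db-path /path/to/chroma_db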
```

--------------------------------------------------------------------------------
/tests/test_database.py:
--------------------------------------------------------------------------------

```python
"""
MCP Memory Service
Copyright (c) 2024 Heinrich Krupp
Licensed under the MIT License. See LICENSE file in the project root for full license text.
"""
"""
Test database operations of the MCP Memory Service.
"""
import pytest
import pytest_asyncio
import asyncio
import os
from mcp.server import Server
from mcp.server.models import InitializationOptions

@pytest_asyncio.fixture
async def memory_server():
    """Create a test instance of the memory server."""
    server = Server("test-memory")
    await server.initialize(InitializationOptions(
        server_name="test-memory",
        server_version="0.1.0"
    ))
    yield server
    await server.shutdown()

@pytest.mark.asyncio
async def test_create_backup(memory_server):
    """Test database backup creation."""
    # Store some test data
    await memory_server.store_memory(
        content="Test memory for backup"
    )
    
    # Create backup
    backup_response = await memory_server.create_backup()
    
    assert backup_response.get("success") is True
    assert backup_response.get("backup_path") is not None
    assert os.path.exists(backup_response.get("backup_path"))

@pytest.mark.asyncio
async def test_database_health(memory_server):
    """Test database health check functionality."""
    health_status = await memory_server.check_database_health()
    
    assert health_status is not None
    assert "status" in health_status
    assert "memory_count" in health_status
    assert "database_size" in health_status

@pytest.mark.asyncio
async def test_optimize_database(memory_server):
    """Test database optimization."""
    # Store multiple memories to trigger optimization
    for i in range(10):
        await memory_server.store_memory(
            content=f"Test memory {i}"
        )
    
    # Run optimization
    optimize_response = await memory_server.optimize_db()
    
    assert optimize_response.get("success") is True
    assert "optimized_size" in optimize_response

@pytest.mark.asyncio
async def test_cleanup_duplicates(memory_server):
    """Test duplicate memory cleanup."""
    # Store duplicate memories
    duplicate_content = "This is a duplicate memory"
    await memory_server.store_memory(content=duplicate_content)
    await memory_server.store_memory(content=duplicate_content)
    
    # Clean up duplicates
    cleanup_response = await memory_server.cleanup_duplicates()
    
    assert cleanup_response.get("success") is True
    assert cleanup_response.get("duplicates_removed") >= 1
    
    # Verify only one copy remains
    results = await memory_server.exact_match_retrieve(
        content=duplicate_content
    )
    assert len(results) == 1

@pytest.mark.asyncio
async def test_database_persistence(memory_server):
    """Test database persistence across server restarts."""
    test_content = "Persistent memory test"
    
    # Store memory
    await memory_server.store_memory(content=test_content)
    
    # Simulate server restart
    await memory_server.shutdown()
    await memory_server.initialize(InitializationOptions(
        server_name="test-memory",
        server_version="0.1.0"
    ))
    
    # Verify memory persists
    results = await memory_server.exact_match_retrieve(
        content=test_content
    )
    assert len(results) == 1
    assert results[0] == test_content
```

--------------------------------------------------------------------------------
/claude_commands/memory-ingest-dir.md:
--------------------------------------------------------------------------------

```markdown
# memory-ingest-dir

Batch ingest all supported documents from a directory into the MCP Memory Service database.

## Usage

```
claude /memory-ingest-dir <directory_path> [--tags TAG1,TAG2] [--recursive] [--file-extensions EXT1,EXT2] [--chunk-size SIZE] [--chunk-overlap SIZE] [--max-files COUNT]
```

## Parameters

- `directory_path`: Path to the directory containing documents to ingest (required)
- `--tags`: Comma-separated list of tags to apply to all memories created
- `--recursive`: Process subdirectories recursively (default: true)
- `--file-extensions`: Comma-separated list of file extensions to process (default: pdf,txt,md,json)
- `--chunk-size`: Target size for text chunks in characters (default: 1000)
- `--chunk-overlap`: Characters to overlap between chunks (default: 200)
- `--max-files`: Maximum number of files to process (default: 100)

## Supported Formats

- PDF files (.pdf)
- Text files (.txt, .md, .markdown, .rst)
- JSON files (.json)

## Implementation

I need to upload multiple documents from a directory to the MCP Memory Service HTTP API endpoint.

First, let me check if the service is running and find all supported files in the directory:

```bash
# Check if the service is running
curl -s http://localhost:8080/api/health || curl -s http://localhost:8443/api/health || echo "Service not running"

# Find supported files in the directory
find "$DIRECTORY_PATH" -type f \( -iname "*.pdf" -o -iname "*.txt" -o -iname "*.md" -o -iname "*.json" \) | head -n $MAX_FILES
```

Then I'll upload the files in batch:

```bash
# Create a temporary script to upload files
UPLOAD_SCRIPT=$(mktemp)
cat > "$UPLOAD_SCRIPT" << 'EOF'
#!/bin/bash
TAGS="$1"
CHUNK_SIZE="$2"
CHUNK_OVERLAP="$3"
MAX_FILES="$4"
shift 4
FILES=("$@")

for file in "${FILES[@]}"; do
  echo "Uploading: $file"
  curl -X POST "http://localhost:8080/api/documents/upload" \
    -F "file=@$file" \
    -F "tags=$TAGS" \
    -F "chunk_size=$CHUNK_SIZE" \
    -F "chunk_overlap=$CHUNK_OVERLAP" \
    -F "memory_type=document"
  echo ""
done
EOF

chmod +x "$UPLOAD_SCRIPT"
```

## Examples

```
# Ingest all PDFs from a directory
claude /memory-ingest-dir ./docs --file-extensions pdf --tags documentation

# Recursively ingest from knowledge base
claude /memory-ingest-dir ./knowledge-base --recursive --tags knowledge,reference

# Limit processing to specific formats
claude /memory-ingest-dir ./articles --file-extensions md,txt --max-files 50 --tags articles
```

## Actual Execution Steps

When you run this command, I will:

1. **Scan the directory** for supported file types (PDF, TXT, MD, JSON)
2. **Apply filtering** based on file extensions and max files limit
3. **Validate the service** is running and accessible
4. **Upload files in batch** using the documents API endpoint
5. **Monitor progress** for each file and show real-time updates
6. **Report results** including total chunks created and any errors

All documents will be processed with consistent tagging and chunking parameters.
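
For reference, a rough Python equivalent of the upload loop (illustrative only; it assumes the same `/api/documents/upload` endpoint and form fields as the curl example above):

```python
from pathlib import Path

import requests

API = "http://localhost:8080/api/documents/upload"

def upload_directory(directory: str, tags: str, chunk_size: int = 1000,
                     chunk_overlap: int = 200, max_files: int = 100) -> None:
    """Upload supported files one by one; a failure in one file doesn't stop the batch."""
    exts = {".pdf", ".txt", ".md", ".json"}
    files = [p for p in Path(directory).rglob("*")
             if p.is_file() and p.suffix.lower() in exts][:max_files]
    for path in files:
        with open(path, "rb") as fh:
            resp = requests.post(API, files={"file": fh}, data={
                "tags": tags,
                "chunk_size": chunk_size,
                "chunk_overlap": chunk_overlap,
                "memory_type": "document",
            })
        print(f"{path}: HTTP {resp.status_code}")
```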

## Notes

- Files are processed in parallel for efficiency
- Progress is displayed with file counts and chunk statistics
- Each document gets processed independently - failures in one don't stop others
- Automatic tagging includes source directory and file type information
- Large directories may take time - consider using --max-files for testing

```

--------------------------------------------------------------------------------
/tests/timestamp/test_timestamp_simple.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python3
"""Test script to debug timestamp issues in recall functionality."""

import time
from datetime import datetime, timedelta

def test_timestamp_precision():
    """Test timestamp storage and retrieval issues."""
    
    print("=== Testing Timestamp Precision Issue ===")
    
    # Test 1: Precision loss when converting float to int
    print("\n1. Testing precision loss:")
    current_time = time.time()
    print(f"Current time (float): {current_time}")
    print(f"Current time (int): {int(current_time)}")
    print(f"Difference: {current_time - int(current_time)} seconds")
    
    # Test 2: Edge case demonstration
    print("\n2. Testing edge case with timestamps:")
    
    # Create timestamps for yesterday at midnight
    today = datetime.now().replace(hour=0, minute=0, second=0, microsecond=0)
    yesterday_start = (today - timedelta(days=1)).timestamp()
    yesterday_end = (today - timedelta(microseconds=1)).timestamp()
    
    print(f"\nYesterday range:")
    print(f"  Start (float): {yesterday_start} ({datetime.fromtimestamp(yesterday_start)})")
    print(f"  End (float): {yesterday_end} ({datetime.fromtimestamp(yesterday_end)})")
    print(f"  Start (int): {int(yesterday_start)}")
    print(f"  End (int): {int(yesterday_end)}")
    
    # Test a memory created at various times yesterday
    test_times = [
        ("00:00:00.5", yesterday_start + 0.5),
        ("00:00:30", yesterday_start + 30),
        ("12:00:00", yesterday_start + 12*3600),
        ("23:59:59.5", yesterday_end - 0.5)
    ]
    
    print("\n3. Testing memory inclusion with float vs int comparison:")
    for time_desc, timestamp in test_times:
        print(f"\n  Memory at {time_desc} (timestamp: {timestamp}):")
        
        # Float comparison
        float_included = (timestamp >= yesterday_start and timestamp <= yesterday_end)
        print(f"    Float comparison: {float_included}")
        
        # Int comparison (current implementation)
        int_included = (int(timestamp) >= int(yesterday_start) and int(timestamp) <= int(yesterday_end))
        print(f"    Int comparison: {int_included}")
        
        if float_included != int_included:
            print(f"    ⚠️  MISMATCH! Memory would be {'excluded' if float_included else 'included'} incorrectly!")
    
    # Test 4: Demonstrate the issue with ChromaDB filtering
    print("\n4. ChromaDB filter comparison issue:")
    print("  ChromaDB uses integer comparisons for numeric fields.")
    print("  When we store timestamp as int(created_at), we lose sub-second precision.")
    print("  This can cause memories to be excluded from time-based queries.")
    
    # Example of the fix
    print("\n5. Proposed fix:")
    print("  Option 1: Store timestamp as float in metadata (if ChromaDB supports it)")
    print("  Option 2: Store timestamp with higher precision (e.g., milliseconds as int)")
    print("  Option 3: Use ISO string timestamps for filtering")
    
    # Test millisecond precision
    print("\n6. Testing millisecond precision:")
    current_ms = int(current_time * 1000)
    print(f"  Current time in ms: {current_ms}")
    print(f"  Reconstructed time: {current_ms / 1000}")
    print(f"  Precision preserved: {abs((current_ms / 1000) - current_time) < 0.001}")

if __name__ == "__main__":
    test_timestamp_precision()
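
# Hypothetical sketch of option 2 above: store integer milliseconds so that
# integer comparisons keep sub-second precision.
#
#   created_at_ms = int(created_at * 1000)                  # store
#   created_at = created_at_ms / 1000                       # retrieve
#   where = {"created_at_ms": {"$gte": int(start * 1000),
#                              "$lte": int(end * 1000)}}    # filter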

```

--------------------------------------------------------------------------------
/run_server.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python3
"""Run the MCP Memory Service with HTTP/HTTPS/mDNS support via FastAPI."""

import os
import sys
import uvicorn
import logging

# Add src to path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'src'))

# Set up logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

if __name__ == "__main__":
    # Log configuration
    logger.info("Starting MCP Memory Service FastAPI server with the following configuration:")
    logger.info(f"  Storage Backend: {os.environ.get('MCP_MEMORY_STORAGE_BACKEND', 'sqlite_vec')}")
    logger.info(f"  HTTP Port: {os.environ.get('MCP_HTTP_PORT', '8000')}")
    logger.info(f"  HTTPS Enabled: {os.environ.get('MCP_HTTPS_ENABLED', 'false')}")
    logger.info(f"  HTTPS Port: {os.environ.get('MCP_HTTPS_PORT', '8443')}")
    logger.info(f"  mDNS Enabled: {os.environ.get('MCP_MDNS_ENABLED', 'false')}")
    logger.info(f"  API Key Set: {'Yes' if os.environ.get('MCP_API_KEY') else 'No'}")
    
    http_port = int(os.environ.get('MCP_HTTP_PORT', 8000))
    
    # Check if HTTPS is enabled
    if os.environ.get('MCP_HTTPS_ENABLED', 'false').lower() == 'true':
        https_port = int(os.environ.get('MCP_HTTPS_PORT', 8443))
        
        # Check for environment variable certificates first
        cert_file = os.environ.get('MCP_SSL_CERT_FILE')
        key_file = os.environ.get('MCP_SSL_KEY_FILE')
        
        if cert_file and key_file:
            # Use provided certificates
            if not os.path.exists(cert_file):
                logger.error(f"Certificate file not found: {cert_file}")
                sys.exit(1)
            if not os.path.exists(key_file):
                logger.error(f"Key file not found: {key_file}")
                sys.exit(1)
            logger.info(f"Using provided certificates: {cert_file}")
        else:
            # Generate self-signed certificate if needed
            cert_dir = os.path.expanduser("~/.mcp_memory_certs")
            os.makedirs(cert_dir, exist_ok=True)
            cert_file = os.path.join(cert_dir, "cert.pem")
            key_file = os.path.join(cert_dir, "key.pem")
            
            if not os.path.exists(cert_file) or not os.path.exists(key_file):
                logger.info("Generating self-signed certificate for HTTPS...")
                import subprocess
                subprocess.run([
                    "openssl", "req", "-x509", "-newkey", "rsa:4096",
                    "-keyout", key_file, "-out", cert_file,
                    "-days", "365", "-nodes",
                    "-subj", "/C=US/ST=State/L=City/O=MCP/CN=localhost"
                ], check=True)
                logger.info(f"Certificate generated at {cert_dir}")
        
        # Run with HTTPS
        logger.info(f"Starting HTTPS server on port {https_port}")
        uvicorn.run(
            "mcp_memory_service.web.app:app",
            host="0.0.0.0",
            port=https_port,
            ssl_keyfile=key_file,
            ssl_certfile=cert_file,
            reload=False,
            log_level="info"
        )
    else:
        # Run HTTP only
        logger.info(f"Starting HTTP server on port {http_port}")
        uvicorn.run(
            "mcp_memory_service.web.app:app",
            host="0.0.0.0",
            port=http_port,
            reload=False,
            log_level="info"
        )
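
# Example HTTPS invocation (assumed certificate paths, adjust as needed):
#   MCP_HTTPS_ENABLED=true MCP_SSL_CERT_FILE=/path/to/cert.pem \
#   MCP_SSL_KEY_FILE=/path/to/key.pem python run_server.py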
```

--------------------------------------------------------------------------------
/tools/docker/Dockerfile:
--------------------------------------------------------------------------------

```dockerfile
## Slim (CPU-only) Docker image optimized for sqlite-vec + ONNX
## Consolidated as the primary Dockerfile to avoid confusion.
FROM python:3.12-slim

# Build arguments for conditional features
ARG SKIP_MODEL_DOWNLOAD=false
ARG PLATFORM=linux/amd64
ARG INSTALL_EXTRA="[sqlite]"
ARG FORCE_CPU_PYTORCH=false

# Set environment variables
ENV PYTHONUNBUFFERED=1 \
    MCP_MEMORY_STORAGE_BACKEND=sqlite_vec \
    MCP_MEMORY_SQLITE_PATH=/app/sqlite_db \
    MCP_MEMORY_BACKUPS_PATH=/app/backups \
    PYTHONPATH=/app/src \
    DOCKER_CONTAINER=1 \
    CHROMA_TELEMETRY_IMPL=none \
    ANONYMIZED_TELEMETRY=false \
    HF_HUB_DISABLE_TELEMETRY=1

# Set the working directory
WORKDIR /app

# Minimal system packages with security updates
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    curl \
    bash \
    && apt-get upgrade -y \
    && rm -rf /var/lib/apt/lists/* \
    && apt-get clean

# Copy essential files
COPY pyproject.toml .
COPY uv.lock .
COPY README.md .
COPY scripts/installation/install_uv.py .

# Install UV
RUN python install_uv.py

# Create directories for data persistence
RUN mkdir -p /app/sqlite_db /app/backups

# Copy source code
COPY src/ /app/src/
COPY run_server.py ./

# Copy utility scripts (COPY is unconditional; the build fails if these are missing)
COPY scripts/utils/uv_wrapper.py ./uv_wrapper.py
COPY scripts/utils/memory_wrapper_uv.py ./memory_wrapper_uv.py

# Copy Docker entrypoint scripts
COPY tools/docker/docker-entrypoint.sh /usr/local/bin/
COPY tools/docker/docker-entrypoint-persistent.sh /usr/local/bin/
COPY tools/docker/docker-entrypoint-unified.sh /usr/local/bin/

# Install the package with UV (configurable dependency group)
# Use CPU-only PyTorch by default to save disk space in CI/test environments
RUN if [ "$FORCE_CPU_PYTORCH" = "true" ] || [ "$INSTALL_EXTRA" = "[sqlite]" ]; then \
        echo "Installing CPU-only PyTorch to save disk space..."; \
        python -m uv pip install torch --index-url https://download.pytorch.org/whl/cpu; \
        python -m uv pip install -e .${INSTALL_EXTRA}; \
    else \
        python -m uv pip install -e .${INSTALL_EXTRA}; \
    fi

# Conditionally pre-download ONNX models for lightweight embedding
RUN if [ "$SKIP_MODEL_DOWNLOAD" != "true" ]; then \
        echo "Pre-downloading ONNX embedding models..." && \
        python -c "import onnxruntime; print('ONNX runtime available for lightweight embeddings'); print('ONNX models will be downloaded at runtime as needed')" \
            || { echo "Warning: ONNX runtime not available"; echo "Embedding functionality may be limited"; }; \
    else \
        echo "Skipping model download (SKIP_MODEL_DOWNLOAD=true)"; \
    fi

# Configure stdio for MCP communication and make entrypoints executable
RUN chmod a+rw /dev/stdin /dev/stdout /dev/stderr && \
    chmod +x /usr/local/bin/docker-entrypoint.sh && \
    chmod +x /usr/local/bin/docker-entrypoint-persistent.sh && \
    chmod +x /usr/local/bin/docker-entrypoint-unified.sh

# Add volume mount points for data persistence
VOLUME ["/app/sqlite_db", "/app/backups"]

# Expose the port (if needed)
EXPOSE 8000

# Use the unified entrypoint script by default
# Can be overridden with docker-entrypoint.sh for backward compatibility
ENTRYPOINT ["/usr/local/bin/docker-entrypoint-unified.sh"]

```

--------------------------------------------------------------------------------
/archive/docs-removed-2025-08-23/claude-desktop-setup.md:
--------------------------------------------------------------------------------

```markdown
# Claude Desktop Setup Guide - Windows

This guide helps you configure the MCP Memory Service to work with Claude Desktop on Windows without repeated PyTorch downloads.

## Problem and Solution

**Issue**: Claude Desktop was downloading PyTorch models (230MB+) on every startup due to missing offline configuration.

**Solution**: Added offline mode environment variables to your Claude Desktop config to use cached models.

## What Was Fixed

✅ **Your Claude Desktop Config Updated**:
- Added offline mode environment variables (`HF_HUB_OFFLINE=1`, `TRANSFORMERS_OFFLINE=1`)
- Added cache path configurations 
- Kept your existing SQLite-vec backend setup

✅ **Verified Components Working**:
- SQLite-vec database: 434 memories accessible ✅
- sentence-transformers: Loading models without downloads ✅
- Offline mode: No network requests when properly configured ✅
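
For reference, here is a minimal Python sketch (not part of the service code) of how these variables behave; the model name is an assumption based on the service's default embedding model:

```python
import os

# The offline variables must be set before any Hugging Face import.
os.environ["HF_HUB_OFFLINE"] = "1"         # block Hugging Face Hub requests
os.environ["TRANSFORMERS_OFFLINE"] = "1"   # force use of the local cache

from sentence_transformers import SentenceTransformer

# Loads from the local HF cache if present; raises instead of downloading.
model = SentenceTransformer("all-MiniLM-L6-v2")
print(model.encode(["offline smoke test"]).shape)
```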

## Current Working Configuration

Your active config at `%APPDATA%\Claude\claude_desktop_config.json` now has:

- **Backend**: SQLite-vec (single database file)
- **Database**: `memory_migrated.db` with 434 memories
- **Offline Mode**: Enabled to prevent downloads
- **UV Package Manager**: For better dependency management

## For Other Users

See `examples/claude_desktop_config_windows.json` for an anonymized template with:
- SQLite-vec backend configuration (recommended)
- ChromaDB alternative configuration  
- Offline mode settings
- Performance optimizations

Replace `YOUR_USERNAME` with your actual Windows username.

## Key Changes Made

### 1. Config Template Updates
- Removed `PYTHONNOUSERSITE=1`, `PIP_NO_DEPENDENCIES=1`, `PIP_NO_INSTALL=1`
- These were blocking access to globally installed packages

### 2. Server Path Detection
Enhanced `src/mcp_memory_service/server.py`:
- Better virtual environment detection
- Windows-specific path handling
- Global site-packages access when not blocked

### 3. Dependency Checking
Improved `src/mcp_memory_service/dependency_check.py`:
- Enhanced model cache detection for Windows
- Better first-run detection logic
- Multiple cache location checking

### 4. Storage Backend Fixes
Updated both ChromaDB and SQLite-vec storage:
- Fixed hardcoded Linux paths
- Added offline mode configuration
- Better cache path detection

## Verification

After updating your Claude Desktop config:

1. **Restart Claude Desktop** completely
2. **Check the logs** - you should see:
   ```
   ✅ All dependencies are installed
   DEBUG: Found cached model in C:\Users\[username]\.cache\huggingface\hub
   ```
3. **No more downloads** - The 230MB PyTorch download should not occur

## Testing

You can test the server directly:
```bash
python scripts/run_memory_server.py --debug
```

You should see dependency checking passes and models load from cache.

## Troubleshooting

If you still see downloads:
1. Verify your username in the config paths
2. Check that models exist in `%USERPROFILE%\.cache\huggingface\hub`
3. Ensure Claude Desktop has been fully restarted

## Files Modified

- `examples/claude_desktop_config_template.json` - Removed blocking env vars
- `examples/claude_desktop_config_windows.json` - New Windows-specific config
- `src/mcp_memory_service/server.py` - Enhanced path detection
- `src/mcp_memory_service/dependency_check.py` - Better cache detection
- `src/mcp_memory_service/storage/sqlite_vec.py` - Fixed hardcoded paths
- `src/mcp_memory_service/storage/chroma.py` - Added offline mode support
```

--------------------------------------------------------------------------------
/archive/docs-removed-2025-08-23/development/TIMESTAMP_FIX_SUMMARY.md:
--------------------------------------------------------------------------------

```markdown
# Timestamp Recall Fix Summary

## Issue Description
The memory recall functionality in mcp-memory-service was experiencing issues with timestamp-based queries. Memories stored with precise timestamps (including sub-second precision) were being retrieved incorrectly or not at all when using time-based recall functions.

## Root Cause
The issue was caused by timestamps being converted to integers at multiple points in the codebase:

1. **Storage**: In `ChromaMemoryStorage._optimize_metadata_for_chroma()`, timestamps were being stored as `int(memory.created_at)`
2. **Querying**: In the `recall()` method, timestamp comparisons were using `int(start_timestamp)` and `int(end_timestamp)`
3. **Memory Model**: In `Memory.to_dict()`, the timestamp field was being converted to `int(self.created_at)`

This integer conversion caused loss of sub-second precision, making all memories within the same second indistinguishable by timestamp.
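
A standalone illustration of the collapse (not project code):

```python
# Two memories stored 250 ms apart within the same second:
a, b = 1718464123.100, 1718464123.350

assert int(a) == int(b)      # stored as int(): indistinguishable
assert float(a) != float(b)  # stored as float(): sub-second order preserved
```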

## Changes Made

### 1. Fixed Timestamp Storage (chroma.py)
Changed line 949 in `_optimize_metadata_for_chroma()`:
```python
# Before:
"timestamp": int(memory.created_at),

# After:
"timestamp": float(memory.created_at),  # Changed from int() to float()
```

### 2. Fixed Timestamp Queries (chroma.py)
Changed lines 739 and 743 in the `recall()` method:
```python
# Before:
where_clause["$and"].append({"timestamp": {"$gte": int(start_timestamp)}})
where_clause["$and"].append({"timestamp": {"$lte": int(end_timestamp)}})

# After:
where_clause["$and"].append({"timestamp": {"$gte": float(start_timestamp)}})
where_clause["$and"].append({"timestamp": {"$lte": float(end_timestamp)}})
```

### 3. Fixed Memory Model (memory.py)
Changed line 161 in `Memory.to_dict()`:
```python
# Before:
"timestamp": int(self.created_at),  # Legacy timestamp (int)

# After:
"timestamp": float(self.created_at),  # Changed from int() to preserve precision
```

### 4. Fixed Date Parsing Order (time_parser.py)
Moved the full ISO date pattern check before the specific date pattern check to prevent "2024-06-15" from being incorrectly parsed as "24-06-15".
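
A toy illustration of the ordering issue, using hypothetical patterns rather than the actual `time_parser.py` code:

```python
import re

SHORT_DATE = re.compile(r"\d{2}-\d{2}-\d{2}")   # e.g. 24-06-15
FULL_ISO   = re.compile(r"\d{4}-\d{2}-\d{2}")   # e.g. 2024-06-15

text = "2024-06-15"
print(SHORT_DATE.search(text).group())  # '24-06-15' -- wrong date if checked first
print(FULL_ISO.search(text).group())    # '2024-06-15' -- correct, so check it first
```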

## Tests Added

### 1. Timestamp Recall Tests (`tests/test_timestamp_recall.py`)
- Tests for timestamp precision storage
- Tests for natural language time parsing
- Tests for various recall scenarios (yesterday, last week, specific dates)
- Tests for combined semantic and time-based recall
- Edge case tests

### 2. Time Parser Tests (`tests/test_time_parser.py`)
- Comprehensive tests for all supported time expressions
- Tests for relative dates (yesterday, 3 days ago, last week)
- Tests for specific date formats (MM/DD/YYYY, YYYY-MM-DD)
- Tests for seasons, holidays, and named periods
- Tests for time extraction from queries

## Verification
The fix has been verified with:
1. Unit tests covering individual components
2. Integration tests demonstrating end-to-end functionality
3. Precision tests showing that sub-second timestamps are now preserved

## Impact
- Memories can now be recalled with precise timestamp filtering
- Sub-second precision is maintained throughout storage and retrieval
- Natural language time expressions work correctly
- No breaking changes to existing functionality

## Recommendations
1. Consider adding database migration for existing memories with integer timestamps
2. Monitor performance impact of float vs integer comparisons in large datasets
3. Add documentation about supported time expressions for users

```

--------------------------------------------------------------------------------
/.github/workflows/LATEST_FIXES.md:
--------------------------------------------------------------------------------

```markdown
# Latest Workflow Fixes (2024-08-24)

## Issues Resolved

### 1. Workflow Conflicts
**Problem**: Both `main.yml` and `main-optimized.yml` running simultaneously with same triggers, concurrency groups, and job names.

**Solution**: 
- Temporarily disabled `main.yml` → `main.yml.disabled`
- Allows `main-optimized.yml` to run without conflicts

### 2. Matrix Strategy Failures
**Problem**: Matrix jobs failing fast, stopping entire workflow on single failure.

**Solutions Applied**:
- Added `fail-fast: false` to both test and publish-docker matrix strategies
- Allows other matrix combinations to continue even if one fails
- Improved fault tolerance

### 3. Missing Debugging Information
**Problem**: Workflow failures lacked context about what exactly was failing.

**Solutions Applied**:
- Added comprehensive debugging steps to test jobs
- Environment information logging (Python version, disk space, etc.)
- File system validation before operations
- Progress indicators with emojis for better visibility

### 4. Poor Error Handling
**Problem**: Jobs failed completely on minor issues, preventing workflow completion.

**Solutions Applied**:
- Added `continue-on-error: true` to optional steps
- Improved conditional logic for Docker Hub authentication
- Better fallback handling for missing test directories
- More informative error messages

### 5. Dependency Issues
**Problem**: Jobs failing due to missing files, credentials, or dependencies.

**Solutions Applied**:
- Added pre-flight checks for required files (Dockerfile, src/, pyproject.toml)
- Enhanced credential validation
- Created fallback test when test directory missing
- Improved job dependency conditions

## Specific Changes Made

### main-optimized.yml
```yaml
# Added debugging
- name: Debug environment
  run: |
    echo "🐍 Python version: $(python --version)"
    # ... more debug info

# Fixed matrix strategies  
strategy:
  fail-fast: false  # ← Key addition
  matrix:
    # ... existing matrix

# Enhanced test steps with validation
- name: Run unit tests
  if: matrix.test-type == 'unit'
  run: |
    echo "🧪 Starting unit tests..."
    # ... detailed steps with error handling

# Improved Docker build validation
- name: Check Docker requirements
  run: |
    echo "🐳 Checking Docker build requirements..."
    # ... file validation
```

### File Changes
- `main.yml` → `main.yml.disabled` (temporary)
- Enhanced error handling in both workflows
- Added comprehensive debugging throughout

## Expected Improvements

1. **Workflow Stability**: No more conflicts between competing workflows
2. **Better Diagnostics**: Clear logging shows where issues occur
3. **Fault Tolerance**: Individual job failures don't stop entire workflow
4. **Graceful Degradation**: Missing credentials/dependencies handled elegantly
5. **Developer Experience**: Emojis and clear messages improve log readability

## Testing Strategy

1. **Immediate**: Push changes trigger main-optimized.yml only
2. **Monitor**: Watch for improved error messages and debugging info
3. **Validate**: Ensure matrix jobs complete independently
4. **Rollback**: Original main.yml available if needed

## Success Metrics

- ✅ Workflows complete without conflicts
- ✅ Matrix jobs show individual results
- ✅ Clear error messages when issues occur  
- ✅ Graceful handling of missing credentials
- ✅ Debugging information helps troubleshoot future issues

Date: 2024-08-24  
Status: Applied and ready for testing
```

--------------------------------------------------------------------------------
/archive/docs-removed-2025-08-23/development/CLEANUP_SUMMARY.md:
--------------------------------------------------------------------------------

```markdown
# 🧹 MCP-Memory-Service Cleanup Summary

**Date:** June 7, 2025  
**Operation:** Artifact Cleanup & Reorganization

## 📊 **Cleanup Statistics**

### **Files Archived:**
- **Memory Service**: 11 test/debug files → `/archive/`
- **Dashboard**: 37 test/debug files → `/archive/test-files/`
- **Total**: 48 obsolete artifacts safely preserved

### **Backups Created:**
- `mcp-memory-service-backup-20250607-0705.tar.gz` (193MB)
- `mcp-memory-dashboard-backup-20250607-0706.tar.gz` (215MB)

## 🗂️ **Files Moved to Archive**

### **Memory Service (`/archive/`):**
```
alternative_test_server.py
compatibility_test_server.py
diagnose_mcp.py
fixed_memory_server.py
memory_wrapper.py
memory_wrapper_uv.py
minimal_uv_server.py
simplified_memory_server.py
test_client.py
ultimate_protocol_debug.py
uv_wrapper.py
```

### **Dashboard (`/archive/test-files/`):**
```
All test_*.js files (20+ files)
All test_*.py files (5+ files)  
All test_*.sh files (5+ files)
*_minimal_server.py files
investigation.js & investigation_report.json
comprehensive_*test* files
final_*test* files
```

### **Dashboard (`/archive/`):**
```
CLAUDE_INTEGRATION_TEST.md
INTEGRATION_ACTION_PLAN.md
RESTORATION_COMPLETE.md
investigation.js
investigation_report.json
ultimate_investigation.js
ultimate_investigation.sh
```

## ✅ **Post-Cleanup Verification**

### **Memory Service Status:**
- ✅ Database Health: HEALTHY
- ✅ Total Memories: 164 (increased from previous 162)
- ✅ Storage: 8.36 MB
- ✅ Dashboard Integration: WORKING
- ✅ Core Operations: ALL FUNCTIONAL

### **Tests Performed:**
1. Database health check ✅
2. Dashboard health check ✅  
3. Memory storage operation ✅
4. Memory retrieval operation ✅

## 🎯 **Production Files Preserved**

### **Memory Service Core:**
- `src/mcp_memory_service/server.py` - Main server
- `src/mcp_memory_service/server copy.py` - **CRITICAL BACKUP**
- All core implementation files
- Configuration files (pyproject.toml, etc.)
- Documentation (README.md, CHANGELOG.md)

### **Dashboard Core:**
- `src/` directory - Main dashboard implementation
- Configuration files (package.json, vite.config.ts, etc.)
- Build scripts and deployment files

## 📁 **Directory Structure (Cleaned)**

### **Memory Service:**
```
mcp-memory-service/
├── src/mcp_memory_service/    # Core implementation
├── scripts/                   # Utility scripts
├── tests/                     # Test suite
├── archive/                   # Archived test artifacts
├── pyproject.toml            # Project config
├── requirements.txt          # Dependencies
└── README.md                 # Documentation
```

### **Dashboard:**
```
mcp-memory-dashboard/
├── src/                      # Core dashboard
├── dist/                     # Built files
├── archive/                  # Archived test artifacts
├── package.json             # Project config
├── vite.config.ts           # Build config
└── README.md                # Documentation
```

## 🔒 **Safety Measures**

1. **Full backups created** before any file operations
2. **Archives created** instead of deletion (nothing lost)
3. **Critical files preserved** (especially `server copy.py`)
4. **Functionality verified** after cleanup
5. **Production code untouched**

## 📝 **Next Steps**

1. ✅ Memory service cleanup complete
2. 🔄 Dashboard integration testing (next phase)
3. 🎯 Focus on remaining dashboard issues
4. 📊 Performance optimization if needed

---

**Result: Clean, organized codebase with all production functionality intact! 🚀**

```

--------------------------------------------------------------------------------
/docs/glama-deployment.md:
--------------------------------------------------------------------------------

```markdown
# Glama Deployment Guide

This guide provides instructions for deploying the MCP Memory Service on the Glama platform.

## Overview

The MCP Memory Service is now available on Glama at: https://glama.ai/mcp/servers/bzvl3lz34o

Glama is a directory for MCP servers that provides easy discovery and deployment options for users.

## Docker Configuration for Glama

### Primary Dockerfile

The repository includes an optimized Dockerfile specifically for Glama deployment:
- `Dockerfile` - Main production Dockerfile
- `Dockerfile.glama` - Glama-optimized version with enhanced labels and health checks

### Key Features

1. **Multi-platform Support**: Works on x86_64 and ARM64 architectures
2. **Health Checks**: Built-in health monitoring for container status
3. **Data Persistence**: Proper volume configuration for ChromaDB and backups
4. **Environment Configuration**: Pre-configured for optimal performance
5. **Security**: Minimal attack surface with slim Python base image

### Quick Start from Glama

Users can deploy the service using:

```bash
# Using the Glama-provided configuration
docker run -d -p 8000:8000 \
  -v $(pwd)/data/chroma_db:/app/chroma_db \
  -v $(pwd)/data/backups:/app/backups \
  doobidoo/mcp-memory-service:latest
```

### Environment Variables

The following environment variables are pre-configured:

| Variable | Value | Purpose |
|----------|-------|---------|
| `MCP_MEMORY_CHROMA_PATH` | `/app/chroma_db` | ChromaDB storage location |
| `MCP_MEMORY_BACKUPS_PATH` | `/app/backups` | Backup storage location |
| `DOCKER_CONTAINER` | `1` | Indicates Docker environment |
| `CHROMA_TELEMETRY_IMPL` | `none` | Disables ChromaDB telemetry |
| `PYTORCH_ENABLE_MPS_FALLBACK` | `1` | Enables MPS fallback for Apple Silicon |

### Standalone Mode

For deployment without an active MCP client, use:

```bash
docker run -d -p 8000:8000 \
  -e MCP_STANDALONE_MODE=1 \
  -v $(pwd)/data/chroma_db:/app/chroma_db \
  -v $(pwd)/data/backups:/app/backups \
  doobidoo/mcp-memory-service:latest
```

## Glama Platform Integration

### Server Verification

The Dockerfile passes all Glama server checks:
- ✅ Valid Dockerfile syntax
- ✅ Proper base image
- ✅ Security best practices
- ✅ Health check implementation
- ✅ Volume configuration
- ✅ Port exposure

### User Experience

Glama users benefit from:
1. **One-click deployment** from the Glama interface
2. **Pre-configured settings** for immediate use
3. **Documentation integration** with setup instructions
4. **Community feedback** and ratings
5. **Version tracking** and update notifications

### Monitoring and Health

The Docker image includes health checks that verify:
- Python environment is working
- MCP Memory Service can be imported
- Dependencies are properly loaded
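
A hedged sketch of what such an import-based probe could look like (the actual `HEALTHCHECK` instruction lives in the Dockerfile):

```python
import importlib
import sys

try:
    importlib.import_module("mcp_memory_service")
    sys.exit(0)  # container reports healthy
except Exception as exc:
    print(f"unhealthy: {exc}", file=sys.stderr)
    sys.exit(1)  # container reports unhealthy
```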

## Maintenance

### Updates

The Glama listing is automatically updated when:
1. New versions are tagged in the GitHub repository
2. Docker images are published to Docker Hub
3. Documentation is updated

### Support

For Glama-specific issues:
1. Check the Glama platform documentation
2. Verify Docker configuration
3. Review container logs for errors
4. Test with standalone mode for debugging

## Contributing

To improve the Glama integration:
1. Test the deployment on different platforms
2. Provide feedback on the installation experience
3. Suggest improvements to the Docker configuration
4. Report any platform-specific issues

The goal is to make the MCP Memory Service as accessible as possible to the 60k+ monthly Glama users.
```

--------------------------------------------------------------------------------
/scripts/testing/test_memory_simple.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python3
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Simple test for memory CRUD operations."""

import requests
import json
import time

BASE_URL = "http://localhost:8000"

def test_memory_crud():
    """Test the complete CRUD workflow for memories."""
    
    print("Testing Memory CRUD Operations")
    print("=" * 40)
    
    # Test 1: Health check
    print("\n[1] Health check...")
    try:
        resp = requests.get(f"{BASE_URL}/api/health", timeout=5)
        if resp.status_code == 200:
            print("[PASS] Server is healthy")
        else:
            print(f"[FAIL] Health check failed: {resp.status_code}")
            return
    except Exception as e:
        print(f"[FAIL] Cannot connect: {e}")
        return
    
    # Test 2: Store memory
    print("\n[2] Storing memory...")
    test_memory = {
        "content": "Test memory for API verification",
        "tags": ["test", "api"],
        "memory_type": "test",
        "metadata": {"timestamp": time.time()}
    }
    
    try:
        resp = requests.post(
            f"{BASE_URL}/api/memories",
            json=test_memory,
            headers={"Content-Type": "application/json"},
            timeout=10
        )
        
        if resp.status_code == 200:
            result = resp.json()
            if result["success"]:
                content_hash = result["content_hash"]
                print(f"[PASS] Memory stored: {content_hash[:12]}...")
            else:
                print(f"[FAIL] Storage failed: {result['message']}")
                return
        else:
            print(f"[FAIL] Storage failed: {resp.status_code}")
            print(f"Error: {resp.text}")
            return
    except Exception as e:
        print(f"[FAIL] Storage error: {e}")
        return
    
    # Test 3: List memories
    print("\n[3] Listing memories...")
    try:
        resp = requests.get(f"{BASE_URL}/api/memories?page=1&page_size=5", timeout=10)
        if resp.status_code == 200:
            result = resp.json()
            print(f"[PASS] Found {len(result['memories'])} memories")
            print(f"Total: {result['total']}")
        else:
            print(f"[FAIL] Listing failed: {resp.status_code}")
    except Exception as e:
        print(f"[FAIL] Listing error: {e}")
    
    # Test 4: Delete memory
    print("\n[4] Deleting memory...")
    try:
        resp = requests.delete(f"{BASE_URL}/api/memories/{content_hash}", timeout=10)
        if resp.status_code == 200:
            result = resp.json()
            if result["success"]:
                print("[PASS] Memory deleted")
            else:
                print(f"[FAIL] Deletion failed: {result['message']}")
        else:
            print(f"[FAIL] Deletion failed: {resp.status_code}")
    except Exception as e:
        print(f"[FAIL] Deletion error: {e}")
    
    print("\n" + "=" * 40)
    print("CRUD testing completed!")

if __name__ == "__main__":
    test_memory_crud()
```

--------------------------------------------------------------------------------
/src/mcp_memory_service/web/oauth/discovery.py:
--------------------------------------------------------------------------------

```python
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
OAuth 2.1 Discovery endpoints for MCP Memory Service.

Implements .well-known endpoints required for OAuth 2.1 Dynamic Client Registration.
"""

import logging
from fastapi import APIRouter
from ...config import OAUTH_ISSUER, get_jwt_algorithm
from .models import OAuthServerMetadata

logger = logging.getLogger(__name__)

router = APIRouter()

@router.get("/.well-known/oauth-authorization-server/mcp")
async def oauth_authorization_server_metadata() -> OAuthServerMetadata:
    """
    OAuth 2.1 Authorization Server Metadata endpoint.

    Returns metadata about the OAuth 2.1 authorization server as specified
    in RFC 8414. This endpoint is required for OAuth 2.1 Dynamic Client Registration.
    """
    logger.info("OAuth authorization server metadata requested")

    # Use OAUTH_ISSUER consistently for both issuer field and endpoint URLs
    # This ensures URL consistency across discovery and JWT token validation
    algorithm = get_jwt_algorithm()
    metadata = OAuthServerMetadata(
        issuer=OAUTH_ISSUER,
        authorization_endpoint=f"{OAUTH_ISSUER}/oauth/authorize",
        token_endpoint=f"{OAUTH_ISSUER}/oauth/token",
        registration_endpoint=f"{OAUTH_ISSUER}/oauth/register",
        grant_types_supported=["authorization_code", "client_credentials"],
        response_types_supported=["code"],
        token_endpoint_auth_methods_supported=["client_secret_basic", "client_secret_post"],
        scopes_supported=["read", "write", "admin"],
        id_token_signing_alg_values_supported=[algorithm]
    )

    logger.debug(f"Returning OAuth metadata: issuer={metadata.issuer}")
    return metadata


@router.get("/.well-known/openid-configuration/mcp")
async def openid_configuration() -> OAuthServerMetadata:
    """
    OpenID Connect Discovery endpoint.

    Some OAuth 2.1 clients may also check this endpoint for compatibility.
    For now, we return the same metadata as the OAuth authorization server.
    """
    logger.info("OpenID Connect configuration requested")

    # Return the same metadata as OAuth authorization server for compatibility
    return await oauth_authorization_server_metadata()


@router.get("/.well-known/oauth-authorization-server")
async def oauth_authorization_server_metadata_generic() -> OAuthServerMetadata:
    """
    Generic OAuth 2.1 Authorization Server Metadata endpoint.

    Fallback endpoint for clients that don't append the /mcp suffix.
    """
    logger.info("Generic OAuth authorization server metadata requested")
    return await oauth_authorization_server_metadata()


@router.get("/.well-known/openid-configuration")
async def openid_configuration_generic() -> OAuthServerMetadata:
    """
    Generic OpenID Connect Discovery endpoint.

    Fallback endpoint for clients that don't append the /mcp suffix.
    """
    logger.info("Generic OpenID Connect configuration requested")
    return await oauth_authorization_server_metadata()
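
# Usage sketch (assumption: server listening locally on port 8000):
#   curl http://localhost:8000/.well-known/oauth-authorization-server/mcp
# The generic fallback routes above serve the same RFC 8414 metadata
# without the /mcp suffix.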
```

--------------------------------------------------------------------------------
/docs/mastery/api-reference.md:
--------------------------------------------------------------------------------

```markdown
# MCP Memory Service — API Reference

This document catalogs available APIs exposed via the MCP servers and summarizes request and response patterns.

## MCP (FastMCP HTTP) Tools

Defined in `src/mcp_memory_service/mcp_server.py` using `@mcp.tool()`:

- `store_memory(content, tags=None, memory_type="note", metadata=None, client_hostname=None)`
  - Stores a new memory; tags and metadata optional. If `INCLUDE_HOSTNAME=true`, a `source:<hostname>` tag and `hostname` metadata are added.
  - Response: `{ success: bool, message: str, content_hash: str }`.

- `retrieve_memory(query, n_results=5, min_similarity=0.0)`
  - Semantic search by query; returns up to `n_results` matching memories.
  - Response: `{ memories: [{ content, content_hash, tags, memory_type, created_at, similarity_score }...], query, total_results }`.

- `search_by_tag(tags, match_all=False)`
  - Search by a tag or list of tags. `match_all=true` requires all tags; otherwise any.
  - Response: `{ memories: [{ content, content_hash, tags, memory_type, created_at }...], search_tags: [...], match_all, total_results }`.

- `delete_memory(content_hash)`
  - Deletes a memory by its content hash.
  - Response: `{ success: bool, message: str, content_hash }`.

- `check_database_health()`
  - Health and status of the configured backend.
  - Response: `{ status: "healthy"|"error", backend, statistics: { total_memories, total_tags, storage_size, last_backup }, timestamp? }`.

Transport: `mcp.run("streamable-http")`, default host `0.0.0.0`, default port `8000` or `MCP_SERVER_PORT`/`MCP_SERVER_HOST`.
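
A hedged client-side sketch, assuming the official `mcp` Python SDK (`pip install mcp`) and the default `/mcp` streamable-HTTP path; tool names match the list above, but the exact SDK surface may differ by version:

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client


async def main() -> None:
    # Default FastMCP endpoint; host/port follow MCP_SERVER_HOST/MCP_SERVER_PORT
    async with streamablehttp_client("http://localhost:8000/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "store_memory",
                {"content": "Refactored auth flow to use OAuth 2.1", "tags": ["auth"]},
            )
            print(result)


asyncio.run(main())
```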

## MCP (stdio) Server Tools and Prompts

Defined in `src/mcp_memory_service/server.py` using `mcp.server.Server`. Exposes a broader set of tools/prompts beyond the core FastMCP tools above.

Highlights:

- Core memory ops: store, retrieve/search, search_by_tag(s), delete, delete_by_tag, cleanup_duplicates, update_memory_metadata, time-based recall.
- Analysis/export: knowledge_analysis, knowledge_export (supports `format: json|markdown|text`, optional filters).
- Maintenance: memory_cleanup (duplicate detection heuristics), health/stats, tag listing.
- Consolidation (optional): association, clustering, compression, forgetting tasks and schedulers when enabled.

Note: The stdio server dynamically picks storage mode for multi-client scenarios (direct SQLite-vec with WAL vs. HTTP coordination), suppresses stdout for Claude Desktop, and prints richer diagnostics for LM Studio.

## HTTP Interface

- For FastMCP, HTTP transport is used to carry MCP protocol; endpoints are handled by the FastMCP layer and not intended as a REST API surface.
- A dedicated HTTP API and dashboard exist under `src/mcp_memory_service/web/` in some distributions. In this repo version, coordination HTTP is internal and the recommended external interface is MCP.

## Error Model and Logging

- MCP tool errors are surfaced as `{ success: false, message: <details> }` or include `error` fields.
- Logging routes WARNING+ to stderr (Claude Desktop strict mode), info/debug to stdout only for LM Studio; set `LOG_LEVEL` for verbosity.

## Examples

Store memory:

```
tool: store_memory
args: { "content": "Refactored auth flow to use OAuth 2.1", "tags": ["auth", "refactor"], "memory_type": "note" }
```

Retrieve by query:

```
tool: retrieve_memory
args: { "query": "OAuth refactor", "n_results": 5 }
```

Search by tags:

```
tool: search_by_tag
args: { "tags": ["auth", "refactor"], "match_all": true }
```

Delete by hash:

```
tool: delete_memory
args: { "content_hash": "<hash>" }
```


```

--------------------------------------------------------------------------------
/scripts/pr/detect_breaking_changes.sh:
--------------------------------------------------------------------------------

```bash
#!/bin/bash
# scripts/pr/detect_breaking_changes.sh - Analyze API changes for breaking changes
#
# Usage: bash scripts/pr/detect_breaking_changes.sh <BASE_BRANCH> [HEAD_BRANCH]
# Example: bash scripts/pr/detect_breaking_changes.sh main feature/new-api
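#
# Exit codes (see the severity checks at the end of this script):
#   0 = no breaking changes detected
#   1 = MEDIUM/unspecified breaking changes (also usage or missing-CLI errors)
#   2 = HIGH severity breaking changes
#   3 = CRITICAL breaking changes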

set -e

BASE_BRANCH=${1:-main}
HEAD_BRANCH=${2:-$(git branch --show-current)}

if [ -z "$BASE_BRANCH" ]; then
    echo "Usage: $0 <BASE_BRANCH> [HEAD_BRANCH]"
    echo "Example: $0 main feature/new-api"
    exit 1
fi

if ! command -v gemini &> /dev/null; then
    echo "Error: Gemini CLI is not installed"
    exit 1
fi

echo "=== Breaking Change Detection ==="
echo "Base branch: $BASE_BRANCH"
echo "Head branch: $HEAD_BRANCH"
echo ""

# Get API-related file changes
echo "Analyzing API changes..."
api_changes=$(git diff $BASE_BRANCH...$HEAD_BRANCH -- \
    src/mcp_memory_service/tools.py \
    src/mcp_memory_service/web/api/ \
    src/mcp_memory_service/storage/base.py \
    2>/dev/null || echo "")

if [ -z "$api_changes" ]; then
    echo "✅ No API changes detected"
    echo ""
    echo "Checked paths:"
    echo "- src/mcp_memory_service/tools.py (MCP tools)"
    echo "- src/mcp_memory_service/web/api/ (Web API endpoints)"
    echo "- src/mcp_memory_service/storage/base.py (Storage interface)"
    exit 0
fi

echo "API changes detected. Analyzing for breaking changes..."
echo ""

# Check diff size and warn if large
diff_lines=$(echo "$api_changes" | wc -l)
if [ "$diff_lines" -gt 200 ]; then
    echo "⚠️  Warning: Large diff ($diff_lines lines) - analysis may miss changes beyond model context window"
    echo "   Consider reviewing the full diff manually for breaking changes"
fi

# Analyze with Gemini (full diff, not truncated)
result=$(gemini "Analyze these API changes for BREAKING CHANGES ONLY.

A breaking change is:
1. **Removed** function, method, class, or HTTP endpoint
2. **Changed function signature**: parameters removed, reordered, or made required
3. **Changed return type**: incompatible return value structure
4. **Renamed public API**: function, class, endpoint renamed without alias
5. **Changed HTTP endpoint**: path or method changed
6. **Removed configuration option**: environment variable or config field removed

NON-BREAKING changes (ignore these):
- Added new functions/endpoints (backward compatible)
- Added optional parameters with defaults
- Improved documentation
- Internal implementation changes
- Refactoring that preserves public interface

For each breaking change, provide:
- Severity: CRITICAL (data loss/security) / HIGH (blocks upgrade) / MEDIUM (migration effort)
- Type: Removed / Signature Changed / Renamed / etc.
- Location: File and function/endpoint name
- Impact: What breaks for users
- Migration: How users should adapt

API Changes:
\`\`\`diff
$api_changes
\`\`\`

Output format:
If breaking changes found:
## BREAKING CHANGES DETECTED

### [SEVERITY] Type: Location
**Impact:** <description>
**Migration:** <instructions>

If no breaking changes:
No breaking changes detected.")

echo "$result"
echo ""

# Check severity
if echo "$result" | grep -qi "CRITICAL"; then
    echo "🔴 CRITICAL breaking changes detected!"
    exit 3
elif echo "$result" | grep -qi "HIGH"; then
    echo "🟠 HIGH severity breaking changes detected!"
    exit 2
elif echo "$result" | grep -qi "MEDIUM"; then
    echo "🟡 MEDIUM severity breaking changes detected"
    exit 1
elif echo "$result" | grep -qi "breaking"; then
    echo "⚠️  Breaking changes detected (unspecified severity)"
    exit 1
else
    echo "✅ No breaking changes detected"
    exit 0
fi

```

--------------------------------------------------------------------------------
/claude_commands/memory-recall.md:
--------------------------------------------------------------------------------

```markdown
# Recall Memories by Time and Context

I'll help you retrieve memories from your MCP Memory Service using natural language time expressions and contextual queries. This command excels at finding past conversations, decisions, and notes based on when they occurred.

## What I'll do:

1. **Parse Time Expressions**: I'll interpret natural language time queries like:
   - "yesterday", "last week", "two months ago"
   - "last Tuesday", "this morning", "last summer"
   - "before the database migration", "since we started using SQLite"

2. **Context-Aware Search**: I'll consider the current project context to find relevant memories related to your current work.

3. **Smart Filtering**: I'll automatically filter results to show the most relevant memories first, considering:
   - Temporal relevance to your query
   - Project and directory context matching
   - Semantic similarity to current work

4. **Present Results**: I'll format the retrieved memories with clear context about when they were created and why they're relevant.

## Usage Examples:

```bash
claude /memory-recall "what did we decide about the database last week?"
claude /memory-recall "yesterday's architectural decisions"
claude /memory-recall "memories from when we were working on the mDNS feature"
claude /memory-recall --project "mcp-memory-service" "last month's progress"
```

## Implementation:

I'll connect to your MCP Memory Service at `https://memory.local:8443/` and use its API endpoints. The recall process involves:

1. **Query Processing**: Parse the natural language time expression and extract context clues
2. **Memory Retrieval**: Use the appropriate API endpoints:
   - `POST /api/search/by-time` - Natural language time-based queries
   - `POST /api/search` - Semantic search for context-based recall
   - `GET /api/memories` - List memories with pagination and filtering
   - `GET /api/memories/{hash}` - Retrieve specific memory by hash
3. **Context Matching**: Filter results based on current project and directory context
4. **Relevance Scoring**: Use similarity scores from the API responses
5. **Result Presentation**: Format memories with timestamps, tags, and relevance context

All requests use curl with `-k` flag for HTTPS and proper JSON formatting.
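
A hedged Python equivalent of those curl calls; the endpoint path comes from the list above, while the payload keys are assumptions:

```python
import requests

resp = requests.post(
    "https://memory.local:8443/api/search/by-time",
    json={"query": "what did we decide about the database last week?", "n_results": 10},
    verify=False,  # mirrors curl's -k for the self-signed certificate
    timeout=10,
)
resp.raise_for_status()
for memory in resp.json().get("memories", []):
    print(memory.get("created_at"), memory.get("content", "")[:80])
```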

For each recalled memory, I'll show:
- **Content**: The actual memory content
- **Created**: When the memory was stored
- **Tags**: Associated tags and categories
- **Context**: Project and session context when stored
- **Relevance**: Why this memory matches your query

## Time Expression Examples:

- **Relative**: "yesterday", "last week", "two days ago", "this month"
- **Seasonal**: "last summer", "this winter", "spring 2024"
- **Event-based**: "before the refactor", "since we switched to SQLite", "during the testing phase"
- **Specific**: "January 15th", "last Tuesday morning", "end of last month"

**Note**: Some expressions like "last hour" may not be supported by the time parser. Standard expressions like "today", "yesterday", "last week" work reliably.

## Arguments:

- `$ARGUMENTS` - The time-based query, with optional flags:
  - `--limit N` - Maximum number of memories to retrieve (default: 10)
  - `--project "name"` - Filter by specific project
  - `--tags "tag1,tag2"` - Additional tag filtering
  - `--type "note|decision|task"` - Filter by memory type
  - `--include-context` - Show full session context for each memory

If no memories are found for the specified time period, I'll suggest broadening the search or checking if the MCP Memory Service contains data for that timeframe.
```

--------------------------------------------------------------------------------
/tests/test_memory_ops.py:
--------------------------------------------------------------------------------

```python
"""
MCP Memory Service
Copyright (c) 2024 Heinrich Krupp
Licensed under the MIT License. See LICENSE file in the project root for full license text.
"""
"""
Test core memory operations of the MCP Memory Service.
"""
import pytest
import pytest_asyncio
import asyncio
from mcp.server import Server
from mcp.server.models import InitializationOptions

@pytest_asyncio.fixture
async def memory_server():
    """Create a test instance of the memory server."""
    server = Server("test-memory")
    # Initialize with test configuration
    await server.initialize(InitializationOptions(
        server_name="test-memory",
        server_version="0.1.0"
    ))
    yield server
    # Cleanup after tests
    await server.shutdown()

@pytest.mark.asyncio
async def test_store_memory(memory_server):
    """Test storing new memory entries."""
    test_content = "The capital of France is Paris"
    test_metadata = {
        "tags": ["geography", "cities", "europe"],
        "type": "fact"
    }
    
    response = await memory_server.store_memory(
        content=test_content,
        metadata=test_metadata
    )
    
    assert response is not None
    # Add more specific assertions based on expected response format

@pytest.mark.asyncio
async def test_retrieve_memory(memory_server):
    """Test retrieving memories using semantic search."""
    # First store some test data
    test_memories = [
        "The capital of France is Paris",
        "London is the capital of England",
        "Berlin is the capital of Germany"
    ]
    
    for memory in test_memories:
        await memory_server.store_memory(content=memory)
    
    # Test retrieval
    query = "What is the capital of France?"
    results = await memory_server.retrieve_memory(
        query=query,
        n_results=1
    )
    
    assert results is not None
    assert len(results) == 1
    assert "Paris" in results[0]  # The most relevant result should mention Paris

@pytest.mark.asyncio
async def test_search_by_tag(memory_server):
    """Test retrieving memories by tags."""
    # Store memory with tags
    await memory_server.store_memory(
        content="Paris is beautiful in spring",
        metadata={"tags": ["travel", "cities", "europe"]}
    )
    
    # Search by tags
    results = await memory_server.search_by_tag(
        tags=["travel", "europe"]
    )
    
    assert results is not None
    assert len(results) > 0
    assert "Paris" in results[0]

@pytest.mark.asyncio
async def test_delete_memory(memory_server):
    """Test deleting specific memories."""
    # Store a memory and get its hash
    content = "Memory to be deleted"
    response = await memory_server.store_memory(content=content)
    content_hash = response.get("hash")
    
    # Delete the memory
    delete_response = await memory_server.delete_memory(
        content_hash=content_hash
    )
    
    assert delete_response.get("success") is True
    
    # Verify memory is deleted
    results = await memory_server.exact_match_retrieve(content=content)
    assert len(results) == 0

@pytest.mark.asyncio
async def test_memory_with_empty_content(memory_server):
    """Test handling of empty or invalid content."""
    with pytest.raises(ValueError):
        await memory_server.store_memory(content="")

@pytest.mark.asyncio
async def test_memory_with_invalid_tags(memory_server):
    """Test handling of invalid tags metadata."""
    with pytest.raises(ValueError):
        await memory_server.store_memory(
            content="Test content",
            metadata={"tags": "invalid"}  # Should be a list
        )
```

--------------------------------------------------------------------------------
/claude_commands/memory-store.md:
--------------------------------------------------------------------------------

```markdown
# Store Memory with Context

I'll help you store information in your MCP Memory Service with proper context and tagging. This command captures the current session context and stores it as a persistent memory that can be recalled later.

## What I'll do:

1. **Detect Current Context**: I'll analyze the current working directory, recent files, and conversation context to understand what we're working on.

2. **Capture Memory Content**: I'll take the provided information or current session summary and prepare it for storage.

3. **Add Smart Tags**: I'll automatically generate relevant tags based on:
   - Machine hostname (source identifier)
   - Current project directory name
   - Programming languages detected
   - File types and patterns
   - Any explicit tags you provide

4. **Store with Metadata**: I'll include useful metadata like:
   - Machine hostname for source tracking
   - Timestamp and session context
   - Project path and git repository info
   - File associations and dependencies

## Usage Examples:

```bash
claude /memory-store "We decided to use SQLite-vec instead of ChromaDB for better performance"
claude /memory-store --tags "decision,architecture" "Database backend choice rationale"
claude /memory-store --type "note" "Remember to update the Docker configuration after the database change"
```

## Implementation:

I'll use a **hybrid remote-first approach** with local fallback for reliability:

### Primary: Remote API Storage
- **Try remote first**: `https://narrowbox.local:8443/api/memories` 
- **Real-time sync**: Changes immediately available across all clients
- **Single source of truth**: Consolidated database on remote server

### Fallback: Local Staging
- **If remote fails**: Store locally in staging database for later sync
- **Offline capability**: Continue working when remote is unreachable  
- **Auto-sync**: Changes pushed to remote when connectivity returns

### Smart Sync Workflow
```
1. Try remote API directly (fastest path)
2. If offline/failed: Stage locally + notify user  
3. On reconnect: ./sync/memory_sync.sh automatically syncs
4. Conflict resolution: Remote wins, with user notification
```

The content will be stored with automatic context detection:
- **Machine Context**: Hostname automatically added as tag (e.g., "source:your-machine-name")
- **Project Context**: Current directory, git repository, recent commits
- **Session Context**: Current conversation topics and decisions
- **Technical Context**: Programming language, frameworks, and tools in use
- **Temporal Context**: Date, time, and relationship to recent activities

### Service Endpoints:
- **Primary API**: `https://narrowbox.local:8443/api/memories`
- **Sync Status**: Use `./sync/memory_sync.sh status` to check pending changes
- **Manual Sync**: Use `./sync/memory_sync.sh sync` for full synchronization

I'll use the correct curl syntax with `-k` flag for HTTPS, proper JSON payload formatting, and automatic client hostname detection using the `X-Client-Hostname` header.
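
A minimal sketch of this remote-first flow with local staging, assuming a hypothetical `~/.mcp_memory_staging.jsonl` staging file; the payload keys are illustrative:

```python
import json
import socket
from pathlib import Path

import requests

STAGING = Path.home() / ".mcp_memory_staging.jsonl"  # hypothetical staging file


def store_memory(content: str, tags: list[str]) -> None:
    payload = {"content": content, "tags": tags, "memory_type": "note"}
    try:
        resp = requests.post(
            "https://narrowbox.local:8443/api/memories",
            json=payload,
            headers={"X-Client-Hostname": socket.gethostname()},
            verify=False,  # curl -k equivalent for the self-signed certificate
            timeout=5,
        )
        resp.raise_for_status()
    except requests.RequestException:
        # Remote unreachable: stage locally for ./sync/memory_sync.sh to push later
        with STAGING.open("a") as f:
            f.write(json.dumps(payload) + "\n")
```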

## Arguments:

- `$ARGUMENTS` - The content to store as memory, or additional flags:
  - `--tags "tag1,tag2"` - Explicit tags to add
  - `--type "note|decision|task|reference"` - Memory type classification
  - `--project "name"` - Override project name detection
  - `--private` - Mark as private/sensitive content

I'll store the memory automatically without asking for confirmation. The memory will be saved immediately using proper JSON formatting with the curl command. You'll receive a brief confirmation showing the content hash and applied tags after successful storage.
```

--------------------------------------------------------------------------------
/scripts/maintenance/delete_orphaned_vectors_fixed.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python3
"""
Delete orphaned vectors from Cloudflare Vectorize using correct endpoint.

Uses /delete_by_ids (underscores, not hyphens) with proper JSON payload format.
"""

import asyncio
import os
import sys
from pathlib import Path


async def main():
    # Set OAuth to false to avoid validation issues
    os.environ['MCP_OAUTH_ENABLED'] = 'false'

    # Import after setting environment
    from mcp_memory_service.storage.cloudflare import CloudflareStorage
    from mcp_memory_service.config import (
        CLOUDFLARE_API_TOKEN, CLOUDFLARE_ACCOUNT_ID,
        CLOUDFLARE_VECTORIZE_INDEX, CLOUDFLARE_D1_DATABASE_ID,
        EMBEDDING_MODEL_NAME
    )

    # Read vector IDs from the completed hash file
    hash_file = Path.home() / "cloudflare_d1_cleanup_completed.txt"

    if not hash_file.exists():
        print(f"❌ Error: Completed hash file not found: {hash_file}")
        print(f"   The D1 cleanup must be run first")
        sys.exit(1)

    print(f"📄 Reading vector IDs from: {hash_file}")

    with open(hash_file) as f:
        vector_ids = [line.strip() for line in f if line.strip()]

    if not vector_ids:
        print(f"✅ No vector IDs to delete (file is empty)")
        sys.exit(0)

    print(f"📋 Found {len(vector_ids)} orphaned vectors to delete")
    print(f"🔗 Connecting to Cloudflare...\n")

    # Initialize Cloudflare storage
    cloudflare = CloudflareStorage(
        api_token=CLOUDFLARE_API_TOKEN,
        account_id=CLOUDFLARE_ACCOUNT_ID,
        vectorize_index=CLOUDFLARE_VECTORIZE_INDEX,
        d1_database_id=CLOUDFLARE_D1_DATABASE_ID,
        embedding_model=EMBEDDING_MODEL_NAME
    )

    await cloudflare.initialize()

    print(f"✅ Connected to Cloudflare")
    print(f"🗑️  Deleting {len(vector_ids)} vectors using correct /delete_by_ids endpoint...\n")

    deleted = 0
    failed = []

    # Batch delete in groups of 100 (API recommended batch size)
    batch_size = 100
    total_batches = (len(vector_ids) + batch_size - 1) // batch_size

    for batch_num, i in enumerate(range(0, len(vector_ids), batch_size), 1):
        batch = vector_ids[i:i+batch_size]

        try:
            # Use the public API method for better encapsulation
            result = await cloudflare.delete_vectors_by_ids(batch)

            if result.get("success"):
                deleted += len(batch)
                mutation_id = result.get("result", {}).get("mutationId", "N/A")
                print(f"Batch {batch_num}/{total_batches}: ✓ Deleted {len(batch)} vectors (mutation: {mutation_id[:16]}...)")
            else:
                failed.extend(batch)
                print(f"Batch {batch_num}/{total_batches}: ✗ Failed - {result.get('errors', 'Unknown error')}")

        except Exception as e:
            failed.extend(batch)
            print(f"Batch {batch_num}/{total_batches}: ✗ Exception - {str(e)[:100]}")

    # Final summary
    print(f"\n{'='*60}")
    print(f"📊 Vector Cleanup Summary")
    print(f"{'='*60}")
    print(f"✅ Successfully deleted: {deleted}/{len(vector_ids)}")
    print(f"✗  Failed: {len(failed)}/{len(vector_ids)}")
    print(f"{'='*60}\n")

    if deleted > 0:
        print(f"🎉 Vector cleanup complete!")
        print(f"📋 {deleted} orphaned vectors removed from Vectorize")
        print(f"⏱️  Note: Deletions are asynchronous and may take a few seconds to propagate\n")

    if failed:
        print(f"⚠️  {len(failed)} vectors failed to delete")
        print(f"   You may need to retry these manually\n")

    return 0 if len(failed) == 0 else 1


if __name__ == "__main__":
    sys.exit(asyncio.run(main()))

```

--------------------------------------------------------------------------------
/scripts/validation/verify_torch.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python3
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
Verify PyTorch installation and functionality.
This script attempts to import PyTorch and run basic operations.
"""
import os
import sys

# Disable sitecustomize.py and other import hooks to prevent recursion issues
os.environ["PYTHONNOUSERSITE"] = "1"  # Disable user site-packages
os.environ["PYTHONPATH"] = ""  # Clear PYTHONPATH

def print_info(text):
    """Print formatted info text."""
    print(f"[INFO] {text}")

def print_error(text):
    """Print formatted error text."""
    print(f"[ERROR] {text}")

def print_success(text):
    """Print formatted success text."""
    print(f"[SUCCESS] {text}")

def print_warning(text):
    """Print formatted warning text."""
    print(f"[WARNING] {text}")

def verify_torch():
    """Verify PyTorch installation and functionality."""
    print_info("Verifying PyTorch installation")
    
    # Add site-packages to sys.path
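    # NOTE: this 'Lib/site-packages' layout is Windows-specific; on POSIX
    # systems the interpreter's default sys.path normally already suffices.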
    site_packages = os.path.join(sys.prefix, 'Lib', 'site-packages')
    if site_packages not in sys.path:
        sys.path.insert(0, site_packages)
    
    # Print sys.path for debugging
    print_info("Python path:")
    for path in sys.path:
        print(f"  - {path}")
    
    # Try to import torch
    try:
        print_info("Attempting to import torch")
        import torch
        print_success(f"PyTorch is installed (version {torch.__version__})")
        print_info(f"PyTorch location: {torch.__file__}")
        
        # Check if CUDA is available
        if torch.cuda.is_available():
            print_success(f"CUDA is available (version {torch.version.cuda})")
            print_info(f"GPU: {torch.cuda.get_device_name(0)}")
            
            # Test a simple CUDA operation
            try:
                x = torch.rand(5, 3).cuda()
                y = torch.rand(5, 3).cuda()
                z = x + y
                print_success("Basic CUDA tensor operations work correctly")
            except Exception as e:
                print_warning(f"CUDA tensor operations failed: {e}")
                print_warning("Falling back to CPU mode")
        else:
            print_info("CUDA is not available, using CPU-only mode")
        
        # Test a simple tensor operation
        try:
            x = torch.rand(5, 3)
            y = torch.rand(5, 3)
            z = x + y
            print_success("Basic tensor operations work correctly")
        except Exception as e:
            print_error(f"Failed to perform basic tensor operations: {e}")
            return False
        
        return True
    except ImportError as e:
        print_error(f"PyTorch is not installed: {e}")
        return False
    except Exception as e:
        print_error(f"Error checking PyTorch installation: {e}")
        import traceback
        traceback.print_exc()
        return False

def main():
    """Main function."""
    if verify_torch():
        print_success("PyTorch verification completed successfully")
    else:
        print_error("PyTorch verification failed")
        sys.exit(1)

if __name__ == "__main__":
    main()
```

--------------------------------------------------------------------------------
/scripts/migration/migrate_to_sqlite_vec.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python3
"""
Simple migration script to help users migrate from ChromaDB to sqlite-vec.
This provides an easy way to switch to the lighter sqlite-vec backend.
"""

import os
import sys
import asyncio
from pathlib import Path

# Add scripts directory to path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'scripts'))

from migrate_storage import MigrationTool

async def main():
    """Simple migration from ChromaDB to sqlite-vec with sensible defaults."""
    print("🔄 MCP Memory Service - Migrate to SQLite-vec")
    print("=" * 50)
    
    # Get default paths
    home = Path.home()
    if sys.platform == 'darwin':  # macOS
        base_dir = home / 'Library' / 'Application Support' / 'mcp-memory'
    elif sys.platform == 'win32':  # Windows
        base_dir = Path(os.getenv('LOCALAPPDATA', '')) / 'mcp-memory'
    else:  # Linux
        base_dir = home / '.local' / 'share' / 'mcp-memory'
    
    chroma_path = base_dir / 'chroma_db'
    sqlite_path = base_dir / 'sqlite_vec.db'
    backup_path = base_dir / 'migration_backup.json'
    
    print(f"📁 Source (ChromaDB): {chroma_path}")
    print(f"📁 Target (SQLite-vec): {sqlite_path}")
    print(f"💾 Backup: {backup_path}")
    print()
    
    # Check if source exists
    if not chroma_path.exists():
        print(f"❌ ChromaDB path not found: {chroma_path}")
        print("💡 Make sure you have some memories stored first.")
        return 1
    
    # Check if target already exists
    if sqlite_path.exists():
        response = input("⚠️  SQLite-vec database already exists. Overwrite? (y/N): ")
        if response.lower() != 'y':
            print("❌ Migration cancelled")
            return 1
    
    # Confirm migration
    print("🚀 Ready to migrate!")
    print("   This will:")
    print("   - Export all memories from ChromaDB")
    print("   - Create a backup file")
    print("   - Import memories to SQLite-vec")
    print()
    
    response = input("Continue? (Y/n): ")
    if response.lower() == 'n':
        print("❌ Migration cancelled")
        return 1
    
    # Perform migration
    migration_tool = MigrationTool()
    
    try:
        success = await migration_tool.migrate(
            from_backend='chroma',
            to_backend='sqlite_vec',
            source_path=str(chroma_path),
            target_path=str(sqlite_path),
            create_backup=True,
            backup_path=str(backup_path)
        )
        
        if success:
            print()
            print("✅ Migration completed successfully!")
            print()
            print("📝 Next steps:")
            print("   1. Set environment variable: MCP_MEMORY_STORAGE_BACKEND=sqlite_vec")
            print("   2. Restart your MCP client (Claude Desktop)")
            print("   3. Test that your memories are accessible")
            print()
            print("🔧 Environment variable examples:")
            print("   # Bash/Zsh:")
            print("   export MCP_MEMORY_STORAGE_BACKEND=sqlite_vec")
            print()
            print("   # Windows Command Prompt:")
            print("   set MCP_MEMORY_STORAGE_BACKEND=sqlite_vec")
            print()
            print("   # Windows PowerShell:")
            print("   $env:MCP_MEMORY_STORAGE_BACKEND='sqlite_vec'")
            print()
            print(f"💾 Backup available at: {backup_path}")
            return 0
        else:
            print("❌ Migration failed. Check logs for details.")
            return 1
            
    except Exception as e:
        print(f"❌ Migration failed: {e}")
        return 1

if __name__ == "__main__":
    sys.exit(asyncio.run(main()))
```

--------------------------------------------------------------------------------
/docs/development/code-quality/phase-2a-index.md:
--------------------------------------------------------------------------------

```markdown
# Phase 2a Refactoring - Complete Documentation Index

**Status:** ✅ COMPLETE  
**Date:** November 24, 2025  
**Issue:** #246 - Code Quality Phase 2

---

## 📋 Documentation Files

### 1. PHASE_2A_COMPLETION_REPORT.md
**Comprehensive completion report with full metrics**

- Executive summary of achievements
- Detailed before/after analysis for each function
- Quality improvements across all dimensions
- Test suite status verification
- Lessons learned and recommendations
- 433 lines of detailed analysis

**Read this for:** Complete project overview and detailed metrics

### 2. REFACTORING_HANDLE_GET_PROMPT.md
**Function #6 refactoring specification - Latest completion**

- Function complexity reduction: 33 → 6 (82%)
- 5 specialized prompt handlers documented
- Design rationale and strategy
- Testing recommendations
- Code review checklist
- 194 lines of detailed specification

**Read this for:** In-depth look at the final refactoring completed

---

## 🔧 Code Changes

### Modified Files

**src/mcp_memory_service/server.py**
- Refactored `handle_get_prompt()` method
- Created 5 new helper methods:
  - `_prompt_memory_review()`
  - `_prompt_memory_analysis()`
  - `_prompt_knowledge_export()`
  - `_prompt_memory_cleanup()`
  - `_prompt_learning_session()`

**src/mcp_memory_service/mcp_server.py**
- Fixed test collection error
- Added graceful FastMCP fallback
- `_DummyFastMCP` class for compatibility

---

## 📊 Summary Metrics

| Metric | Value |
|--------|-------|
| Functions Refactored | 6 of 27 (22%) |
| Average Complexity Reduction | 77% |
| Peak Complexity Reduction | 87% (62 → 8) |
| Tests Passing | 431 |
| Backward Compatibility | 100% |
| Health Score | 73/100 (target: 80/100) |

---

## ✅ Functions Completed

1. **install.py::main()** - 62 → 8 (87% ↓)
2. **sqlite_vec.py::initialize()** - Nesting 10 → 3 (70% ↓)
3. **config.py::__main__()** - 42 (validated extraction)
4. **oauth/authorization.py::token()** - 35 → 8 (77% ↓)
5. **install_package()** - 33 → 7 (78% ↓)
6. **handle_get_prompt()** - 33 → 6 (82% ↓) ⭐

---

## 🔗 Related Resources

- **GitHub Issue:** [#246 - Code Quality Phase 2](https://github.com/doobidoo/mcp-memory-service/issues/246)
- **Issue Comment:** [Phase 2a Progress Update](https://github.com/doobidoo/mcp-memory-service/issues/246#issuecomment-3572351946)

---

## 📈 Next Phases

### Phase 2a Continuation
- 21 remaining high-complexity functions
- Estimated: 2-3 release cycles
- Apply same successful patterns

### Phase 2b
- Code duplication consolidation
- 14 duplicate groups → reduce to <3%
- Estimated: 1-2 release cycles

### Phase 2c
- Architecture compliance violations
- 95.8% → 100% compliance
- Estimated: 1 release cycle

---

## 🎯 How to Use This Documentation

**For Code Review:**
1. Start with PHASE_2A_COMPLETION_REPORT.md for overview
2. Review REFACTORING_HANDLE_GET_PROMPT.md for detailed design
3. Check git commits for actual code changes

**For Continuation (Phase 2a):**
1. Review quality improvements in PHASE_2A_COMPLETION_REPORT.md
2. Follow same patterns: dispatcher + specialized handlers
3. Apply extract method for nesting reduction
4. Ensure backward compatibility maintained

**For Future Refactoring:**
- Use dispatcher pattern for multi-branch logic (see the sketch below)
- Extract methods for nesting depth >3
- Maintain single responsibility principle
- Always keep backward compatibility
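
A minimal, self-contained sketch of the dispatcher + specialized-handler shape described above; the handler names echo this refactoring, but the bodies are illustrative, not the actual `server.py` code:

```python
def _prompt_memory_review(args: dict) -> str:
    # Illustrative body only; the real handler builds an MCP prompt result
    return f"memory-review({args})"

def _prompt_memory_analysis(args: dict) -> str:
    return f"memory-analysis({args})"

# A flat lookup table replaces a deep if/elif chain; each handler keeps
# a single responsibility and low cyclomatic complexity on its own.
PROMPT_HANDLERS = {
    "memory_review": _prompt_memory_review,
    "memory_analysis": _prompt_memory_analysis,
}

def handle_get_prompt(name: str, args: dict) -> str:
    try:
        return PROMPT_HANDLERS[name](args)
    except KeyError:
        raise ValueError(f"Unknown prompt: {name}") from None
```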

---

## 🚀 Key Achievements

✅ 77% average complexity reduction  
✅ 100% backward compatibility  
✅ 431 tests passing  
✅ Clear path for Phase 2b & 2c  
✅ Comprehensive documentation  
✅ Ready for review and merge  

---

**Last Updated:** November 24, 2025  
**Status:** COMPLETE AND VERIFIED

```

--------------------------------------------------------------------------------
/scripts/testing/run_complete_test.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python3
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
Complete test suite for the HTTP/SSE + SQLite-vec implementation.
Runs all tests in sequence to validate the entire system.
"""

import subprocess
import sys
import time
import requests
from pathlib import Path

def check_server_health():
    """Check if the server is running and healthy."""
    try:
        response = requests.get("http://localhost:8000/api/health", timeout=5)
        return response.status_code == 200
    except requests.RequestException:
        return False

def run_test_script(script_name, description):
    """Run a test script and return success status."""
    print(f"\n{'='*60}")
    print(f"🧪 {description}")
    print('='*60)
    
    try:
        # Run the test script
        result = subprocess.run([
            sys.executable, 
            str(Path(__file__).parent / script_name)
        ], capture_output=True, text=True, timeout=60)
        
        if result.returncode == 0:
            print("✅ Test PASSED")
            if result.stdout:
                print("Output:", result.stdout[-500:])  # Last 500 chars
            return True
        else:
            print("❌ Test FAILED")
            print("Error:", result.stderr)
            return False
            
    except subprocess.TimeoutExpired:
        print("⏰ Test TIMED OUT")
        return False
    except Exception as e:
        print(f"❌ Test ERROR: {e}")
        return False

def main():
    """Run the complete test suite."""
    print("🚀 MCP Memory Service - Complete Test Suite")
    print("=" * 60)
    
    # Check if server is running
    if not check_server_health():
        print("❌ Server is not running or not healthy!")
        print("💡 Please start the server first:")
        print("   python scripts/run_http_server.py")
        return 1
    
    print("✅ Server is healthy and ready for testing")
    
    # Test suite configuration
    tests = [
        ("test_memory_simple.py", "Memory CRUD Operations Test"),
        ("test_search_api.py", "Search API Functionality Test"),
        ("test_sse_events.py", "Real-time SSE Events Test"),
    ]
    
    results = []
    
    # Run each test
    for script, description in tests:
        success = run_test_script(script, description)
        results.append((description, success))
        
        if success:
            print(f"✅ {description} - PASSED")
        else:
            print(f"❌ {description} - FAILED")
        
        # Brief pause between tests
        time.sleep(2)
    
    # Summary
    print(f"\n{'='*60}")
    print("📊 TEST SUMMARY")
    print('='*60)
    
    passed = sum(1 for _, success in results if success)
    total = len(results)
    
    for description, success in results:
        status = "✅ PASS" if success else "❌ FAIL"
        print(f"{status} {description}")
    
    print(f"\nResults: {passed}/{total} tests passed")
    
    if passed == total:
        print("\n🎉 ALL TESTS PASSED! System is working perfectly!")
        return 0
    else:
        print(f"\n⚠️  {total - passed} tests failed. Check the logs above.")
        return 1

if __name__ == "__main__":
    sys.exit(main())
```

--------------------------------------------------------------------------------
/tests/integration/test_store_memory.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python3
"""
Test script to store a memory in the MCP Memory Service.
"""
import asyncio
import sys

# Import MCP client
try:
    from mcp import ClientSession
    from mcp.client.stdio import stdio_client, StdioServerParameters
except ImportError as e:
    print(f"MCP client not found: {e}")
    print("Install with: pip install mcp")
    sys.exit(1)

async def store_memory():
    """Store a test memory."""
    try:
        # Configure MCP server connection
        server_params = StdioServerParameters(
            command="uv",
            args=["run", "memory", "server"],
            env=None
        )

        # Connect to memory service using stdio_client
        async with stdio_client(server_params) as (read, write):
            async with ClientSession(read, write) as session:
                # Initialize the session
                await session.initialize()
                print("Connected to memory service!")

                # List available tools
                tools_response = await session.list_tools()
                print(f"Found {len(tools_response.tools)} tools")

                # Check if store_memory tool exists
                if not any(tool.name == "store_memory" for tool in tools_response.tools):
                    print("ERROR: store_memory tool not found")
                    return

                # Create a test memory
                memory_data = {
                    "content": "This is a test memory created by the test_store_memory.py script.",
                    "metadata": {
                        "tags": "test,example,python",  # Comma-separated string format
                        "type": "note"
                    }
                }

                # Store the memory
                print(f"\nStoring test memory: {memory_data['content']}")
                result = await session.call_tool("store_memory", memory_data)

                # Print result
                if result:
                    print("\nResult:")
                    for content_item in result.content:
                        if hasattr(content_item, 'text'):
                            print(content_item.text)
                else:
                    print("No result returned")

                # Try to retrieve the memory
                print("\nRetrieving memory...")
                retrieve_result = await session.call_tool("retrieve_memory", {"query": "test memory", "n_results": 5})

                # Print result
                if retrieve_result:
                    print("\nRetrieve Result:")
                    for content_item in retrieve_result.content:
                        if hasattr(content_item, 'text'):
                            print(content_item.text)
                else:
                    print("No retrieve result returned")

                # Check database health
                print("\nChecking database health...")
                health_result = await session.call_tool("check_database_health", {})

                # Print result
                if health_result:
                    print("\nHealth Check Result:")
                    for content_item in health_result.content:
                        if hasattr(content_item, 'text'):
                            print(content_item.text)
                else:
                    print("No health check result returned")

    except Exception as e:
        print(f"ERROR: {str(e)}")
        import traceback
        traceback.print_exc()

async def main():
    """Main function."""
    print("=== MCP Memory Service Test: Store Memory ===\n")
    await store_memory()

if __name__ == "__main__":
    asyncio.run(main())

```

--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------

```toml
[build-system]
requires = ["hatchling", "python-semantic-release", "build"]
build-backend = "hatchling.build"

[project]
name = "mcp-memory-service"
version = "8.42.0"
description = "Universal MCP memory service with semantic search, multi-client support, and autonomous consolidation for Claude Desktop, VS Code, and 13+ AI applications"
readme = "README.md"
requires-python = ">=3.10"
keywords = [
    "mcp", "model-context-protocol", "claude-desktop", "semantic-memory", 
    "vector-database", "ai-assistant", "sqlite-vec", "multi-client",
    "semantic-search", "memory-consolidation", "ai-productivity", "vs-code",
    "cursor", "continue", "fastapi", "developer-tools", "cross-platform"
]
classifiers = [
    "Development Status :: 5 - Production/Stable",
    "Intended Audience :: Developers",
    "Topic :: Software Development :: Libraries :: Python Modules",
    "Topic :: Scientific/Engineering :: Artificial Intelligence",
    "Topic :: Database :: Database Engines/Servers",
    "License :: OSI Approved :: Apache Software License",
    "Programming Language :: Python :: 3",
    "Programming Language :: Python :: 3.10",
    "Programming Language :: Python :: 3.11", 
    "Programming Language :: Python :: 3.12",
    "Operating System :: OS Independent",
    "Environment :: Console",
    "Framework :: FastAPI"
]
authors = [
    { name = "Heinrich Krupp", email = "[email protected]" }
]
license = { text = "Apache-2.0" }
dependencies = [
    "tokenizers==0.20.3",
    "mcp>=1.0.0,<2.0.0",
    "sqlite-vec>=0.1.0",
    "build>=0.10.0",
    "aiohttp>=3.8.0",
    "fastapi>=0.115.0",
    "uvicorn>=0.30.0",
    "python-multipart>=0.0.9",
    "sse-starlette>=2.1.0",
    "aiofiles>=23.2.1",
    "psutil>=5.9.0",
    "zeroconf>=0.130.0",
    "pypdf2>=3.0.0",
    "chardet>=5.0.0",
    "click>=8.0.0",
    "httpx>=0.24.0",
    "authlib>=1.2.0",
    "python-jose[cryptography]>=3.3.0",
    "sentence-transformers>=2.2.2",
    "torch>=2.0.0",
    "typing-extensions>=4.0.0; python_version < '3.11'",
    "apscheduler>=3.11.0",
]

[project.optional-dependencies]
# Machine learning dependencies for semantic search and embeddings
ml = [
    "sentence-transformers>=2.2.2",
    "torch>=2.0.0"
]
# SQLite-vec with lightweight ONNX embeddings (recommended for most users)
sqlite = [
    "onnxruntime>=1.14.1"
]
# SQLite-vec with full ML capabilities (for advanced features)
sqlite-ml = [
    "mcp-memory-service[sqlite,ml]"
]
# Full installation including all optional dependencies
full = [
    "mcp-memory-service[sqlite,ml]"
]
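
# Example installs for the extras above (illustrative):
#   pip install "mcp-memory-service[sqlite]"      # lightweight ONNX embeddings
#   pip install "mcp-memory-service[sqlite-ml]"   # sqlite extra plus the full ML stack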

[project.scripts]
memory = "mcp_memory_service.cli.main:main"
memory-server = "mcp_memory_service.cli.main:memory_server_main"
mcp-memory-server = "mcp_memory_service.mcp_server:main"

[tool.hatch.build.targets.wheel]
packages = ["src/mcp_memory_service"]

[tool.hatch.version]
path = "src/mcp_memory_service/__init__.py"

[tool.semantic_release]
version_variable = [
    "src/mcp_memory_service/__init__.py:__version__",
    "pyproject.toml:version"
]
branch = "main"
changelog_file = "CHANGELOG.md"
build_command = "pip install build && python -m build"
build_command_env = []
dist_path = "dist/"
upload_to_pypi = true
upload_to_release = true
commit_message = "chore(release): bump version to {version}"

[tool.semantic_release.commit_parser_options]
allowed_tags = [
    "build",
    "chore",
    "ci",
    "docs",
    "feat",
    "fix",
    "perf",
    "style",
    "refactor",
    "test"
]
minor_tags = ["feat"]
patch_tags = ["fix", "perf"]

[tool.semantic_release.changelog]
template_dir = "templates"
changelog_sections = [
    ["feat", "Features"],
    ["fix", "Bug Fixes"],
    ["perf", "Performance"],
    ["refactor", "Code Refactoring"],
    ["test", "Tests"]
]

```

--------------------------------------------------------------------------------
/archive/setup-development/STARTUP_SETUP_GUIDE.md:
--------------------------------------------------------------------------------

```markdown
# MCP Memory Service Auto-Startup Setup Guide

## ✅ Files Created:
- `mcp-memory.service` - Systemd service configuration
- `install_service.sh` - Installation script  
- `service_control.sh` - Service management script
- `STARTUP_SETUP_GUIDE.md` - This guide

## 🚀 Manual Installation Steps:

### 1. Install the systemd service:
```bash
# Run the installation script (requires sudo password)
sudo bash install_service.sh
```

### 2. Start the service immediately:
```bash
sudo systemctl start mcp-memory
```

### 3. Check service status:
```bash
sudo systemctl status mcp-memory
```

### 4. View service logs:
```bash
sudo journalctl -u mcp-memory -f
```

## 🛠️ Service Management Commands:

### Using the control script:
```bash
./service_control.sh start     # Start service
./service_control.sh stop      # Stop service  
./service_control.sh restart   # Restart service
./service_control.sh status    # Show status
./service_control.sh logs      # View live logs
./service_control.sh health    # Test API health
./service_control.sh enable    # Enable startup
./service_control.sh disable   # Disable startup
```

### Using systemctl directly:
```bash
sudo systemctl start mcp-memory      # Start now
sudo systemctl stop mcp-memory       # Stop now
sudo systemctl restart mcp-memory    # Restart now
sudo systemctl status mcp-memory     # Check status
sudo systemctl enable mcp-memory     # Enable startup (already done)
sudo systemctl disable mcp-memory    # Disable startup
sudo journalctl -u mcp-memory -f     # Live logs
```

## 📋 Service Configuration:

### Generated API Key:
```
mcp-83c9840168aac025986cc4bc29e411bb
```

### Service Details:
- **Service Name**: `mcp-memory.service`
- **User**: hkr
- **Working Directory**: `/home/hkr/repositories/mcp-memory-service`
- **Auto-restart**: Yes (on failure)
- **Startup**: Enabled (starts on boot)

### Environment Variables:
- `MCP_CONSOLIDATION_ENABLED=true`
- `MCP_MDNS_ENABLED=true`
- `MCP_HTTPS_ENABLED=true`
- `MCP_MDNS_SERVICE_NAME="MCP Memory"`
- `MCP_HTTP_HOST=0.0.0.0`
- `MCP_HTTP_PORT=8000`
- `MCP_MEMORY_STORAGE_BACKEND=sqlite_vec`
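
For orientation, a unit file consistent with the settings above looks roughly like this (a sketch; the `mcp-memory.service` installed by `install_service.sh` is authoritative):

```ini
[Unit]
Description=MCP Memory Service
After=network.target

[Service]
Type=simple
User=hkr
WorkingDirectory=/home/hkr/repositories/mcp-memory-service
Environment=MCP_CONSOLIDATION_ENABLED=true
Environment=MCP_MDNS_ENABLED=true
Environment=MCP_HTTPS_ENABLED=true
Environment="MCP_MDNS_SERVICE_NAME=MCP Memory"
Environment=MCP_HTTP_HOST=0.0.0.0
Environment=MCP_HTTP_PORT=8000
Environment=MCP_MEMORY_STORAGE_BACKEND=sqlite_vec
ExecStart=/home/hkr/repositories/mcp-memory-service/venv/bin/python scripts/run_http_server.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```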

## 🌐 Access Points:

Once running, the service will be available at:
- **Dashboard**: https://localhost:8000
- **API Documentation**: https://localhost:8000/api/docs
- **Health Check**: https://localhost:8000/api/health
- **SSE Events**: https://localhost:8000/api/events
- **mDNS Name**: `MCP Memory._mcp-memory._tcp.local.`

## 🔧 Troubleshooting:

### If service fails to start:
```bash
# Check detailed logs
sudo journalctl -u mcp-memory --no-pager

# Check if virtual environment exists
ls -la /home/hkr/repositories/mcp-memory-service/venv/

# Test manual startup
cd /home/hkr/repositories/mcp-memory-service
source venv/bin/activate
python scripts/run_http_server.py
```

### If port 8000 is in use:
```bash
# Check what's using port 8000
sudo ss -tlnp | grep :8000

# Or change port in service file
sudo nano /etc/systemd/system/mcp-memory.service
# Edit: Environment=MCP_HTTP_PORT=8001
sudo systemctl daemon-reload
sudo systemctl restart mcp-memory
```

## 🗑️ Uninstallation:

To remove the service:
```bash
./service_control.sh uninstall
```

Or manually:
```bash
sudo systemctl stop mcp-memory
sudo systemctl disable mcp-memory
sudo rm /etc/systemd/system/mcp-memory.service
sudo systemctl daemon-reload
```

## ✅ Success Verification:

After installation, verify everything works:
```bash
# 1. Check service is running
sudo systemctl status mcp-memory

# 2. Test API health
curl -k https://localhost:8000/api/health

# 3. Check mDNS discovery
avahi-browse -t _mcp-memory._tcp

# 4. View live logs
sudo journalctl -u mcp-memory -f
```

The service will now start automatically on every system boot! 🎉
```

--------------------------------------------------------------------------------
/archive/docs-removed-2025-08-23/claude-code-compatibility.md:
--------------------------------------------------------------------------------

```markdown
# Claude Code Compatibility Guide

## Overview

The MCP Memory Service FastAPI server v4.0.0-alpha.1 implements the official MCP protocol but has specific compatibility considerations with Claude Code's SSE client implementation.

## Current Status

### ✅ Working MCP Clients
- **Standard MCP Libraries**: Python `mcp` package, JavaScript MCP SDK
- **Claude Desktop**: Works with proper MCP configuration
- **Custom MCP Clients**: Any client implementing standard MCP protocol
- **HTTP API**: Full REST API access via port 8080

### ❌ Claude Code SSE Client Issue

**Problem**: Claude Code's SSE client has specific header and protocol requirements that don't match the FastMCP server implementation.

**Technical Details**:
- FastMCP server requires `Accept: application/json, text/event-stream` headers
- Claude Code's SSE client doesn't send the required header combination
- Server correctly rejects invalid SSE connections with proper error messages

**Error Symptoms**:
```bash
claude mcp list
# Output: memory: http://10.0.1.30:8000/mcp (SSE) - ✗ Failed to connect
```

## Workarounds for Claude Code Users

### Option 1: Use HTTP Dashboard
```bash
# Access memory service via web interface
open http://memory.local:8080/

# Use API endpoints directly
curl -X POST http://memory.local:8080/api/memories \
  -H "Content-Type: application/json" \
  -d '{"content": "My memory", "tags": ["important"]}'
```

### Option 2: Use Claude Commands (Recommended)
```bash
# Install Claude Code commands (bypass MCP entirely)
python install.py --install-claude-commands

# Use conversational memory commands
claude /memory-store "Important information"
claude /memory-recall "what did we discuss?"
claude /memory-search --tags "project,architecture"
```

### Option 3: Use Alternative MCP Client
```python
# Python example with the standard MCP client
import asyncio
from mcp import ClientSession
from mcp.client.stdio import stdio_client, StdioServerParameters

async def test_memory():
    # Works with the standard MCP protocol: launch the server over stdio
    # and list its tools (adapt to your specific needs)
    params = StdioServerParameters(command="uv", args=["run", "memory", "server"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print(f"Available tools: {[t.name for t in tools.tools]}")

asyncio.run(test_memory())
```

## Technical Investigation Results

### Server Verification ✅
```bash
# Server correctly implements MCP protocol
curl -H "Accept: text/event-stream, application/json" \
     -H "Content-Type: application/json" \
     -X POST http://memory.local:8000/mcp \
     -d '{"jsonrpc":"2.0","id":"test","method":"tools/list","params":{}}'
# Result: 200 OK, SSE stream established
```

### Claude Code Client Issue ❌
```bash
# Claude Code client fails header negotiation
# Missing required Accept header combination
# Connection rejected with 406 Not Acceptable
```

## Development Roadmap

### Short Term (Next Release)
- [ ] Investigate Claude Code's exact SSE client requirements
- [ ] Consider server-side compatibility layer
- [ ] Expand client compatibility testing

### Medium Term 
- [ ] Custom SSE implementation for Claude Code compatibility
- [ ] Alternative transport protocols (WebSocket, HTTP long-polling)
- [ ] Client library development

### Long Term
- [ ] Collaborate with Claude Code team on SSE standardization
- [ ] MCP protocol enhancement proposals
- [ ] Universal client compatibility layer

## Conclusion

The FastAPI MCP migration successfully achieved its primary goals:
- ✅ SSL connectivity issues resolved
- ✅ Standard MCP protocol compliance
- ✅ Production-ready architecture

The Claude Code compatibility issue is a client-specific limitation that doesn't impact the core migration success. Users have multiple viable workarounds available.

## Support

- **HTTP Dashboard**: http://memory.local:8080/
- **Documentation**: See `DUAL_SERVICE_DEPLOYMENT.md`
- **Issues**: Report at https://github.com/doobidoo/mcp-memory-service/issues
- **Claude Commands**: See `docs/guides/claude-code-quickstart.md`
```

--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/feature_request.yml:
--------------------------------------------------------------------------------

```yaml
name: Feature Request
description: Suggest a new feature or enhancement
title: "[Feature]: "
labels: ["enhancement", "triage"]
body:
  - type: markdown
    attributes:
      value: |
        Thank you for suggesting a feature! Please provide details about your use case and proposed solution.

  - type: textarea
    id: problem
    attributes:
      label: Problem or Use Case
      description: What problem does this feature solve? What are you trying to accomplish?
      placeholder: |
        I'm trying to... but currently the system doesn't support...
        This would help with...
    validations:
      required: true

  - type: textarea
    id: solution
    attributes:
      label: Proposed Solution
      description: How would you like this feature to work?
      placeholder: |
        Add a new MCP tool that allows...
        The API endpoint should accept...
        When the user runs...
    validations:
      required: true

  - type: textarea
    id: alternatives
    attributes:
      label: Alternatives Considered
      description: What other approaches have you considered? How do you currently work around this?
      placeholder: |
        I've tried using... but it doesn't work because...
        Other projects solve this by...

  - type: dropdown
    id: component
    attributes:
      label: Component Affected
      description: Which part of the system would this feature affect?
      options:
        - Storage Backend (sqlite/cloudflare/hybrid)
        - MCP Tools (memory operations)
        - HTTP API (dashboard/REST endpoints)
        - Document Ingestion (PDF/DOCX/PPTX)
        - Claude Code Integration (hooks/commands)
        - Configuration/Setup
        - Documentation
        - Testing/CI
        - Other
    validations:
      required: true

  - type: dropdown
    id: priority
    attributes:
      label: Priority
      description: How important is this feature to your workflow?
      options:
        - Critical (blocking my work)
        - High (significant improvement)
        - Medium (nice to have)
        - Low (future consideration)
    validations:
      required: true

  - type: textarea
    id: examples
    attributes:
      label: Examples or Mockups
      description: |
        Provide examples of how this would work:
        - API request/response examples
        - CLI command examples
        - UI mockups (for dashboard features)
        - Code snippets
      placeholder: |
        # Example usage
        claude /memory-export --format json --tags important

        # Expected output
        {"memories": [...], "count": 42}
      render: shell

  - type: textarea
    id: impact
    attributes:
      label: Impact on Existing Functionality
      description: Would this change affect existing features or require breaking changes?
      placeholder: |
        This would require...
        Existing users would need to...
        Backward compatibility...

  - type: textarea
    id: similar
    attributes:
      label: Similar Features in Other Projects
      description: Are there similar features in other projects we can learn from?
      placeholder: |
        Project X implements this as...
        Library Y has a similar API that works like...

  - type: checkboxes
    id: checks
    attributes:
      label: Pre-submission Checklist
      description: Please verify you've completed these steps
      options:
        - label: I've searched existing issues and feature requests
          required: true
        - label: I've described a specific use case (not just "it would be nice")
          required: true
        - label: I've considered the impact on existing functionality
          required: true
        - label: I'm willing to help test this feature once implemented
          required: false

```

--------------------------------------------------------------------------------
/scripts/validation/check_dev_setup.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python3
"""
Verify development environment setup for MCP Memory Service.
Detects common issues like stale venv packages vs updated source code.

Usage:
    python scripts/validation/check_dev_setup.py

Exit codes:
    0 - Development environment is correctly configured
    1 - Critical issues detected (editable install missing or version mismatch)
"""
import sys
import os
from pathlib import Path

def check_editable_install():
    """Check if package is installed in editable mode."""
    try:
        import mcp_memory_service
        package_location = Path(mcp_memory_service.__file__).parent

        # Check if location is in source directory (editable) or site-packages
        if 'site-packages' in str(package_location):
            return False, str(package_location)
        else:
            return True, str(package_location)
    except ImportError:
        return None, "Package not installed"

def check_version_match():
    """Check if installed version matches source code version."""
    # Read source version
    repo_root = Path(__file__).parent.parent.parent
    init_file = repo_root / "src" / "mcp_memory_service" / "__init__.py"
    source_version = None

    if not init_file.exists():
        return None, "Unknown", "Source file not found"

    with open(init_file) as f:
        for line in f:
            if line.startswith('__version__'):
                source_version = line.split('=')[1].strip().strip('"\'')
                break

    # Get installed version
    try:
        import mcp_memory_service
        installed_version = mcp_memory_service.__version__
    except ImportError:
        return None, source_version, "Not installed"

    if source_version is None:
        return None, "Unknown", installed_version

    return source_version == installed_version, source_version, installed_version

def main():
    print("=" * 70)
    print("MCP Memory Service - Development Environment Check")
    print("=" * 70)

    has_error = False

    # Check 1: Editable install
    print("\n[1/2] Checking installation mode...")
    is_editable, location = check_editable_install()

    if is_editable is None:
        print("  ❌ CRITICAL: Package not installed")
        print(f"     Location: {location}")
        print("\n  Fix: pip install -e .")
        has_error = True
    elif not is_editable:
        print("  ⚠️  WARNING: Package installed in site-packages (not editable)")
        print(f"     Location: {location}")
        print("\n  This means source code changes won't take effect!")
        print("  Fix: pip uninstall mcp-memory-service && pip install -e .")
        has_error = True
    else:
        print(f"  ✅ OK: Editable install detected")
        print(f"     Location: {location}")

    # Check 2: Version match
    print("\n[2/2] Checking version consistency...")
    versions_match, source_ver, installed_ver = check_version_match()

    if versions_match is None:
        print("  ⚠️  WARNING: Could not determine versions")
        print(f"     Source:    {source_ver}")
        print(f"     Installed: {installed_ver}")
    elif not versions_match:
        print(f"  ❌ CRITICAL: Version mismatch detected!")
        print(f"     Source code: v{source_ver}")
        print(f"     Installed:   v{installed_ver}")
        print("\n  This is the 'stale venv' issue!")
        print("  Fix: pip install -e . --force-reinstall")
        has_error = True
    else:
        print(f"  ✅ OK: Versions match (v{source_ver})")

    print("\n" + "=" * 70)
    if has_error:
        print("❌ Development environment has CRITICAL issues!")
        print("=" * 70)
        sys.exit(1)
    else:
        print("✅ Development environment is correctly configured!")
        print("=" * 70)
        sys.exit(0)

if __name__ == "__main__":
    main()

```

--------------------------------------------------------------------------------
/docs/amp-cli-bridge.md:
--------------------------------------------------------------------------------

```markdown
# Amp CLI Bridge (Semi-Automated Workflow)

**Purpose**: Leverage Amp CLI capabilities (research, code analysis, web search) from Claude Code without consuming Claude Code credits, using a semi-automated file-based workflow.

## Quick Start

**1. Claude Code creates prompt**:
```
You: "Use @agent-amp-bridge to research TypeScript 5.0 features"
Claude: [Creates prompt file and shows command]
```

**2. Run the command shown**:
```bash
amp @.claude/amp/prompts/pending/research-xyz.json
```

**3. Amp processes and writes response automatically**

**4. Claude Code continues automatically**

## Architecture

```
Claude Code (@agent-amp-bridge) → .claude/amp/prompts/pending/{uuid}.json
                                            ↓
                          You run: amp @prompts/pending/{uuid}.json
                                            ↓
                          Amp writes: responses/ready/{uuid}.json
                                            ↓
                   Claude Code reads response ← Workflow continues
```
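
Since each prompt is just a file, you can batch-process the whole queue with the same `amp @` syntax (a sketch):

```bash
# Feed every pending prompt to Amp in sequence
for f in .claude/amp/prompts/pending/*.json; do
  amp @"$f"
done
```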

## File Structure

```
.claude/amp/
├── prompts/
│   └── pending/        # Prompts waiting for you to process
├── responses/
│   ├── ready/          # Responses written by Amp
│   └── consumed/       # Archive of processed responses
└── README.md           # Documentation
```

## Message Format

**Prompt** (`.claude/amp/prompts/pending/{uuid}.json`):
```json
{
  "id": "550e8400-e29b-41d4-a716-446655440000",
  "timestamp": "2025-11-04T20:00:00.000Z",
  "prompt": "Research async/await best practices in Python",
  "context": {
    "project": "mcp-memory-service",
    "cwd": "/path/to/project"
  },
  "options": {
    "timeout": 300000,
    "format": "markdown"
  }
}
```

**Response** (`.claude/amp/responses/ready/{uuid}.json`):
```json
{
  "id": "550e8400-e29b-41d4-a716-446655440000",
  "timestamp": "2025-11-04T20:05:00.000Z",
  "success": true,
  "output": "## Async/Await Best Practices\n\n...",
  "error": null,
  "duration": 300000
}
```
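
A minimal sketch of the Claude Code side of this exchange, assuming only the directory layout and JSON fields shown above (function names are illustrative):

```python
import json
import time
import uuid
from datetime import datetime, timezone
from pathlib import Path

AMP_DIR = Path(".claude/amp")

def create_prompt(text: str) -> str:
    """Write a prompt file for Amp and print the command to run."""
    prompt_id = str(uuid.uuid4())
    payload = {
        "id": prompt_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": text,
    }
    path = AMP_DIR / "prompts" / "pending" / f"{prompt_id}.json"
    path.write_text(json.dumps(payload, indent=2))
    print(f"Run: amp @{path}")
    return prompt_id

def wait_for_response(prompt_id: str, poll_seconds: float = 1.0) -> dict:
    """Poll responses/ready until the matching file appears, then archive it."""
    ready = AMP_DIR / "responses" / "ready" / f"{prompt_id}.json"
    while not ready.exists():
        time.sleep(poll_seconds)
    response = json.loads(ready.read_text())
    # Move to consumed/ so the audit trail is preserved
    ready.rename(AMP_DIR / "responses" / "consumed" / ready.name)
    return response
```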

## Configuration

**File**: `.claude/amp/config.json`

```json
{
  "pollInterval": 1000,
  "timeout": 300000,
  "debug": false,
  "ampCommand": "amp"
}
```

- `pollInterval`: how often to check for new prompts, in milliseconds (1 s)
- `timeout`: per-prompt timeout in milliseconds (5 minutes)
- `debug`: enable debug logging
- `ampCommand`: the Amp CLI command to invoke

## Use Cases

- Web Research: "Research latest React 18 features"
- Code Analysis: "Analyze our storage backend architecture"
- Documentation: "Generate API docs for MCP tools"
- Code Generation: "Create TypeScript type definitions"
- Best Practices: "Find OAuth 2.1 security recommendations"

## Manual Inspection (Optional)

```bash
# List pending prompts
ls -lt .claude/amp/prompts/pending/

# View prompt content
cat .claude/amp/prompts/pending/{uuid}.json | jq -r '.prompt'
```

## Troubleshooting

**Amp CLI credit errors:**
```bash
# Test if Amp is authenticated
amp

# If credits exhausted, check status
# https://ampcode.com/settings
```

**Response not appearing:**
```bash
# Verify Amp wrote the file
ls -lt .claude/amp/responses/ready/
```

**Permission issues:**
```bash
# Ensure directories exist
ls -la .claude/amp/

# Check write permissions
touch .claude/amp/responses/ready/test.json && rm .claude/amp/responses/ready/test.json
```

## Benefits

- Zero Claude Code Credits: Uses your separate Amp session
- Uses Free Tier: Works with Amp's free tier (when credits available)
- Simple Workflow: No background processes
- Full Control: You decide when/what to process
- Fault Tolerant: File-based queue survives crashes
- Audit Trail: All prompts/responses saved
- Reusable: Can replay prompts or review past responses

## Limitations

- Manual Step Required: You must run the `amp @` command
- Amp Credits: Still consumes Amp API credits
- Semi-Async: Claude Code waits for you to process
- Best for Research: Optimized for async research tasks, not real-time chat

```

--------------------------------------------------------------------------------
/archive/docs-removed-2025-08-23/windows-setup.md:
--------------------------------------------------------------------------------

```markdown
# Windows Setup Guide for MCP Memory Service

This guide provides comprehensive instructions for setting up and running the MCP Memory Service on Windows systems, including handling common Windows-specific issues.

## Installation

### Prerequisites
- Python 3.10 or newer
- Git for Windows
- Visual Studio Build Tools (for PyTorch)

### Recommended Installation (Using UV)

1. Install UV:
```bash
pip install uv
```

2. Clone and setup:
```bash
git clone https://github.com/doobidoo/mcp-memory-service.git
cd mcp-memory-service
uv venv
.venv\Scripts\activate
uv pip install -r requirements.txt
uv pip install -e .
```

### Alternative: Windows-Specific Installation

If you encounter issues with UV, use our Windows-specific installation script:

```bash
python scripts/install_windows.py
```

This script handles:
1. Detecting CUDA availability
2. Installing the correct PyTorch version
3. Setting up dependencies without conflicts
4. Verifying the installation

## Configuration

### Claude Desktop Configuration

1. Create or edit your Claude Desktop configuration file:
   - Location: `%APPDATA%\Claude\claude_desktop_config.json`

2. Add the following configuration:
```json
{
  "memory": {
    "command": "python",
    "args": [
      "C:\\path\\to\\mcp-memory-service\\memory_wrapper.py"
    ],
    "env": {
      "MCP_MEMORY_CHROMA_PATH": "C:\\Users\\YourUsername\\AppData\\Local\\mcp-memory\\chroma_db",
      "MCP_MEMORY_BACKUPS_PATH": "C:\\Users\\YourUsername\\AppData\\Local\\mcp-memory\\backups"
    }
  }
}
```

### Environment Variables

Important Windows-specific environment variables:
```
MCP_MEMORY_USE_DIRECTML=1 # Enable DirectML acceleration if CUDA is not available
PYTORCH_ENABLE_MPS_FALLBACK=0 # Disable MPS (not needed on Windows)
```

## Common Windows-Specific Issues

### PyTorch Installation Issues

If you see errors about PyTorch installation:

1. Use the Windows-specific installation script:
```bash
python scripts/install_windows.py
```

2. Or manually install PyTorch with the correct index URL:
```bash
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu118
```

### JSON Parsing Errors

If you see "Unexpected token" errors in Claude Desktop:

**Symptoms:**
```
Unexpected token 'U', "Using Chro"... is not valid JSON
Unexpected token 'I', "[INFO] Star"... is not valid JSON
```

**Solution:**
- Update to the latest version which includes Windows-specific stream handling fixes
- Use the memory wrapper script which properly handles stdout/stderr separation (see the sketch below)
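
The underlying rule: an MCP stdio server must reserve stdout for JSON-RPC messages and send all human-readable logging to stderr. A minimal sketch of that pattern (illustrative, not the actual `memory_wrapper.py`):

```python
import logging
import sys

# Route all log output to stderr so stdout stays valid JSON for the MCP client
logging.basicConfig(
    stream=sys.stderr,
    level=logging.INFO,
    format="[%(levelname)s] %(message)s",
)

logging.info("Starting memory service...")   # goes to stderr (safe)
# print("Using ChromaDB...")                 # would corrupt the JSON stream
```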

### Recursion Errors

If you encounter recursion errors:

1. Run the sitecustomize fix script:
```bash
python scripts/fix_sitecustomize.py
```

2. Restart your Python environment

## Debugging Tools

Windows-specific debugging tools:

```bash
# Verify PyTorch installation
python scripts/verify_pytorch_windows.py

# Check environment compatibility
python scripts/verify_environment_enhanced.py

# Test the memory server
python scripts/run_memory_server.py
```

## Log Files

Important log locations on Windows:
- Claude Desktop logs: `%APPDATA%\Claude\logs\mcp-server-memory.log`
- Memory service logs: `%LOCALAPPDATA%\mcp-memory\logs\memory_service.log`

## Performance Optimization

### GPU Acceleration

1. CUDA (recommended if available):
- Ensure NVIDIA drivers are up to date
- CUDA toolkit is not required (bundled with PyTorch)

2. DirectML (alternative):
- Enable with `MCP_MEMORY_USE_DIRECTML=1`
- Useful for AMD GPUs or when CUDA is not available

### Memory Usage

If experiencing memory issues:
1. Reduce batch size:
```bash
set MCP_MEMORY_BATCH_SIZE=4
```

2. Use a smaller model:
```bash
set MCP_MEMORY_MODEL_NAME=paraphrase-MiniLM-L3-v2
```

## Getting Help

If you encounter Windows-specific issues:
1. Check the logs in `%APPDATA%\Claude\logs\`
2. Run verification tools mentioned above
3. Contact support via Telegram: t.me/doobeedoo
```