This is page 6 of 35. Use http://codebase.md/doobidoo/mcp-memory-service?page={x} to view the full context.

# Directory Structure

```
├── .claude
│   ├── agents
│   │   ├── amp-bridge.md
│   │   ├── amp-pr-automator.md
│   │   ├── code-quality-guard.md
│   │   ├── gemini-pr-automator.md
│   │   └── github-release-manager.md
│   ├── settings.local.json.backup
│   └── settings.local.json.local
├── .commit-message
├── .dockerignore
├── .env.example
├── .env.sqlite.backup
├── .envnn#
├── .gitattributes
├── .github
│   ├── FUNDING.yml
│   ├── ISSUE_TEMPLATE
│   │   ├── bug_report.yml
│   │   ├── config.yml
│   │   ├── feature_request.yml
│   │   └── performance_issue.yml
│   ├── pull_request_template.md
│   └── workflows
│       ├── bridge-tests.yml
│       ├── CACHE_FIX.md
│       ├── claude-code-review.yml
│       ├── claude.yml
│       ├── cleanup-images.yml.disabled
│       ├── dev-setup-validation.yml
│       ├── docker-publish.yml
│       ├── LATEST_FIXES.md
│       ├── main-optimized.yml.disabled
│       ├── main.yml
│       ├── publish-and-test.yml
│       ├── README_OPTIMIZATION.md
│       ├── release-tag.yml.disabled
│       ├── release.yml
│       ├── roadmap-review-reminder.yml
│       ├── SECRET_CONDITIONAL_FIX.md
│       └── WORKFLOW_FIXES.md
├── .gitignore
├── .mcp.json.backup
├── .mcp.json.template
├── .pyscn
│   ├── .gitignore
│   └── reports
│       └── analyze_20251123_214224.html
├── AGENTS.md
├── archive
│   ├── deployment
│   │   ├── deploy_fastmcp_fixed.sh
│   │   ├── deploy_http_with_mcp.sh
│   │   └── deploy_mcp_v4.sh
│   ├── deployment-configs
│   │   ├── empty_config.yml
│   │   └── smithery.yaml
│   ├── development
│   │   └── test_fastmcp.py
│   ├── docs-removed-2025-08-23
│   │   ├── authentication.md
│   │   ├── claude_integration.md
│   │   ├── claude-code-compatibility.md
│   │   ├── claude-code-integration.md
│   │   ├── claude-code-quickstart.md
│   │   ├── claude-desktop-setup.md
│   │   ├── complete-setup-guide.md
│   │   ├── database-synchronization.md
│   │   ├── development
│   │   │   ├── autonomous-memory-consolidation.md
│   │   │   ├── CLEANUP_PLAN.md
│   │   │   ├── CLEANUP_README.md
│   │   │   ├── CLEANUP_SUMMARY.md
│   │   │   ├── dream-inspired-memory-consolidation.md
│   │   │   ├── hybrid-slm-memory-consolidation.md
│   │   │   ├── mcp-milestone.md
│   │   │   ├── multi-client-architecture.md
│   │   │   ├── test-results.md
│   │   │   └── TIMESTAMP_FIX_SUMMARY.md
│   │   ├── distributed-sync.md
│   │   ├── invocation_guide.md
│   │   ├── macos-intel.md
│   │   ├── master-guide.md
│   │   ├── mcp-client-configuration.md
│   │   ├── multi-client-server.md
│   │   ├── service-installation.md
│   │   ├── sessions
│   │   │   └── MCP_ENHANCEMENT_SESSION_MEMORY_v4.1.0.md
│   │   ├── UBUNTU_SETUP.md
│   │   ├── ubuntu.md
│   │   ├── windows-setup.md
│   │   └── windows.md
│   ├── docs-root-cleanup-2025-08-23
│   │   ├── AWESOME_LIST_SUBMISSION.md
│   │   ├── CLOUDFLARE_IMPLEMENTATION.md
│   │   ├── DOCUMENTATION_ANALYSIS.md
│   │   ├── DOCUMENTATION_CLEANUP_PLAN.md
│   │   ├── DOCUMENTATION_CONSOLIDATION_COMPLETE.md
│   │   ├── LITESTREAM_SETUP_GUIDE.md
│   │   ├── lm_studio_system_prompt.md
│   │   ├── PYTORCH_DOWNLOAD_FIX.md
│   │   └── README-ORIGINAL-BACKUP.md
│   ├── investigations
│   │   └── MACOS_HOOKS_INVESTIGATION.md
│   ├── litestream-configs-v6.3.0
│   │   ├── install_service.sh
│   │   ├── litestream_master_config_fixed.yml
│   │   ├── litestream_master_config.yml
│   │   ├── litestream_replica_config_fixed.yml
│   │   ├── litestream_replica_config.yml
│   │   ├── litestream_replica_simple.yml
│   │   ├── litestream-http.service
│   │   ├── litestream.service
│   │   └── requirements-cloudflare.txt
│   ├── release-notes
│   │   └── release-notes-v7.1.4.md
│   └── setup-development
│       ├── README.md
│       ├── setup_consolidation_mdns.sh
│       ├── STARTUP_SETUP_GUIDE.md
│       └── test_service.sh
├── CHANGELOG-HISTORIC.md
├── CHANGELOG.md
├── claude_commands
│   ├── memory-context.md
│   ├── memory-health.md
│   ├── memory-ingest-dir.md
│   ├── memory-ingest.md
│   ├── memory-recall.md
│   ├── memory-search.md
│   ├── memory-store.md
│   ├── README.md
│   └── session-start.md
├── claude-hooks
│   ├── config.json
│   ├── config.template.json
│   ├── CONFIGURATION.md
│   ├── core
│   │   ├── memory-retrieval.js
│   │   ├── mid-conversation.js
│   │   ├── session-end.js
│   │   ├── session-start.js
│   │   └── topic-change.js
│   ├── debug-pattern-test.js
│   ├── install_claude_hooks_windows.ps1
│   ├── install_hooks.py
│   ├── memory-mode-controller.js
│   ├── MIGRATION.md
│   ├── README-NATURAL-TRIGGERS.md
│   ├── README-phase2.md
│   ├── README.md
│   ├── simple-test.js
│   ├── statusline.sh
│   ├── test-adaptive-weights.js
│   ├── test-dual-protocol-hook.js
│   ├── test-mcp-hook.js
│   ├── test-natural-triggers.js
│   ├── test-recency-scoring.js
│   ├── tests
│   │   ├── integration-test.js
│   │   ├── phase2-integration-test.js
│   │   ├── test-code-execution.js
│   │   ├── test-cross-session.json
│   │   ├── test-session-tracking.json
│   │   └── test-threading.json
│   ├── utilities
│   │   ├── adaptive-pattern-detector.js
│   │   ├── context-formatter.js
│   │   ├── context-shift-detector.js
│   │   ├── conversation-analyzer.js
│   │   ├── dynamic-context-updater.js
│   │   ├── git-analyzer.js
│   │   ├── mcp-client.js
│   │   ├── memory-client.js
│   │   ├── memory-scorer.js
│   │   ├── performance-manager.js
│   │   ├── project-detector.js
│   │   ├── session-tracker.js
│   │   ├── tiered-conversation-monitor.js
│   │   └── version-checker.js
│   └── WINDOWS-SESSIONSTART-BUG.md
├── CLAUDE.md
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── Development-Sprint-November-2025.md
├── docs
│   ├── amp-cli-bridge.md
│   ├── api
│   │   ├── code-execution-interface.md
│   │   ├── memory-metadata-api.md
│   │   ├── PHASE1_IMPLEMENTATION_SUMMARY.md
│   │   ├── PHASE2_IMPLEMENTATION_SUMMARY.md
│   │   ├── PHASE2_REPORT.md
│   │   └── tag-standardization.md
│   ├── architecture
│   │   ├── search-enhancement-spec.md
│   │   └── search-examples.md
│   ├── architecture.md
│   ├── archive
│   │   └── obsolete-workflows
│   │       ├── load_memory_context.md
│   │       └── README.md
│   ├── assets
│   │   └── images
│   │       ├── dashboard-v3.3.0-preview.png
│   │       ├── memory-awareness-hooks-example.png
│   │       ├── project-infographic.svg
│   │       └── README.md
│   ├── CLAUDE_CODE_QUICK_REFERENCE.md
│   ├── cloudflare-setup.md
│   ├── deployment
│   │   ├── docker.md
│   │   ├── dual-service.md
│   │   ├── production-guide.md
│   │   └── systemd-service.md
│   ├── development
│   │   ├── ai-agent-instructions.md
│   │   ├── code-quality
│   │   │   ├── phase-2a-completion.md
│   │   │   ├── phase-2a-handle-get-prompt.md
│   │   │   ├── phase-2a-index.md
│   │   │   ├── phase-2a-install-package.md
│   │   │   └── phase-2b-session-summary.md
│   │   ├── code-quality-workflow.md
│   │   ├── dashboard-workflow.md
│   │   ├── issue-management.md
│   │   ├── pr-review-guide.md
│   │   ├── refactoring-notes.md
│   │   ├── release-checklist.md
│   │   └── todo-tracker.md
│   ├── docker-optimized-build.md
│   ├── document-ingestion.md
│   ├── DOCUMENTATION_AUDIT.md
│   ├── enhancement-roadmap-issue-14.md
│   ├── examples
│   │   ├── analysis-scripts.js
│   │   ├── maintenance-session-example.md
│   │   ├── memory-distribution-chart.jsx
│   │   └── tag-schema.json
│   ├── first-time-setup.md
│   ├── glama-deployment.md
│   ├── guides
│   │   ├── advanced-command-examples.md
│   │   ├── chromadb-migration.md
│   │   ├── commands-vs-mcp-server.md
│   │   ├── mcp-enhancements.md
│   │   ├── mdns-service-discovery.md
│   │   ├── memory-consolidation-guide.md
│   │   ├── migration.md
│   │   ├── scripts.md
│   │   └── STORAGE_BACKENDS.md
│   ├── HOOK_IMPROVEMENTS.md
│   ├── hooks
│   │   └── phase2-code-execution-migration.md
│   ├── http-server-management.md
│   ├── ide-compatability.md
│   ├── IMAGE_RETENTION_POLICY.md
│   ├── images
│   │   └── dashboard-placeholder.md
│   ├── implementation
│   │   ├── health_checks.md
│   │   └── performance.md
│   ├── IMPLEMENTATION_PLAN_HTTP_SSE.md
│   ├── integration
│   │   ├── homebrew.md
│   │   └── multi-client.md
│   ├── integrations
│   │   ├── gemini.md
│   │   ├── groq-bridge.md
│   │   ├── groq-integration-summary.md
│   │   └── groq-model-comparison.md
│   ├── integrations.md
│   ├── legacy
│   │   └── dual-protocol-hooks.md
│   ├── LM_STUDIO_COMPATIBILITY.md
│   ├── maintenance
│   │   └── memory-maintenance.md
│   ├── mastery
│   │   ├── api-reference.md
│   │   ├── architecture-overview.md
│   │   ├── configuration-guide.md
│   │   ├── local-setup-and-run.md
│   │   ├── testing-guide.md
│   │   └── troubleshooting.md
│   ├── migration
│   │   └── code-execution-api-quick-start.md
│   ├── natural-memory-triggers
│   │   ├── cli-reference.md
│   │   ├── installation-guide.md
│   │   └── performance-optimization.md
│   ├── oauth-setup.md
│   ├── pr-graphql-integration.md
│   ├── quick-setup-cloudflare-dual-environment.md
│   ├── README.md
│   ├── remote-configuration-wiki-section.md
│   ├── research
│   │   ├── code-execution-interface-implementation.md
│   │   └── code-execution-interface-summary.md
│   ├── ROADMAP.md
│   ├── sqlite-vec-backend.md
│   ├── statistics
│   │   ├── charts
│   │   │   ├── activity_patterns.png
│   │   │   ├── contributors.png
│   │   │   ├── growth_trajectory.png
│   │   │   ├── monthly_activity.png
│   │   │   └── october_sprint.png
│   │   ├── data
│   │   │   ├── activity_by_day.csv
│   │   │   ├── activity_by_hour.csv
│   │   │   ├── contributors.csv
│   │   │   └── monthly_activity.csv
│   │   ├── generate_charts.py
│   │   └── REPOSITORY_STATISTICS.md
│   ├── technical
│   │   ├── development.md
│   │   ├── memory-migration.md
│   │   ├── migration-log.md
│   │   ├── sqlite-vec-embedding-fixes.md
│   │   └── tag-storage.md
│   ├── testing
│   │   └── regression-tests.md
│   ├── testing-cloudflare-backend.md
│   ├── troubleshooting
│   │   ├── cloudflare-api-token-setup.md
│   │   ├── cloudflare-authentication.md
│   │   ├── general.md
│   │   ├── hooks-quick-reference.md
│   │   ├── pr162-schema-caching-issue.md
│   │   ├── session-end-hooks.md
│   │   └── sync-issues.md
│   └── tutorials
│       ├── advanced-techniques.md
│       ├── data-analysis.md
│       └── demo-session-walkthrough.md
├── examples
│   ├── claude_desktop_config_template.json
│   ├── claude_desktop_config_windows.json
│   ├── claude-desktop-http-config.json
│   ├── config
│   │   └── claude_desktop_config.json
│   ├── http-mcp-bridge.js
│   ├── memory_export_template.json
│   ├── README.md
│   ├── setup
│   │   └── setup_multi_client_complete.py
│   └── start_https_example.sh
├── install_service.py
├── install.py
├── LICENSE
├── NOTICE
├── pyproject.toml
├── pytest.ini
├── README.md
├── run_server.py
├── scripts
│   ├── .claude
│   │   └── settings.local.json
│   ├── archive
│   │   └── check_missing_timestamps.py
│   ├── backup
│   │   ├── backup_memories.py
│   │   ├── backup_sqlite_vec.sh
│   │   ├── export_distributable_memories.sh
│   │   └── restore_memories.py
│   ├── benchmarks
│   │   ├── benchmark_code_execution_api.py
│   │   ├── benchmark_hybrid_sync.py
│   │   └── benchmark_server_caching.py
│   ├── database
│   │   ├── analyze_sqlite_vec_db.py
│   │   ├── check_sqlite_vec_status.py
│   │   ├── db_health_check.py
│   │   └── simple_timestamp_check.py
│   ├── development
│   │   ├── debug_server_initialization.py
│   │   ├── find_orphaned_files.py
│   │   ├── fix_mdns.sh
│   │   ├── fix_sitecustomize.py
│   │   ├── remote_ingest.sh
│   │   ├── setup-git-merge-drivers.sh
│   │   ├── uv-lock-merge.sh
│   │   └── verify_hybrid_sync.py
│   ├── hooks
│   │   └── pre-commit
│   ├── installation
│   │   ├── install_linux_service.py
│   │   ├── install_macos_service.py
│   │   ├── install_uv.py
│   │   ├── install_windows_service.py
│   │   ├── install.py
│   │   ├── setup_backup_cron.sh
│   │   ├── setup_claude_mcp.sh
│   │   └── setup_cloudflare_resources.py
│   ├── linux
│   │   ├── service_status.sh
│   │   ├── start_service.sh
│   │   ├── stop_service.sh
│   │   ├── uninstall_service.sh
│   │   └── view_logs.sh
│   ├── maintenance
│   │   ├── assign_memory_types.py
│   │   ├── check_memory_types.py
│   │   ├── cleanup_corrupted_encoding.py
│   │   ├── cleanup_memories.py
│   │   ├── cleanup_organize.py
│   │   ├── consolidate_memory_types.py
│   │   ├── consolidation_mappings.json
│   │   ├── delete_orphaned_vectors_fixed.py
│   │   ├── fast_cleanup_duplicates_with_tracking.sh
│   │   ├── find_all_duplicates.py
│   │   ├── find_cloudflare_duplicates.py
│   │   ├── find_duplicates.py
│   │   ├── memory-types.md
│   │   ├── README.md
│   │   ├── recover_timestamps_from_cloudflare.py
│   │   ├── regenerate_embeddings.py
│   │   ├── repair_malformed_tags.py
│   │   ├── repair_memories.py
│   │   ├── repair_sqlite_vec_embeddings.py
│   │   ├── repair_zero_embeddings.py
│   │   ├── restore_from_json_export.py
│   │   └── scan_todos.sh
│   ├── migration
│   │   ├── cleanup_mcp_timestamps.py
│   │   ├── legacy
│   │   │   └── migrate_chroma_to_sqlite.py
│   │   ├── mcp-migration.py
│   │   ├── migrate_sqlite_vec_embeddings.py
│   │   ├── migrate_storage.py
│   │   ├── migrate_tags.py
│   │   ├── migrate_timestamps.py
│   │   ├── migrate_to_cloudflare.py
│   │   ├── migrate_to_sqlite_vec.py
│   │   ├── migrate_v5_enhanced.py
│   │   ├── TIMESTAMP_CLEANUP_README.md
│   │   └── verify_mcp_timestamps.py
│   ├── pr
│   │   ├── amp_collect_results.sh
│   │   ├── amp_detect_breaking_changes.sh
│   │   ├── amp_generate_tests.sh
│   │   ├── amp_pr_review.sh
│   │   ├── amp_quality_gate.sh
│   │   ├── amp_suggest_fixes.sh
│   │   ├── auto_review.sh
│   │   ├── detect_breaking_changes.sh
│   │   ├── generate_tests.sh
│   │   ├── lib
│   │   │   └── graphql_helpers.sh
│   │   ├── quality_gate.sh
│   │   ├── resolve_threads.sh
│   │   ├── run_pyscn_analysis.sh
│   │   ├── run_quality_checks.sh
│   │   ├── thread_status.sh
│   │   └── watch_reviews.sh
│   ├── quality
│   │   ├── fix_dead_code_install.sh
│   │   ├── phase1_dead_code_analysis.md
│   │   ├── phase2_complexity_analysis.md
│   │   ├── README_PHASE1.md
│   │   ├── README_PHASE2.md
│   │   ├── track_pyscn_metrics.sh
│   │   └── weekly_quality_review.sh
│   ├── README.md
│   ├── run
│   │   ├── run_mcp_memory.sh
│   │   ├── run-with-uv.sh
│   │   └── start_sqlite_vec.sh
│   ├── run_memory_server.py
│   ├── server
│   │   ├── check_http_server.py
│   │   ├── check_server_health.py
│   │   ├── memory_offline.py
│   │   ├── preload_models.py
│   │   ├── run_http_server.py
│   │   ├── run_memory_server.py
│   │   ├── start_http_server.bat
│   │   └── start_http_server.sh
│   ├── service
│   │   ├── deploy_dual_services.sh
│   │   ├── install_http_service.sh
│   │   ├── mcp-memory-http.service
│   │   ├── mcp-memory.service
│   │   ├── memory_service_manager.sh
│   │   ├── service_control.sh
│   │   ├── service_utils.py
│   │   └── update_service.sh
│   ├── sync
│   │   ├── check_drift.py
│   │   ├── claude_sync_commands.py
│   │   ├── export_memories.py
│   │   ├── import_memories.py
│   │   ├── litestream
│   │   │   ├── apply_local_changes.sh
│   │   │   ├── enhanced_memory_store.sh
│   │   │   ├── init_staging_db.sh
│   │   │   ├── io.litestream.replication.plist
│   │   │   ├── manual_sync.sh
│   │   │   ├── memory_sync.sh
│   │   │   ├── pull_remote_changes.sh
│   │   │   ├── push_to_remote.sh
│   │   │   ├── README.md
│   │   │   ├── resolve_conflicts.sh
│   │   │   ├── setup_local_litestream.sh
│   │   │   ├── setup_remote_litestream.sh
│   │   │   ├── staging_db_init.sql
│   │   │   ├── stash_local_changes.sh
│   │   │   ├── sync_from_remote_noconfig.sh
│   │   │   └── sync_from_remote.sh
│   │   ├── README.md
│   │   ├── safe_cloudflare_update.sh
│   │   ├── sync_memory_backends.py
│   │   └── sync_now.py
│   ├── testing
│   │   ├── run_complete_test.py
│   │   ├── run_memory_test.sh
│   │   ├── simple_test.py
│   │   ├── test_cleanup_logic.py
│   │   ├── test_cloudflare_backend.py
│   │   ├── test_docker_functionality.py
│   │   ├── test_installation.py
│   │   ├── test_mdns.py
│   │   ├── test_memory_api.py
│   │   ├── test_memory_simple.py
│   │   ├── test_migration.py
│   │   ├── test_search_api.py
│   │   ├── test_sqlite_vec_embeddings.py
│   │   ├── test_sse_events.py
│   │   ├── test-connection.py
│   │   └── test-hook.js
│   ├── utils
│   │   ├── claude_commands_utils.py
│   │   ├── generate_personalized_claude_md.sh
│   │   ├── groq
│   │   ├── groq_agent_bridge.py
│   │   ├── list-collections.py
│   │   ├── memory_wrapper_uv.py
│   │   ├── query_memories.py
│   │   ├── smithery_wrapper.py
│   │   ├── test_groq_bridge.sh
│   │   └── uv_wrapper.py
│   └── validation
│       ├── check_dev_setup.py
│       ├── check_documentation_links.py
│       ├── diagnose_backend_config.py
│       ├── validate_configuration_complete.py
│       ├── validate_memories.py
│       ├── validate_migration.py
│       ├── validate_timestamp_integrity.py
│       ├── verify_environment.py
│       ├── verify_pytorch_windows.py
│       └── verify_torch.py
├── SECURITY.md
├── selective_timestamp_recovery.py
├── SPONSORS.md
├── src
│   └── mcp_memory_service
│       ├── __init__.py
│       ├── api
│       │   ├── __init__.py
│       │   ├── client.py
│       │   ├── operations.py
│       │   ├── sync_wrapper.py
│       │   └── types.py
│       ├── backup
│       │   ├── __init__.py
│       │   └── scheduler.py
│       ├── cli
│       │   ├── __init__.py
│       │   ├── ingestion.py
│       │   ├── main.py
│       │   └── utils.py
│       ├── config.py
│       ├── consolidation
│       │   ├── __init__.py
│       │   ├── associations.py
│       │   ├── base.py
│       │   ├── clustering.py
│       │   ├── compression.py
│       │   ├── consolidator.py
│       │   ├── decay.py
│       │   ├── forgetting.py
│       │   ├── health.py
│       │   └── scheduler.py
│       ├── dependency_check.py
│       ├── discovery
│       │   ├── __init__.py
│       │   ├── client.py
│       │   └── mdns_service.py
│       ├── embeddings
│       │   ├── __init__.py
│       │   └── onnx_embeddings.py
│       ├── ingestion
│       │   ├── __init__.py
│       │   ├── base.py
│       │   ├── chunker.py
│       │   ├── csv_loader.py
│       │   ├── json_loader.py
│       │   ├── pdf_loader.py
│       │   ├── registry.py
│       │   ├── semtools_loader.py
│       │   └── text_loader.py
│       ├── lm_studio_compat.py
│       ├── mcp_server.py
│       ├── models
│       │   ├── __init__.py
│       │   └── memory.py
│       ├── server.py
│       ├── services
│       │   ├── __init__.py
│       │   └── memory_service.py
│       ├── storage
│       │   ├── __init__.py
│       │   ├── base.py
│       │   ├── cloudflare.py
│       │   ├── factory.py
│       │   ├── http_client.py
│       │   ├── hybrid.py
│       │   └── sqlite_vec.py
│       ├── sync
│       │   ├── __init__.py
│       │   ├── exporter.py
│       │   ├── importer.py
│       │   └── litestream_config.py
│       ├── utils
│       │   ├── __init__.py
│       │   ├── cache_manager.py
│       │   ├── content_splitter.py
│       │   ├── db_utils.py
│       │   ├── debug.py
│       │   ├── document_processing.py
│       │   ├── gpu_detection.py
│       │   ├── hashing.py
│       │   ├── http_server_manager.py
│       │   ├── port_detection.py
│       │   ├── system_detection.py
│       │   └── time_parser.py
│       └── web
│           ├── __init__.py
│           ├── api
│           │   ├── __init__.py
│           │   ├── analytics.py
│           │   ├── backup.py
│           │   ├── consolidation.py
│           │   ├── documents.py
│           │   ├── events.py
│           │   ├── health.py
│           │   ├── manage.py
│           │   ├── mcp.py
│           │   ├── memories.py
│           │   ├── search.py
│           │   └── sync.py
│           ├── app.py
│           ├── dependencies.py
│           ├── oauth
│           │   ├── __init__.py
│           │   ├── authorization.py
│           │   ├── discovery.py
│           │   ├── middleware.py
│           │   ├── models.py
│           │   ├── registration.py
│           │   └── storage.py
│           ├── sse.py
│           └── static
│               ├── app.js
│               ├── index.html
│               ├── README.md
│               ├── sse_test.html
│               └── style.css
├── start_http_debug.bat
├── start_http_server.sh
├── test_document.txt
├── test_version_checker.js
├── tests
│   ├── __init__.py
│   ├── api
│   │   ├── __init__.py
│   │   ├── test_compact_types.py
│   │   └── test_operations.py
│   ├── bridge
│   │   ├── mock_responses.js
│   │   ├── package-lock.json
│   │   ├── package.json
│   │   └── test_http_mcp_bridge.js
│   ├── conftest.py
│   ├── consolidation
│   │   ├── __init__.py
│   │   ├── conftest.py
│   │   ├── test_associations.py
│   │   ├── test_clustering.py
│   │   ├── test_compression.py
│   │   ├── test_consolidator.py
│   │   ├── test_decay.py
│   │   └── test_forgetting.py
│   ├── contracts
│   │   └── api-specification.yml
│   ├── integration
│   │   ├── package-lock.json
│   │   ├── package.json
│   │   ├── test_api_key_fallback.py
│   │   ├── test_api_memories_chronological.py
│   │   ├── test_api_tag_time_search.py
│   │   ├── test_api_with_memory_service.py
│   │   ├── test_bridge_integration.js
│   │   ├── test_cli_interfaces.py
│   │   ├── test_cloudflare_connection.py
│   │   ├── test_concurrent_clients.py
│   │   ├── test_data_serialization_consistency.py
│   │   ├── test_http_server_startup.py
│   │   ├── test_mcp_memory.py
│   │   ├── test_mdns_integration.py
│   │   ├── test_oauth_basic_auth.py
│   │   ├── test_oauth_flow.py
│   │   ├── test_server_handlers.py
│   │   └── test_store_memory.py
│   ├── performance
│   │   ├── test_background_sync.py
│   │   └── test_hybrid_live.py
│   ├── README.md
│   ├── smithery
│   │   └── test_smithery.py
│   ├── sqlite
│   │   └── simple_sqlite_vec_test.py
│   ├── test_client.py
│   ├── test_content_splitting.py
│   ├── test_database.py
│   ├── test_hybrid_cloudflare_limits.py
│   ├── test_hybrid_storage.py
│   ├── test_memory_ops.py
│   ├── test_semantic_search.py
│   ├── test_sqlite_vec_storage.py
│   ├── test_time_parser.py
│   ├── test_timestamp_preservation.py
│   ├── timestamp
│   │   ├── test_hook_vs_manual_storage.py
│   │   ├── test_issue99_final_validation.py
│   │   ├── test_search_retrieval_inconsistency.py
│   │   ├── test_timestamp_issue.py
│   │   └── test_timestamp_simple.py
│   └── unit
│       ├── conftest.py
│       ├── test_cloudflare_storage.py
│       ├── test_csv_loader.py
│       ├── test_fastapi_dependencies.py
│       ├── test_import.py
│       ├── test_json_loader.py
│       ├── test_mdns_simple.py
│       ├── test_mdns.py
│       ├── test_memory_service.py
│       ├── test_memory.py
│       ├── test_semtools_loader.py
│       ├── test_storage_interface_compatibility.py
│       └── test_tag_time_filtering.py
├── tools
│   ├── docker
│   │   ├── DEPRECATED.md
│   │   ├── docker-compose.http.yml
│   │   ├── docker-compose.pythonpath.yml
│   │   ├── docker-compose.standalone.yml
│   │   ├── docker-compose.uv.yml
│   │   ├── docker-compose.yml
│   │   ├── docker-entrypoint-persistent.sh
│   │   ├── docker-entrypoint-unified.sh
│   │   ├── docker-entrypoint.sh
│   │   ├── Dockerfile
│   │   ├── Dockerfile.glama
│   │   ├── Dockerfile.slim
│   │   ├── README.md
│   │   └── test-docker-modes.sh
│   └── README.md
└── uv.lock
```

# Files

--------------------------------------------------------------------------------
/src/mcp_memory_service/consolidation/base.py:
--------------------------------------------------------------------------------

```python
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Base classes and interfaces for memory consolidation components."""

from abc import ABC, abstractmethod
from typing import List, Dict, Any, Optional, Tuple
from dataclasses import dataclass, field
from datetime import datetime
import logging

from ..models.memory import Memory

logger = logging.getLogger(__name__)

@dataclass
class ConsolidationConfig:
    """Configuration for consolidation operations."""
    
    # Decay settings
    decay_enabled: bool = True
    retention_periods: Dict[str, int] = field(default_factory=lambda: {
        'critical': 365,
        'reference': 180, 
        'standard': 30,
        'temporary': 7
    })
    
    # Association settings
    associations_enabled: bool = True
    min_similarity: float = 0.3
    max_similarity: float = 0.7
    max_pairs_per_run: int = 100
    
    # Clustering settings
    clustering_enabled: bool = True
    min_cluster_size: int = 5
    clustering_algorithm: str = 'dbscan'  # 'dbscan', 'hierarchical'
    
    # Compression settings
    compression_enabled: bool = True
    max_summary_length: int = 500
    preserve_originals: bool = True
    
    # Forgetting settings
    forgetting_enabled: bool = True
    relevance_threshold: float = 0.1
    access_threshold_days: int = 90
    archive_location: Optional[str] = None

    # Incremental consolidation settings
    batch_size: int = 500  # Memories to process per consolidation run
    incremental_mode: bool = True  # Enable oldest-first batch processing

@dataclass
class ConsolidationReport:
    """Report of consolidation operations performed."""
    time_horizon: str
    start_time: datetime
    end_time: datetime
    memories_processed: int
    associations_discovered: int = 0
    clusters_created: int = 0
    memories_compressed: int = 0
    memories_archived: int = 0
    errors: List[str] = field(default_factory=list)
    performance_metrics: Dict[str, Any] = field(default_factory=dict)

@dataclass
class MemoryAssociation:
    """Represents a discovered association between memories."""
    source_memory_hashes: List[str]
    similarity_score: float
    connection_type: str
    discovery_method: str
    discovery_date: datetime
    metadata: Dict[str, Any] = field(default_factory=dict)

@dataclass
class MemoryCluster:
    """Represents a cluster of semantically related memories."""
    cluster_id: str
    memory_hashes: List[str]
    centroid_embedding: List[float]
    coherence_score: float
    created_at: datetime
    theme_keywords: List[str] = field(default_factory=list)
    metadata: Dict[str, Any] = field(default_factory=dict)

class ConsolidationBase(ABC):
    """Abstract base class for consolidation components."""
    
    def __init__(self, config: ConsolidationConfig):
        self.config = config
        self.logger = logging.getLogger(f"{__name__}.{self.__class__.__name__}")
    
    @abstractmethod
    async def process(self, memories: List[Memory], **kwargs) -> Any:
        """Process the given memories and return results."""
        pass
    
    def _validate_memories(self, memories: List[Memory]) -> bool:
        """Validate that memories list is valid for processing."""
        if not memories:
            self.logger.warning("Empty memories list provided")
            return False
        
        for memory in memories:
            if not hasattr(memory, 'content_hash') or not memory.content_hash:
                self.logger.error(f"Memory missing content_hash: {memory}")
                return False
        
        return True
    
    def _get_memory_age_days(self, memory: Memory, reference_time: Optional[datetime] = None) -> int:
        """Get the age of a memory in days."""
        ref_time = reference_time or datetime.now()
        
        if memory.created_at:
            created_dt = datetime.utcfromtimestamp(memory.created_at)
            return (ref_time - created_dt).days
        elif memory.timestamp:
            return (ref_time - memory.timestamp).days
        else:
            self.logger.warning(f"Memory {memory.content_hash} has no timestamp")
            return 0
    
    def _extract_memory_type(self, memory: Memory) -> str:
        """Extract the memory type, with fallback to 'standard'."""
        return memory.memory_type or 'standard'
    
    def _is_protected_memory(self, memory: Memory) -> bool:
        """Check if a memory is protected from consolidation operations."""
        protected_tags = {'critical', 'important', 'reference', 'permanent'}
        return bool(set(memory.tags).intersection(protected_tags))

class ConsolidationError(Exception):
    """Base exception for consolidation operations."""
    pass

class ConsolidationConfigError(ConsolidationError):
    """Exception raised for configuration errors."""
    pass

class ConsolidationProcessingError(ConsolidationError):
    """Exception raised during processing operations."""
    pass
```
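For orientation, here is a minimal sketch of a concrete component built on the base classes above. The `ProtectedFilter` name and its filtering logic are hypothetical (not part of the repository); the `Memory` stand-in is inlined so the sketch runs standalone, whereas in the repo you would import `Memory` and subclass `ConsolidationBase` instead.

```python
# Hypothetical sketch: a component that drops protected memories before a
# decay/forgetting pass, mirroring ConsolidationBase._is_protected_memory.
import asyncio
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Memory:  # minimal stand-in for mcp_memory_service.models.memory.Memory
    content_hash: str
    memory_type: Optional[str] = None
    tags: List[str] = field(default_factory=list)

class ProtectedFilter:
    """Hypothetical component: keeps only memories without protected tags."""
    PROTECTED_TAGS = {'critical', 'important', 'reference', 'permanent'}

    async def process(self, memories: List[Memory]) -> List[Memory]:
        return [m for m in memories
                if not self.PROTECTED_TAGS.intersection(m.tags)]

memories = [
    Memory("a1", tags=["temporary"]),
    Memory("b2", tags=["critical"]),
]
survivors = asyncio.run(ProtectedFilter().process(memories))
print([m.content_hash for m in survivors])  # -> ['a1']
```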

--------------------------------------------------------------------------------
/docs/troubleshooting/cloudflare-authentication.md:
--------------------------------------------------------------------------------

```markdown
# Cloudflare Authentication Troubleshooting

This guide helps resolve common Cloudflare authentication issues with the MCP Memory Service.

## Overview

Cloudflare API tokens come in different types with varying scopes and verification methods. Understanding these differences is crucial for proper authentication.

## Token Types and Verification

### Account-Scoped Tokens (Recommended)

**What they are:** Tokens with specific permissions limited to a particular Cloudflare account.

**Required Permissions:**
- `D1 Database:Edit` - For D1 database operations
- `Vectorize:Edit` - For vector index operations

**Verification Endpoint:**
```bash
curl "https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/tokens/verify" \
     -H "Authorization: Bearer {YOUR_TOKEN}"
```

**Success Response:**
```json
{
  "result": {
    "id": "token_id_here",
    "status": "active",
    "expires_on": "2026-04-30T23:59:59Z"
  },
  "success": true,
  "errors": [],
  "messages": [
    {
      "code": 10000,
      "message": "This API Token is valid and active"
    }
  ]
}
```
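The `status` and `expires_on` fields are the ones that matter operationally. A small sketch (assuming only the response shape shown above; the helper name is ours) that decides whether a token is usable:

```python
# Sketch: interpret the verify-endpoint JSON shown above. Cloudflare
# returns ISO-8601 timestamps with a "Z" suffix, normalized here for
# datetime.fromisoformat.
from datetime import datetime, timezone

def token_is_usable(response: dict, now: datetime = None) -> bool:
    if not response.get("success"):
        return False
    result = response.get("result", {})
    if result.get("status") != "active":
        return False
    expires = result.get("expires_on")
    if expires:
        now = now or datetime.now(timezone.utc)
        expiry = datetime.fromisoformat(expires.replace("Z", "+00:00"))
        return now < expiry
    return True

sample = {
    "result": {"id": "token_id_here", "status": "active",
               "expires_on": "2026-04-30T23:59:59Z"},
    "success": True,
}
print(token_is_usable(sample, now=datetime(2025, 1, 1, tzinfo=timezone.utc)))  # -> True
```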

### Global Tokens (Legacy)

**What they are:** Tokens with broader permissions across all accounts.

**Verification Endpoint:**
```bash
curl "https://api.cloudflare.com/client/v4/user/tokens/verify" \
     -H "Authorization: Bearer {YOUR_TOKEN}"
```
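Which endpoint to call therefore depends on the token's scope. A helper that builds the right verification URL (a sketch; the two URL patterns are taken directly from this guide, the function name is ours):

```python
# Sketch: pick the verification URL for a token, per the two endpoint
# patterns documented above.
API_BASE = "https://api.cloudflare.com/client/v4"

def verify_url(account_id: str = None) -> str:
    """Account-scoped tokens need the account endpoint;
    global (legacy) tokens use the user endpoint."""
    if account_id:
        return f"{API_BASE}/accounts/{account_id}/tokens/verify"
    return f"{API_BASE}/user/tokens/verify"

print(verify_url("abc123"))
# -> https://api.cloudflare.com/client/v4/accounts/abc123/tokens/verify
print(verify_url())
# -> https://api.cloudflare.com/client/v4/user/tokens/verify
```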

## Common Error Messages

### "Invalid API Token" (Error 1000)

**Cause:** Using the wrong verification endpoint for your token type.

**Solution:**
1. If using an account-scoped token, use the account-specific endpoint
2. If using a global token, use the user endpoint
3. Check the token's expiration date
4. Verify the token's permissions

**Example:**
```bash
# ❌ Wrong: Testing account-scoped token with user endpoint
curl "https://api.cloudflare.com/client/v4/user/tokens/verify" \
     -H "Authorization: Bearer account_scoped_token"
# Returns: {"success":false,"errors":[{"code":1000,"message":"Invalid API Token"}]}

# ✅ Correct: Testing account-scoped token with account endpoint
curl "https://api.cloudflare.com/client/v4/accounts/your_account_id/tokens/verify" \
     -H "Authorization: Bearer account_scoped_token"
# Returns: {"success":true,...}
```

### "401 Unauthorized" During Operations

**Cause:** The token lacks the permissions required for the operation.

**Solution:**
1. Verify the token has `D1 Database:Edit` permission
2. Verify the token has `Vectorize:Edit` permission
3. Check whether the token has expired
4. Ensure the account ID matches the token's scope

### "Client error '401 Unauthorized'" in MCP Service

**Cause:** Environment variables may not be properly loaded or token is invalid.

**Debugging Steps:**
1. Check environment variable loading:
   ```bash
   python scripts/validation/diagnose_backend_config.py
   ```

2. Test token manually:
   ```bash
   curl "https://api.cloudflare.com/client/v4/accounts/$CLOUDFLARE_ACCOUNT_ID/tokens/verify" \
        -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN"
   ```

3. Test D1 database access:
   ```bash
   curl -X POST "https://api.cloudflare.com/client/v4/accounts/$CLOUDFLARE_ACCOUNT_ID/d1/database/$CLOUDFLARE_D1_DATABASE_ID/query" \
        -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN" \
        -H "Content-Type: application/json" \
        -d '{"sql": "SELECT name FROM sqlite_master WHERE type='"'"'table'"'"';"}'
   ```

## Token Creation Guide

### Creating Account-Scoped Tokens

1. Go to [Cloudflare Dashboard](https://dash.cloudflare.com/profile/api-tokens)
2. Click "Create Token"
3. Use "Custom token" template
4. Set permissions:
   - `Account` → `Cloudflare D1:Edit`
   - `Account` → `Vectorize:Edit`
5. Set account resources to your specific account
6. Add client IP restrictions (optional but recommended)
7. Set expiration date
8. Create and copy the token immediately

### Token Security Best Practices

- ✅ Use account-scoped tokens with minimal required permissions
- ✅ Set expiration dates (e.g., 1 year maximum)
- ✅ Add IP restrictions when possible
- ✅ Store tokens securely (environment variables, not in code)
- ✅ Rotate tokens regularly
- ❌ Never commit tokens to version control
- ❌ Don't use global tokens unless absolutely necessary

## Environment Configuration

### Required Variables

```bash
# Account-scoped token (recommended)
CLOUDFLARE_API_TOKEN=your_account_scoped_token_here
CLOUDFLARE_ACCOUNT_ID=your_account_id_here
CLOUDFLARE_D1_DATABASE_ID=your_d1_database_id_here
CLOUDFLARE_VECTORIZE_INDEX=mcp-memory-index
```
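
A quick programmatic check that all four variables are actually present in the environment (an illustrative snippet, not a script shipped with the service):

```python
import os

REQUIRED_VARS = (
    "CLOUDFLARE_API_TOKEN",
    "CLOUDFLARE_ACCOUNT_ID",
    "CLOUDFLARE_D1_DATABASE_ID",
    "CLOUDFLARE_VECTORIZE_INDEX",
)

def missing_cloudflare_vars(env=None):
    """Return the names of required variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]

if __name__ == "__main__":
    missing = missing_cloudflare_vars()
    if missing:
        raise SystemExit(f"Missing Cloudflare configuration: {', '.join(missing)}")
    print("All required Cloudflare variables are set")
```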

### Validation Command

```bash
# Test all configuration
python scripts/validation/diagnose_backend_config.py

# Quick token test
curl "https://api.cloudflare.com/client/v4/accounts/$CLOUDFLARE_ACCOUNT_ID/tokens/verify" \
     -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN"
```

## Troubleshooting Checklist

- [ ] Token is account-scoped and has correct permissions
- [ ] Using correct verification endpoint (`/accounts/{id}/tokens/verify`)
- [ ] Environment variables are loaded correctly
- [ ] Account ID matches token scope
- [ ] Token has not expired
- [ ] D1 database ID is correct
- [ ] Vectorize index exists
- [ ] MCP service has been restarted after configuration changes

## Getting Help

If you're still experiencing issues:

1. Run the diagnostic script: `python scripts/validation/diagnose_backend_config.py`
2. Check the [GitHub Issues](https://github.com/doobidoo/mcp-memory-service/issues)
3. Review the main [README.md](../../README.md) for setup instructions
4. Check the [CLAUDE.md](../../CLAUDE.md) for Claude Code specific guidance
```

--------------------------------------------------------------------------------
/scripts/quality/README_PHASE1.md:
--------------------------------------------------------------------------------

```markdown
# Phase 1: Dead Code Removal - Quick Reference

**Issue:** #240 Code Quality Improvement
**Phase:** 1 of 3 (Dead Code Removal)
**Status:** Analysis Complete, Ready for Fix

---

## Quick Summary

**Problem:** 27 dead code issues (2 critical) identified by pyscn
**Root Cause:** Single premature `return False` at line 1358 in `scripts/installation/install.py`
**Impact:** 77 lines of Claude Desktop configuration code never executed during installation
**Fix:** Move unreachable code block outside exception handler
**Estimated Improvement:** +5 to +9 points overall health score (63 → 68-72)

---

## Files Generated

1. **`phase1_dead_code_analysis.md`** - Complete analysis report with detailed breakdown
2. **`fix_dead_code_install.sh`** - Interactive script to guide you through the fix
3. **`README_PHASE1.md`** - This quick reference guide

---

## How to Use

### Option 1: Interactive Script (Recommended)
```bash
# Run from project root directory
bash scripts/quality/fix_dead_code_install.sh
```

The script will:
- Create a backup branch
- Guide you through manual code editing
- Verify syntax after fix
- Run tests (if available)
- Show diff and commit message

### Option 2: Manual Fix

1. **Open file:** `scripts/installation/install.py`
2. **Go to line 1358** (inside except block)
3. **Change:**
   ```python
   except Exception as e:
       print_error(f"Failed to test backups directory: {e}")
       return False
   ```
   **To:**
   ```python
   except Exception as e:
       print_error(f"Failed to test backups directory: {e}")
       print_warning("Continuing with Claude Desktop configuration despite write test failure")
   ```
4. **Cut lines 1360-1436** (Claude Desktop config block)
5. **Paste after the except block** (dedent by 4 spaces)
6. **Save and verify:**
   ```bash
   python -m py_compile scripts/installation/install.py
   ```

---

## Verification Steps

After applying the fix:

1. **Syntax check:**
   ```bash
   python -m py_compile scripts/installation/install.py
   ```

2. **Run tests:**
   ```bash
   pytest tests/unit/test_installation.py -v
   ```

3. **Test installation:**
   ```bash
   python scripts/installation/install.py --storage-backend sqlite_vec
   cat ~/.claude/claude_desktop_config.json | grep mcp-memory-service
   ```

4. **Re-run pyscn:**
   ```bash
   pyscn analyze . --output .pyscn/reports/
   ```

5. **Check new health score** in the HTML report

---

## Expected Results

### Before Fix
- **Health Score:** 63/100 (Grade C)
- **Dead Code Issues:** 27 (2 critical)
- **Dead Code Score:** 70/100
- **Claude Desktop Config:** Never created during installation

### After Fix
- **Health Score:** 68-72/100 (Grade C+)
- **Dead Code Issues:** 0
- **Dead Code Score:** 85-90/100
- **Claude Desktop Config:** Automatically created during installation

---

## Commit Message Template

```
fix: move Claude Desktop configuration out of unreachable code block

Fixes issue #240 Phase 1 - Dead Code Removal

The configure_paths() function had a 'return False' statement inside
an exception handler that made 77 lines of Claude Desktop configuration
code unreachable. This caused installations to skip Claude Desktop setup.

Changes:
- Move Claude Desktop config code (lines 1360-1436) outside except block
- Replace premature 'return False' with warning message
- Ensure config runs regardless of write test result

Impact:
- Resolves all 27 dead code issues identified by pyscn
- Claude Desktop now configured automatically during installation
- Dead code score: 70 → 85-90 (+15 to +20 points)
- Overall health score: 63 → 68-72 (+5 to +9 points)

Testing:
- Syntax validated with py_compile
- Unit tests pass: pytest tests/unit/test_installation.py
- Manual installation tested with sqlite_vec backend
- pyscn re-analysis confirms 0 dead code issues

Co-authored-by: pyscn analysis tool
```

---

## Next Steps After Phase 1

Once Phase 1 is complete and merged:

1. **Run pyscn again** to get updated health score
2. **Celebrate!** 🎉 You've eliminated all dead code issues
3. **Move to Phase 2:** Low-hanging complexity reductions
   - Target complexity score improvement (currently 40/100)
   - Focus on functions with complexity 15-25 (easier wins)
4. **Move to Phase 3:** Duplication removal
   - Target duplication score improvement (currently 30/100)
   - Focus on test duplication (identified in pyscn report)

---

## Troubleshooting

### Syntax errors after fix
- Check indentation (should match `try` statement level)
- Verify no lines were accidentally deleted
- Restore from backup: `cp scripts/installation/install.py.backup scripts/installation/install.py`

### Tests fail after fix
- Review test expectations - they may need updating
- Check if tests mock the file write test
- Tests may be outdated if they expect old behavior

### pyscn still shows dead code
- Verify the `return False` was changed to a warning
- Confirm code block was moved OUTSIDE the except block
- Check that no extra `return` statements were left behind

---

## Reference Documents

- **Full Analysis:** `scripts/quality/phase1_dead_code_analysis.md`
- **pyscn Report:** `.pyscn/reports/analyze_20251123_214224.html`
- **Issue Tracker:** GitHub Issue #240

---

## Contact

Questions? See the detailed analysis in `phase1_dead_code_analysis.md` or refer to Issue #240 on GitHub.

**Time Estimate:** 10-15 minutes for fix + verification
**Difficulty:** Easy (code movement, no logic changes)
**Risk:** Low (code was never executing anyway)

```

--------------------------------------------------------------------------------
/src/mcp_memory_service/ingestion/base.py:
--------------------------------------------------------------------------------

```python
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
Base classes and interfaces for document ingestion.
"""

import logging
from abc import ABC, abstractmethod
from dataclasses import dataclass
from pathlib import Path
from typing import List, Dict, Any, Optional, AsyncGenerator
from datetime import datetime

logger = logging.getLogger(__name__)


@dataclass
class DocumentChunk:
    """
    Represents a chunk of text extracted from a document.
    
    Attributes:
        content: The extracted text content
        metadata: Additional metadata about the chunk
        chunk_index: Position of this chunk in the document
        source_file: Original file path
    """
    content: str
    metadata: Dict[str, Any]
    chunk_index: int
    source_file: Path
    
    def __post_init__(self):
        """Add default metadata after initialization."""
        if 'source' not in self.metadata:
            self.metadata['source'] = str(self.source_file)
        if 'chunk_index' not in self.metadata:
            self.metadata['chunk_index'] = self.chunk_index
        if 'extracted_at' not in self.metadata:
            self.metadata['extracted_at'] = datetime.now().isoformat()


@dataclass
class IngestionResult:
    """
    Result of document ingestion operation.
    
    Attributes:
        success: Whether ingestion was successful
        chunks_processed: Number of chunks created
        chunks_stored: Number of chunks successfully stored
        errors: List of error messages encountered
        source_file: Original file that was processed
        processing_time: Time taken to process in seconds
    """
    success: bool
    chunks_processed: int
    chunks_stored: int
    errors: List[str]
    source_file: Path
    processing_time: float
    
    @property
    def success_rate(self) -> float:
        """Calculate success rate as percentage."""
        if self.chunks_processed == 0:
            return 0.0
        return (self.chunks_stored / self.chunks_processed) * 100


class DocumentLoader(ABC):
    """
    Abstract base class for document loaders.
    
    Each document format (PDF, text, etc.) should implement this interface
    to provide consistent document processing capabilities.
    """
    
    def __init__(self, chunk_size: int = 1000, chunk_overlap: int = 200):
        """
        Initialize document loader.
        
        Args:
            chunk_size: Target size for text chunks in characters
            chunk_overlap: Number of characters to overlap between chunks
        """
        self.chunk_size = chunk_size
        self.chunk_overlap = chunk_overlap
        self.supported_extensions: List[str] = []
    
    @abstractmethod
    def can_handle(self, file_path: Path) -> bool:
        """
        Check if this loader can handle the given file.
        
        Args:
            file_path: Path to the file to check
            
        Returns:
            True if this loader can process the file
        """
        pass
    
    @abstractmethod
    async def extract_chunks(self, file_path: Path, **kwargs) -> AsyncGenerator[DocumentChunk, None]:
        """
        Extract text chunks from a document.
        
        Args:
            file_path: Path to the document file
            **kwargs: Additional options specific to the loader
            
        Yields:
            DocumentChunk objects containing extracted text and metadata
            
        Raises:
            FileNotFoundError: If the file doesn't exist
            ValueError: If the file format is not supported
            Exception: Other processing errors
        """
        pass
    
    async def validate_file(self, file_path: Path) -> None:
        """
        Validate that a file can be processed.
        
        Args:
            file_path: Path to validate
            
        Raises:
            FileNotFoundError: If file doesn't exist
            ValueError: If file is not supported or invalid
        """
        if not file_path.exists():
            raise FileNotFoundError(f"File not found: {file_path}")
        
        if not file_path.is_file():
            raise ValueError(f"Path is not a file: {file_path}")
        
        if not self.can_handle(file_path):
            raise ValueError(f"File format not supported: {file_path.suffix}")
    
    def get_base_metadata(self, file_path: Path) -> Dict[str, Any]:
        """
        Get base metadata common to all document types.
        
        Args:
            file_path: Path to the file
            
        Returns:
            Dictionary of base metadata
        """
        stat = file_path.stat()
        return {
            'source_file': str(file_path),
            'file_name': file_path.name,
            'file_extension': file_path.suffix.lower(),
            'file_size': stat.st_size,
            'modified_time': datetime.fromtimestamp(stat.st_mtime).isoformat(),
            'loader_type': self.__class__.__name__
        }
```
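
The `chunk_size`/`chunk_overlap` parameters above imply a sliding-window split over the document text. A standalone sketch of that windowing (illustrative only; the concrete loaders in this package implement their own chunking):

```python
def chunk_text(text: str, chunk_size: int = 1000, chunk_overlap: int = 200) -> list[str]:
    """Split text into overlapping character windows of at most chunk_size."""
    if chunk_overlap >= chunk_size:
        raise ValueError("chunk_overlap must be smaller than chunk_size")
    # Each window starts chunk_size - chunk_overlap characters after the previous one
    step = chunk_size - chunk_overlap
    return [text[start:start + chunk_size] for start in range(0, len(text), step)]
```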

--------------------------------------------------------------------------------
/scripts/quality/track_pyscn_metrics.sh:
--------------------------------------------------------------------------------

```bash
#!/bin/bash
# scripts/quality/track_pyscn_metrics.sh - Track pyscn metrics over time
#
# Usage: bash scripts/quality/track_pyscn_metrics.sh
#
# Features:
# - Run pyscn analysis
# - Extract metrics to CSV
# - Store in .pyscn/history/ (gitignored)
# - Compare to previous run
# - Alert on regressions (>5 point health score drop)

set -e

# Colors for output
RED='\033[0;31m'
YELLOW='\033[1;33m'
GREEN='\033[0;32m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

echo -e "${BLUE}=== pyscn Metrics Tracking ===${NC}"
echo ""

# Check for pyscn
if ! command -v pyscn &> /dev/null; then
    echo -e "${RED}❌ pyscn not found${NC}"
    echo "Install with: pip install pyscn"
    exit 1
fi

# Create history directory
mkdir -p .pyscn/history

# Generate timestamp
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
DATE_READABLE=$(date +"%Y-%m-%d %H:%M:%S")

# Run pyscn analysis
echo "Running pyscn analysis..."
REPORT_FILE=".pyscn/reports/analyze_${TIMESTAMP}.html"

if pyscn analyze . --output "$REPORT_FILE" > /tmp/pyscn_metrics.log 2>&1; then
    echo -e "${GREEN}✓${NC} Analysis complete"
else
    echo -e "${RED}❌ Analysis failed${NC}"
    cat /tmp/pyscn_metrics.log
    exit 1
fi

# Extract metrics from HTML report
HEALTH_SCORE=$(grep -o 'Health Score: [0-9]*' "$REPORT_FILE" | head -1 | grep -o '[0-9]*' || echo "0")
COMPLEXITY_SCORE=$(grep -o '<span class="score-value">[0-9]*</span>' "$REPORT_FILE" | head -1 | sed 's/<[^>]*>//g' || echo "0")
DEAD_CODE_SCORE=$(grep -o '<span class="score-value">[0-9]*</span>' "$REPORT_FILE" | sed -n '2p' | sed 's/<[^>]*>//g' || echo "0")
DUPLICATION_SCORE=$(grep -o '<span class="score-value">[0-9]*</span>' "$REPORT_FILE" | sed -n '3p' | sed 's/<[^>]*>//g' || echo "0")
COUPLING_SCORE=$(grep -o '<span class="score-value">[0-9]*</span>' "$REPORT_FILE" | sed -n '4p' | sed 's/<[^>]*>//g' || echo "100")
DEPENDENCIES_SCORE=$(grep -o '<span class="score-value">[0-9]*</span>' "$REPORT_FILE" | sed -n '5p' | sed 's/<[^>]*>//g' || echo "0")
ARCHITECTURE_SCORE=$(grep -o '<span class="score-value">[0-9]*</span>' "$REPORT_FILE" | sed -n '6p' | sed 's/<[^>]*>//g' || echo "0")

AVG_COMPLEXITY=$(grep -o '<div class="metric-value">[0-9.]*</div>' "$REPORT_FILE" | sed -n '3p' | sed 's/<[^>]*>//g' || echo "0")
MAX_COMPLEXITY=$(grep -o '<div class="metric-value">[0-9]*</div>' "$REPORT_FILE" | sed -n '3p' | sed 's/<[^>]*>//g' || echo "0")
DUPLICATION_PCT=$(grep -o '<div class="metric-value">[0-9.]*%</div>' "$REPORT_FILE" | head -1 | sed 's/<[^>]*>//g' || echo "0%")
DEAD_CODE_ISSUES=$(grep -o '<div class="metric-value">[0-9]*</div>' "$REPORT_FILE" | sed -n '4p' | sed 's/<[^>]*>//g' || echo "0")

echo ""
echo -e "${BLUE}=== Metrics Extracted ===${NC}"
echo "Health Score: $HEALTH_SCORE/100"
echo "Complexity: $COMPLEXITY_SCORE/100 (Avg: $AVG_COMPLEXITY, Max: $MAX_COMPLEXITY)"
echo "Dead Code: $DEAD_CODE_SCORE/100 ($DEAD_CODE_ISSUES issues)"
echo "Duplication: $DUPLICATION_SCORE/100 ($DUPLICATION_PCT)"
echo "Coupling: $COUPLING_SCORE/100"
echo "Dependencies: $DEPENDENCIES_SCORE/100"
echo "Architecture: $ARCHITECTURE_SCORE/100"
echo ""

# Create CSV file if it doesn't exist
CSV_FILE=".pyscn/history/metrics.csv"
if [ ! -f "$CSV_FILE" ]; then
    echo "timestamp,date,health_score,complexity_score,dead_code_score,duplication_score,coupling_score,dependencies_score,architecture_score,avg_complexity,max_complexity,duplication_pct,dead_code_issues" > "$CSV_FILE"
fi

# Append metrics
echo "$TIMESTAMP,$DATE_READABLE,$HEALTH_SCORE,$COMPLEXITY_SCORE,$DEAD_CODE_SCORE,$DUPLICATION_SCORE,$COUPLING_SCORE,$DEPENDENCIES_SCORE,$ARCHITECTURE_SCORE,$AVG_COMPLEXITY,$MAX_COMPLEXITY,$DUPLICATION_PCT,$DEAD_CODE_ISSUES" >> "$CSV_FILE"

echo -e "${GREEN}✓${NC} Metrics saved to $CSV_FILE"
echo ""

# Compare to previous run
if [ $(wc -l < "$CSV_FILE") -gt 2 ]; then
    PREV_HEALTH=$(tail -2 "$CSV_FILE" | head -1 | cut -d',' -f3)
    PREV_DATE=$(tail -2 "$CSV_FILE" | head -1 | cut -d',' -f2)

    echo -e "${BLUE}=== Comparison to Previous Run ===${NC}"
    echo "Previous: $PREV_HEALTH/100 ($(echo "$PREV_DATE" | cut -d' ' -f1))"
    echo "Current:  $HEALTH_SCORE/100 ($(date +%Y-%m-%d))"

    DELTA=$((HEALTH_SCORE - PREV_HEALTH))

    if [ $DELTA -gt 0 ]; then
        echo -e "${GREEN}✅ Improvement: +$DELTA points${NC}"
    elif [ $DELTA -lt 0 ]; then
        ABS_DELTA=${DELTA#-}
        echo -e "${RED}⚠️  Regression: -$ABS_DELTA points${NC}"

        # Alert on significant regression (>5 points)
        if [ $ABS_DELTA -gt 5 ]; then
            echo ""
            echo -e "${RED}🚨 ALERT: Significant quality regression detected!${NC}"
            echo "Health score dropped by $ABS_DELTA points since last check."
            echo ""
            echo "Recommended actions:"
            echo "  1. Review recent changes: git log --since='$PREV_DATE'"
            echo "  2. Compare reports: open $REPORT_FILE"
            echo "  3. Create GitHub issue to track regression"
        fi
    else
        echo -e "${BLUE}➡️  No change${NC}"
    fi
else
    echo -e "${BLUE}ℹ️  No previous metrics for comparison (first run)${NC}"
fi

echo ""
echo -e "${BLUE}=== Trend Summary ===${NC}"
echo "Total measurements: $(tail -n +2 "$CSV_FILE" | wc -l)"
echo "Average health score: $(awk -F',' 'NR>1 {sum+=$3; count++} END {if(count>0) print int(sum/count); else print 0}' "$CSV_FILE")/100"
echo "Highest: $(awk -F',' 'NR>1 {if($3>max || max=="") max=$3} END {print max}' "$CSV_FILE")/100"
echo "Lowest: $(awk -F',' 'NR>1 {if($3<min || min=="") min=$3} END {print min}' "$CSV_FILE")/100"
echo ""

echo -e "${GREEN}✓${NC} Tracking complete"
exit 0

```
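
The trend summary at the end of the script can also be computed from the history CSV in Python, which is easier to extend than the awk one-liners. A hypothetical equivalent of that summary step:

```python
import csv
from io import StringIO

def trend_summary(csv_text: str) -> dict:
    """Summarize the health_score column across all rows of the metrics CSV."""
    rows = list(csv.DictReader(StringIO(csv_text)))
    scores = [int(r["health_score"]) for r in rows]
    if not scores:
        return {"count": 0}
    return {
        "count": len(scores),
        "average": sum(scores) // len(scores),  # integer average, matching the awk version
        "highest": max(scores),
        "lowest": min(scores),
    }
```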

--------------------------------------------------------------------------------
/src/mcp_memory_service/web/api/backup.py:
--------------------------------------------------------------------------------

```python
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
Backup management endpoints for MCP Memory Service.

Provides status monitoring, manual backup triggering, and backup listing.
"""

from typing import Dict, Any, List, Optional, TYPE_CHECKING
from datetime import datetime, timezone

from fastapi import APIRouter, HTTPException, Depends
from pydantic import BaseModel

from ...config import OAUTH_ENABLED
from ...backup.scheduler import get_backup_service, get_backup_scheduler

# OAuth authentication imports (conditional)
if OAUTH_ENABLED or TYPE_CHECKING:
    from ..oauth.middleware import require_read_access, require_write_access, AuthenticationResult
else:
    # Provide type stubs when OAuth is disabled
    AuthenticationResult = None
    require_read_access = None
    require_write_access = None

router = APIRouter()


class BackupStatusResponse(BaseModel):
    """Backup status response model."""
    enabled: bool
    interval: str
    retention_days: int
    max_count: int
    backup_count: int
    total_size_bytes: int
    last_backup_time: Optional[float]
    time_since_last_seconds: Optional[float]
    next_backup_at: Optional[str]
    scheduler_running: bool


class BackupCreateResponse(BaseModel):
    """Backup creation response model."""
    success: bool
    filename: Optional[str] = None
    size_bytes: Optional[int] = None
    created_at: Optional[str] = None
    duration_seconds: Optional[float] = None
    error: Optional[str] = None


class BackupInfo(BaseModel):
    """Backup information model."""
    filename: str
    size_bytes: int
    created_at: str
    age_days: int


class BackupListResponse(BaseModel):
    """Backup list response model."""
    backups: List[BackupInfo]
    total_count: int
    total_size_bytes: int


@router.get("/backup/status", response_model=BackupStatusResponse)
async def get_backup_status(
    user: AuthenticationResult = Depends(require_read_access) if OAUTH_ENABLED else None
):
    """
    Get current backup service status.

    Returns backup configuration, last backup time, and next scheduled backup.
    """
    try:
        scheduler = get_backup_scheduler()
        status = scheduler.get_status()

        return BackupStatusResponse(
            enabled=status.get('enabled', False),
            interval=status.get('interval', 'daily'),
            retention_days=status.get('retention_days', 7),
            max_count=status.get('max_count', 10),
            backup_count=status.get('backup_count', 0),
            total_size_bytes=status.get('total_size_bytes', 0),
            last_backup_time=status.get('last_backup_time'),
            time_since_last_seconds=status.get('time_since_last_seconds'),
            next_backup_at=status.get('next_backup_at'),
            scheduler_running=status.get('scheduler_running', False)
        )

    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Failed to get backup status: {str(e)}")


@router.post("/backup/now", response_model=BackupCreateResponse)
async def trigger_backup(
    user: AuthenticationResult = Depends(require_write_access) if OAUTH_ENABLED else None
):
    """
    Manually trigger an immediate backup.

    Creates a new backup of the database regardless of the schedule.
    """
    try:
        backup_service = get_backup_service()
        result = await backup_service.create_backup(description="Manual backup from dashboard")

        if result.get('success'):
            return BackupCreateResponse(
                success=True,
                filename=result.get('filename'),
                size_bytes=result.get('size_bytes'),
                created_at=result.get('created_at'),
                duration_seconds=result.get('duration_seconds')
            )
        else:
            return BackupCreateResponse(
                success=False,
                error=result.get('error', 'Unknown error')
            )

    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Failed to create backup: {str(e)}")


@router.get("/backup/list", response_model=BackupListResponse)
async def list_backups(
    user: AuthenticationResult = Depends(require_read_access) if OAUTH_ENABLED else None
):
    """
    List all available backups.

    Returns list of backups sorted by date (newest first).
    """
    try:
        backup_service = get_backup_service()
        backups = backup_service.list_backups()

        backup_infos = [
            BackupInfo(
                filename=b['filename'],
                size_bytes=b['size_bytes'],
                created_at=b['created_at'],
                age_days=b['age_days']
            )
            for b in backups
        ]

        total_size = sum(b['size_bytes'] for b in backups)

        return BackupListResponse(
            backups=backup_infos,
            total_count=len(backup_infos),
            total_size_bytes=total_size
        )

    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Failed to list backups: {str(e)}")

```

--------------------------------------------------------------------------------
/scripts/utils/smithery_wrapper.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python3
"""
Smithery wrapper for MCP Memory Service
This wrapper is specifically designed for Smithery installations.
It doesn't rely on UV and works with the installed package.
"""
import os
import sys
import subprocess
import traceback
import importlib.util

def print_info(text):
    """Print formatted info text."""
    print(f"[INFO] {text}", file=sys.stderr, flush=True)

def print_error(text):
    """Print formatted error text."""
    print(f"[ERROR] {text}", file=sys.stderr, flush=True)

def print_success(text):
    """Print formatted success text."""
    print(f"[SUCCESS] {text}", file=sys.stderr, flush=True)

def print_warning(text):
    """Print formatted warning text."""
    print(f"[WARNING] {text}", file=sys.stderr, flush=True)

def setup_environment():
    """Set up the environment for proper MCP Memory Service operation."""
    # Set environment variables for better cross-platform compatibility
    os.environ.setdefault("PYTORCH_ENABLE_MPS_FALLBACK", "1")
    
    # For systems with limited GPU memory, use smaller chunks
    os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")
    
    # Ensure proper Python path (this script lives in scripts/utils/, so the
    # repository root -- and its src/ directory -- is two levels up)
    script_dir = os.path.dirname(os.path.abspath(__file__))
    repo_root = os.path.dirname(os.path.dirname(script_dir))
    src_dir = os.path.join(repo_root, "src")
    if os.path.exists(src_dir) and src_dir not in sys.path:
        sys.path.insert(0, src_dir)

def check_dependencies():
    """Check if required dependencies are available."""
    required_packages = ["mcp", "chromadb", "sentence_transformers"]
    missing_packages = []
    
    for package in required_packages:
        try:
            __import__(package)
            print_info(f"✓ {package} is available")
        except ImportError:
            missing_packages.append(package)
            print_warning(f"✗ {package} is missing")
    
    return missing_packages

def install_missing_packages(packages):
    """Try to install missing packages."""
    if not packages:
        return True
    
    print_warning("Missing packages detected. For Smithery installations, dependencies should be pre-installed.")
    print_info("Attempting to install missing packages with --break-system-packages flag...")
    
    for package in packages:
        try:
            # Try user installation first
            subprocess.check_call([sys.executable, '-m', 'pip', 'install', '--user', package])
            print_success(f"Successfully installed {package}")
        except subprocess.SubprocessError:
            try:
                # Try with --break-system-packages for externally managed environments
                subprocess.check_call([sys.executable, '-m', 'pip', 'install', '--break-system-packages', package])
                print_success(f"Successfully installed {package}")
            except subprocess.SubprocessError as e:
                print_error(f"Failed to install {package}: {e}")
                print_warning("Continuing anyway - dependencies might be available in different location")
                continue
    
    return True

def run_memory_service():
    """Run the memory service."""
    print_info("Starting MCP Memory Service...")
    
    # Display environment configuration
    if "MCP_MEMORY_CHROMA_PATH" in os.environ:
        print_info(f"Using ChromaDB path: {os.environ['MCP_MEMORY_CHROMA_PATH']}")
    
    if "MCP_MEMORY_BACKUPS_PATH" in os.environ:
        print_info(f"Using backups path: {os.environ['MCP_MEMORY_BACKUPS_PATH']}")
    
    try:
        # Try to import and run the server directly
        from mcp_memory_service.server import main
        print_success("Successfully imported memory service")
        main()
    except ImportError as e:
        print_warning(f"Failed to import from installed package: {e}")
        
        # Fallback to source directory import (repo root is two levels up from scripts/utils/)
        script_dir = os.path.dirname(os.path.abspath(__file__))
        repo_root = os.path.dirname(os.path.dirname(script_dir))
        src_dir = os.path.join(repo_root, "src")
        
        if os.path.exists(src_dir):
            print_info("Trying to import from source directory...")
            sys.path.insert(0, src_dir)
            try:
                from mcp_memory_service.server import main
                print_success("Successfully imported from source directory")
                main()
            except ImportError as import_error:
                print_error(f"Failed to import from source directory: {import_error}")
                sys.exit(1)
        else:
            print_error("Could not find memory service source code")
            sys.exit(1)
    except Exception as e:
        print_error(f"Error running memory service: {e}")
        traceback.print_exc(file=sys.stderr)
        sys.exit(1)

def main():
    """Main entry point for Smithery wrapper."""
    print_info("MCP Memory Service - Smithery Wrapper")
    
    try:
        # Set up environment
        setup_environment()
        
        # Check dependencies (informational only)
        missing_packages = check_dependencies()
        
        if missing_packages:
            print_warning(f"Some packages appear missing: {', '.join(missing_packages)}")
            print_info("Attempting to proceed anyway - packages might be available in different location")
        
        # Run the memory service
        run_memory_service()
        
    except KeyboardInterrupt:
        print_info("Shutting down gracefully...")
        sys.exit(0)
    except Exception as e:
        print_error(f"Unhandled exception: {e}")
        traceback.print_exc(file=sys.stderr)
        sys.exit(1)

if __name__ == "__main__":
    main()
```

--------------------------------------------------------------------------------
/docs/development/dashboard-workflow.md:
--------------------------------------------------------------------------------

```markdown
# Dashboard Development Workflow

This guide documents the essential workflow for developing the interactive dashboard UI to prevent repetitive trial-and-error cycles.

## Critical Workflow Requirements

### 1. Server Restart After Static File Changes ⚠️

**Problem**: FastAPI/uvicorn caches static files (CSS, JS, HTML) in memory. Changes to these files won't appear in the browser until the server is restarted.

**Symptoms of Forgetting**:
- Modified JavaScript still shows old console.log statements
- CSS changes don't appear in browser
- File modification time is recent but browser serves old version

**Solution**:
```bash
# Restart HTTP server
systemctl --user restart mcp-memory-http.service

# Then hard refresh browser to clear cache
# Ctrl+Shift+R (Linux/Windows) or Cmd+Shift+R (macOS)
```

### 2. Automated Hooks (Claude Code) ✅

To eliminate manual restarts, configure automation hooks in `.claude/settings.local.json`:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matchers": [
          "Write(file_path:**/web/static/*.css)",
          "Edit(file_path:**/web/static/*.css)",
          "Write(file_path:**/web/static/*.js)",
          "Edit(file_path:**/web/static/*.js)",
          "Write(file_path:**/web/static/*.html)",
          "Edit(file_path:**/web/static/*.html)"
        ],
        "hooks": [
          {
            "type": "command",
            "command": "bash",
            "args": [
              "-c",
              "systemctl --user restart mcp-memory-http.service && echo '\n⚠️  REMINDER: Hard refresh browser (Ctrl+Shift+R) to clear cache!'"
            ]
          }
        ]
      },
      {
        "matchers": [
          "Write(file_path:**/web/static/*.css)",
          "Edit(file_path:**/web/static/*.css)"
        ],
        "hooks": [
          {
            "type": "command",
            "command": "bash",
            "args": [
              "-c",
              "if grep -E 'background.*:.*white|background.*:.*#fff|color.*:.*white|color.*:.*#fff' /home/hkr/repositories/mcp-memory-service/src/mcp_memory_service/web/static/style.css | grep -v 'dark-mode'; then echo '\n⚠️  WARNING: Found hardcoded light colors in CSS. Check if body.dark-mode overrides are needed!'; fi"
            ]
          }
        ]
      }
    ]
  }
}
```

**What This Automates**:
- ✅ Auto-restart HTTP server when CSS/JS/HTML files are modified
- ✅ Display reminder to hard refresh browser
- ✅ Check for hardcoded light colors that need dark mode overrides
- ✅ Prevent the exact issue we had with chunk backgrounds

### 3. Dark Mode Compatibility Checklist

When adding new UI components, always verify dark mode compatibility:

**Common Issues**:
- Hardcoded `background: white` or `color: white`
- Hardcoded hex colors like `#fff` or `#000`
- Missing `body.dark-mode` overrides for new elements

**Example Fix** (from PR #164):
```css
/* BAD: Hardcoded light background */
.chunk-content {
    background: white;
    color: #333;
}

/* GOOD: Dark mode override */
body.dark-mode .chunk-content {
    background: #111827 !important;
    color: #d1d5db !important;
}
```

**Automation Hook**: The CSS hook automatically scans for hardcoded colors and warns if dark mode overrides might be needed.

### 4. Browser Cache Management

**Cache-Busting Techniques**:

1. **Hard Refresh**: Ctrl+Shift+R (Linux/Windows) or Cmd+Shift+R (macOS)
2. **URL Parameter**: Add `?nocache=timestamp` to force reload
3. **DevTools**: Keep DevTools open with "Disable cache" enabled during development

**Why This Matters**: Even after server restart, browsers aggressively cache static files. You must force a cache clear to see changes.
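
The URL-parameter technique can also be automated server-side. A minimal sketch (the `static_url` helper and `STATIC_DIR` path are illustrative, not part of the service's actual API), assuming FastAPI-rendered templates build their asset links:

```python
import os
import time

STATIC_DIR = "src/mcp_memory_service/web/static"  # assumed static root

def static_url(filename: str) -> str:
    """Build a /static URL with the file's mtime as a version parameter,
    so browsers refetch the asset whenever it changes on disk."""
    path = os.path.join(STATIC_DIR, filename)
    version = int(os.path.getmtime(path)) if os.path.exists(path) else int(time.time())
    return f"/static/{filename}?v={version}"
```

Because the version changes with every file modification, a hard refresh is no longer strictly required for assets linked this way.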

## Development Checklist

Before testing dashboard changes:

- [ ] Modified CSS/JS/HTML files
- [ ] Restarted HTTP server (`systemctl --user restart mcp-memory-http.service`)
- [ ] Hard refreshed browser (Ctrl+Shift+R)
- [ ] Checked console for JavaScript errors
- [ ] Verified dark mode compatibility (if CSS changes)
- [ ] Tested both light and dark mode

## Performance Benchmarks

Dashboard performance targets (validated v7.2.2):

| Component | Target | Typical |
|-----------|--------|---------|
| Page Load | <2s | ~25ms |
| Memory Operations | <1s | ~26ms |
| Tag Search | <500ms | <100ms |

If performance degrades:
1. Check browser DevTools Network tab for slow requests
2. Verify server logs for backend delays
3. Profile JavaScript execution in DevTools

## Testing with browser-mcp

For UI investigation and debugging:

```bash
# Navigate to dashboard
mcp__browsermcp__browser_navigate http://127.0.0.1:8888/

# Take screenshot
mcp__browsermcp__browser_screenshot

# Get console logs
mcp__browsermcp__browser_get_console_logs

# Click elements (requires ref from snapshot)
mcp__browsermcp__browser_click
```

## Common Pitfalls

1. **Forgetting server restart** → Use automation hooks!
2. **Missing browser cache clear** → Always hard refresh
3. **Dark mode not tested** → Check both themes for every UI change
4. **Console errors ignored** → Always check browser console
5. **Mobile responsiveness untested** → Test at the 768px and 1024px breakpoints

## Related Documentation

- **Interactive Dashboard**: See `CLAUDE.md` section "Interactive Dashboard (v7.2.2+)"
- **Performance**: `docs/implementation/performance.md`
- **API Endpoints**: `CLAUDE.md` section "Key Endpoints"
- **Troubleshooting**: Wiki troubleshooting guide

---

**Note**: These automation hooks eliminate 95% of repetitive trial-and-error during dashboard development. Always verify hooks are configured in your local `.claude/settings.local.json`.

```

--------------------------------------------------------------------------------
/archive/docs-root-cleanup-2025-08-23/AWESOME_LIST_SUBMISSION.md:
--------------------------------------------------------------------------------

```markdown
# Awesome List Submission Guide

## MCP Memory Service - Universal Memory Service for AI Applications

> **Ready-to-use submission templates for awesome lists and community directories**

This guide provides optimized submission content for promoting MCP Memory Service across various awesome lists and community platforms. Each template is tailored for specific communities while highlighting our unique value propositions.

### One-Line Description
**[MCP Memory Service](https://github.com/doobidoo/mcp-memory-service)** - Universal MCP memory service with semantic search, multi-client support, and autonomous consolidation for Claude Desktop, VS Code, and 13+ AI applications.

### Detailed Description
A production-ready Model Context Protocol server that provides intelligent semantic memory, persistent storage, and autonomous memory consolidation for AI assistants and development environments. Features universal compatibility with 13+ AI clients including Claude Desktop, VS Code, Cursor, Continue, WindSurf, LM Studio, and Zed.

### Key Features for Awesome Lists

#### For Awesome MCP:
- ✅ **Full MCP Protocol Compliance** - Complete implementation with resources, prompts, and tools
- 🧠 **Semantic Memory Search** - Vector database with sentence transformers for intelligent retrieval
- 🔄 **Autonomous Memory Consolidation** - Dream-inspired system for automatic memory organization
- 🌐 **Multi-Client Support** - Works with 13+ AI applications simultaneously
- 🗄️ **Multiple Storage Backends** - SQLite-vec (default) and ChromaDB support
- 🚀 **Production Ready** - Deployed at scale with Docker, HTTPS, and service installation

#### For Awesome Claude:
- 🎯 **Native Claude Desktop Integration** - Seamless MCP server configuration
- 💬 **Conversational Memory Commands** - Optional Claude Code commands for direct memory operations
- 🔗 **Multi-Client Coordination** - Use memories across Claude Desktop and other AI tools
- 📊 **Advanced MCP Features** - URI-based resources, guided prompts, progress tracking
- ⚡ **High Performance** - 10x faster startup with SQLite-vec backend
- 🛠️ **Developer Friendly** - Comprehensive documentation and troubleshooting

#### For Awesome Developer Tools:
- 🛠️ **Universal AI Tool Integration** - Works with VS Code, Continue, Cursor, and other IDEs
- 📝 **Persistent Development Context** - Remember project decisions, architectural choices, and solutions
- 🔍 **Intelligent Search** - Natural language queries for finding past development insights
- 🏗️ **Cross-Project Memory** - Share knowledge across different codebases and teams
- 📈 **Productivity Enhancement** - Reduces context switching and information re-discovery
- 🐳 **Easy Deployment** - Docker, pip, or service installation options

### Technical Specifications
- **Language**: Python 3.10+
- **Protocol**: Model Context Protocol (MCP)
- **Storage**: SQLite-vec, ChromaDB
- **ML/AI**: Sentence Transformers, PyTorch
- **API**: FastAPI, Server-Sent Events
- **Platforms**: Windows, macOS, Linux (including Apple Silicon)
- **Deployment**: Local, Remote Server, Docker, System Service

### Links for Submission
- **Repository**: https://github.com/doobidoo/mcp-memory-service
- **Documentation**: Complete README with installation guides
- **Demo**: Production deployment on glama.ai
- **Package**: Available via pip, Docker Hub, and Smithery
- **License**: Apache 2.0

### Submission Categories

#### Awesome MCP
```markdown
- [MCP Memory Service](https://github.com/doobidoo/mcp-memory-service) - Universal memory service with semantic search, autonomous consolidation, and 13+ client support. Features production deployment, multi-client coordination, and dream-inspired memory organization.
```

#### Awesome Claude  
```markdown
- [MCP Memory Service](https://github.com/doobidoo/mcp-memory-service) - Intelligent semantic memory service for Claude Desktop with multi-client support, autonomous consolidation, and optional conversational commands. Production-ready with Docker and service deployment.
```

#### Awesome Developer Tools
```markdown
- [MCP Memory Service](https://github.com/doobidoo/mcp-memory-service) - Universal memory service for AI-powered development workflows. Integrates with VS Code, Continue, Cursor, and 13+ AI tools to provide persistent context and intelligent search across projects.
```

#### Awesome AI Tools
```markdown
- [MCP Memory Service](https://github.com/doobidoo/mcp-memory-service) - Production-ready memory service for AI assistants with semantic search, vector storage, and autonomous consolidation. Works with Claude Desktop, LM Studio, and 13+ AI applications.
```

### Community Engagement Strategy

1. **Submit to Awesome Lists** (in order of priority):
   - Awesome MCP (if one exists)
   - Awesome Claude 
   - Awesome AI Tools
   - Awesome Developer Tools
   - Awesome FastAPI
   - Awesome Python

2. **Platform Submissions**:
   - Submit to Smithery (already done)
   - Submit to MseeP (already done)  
   - Consider submission to Product Hunt
   - Submit to relevant Reddit communities (r/MachineLearning, r/Python, r/programming)

3. **Documentation & Tutorials**:
   - Create video walkthrough
   - Write blog post about MCP integration
   - Submit to dev.to or Medium

### SEO-Optimized Tags for GitHub Topics
```
model-context-protocol, mcp-server, claude-desktop, semantic-memory, 
vector-database, ai-memory, sqlite-vec, fastapi, multi-client, 
cross-platform, docker, semantic-search, memory-consolidation, 
ai-productivity, vs-code, cursor, continue, developer-tools, 
production-ready, autonomous-memory
```
```

--------------------------------------------------------------------------------
/scripts/testing/test_cloudflare_backend.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python3
"""
Test script for Cloudflare backend integration.
Run this after setting up your Cloudflare resources.
"""

import os
import asyncio
import logging
from datetime import datetime
from src.mcp_memory_service.storage.cloudflare import CloudflareStorage
from src.mcp_memory_service.models.memory import Memory

# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

async def test_cloudflare_backend():
    """Test all Cloudflare backend functionality."""
    
    # Check environment variables
    required_vars = [
        'CLOUDFLARE_API_TOKEN',
        'CLOUDFLARE_ACCOUNT_ID', 
        'CLOUDFLARE_VECTORIZE_INDEX',
        'CLOUDFLARE_D1_DATABASE_ID'
    ]
    
    missing_vars = [var for var in required_vars if not os.getenv(var)]
    if missing_vars:
        logger.error(f"Missing environment variables: {missing_vars}")
        return False
    
    try:
        # Initialize storage
        logger.info("🔧 Initializing Cloudflare storage...")
        storage = CloudflareStorage(
            api_token=os.getenv('CLOUDFLARE_API_TOKEN'),
            account_id=os.getenv('CLOUDFLARE_ACCOUNT_ID'),
            vectorize_index=os.getenv('CLOUDFLARE_VECTORIZE_INDEX'),
            d1_database_id=os.getenv('CLOUDFLARE_D1_DATABASE_ID'),
            r2_bucket=os.getenv('CLOUDFLARE_R2_BUCKET')  # Optional
        )
        
        # Test initialization
        logger.info("🚀 Testing storage initialization...")
        await storage.initialize()
        logger.info("✅ Storage initialized successfully")
        
        # Test storing a memory
        logger.info("💾 Testing memory storage...")
        test_memory = Memory(
            content="This is a test memory for Cloudflare backend integration.",
            tags=["test", "cloudflare", "integration"],
            memory_type="test",
            metadata={"test_run": datetime.now().isoformat()}
        )
        
        success, message = await storage.store(test_memory)
        if success:
            logger.info(f"✅ Memory stored: {message}")
        else:
            logger.error(f"❌ Failed to store memory: {message}")
            return False
        
        # Test retrieval
        logger.info("🔍 Testing memory retrieval...")
        results = await storage.retrieve("test memory cloudflare", n_results=5)
        if results:
            logger.info(f"✅ Retrieved {len(results)} memories")
            for i, result in enumerate(results):
                logger.info(f"  {i+1}. Score: {result.similarity_score:.3f} - {result.memory.content[:50]}...")
        else:
            logger.warning("⚠️  No memories retrieved")
        
        # Test tag search
        logger.info("🏷️  Testing tag search...")
        tag_results = await storage.search_by_tag(["test"])
        if tag_results:
            logger.info(f"✅ Found {len(tag_results)} memories with 'test' tag")
        else:
            logger.warning("⚠️  No memories found with 'test' tag")
        
        # Test statistics
        logger.info("📊 Testing statistics...")
        stats = await storage.get_stats()
        logger.info(f"✅ Stats: {stats['total_memories']} memories, {stats['status']} status")
        
        # Test cleanup (optional - uncomment to clean up test data)
        # logger.info("🧹 Cleaning up test data...")
        # deleted_count, delete_message = await storage.delete_by_tag("test")
        # logger.info(f"✅ Cleaned up: {delete_message}")
        
        logger.info("🎉 All tests passed! Cloudflare backend is working correctly.")
        return True
        
    except Exception as e:
        logger.error(f"❌ Test failed: {e}")
        return False
    
    finally:
        if 'storage' in locals():
            await storage.close()
            logger.info("🔒 Storage connection closed")

def print_setup_instructions():
    """Print setup instructions if environment is not configured."""
    print("\n" + "="*60)
    print("🔧 CLOUDFLARE BACKEND SETUP REQUIRED")
    print("="*60)
    print()
    print("Please complete these steps:")
    print()
    print("1. Create API token with these permissions:")
    print("   - Vectorize:Edit")
    print("   - D1:Edit") 
    print("   - Workers AI:Edit")
    print("   - R2:Edit (optional)")
    print()
    print("2. Create Cloudflare resources:")
    print("   wrangler vectorize create mcp-memory-index --dimensions=768 --metric=cosine")
    print("   wrangler d1 create mcp-memory-db")
    print("   wrangler r2 bucket create mcp-memory-content  # optional")
    print()
    print("3. Set environment variables:")
    print("   export CLOUDFLARE_API_TOKEN='your-token'")
    print("   export CLOUDFLARE_ACCOUNT_ID='be0e35a26715043ef8df90253268c33f'")
    print("   export CLOUDFLARE_VECTORIZE_INDEX='mcp-memory-index'") 
    print("   export CLOUDFLARE_D1_DATABASE_ID='your-d1-id'")
    print("   export CLOUDFLARE_R2_BUCKET='mcp-memory-content'  # optional")
    print()
    print("4. Run this test again:")
    print("   python test_cloudflare_backend.py")
    print()
    print("See docs/cloudflare-setup.md for detailed instructions.")
    print("="*60)

if __name__ == "__main__":
    # Check if basic environment is set up
    if not all(os.getenv(var) for var in ['CLOUDFLARE_API_TOKEN', 'CLOUDFLARE_ACCOUNT_ID']):
        print_setup_instructions()
    else:
        success = asyncio.run(test_cloudflare_backend())
        if success:
            print("\n🎉 Cloudflare backend is ready for production use!")
        else:
            print("\n❌ Tests failed. Check the logs above for details.")
```

--------------------------------------------------------------------------------
/scripts/testing/test_docker_functionality.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python3
"""
Test script to verify Docker container functionality after cleanup.
Tests basic memory operations and timestamp handling.
"""

import subprocess
import time
import json
import sys
from pathlib import Path

def run_command(cmd, capture_output=True, timeout=30):
    """Run a command and return the result."""
    try:
        result = subprocess.run(
            cmd, 
            shell=True, 
            capture_output=capture_output, 
            text=True, 
            timeout=timeout
        )
        return result.returncode, result.stdout, result.stderr
    except subprocess.TimeoutExpired:
        return -1, "", "Command timed out"

def test_docker_build():
    """Test Docker image build."""
    print("🔨 Testing Docker build...")
    
    # Build the Docker image
    cmd = "docker build -f tools/docker/Dockerfile -t mcp-memory-service:test ."
    returncode, stdout, stderr = run_command(cmd, timeout=300)
    
    if returncode != 0:
        print(f"❌ Docker build failed:")
        print(f"STDOUT: {stdout}")
        print(f"STDERR: {stderr}")
        return False
    
    print("✅ Docker build successful")
    return True

def test_docker_import():
    """Test that the server can import without errors."""
    print("🧪 Testing Python imports in container...")
    
    # Test import using python directly instead of the entrypoint
    cmd = '''docker run --rm --entrypoint python mcp-memory-service:test -c "
import sys
sys.path.append('/app/src')
from mcp_memory_service.server import main
from mcp_memory_service.models.memory import Memory
from datetime import datetime
print('✅ All imports successful')
print('✅ Memory model available')
print('✅ Server main function available')
"'''
    
    returncode, stdout, stderr = run_command(cmd, timeout=60)
    
    if returncode != 0:
        print(f"❌ Import test failed:")
        print(f"STDOUT: {stdout}")
        print(f"STDERR: {stderr}")
        return False
    
    print(stdout.strip())
    return True

def test_memory_model():
    """Test Memory model and timestamp functionality."""
    print("📝 Testing Memory model and timestamps...")
    
    cmd = '''docker run --rm --entrypoint python mcp-memory-service:test -c "
import sys
sys.path.append('/app/src')
from mcp_memory_service.models.memory import Memory
from datetime import datetime
import json

# Test Memory creation
memory = Memory(
    content='Test memory content',
    content_hash='testhash123',
    tags=['test', 'docker'],
    metadata={'source': 'test_script'}
)

print(f'✅ Memory created successfully')
print(f'✅ Content: {memory.content}')
print(f'✅ Tags: {memory.tags}')
print(f'✅ Timestamp type: {type(memory.timestamp).__name__}')
print(f'✅ Timestamp value: {memory.timestamp}')

# Test that timestamp is already datetime (no conversion needed)
if isinstance(memory.timestamp, datetime):
    print('✅ Timestamp is correctly a datetime object')
else:
    print('❌ Timestamp is not a datetime object')
    sys.exit(1)
"'''
    
    returncode, stdout, stderr = run_command(cmd, timeout=60)
    
    if returncode != 0:
        print(f"❌ Memory model test failed:")
        print(f"STDOUT: {stdout}")
        print(f"STDERR: {stderr}")
        return False
    
    print(stdout.strip())
    return True

def test_server_startup():
    """Test that server can start without crashing immediately."""
    print("🚀 Testing server startup...")
    
    # Run the server briefly; a timeout (rather than an immediate crash) counts as success
    cmd = '''timeout 5s docker run --rm mcp-memory-service:test 2>/dev/null || echo "✅ Server startup test completed (timeout expected)"'''
    
    returncode, stdout, stderr = run_command(cmd, timeout=15)
    
    # We expect a timeout or success message
    if "Server started successfully" in stdout or "Server startup test completed" in stdout:
        print("✅ Server can start without immediate crashes")
        return True
    else:
        print(f"❌ Server startup test unclear:")
        print(f"STDOUT: {stdout}")
        print(f"STDERR: {stderr}")
        return False

def cleanup_docker():
    """Clean up test Docker images."""
    print("🧹 Cleaning up test images...")
    run_command("docker rmi mcp-memory-service:test", capture_output=False)

def main():
    """Run all tests."""
    print("🔍 DOCKER FUNCTIONALITY TEST SUITE")
    print("=" * 50)
    
    tests = [
        ("Docker Build", test_docker_build),
        ("Python Imports", test_docker_import),
        ("Memory Model", test_memory_model),
        ("Server Startup", test_server_startup),
    ]
    
    passed = 0
    failed = 0
    
    for test_name, test_func in tests:
        print(f"\n📋 Running: {test_name}")
        print("-" * 30)
        
        try:
            if test_func():
                passed += 1
                print(f"✅ {test_name} PASSED")
            else:
                failed += 1
                print(f"❌ {test_name} FAILED")
        except Exception as e:
            failed += 1
            print(f"❌ {test_name} ERROR: {e}")
    
    print("\n" + "=" * 50)
    print(f"📊 TEST SUMMARY:")
    print(f"✅ Passed: {passed}")
    print(f"❌ Failed: {failed}")
    print(f"📈 Success Rate: {passed/(passed+failed)*100:.1f}%")
    
    if failed == 0:
        print("\n🎉 ALL TESTS PASSED! Docker functionality is working correctly.")
        cleanup_docker()
        return 0
    else:
        print(f"\n⚠️  {failed} test(s) failed. Please review the issues above.")
        cleanup_docker()
        return 1

if __name__ == "__main__":
    sys.exit(main())
```

--------------------------------------------------------------------------------
/archive/docs-root-cleanup-2025-08-23/CLOUDFLARE_IMPLEMENTATION.md:
--------------------------------------------------------------------------------

```markdown
# Cloudflare Native Integration Implementation Log

## Project Overview
Adding Cloudflare as a native backend option to MCP Memory Service while maintaining full compatibility with existing deployments.

## Implementation Timeline
- **Start Date:** 2025-08-16
- **Target Completion:** 4 weeks
- **Current Phase:** Phase 1 - Foundation Setup

## Phase 1: Core Backend Implementation (Weeks 1-2)

### Week 1 Progress

#### Day 1 (2025-08-16)
- ✅ Created implementation tracking infrastructure
- ✅ Analyzed current MCP Memory Service architecture
- ✅ Researched Cloudflare Vectorize, D1, and R2 APIs
- ✅ Designed overall architecture approach
- ✅ Set up feature branch and task files
- ✅ **COMPLETED:** Core CloudflareStorage backend implementation

#### Foundation Setup Tasks ✅
- ✅ Create feature branch: `feature/cloudflare-native-backend`
- ✅ Set up task tracking files in `tasks/` directory
- ✅ Store initial plan in memory service
- ✅ Document Cloudflare API requirements and limits

#### CloudflareStorage Backend Tasks ✅
- ✅ Implement base CloudflareStorage class extending MemoryStorage
- ✅ Add Vectorize vector operations (store, query, delete)
- ✅ Implement D1 metadata operations (tags, timestamps, content hashes)
- ✅ Add R2 content storage for large objects (>1MB)
- ✅ Implement comprehensive error handling and retry logic
- ✅ Add logging and performance metrics
- ✅ Update config.py for Cloudflare backend support
- ✅ Update server.py for Cloudflare backend initialization
- ✅ Create comprehensive unit tests

#### Configuration Updates ✅
- ✅ Add `cloudflare` to SUPPORTED_BACKENDS
- ✅ Implement Cloudflare-specific environment variables
- ✅ Add Workers AI embedding model configuration
- ✅ Update validation logic for Cloudflare backend
- ✅ Add server initialization code

#### Implementation Highlights
- **Full Interface Compliance**: All MemoryStorage methods implemented
- **Robust Error Handling**: Exponential backoff, retry logic, circuit breaker patterns
- **Performance Optimizations**: Embedding caching, connection pooling, async operations
- **Smart Content Strategy**: Small content in D1, large content in R2
- **Comprehensive Testing**: 15 unit tests covering all major functionality
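
The exponential backoff and retry logic called out above can be illustrated with a minimal sketch (simplified; the actual CloudflareStorage implementation may differ):

```python
import asyncio
import random

async def with_backoff(operation, max_retries=5, base_delay=0.5):
    """Retry an async operation with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return await operation()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the last error
            # 0.5s, 1s, 2s, ... plus small random jitter to avoid thundering herd
            await asyncio.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```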

#### Files Created/Modified
- ✅ `src/mcp_memory_service/storage/cloudflare.py` - Core implementation (740 lines)
- ✅ `src/mcp_memory_service/config.py` - Configuration updates
- ✅ `src/mcp_memory_service/server.py` - Backend initialization
- ✅ `tests/unit/test_cloudflare_storage.py` - Comprehensive test suite
- ✅ `requirements-cloudflare.txt` - Additional dependencies
- ✅ `tasks/cloudflare-api-requirements.md` - API documentation

### Architecture Decisions Made

#### Storage Strategy
- **Vectors:** Cloudflare Vectorize for semantic embeddings
- **Metadata:** D1 SQLite for tags, timestamps, relationships, content hashes
- **Content:** Inline for small content (<1MB), R2 for larger content
- **Embeddings:** Workers AI `@cf/baai/bge-base-en-v1.5` with local fallback
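
The content routing above (inline vs. R2 at the 1MB boundary) reduces to a size check on the encoded content. A hedged sketch (`choose_content_store` is an illustrative name, not the actual API):

```python
ONE_MB = 1024 * 1024  # threshold from the storage strategy above

def choose_content_store(content: str) -> str:
    """Route memory content: inline in D1 below 1MB, R2 object storage otherwise."""
    return "d1" if len(content.encode("utf-8")) < ONE_MB else "r2"
```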

#### Configuration Approach
- Environment variable: `MCP_MEMORY_BACKEND=cloudflare`
- Required: `CLOUDFLARE_API_TOKEN`, `CLOUDFLARE_ACCOUNT_ID`
- Services: `CLOUDFLARE_VECTORIZE_INDEX`, `CLOUDFLARE_D1_DATABASE_ID`
- Optional: `CLOUDFLARE_R2_BUCKET` for large content storage
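
The required/optional split above implies a validation step at startup. A minimal sketch of that check (the function name is illustrative; the real validation lives in `config.py`):

```python
import os

REQUIRED_VARS = (
    "CLOUDFLARE_API_TOKEN",
    "CLOUDFLARE_ACCOUNT_ID",
    "CLOUDFLARE_VECTORIZE_INDEX",
    "CLOUDFLARE_D1_DATABASE_ID",
)

def validate_cloudflare_config(env=None):
    """Raise if the Cloudflare backend is selected but required vars are unset."""
    env = os.environ if env is None else env
    if env.get("MCP_MEMORY_BACKEND") != "cloudflare":
        return  # another backend is active; nothing to check
    missing = [v for v in REQUIRED_VARS if not env.get(v)]
    if missing:
        raise ValueError(f"Cloudflare backend missing: {', '.join(missing)}")
```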

## Phase 2: Workers Deployment Support (Week 3)
- [ ] Worker entry point implementation
- [ ] Deployment configuration (wrangler.toml)
- [ ] Build system updates
- [ ] CI/CD pipeline integration

## Phase 3: Migration & Testing (Week 4)
- [ ] Data migration tools
- [ ] Comprehensive testing suite
- [ ] Performance benchmarking
- [ ] Documentation completion

## Phase 1 Status: ✅ COMPLETE

### Final Deliverables ✅
- ✅ **Core Implementation**: CloudflareStorage backend (740 lines) with full interface compliance
- ✅ **Configuration**: Complete environment variable setup and validation
- ✅ **Server Integration**: Seamless backend initialization in server.py
- ✅ **Testing**: Comprehensive test suite with 15 unit tests covering all functionality
- ✅ **Documentation**: Complete setup guide, API documentation, and troubleshooting
- ✅ **Migration Tools**: Universal migration script supporting SQLite-vec and ChromaDB
- ✅ **README Updates**: Integration with main project documentation

### Performance Achievements
- **Memory Efficiency**: Minimal local footprint with cloud-based storage
- **Global Performance**: <100ms latency from most global locations
- **Smart Caching**: 1000-entry embedding cache with LRU eviction
- **Error Resilience**: Exponential backoff, retry logic, circuit breaker patterns
- **Async Operations**: Full async/await implementation for optimal performance
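
The 1000-entry embedding cache with LRU eviction described above can be sketched with an `OrderedDict` (illustrative only; the real implementation may differ):

```python
from collections import OrderedDict

class EmbeddingCache:
    """Bounded LRU cache for text -> embedding lookups."""

    def __init__(self, max_size=1000):
        self.max_size = max_size
        self._entries = OrderedDict()

    def get(self, text):
        if text not in self._entries:
            return None
        self._entries.move_to_end(text)  # mark as most recently used
        return self._entries[text]

    def put(self, text, embedding):
        self._entries[text] = embedding
        self._entries.move_to_end(text)
        if len(self._entries) > self.max_size:
            self._entries.popitem(last=False)  # evict least recently used
```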

### Architecture Success
- **Vectorize Integration**: Semantic search with Workers AI embeddings
- **D1 Database**: Relational metadata storage with ACID compliance
- **R2 Storage**: Smart content strategy for large objects (>1MB)
- **Connection Pooling**: HTTP client optimization for API efficiency
- **Batch Processing**: Bulk operations for improved throughput

## Current Blockers
- None - Phase 1 complete and ready for production use

## Next Steps - Phase 2: Workers Deployment
1. **Worker Entry Point**: Create cloudflare/worker.js for Workers runtime
2. **Deployment Configuration**: Complete wrangler.toml setup
3. **Build System**: Workers-compatible bundling and optimization
4. **CI/CD Pipeline**: Automated deployment workflows
5. **Testing**: Integration tests with real Cloudflare Workers environment

## Technical Notes
- Maintaining full backward compatibility with existing storage backends
- Zero breaking changes to current deployments
- Gradual migration capability for existing users
```

--------------------------------------------------------------------------------
/claude-hooks/test-adaptive-weights.js:
--------------------------------------------------------------------------------

```javascript
#!/usr/bin/env node
/**
 * Test Adaptive Memory Weight Adjustment
 * Tests the new dynamic weight adjustment based on memory age distribution
 */

const { analyzeMemoryAgeDistribution, calculateAdaptiveGitWeight } = require('./utilities/memory-scorer');

console.log('=== ADAPTIVE WEIGHT ADJUSTMENT TEST ===\n');

// Scenario 1: Stale memory set (your actual problem)
console.log('📊 Scenario 1: Stale Memory Set (Median > 30 days)');
console.log('─'.repeat(80));

const staleMemories = [
    { created_at_iso: new Date(Date.now() - 60 * 24 * 60 * 60 * 1000).toISOString(), content: 'Old README work' },
    { created_at_iso: new Date(Date.now() - 54 * 24 * 60 * 60 * 1000).toISOString(), content: 'Old wiki docs' },
    { created_at_iso: new Date(Date.now() - 24 * 24 * 60 * 60 * 1000).toISOString(), content: 'Contributing guide' },
    { created_at_iso: new Date(Date.now() - 57 * 24 * 60 * 60 * 1000).toISOString(), content: 'GitHub issue work' },
    { created_at_iso: new Date(Date.now() - 52 * 24 * 60 * 60 * 1000).toISOString(), content: 'Old PR merge' },
];

const ageAnalysis1 = analyzeMemoryAgeDistribution(staleMemories, { verbose: true });

console.log('\n🔍 Analysis Results:');
console.log(`  Median Age: ${Math.round(ageAnalysis1.medianAge)} days`);
console.log(`  Average Age: ${Math.round(ageAnalysis1.avgAge)} days`);
console.log(`  Recent Count: ${ageAnalysis1.recentCount}/${ageAnalysis1.totalCount} (${Math.round(ageAnalysis1.recentCount/ageAnalysis1.totalCount*100)}%)`);
console.log(`  Is Stale: ${ageAnalysis1.isStale ? '✅ YES' : '❌ NO'}`);

if (ageAnalysis1.recommendedAdjustments.reason) {
    console.log('\n💡 Recommended Adjustments:');
    console.log(`  Reason: ${ageAnalysis1.recommendedAdjustments.reason}`);
    console.log(`  Time Decay Weight: 0.25 → ${ageAnalysis1.recommendedAdjustments.timeDecay}`);
    console.log(`  Tag Relevance Weight: 0.35 → ${ageAnalysis1.recommendedAdjustments.tagRelevance}`);
}

// Test adaptive git weight with recent commits but stale memories
const gitContext1 = {
    recentCommits: [
        { date: new Date(Date.now() - 1 * 24 * 60 * 60 * 1000).toISOString(), message: 'chore: bump version to v8.5.0' },
        { date: new Date(Date.now() - 1 * 24 * 60 * 60 * 1000).toISOString(), message: 'fix: sync script import path' },
    ]
};

const gitWeightResult1 = calculateAdaptiveGitWeight(gitContext1, ageAnalysis1, 1.8, { verbose: true });

console.log('\n⚙️  Adaptive Git Weight:');
console.log(`  Configured Weight: 1.8x`);
console.log(`  Adaptive Weight: ${gitWeightResult1.weight.toFixed(1)}x`);
console.log(`  Adjusted: ${gitWeightResult1.adjusted ? '✅ YES' : '❌ NO'}`);
console.log(`  Reason: ${gitWeightResult1.reason}`);

// Scenario 2: Recent memory set
console.log('\n\n📊 Scenario 2: Recent Memory Set (All memories < 14 days)');
console.log('─'.repeat(80));

const recentMemories = [
    { created_at_iso: new Date(Date.now() - 3 * 24 * 60 * 60 * 1000).toISOString(), content: 'Recent HTTP fix' },
    { created_at_iso: new Date(Date.now() - 5 * 24 * 60 * 60 * 1000).toISOString(), content: 'Dark mode feature' },
    { created_at_iso: new Date(Date.now() - 4 * 24 * 60 * 60 * 1000).toISOString(), content: 'ChromaDB removal' },
    { created_at_iso: new Date(Date.now() - 7 * 24 * 60 * 60 * 1000).toISOString(), content: 'Memory optimization' },
    { created_at_iso: new Date(Date.now() - 2 * 24 * 60 * 60 * 1000).toISOString(), content: 'Token savings' },
];

const ageAnalysis2 = analyzeMemoryAgeDistribution(recentMemories, { verbose: true });

console.log('\n🔍 Analysis Results:');
console.log(`  Median Age: ${Math.round(ageAnalysis2.medianAge)} days`);
console.log(`  Average Age: ${Math.round(ageAnalysis2.avgAge)} days`);
console.log(`  Recent Count: ${ageAnalysis2.recentCount}/${ageAnalysis2.totalCount} (${Math.round(ageAnalysis2.recentCount/ageAnalysis2.totalCount*100)}%)`);
console.log(`  Is Stale: ${ageAnalysis2.isStale ? '✅ YES' : '❌ NO'}`);

const gitContext2 = {
    recentCommits: [
        { date: new Date(Date.now() - 1 * 24 * 60 * 60 * 1000).toISOString(), message: 'chore: bump version' },
    ]
};

const gitWeightResult2 = calculateAdaptiveGitWeight(gitContext2, ageAnalysis2, 1.8, { verbose: true });

console.log('\n⚙️  Adaptive Git Weight:');
console.log(`  Configured Weight: 1.8x`);
console.log(`  Adaptive Weight: ${gitWeightResult2.weight.toFixed(1)}x`);
console.log(`  Adjusted: ${gitWeightResult2.adjusted ? '✅ YES' : '❌ NO'}`);
console.log(`  Reason: ${gitWeightResult2.reason}`);

// Summary
console.log('\n\n✅ Test Summary:');
console.log('─'.repeat(80));
console.log('Expected Behavior:');
console.log('  1. Stale memories (median > 30d) should trigger auto-calibration');
console.log('     → Increase time decay weight, reduce tag relevance weight');
console.log('  2. Recent commits + stale memories should reduce git weight');
console.log('     → Prevents old git memories from dominating');
console.log('  3. Recent commits + recent memories should keep git weight');
console.log('     → Git context is relevant and aligned');
console.log('\nActual Results:');
console.log(`  ${ageAnalysis1.isStale ? '✅' : '❌'} Scenario 1: ${ageAnalysis1.isStale ? 'Auto-calibrated weights' : 'ERROR: Should calibrate'}`);
console.log(`  ${gitWeightResult1.adjusted ? '✅' : '❌'} Scenario 1 Git: ${gitWeightResult1.adjusted ? 'Reduced git weight from 1.8 to ' + gitWeightResult1.weight.toFixed(1) : 'ERROR: Should adjust'}`);
console.log(`  ${!ageAnalysis2.isStale ? '✅' : '❌'} Scenario 2: ${!ageAnalysis2.isStale ? 'No calibration needed' : 'ERROR: Should not calibrate'}`);
console.log(`  ${!gitWeightResult2.adjusted ? '✅' : '❌'} Scenario 2 Git: ${!gitWeightResult2.adjusted ? 'Kept git weight at 1.8' : 'ERROR: Should not adjust'}`);
console.log('\n🎉 Dynamic weight adjustment is working as expected!');

```

--------------------------------------------------------------------------------
/docs/troubleshooting/general.md:
--------------------------------------------------------------------------------

```markdown
# MCP Memory Service Troubleshooting Guide

This guide covers common issues and their solutions when working with the MCP Memory Service.

## First-Time Setup Warnings (Normal Behavior)

### Expected Warnings on First Run

The following warnings are **completely normal** during first-time setup:

#### "No snapshots directory" Warning
```
WARNING:mcp_memory_service.storage.sqlite_vec:Failed to load from cache: No snapshots directory
```
- **Status:** ✅ Normal - Service is checking for cached models
- **Action:** None required - Model will download automatically
- **Duration:** Appears only on first run

#### "TRANSFORMERS_CACHE deprecated" Warning  
```
WARNING: Using TRANSFORMERS_CACHE is deprecated
```
- **Status:** ✅ Normal - Informational warning from Hugging Face
- **Action:** None required - Doesn't affect functionality
- **Duration:** May appear on each run (can be ignored)

#### Model Download Messages
```
Downloading model 'all-MiniLM-L6-v2'...
```
- **Status:** ✅ Normal - One-time model download (~25MB)
- **Action:** Wait 1-2 minutes for download to complete
- **Duration:** First run only

For detailed information, see the [First-Time Setup Guide](../first-time-setup.md).

## Python 3.13 sqlite-vec Issues

### Problem: sqlite-vec Installation Fails on Python 3.13
**Error:** `Failed to install SQLite-vec: Command ... returned non-zero exit status 1`

**Cause:** sqlite-vec doesn't have pre-built wheels for Python 3.13 yet, and no source distribution is available on PyPI.

**Solutions:**

1. **Automatic Fallback (v6.13.2+)**
   - The installer now automatically tries multiple installation methods
   - It will attempt: uv pip, standard pip, source build, and GitHub installation
   - If all fail, you'll be prompted to switch to ChromaDB

2. **Use Python 3.12 (Recommended)**
   ```bash
   # macOS
   brew install [email protected]
   python3.12 -m venv .venv
   source .venv/bin/activate
   python install.py
   ```

3. **Switch to ChromaDB Backend**
   ```bash
   python install.py --storage-backend chromadb
   ```

4. **Manual Installation Attempts**
   ```bash
   # Force source build
   pip install --no-binary :all: sqlite-vec
   
   # Install from GitHub
   pip install git+https://github.com/asg017/sqlite-vec.git#subdirectory=python
   
   # Alternative: pysqlite3-binary
   pip install pysqlite3-binary
   ```

5. **Report Issue**
   - Check for updates: https://github.com/asg017/sqlite-vec/issues
   - sqlite-vec may add Python 3.13 support in future releases
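
Once any of these routes succeeds, you can confirm the extension actually loads. The following is a small diagnostic sketch; it assumes the `sqlite_vec` module comes from the `sqlite-vec` pip package and degrades gracefully if it is missing:

```python
import sqlite3

def check_sqlite_vec() -> str:
    """Report whether sqlite-vec can be imported and loaded into a connection."""
    try:
        import sqlite_vec  # provided by the 'sqlite-vec' pip package
    except ImportError:
        return "sqlite-vec not installed"
    conn = sqlite3.connect(":memory:")
    if not hasattr(conn, "enable_load_extension"):
        return "Python built without SQLite extension support"
    conn.enable_load_extension(True)
    sqlite_vec.load(conn)  # load the vec0 extension into this connection
    conn.enable_load_extension(False)
    version = conn.execute("SELECT vec_version()").fetchone()[0]
    return f"sqlite-vec loaded, version {version}"

print(check_sqlite_vec())
```

If this prints a version string, the backend should initialize; either failure message points back to the workarounds above.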

## macOS SQLite Extension Issues

### Problem: `AttributeError: 'sqlite3.Connection' object has no attribute 'enable_load_extension'`
**Error:** Python 3.12 (and other versions) on macOS failing with sqlite-vec backend

**Cause:** Python on macOS is not compiled with `--enable-loadable-sqlite-extensions` by default. The system SQLite library doesn't support extensions.

**Platform:** Affects macOS (all versions), particularly with system Python

**Solutions:**

1. **Use Homebrew Python (Recommended)**
   ```bash
   # Install Homebrew Python (includes extension support)
   brew install python
   hash -r  # Refresh shell command cache
   python3 --version  # Verify Homebrew version
   
   # Reinstall MCP Memory Service
   python3 install.py
   ```

2. **Use pyenv with Extension Support**
   ```bash
   # Install pyenv
   brew install pyenv
   
   # Install Python with extension support
   PYTHON_CONFIGURE_OPTS="--enable-loadable-sqlite-extensions" \
   LDFLAGS="-L$(brew --prefix sqlite)/lib" \
   CPPFLAGS="-I$(brew --prefix sqlite)/include" \
   pyenv install 3.12.0
   
   pyenv local 3.12.0
   python install.py
   ```

3. **Switch to ChromaDB Backend**
   ```bash
   # ChromaDB doesn't require SQLite extensions
   export MCP_MEMORY_STORAGE_BACKEND=chromadb
   python install.py --storage-backend chromadb
   ```

4. **Verify Extension Support**
   ```bash
   python3 -c "
   import sqlite3
   conn = sqlite3.connect(':memory:')
   if hasattr(conn, 'enable_load_extension'):
       try:
           conn.enable_load_extension(True)
           print('✅ Extension support working')
       except Exception as e:
           print(f'❌ Extension support disabled: {e}')
   else:
       print('❌ No enable_load_extension attribute')
   "
   ```

**Why this happens:**
- Security: Extension loading disabled by default
- Compilation: System Python not built with extension support
- Library: macOS bundled SQLite lacks extension loading capability

**Detection:** The installer now automatically detects this issue and provides guidance.

## Common Installation Issues

[Content from installation.md's troubleshooting section - already well documented]

## MCP Protocol Issues

### Method Not Found Errors

If you're seeing "Method not found" errors or JSON error popups in Claude Desktop:

#### Symptoms
- "Method not found" errors in logs
- JSON error popups in Claude Desktop
- Connection issues between Claude Desktop and the memory service

#### Solution
1. Ensure you have the latest version of the MCP Memory Service
2. Verify your server implements all required MCP protocol methods:
   - resources/list
   - resources/read
   - resource_templates/list
3. Update your Claude Desktop configuration using the provided template

[Additional content from MCP_PROTOCOL_FIX.md]

## Windows-Specific Issues

[Content from WINDOWS_JSON_FIX.md and windows-specific sections]

## Performance Optimization

### Memory Issues
[Content from installation.md's performance section]

### Acceleration Issues
[Content from installation.md's acceleration section]

## Debugging Tools

[Content from installation.md's debugging section]

## Getting Help

[Content from installation.md's help section]

```

--------------------------------------------------------------------------------
/docs/remote-configuration-wiki-section.md:
--------------------------------------------------------------------------------

```markdown
# Remote Server Configuration (Wiki Section)

This content can be added to the **03 Integration Guide** wiki page under the "1. Claude Desktop Integration" section.

---

## Remote Server Configuration

For users who want to connect Claude Desktop or Cursor to a remote MCP Memory Service instance (running on a VPS, server, or different machine), use the HTTP-to-MCP bridge included in the repository.

### Quick Setup

The MCP Memory Service includes a Node.js bridge that translates MCP protocol messages from Claude Desktop into HTTP API calls, allowing remote connections.
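
Conceptually, the bridge reads JSON-RPC messages on stdin and maps each MCP method to an HTTP call against the remote REST API. A simplified Python sketch of that mapping idea (the route table here is illustrative only; the actual bridge is the Node.js script below, with its own routes and payload shapes):

```python
import json

# Illustrative MCP-method -> REST-route table (not the bridge's real mapping).
ROUTES = {
    "resources/list": ("GET", "/memories"),
    "tools/call": ("POST", "/memories"),
}

def to_http_request(jsonrpc_msg: str):
    """Translate one MCP JSON-RPC message into (HTTP method, path, body)."""
    msg = json.loads(jsonrpc_msg)
    method, path = ROUTES.get(msg.get("method"), ("POST", "/rpc"))
    return method, path, msg.get("params", {})

print(to_http_request('{"jsonrpc":"2.0","id":1,"method":"resources/list"}'))
```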

**Configuration for Claude Desktop:**

```json
{
  "mcpServers": {
    "memory": {
      "command": "node",
      "args": ["/path/to/mcp-memory-service/examples/http-mcp-bridge.js"],
      "env": {
        "MCP_HTTP_ENDPOINT": "https://your-server:8000/api",
        "MCP_MEMORY_API_KEY": "your-secure-api-key"
      }
    }
  }
}
```

### Configuration Options

#### Manual Endpoint Configuration (Recommended for Remote Servers)
```json
{
  "mcpServers": {
    "memory": {
      "command": "node",
      "args": ["/path/to/mcp-memory-service/examples/http-mcp-bridge.js"],
      "env": {
        "MCP_HTTP_ENDPOINT": "https://your-server:8000/api",
        "MCP_MEMORY_API_KEY": "your-secure-api-key",
        "MCP_MEMORY_AUTO_DISCOVER": "false",
        "MCP_MEMORY_PREFER_HTTPS": "true"
      }
    }
  }
}
```

#### Auto-Discovery (For Local Network)
```json
{
  "mcpServers": {
    "memory": {
      "command": "node",
      "args": ["/path/to/mcp-memory-service/examples/http-mcp-bridge.js"],
      "env": {
        "MCP_MEMORY_AUTO_DISCOVER": "true",
        "MCP_MEMORY_PREFER_HTTPS": "true",
        "MCP_MEMORY_API_KEY": "your-api-key"
      }
    }
  }
}
```

### Step-by-Step Setup

1. **Download the HTTP Bridge**
   - Copy [`examples/http-mcp-bridge.js`](https://github.com/doobidoo/mcp-memory-service/blob/main/examples/http-mcp-bridge.js) to your local machine

2. **Update Configuration**
   - Open your Claude Desktop configuration file:
     - **macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json`
     - **Windows**: `%APPDATA%\Claude\claude_desktop_config.json`
   - Add the remote server configuration (see examples above)
   - Replace `/path/to/mcp-memory-service/examples/http-mcp-bridge.js` with the actual path
   - Replace `https://your-server:8000/api` with your server's endpoint
   - Replace `your-secure-api-key` with your actual API key

3. **Verify Connection**
   - Restart Claude Desktop
   - Test the connection with a simple memory operation
   - Check the bridge logs for any connection issues

### Bridge Features

The HTTP-to-MCP bridge supports:

- ✅ **Manual endpoint configuration** - Direct connection to your remote server
- ✅ **API key authentication** - Secure access to your memory service
- ✅ **HTTPS with self-signed certificates** - Works with development SSL certificates
- ✅ **Automatic service discovery via mDNS** - Auto-detects local network services
- ✅ **Retry logic and error handling** - Robust connection management
- ✅ **Comprehensive logging** - Detailed logs for troubleshooting

### Environment Variables Reference

| Variable | Description | Default | Example |
|----------|-------------|---------|---------|
| `MCP_HTTP_ENDPOINT` | Remote server API endpoint (Streamable HTTP lives at `/mcp`; REST at `/api`) | `http://localhost:8000/api` | `https://myserver.com:8000/api` |
| `MCP_MEMORY_API_KEY` | Authentication token for client bridge (server uses `MCP_API_KEY`) | None | `abc123xyz789` |
| `MCP_MEMORY_AUTO_DISCOVER` | Enable mDNS service discovery | `false` | `true` |
| `MCP_MEMORY_PREFER_HTTPS` | Prefer HTTPS over HTTP when discovering | `true` | `false` |
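
The defaults in the table can be expressed as a small resolution helper (a sketch mirroring the documented behavior, not the bridge's actual code):

```python
import os

def resolve_bridge_config(env=None):
    """Resolve bridge settings from the environment, using the documented defaults."""
    env = os.environ if env is None else env
    return {
        "endpoint": env.get("MCP_HTTP_ENDPOINT", "http://localhost:8000/api"),
        "api_key": env.get("MCP_MEMORY_API_KEY"),  # no default: unauthenticated if unset
        "auto_discover": env.get("MCP_MEMORY_AUTO_DISCOVER", "false") == "true",
        "prefer_https": env.get("MCP_MEMORY_PREFER_HTTPS", "true") == "true",
    }

print(resolve_bridge_config({}))
```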

### Troubleshooting Remote Connections

#### Connection Refused
- **Issue**: Bridge can't connect to the remote server
- **Solutions**:
  - Verify the server is running and accessible
  - Check firewall rules allow connections on port 8000
  - Confirm the endpoint URL is correct
  - Test with curl: `curl https://your-server:8000/api/health`
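
If curl is unavailable, the same health probe can be run from Python. This sketch disables certificate verification so it tolerates self-signed development certificates; use it for diagnostics only:

```python
import json
import ssl
import urllib.request

def probe_health(base_url: str) -> dict:
    """GET <base_url>/health, accepting self-signed certificates."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # diagnostics only: skips cert validation
    with urllib.request.urlopen(f"{base_url}/health", context=ctx, timeout=5) as resp:
        return {"status_code": resp.status, "body": json.loads(resp.read() or b"{}")}

# Example: probe_health("https://your-server:8000/api")
```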

#### SSL Certificate Issues
- **Issue**: HTTPS connections fail with SSL errors
- **Solutions**:
  - The bridge automatically accepts self-signed certificates
  - Ensure your server is running with HTTPS enabled
  - Check server logs for SSL configuration issues

#### API Key Authentication Failed
- **Issue**: Server returns 401 Unauthorized
- **Solutions**:
  - Verify the API key is correctly set on the server
  - Check the key is properly configured in the bridge environment
  - Ensure no extra whitespace in the API key value

#### Service Discovery Not Working
- **Issue**: Auto-discovery can't find the service
- **Solutions**:
  - Use manual endpoint configuration instead
  - Ensure both devices are on the same network
  - Check if mDNS/Bonjour is enabled on your network

#### Bridge Logs Not Appearing
- **Issue**: Can't see bridge connection logs
- **Solutions**:
  - Bridge logs appear in Claude Desktop's console/stderr
  - On macOS, use Console.app to view logs
  - On Windows, check Event Viewer or run Claude Desktop from command line

### Complete Example Files

For complete working examples, see:
- [`examples/claude-desktop-http-config.json`](https://github.com/doobidoo/mcp-memory-service/blob/main/examples/claude-desktop-http-config.json) - Complete configuration template
- [`examples/http-mcp-bridge.js`](https://github.com/doobidoo/mcp-memory-service/blob/main/examples/http-mcp-bridge.js) - Full bridge implementation with documentation

---

*This section should be added to the existing "1. Claude Desktop Integration" section of the 03 Integration Guide wiki page, positioned after the basic local configuration examples.*

```

--------------------------------------------------------------------------------
/scripts/maintenance/cleanup_corrupted_encoding.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python3
"""
Clean up memories with corrupted emoji encoding from the database.
This script identifies and removes entries where emojis were incorrectly encoded.
"""

import sqlite3
import json
import re
import sys
from pathlib import Path

def detect_corrupted_encoding(text):
    """
    Detect if text contains corrupted emoji encoding patterns.
    Common patterns include:
    - üöÄ (corrupted 🚀)
    - ‚ö° (corrupted ⚡)
    - üéØ (corrupted 🎯)
    - ‚úÖ (corrupted ✅)
    - ‚û°Ô∏è (corrupted ➡️)
    - and other mangled Unicode sequences
    """
    # Pattern for corrupted emojis - looking for specific corrupted sequences
    corrupted_patterns = [
        r'[üöÄ]{2,}',  # Multiple Germanic umlauts together
        r'‚[öûú][°ÖØ]',  # Specific corrupted emoji patterns
        r'️',  # Part of corrupted arrow emoji
        r'\uf8ff',  # Apple logo character that shouldn't be there
        r'ü[öéì][ÄØÖ]',  # Common corrupted emoji starts
        r'‚Ä[窆]',  # Another corruption pattern
    ]
    
    for pattern in corrupted_patterns:
        if re.search(pattern, text):
            return True
    
    # Also check for suspicious character combinations
    # Real emojis are typically in ranges U+1F300-U+1F9FF, U+2600-U+27BF
    # Corrupted text often has unusual combinations of Latin extended characters
    suspicious_chars = ['ü', 'ö', 'Ä', '‚', 'Ô', '∏', 'è', '°', 'Ö', 'Ø', 'û', 'ú', 'ì', '†', 'ª', 'ç']
    char_count = sum(text.count(char) for char in suspicious_chars)
    
    # If we have multiple suspicious characters in a short span, likely corrupted
    if char_count > 3 and len(text) < 200:
        return True
    
    return False

def cleanup_corrupted_memories(db_path, dry_run=True):
    """
    Clean up memories with corrupted encoding.
    
    Args:
        db_path: Path to the SQLite database
        dry_run: If True, only show what would be deleted without actually deleting
    """
    conn = sqlite3.connect(db_path)
    cursor = conn.cursor()
    
    print(f"{'DRY RUN - ' if dry_run else ''}Scanning for memories with corrupted encoding...")
    
    # Get all memories with potential corruption
    cursor.execute("""
        SELECT content_hash, content, tags, created_at 
        FROM memories 
        WHERE tags LIKE '%readme%' OR tags LIKE '%documentation%'
        ORDER BY created_at DESC
    """)
    
    all_memories = cursor.fetchall()
    corrupted_memories = []
    
    for content_hash, content, tags_json, created_at in all_memories:
        if detect_corrupted_encoding(content):
            try:
                tags = json.loads(tags_json) if tags_json else []
            except (json.JSONDecodeError, TypeError):
                tags = []
            
            # Skip if already marked as UTF8-fixed (these are the corrected versions)
            if 'utf8-fixed' in tags:
                continue
                
            corrupted_memories.append({
                'hash': content_hash,
                'content_preview': content[:200],
                'tags': tags,
                'created_at': created_at
            })
    
    print(f"\nFound {len(corrupted_memories)} memories with corrupted encoding")
    
    if corrupted_memories:
        print("\nCorrupted memories to be deleted:")
        print("-" * 80)
        
        for i, mem in enumerate(corrupted_memories[:10], 1):  # Show first 10
            print(f"\n{i}. Hash: {mem['hash'][:20]}...")
            print(f"   Created: {mem['created_at']}")
            print(f"   Tags: {', '.join(mem['tags'][:5])}")
            print(f"   Content preview: {mem['content_preview'][:100]}...")
        
        if len(corrupted_memories) > 10:
            print(f"\n... and {len(corrupted_memories) - 10} more")
        
        if not dry_run:
            print("\n" + "="*80)
            print("DELETING CORRUPTED MEMORIES...")
            
            # Delete from memories table
            for mem in corrupted_memories:
                cursor.execute("DELETE FROM memories WHERE content_hash = ?", (mem['hash'],))
                
                # Also delete from embeddings table if it exists
                try:
                    cursor.execute("DELETE FROM memory_embeddings WHERE rowid = ?", (mem['hash'],))
                except sqlite3.Error:
                    pass  # Embeddings table might be absent or keyed differently
            
            conn.commit()
            print(f"✅ Deleted {len(corrupted_memories)} corrupted memories")
            
            # Verify deletion
            cursor.execute("SELECT COUNT(*) FROM memories")
            remaining = cursor.fetchone()[0]
            print(f"📊 Remaining memories in database: {remaining}")
        else:
            print("\n" + "="*80)
            print("DRY RUN COMPLETE - No changes made")
            print(f"To actually delete these {len(corrupted_memories)} memories, run with --execute flag")
    else:
        print("✅ No corrupted memories found!")
    
    conn.close()

def main():
    """Main entry point."""
    import argparse
    
    parser = argparse.ArgumentParser(description='Clean up memories with corrupted emoji encoding')
    parser.add_argument('--db-path', type=str, 
                        default='/home/hkr/.local/share/mcp-memory/sqlite_vec.db',
                        help='Path to SQLite database')
    parser.add_argument('--execute', action='store_true',
                        help='Actually delete the corrupted memories (default is dry run)')
    
    args = parser.parse_args()
    
    if not Path(args.db_path).exists():
        print(f"❌ Database not found: {args.db_path}")
        sys.exit(1)
    
    cleanup_corrupted_memories(args.db_path, dry_run=not args.execute)

if __name__ == "__main__":
    main()
```

--------------------------------------------------------------------------------
/scripts/benchmarks/benchmark_server_caching.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python3
"""
Benchmark production MCP server (server.py) caching performance.

Tests global caching implementation to measure performance improvement
from baseline ~1,810ms to target <400ms on cache hits.

Usage:
    python scripts/benchmarks/benchmark_server_caching.py
"""

import asyncio
import time
import sys
from pathlib import Path

# Add src to path
sys.path.insert(0, str(Path(__file__).parent.parent.parent / "src"))

from mcp_memory_service.server import MemoryServer
from mcp_memory_service import config


async def benchmark_server_caching():
    """Benchmark production MCP server caching performance."""

    print("=" * 80)
    print("PRODUCTION MCP SERVER CACHING PERFORMANCE BENCHMARK")
    print("=" * 80)
    print(f"Storage Backend: {config.STORAGE_BACKEND}")
    print(f"Database Path: {config.SQLITE_VEC_PATH}")
    print()

    # Create server instance
    server = MemoryServer()

    results = []
    num_calls = 10

    print(f"Running {num_calls} consecutive storage initialization calls...\n")

    for i in range(num_calls):
        # Reset storage flag to simulate fresh initialization check
        # (but cache will persist between calls)
        if i > 0:
            server._storage_initialized = False

        start = time.time()

        # Call the lazy initialization method
        await server._ensure_storage_initialized()

        duration_ms = (time.time() - start) * 1000
        results.append(duration_ms)

        call_type = "CACHE MISS" if i == 0 else "CACHE HIT"
        print(f"Call #{i+1:2d}: {duration_ms:7.2f}ms  ({call_type})")

    # Import cache stats from server module
    from mcp_memory_service import server as server_module
    cache_stats = server_module._CACHE_STATS

    # Calculate statistics
    first_call = results[0]  # Cache miss
    cached_calls = results[1:]  # Cache hits
    avg_cached = sum(cached_calls) / len(cached_calls) if cached_calls else 0
    min_cached = min(cached_calls) if cached_calls else 0
    max_cached = max(cached_calls) if cached_calls else 0

    print()
    print("=" * 80)
    print("RESULTS")
    print("=" * 80)
    print(f"First Call (Cache Miss):  {first_call:7.2f}ms")
    print(f"Cached Calls Average:     {avg_cached:7.2f}ms")
    print(f"Cached Calls Min:         {min_cached:7.2f}ms")
    print(f"Cached Calls Max:         {max_cached:7.2f}ms")
    print()

    # Calculate improvement
    if avg_cached > 0:
        improvement = ((first_call - avg_cached) / first_call) * 100
        speedup = first_call / avg_cached
        print(f"Performance Improvement:  {improvement:.1f}%")
        print(f"Speedup Factor:           {speedup:.2f}x faster")

    print()
    print("=" * 80)
    print("CACHE STATISTICS")
    print("=" * 80)
    print(f"Total Initialization Calls: {cache_stats['total_calls']}")
    print(f"Storage Cache Hits:         {cache_stats['storage_hits']}")
    print(f"Storage Cache Misses:       {cache_stats['storage_misses']}")
    print(f"Service Cache Hits:         {cache_stats['service_hits']}")
    print(f"Service Cache Misses:       {cache_stats['service_misses']}")

    storage_cache = server_module._STORAGE_CACHE
    service_cache = server_module._MEMORY_SERVICE_CACHE
    print(f"Storage Cache Size:         {len(storage_cache)} instances")
    print(f"Service Cache Size:         {len(service_cache)} instances")

    total_checks = cache_stats['total_calls'] * 2
    total_hits = cache_stats['storage_hits'] + cache_stats['service_hits']
    hit_rate = (total_hits / total_checks * 100) if total_checks > 0 else 0
    print(f"Overall Cache Hit Rate:     {hit_rate:.1f}%")

    print()
    print("=" * 80)
    print("COMPARISON TO BASELINE")
    print("=" * 80)
    print("Baseline (no caching):      1,810ms per call")
    print(f"Optimized (cache miss):     {first_call:7.2f}ms")
    print(f"Optimized (cache hit):      {avg_cached:7.2f}ms")
    print()

    # Determine success
    target_cached_time = 400  # ms
    if avg_cached < target_cached_time:
        print(f"✅ SUCCESS: Cache hit average ({avg_cached:.2f}ms) is under target ({target_cached_time}ms)")
        success = True
    else:
        print(f"⚠️  PARTIAL: Cache hit average ({avg_cached:.2f}ms) exceeds target ({target_cached_time}ms)")
        print("   Note: Still a significant improvement over baseline!")
        success = avg_cached < 1000  # Consider <1s a success

    print()

    # Test get_cache_stats MCP tool
    print("=" * 80)
    print("TESTING get_cache_stats MCP TOOL")
    print("=" * 80)

    try:
        # Call with empty arguments dict (tool takes no parameters)
        result = await server.handle_get_cache_stats({})
        # Extract the actual stats from MCP response format (safely parse JSON)
        import json
        stats_result = json.loads(result[0].text) if result else {}
        print("✅ get_cache_stats tool works!")
        print(f"   Hit Rate: {stats_result.get('hit_rate', 'N/A')}%")
        print(f"   Message: {stats_result.get('message', 'N/A')}")
    except Exception as e:
        print(f"❌ get_cache_stats tool failed: {e}")

    print()

    return {
        "success": success,
        "first_call_ms": first_call,
        "avg_cached_ms": avg_cached,
        "min_cached_ms": min_cached,
        "max_cached_ms": max_cached,
        "improvement_pct": improvement if avg_cached > 0 else 0,
        "cache_hit_rate": hit_rate
    }


if __name__ == "__main__":
    try:
        results = asyncio.run(benchmark_server_caching())

        # Exit code based on success
        sys.exit(0 if results["success"] else 1)

    except Exception as e:
        print(f"\n❌ Benchmark failed with error: {e}")
        import traceback
        traceback.print_exc()
        sys.exit(2)

```

--------------------------------------------------------------------------------
/archive/docs-removed-2025-08-23/development/CLEANUP_PLAN.md:
--------------------------------------------------------------------------------

```markdown
# Cleanup Plan for MCP-MEMORY-SERVICE

## 1. Test Files Organization

### Current Test Files
- `test_chromadb.py` - Tests ChromaDB initialization with new API pattern
- `test_health_check_fixes.py` - Tests health check fixes and validation
- `test_issue_5_fix.py` - Tests tag deletion functionality
- `test_performance_optimizations.py` - Tests performance improvements

### Recommended Organization
1. **Create a structured tests directory:**
   ```
   tests/
   ├── integration/         # Integration tests between components
   │   ├── test_server.py   # Server integration tests
   │   └── test_storage.py  # Storage integration tests
   ├── unit/                # Unit tests for individual components
   │   ├── test_chroma.py   # ChromaDB-specific tests
   │   ├── test_config.py   # Configuration tests
   │   └── test_utils.py    # Utility function tests
   └── performance/         # Performance benchmarks
       ├── test_caching.py  # Cache performance tests
       └── test_queries.py  # Query performance tests
   ```

2. **Move existing test files to appropriate directories:**
   - `test_chromadb.py` → `tests/unit/test_chroma.py`
   - `test_health_check_fixes.py` → `tests/integration/test_storage.py`
   - `test_issue_5_fix.py` → `tests/unit/test_tags.py`
   - `test_performance_optimizations.py` → `tests/performance/test_caching.py`

3. **Create a proper test runner:**
   - Add `pytest.ini` configuration
   - Add `conftest.py` with common fixtures
   - Create a `.coveragerc` file for coverage reporting

## 2. Documentation Organization

### Current Documentation
- `CHANGELOG.md` - Release history and changes
- `CLAUDE.md` - Claude-specific documentation
- `CLEANUP_SUMMARY.md` - Cleanup summary
- `HEALTH_CHECK_FIXES_SUMMARY.md` - Health check fixes documentation
- `PERFORMANCE_OPTIMIZATION_SUMMARY.md` - Performance optimization documentation
- `README.md` - Main project documentation

### Recommended Organization
1. **Consolidate implementation documentation:**
   ```
   docs/
   ├── guides/                # User guides
   │   ├── getting_started.md # Quick start guide
   │   └── configuration.md   # Configuration options
   ├── implementation/        # Implementation details
   │   ├── health_checks.md   # Health check documentation
   │   ├── performance.md     # Performance optimization details
   │   └── tags.md           # Tag functionality documentation
   ├── api/                   # API documentation
   │   ├── server.md          # Server API documentation
   │   └── storage.md         # Storage API documentation
   └── examples/              # Example code
       ├── basic_usage.md     # Basic usage examples
       └── advanced.md        # Advanced usage examples
   ```

2. **Move existing documentation:**
   - `HEALTH_CHECK_FIXES_SUMMARY.md` → `docs/implementation/health_checks.md`
   - `PERFORMANCE_OPTIMIZATION_SUMMARY.md` → `docs/implementation/performance.md`
   - Keep `CHANGELOG.md` in the root directory
   - Move `CLAUDE.md` to `docs/guides/claude_integration.md`

## 3. Backup and Archive Files

### Files to Archive
- `backup_performance_optimization/` - Archive this directory
- Any development artifacts that are no longer needed

### Recommended Action
1. **Create an archive directory:**
   ```
   archive/
   ├── 2025-06-24/            # Archive by date
   │   ├── tests/             # Old test files
   │   └── docs/              # Old documentation
   ```

2. **Move backup files to the archive:**
   - Move `backup_performance_optimization/` to `archive/2025-06-24/`
   - Create a README in the archive directory explaining what's stored there

## 4. Git Cleanup Actions

### Recommended Git Actions
1. **Create a new branch for changes:**
   ```bash
   git checkout -b feature/cleanup-and-organization
   ```

2. **Add and organize files:**
   ```bash
   # Create new directories
   mkdir -p tests/integration tests/unit tests/performance
   mkdir -p docs/guides docs/implementation docs/api docs/examples
   mkdir -p archive/2025-06-24
   
   # Move test files
   git mv test_chromadb.py tests/unit/test_chroma.py
   git mv test_health_check_fixes.py tests/integration/test_storage.py
   git mv test_issue_5_fix.py tests/unit/test_tags.py
   git mv test_performance_optimizations.py tests/performance/test_caching.py
   
   # Move documentation
   git mv HEALTH_CHECK_FIXES_SUMMARY.md docs/implementation/health_checks.md
   git mv PERFORMANCE_OPTIMIZATION_SUMMARY.md docs/implementation/performance.md
   git mv CLAUDE.md docs/guides/claude_integration.md
   
   # Archive backup files
   git mv backup_performance_optimization archive/2025-06-24/
   ```

3. **Update CHANGELOG.md:**
   ```bash
   git mv CHANGELOG.md.new CHANGELOG.md
   ```

4. **Commit changes:**
   ```bash
   git add .
   git commit -m "Organize tests, documentation, and archive old files"
   ```

5. **Create new branch for hardware testing:**
   ```bash
   git checkout -b test/hardware-validation
   ```

## 5. Final Verification Steps

1. **Run tests to ensure everything still works:**
   ```bash
   cd tests
   pytest
   ```

2. **Verify documentation links are updated:**
   - Check README.md for any links to moved files
   - Update any cross-references in documentation
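One way to find stale cross-references is to grep for the old filenames moved above (extend the pattern to cover the other moved files as needed):

```bash
# Sketch: list markdown files still referencing the pre-move filenames
grep -rn --include='*.md' -E 'HEALTH_CHECK_FIXES_SUMMARY|PERFORMANCE_OPTIMIZATION_SUMMARY' . \
    || echo "No stale links found"
```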

3. **Ensure CHANGELOG is complete:**
   - Verify all changes are documented
   - Check version numbers and dates

4. **Track changes in memory:**
   ```bash
   # Store the changes in memory
   memory store_memory --content "Reorganized MCP-MEMORY-SERVICE project structure on June 24, 2025. Created proper test directory structure, consolidated documentation in docs/ directory, and archived old backup files. Changes are in the feature/cleanup-and-organization branch, with hardware testing in test/hardware-validation branch." --tags "mcp-memory-service,cleanup,reorganization,memory-driven"
   ```
```

--------------------------------------------------------------------------------
/scripts/pr/thread_status.sh:
--------------------------------------------------------------------------------

```bash
#!/bin/bash
# scripts/pr/thread_status.sh - Display PR review thread status
#
# Shows comprehensive status of all review threads on a PR with filtering options.
# Uses GitHub GraphQL API to access review thread data.
#
# Usage: bash scripts/pr/thread_status.sh <PR_NUMBER> [--unresolved|--resolved|--outdated]
# Example: bash scripts/pr/thread_status.sh 212 --unresolved
#
# Flags:
#   --unresolved: Show only unresolved threads
#   --resolved: Show only resolved threads
#   --outdated: Show only outdated threads
#   (no flag): Show all threads with summary

set -e

# Get script directory for sourcing helpers
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

# Source GraphQL helpers
if [ -f "$SCRIPT_DIR/lib/graphql_helpers.sh" ]; then
    source "$SCRIPT_DIR/lib/graphql_helpers.sh"
else
    echo "Error: GraphQL helpers not found at $SCRIPT_DIR/lib/graphql_helpers.sh"
    exit 1
fi

# Parse arguments
PR_NUMBER=$1
FILTER=${2:-all}

if [ -z "$PR_NUMBER" ]; then
    echo "Usage: $0 <PR_NUMBER> [--unresolved|--resolved|--outdated]"
    echo "Example: $0 212 --unresolved"
    exit 1
fi

# Verify gh CLI supports GraphQL
if ! check_graphql_support; then
    exit 1
fi

# Color codes for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
GRAY='\033[0;90m'
NC='\033[0m' # No Color

echo "========================================"
echo "  PR Review Thread Status"
echo "========================================"
echo "PR Number: #$PR_NUMBER"
echo "Filter: ${FILTER/--/}"
echo ""

# Get all review threads
echo "Fetching review threads..."
threads_json=$(get_review_threads "$PR_NUMBER")

# Get thread statistics
stats=$(get_thread_stats "$PR_NUMBER")

total=$(echo "$stats" | jq -r '.total')
resolved=$(echo "$stats" | jq -r '.resolved')
unresolved=$(echo "$stats" | jq -r '.unresolved')
outdated=$(echo "$stats" | jq -r '.outdated')

# Display summary
echo "========================================"
echo "  Summary"
echo "========================================"
echo -e "Total Threads:      $total"
echo -e "${GREEN}Resolved:${NC}           $resolved"
echo -e "${RED}Unresolved:${NC}         $unresolved"
echo -e "${YELLOW}Outdated:${NC}           $outdated"
echo ""

if [ "$total" -eq 0 ]; then
    echo "✅ No review threads found for PR #$PR_NUMBER"
    exit 0
fi

# Display detailed thread list
echo "========================================"
echo "  Thread Details"
echo "========================================"

# Determine jq filter based on flag
case "$FILTER" in
    --unresolved)
        jq_filter='select(.isResolved == false)'
        ;;
    --resolved)
        jq_filter='select(.isResolved == true)'
        ;;
    --outdated)
        jq_filter='select(.isOutdated == true)'
        ;;
    *)
        jq_filter='.'
        ;;
esac

# Process and display threads
thread_count=0

echo "$threads_json" | jq -r ".data.repository.pullRequest.reviewThreads.nodes[] | $jq_filter | @json" | while IFS= read -r thread_json; do
    thread_count=$((thread_count + 1))

    thread_id=$(echo "$thread_json" | jq -r '.id')
    path=$(echo "$thread_json" | jq -r '.path // "unknown"')
    line=$(echo "$thread_json" | jq -r '.line // 0')
    original_line=$(echo "$thread_json" | jq -r '.originalLine // 0')
    diff_side=$(echo "$thread_json" | jq -r '.diffSide // "unknown"')
    is_resolved=$(echo "$thread_json" | jq -r '.isResolved')
    is_outdated=$(echo "$thread_json" | jq -r '.isOutdated')

    # Get first comment details
    author=$(echo "$thread_json" | jq -r '.comments.nodes[0].author.login // "unknown"')
    comment_body=$(echo "$thread_json" | jq -r '.comments.nodes[0].body // "No comment"')
    created_at=$(echo "$thread_json" | jq -r '.comments.nodes[0].createdAt // "unknown"')
    comment_count=$(echo "$thread_json" | jq -r '.comments.nodes | length')

    # Truncate comment to 150 chars for display
    comment_preview=$(echo "$comment_body" | head -c 150 | tr '\n' ' ')
    if [ ${#comment_body} -gt 150 ]; then
        comment_preview="${comment_preview}..."
    fi

    # Format status indicators
    if [ "$is_resolved" = "true" ]; then
        status_icon="${GREEN}✓${NC}"
        status_text="${GREEN}RESOLVED${NC}"
    else
        status_icon="${RED}○${NC}"
        status_text="${RED}UNRESOLVED${NC}"
    fi

    if [ "$is_outdated" = "true" ]; then
        outdated_icon="${YELLOW}⚠${NC}"
        outdated_text="${YELLOW}OUTDATED${NC}"
    else
        outdated_icon=" "
        outdated_text="${GRAY}current${NC}"
    fi

    # Display thread
    echo ""
    echo -e "$status_icon Thread #$thread_count"
    echo -e "  Status: $status_text | $outdated_text"
    echo -e "  File: ${BLUE}$path${NC}:$line (original: $original_line)"
    echo -e "  Side: $diff_side"
    echo -e "  Author: $author"
    echo -e "  Created: $created_at"
    echo -e "  Comments: $comment_count"
    echo -e "  ${GRAY}\"${comment_preview}\"${NC}"

    # Show thread ID for reference (can be used with resolve_threads.sh)
    echo -e "  ${GRAY}Thread ID: ${thread_id:0:20}...${NC}"
done

echo ""
echo "========================================"

# Provide actionable next steps
if [ "$unresolved" -gt 0 ]; then
    echo ""
    echo "📝 Next Steps:"
    echo ""
    echo "  1. Review unresolved threads:"
    echo "     gh pr view $PR_NUMBER --web"
    echo ""
    echo "  2. After fixing issues and pushing commits, resolve threads:"
    echo "     bash scripts/pr/resolve_threads.sh $PR_NUMBER HEAD --auto"
    echo ""
    echo "  3. Manually resolve specific threads via GitHub web interface"
    echo ""
    echo "  4. Trigger new Gemini review after fixes:"
    echo "     gh pr comment $PR_NUMBER --body '/gemini review'"
    echo ""
fi

# Exit with status indicating unresolved threads
if [ "$unresolved" -gt 0 ]; then
    exit 1
else
    echo "✅ All review threads resolved!"
    exit 0
fi

```

--------------------------------------------------------------------------------
/scripts/pr/watch_reviews.sh:
--------------------------------------------------------------------------------

```bash
#!/bin/bash
# scripts/pr/watch_reviews.sh - Watch for Gemini reviews and auto-respond
#
# Usage: bash scripts/pr/watch_reviews.sh <PR_NUMBER> [CHECK_INTERVAL_SECONDS]
# Example: bash scripts/pr/watch_reviews.sh 212 180
#
# Press Ctrl+C to stop watching

set -e

# Get script directory for sourcing helpers
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

# Source GraphQL helpers for thread resolution
if [ -f "$SCRIPT_DIR/lib/graphql_helpers.sh" ]; then
    source "$SCRIPT_DIR/lib/graphql_helpers.sh"
    GRAPHQL_AVAILABLE=true
else
    echo "Warning: GraphQL helpers not available, thread status disabled"
    GRAPHQL_AVAILABLE=false
fi

PR_NUMBER=$1
CHECK_INTERVAL=${2:-180}  # Default: 3 minutes

if [ -z "$PR_NUMBER" ]; then
    echo "Usage: $0 <PR_NUMBER> [CHECK_INTERVAL_SECONDS]"
    echo "Example: $0 212 180"
    exit 1
fi

echo "========================================"
echo "  Gemini PR Review Watch Mode"
echo "========================================"
echo "PR Number: #$PR_NUMBER"
echo "Check Interval: ${CHECK_INTERVAL}s"
echo "GraphQL Thread Tracking: $([ "$GRAPHQL_AVAILABLE" = true ] && echo "Enabled" || echo "Disabled")"
echo "Press Ctrl+C to stop"
echo ""

# Get repository from git remote (portable across forks)
REPO=$(gh repo view --json nameWithOwner -q .nameWithOwner 2>/dev/null || echo "doobidoo/mcp-memory-service")

# Track last review timestamp to detect new reviews
last_review_time=""

while true; do
    echo "[$(date '+%H:%M:%S')] Checking for new reviews..."

    # Get latest Gemini review timestamp
    current_review_time=$(gh api "repos/$REPO/pulls/$PR_NUMBER/reviews" 2>/dev/null | \
        jq -r '[.[] | select(.user.login == "gemini-code-assist[bot]")] | last | .submitted_at' 2>/dev/null || echo "")

    # Get review state
    review_state=$(gh pr view $PR_NUMBER --json reviews --jq '[.reviews[] | select(.author.login == "gemini-code-assist[bot]")] | last | .state' 2>/dev/null || echo "")

    # Get inline comments count (from latest review)
    comments_count=$(gh api "repos/$REPO/pulls/$PR_NUMBER/comments" 2>/dev/null | \
        jq '[.[] | select(.user.login == "gemini-code-assist[bot]")] | length' 2>/dev/null || echo "0")

    echo "  Review State: ${review_state:-none}"
    echo "  Inline Comments: $comments_count"
    echo "  Last Review: ${current_review_time:-never}"

    # Display thread status if GraphQL available
    if [ "$GRAPHQL_AVAILABLE" = true ]; then
        thread_stats=$(get_thread_stats "$PR_NUMBER" 2>/dev/null || echo '{"total":0,"resolved":0,"unresolved":0}')
        total_threads=$(echo "$thread_stats" | jq -r '.total // 0')
        resolved_threads=$(echo "$thread_stats" | jq -r '.resolved // 0')
        unresolved_threads=$(echo "$thread_stats" | jq -r '.unresolved // 0')
        echo "  Review Threads: $total_threads total, $resolved_threads resolved, $unresolved_threads unresolved"
    fi

    # Check if there's a new review
    if [ -n "$current_review_time" ] && [ "$current_review_time" != "$last_review_time" ]; then
        echo ""
        echo "🔔 NEW REVIEW DETECTED!"
        echo "  Timestamp: $current_review_time"
        echo "  State: $review_state"
        echo ""

        last_review_time="$current_review_time"

        # Check if approved
        if [ "$review_state" = "APPROVED" ]; then
            echo "✅ PR APPROVED by Gemini!"
            echo "  No further action needed"
            echo ""
            echo "You can now merge the PR:"
            echo "  gh pr merge $PR_NUMBER --squash"
            echo ""
            echo "Watch mode will continue monitoring..."

        elif [ "$review_state" = "CHANGES_REQUESTED" ] || [ "$comments_count" -gt 0 ]; then
            echo "📝 Review feedback received ($comments_count inline comments)"
            echo ""

            # Display detailed thread status if GraphQL available
            if [ "$GRAPHQL_AVAILABLE" = true ] && [ "$unresolved_threads" -gt 0 ]; then
                echo "Thread Status:"
                bash "$SCRIPT_DIR/thread_status.sh" "$PR_NUMBER" --unresolved 2>/dev/null || true
                echo ""
            fi

            echo "Options:"
            echo "  1. View detailed thread status:"
            echo "     bash scripts/pr/thread_status.sh $PR_NUMBER"
            echo ""
            echo "  2. View inline comments on GitHub:"
            echo "     gh pr view $PR_NUMBER --web"
            echo ""
            echo "  3. Run auto-review to fix issues automatically:"
            echo "     bash scripts/pr/auto_review.sh $PR_NUMBER 5 true"
            echo ""
            echo "  4. Fix manually, push, and resolve threads:"
            echo "     # After pushing fixes:"
            echo "     bash scripts/pr/resolve_threads.sh $PR_NUMBER HEAD --auto"
            echo "     gh pr comment $PR_NUMBER --body '/gemini review'"
            echo ""

            # Optionally auto-trigger review cycle
            read -t 30 -p "Auto-run review cycle? (y/N): " response || response="n"
            echo ""

            if [[ "$response" =~ ^[Yy]$ ]]; then
                echo "🤖 Starting automated review cycle..."
                bash scripts/pr/auto_review.sh $PR_NUMBER 3 true
                echo ""
                echo "✅ Auto-review cycle completed"
                echo "   Watch mode resuming..."
            else
                echo "⏭️  Skipped auto-review"
                echo "   Manual fixes expected"
            fi

        elif [ "$review_state" = "COMMENTED" ]; then
            echo "💬 General comments received (no changes requested)"
            echo "  Review: $review_state"

        else
            echo "ℹ️  Review state: ${review_state:-unknown}"
        fi

        echo ""
        echo "----------------------------------------"
    fi

    echo "  Next check in ${CHECK_INTERVAL}s..."
    echo ""
    sleep $CHECK_INTERVAL
done

```

--------------------------------------------------------------------------------
/src/mcp_memory_service/cli/main.py:
--------------------------------------------------------------------------------

```python
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
Main CLI entry point for MCP Memory Service.
"""

import click
import sys
import os

from .. import __version__
from .ingestion import add_ingestion_commands


@click.group(invoke_without_command=True)
@click.version_option(version=__version__, prog_name="MCP Memory Service")
@click.pass_context
def cli(ctx):
    """
    MCP Memory Service - A semantic memory service using the Model Context Protocol.
    
    Provides document ingestion, memory management, and MCP server functionality.
    """
    ctx.ensure_object(dict)
    
    # Backward compatibility: if no subcommand provided, default to server
    if ctx.invoked_subcommand is None:
        import warnings
        warnings.warn(
            "Running 'memory' without a subcommand is deprecated. "
            "Please use 'memory server' explicitly. "
            "This backward compatibility will be removed in a future version.",
            DeprecationWarning,
            stacklevel=2
        )
        # Default to server command with default options for backward compatibility
        ctx.invoke(server, debug=False, storage_backend=None)


@cli.command()
@click.option('--debug', is_flag=True, help='Enable debug logging')
@click.option('--storage-backend', '-s', default=None,
              type=click.Choice(['sqlite_vec', 'sqlite-vec', 'cloudflare', 'hybrid']), help='Storage backend to use (defaults to environment or sqlite_vec)')
def server(debug, storage_backend):
    """
    Start the MCP Memory Service server.

    This starts the Model Context Protocol server that can be used by
    Claude Desktop, VS Code extensions, and other MCP-compatible clients.
    """
    # Set environment variables if explicitly provided
    if storage_backend is not None:
        os.environ['MCP_MEMORY_STORAGE_BACKEND'] = storage_backend
    
    # Import and run the server main function
    from ..server import main as server_main
    
    # Set debug flag
    if debug:
        import logging
        logging.basicConfig(level=logging.DEBUG)
    
    # Start the server
    server_main()


@cli.command()
@click.option('--storage-backend', '-s', default='sqlite_vec',
              type=click.Choice(['sqlite_vec', 'sqlite-vec', 'cloudflare', 'hybrid']), help='Storage backend to use')
def status(storage_backend):
    """
    Show memory service status and statistics.
    """
    import asyncio
    
    async def show_status():
        try:
            from .utils import get_storage
            
            storage = await get_storage(storage_backend)
            stats = await storage.get_stats() if hasattr(storage, 'get_stats') else {}
            
            click.echo("📊 MCP Memory Service Status\n")
            click.echo(f"   Version: {__version__}")
            click.echo(f"   Backend: {storage.__class__.__name__}")
            
            if stats:
                click.echo(f"   Memories: {stats.get('total_memories', 'Unknown')}")
                click.echo(f"   Database size: {stats.get('database_size_mb', 'Unknown')} MB")
                click.echo(f"   Unique tags: {stats.get('unique_tags', 'Unknown')}")
            
            click.echo("\n✅ Service is healthy")
            
            await storage.close()
            
        except Exception as e:
            click.echo(f"❌ Error connecting to storage: {str(e)}", err=True)
            sys.exit(1)
    
    asyncio.run(show_status())


# Add ingestion commands to the CLI group
add_ingestion_commands(cli)


def memory_server_main():
    """
    Compatibility entry point for memory-server command.
    
    This function provides backward compatibility for the old memory-server
    entry point by parsing argparse-style arguments and routing them to 
    the Click-based CLI.
    """
    import argparse
    import warnings
    
    # Issue deprecation warning
    warnings.warn(
        "The 'memory-server' command is deprecated. Please use 'memory server' instead. "
        "This compatibility wrapper will be removed in a future version.",
        DeprecationWarning,
        stacklevel=2
    )
    
    # Parse arguments using the same structure as the old argparse CLI
    parser = argparse.ArgumentParser(
        description="MCP Memory Service - A semantic memory service using the Model Context Protocol"
    )
    parser.add_argument(
        "--version",
        action="version", 
        version=f"MCP Memory Service {__version__}",
        help="Show version information"
    )
    parser.add_argument(
        "--debug",
        action="store_true",
        help="Enable debug logging"
    )
    args = parser.parse_args()

    # Convert to Click CLI arguments and call server command
    click_args = ['server']
    if args.debug:
        click_args.append('--debug')
    
    # Call the Click CLI with the converted arguments
    try:
        # Temporarily replace sys.argv to pass arguments to Click
        original_argv = sys.argv
        sys.argv = ['memory'] + click_args
        cli()
    finally:
        sys.argv = original_argv


def main():
    """Main entry point for the CLI."""
    try:
        cli()
    except KeyboardInterrupt:
        click.echo("\n⚠️  Operation cancelled by user")
        sys.exit(130)
    except Exception as e:
        click.echo(f"❌ Unexpected error: {str(e)}", err=True)
        sys.exit(1)


if __name__ == '__main__':
    main()
```

--------------------------------------------------------------------------------
/archive/docs-removed-2025-08-23/UBUNTU_SETUP.md:
--------------------------------------------------------------------------------

```markdown
# Ubuntu Setup Guide for MCP Memory Service with SQLite-vec

## 🎯 Overview

This guide shows how to set up the MCP Memory Service with SQLite-vec backend on Ubuntu for integration with Claude Code and VS Code.

## ✅ Prerequisites Met

You have successfully completed:
- ✅ SQLite-vec installation and testing  
- ✅ Basic dependencies (sentence-transformers, torch, mcp)
- ✅ Environment configuration

## 🔧 Current Setup Status

Your Ubuntu machine now has:

```bash
# Virtual environment active
source venv/bin/activate

# SQLite-vec backend configured
export MCP_MEMORY_STORAGE_BACKEND=sqlite_vec

# Key packages installed:
- sqlite-vec (0.1.6)
- sentence-transformers (5.0.0)  
- torch (2.7.1+cpu)
- mcp (1.11.0)
```

## 🚀 Claude Code Integration

### 1. Install Claude Code CLI

If not already installed:
```bash
# Install Claude Code CLI
curl -fsSL https://claude.ai/install.sh | sh
```

### 2. Configure MCP Memory Service with Claude Code

#### Option A: Automatic Configuration (Recommended)
```bash
# Run installer with Claude Code auto-configuration
python install.py --configure-claude-code
```

This automatically:
- Detects Claude Code installation
- Creates personalized .mcp.json from template
- Replaces placeholder paths with your system paths
- Adds optimized environment variables
- Adds .mcp.json to .gitignore (protects personal info)
- Verifies the configuration works

#### Option B: Manual Configuration
```bash
# Navigate to project directory
cd /home/hkr/repositories/mcp-memory-service

# Add memory service with optimized settings
claude mcp add memory-service --scope project \
  -e MCP_MEMORY_CHROMA_PATH=$HOME/.mcp_memory_chroma \
  -e LOG_LEVEL=INFO \
  -e MCP_TIMEOUT=30000 \
  -- python scripts/run_memory_server.py

# Verify configuration
claude mcp list
```

### 3. Secure Configuration Management

The system uses a template-based approach to protect personal information:

- **Template**: .mcp.json.template (shared, no personal data)
- **Generated**: .mcp.json (personalized, automatically added to .gitignore)
- **Placeholders**: {{USER_HOME}} replaced with your actual home directory
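The substitution the installer performs can be sketched in two lines (template key and paths are illustrative; the real installer does this for you):

```bash
# Sketch of the installer's template substitution (illustrative)
if [ -f .mcp.json.template ]; then
    sed "s|{{USER_HOME}}|$HOME|g" .mcp.json.template > .mcp.json
    echo '.mcp.json' >> .gitignore   # keep the personalized file out of version control
fi
```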

### 4. Database Location

Your SQLite-vec database will be created at:
```
/home/hkr/.local/share/mcp-memory/sqlite_vec.db
```

This single file contains all your memories and can be easily backed up.
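Backing it up is a plain file copy (the backup destination here is illustrative):

```bash
# Back up the single-file database (destination path is illustrative)
DB=~/.local/share/mcp-memory/sqlite_vec.db
if [ -f "$DB" ]; then
    mkdir -p ~/backups
    cp "$DB" ~/backups/sqlite_vec-"$(date +%F)".db
fi
```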

### 5. Claude Code Usage

With the MCP Memory Service running, Claude Code can:

- **Store memories**: "Remember that I prefer using Ubuntu for development"
- **Retrieve memories**: "What did I tell you about my development preferences?"
- **Search by tags**: Find memories with specific topics
- **Time-based recall**: "What did we discuss yesterday about databases?"

### 6. Performance Benefits

SQLite-vec backend provides:
- **75% less memory usage** vs ChromaDB
- **Faster startup times** (2-3x faster)
- **Single file database** (easy backup/share)
- **Better for <100K memories**

## 💻 VS Code Integration Options

### Option 1: Claude Code in VS Code Terminal
```bash
# Open VS Code in your project
code /home/hkr/repositories/mcp-memory-service

# Use integrated terminal to run Claude Code with memory support
# The memory service will automatically use sqlite-vec backend
```

### Option 2: MCP Extension (if available)
```bash
# Install VS Code MCP extension when available
# Configure to use local MCP Memory Service
```

### Option 3: Development Workflow
```bash
# 1. Keep MCP Memory Service running in background
python -m src.mcp_memory_service.server &

# 2. Use Claude Code normally - it will connect to your local service
# 3. All memories stored in local sqlite-vec database
```

## 🔄 Migration from ChromaDB (if needed)

If you have existing ChromaDB data to migrate:

```bash
# Simple migration
python migrate_to_sqlite_vec.py

# Or with custom paths
python scripts/migrate_storage.py \
  --from chroma \
  --to sqlite_vec \
  --backup \
  --backup-path backup.json
```

## 🧪 Testing the Setup

### Quick Test
```bash
# Test that everything works
source venv/bin/activate
export MCP_MEMORY_STORAGE_BACKEND=sqlite_vec
python simple_sqlite_vec_test.py
```

### Full Test (when server is ready)
```bash
# Test MCP server startup
python -c "
import os
os.environ['MCP_MEMORY_STORAGE_BACKEND'] = 'sqlite_vec'
from src.mcp_memory_service.server import main
print('✅ Server can start with sqlite-vec backend')
"
```

## 🛠️ Troubleshooting

### Common Issues

1. **Module Import Errors**
   ```bash
   # Make sure you're in the virtual environment
   source venv/bin/activate
   
   # Check installed packages
   pip list | grep -E "(sqlite-vec|sentence|torch|mcp)"
   ```

2. **Permission Errors**
   ```bash
   # Ensure database directory is writable
   mkdir -p ~/.local/share/mcp-memory
   chmod 755 ~/.local/share/mcp-memory
   ```

3. **Memory/Performance Issues**
   ```bash
   # SQLite-vec uses much less memory than ChromaDB
   # Monitor with: htop or free -h
   ```

### Environment Variables

Add to your `~/.bashrc` for permanent configuration:
```bash
echo 'export MCP_MEMORY_STORAGE_BACKEND=sqlite_vec' >> ~/.bashrc
source ~/.bashrc
```

## 📊 Performance Comparison

| Metric | ChromaDB | SQLite-vec | Improvement |
|--------|----------|------------|-------------|
| Memory Usage (1K memories) | ~200MB | ~50MB | 75% less |
| Startup Time | ~5-10s | ~2-3s | 2-3x faster |
| Disk Usage | ~50MB | ~35MB | 30% less |
| Database Files | Multiple | Single | Simpler |

## 🎉 Next Steps

1. **Start using the memory service** with Claude Code
2. **Store development notes** and project information  
3. **Build up your memory database** over time
4. **Enjoy faster, lighter memory operations**

## 📞 Support

If you encounter issues:
1. Check the troubleshooting section above
2. Review the [SQLite-vec Backend Guide](../sqlite-vec-backend.md)
3. Test with `simple_sqlite_vec_test.py`

Your Ubuntu setup is ready for high-performance memory operations with Claude Code! 🚀
```

--------------------------------------------------------------------------------
/claude-hooks/utilities/version-checker.js:
--------------------------------------------------------------------------------

```javascript
/**
 * Version Checker Utility
 * Reads local version from __init__.py and checks PyPI for latest published version
 */

const fs = require('fs').promises;
const path = require('path');
const https = require('https');

/**
 * Read version from __init__.py
 * @param {string} projectRoot - Path to project root directory
 * @returns {Promise<string|null>} Version string or null if not found
 */
async function readLocalVersion(projectRoot) {
    try {
        const initPath = path.join(projectRoot, 'src', 'mcp_memory_service', '__init__.py');
        const content = await fs.readFile(initPath, 'utf8');

        // Match __version__ = "X.Y.Z" or __version__ = 'X.Y.Z'
        const versionMatch = content.match(/__version__\s*=\s*['"]([\d.]+)['"]/);

        if (versionMatch && versionMatch[1]) {
            return versionMatch[1];
        }

        return null;
    } catch (error) {
        return null;
    }
}

/**
 * Fetch latest version from PyPI
 * @param {string} packageName - Name of the package on PyPI
 * @param {number} timeout - Request timeout in ms (default: 2000)
 * @returns {Promise<string|null>} Latest version string or null if error
 */
async function fetchPyPIVersion(packageName = 'mcp-memory-service', timeout = 2000) {
    return new Promise((resolve) => {
        const url = `https://pypi.org/pypi/${packageName}/json`;

        const timeoutId = setTimeout(() => {
            resolve(null);
        }, timeout);

        https.get(url, {
            headers: {
                'User-Agent': 'mcp-memory-service-hook'
            }
        }, (res) => {
            let data = '';

            res.on('data', (chunk) => {
                data += chunk;
            });

            res.on('end', () => {
                clearTimeout(timeoutId);
                try {
                    const parsed = JSON.parse(data);
                    const latestVersion = parsed?.info?.version;
                    resolve(latestVersion || null);
                } catch (error) {
                    resolve(null);
                }
            });
        }).on('error', () => {
            clearTimeout(timeoutId);
            resolve(null);
        });
    });
}

/**
 * Compare two semantic versions
 * @param {string} local - Local version (e.g., "8.39.1")
 * @param {string} pypi - PyPI version (e.g., "8.38.0")
 * @returns {number} -1 if local < pypi, 0 if equal, 1 if local > pypi
 */
function compareVersions(local, pypi) {
    const localParts = local.split('.').map(Number);
    const pypiParts = pypi.split('.').map(Number);

    for (let i = 0; i < Math.max(localParts.length, pypiParts.length); i++) {
        const localPart = localParts[i] || 0;
        const pypiPart = pypiParts[i] || 0;

        if (localPart < pypiPart) return -1;
        if (localPart > pypiPart) return 1;
    }

    return 0;
}

/**
 * Get version information with local and PyPI comparison
 * @param {string} projectRoot - Path to project root directory
 * @param {Object} options - Options for version check
 * @param {boolean} options.checkPyPI - Whether to check PyPI (default: true)
 * @param {number} options.timeout - PyPI request timeout in ms (default: 2000)
 * @returns {Promise<Object>} Version info object
 */
async function getVersionInfo(projectRoot, options = {}) {
    const { checkPyPI = true, timeout = 2000 } = options;

    const localVersion = await readLocalVersion(projectRoot);

    const result = {
        local: localVersion,
        pypi: null,
        comparison: null,
        status: 'unknown'
    };

    if (!localVersion) {
        result.status = 'error';
        return result;
    }

    if (checkPyPI) {
        const pypiVersion = await fetchPyPIVersion('mcp-memory-service', timeout);
        result.pypi = pypiVersion;

        if (pypiVersion) {
            const comparison = compareVersions(localVersion, pypiVersion);
            result.comparison = comparison;

            if (comparison === 0) {
                result.status = 'published';
            } else if (comparison > 0) {
                result.status = 'development';
            } else {
                result.status = 'outdated';
            }
        } else {
            result.status = 'local-only';
        }
    } else {
        result.status = 'local-only';
    }

    return result;
}

/**
 * Format version information for display
 * @param {Object} versionInfo - Version info from getVersionInfo()
 * @param {Object} colors - Console color codes
 * @returns {string} Formatted version string
 */
function formatVersionDisplay(versionInfo, colors) {
    const { local, pypi, status } = versionInfo;

    if (!local) {
        return `${colors.CYAN}📦 Version${colors.RESET} ${colors.DIM}→${colors.RESET} ${colors.GRAY}Unable to read version${colors.RESET}`;
    }

    let statusLabel = '';
    let pypiDisplay = '';

    switch (status) {
        case 'published':
            statusLabel = `${colors.GRAY}(published)${colors.RESET}`;
            break;
        case 'development':
            statusLabel = `${colors.GRAY}(local)${colors.RESET}`;
            pypiDisplay = pypi ? ` ${colors.DIM}•${colors.RESET} PyPI: ${colors.YELLOW}${pypi}${colors.RESET}` : '';
            break;
        case 'outdated':
            statusLabel = `${colors.RED}(outdated)${colors.RESET}`;
            pypiDisplay = pypi ? ` ${colors.DIM}•${colors.RESET} PyPI: ${colors.GREEN}${pypi}${colors.RESET}` : '';
            break;
        case 'local-only':
            statusLabel = `${colors.GRAY}(local)${colors.RESET}`;
            break;
        default:
            statusLabel = `${colors.GRAY}(unknown)${colors.RESET}`;
    }

    return `${colors.CYAN}📦 Version${colors.RESET} ${colors.DIM}→${colors.RESET} ${colors.BRIGHT}${local}${colors.RESET} ${statusLabel}${pypiDisplay}`;
}

module.exports = {
    readLocalVersion,
    fetchPyPIVersion,
    compareVersions,
    getVersionInfo,
    formatVersionDisplay
};

```

--------------------------------------------------------------------------------
/scripts/testing/test_mdns.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python3
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
Test script for mDNS functionality.

This script runs the mDNS unit and integration tests to verify that
the service discovery functionality works correctly.
"""

import os
import sys
import subprocess
import argparse

# Add the src directory to the Python path (this script lives in scripts/testing/,
# so the repository root is three dirname() calls up)
sys.path.insert(0, os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))), 'src'))

def run_unit_tests():
    """Run unit tests for mDNS functionality."""
    print("🧪 Running mDNS unit tests...")
    
    # Try pytest first, fall back to simple test. Test paths resolve from the
    # repository root, two directories above this script in scripts/testing/.
    repo_root = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
    test_file_pytest = os.path.join(repo_root, 'tests', 'unit', 'test_mdns.py')
    test_file_simple = os.path.join(repo_root, 'tests', 'unit', 'test_mdns_simple.py')
    
    # Try pytest first
    try:
        result = subprocess.run([
            sys.executable, '-m', 'pytest', test_file_pytest, '-v'
        ], check=True, capture_output=True, text=True)
        
        print("✅ Unit tests passed (pytest)!")
        print(result.stdout)
        return True
        
    except (subprocess.CalledProcessError, FileNotFoundError):
        # Fall back to simple test
        try:
            result = subprocess.run([
                sys.executable, test_file_simple
            ], check=True, capture_output=True, text=True)
            
            print("✅ Unit tests passed (simple)!")
            print(result.stdout)
            return True
            
        except subprocess.CalledProcessError as e:
            print("❌ Unit tests failed!")
            print(e.stdout)
            print(e.stderr)
            return False

def run_integration_tests():
    """Run integration tests for mDNS functionality."""
    print("🌐 Running mDNS integration tests...")
    
    repo_root = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
    test_file = os.path.join(repo_root, 'tests', 'integration', 'test_mdns_integration.py')
    
    try:
        result = subprocess.run([
            sys.executable, '-m', 'pytest', test_file, '-v', '-m', 'integration'
        ], check=True, capture_output=True, text=True)
        
        print("✅ Integration tests passed!")
        print(result.stdout)
        return True
        
    except subprocess.CalledProcessError as e:
        print("⚠️ Integration tests had issues (may be expected in CI):")
        print(e.stdout)
        print(e.stderr)
        # Integration tests may fail in CI environments, so don't fail the script
        return True

def check_dependencies():
    """Check if required dependencies are available."""
    print("🔍 Checking mDNS test dependencies...")
    
    pytest_available = True
    try:
        import pytest
        print("✅ pytest available")
    except ImportError:
        print("⚠️ pytest not available - will use simple tests")
        pytest_available = False
    
    try:
        import zeroconf
        print("✅ zeroconf available")
    except ImportError:
        print("❌ zeroconf not available - this should have been installed with the package")
        return False
    
    try:
        import aiohttp
        print("✅ aiohttp available")
    except ImportError:
        print("❌ aiohttp not available - install with: pip install aiohttp")
        return False
    
    return True

def test_basic_imports():
    """Test that mDNS modules can be imported."""
    print("📦 Testing mDNS module imports...")
    
    try:
        from mcp_memory_service.discovery.mdns_service import ServiceAdvertiser, ServiceDiscovery
        print("✅ mDNS service modules imported successfully")
        
        from mcp_memory_service.discovery.client import DiscoveryClient
        print("✅ Discovery client imported successfully")
        
        return True
        
    except ImportError as e:
        print(f"❌ Import failed: {e}")
        return False

def main():
    """Main test function."""
    parser = argparse.ArgumentParser(description="Test mDNS functionality")
    parser.add_argument(
        "--unit-only", 
        action="store_true", 
        help="Run only unit tests (skip integration tests)"
    )
    parser.add_argument(
        "--integration-only", 
        action="store_true", 
        help="Run only integration tests (skip unit tests)"
    )
    parser.add_argument(
        "--no-integration", 
        action="store_true", 
        help="Skip integration tests (same as --unit-only)"
    )
    
    args = parser.parse_args()
    
    print("🔧 MCP Memory Service - mDNS Functionality Test")
    print("=" * 50)
    
    # Check dependencies
    if not check_dependencies():
        print("\n❌ Dependency check failed!")
        return 1
    
    # Test imports
    if not test_basic_imports():
        print("\n❌ Import test failed!")
        return 1
    
    success = True
    
    # Run unit tests
    if not args.integration_only:
        if not run_unit_tests():
            success = False
    
    # Run integration tests
    if not (args.unit_only or args.no_integration):
        if not run_integration_tests():
            success = False
    
    print("\n" + "=" * 50)
    if success:
        print("🎉 All mDNS tests completed successfully!")
        return 0
    else:
        print("❌ Some tests failed!")
        return 1

if __name__ == "__main__":
    sys.exit(main())
```
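`check_dependencies` above imports each package just to learn whether it is installed. An equivalent probe, sketched here as an alternative, uses `importlib.util.find_spec` to test availability without executing any module-level code (the module names are the same three the script checks):

```python
import importlib.util

def module_available(*names):
    """Report availability of each module without importing it fully."""
    return {name: importlib.util.find_spec(name) is not None for name in names}

status = module_available("pytest", "zeroconf", "aiohttp")
# status maps each name to True/False depending on the environment
```

This keeps the dependency check side-effect free, which matters for packages that do heavy work at import time.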

--------------------------------------------------------------------------------
/tests/test_client.py:
--------------------------------------------------------------------------------

```python
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
MCP Memory Service Test Client
Copyright (c) 2024 Heinrich Krupp
Licensed under the MIT License. See LICENSE file in the project root for full license text.
"""
import json
import logging
import sys
import os
from typing import Dict, Any
import threading
import queue
import time

# Configure logging
logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(levelname)s - %(message)s',
    stream=sys.stderr
)
logger = logging.getLogger(__name__)

class MCPTestClient:
    def __init__(self):
        self.message_id = 0
        self.client_name = "test_client"
        self.client_version = "0.1.0"
        self.protocol_version = "0.1.0"
        self.response_queue = queue.Queue()
        self._setup_io()

    def _setup_io(self):
        """Set up binary mode for Windows."""
        if os.name == 'nt':
            import msvcrt
            msvcrt.setmode(sys.stdin.fileno(), os.O_BINARY)
            msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)
            sys.stdin.reconfigure(encoding='utf-8')
            sys.stdout.reconfigure(encoding='utf-8')

    def get_message_id(self) -> str:
        """Generate a unique message ID."""
        self.message_id += 1
        return f"msg_{self.message_id}"

    def send_message(self, message: Dict[str, Any], timeout: float = 30.0) -> Dict[str, Any]:
        """Send a message and wait for response."""
        try:
            message_str = json.dumps(message) + '\n'
            logger.debug(f"Sending message: {message_str.strip()}")
            
            # Write message to stdout
            sys.stdout.write(message_str)
            sys.stdout.flush()
            
            # Read response from stdin. Note that readline() blocks, so the
            # timeout check only fires once the stream starts returning empty
            # reads (e.g. after EOF when the server exits).
            start_time = time.time()
            while True:
                if time.time() - start_time > timeout:
                    raise TimeoutError(f"No response received within {timeout} seconds")
                
                try:
                    response = sys.stdin.readline()
                    if response:
                        logger.debug(f"Received response: {response.strip()}")
                        return json.loads(response)
                except Exception as e:
                    logger.error(f"Error reading response: {str(e)}")
                    raise
                
                time.sleep(0.1)  # Avoid busy waiting on empty reads

        except Exception as e:
            logger.error(f"Error in communication: {str(e)}")
            raise

    def test_memory_operations(self):
        """Run through a series of test operations."""
        try:
            # Initialize connection
            logger.info("Initializing connection...")
            init_message = {
                "jsonrpc": "2.0",
                "method": "initialize",
                "params": {
                    "client_name": self.client_name,
                    "client_version": self.client_version,
                    "protocol_version": self.protocol_version
                },
                "id": self.get_message_id()
            }
            init_response = self.send_message(init_message)
            logger.info(f"Initialization response: {json.dumps(init_response, indent=2)}")

            # List available tools
            logger.info("\nListing available tools...")
            tools_message = {
                "jsonrpc": "2.0",
                "method": "list_tools",
                "params": {},
                "id": self.get_message_id()
            }
            tools_response = self.send_message(tools_message)
            logger.info(f"Available tools: {json.dumps(tools_response, indent=2)}")

            # Store test memories
            test_memories = [
                {
                    "content": "Remember to update documentation for API changes",
                    "metadata": {
                        "tags": ["todo", "documentation", "api"],
                        "type": "task"
                    }
                },
                {
                    "content": "Team meeting notes: Discussed new feature rollout plan",
                    "metadata": {
                        "tags": ["meeting", "notes", "features"],
                        "type": "note"
                    }
                }
            ]

            logger.info("\nStoring test memories...")
            for memory in test_memories:
                store_message = {
                    "jsonrpc": "2.0",
                    "method": "call_tool",
                    "params": {
                        "name": "store_memory",
                        "arguments": memory
                    },
                    "id": self.get_message_id()
                }
                store_response = self.send_message(store_message)
                logger.info(f"Store response: {json.dumps(store_response, indent=2)}")

        except TimeoutError as e:
            logger.error(f"Operation timed out: {str(e)}")
        except Exception as e:
            logger.error(f"An error occurred: {str(e)}")
            raise

def main():
    client = MCPTestClient()
    client.test_memory_operations()

if __name__ == "__main__":
    try:
        main()
    except KeyboardInterrupt:
        logger.info("Test client stopped by user")
    except Exception as e:
        logger.error(f"Test client failed: {str(e)}")
```
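The client above frames each request as a single JSON object terminated by a newline. The framing in isolation, exercised against an in-memory stream (the stream and helper names here are illustrative, not part of the test client):

```python
import io
import json

def write_message(stream, message):
    """Serialize one JSON-RPC message as a newline-terminated line."""
    stream.write(json.dumps(message) + "\n")
    stream.flush()

def read_message(stream):
    """Read one newline-delimited JSON-RPC message."""
    line = stream.readline()
    if not line:
        raise EOFError("stream closed before a response arrived")
    return json.loads(line)

buf = io.StringIO()
write_message(buf, {"jsonrpc": "2.0", "method": "list_tools", "params": {}, "id": "msg_1"})
buf.seek(0)
msg = read_message(buf)
```

One message per line is what lets the client pair `sys.stdout.write` with a single `readline()` on the other side.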

--------------------------------------------------------------------------------
/scripts/maintenance/regenerate_embeddings.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python3
"""
Regenerate embeddings for all memories after cosine distance migration.

This script regenerates embeddings for all existing memories in the database.
Useful after migrations that drop the embeddings table but preserve memories.

Usage:
    python scripts/maintenance/regenerate_embeddings.py
"""

import asyncio
import sys
import logging
from pathlib import Path

# Add parent directory to path for imports
sys.path.insert(0, str(Path(__file__).parent.parent.parent))

from src.mcp_memory_service.storage.factory import create_storage_instance
from src.mcp_memory_service.config import SQLITE_VEC_PATH

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)


async def regenerate_embeddings():
    """Regenerate embeddings for all memories."""

    database_path = SQLITE_VEC_PATH
    logger.info(f"Using database: {database_path}")

    # Create storage instance
    logger.info("Initializing storage backend...")
    storage = await create_storage_instance(database_path)

    try:
        # Get all memories (this accesses the memories table, not embeddings)
        logger.info("Fetching all memories from database...")

        # Access the primary storage directly for hybrid backend
        if hasattr(storage, 'primary'):
            actual_storage = storage.primary
        else:
            actual_storage = storage

        # Get count first
        if hasattr(actual_storage, 'conn'):
            cursor = actual_storage.conn.execute('SELECT COUNT(*) FROM memories')
            total_count = cursor.fetchone()[0]
            logger.info(f"Found {total_count} memories to process")

            # Get all memories
            cursor = actual_storage.conn.execute('''
                SELECT content_hash, content, tags, memory_type, metadata,
                       created_at, updated_at, created_at_iso, updated_at_iso
                FROM memories
            ''')

            memories = []
            for row in cursor.fetchall():
                content_hash, content, tags_str, memory_type, metadata_str = row[:5]
                created_at, updated_at, created_at_iso, updated_at_iso = row[5:]

                # Parse tags
                tags = [tag.strip() for tag in tags_str.split(",") if tag.strip()] if tags_str else []

                # Parse metadata
                import json
                metadata = json.loads(metadata_str) if metadata_str else {}

                memories.append({
                    'content_hash': content_hash,
                    'content': content,
                    'tags': tags,
                    'memory_type': memory_type,
                    'metadata': metadata,
                    'created_at': created_at,
                    'updated_at': updated_at,
                    'created_at_iso': created_at_iso,
                    'updated_at_iso': updated_at_iso
                })

            logger.info(f"Loaded {len(memories)} memories")

            # Regenerate embeddings
            logger.info("Regenerating embeddings...")
            success_count = 0
            error_count = 0

            for i, mem in enumerate(memories, 1):
                try:
                    # Generate embedding
                    embedding = actual_storage._generate_embedding(mem['content'])

                    # Get the rowid for this memory
                    cursor = actual_storage.conn.execute(
                        'SELECT id FROM memories WHERE content_hash = ?',
                        (mem['content_hash'],)
                    )
                    result = cursor.fetchone()
                    if not result:
                        logger.warning(f"Memory {mem['content_hash'][:8]} not found, skipping")
                        error_count += 1
                        continue

                    memory_id = result[0]

                    # Insert embedding
                    from src.mcp_memory_service.storage.sqlite_vec import serialize_float32
                    actual_storage.conn.execute(
                        'INSERT OR REPLACE INTO memory_embeddings(rowid, content_embedding) VALUES (?, ?)',
                        (memory_id, serialize_float32(embedding))
                    )

                    success_count += 1

                    if i % 10 == 0:
                        logger.info(f"Progress: {i}/{len(memories)} ({(i/len(memories)*100):.1f}%)")
                        actual_storage.conn.commit()

                except Exception as e:
                    logger.error(f"Error processing memory {mem['content_hash'][:8]}: {e}")
                    error_count += 1
                    continue

            # Final commit
            actual_storage.conn.commit()

            logger.info(f"\n{'='*60}")
            logger.info(f"Regeneration complete!")
            logger.info(f"  ✅ Success: {success_count} embeddings")
            logger.info(f"  ❌ Errors: {error_count}")
            logger.info(f"  📊 Total: {len(memories)} memories")
            logger.info(f"{'='*60}\n")

            # Verify
            cursor = actual_storage.conn.execute('SELECT COUNT(*) FROM memory_embeddings')
            embedding_count = cursor.fetchone()[0]
            logger.info(f"Verification: {embedding_count} embeddings in database")

        else:
            logger.error("Storage backend doesn't support direct database access")
            return False

        return True

    finally:
        # Cleanup
        if hasattr(storage, 'close'):
            await storage.close()


if __name__ == '__main__':
    try:
        result = asyncio.run(regenerate_embeddings())
        sys.exit(0 if result else 1)
    except KeyboardInterrupt:
        logger.info("\nOperation cancelled by user")
        sys.exit(1)
    except Exception as e:
        logger.error(f"Fatal error: {e}", exc_info=True)
        sys.exit(1)

```
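The script packs each embedding with `serialize_float32` from the sqlite-vec backend before inserting it into `memory_embeddings`. A hedged sketch of what such packing typically looks like — the project's actual helper may differ:

```python
import struct

def serialize_float32(vector):
    """Pack a float vector into little-endian float32 bytes, the BLOB
    layout sqlite-vec expects for embeddings (assumed here, not verified
    against the project's implementation)."""
    return struct.pack(f"<{len(vector)}f", *vector)

blob = serialize_float32([0.5, -1.0, 0.25])
# 3 floats x 4 bytes = 12 bytes; these values round-trip exactly in float32
assert struct.unpack("<3f", blob) == (0.5, -1.0, 0.25)
```

Values that are not exactly representable in float32 (e.g. 0.1) will round on the way in, which is expected for embedding storage.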

--------------------------------------------------------------------------------
/scripts/validation/check_documentation_links.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python3
"""
Script to check for broken internal links in markdown files.
Checks relative links to files within the repository.

Usage:
    python scripts/validation/check_documentation_links.py
    python scripts/validation/check_documentation_links.py --verbose
    python scripts/validation/check_documentation_links.py --fix-suggestions
"""

import os
import re
import argparse
from pathlib import Path
from typing import List, Tuple, Dict

def find_markdown_files(root_dir: str) -> List[Path]:
    """Find all markdown files in the repository."""
    root = Path(root_dir)
    md_files = []
    
    for path in root.rglob("*.md"):
        # Skip venv and node_modules
        if ".venv" in path.parts or "venv" in path.parts or "node_modules" in path.parts:
            continue
        md_files.append(path)
    
    return md_files

def extract_links(content: str) -> List[Tuple[str, str]]:
    """Extract markdown links from content with their text."""
    # Pattern for markdown links: [text](url)
    link_pattern = r'\[([^\]]*)\]\(([^)]+)\)'
    links = re.findall(link_pattern, content)
    return links  # Return (text, url) tuples

def is_internal_link(link: str) -> bool:
    """Check if a link is internal (relative path)."""
    # Skip external URLs, anchors, and mailto links
    if (link.startswith('http://') or 
        link.startswith('https://') or 
        link.startswith('mailto:') or
        link.startswith('#')):
        return False
    return True

def resolve_link_path(md_file_path: Path, link: str) -> Path:
    """Resolve relative link path from markdown file location."""
    # Remove any anchor fragments
    link_path = link.split('#')[0]
    
    # Resolve relative to the markdown file's directory
    return (md_file_path.parent / link_path).resolve()

def suggest_fixes(broken_link: str, repo_root: Path) -> List[str]:
    """Suggest possible fixes for broken links."""
    suggestions = []
    
    # Extract filename from the broken link
    filename = Path(broken_link).name
    
    # Search for files with similar names
    for md_file in find_markdown_files(str(repo_root)):
        if md_file.name.lower() == filename.lower():
            suggestions.append(str(md_file.relative_to(repo_root)))
        elif filename.lower() in md_file.name.lower():
            suggestions.append(str(md_file.relative_to(repo_root)))
    
    return suggestions[:3]  # Return top 3 suggestions

def check_links_in_file(md_file: Path, repo_root: Path) -> List[Tuple[str, str, str, bool]]:
    """Check all internal links in a markdown file."""
    try:
        with open(md_file, 'r', encoding='utf-8') as f:
            content = f.read()
    except Exception as e:
        print(f"Error reading {md_file}: {e}")
        return []
    
    links = extract_links(content)
    internal_links = [(text, link) for text, link in links if is_internal_link(link)]
    
    results = []
    for link_text, link in internal_links:
        try:
            target_path = resolve_link_path(md_file, link)
            exists = target_path.exists()
            results.append((link_text, link, str(target_path), exists))
        except Exception as e:
            results.append((link_text, link, f"Error resolving: {e}", False))
    
    return results

def main():
    parser = argparse.ArgumentParser(description='Check for broken internal links in markdown documentation')
    parser.add_argument('--verbose', '-v', action='store_true', help='Show all links, not just broken ones')
    parser.add_argument('--fix-suggestions', '-s', action='store_true', help='Suggest fixes for broken links')
    parser.add_argument('--format', choices=['text', 'markdown', 'json'], default='text', help='Output format')
    
    args = parser.parse_args()
    
    repo_root = Path(__file__).resolve().parent.parent.parent  # scripts/validation/ -> repo root
    md_files = find_markdown_files(str(repo_root))
    
    print(f"Checking {len(md_files)} markdown files for broken links...\n")
    
    broken_links = []
    total_links = 0
    file_results = {}
    
    for md_file in sorted(md_files):
        rel_path = md_file.relative_to(repo_root)
        link_results = check_links_in_file(md_file, repo_root)
        
        if link_results:
            file_results[str(rel_path)] = link_results
            
            if args.verbose or any(not exists for _, _, _, exists in link_results):
                print(f"\n[FILE] {rel_path}")
                
            for link_text, link, target, exists in link_results:
                total_links += 1
                status = "[OK]" if exists else "[ERROR]"
                
                if args.verbose or not exists:
                    print(f"  {status} [{link_text}]({link})")
                    if not exists:
                        print(f"     -> Target: {target}")
                        broken_links.append((str(rel_path), link_text, link, target))
    
    # Summary
    print(f"\n" + "="*60)
    print(f"SUMMARY:")
    print(f"Total internal links checked: {total_links}")
    print(f"Broken links found: {len(broken_links)}")
    
    if broken_links:
        print(f"\n❌ BROKEN LINKS:")
        for file_path, link_text, link, target in broken_links:
            print(f"\n  📄 {file_path}")
            print(f"     Text: {link_text}")
            print(f"     Link: {link}")
            print(f"     Target: {target}")
            
            if args.fix_suggestions:
                suggestions = suggest_fixes(link, repo_root)
                if suggestions:
                    print(f"     💡 Suggestions:")
                    for suggestion in suggestions:
                        print(f"        - {suggestion}")
    
    # Exit with error code if broken links found
    exit_code = 1 if broken_links else 0
    
    if broken_links:
        print(f"\n⚠️  Found {len(broken_links)} broken links. Use --fix-suggestions for repair ideas.")
    else:
        print(f"\n✅ All documentation links are working correctly!")
    
    return exit_code

if __name__ == "__main__":
    exit(main())
```
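The link pattern and the internal/external split from the script can be exercised standalone; the sample markdown below is illustrative:

```python
import re

LINK_PATTERN = r'\[([^\]]*)\]\(([^)]+)\)'  # same pattern as extract_links

sample = "See [the guide](docs/guide.md) and [home](https://example.com)."
links = re.findall(LINK_PATTERN, sample)
# -> [('the guide', 'docs/guide.md'), ('home', 'https://example.com')]

# Mirror is_internal_link: keep only relative targets
internal = [(text, url) for text, url in links
            if not url.startswith(('http://', 'https://', 'mailto:', '#'))]
# -> [('the guide', 'docs/guide.md')]
```

Because the pattern has two capture groups, `re.findall` returns `(text, url)` tuples directly, which is what `check_links_in_file` iterates over.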

--------------------------------------------------------------------------------
/scripts/backup/restore_memories.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python3
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
Restoration script to import memories from a backup JSON file into the database.
This can be used to restore memories after a database issue or migration problem.
"""
import sys
import os
import json
import asyncio
import logging
import argparse
from pathlib import Path

# Add the repository root to the path so we can import from the src directory
# (this script lives in scripts/backup/)
sys.path.insert(0, str(Path(__file__).parent.parent.parent))

from src.mcp_memory_service.storage.chroma import ChromaMemoryStorage
from src.mcp_memory_service.config import CHROMA_PATH, BACKUPS_PATH

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
)
logger = logging.getLogger("memory_restore")

def parse_args():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Restore memories from backup file")
    parser.add_argument("backup_file", help="Path to backup JSON file", type=str)
    parser.add_argument("--reset", action="store_true", help="Reset database before restoration")
    return parser.parse_args()

async def restore_memories(backup_file, reset_db=False):
    """
    Import memories from a backup JSON file into the database.
    
    Args:
        backup_file: Path to the backup JSON file
        reset_db: If True, reset the database before restoration
    """
    logger.info(f"Initializing ChromaDB storage at {CHROMA_PATH}")
    storage = ChromaMemoryStorage(CHROMA_PATH)
    
    # Check if backup file exists
    if not os.path.exists(backup_file):
        # Check if it's a filename in the backups directory
        potential_path = os.path.join(BACKUPS_PATH, backup_file)
        if os.path.exists(potential_path):
            backup_file = potential_path
        else:
            raise FileNotFoundError(f"Backup file not found: {backup_file}")
    
    logger.info(f"Loading backup from {backup_file}")
    
    try:
        # Load backup data
        with open(backup_file, 'r', encoding='utf-8') as f:
            backup_data = json.load(f)
        
        memories = backup_data.get("memories", [])
        
        if not memories:
            logger.warning("No memories found in backup file")
            return
        
        logger.info(f"Found {len(memories)} memories in backup file")
        
        # Reset database if requested
        if reset_db:
            logger.warning("Resetting database before restoration")
            try:
                storage.client.delete_collection("memory_collection")
                logger.info("Deleted existing collection")
            except Exception as e:
                logger.error(f"Error deleting collection: {str(e)}")
            
            # Reinitialize collection
            storage.collection = storage.client.create_collection(
                name="memory_collection",
                metadata={"hnsw:space": "cosine"},
                embedding_function=storage.embedding_function
            )
            logger.info("Created new collection")
        
        # Process memories in batches
        batch_size = 50
        success_count = 0
        error_count = 0
        
        for i in range(0, len(memories), batch_size):
            batch = memories[i:i+batch_size]
            logger.info(f"Processing batch {i//batch_size + 1}/{(len(memories)-1)//batch_size + 1}")
            
            # Prepare batch data
            batch_ids = []
            batch_documents = []
            batch_metadatas = []
            batch_embeddings = []
            
            for memory in batch:
                batch_ids.append(memory["id"])
                batch_documents.append(memory["document"])
                batch_metadatas.append(memory["metadata"])
                if memory.get("embedding") is not None:
                    batch_embeddings.append(memory["embedding"])
            
            try:
                # Use upsert to avoid duplicates; pass embeddings only when every
                # memory in the batch has one (a partial list would misalign ids)
                if batch_embeddings and len(batch_embeddings) == len(batch_ids):
                    storage.collection.upsert(
                        ids=batch_ids,
                        documents=batch_documents,
                        metadatas=batch_metadatas,
                        embeddings=batch_embeddings
                    )
                else:
                    storage.collection.upsert(
                        ids=batch_ids,
                        documents=batch_documents,
                        metadatas=batch_metadatas
                    )
                success_count += len(batch)
            except Exception as e:
                logger.error(f"Error restoring batch: {str(e)}")
                error_count += len(batch)
        
        logger.info(f"Restoration completed: {success_count} memories restored, {error_count} errors")
        
    except Exception as e:
        logger.error(f"Error restoring backup: {str(e)}")
        raise

async def main():
    """Main function to run the restoration."""
    args = parse_args()
    
    logger.info("=== Starting memory restoration ===")
    
    try:
        await restore_memories(args.backup_file, args.reset)
        logger.info("=== Restoration completed successfully ===")
    except Exception as e:
        logger.error(f"Restoration failed: {str(e)}")
        sys.exit(1)

if __name__ == "__main__":
    asyncio.run(main())
```
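The restore loop walks the memory list in fixed-size slices via `range(0, len(memories), batch_size)`; the same pattern extracted as a tiny helper (names here are illustrative):

```python
def batches(items, batch_size=50):
    """Yield consecutive slices of at most batch_size items each."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

sizes = [len(chunk) for chunk in batches(list(range(120)), 50)]
# -> [50, 50, 20]
```

The final slice is simply shorter, so no padding or boundary check is needed; the restore script relies on the same property.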

--------------------------------------------------------------------------------
/scripts/maintenance/cleanup_memories.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python3
# Copyright 2024 Heinrich Krupp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
Script to clean up erroneous memory entries from ChromaDB.
"""
import sys
import os
import asyncio
import logging
import argparse
from pathlib import Path

# Add the repository root to the path so we can import from the src directory
# (this script lives in scripts/maintenance/)
sys.path.insert(0, str(Path(__file__).parent.parent.parent))

from src.mcp_memory_service.storage.chroma import ChromaMemoryStorage
from src.mcp_memory_service.config import CHROMA_PATH

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
)
logger = logging.getLogger("memory_cleanup")

def parse_args():
    """Parse command line arguments."""
    parser = argparse.ArgumentParser(description="Clean up erroneous memory entries")
    parser.add_argument("--error-text", help="Text pattern found in erroneous entries", type=str)
    parser.add_argument("--dry-run", action="store_true", help="Show what would be deleted without actually deleting")
    parser.add_argument("--reset", action="store_true", help="Completely reset the database (use with caution!)")
    return parser.parse_args()

async def cleanup_memories(error_text=None, dry_run=False, reset=False):
    """
    Clean up erroneous memory entries from ChromaDB.
    
    Args:
        error_text: Text pattern found in erroneous entries
        dry_run: If True, only show what would be deleted without actually deleting
        reset: If True, completely reset the database
    """
    logger.info(f"Initializing ChromaDB storage at {CHROMA_PATH}")
    storage = ChromaMemoryStorage(CHROMA_PATH)
    
    if reset:
        if dry_run:
            logger.warning("[DRY RUN] Would reset the entire database")
        else:
            logger.warning("Resetting the entire database")
            try:
                storage.client.delete_collection("memory_collection")
                logger.info("Deleted existing collection")
                
                # Reinitialize collection
                storage.collection = storage.client.create_collection(
                    name="memory_collection",
                    metadata={"hnsw:space": "cosine"},
                    embedding_function=storage.embedding_function
                )
                logger.info("Created new empty collection")
            except Exception as e:
                logger.error(f"Error resetting collection: {str(e)}")
        return
    
    # Get all memory entries
    try:
        # Query all entries
        result = storage.collection.get()
        total_memories = len(result['ids']) if 'ids' in result else 0
        logger.info(f"Found {total_memories} total memories in the database")
        
        if total_memories == 0:
            logger.info("No memories found in the database")
            return
        
        # Find erroneous entries
        error_ids = []
        
        if error_text:
            logger.info(f"Searching for entries containing text pattern: '{error_text}'")
            for i, doc in enumerate(result['documents']):
                if error_text in doc:
                    error_ids.append(result['ids'][i])
                    if len(error_ids) <= 5:  # Show a few examples
                        logger.info(f"Found erroneous entry: {doc[:100]}...")
        
        # If no specific error text, look for common error patterns
        if not error_text and not error_ids:
            logger.info("No specific error text provided, looking for common error patterns")
            for i, doc in enumerate(result['documents']):
                # Look for very short documents (likely errors)
                if len(doc.strip()) < 10:
                    error_ids.append(result['ids'][i])
                    logger.info(f"Found suspiciously short entry: '{doc}'")
                # Look for error messages
                elif any(err in doc.lower() for err in ['error', 'exception', 'failed', 'invalid']):
                    error_ids.append(result['ids'][i])
                    if len(error_ids) <= 5:  # Show a few examples
                        logger.info(f"Found likely error entry: {doc[:100]}...")
        
        if not error_ids:
            logger.info("No erroneous entries found")
            return
        
        logger.info(f"Found {len(error_ids)} erroneous entries")
        
        # Delete erroneous entries
        if dry_run:
            logger.info(f"[DRY RUN] Would delete {len(error_ids)} erroneous entries")
        else:
            logger.info(f"Deleting {len(error_ids)} erroneous entries")
            # Process in batches to avoid overwhelming the database
            batch_size = 100
            for i in range(0, len(error_ids), batch_size):
                batch = error_ids[i:i+batch_size]
                logger.info(f"Deleting batch {i//batch_size + 1}/{(len(error_ids)-1)//batch_size + 1}")
                storage.collection.delete(ids=batch)
            
            logger.info("Deletion completed")
        
    except Exception as e:
        logger.error(f"Error cleaning up memories: {str(e)}")
        raise

async def main():
    """Main function to run the cleanup."""
    args = parse_args()
    
    logger.info("=== Starting memory cleanup ===")
    
    try:
        await cleanup_memories(args.error_text, args.dry_run, args.reset)
        logger.info("=== Cleanup completed successfully ===")
    except Exception as e:
        logger.error(f"Cleanup failed: {str(e)}")
        sys.exit(1)

if __name__ == "__main__":
    asyncio.run(main())

```

--------------------------------------------------------------------------------
/docs/development/code-quality/phase-2a-install-package.md:
--------------------------------------------------------------------------------

```markdown
# Refactoring: install_package() - Phase 2, Function #4

## Summary
Refactored `install.py::install_package()` to reduce cyclomatic complexity and improve maintainability.

**Metrics:**
- **Original Complexity:** 27-33 (high-risk)
- **Refactored Main Function:** Complexity 7 (74-79% reduction, depending on baseline)
- **Main Function Lines:** 199 → 39

## Refactoring Strategy: Extract Method Pattern

The function mixed several responsibility areas with deep nesting and heavy branching. It was extracted into 7 focused helper functions:

### Helper Functions Created

#### 1. `_setup_installer_command()` - CC: 6
**Purpose:** Detect and configure pip/uv package manager
**Responsibilities:**
- Check if pip is available
- Attempt uv installation if pip missing
- Return appropriate installer command

**Location:** Lines 1257-1298
**Usage:** Called at the beginning of `install_package()`

---

#### 2. `_configure_storage_and_gpu()` - CC: 9
**Purpose:** Configure storage backend and GPU environment setup
**Responsibilities:**
- Detect system and GPU capabilities
- Choose and install storage backend
- Set environment variables for backend and GPU type
- Return configured environment and system info

**Location:** Lines 1301-1357
**Usage:** After installer command setup

---

#### 3. `_handle_pytorch_setup()` - CC: 3
**Purpose:** Orchestrate PyTorch installation
**Responsibilities:**
- Detect Homebrew PyTorch installation
- Trigger platform-specific PyTorch installation if needed
- Set environment variables for ONNX when using Homebrew PyTorch

**Location:** Lines 1360-1387
**Usage:** After storage backend configuration

---

#### 4. `_should_use_onnx_installation()` - CC: 1
**Purpose:** Simple decision function for ONNX installation path
**Logic:** Return True if:
- macOS with Intel CPU AND
- (Python 3.13+ OR using Homebrew PyTorch OR skip_pytorch flag)

**Location:** Lines 1390-1402
**Usage:** Determines which installation flow to follow
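
The decision above can be sketched as a standalone predicate. This is illustrative only: the hypothetical `PlatformInfo` container stands in for the installer's system-detection results, whereas the real helper reads installer state directly.

```python
from dataclasses import dataclass


@dataclass
class PlatformInfo:
    """Hypothetical stand-in for the installer's system-detection results."""
    is_macos: bool
    is_intel_cpu: bool
    python_minor: int       # minor version of Python 3.x
    has_homebrew_pytorch: bool
    skip_pytorch: bool


def should_use_onnx_installation(info: PlatformInfo) -> bool:
    """Mirror the documented logic: macOS Intel AND
    (Python 3.13+ OR Homebrew PyTorch OR --skip-pytorch)."""
    if not (info.is_macos and info.is_intel_cpu):
        return False
    return info.python_minor >= 13 or info.has_homebrew_pytorch or info.skip_pytorch


# macOS Intel on Python 3.13 -> ONNX path
print(should_use_onnx_installation(PlatformInfo(True, True, 13, False, False)))  # True
```

Keeping the predicate free of side effects is what makes its CC of 1 and its unit-testability possible.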

---

#### 5. `_install_with_onnx()` - CC: 7
**Purpose:** SQLite-vec + ONNX specialized installation path
**Responsibilities:**
- Install without ML dependencies (--no-deps)
- Build dependency list (ONNX, tokenizers, etc.)
- Install backend-specific packages
- Configure environment for ONNX runtime
- Fall back to standard installation if fails

**Location:** Lines 1405-1477
**Usage:** Called when ONNX installation is appropriate

---

#### 6. `_install_standard()` - CC: 2
**Purpose:** Standard pip/uv installation
**Responsibilities:**
- Run pip/uv install command
- Handle success/failure cases

**Location:** Lines 1480-1502
**Usage:** Called for normal installation flow

---

#### 7. `_handle_installation_failure()` - CC: 3
**Purpose:** Provide troubleshooting guidance on failure
**Responsibilities:**
- Detect if macOS Intel platform
- Print platform-specific installation instructions
- Suggest Homebrew PyTorch workarounds if available

**Location:** Lines 1505-1521
**Usage:** Called only when installation fails

---

## Refactored `install_package()` Function - CC: 7

**New Structure:**
```python
def install_package(args):
    1. Setup installer command (pip/uv)
    2. Configure storage backend and GPU
    3. Handle PyTorch setup
    4. Decide installation path (ONNX vs Standard)
    5. Execute installation
    6. Handle failures if needed
    7. Return status
```

**Lines:** 39 (vs 199 original)
**Control Flow:** Reduced from 26 branches to 6

## Benefits

### Code Quality
- ✅ **Single Responsibility:** Each function has one clear purpose
- ✅ **Testability:** Helper functions can be unit tested independently
- ✅ **Readability:** Main function now reads like a high-level workflow
- ✅ **Maintainability:** Changes isolated to specific functions

### Complexity Reduction
- Main function complexity: 27 → 7 (74% reduction)
- Maximum helper function complexity: 9 (vs 27 original)
- Total cyclomatic complexity across all functions: ~38 (distributed vs monolithic)
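
For reference, cyclomatic complexity is roughly 1 plus the number of decision points in a function. A minimal sketch of that count using Python's `ast` module (a simplification that ignores constructs like `match` and comprehension filters; tools such as `radon` handle these properly):

```python
import ast


def rough_cyclomatic_complexity(source: str) -> int:
    """Rough CC estimate: 1 + decision points (if/for/while/except/
    ternaries/asserts, plus extra operands in boolean chains)."""
    tree = ast.parse(source)
    complexity = 1
    for node in ast.walk(tree):
        if isinstance(node, (ast.If, ast.For, ast.While, ast.AsyncFor,
                             ast.ExceptHandler, ast.IfExp, ast.Assert)):
            complexity += 1
        elif isinstance(node, ast.BoolOp):
            # each extra operand in `a and b and c` adds one branch
            complexity += len(node.values) - 1
    return complexity


sample = """
def f(x):
    if x > 0 and x < 10:
        return x
    for i in range(x):
        if i % 2:
            return i
    return 0
"""
print(rough_cyclomatic_complexity(sample))  # 5
```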

### Architecture
- **Concerns separated:** GPU detection, backend selection, PyTorch setup, installation paths
- **Clear flow:** Easy to understand the order of operations
- **Error handling:** Dedicated failure handler with platform-specific guidance
- **Extensibility:** Easy to add new installation paths or backends

## Backward Compatibility

✅ **Fully compatible** - No changes to:
- Function signature: `install_package(args)`
- Return values: `bool`
- Environment variable handling
- Command-line argument processing
- Error messages and output format

## Testing Recommendations

1. **Unit Tests for Helpers:**
   - `test_setup_installer_command()` - Verify pip/uv detection
   - `test_configure_storage_and_gpu()` - Test backend selection
   - `test_should_use_onnx_installation()` - Test platform detection logic

2. **Integration Tests:**
   - Test full installation on macOS Intel with Python 3.13+
   - Test with Homebrew PyTorch detected
   - Test ONNX fallback path

3. **Manual Testing:**
   - Run with `--skip-pytorch` flag
   - Run with `--storage-backend sqlite_vec`
   - Verify error messages on intentional failures

## Related Issues

- **Issue #246:** Code Quality Phase 2 - Reduce Function Complexity
- **Phase 2 Progress:** 2/5 top functions completed
  - ✅ `install.py::main()` - Complexity 62 → ~8
  - ✅ `sqlite_vec.py::initialize()` - Complexity 38 → Reduced
  - ⏳ `config.py::__main__()` - Complexity 42 (next)
  - ⏳ `oauth/authorization.py::token()` - Complexity 35
  - ⏳ `install_package()` - Complexity 33 (this refactoring)

## Files Modified

- `install.py`: Refactored `install_package()` function with 7 new helper functions

## Git Commit

Use semantic commit message:
```
refactor: reduce install_package() complexity from 27 to 7 (74% reduction)

Extract helper functions for:
- Installer command setup (pip/uv detection)
- Storage backend and GPU configuration
- PyTorch installation orchestration
- ONNX installation path decision
- ONNX-specific installation
- Standard pip/uv installation
- Installation failure handling

All functions individually testable and maintainable.
Addresses issue #246 Phase 2, function #4 of top 5 high-complexity targets.
```

```

--------------------------------------------------------------------------------
/docs/troubleshooting/cloudflare-api-token-setup.md:
--------------------------------------------------------------------------------

```markdown
# Cloudflare API Token Configuration for MCP Memory Service

This guide helps you create and configure a Cloudflare API token with the correct permissions for the MCP Memory Service Cloudflare backend.

## Required API Token Permissions

To use the Cloudflare backend, your API token must have these permissions:

### Essential Permissions

#### D1 Database
- **Permission**: `Cloudflare D1:Edit`
- **Purpose**: Storing memory metadata, tags, and relationships
- **Required**: Yes

#### Vectorize Index
- **Permission**: `AI Gateway:Edit` or `Vectorize:Edit`
- **Purpose**: Storing and querying memory embeddings
- **Required**: Yes

#### Workers AI
- **Permission**: `AI Gateway:Read` or `Workers AI:Read`
- **Purpose**: Generating embeddings using Cloudflare's AI models
- **Model Used**: `@cf/baai/bge-base-en-v1.5`
- **Required**: Yes

#### Account Access
- **Permission**: `Account:Read`
- **Purpose**: Basic account-level operations
- **Required**: Yes

#### R2 Storage (Optional)
- **Permission**: `R2:Edit`
- **Purpose**: Large content storage (files > 1MB)
- **Required**: Only if using R2 for large content storage

## Token Creation Steps

1. **Navigate to Cloudflare Dashboard**
   - Go to: https://dash.cloudflare.com/profile/api-tokens

2. **Create Custom Token**
   - Click "Create Token" > "Custom token"

3. **Configure Token Permissions**
   - **Token name**: `MCP Memory Service Token` (or similar)
   - **Permissions**: Add all required permissions listed above
   - **Account resources**: Include your Cloudflare account
   - **Zone resources**: Include required zones (or all zones)
   - **IP address filtering**: Leave blank for maximum compatibility
   - **TTL**: Set appropriate expiration date

4. **Save and Copy Token**
   - Click "Continue to summary" > "Create Token"
   - **Important**: Copy the token immediately - it won't be shown again

## Environment Configuration

Add the token to your environment configuration:

### Option 1: Project .env File
```bash
# Add to .env file in project root
MCP_MEMORY_STORAGE_BACKEND=cloudflare
CLOUDFLARE_API_TOKEN=your_new_token_here
CLOUDFLARE_ACCOUNT_ID=your_account_id
CLOUDFLARE_D1_DATABASE_ID=your_d1_database_id
CLOUDFLARE_VECTORIZE_INDEX=your_vectorize_index_name
```
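
A quick sanity check that all four variables are actually visible to the process can be sketched like this (a hypothetical helper, not part of the service):

```python
import os

REQUIRED_VARS = (
    "CLOUDFLARE_API_TOKEN",
    "CLOUDFLARE_ACCOUNT_ID",
    "CLOUDFLARE_D1_DATABASE_ID",
    "CLOUDFLARE_VECTORIZE_INDEX",
)


def missing_cloudflare_vars(env=os.environ) -> list:
    """Return the names of required Cloudflare variables not yet set."""
    return [name for name in REQUIRED_VARS if not env.get(name)]


missing = missing_cloudflare_vars()
if missing:
    print("Missing configuration:", ", ".join(missing))
else:
    print("All required Cloudflare variables are set.")
```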

### Option 2: Claude Desktop Configuration
```json
{
  "mcpServers": {
    "memory": {
      "command": "uv",
      "args": [
        "--directory", "path/to/mcp-memory-service",
        "run", "python", "-m", "mcp_memory_service.server"
      ],
      "env": {
        "MCP_MEMORY_STORAGE_BACKEND": "cloudflare",
        "CLOUDFLARE_API_TOKEN": "your_new_token_here",
        "CLOUDFLARE_ACCOUNT_ID": "your_account_id",
        "CLOUDFLARE_D1_DATABASE_ID": "your_d1_database_id",
        "CLOUDFLARE_VECTORIZE_INDEX": "your_vectorize_index_name"
      }
    }
  }
}
```

## Verification

Test your token configuration:

```bash
# Navigate to project directory
cd path/to/mcp-memory-service

# Test the configuration
uv run python -c "
import asyncio
from src.mcp_memory_service.storage.cloudflare import CloudflareStorage
import os

async def test():
    storage = CloudflareStorage(
        api_token=os.getenv('CLOUDFLARE_API_TOKEN'),
        account_id=os.getenv('CLOUDFLARE_ACCOUNT_ID'),
        vectorize_index=os.getenv('CLOUDFLARE_VECTORIZE_INDEX'),
        d1_database_id=os.getenv('CLOUDFLARE_D1_DATABASE_ID')
    )
    await storage.initialize()
    print('Token configuration successful!')

asyncio.run(test())
"
```

## Common Authentication Issues

### Error Codes and Solutions

#### Error 9109: Location Restriction
- **Symptom**: "Cannot use the access token from location: [IP]"
- **Cause**: Token has IP address restrictions
- **Solution**: Remove IP restrictions or add current IP to allowlist

#### Error 7403: Insufficient Permissions
- **Symptom**: "The given account is not valid or is not authorized"
- **Cause**: Token lacks required service permissions
- **Solution**: Add missing permissions (D1, Vectorize, Workers AI)

#### Error 10000: Authentication Error
- **Symptom**: "Authentication error" for specific services
- **Cause**: Token missing permissions for specific services
- **Solution**: Verify all required permissions are granted

#### Error 1000: Invalid API Token
- **Symptom**: "Invalid API Token"
- **Cause**: Token may be malformed or expired
- **Solution**: Create a new token or check token format
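
The code-to-fix mapping above can be captured in a small lookup helper for scripting (illustrative only; the messages paraphrase the solutions listed):

```python
# Hypothetical helper mapping the Cloudflare error codes above to remediation hints.
REMEDIATIONS = {
    9109: "Remove IP restrictions from the token or allowlist this IP.",
    7403: "Grant the missing service permissions (D1, Vectorize, Workers AI).",
    10000: "Verify every required permission is present on the token.",
    1000: "Token malformed or expired: create a new API token.",
}


def suggest_fix(error_code: int) -> str:
    """Return a remediation hint for a known Cloudflare error code."""
    return REMEDIATIONS.get(error_code, "Unknown code: check the Cloudflare API docs.")


print(suggest_fix(9109))
```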

### Google SSO Accounts

If you use Google SSO for Cloudflare:

1. **Set Account Password**
   - Go to **My Profile** → **Authentication**
   - Click **"Set Password"** to add a password to your account
   - Use this password when prompted during token creation

2. **Alternative: Global API Key**
   - Go to **My Profile** → **API Tokens**
   - Scroll to **"Global API Key"** section
   - Use Global API Key + email for authentication

## Security Best Practices

1. **Minimal Permissions**: Only grant permissions required for your use case
2. **Token Rotation**: Regularly rotate API tokens (e.g., every 90 days)
3. **Environment Variables**: Never commit tokens to version control
4. **IP Restrictions**: Use IP restrictions in production environments
5. **Monitoring**: Monitor token usage in Cloudflare dashboard
6. **Expiration**: Set reasonable TTL for tokens

## Troubleshooting Steps

If authentication continues to fail:

1. **Verify Configuration**
   - Check all environment variables are set correctly
   - Confirm resource IDs (account, database, index) are accurate

2. **Test Individual Services**
   - Test account access first
   - Then test each service (D1, Vectorize, Workers AI) individually

3. **Check Cloudflare Logs**
   - Review API usage logs in Cloudflare dashboard
   - Look for specific error messages and timestamps

4. **Validate Permissions**
   - Double-check all required permissions are selected
   - Ensure permissions include both read and write access where needed

5. **Network Issues**
   - Verify network connectivity to Cloudflare APIs
   - Check if corporate firewall blocks API access
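
For step 2, Cloudflare exposes a token-verification endpoint (`GET /client/v4/user/tokens/verify`). A minimal sketch that builds the request with the standard library; actually sending it requires network access, so that part is left commented out:

```python
import os
import urllib.request


def build_verify_request(token: str) -> urllib.request.Request:
    """Build a request for Cloudflare's token-verify endpoint."""
    return urllib.request.Request(
        "https://api.cloudflare.com/client/v4/user/tokens/verify",
        headers={"Authorization": f"Bearer {token}"},
    )


req = build_verify_request(os.getenv("CLOUDFLARE_API_TOKEN", "dummy"))
print(req.full_url)
# To actually send it (requires network access):
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())  # {"success": true, ...} for a valid token
```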

For additional help, see the [Cloudflare Setup Guide](../cloudflare-setup.md) or the main [troubleshooting documentation](./README.md).
```