This is page 36 of 47. Use http://codebase.md/doobidoo/mcp-memory-service?lines=true&page={x} to view the full context.

# Directory Structure

```
├── .claude
│   ├── agents
│   │   ├── amp-bridge.md
│   │   ├── amp-pr-automator.md
│   │   ├── code-quality-guard.md
│   │   ├── gemini-pr-automator.md
│   │   └── github-release-manager.md
│   ├── settings.local.json.backup
│   └── settings.local.json.local
├── .commit-message
├── .dockerignore
├── .env.example
├── .env.sqlite.backup
├── .envnn#
├── .gitattributes
├── .github
│   ├── FUNDING.yml
│   ├── ISSUE_TEMPLATE
│   │   ├── bug_report.yml
│   │   ├── config.yml
│   │   ├── feature_request.yml
│   │   └── performance_issue.yml
│   ├── pull_request_template.md
│   └── workflows
│       ├── bridge-tests.yml
│       ├── CACHE_FIX.md
│       ├── claude-code-review.yml
│       ├── claude.yml
│       ├── cleanup-images.yml.disabled
│       ├── dev-setup-validation.yml
│       ├── docker-publish.yml
│       ├── LATEST_FIXES.md
│       ├── main-optimized.yml.disabled
│       ├── main.yml
│       ├── publish-and-test.yml
│       ├── README_OPTIMIZATION.md
│       ├── release-tag.yml.disabled
│       ├── release.yml
│       ├── roadmap-review-reminder.yml
│       ├── SECRET_CONDITIONAL_FIX.md
│       └── WORKFLOW_FIXES.md
├── .gitignore
├── .mcp.json.backup
├── .mcp.json.template
├── .pyscn
│   ├── .gitignore
│   └── reports
│       └── analyze_20251123_214224.html
├── AGENTS.md
├── archive
│   ├── deployment
│   │   ├── deploy_fastmcp_fixed.sh
│   │   ├── deploy_http_with_mcp.sh
│   │   └── deploy_mcp_v4.sh
│   ├── deployment-configs
│   │   ├── empty_config.yml
│   │   └── smithery.yaml
│   ├── development
│   │   └── test_fastmcp.py
│   ├── docs-removed-2025-08-23
│   │   ├── authentication.md
│   │   ├── claude_integration.md
│   │   ├── claude-code-compatibility.md
│   │   ├── claude-code-integration.md
│   │   ├── claude-code-quickstart.md
│   │   ├── claude-desktop-setup.md
│   │   ├── complete-setup-guide.md
│   │   ├── database-synchronization.md
│   │   ├── development
│   │   │   ├── autonomous-memory-consolidation.md
│   │   │   ├── CLEANUP_PLAN.md
│   │   │   ├── CLEANUP_README.md
│   │   │   ├── CLEANUP_SUMMARY.md
│   │   │   ├── dream-inspired-memory-consolidation.md
│   │   │   ├── hybrid-slm-memory-consolidation.md
│   │   │   ├── mcp-milestone.md
│   │   │   ├── multi-client-architecture.md
│   │   │   ├── test-results.md
│   │   │   └── TIMESTAMP_FIX_SUMMARY.md
│   │   ├── distributed-sync.md
│   │   ├── invocation_guide.md
│   │   ├── macos-intel.md
│   │   ├── master-guide.md
│   │   ├── mcp-client-configuration.md
│   │   ├── multi-client-server.md
│   │   ├── service-installation.md
│   │   ├── sessions
│   │   │   └── MCP_ENHANCEMENT_SESSION_MEMORY_v4.1.0.md
│   │   ├── UBUNTU_SETUP.md
│   │   ├── ubuntu.md
│   │   ├── windows-setup.md
│   │   └── windows.md
│   ├── docs-root-cleanup-2025-08-23
│   │   ├── AWESOME_LIST_SUBMISSION.md
│   │   ├── CLOUDFLARE_IMPLEMENTATION.md
│   │   ├── DOCUMENTATION_ANALYSIS.md
│   │   ├── DOCUMENTATION_CLEANUP_PLAN.md
│   │   ├── DOCUMENTATION_CONSOLIDATION_COMPLETE.md
│   │   ├── LITESTREAM_SETUP_GUIDE.md
│   │   ├── lm_studio_system_prompt.md
│   │   ├── PYTORCH_DOWNLOAD_FIX.md
│   │   └── README-ORIGINAL-BACKUP.md
│   ├── investigations
│   │   └── MACOS_HOOKS_INVESTIGATION.md
│   ├── litestream-configs-v6.3.0
│   │   ├── install_service.sh
│   │   ├── litestream_master_config_fixed.yml
│   │   ├── litestream_master_config.yml
│   │   ├── litestream_replica_config_fixed.yml
│   │   ├── litestream_replica_config.yml
│   │   ├── litestream_replica_simple.yml
│   │   ├── litestream-http.service
│   │   ├── litestream.service
│   │   └── requirements-cloudflare.txt
│   ├── release-notes
│   │   └── release-notes-v7.1.4.md
│   └── setup-development
│       ├── README.md
│       ├── setup_consolidation_mdns.sh
│       ├── STARTUP_SETUP_GUIDE.md
│       └── test_service.sh
├── CHANGELOG-HISTORIC.md
├── CHANGELOG.md
├── claude_commands
│   ├── memory-context.md
│   ├── memory-health.md
│   ├── memory-ingest-dir.md
│   ├── memory-ingest.md
│   ├── memory-recall.md
│   ├── memory-search.md
│   ├── memory-store.md
│   ├── README.md
│   └── session-start.md
├── claude-hooks
│   ├── config.json
│   ├── config.template.json
│   ├── CONFIGURATION.md
│   ├── core
│   │   ├── memory-retrieval.js
│   │   ├── mid-conversation.js
│   │   ├── session-end.js
│   │   ├── session-start.js
│   │   └── topic-change.js
│   ├── debug-pattern-test.js
│   ├── install_claude_hooks_windows.ps1
│   ├── install_hooks.py
│   ├── memory-mode-controller.js
│   ├── MIGRATION.md
│   ├── README-NATURAL-TRIGGERS.md
│   ├── README-phase2.md
│   ├── README.md
│   ├── simple-test.js
│   ├── statusline.sh
│   ├── test-adaptive-weights.js
│   ├── test-dual-protocol-hook.js
│   ├── test-mcp-hook.js
│   ├── test-natural-triggers.js
│   ├── test-recency-scoring.js
│   ├── tests
│   │   ├── integration-test.js
│   │   ├── phase2-integration-test.js
│   │   ├── test-code-execution.js
│   │   ├── test-cross-session.json
│   │   ├── test-session-tracking.json
│   │   └── test-threading.json
│   ├── utilities
│   │   ├── adaptive-pattern-detector.js
│   │   ├── context-formatter.js
│   │   ├── context-shift-detector.js
│   │   ├── conversation-analyzer.js
│   │   ├── dynamic-context-updater.js
│   │   ├── git-analyzer.js
│   │   ├── mcp-client.js
│   │   ├── memory-client.js
│   │   ├── memory-scorer.js
│   │   ├── performance-manager.js
│   │   ├── project-detector.js
│   │   ├── session-tracker.js
│   │   ├── tiered-conversation-monitor.js
│   │   └── version-checker.js
│   └── WINDOWS-SESSIONSTART-BUG.md
├── CLAUDE.md
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── Development-Sprint-November-2025.md
├── docs
│   ├── amp-cli-bridge.md
│   ├── api
│   │   ├── code-execution-interface.md
│   │   ├── memory-metadata-api.md
│   │   ├── PHASE1_IMPLEMENTATION_SUMMARY.md
│   │   ├── PHASE2_IMPLEMENTATION_SUMMARY.md
│   │   ├── PHASE2_REPORT.md
│   │   └── tag-standardization.md
│   ├── architecture
│   │   ├── search-enhancement-spec.md
│   │   └── search-examples.md
│   ├── architecture.md
│   ├── archive
│   │   └── obsolete-workflows
│   │       ├── load_memory_context.md
│   │       └── README.md
│   ├── assets
│   │   └── images
│   │       ├── dashboard-v3.3.0-preview.png
│   │       ├── memory-awareness-hooks-example.png
│   │       ├── project-infographic.svg
│   │       └── README.md
│   ├── CLAUDE_CODE_QUICK_REFERENCE.md
│   ├── cloudflare-setup.md
│   ├── deployment
│   │   ├── docker.md
│   │   ├── dual-service.md
│   │   ├── production-guide.md
│   │   └── systemd-service.md
│   ├── development
│   │   ├── ai-agent-instructions.md
│   │   ├── code-quality
│   │   │   ├── phase-2a-completion.md
│   │   │   ├── phase-2a-handle-get-prompt.md
│   │   │   ├── phase-2a-index.md
│   │   │   ├── phase-2a-install-package.md
│   │   │   └── phase-2b-session-summary.md
│   │   ├── code-quality-workflow.md
│   │   ├── dashboard-workflow.md
│   │   ├── issue-management.md
│   │   ├── pr-review-guide.md
│   │   ├── refactoring-notes.md
│   │   ├── release-checklist.md
│   │   └── todo-tracker.md
│   ├── docker-optimized-build.md
│   ├── document-ingestion.md
│   ├── DOCUMENTATION_AUDIT.md
│   ├── enhancement-roadmap-issue-14.md
│   ├── examples
│   │   ├── analysis-scripts.js
│   │   ├── maintenance-session-example.md
│   │   ├── memory-distribution-chart.jsx
│   │   └── tag-schema.json
│   ├── first-time-setup.md
│   ├── glama-deployment.md
│   ├── guides
│   │   ├── advanced-command-examples.md
│   │   ├── chromadb-migration.md
│   │   ├── commands-vs-mcp-server.md
│   │   ├── mcp-enhancements.md
│   │   ├── mdns-service-discovery.md
│   │   ├── memory-consolidation-guide.md
│   │   ├── migration.md
│   │   ├── scripts.md
│   │   └── STORAGE_BACKENDS.md
│   ├── HOOK_IMPROVEMENTS.md
│   ├── hooks
│   │   └── phase2-code-execution-migration.md
│   ├── http-server-management.md
│   ├── ide-compatability.md
│   ├── IMAGE_RETENTION_POLICY.md
│   ├── images
│   │   └── dashboard-placeholder.md
│   ├── implementation
│   │   ├── health_checks.md
│   │   └── performance.md
│   ├── IMPLEMENTATION_PLAN_HTTP_SSE.md
│   ├── integration
│   │   ├── homebrew.md
│   │   └── multi-client.md
│   ├── integrations
│   │   ├── gemini.md
│   │   ├── groq-bridge.md
│   │   ├── groq-integration-summary.md
│   │   └── groq-model-comparison.md
│   ├── integrations.md
│   ├── legacy
│   │   └── dual-protocol-hooks.md
│   ├── LM_STUDIO_COMPATIBILITY.md
│   ├── maintenance
│   │   └── memory-maintenance.md
│   ├── mastery
│   │   ├── api-reference.md
│   │   ├── architecture-overview.md
│   │   ├── configuration-guide.md
│   │   ├── local-setup-and-run.md
│   │   ├── testing-guide.md
│   │   └── troubleshooting.md
│   ├── migration
│   │   └── code-execution-api-quick-start.md
│   ├── natural-memory-triggers
│   │   ├── cli-reference.md
│   │   ├── installation-guide.md
│   │   └── performance-optimization.md
│   ├── oauth-setup.md
│   ├── pr-graphql-integration.md
│   ├── quick-setup-cloudflare-dual-environment.md
│   ├── README.md
│   ├── remote-configuration-wiki-section.md
│   ├── research
│   │   ├── code-execution-interface-implementation.md
│   │   └── code-execution-interface-summary.md
│   ├── ROADMAP.md
│   ├── sqlite-vec-backend.md
│   ├── statistics
│   │   ├── charts
│   │   │   ├── activity_patterns.png
│   │   │   ├── contributors.png
│   │   │   ├── growth_trajectory.png
│   │   │   ├── monthly_activity.png
│   │   │   └── october_sprint.png
│   │   ├── data
│   │   │   ├── activity_by_day.csv
│   │   │   ├── activity_by_hour.csv
│   │   │   ├── contributors.csv
│   │   │   └── monthly_activity.csv
│   │   ├── generate_charts.py
│   │   └── REPOSITORY_STATISTICS.md
│   ├── technical
│   │   ├── development.md
│   │   ├── memory-migration.md
│   │   ├── migration-log.md
│   │   ├── sqlite-vec-embedding-fixes.md
│   │   └── tag-storage.md
│   ├── testing
│   │   └── regression-tests.md
│   ├── testing-cloudflare-backend.md
│   ├── troubleshooting
│   │   ├── cloudflare-api-token-setup.md
│   │   ├── cloudflare-authentication.md
│   │   ├── general.md
│   │   ├── hooks-quick-reference.md
│   │   ├── pr162-schema-caching-issue.md
│   │   ├── session-end-hooks.md
│   │   └── sync-issues.md
│   └── tutorials
│       ├── advanced-techniques.md
│       ├── data-analysis.md
│       └── demo-session-walkthrough.md
├── examples
│   ├── claude_desktop_config_template.json
│   ├── claude_desktop_config_windows.json
│   ├── claude-desktop-http-config.json
│   ├── config
│   │   └── claude_desktop_config.json
│   ├── http-mcp-bridge.js
│   ├── memory_export_template.json
│   ├── README.md
│   ├── setup
│   │   └── setup_multi_client_complete.py
│   └── start_https_example.sh
├── install_service.py
├── install.py
├── LICENSE
├── NOTICE
├── pyproject.toml
├── pytest.ini
├── README.md
├── run_server.py
├── scripts
│   ├── .claude
│   │   └── settings.local.json
│   ├── archive
│   │   └── check_missing_timestamps.py
│   ├── backup
│   │   ├── backup_memories.py
│   │   ├── backup_sqlite_vec.sh
│   │   ├── export_distributable_memories.sh
│   │   └── restore_memories.py
│   ├── benchmarks
│   │   ├── benchmark_code_execution_api.py
│   │   ├── benchmark_hybrid_sync.py
│   │   └── benchmark_server_caching.py
│   ├── database
│   │   ├── analyze_sqlite_vec_db.py
│   │   ├── check_sqlite_vec_status.py
│   │   ├── db_health_check.py
│   │   └── simple_timestamp_check.py
│   ├── development
│   │   ├── debug_server_initialization.py
│   │   ├── find_orphaned_files.py
│   │   ├── fix_mdns.sh
│   │   ├── fix_sitecustomize.py
│   │   ├── remote_ingest.sh
│   │   ├── setup-git-merge-drivers.sh
│   │   ├── uv-lock-merge.sh
│   │   └── verify_hybrid_sync.py
│   ├── hooks
│   │   └── pre-commit
│   ├── installation
│   │   ├── install_linux_service.py
│   │   ├── install_macos_service.py
│   │   ├── install_uv.py
│   │   ├── install_windows_service.py
│   │   ├── install.py
│   │   ├── setup_backup_cron.sh
│   │   ├── setup_claude_mcp.sh
│   │   └── setup_cloudflare_resources.py
│   ├── linux
│   │   ├── service_status.sh
│   │   ├── start_service.sh
│   │   ├── stop_service.sh
│   │   ├── uninstall_service.sh
│   │   └── view_logs.sh
│   ├── maintenance
│   │   ├── assign_memory_types.py
│   │   ├── check_memory_types.py
│   │   ├── cleanup_corrupted_encoding.py
│   │   ├── cleanup_memories.py
│   │   ├── cleanup_organize.py
│   │   ├── consolidate_memory_types.py
│   │   ├── consolidation_mappings.json
│   │   ├── delete_orphaned_vectors_fixed.py
│   │   ├── fast_cleanup_duplicates_with_tracking.sh
│   │   ├── find_all_duplicates.py
│   │   ├── find_cloudflare_duplicates.py
│   │   ├── find_duplicates.py
│   │   ├── memory-types.md
│   │   ├── README.md
│   │   ├── recover_timestamps_from_cloudflare.py
│   │   ├── regenerate_embeddings.py
│   │   ├── repair_malformed_tags.py
│   │   ├── repair_memories.py
│   │   ├── repair_sqlite_vec_embeddings.py
│   │   ├── repair_zero_embeddings.py
│   │   ├── restore_from_json_export.py
│   │   └── scan_todos.sh
│   ├── migration
│   │   ├── cleanup_mcp_timestamps.py
│   │   ├── legacy
│   │   │   └── migrate_chroma_to_sqlite.py
│   │   ├── mcp-migration.py
│   │   ├── migrate_sqlite_vec_embeddings.py
│   │   ├── migrate_storage.py
│   │   ├── migrate_tags.py
│   │   ├── migrate_timestamps.py
│   │   ├── migrate_to_cloudflare.py
│   │   ├── migrate_to_sqlite_vec.py
│   │   ├── migrate_v5_enhanced.py
│   │   ├── TIMESTAMP_CLEANUP_README.md
│   │   └── verify_mcp_timestamps.py
│   ├── pr
│   │   ├── amp_collect_results.sh
│   │   ├── amp_detect_breaking_changes.sh
│   │   ├── amp_generate_tests.sh
│   │   ├── amp_pr_review.sh
│   │   ├── amp_quality_gate.sh
│   │   ├── amp_suggest_fixes.sh
│   │   ├── auto_review.sh
│   │   ├── detect_breaking_changes.sh
│   │   ├── generate_tests.sh
│   │   ├── lib
│   │   │   └── graphql_helpers.sh
│   │   ├── quality_gate.sh
│   │   ├── resolve_threads.sh
│   │   ├── run_pyscn_analysis.sh
│   │   ├── run_quality_checks.sh
│   │   ├── thread_status.sh
│   │   └── watch_reviews.sh
│   ├── quality
│   │   ├── fix_dead_code_install.sh
│   │   ├── phase1_dead_code_analysis.md
│   │   ├── phase2_complexity_analysis.md
│   │   ├── README_PHASE1.md
│   │   ├── README_PHASE2.md
│   │   ├── track_pyscn_metrics.sh
│   │   └── weekly_quality_review.sh
│   ├── README.md
│   ├── run
│   │   ├── run_mcp_memory.sh
│   │   ├── run-with-uv.sh
│   │   └── start_sqlite_vec.sh
│   ├── run_memory_server.py
│   ├── server
│   │   ├── check_http_server.py
│   │   ├── check_server_health.py
│   │   ├── memory_offline.py
│   │   ├── preload_models.py
│   │   ├── run_http_server.py
│   │   ├── run_memory_server.py
│   │   ├── start_http_server.bat
│   │   └── start_http_server.sh
│   ├── service
│   │   ├── deploy_dual_services.sh
│   │   ├── install_http_service.sh
│   │   ├── mcp-memory-http.service
│   │   ├── mcp-memory.service
│   │   ├── memory_service_manager.sh
│   │   ├── service_control.sh
│   │   ├── service_utils.py
│   │   └── update_service.sh
│   ├── sync
│   │   ├── check_drift.py
│   │   ├── claude_sync_commands.py
│   │   ├── export_memories.py
│   │   ├── import_memories.py
│   │   ├── litestream
│   │   │   ├── apply_local_changes.sh
│   │   │   ├── enhanced_memory_store.sh
│   │   │   ├── init_staging_db.sh
│   │   │   ├── io.litestream.replication.plist
│   │   │   ├── manual_sync.sh
│   │   │   ├── memory_sync.sh
│   │   │   ├── pull_remote_changes.sh
│   │   │   ├── push_to_remote.sh
│   │   │   ├── README.md
│   │   │   ├── resolve_conflicts.sh
│   │   │   ├── setup_local_litestream.sh
│   │   │   ├── setup_remote_litestream.sh
│   │   │   ├── staging_db_init.sql
│   │   │   ├── stash_local_changes.sh
│   │   │   ├── sync_from_remote_noconfig.sh
│   │   │   └── sync_from_remote.sh
│   │   ├── README.md
│   │   ├── safe_cloudflare_update.sh
│   │   ├── sync_memory_backends.py
│   │   └── sync_now.py
│   ├── testing
│   │   ├── run_complete_test.py
│   │   ├── run_memory_test.sh
│   │   ├── simple_test.py
│   │   ├── test_cleanup_logic.py
│   │   ├── test_cloudflare_backend.py
│   │   ├── test_docker_functionality.py
│   │   ├── test_installation.py
│   │   ├── test_mdns.py
│   │   ├── test_memory_api.py
│   │   ├── test_memory_simple.py
│   │   ├── test_migration.py
│   │   ├── test_search_api.py
│   │   ├── test_sqlite_vec_embeddings.py
│   │   ├── test_sse_events.py
│   │   ├── test-connection.py
│   │   └── test-hook.js
│   ├── utils
│   │   ├── claude_commands_utils.py
│   │   ├── generate_personalized_claude_md.sh
│   │   ├── groq
│   │   ├── groq_agent_bridge.py
│   │   ├── list-collections.py
│   │   ├── memory_wrapper_uv.py
│   │   ├── query_memories.py
│   │   ├── smithery_wrapper.py
│   │   ├── test_groq_bridge.sh
│   │   └── uv_wrapper.py
│   └── validation
│       ├── check_dev_setup.py
│       ├── check_documentation_links.py
│       ├── diagnose_backend_config.py
│       ├── validate_configuration_complete.py
│       ├── validate_memories.py
│       ├── validate_migration.py
│       ├── validate_timestamp_integrity.py
│       ├── verify_environment.py
│       ├── verify_pytorch_windows.py
│       └── verify_torch.py
├── SECURITY.md
├── selective_timestamp_recovery.py
├── SPONSORS.md
├── src
│   └── mcp_memory_service
│       ├── __init__.py
│       ├── api
│       │   ├── __init__.py
│       │   ├── client.py
│       │   ├── operations.py
│       │   ├── sync_wrapper.py
│       │   └── types.py
│       ├── backup
│       │   ├── __init__.py
│       │   └── scheduler.py
│       ├── cli
│       │   ├── __init__.py
│       │   ├── ingestion.py
│       │   ├── main.py
│       │   └── utils.py
│       ├── config.py
│       ├── consolidation
│       │   ├── __init__.py
│       │   ├── associations.py
│       │   ├── base.py
│       │   ├── clustering.py
│       │   ├── compression.py
│       │   ├── consolidator.py
│       │   ├── decay.py
│       │   ├── forgetting.py
│       │   ├── health.py
│       │   └── scheduler.py
│       ├── dependency_check.py
│       ├── discovery
│       │   ├── __init__.py
│       │   ├── client.py
│       │   └── mdns_service.py
│       ├── embeddings
│       │   ├── __init__.py
│       │   └── onnx_embeddings.py
│       ├── ingestion
│       │   ├── __init__.py
│       │   ├── base.py
│       │   ├── chunker.py
│       │   ├── csv_loader.py
│       │   ├── json_loader.py
│       │   ├── pdf_loader.py
│       │   ├── registry.py
│       │   ├── semtools_loader.py
│       │   └── text_loader.py
│       ├── lm_studio_compat.py
│       ├── mcp_server.py
│       ├── models
│       │   ├── __init__.py
│       │   └── memory.py
│       ├── server.py
│       ├── services
│       │   ├── __init__.py
│       │   └── memory_service.py
│       ├── storage
│       │   ├── __init__.py
│       │   ├── base.py
│       │   ├── cloudflare.py
│       │   ├── factory.py
│       │   ├── http_client.py
│       │   ├── hybrid.py
│       │   └── sqlite_vec.py
│       ├── sync
│       │   ├── __init__.py
│       │   ├── exporter.py
│       │   ├── importer.py
│       │   └── litestream_config.py
│       ├── utils
│       │   ├── __init__.py
│       │   ├── cache_manager.py
│       │   ├── content_splitter.py
│       │   ├── db_utils.py
│       │   ├── debug.py
│       │   ├── document_processing.py
│       │   ├── gpu_detection.py
│       │   ├── hashing.py
│       │   ├── http_server_manager.py
│       │   ├── port_detection.py
│       │   ├── system_detection.py
│       │   └── time_parser.py
│       └── web
│           ├── __init__.py
│           ├── api
│           │   ├── __init__.py
│           │   ├── analytics.py
│           │   ├── backup.py
│           │   ├── consolidation.py
│           │   ├── documents.py
│           │   ├── events.py
│           │   ├── health.py
│           │   ├── manage.py
│           │   ├── mcp.py
│           │   ├── memories.py
│           │   ├── search.py
│           │   └── sync.py
│           ├── app.py
│           ├── dependencies.py
│           ├── oauth
│           │   ├── __init__.py
│           │   ├── authorization.py
│           │   ├── discovery.py
│           │   ├── middleware.py
│           │   ├── models.py
│           │   ├── registration.py
│           │   └── storage.py
│           ├── sse.py
│           └── static
│               ├── app.js
│               ├── index.html
│               ├── README.md
│               ├── sse_test.html
│               └── style.css
├── start_http_debug.bat
├── start_http_server.sh
├── test_document.txt
├── test_version_checker.js
├── tests
│   ├── __init__.py
│   ├── api
│   │   ├── __init__.py
│   │   ├── test_compact_types.py
│   │   └── test_operations.py
│   ├── bridge
│   │   ├── mock_responses.js
│   │   ├── package-lock.json
│   │   ├── package.json
│   │   └── test_http_mcp_bridge.js
│   ├── conftest.py
│   ├── consolidation
│   │   ├── __init__.py
│   │   ├── conftest.py
│   │   ├── test_associations.py
│   │   ├── test_clustering.py
│   │   ├── test_compression.py
│   │   ├── test_consolidator.py
│   │   ├── test_decay.py
│   │   └── test_forgetting.py
│   ├── contracts
│   │   └── api-specification.yml
│   ├── integration
│   │   ├── package-lock.json
│   │   ├── package.json
│   │   ├── test_api_key_fallback.py
│   │   ├── test_api_memories_chronological.py
│   │   ├── test_api_tag_time_search.py
│   │   ├── test_api_with_memory_service.py
│   │   ├── test_bridge_integration.js
│   │   ├── test_cli_interfaces.py
│   │   ├── test_cloudflare_connection.py
│   │   ├── test_concurrent_clients.py
│   │   ├── test_data_serialization_consistency.py
│   │   ├── test_http_server_startup.py
│   │   ├── test_mcp_memory.py
│   │   ├── test_mdns_integration.py
│   │   ├── test_oauth_basic_auth.py
│   │   ├── test_oauth_flow.py
│   │   ├── test_server_handlers.py
│   │   └── test_store_memory.py
│   ├── performance
│   │   ├── test_background_sync.py
│   │   └── test_hybrid_live.py
│   ├── README.md
│   ├── smithery
│   │   └── test_smithery.py
│   ├── sqlite
│   │   └── simple_sqlite_vec_test.py
│   ├── test_client.py
│   ├── test_content_splitting.py
│   ├── test_database.py
│   ├── test_hybrid_cloudflare_limits.py
│   ├── test_hybrid_storage.py
│   ├── test_memory_ops.py
│   ├── test_semantic_search.py
│   ├── test_sqlite_vec_storage.py
│   ├── test_time_parser.py
│   ├── test_timestamp_preservation.py
│   ├── timestamp
│   │   ├── test_hook_vs_manual_storage.py
│   │   ├── test_issue99_final_validation.py
│   │   ├── test_search_retrieval_inconsistency.py
│   │   ├── test_timestamp_issue.py
│   │   └── test_timestamp_simple.py
│   └── unit
│       ├── conftest.py
│       ├── test_cloudflare_storage.py
│       ├── test_csv_loader.py
│       ├── test_fastapi_dependencies.py
│       ├── test_import.py
│       ├── test_json_loader.py
│       ├── test_mdns_simple.py
│       ├── test_mdns.py
│       ├── test_memory_service.py
│       ├── test_memory.py
│       ├── test_semtools_loader.py
│       ├── test_storage_interface_compatibility.py
│       └── test_tag_time_filtering.py
├── tools
│   ├── docker
│   │   ├── DEPRECATED.md
│   │   ├── docker-compose.http.yml
│   │   ├── docker-compose.pythonpath.yml
│   │   ├── docker-compose.standalone.yml
│   │   ├── docker-compose.uv.yml
│   │   ├── docker-compose.yml
│   │   ├── docker-entrypoint-persistent.sh
│   │   ├── docker-entrypoint-unified.sh
│   │   ├── docker-entrypoint.sh
│   │   ├── Dockerfile
│   │   ├── Dockerfile.glama
│   │   ├── Dockerfile.slim
│   │   ├── README.md
│   │   └── test-docker-modes.sh
│   └── README.md
└── uv.lock
```

# Files

--------------------------------------------------------------------------------
/claude-hooks/utilities/context-formatter.js:
--------------------------------------------------------------------------------

```javascript
   1 | /**
   2 |  * Context Formatting Utility
   3 |  * Formats memories for injection into Claude Code sessions
   4 |  */
   5 | 
   6 | /**
   7 |  * Detect if running in Claude Code CLI environment
   8 |  */
   9 | function isCLIEnvironment() {
  10 |     // Check for Claude Code specific environment indicators
  11 |     return process.env.CLAUDE_CODE_CLI === 'true' || 
  12 |            process.env.TERM_PROGRAM === 'claude-code' ||
  13 |            process.argv.some(arg => arg.includes('claude')) ||
  14 |            (!process.stdout.isTTY); // Non-TTY contexts (isTTY is undefined, not false, when output is piped)
  15 | }
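     | // Illustrative behavior (editor's note, not part of the original file):
     | //   CLAUDE_CODE_CLI=true node hook.js      → isCLIEnvironment() === true
     | //   output piped to a file (stdout not a TTY) → isCLIEnvironment() === true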
  16 | 
  17 | /**
  18 |  * ANSI Color codes for CLI formatting
  19 |  */
  20 | const COLORS = {
  21 |     RESET: '\x1b[0m',
  22 |     BRIGHT: '\x1b[1m',
  23 |     DIM: '\x1b[2m',
  24 |     CYAN: '\x1b[36m',
  25 |     GREEN: '\x1b[32m',
  26 |     BLUE: '\x1b[34m',
  27 |     YELLOW: '\x1b[33m',
  28 |     MAGENTA: '\x1b[35m',
  29 |     GRAY: '\x1b[90m'
  30 | };
  31 | 
  32 | /**
  33 |  * Convert markdown formatting to ANSI color codes for terminal display
  34 |  * Provides clean, formatted output without raw markdown syntax
  35 |  */
  36 | function convertMarkdownToANSI(text, options = {}) {
  37 |     const {
  38 |         stripOnly = false,  // If true, only strip markdown without adding ANSI
  39 |         preserveStructure = true  // If true, maintain line breaks and spacing
  40 |     } = options;
  41 |     
  42 |     if (!text || typeof text !== 'string') {
  43 |         return text;
  44 |     }
  45 |     
  46 |     // Check if markdown conversion is disabled via environment
  47 |     if (process.env.CLAUDE_MARKDOWN_TO_ANSI === 'false') {
  48 |         return text;
  49 |     }
  50 |     
  51 |     let processed = text;
  52 |     
  53 |     // Process headers (must be done before other replacements)
  54 |     // H1: # Header -> Bold Cyan
  55 |     processed = processed.replace(/^#\s+(.+)$/gm, (match, content) => {
  56 |         return stripOnly ? content : `${COLORS.BRIGHT}${COLORS.CYAN}${content}${COLORS.RESET}`;
  57 |     });
  58 |     
  59 |     // H2: ## Header -> Bold Cyan (rendered the same as H1 in this implementation)
  60 |     processed = processed.replace(/^##\s+(.+)$/gm, (match, content) => {
  61 |         return stripOnly ? content : `${COLORS.BRIGHT}${COLORS.CYAN}${content}${COLORS.RESET}`;
  62 |     });
  63 |     
  64 |     // H3: ### Header -> Bold
  65 |     processed = processed.replace(/^###\s+(.+)$/gm, (match, content) => {
  66 |         return stripOnly ? content : `${COLORS.BRIGHT}${content}${COLORS.RESET}`;
  67 |     });
  68 |     
  69 |     // H4-H6: #### Header -> Bold (but could be differentiated if needed)
  70 |     processed = processed.replace(/^#{4,6}\s+(.+)$/gm, (match, content) => {
  71 |         return stripOnly ? content : `${COLORS.BRIGHT}${content}${COLORS.RESET}`;
  72 |     });
  73 |     
  74 |     // Bold text: **text** or __text__
  75 |     processed = processed.replace(/\*\*([^*]+)\*\*/g, (match, content) => {
  76 |         return stripOnly ? content : `${COLORS.BRIGHT}${content}${COLORS.RESET}`;
  77 |     });
  78 |     processed = processed.replace(/__([^_]+)__/g, (match, content) => {
  79 |         return stripOnly ? content : `${COLORS.BRIGHT}${content}${COLORS.RESET}`;
  80 |     });
  81 |     
  82 |     // Code blocks MUST be processed before inline code to avoid conflicts
  83 |     // Code blocks: ```language\ncode\n```
  84 |     processed = processed.replace(/```(\w*)\n?([\s\S]*?)```/g, (match, lang, content) => {
  85 |         if (stripOnly) {
  86 |             return content.trim();
  87 |         }
  88 |         const lines = content.trim().split('\n').map(line => 
  89 |             `${COLORS.GRAY}${line}${COLORS.RESET}`
  90 |         );
  91 |         return lines.join('\n');
  92 |     });
  93 |     
  94 |     // Italic text: *text* or _text_ (avoiding URLs and bold syntax)
  95 |     // More conservative pattern to avoid matching within URLs
  96 |     processed = processed.replace(/(?<!\*)\*(?!\*)([^*\n]+)(?<!\*)\*(?!\*)/g, (match, content) => {
  97 |         return stripOnly ? content : `${COLORS.DIM}${content}${COLORS.RESET}`;
  98 |     });
  99 |     processed = processed.replace(/(?<!_)_(?!_)([^_\n]+)(?<!_)_(?!_)/g, (match, content) => {
 100 |         return stripOnly ? content : `${COLORS.DIM}${content}${COLORS.RESET}`;
 101 |     });
 102 |     
 103 |     // Inline code: `code` (after code blocks to avoid matching backticks in blocks)
 104 |     processed = processed.replace(/`([^`]+)`/g, (match, content) => {
 105 |         return stripOnly ? content : `${COLORS.GRAY}${content}${COLORS.RESET}`;
 106 |     });
 107 |     
 108 |     // Lists: Convert markdown bullets to better symbols
 109 |     // Unordered lists: - item or * item
 110 |     processed = processed.replace(/^[\s]*[-*]\s+(.+)$/gm, (match, content) => {
 111 |         return stripOnly ? content : `  ${COLORS.CYAN}•${COLORS.RESET} ${content}`;
 112 |     });
 113 |     
 114 |     // Ordered lists: 1. item
 115 |     processed = processed.replace(/^[\s]*\d+\.\s+(.+)$/gm, (match, content) => {
 116 |         return stripOnly ? content : `  ${COLORS.CYAN}›${COLORS.RESET} ${content}`;
 117 |     });
 118 |     
 119 |     // Links: [text](url) - process before blockquotes so links in quotes work
 120 |     processed = processed.replace(/\[([^\]]+)\]\(([^)]+)\)/g, (match, text, url) => {
 121 |         return stripOnly ? text : `${COLORS.CYAN}${text}${COLORS.RESET}`;
 122 |     });
 123 |     
 124 |     // Blockquotes: > quote
 125 |     processed = processed.replace(/^>\s+(.+)$/gm, (match, content) => {
 126 |         return stripOnly ? content : `${COLORS.DIM}│ ${content}${COLORS.RESET}`;
 127 |     });
 128 |     
 129 |     // Horizontal rules: --- or *** or ___
 130 |     processed = processed.replace(/^[-*_]{3,}$/gm, () => {
 131 |         return stripOnly ? '' : `${COLORS.DIM}${'─'.repeat(40)}${COLORS.RESET}`;
 132 |     });
 133 |     
 134 |     // Clean up any double resets or color artifacts
 135 |     processed = processed.replace(/(\x1b\[0m)+/g, COLORS.RESET);
 136 |     
 137 |     return processed;
 138 | }
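     | // Editor's example (not part of the original file) of the conversion above:
     | //   convertMarkdownToANSI('# Title\n**bold** and `code`')
     | //   → '\x1b[1m\x1b[36mTitle\x1b[0m\n\x1b[1mbold\x1b[0m and \x1b[90mcode\x1b[0m'
     | //   convertMarkdownToANSI('**bold**', { stripOnly: true })  → 'bold'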
 139 | 
 140 | /**
 141 |  * Wrap text to specified width while preserving words and indentation
 142 |  */
 143 | function wrapText(text, maxWidth = 80, indent = 0, treePrefix = '') {
 144 |     const indentStr = ' '.repeat(indent);
 145 |     const effectiveWidth = maxWidth - indent;
 146 | 
 147 |     // Strip ANSI codes for accurate width calculation
 148 |     const stripAnsi = (str) => str.replace(/\x1b\[[0-9;]*m/g, '');
 149 | 
 150 |     // Remove pre-existing newlines to consolidate text into single line
 151 |     // This prevents embedded newlines from breaking tree structure
 152 |     const normalizedText = text.replace(/\n/g, ' ').replace(/\s{2,}/g, ' ').trim();
 153 | 
 154 |     const textStripped = stripAnsi(normalizedText);
 155 |     if (textStripped.length <= effectiveWidth) {
 156 |         return [normalizedText];
 157 |     }
 158 | 
 159 |     const words = normalizedText.split(/\s+/); // Split on whitespace
 160 |     const lines = [];
 161 |     let currentLine = '';
 162 | 
 163 |     for (const word of words) {
 164 |         const testLine = currentLine ? currentLine + ' ' + word : word;
 165 |         const testLineStripped = stripAnsi(testLine);
 166 | 
 167 |         if (testLineStripped.length <= effectiveWidth) {
 168 |             currentLine = testLine;
 169 |         } else if (currentLine) {
 170 |             lines.push(currentLine);
 171 |             currentLine = word;
 172 |         } else {
 173 |             // Single word longer than line width, force break
 174 |             const effectiveWordWidth = stripAnsi(word).length;
 175 |             if (effectiveWordWidth > effectiveWidth) {
 176 |                 lines.push(word.substring(0, effectiveWidth));
 177 |                 currentLine = word.substring(effectiveWidth);
 178 |             } else {
 179 |                 currentLine = word;
 180 |             }
 181 |         }
 182 |     }
 183 | 
 184 |     if (currentLine) {
 185 |         lines.push(currentLine);
 186 |     }
 187 | 
 188 |     // Apply tree prefix to continuation lines (not just spaces)
 189 |     return lines.map((line, idx) => (idx === 0 ? line : treePrefix + indentStr + line));
 190 | }
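     | // Editor's example (not part of the original file): with maxWidth 12 and a
     | // tree prefix, continuation lines are re-prefixed so the tree stays aligned:
     | //   wrapText('alpha beta gamma delta', 12, 0, '>  ')
     | //   → ['alpha beta', '>  gamma delta']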
 191 | 
 192 | /**
 193 |  * Format memories for CLI environment with enhanced visual formatting
 194 |  */
 195 | function formatMemoriesForCLI(memories, projectContext, options = {}) {
 196 |     const {
 197 |         includeProjectSummary = true,
 198 |         maxMemories = 8,
 199 |         includeTimestamp = true,
 200 |         maxContentLengthCLI = 400,
 201 |         maxContentLengthCategorized = 350,
 202 |         storageInfo = null,
 203 |         adaptiveTruncation = true,
 204 |         contentLengthConfig = null
 205 |     } = options;
 206 | 
 207 |     if (!memories || memories.length === 0) {
 208 |         return `\n${COLORS.CYAN}╭────────────────────────────────────────────────────────────────────────────────╮${COLORS.RESET}\n${COLORS.CYAN}│${COLORS.RESET} 🧠 ${COLORS.BRIGHT}Memory Context${COLORS.RESET}                                                              ${COLORS.CYAN}│${COLORS.RESET}\n${COLORS.CYAN}╰────────────────────────────────────────────────────────────────────────────────╯${COLORS.RESET}\n${COLORS.CYAN}┌─${COLORS.RESET} ${COLORS.GRAY}No relevant memories found for this session.${COLORS.RESET}\n`;
 209 |     }
 210 | 
 211 |     // Determine adaptive content length based on memory count
 212 |     const estimatedMemoryCount = Math.min(memories.length, maxMemories);
 213 |     let adaptiveContentLength = maxContentLengthCLI;
 214 | 
 215 |     if (adaptiveTruncation && contentLengthConfig) {
 216 |         if (estimatedMemoryCount >= 5) {
 217 |             adaptiveContentLength = contentLengthConfig.manyMemories || 300;
 218 |         } else if (estimatedMemoryCount >= 3) {
 219 |             adaptiveContentLength = contentLengthConfig.fewMemories || 500;
 220 |         } else {
 221 |             adaptiveContentLength = contentLengthConfig.veryFewMemories || 800;
 222 |         }
 223 |     }
 224 | 
 225 |     // Filter out null/generic memories and limit number
 226 |     const validMemories = [];
 227 |     let memoryIndex = 0;
 228 | 
 229 |     for (const memory of memories) {
 230 |         if (validMemories.length >= maxMemories) break;
 231 | 
 232 |         const formatted = formatMemoryForCLI(memory, memoryIndex, {
 233 |             maxContentLength: adaptiveContentLength,
 234 |             includeDate: includeTimestamp
 235 |         });
 236 | 
 237 |         if (formatted) {
 238 |             validMemories.push({ memory, formatted });
 239 |             memoryIndex++;
 240 |         }
 241 |     }
 242 | 
 243 |     // Build unified tree structure (no separate decorative box)
 244 |     let contextMessage = '';
 245 | 
 246 |     // Add project summary in enhanced CLI format
 247 |     if (includeProjectSummary && projectContext) {
 248 |         const { name, frameworks, tools, branch, lastCommit } = projectContext;
 249 |         const projectInfo = [];
 250 |         if (name) projectInfo.push(name);
 251 |         if (frameworks?.length) projectInfo.push(frameworks.slice(0, 2).join(', '));
 252 |         if (tools?.length) projectInfo.push(tools.slice(0, 2).join(', '));
 253 | 
 254 |         contextMessage += `\n${COLORS.CYAN}┌─${COLORS.RESET} 🧠 ${COLORS.BRIGHT}Injected Memory Context${COLORS.RESET} ${COLORS.DIM}→${COLORS.RESET} ${COLORS.BLUE}${projectInfo.join(', ')}${COLORS.RESET}\n`;
 255 | 
 256 |         // Add storage information if available
 257 |         if (storageInfo) {
 258 |             const locationText = storageInfo.location.length > 40 ?
 259 |                 storageInfo.location.substring(0, 37) + '...' :
 260 |                 storageInfo.location;
 261 | 
 262 |             // Show rich storage info if health data is available
 263 |             if (storageInfo.health && storageInfo.health.totalMemories > 0) {
 264 |                 const memoryInfo = `${storageInfo.health.totalMemories} memories`;
 265 |                 contextMessage += `${COLORS.CYAN}│${COLORS.RESET}\n`;
 266 |                 contextMessage += `${COLORS.CYAN}├─${COLORS.RESET} ${storageInfo.icon} ${COLORS.BRIGHT}${storageInfo.description}${COLORS.RESET} ${COLORS.DIM}•${COLORS.RESET} ${COLORS.GRAY}${memoryInfo}${COLORS.RESET}\n`;
 267 |             } else {
 268 |                 contextMessage += `${COLORS.CYAN}│${COLORS.RESET}\n`;
 269 |                 contextMessage += `${COLORS.CYAN}├─${COLORS.RESET} ${storageInfo.icon} ${COLORS.BRIGHT}${storageInfo.description}${COLORS.RESET}\n`;
 270 |             }
 271 |             contextMessage += `${COLORS.CYAN}├─${COLORS.RESET} 📍 ${COLORS.GRAY}${locationText}${COLORS.RESET}\n`;
 272 |         }
 273 | 
 274 |         contextMessage += `${COLORS.CYAN}├─${COLORS.RESET} 📚 ${COLORS.BRIGHT}${validMemories.length} memories loaded${COLORS.RESET}\n`;
 275 | 
 276 |         if (branch || lastCommit) {
 277 |             const gitInfo = [];
 278 |             if (branch) gitInfo.push(`${COLORS.GREEN}${branch}${COLORS.RESET}`);
 279 |             if (lastCommit) gitInfo.push(`${COLORS.GRAY}${lastCommit.substring(0, 7)}${COLORS.RESET}`);
 280 |             contextMessage += `${COLORS.CYAN}├─${COLORS.RESET} ${gitInfo.join(' ')}\n`; // git branch + short commit
 281 |         }
 282 |     } else {
 283 |         contextMessage += `\n${COLORS.CYAN}┌─${COLORS.RESET} 🧠 ${COLORS.BRIGHT}Injected Memory Context${COLORS.RESET}\n`;
 284 |         contextMessage += `${COLORS.CYAN}├─${COLORS.RESET} 📚 ${COLORS.BRIGHT}${validMemories.length} memories loaded${COLORS.RESET}\n`;
 285 |     }
 286 | 
 287 |     contextMessage += `${COLORS.CYAN}│${COLORS.RESET}\n`;
 288 | 
 289 |     if (validMemories.length > 3) {
 290 |         // Group by category with enhanced formatting
 291 |         const categories = groupMemoriesByCategory(validMemories.map(v => v.memory));
 292 | 
 293 |         const categoryInfo = {
 294 |             'recent-work': { title: 'Recent Work', icon: '🔥', color: COLORS.GREEN },
 295 |             'current-problems': { title: 'Current Problems', icon: '⚠️', color: COLORS.YELLOW },
 296 |             'key-decisions': { title: 'Key Decisions', icon: '🎯', color: COLORS.CYAN },
 297 |             'additional-context': { title: 'Additional Context', icon: '📋', color: COLORS.GRAY }
 298 |         };
 299 | 
 300 |         let hasContent = false;
 301 |         let categoryCount = 0;
 302 |         const totalCategories = Object.values(categories).filter(cat => cat.length > 0).length;
 303 | 
 304 |         Object.entries(categories).forEach(([category, categoryMemories]) => {
 305 |             if (categoryMemories.length > 0) {
 306 |                 categoryCount++;
 307 |                 const isLast = categoryCount === totalCategories;
 308 |                 const categoryIcon = categoryInfo[category]?.icon || '📝';
 309 |                 const categoryTitle = categoryInfo[category]?.title || 'Context';
 310 |                 const categoryColor = categoryInfo[category]?.color || COLORS.GRAY;
 311 | 
 312 |                 contextMessage += `${COLORS.CYAN}${isLast ? '└─' : '├─'}${COLORS.RESET} ${categoryIcon} ${categoryColor}${COLORS.BRIGHT}${categoryTitle}${COLORS.RESET}:\n`;
 313 |                 hasContent = true;
 314 | 
 315 |                 categoryMemories.forEach((memory, idx) => {
 316 |                     const formatted = formatMemoryForCLI(memory, 0, {
 317 |                         maxContentLength: maxContentLengthCategorized,
 318 |                         includeDate: includeTimestamp,
 319 |                         indent: true
 320 |                     });
 321 |                     if (formatted) {
 322 |                         const isLastMemory = idx === categoryMemories.length - 1;
 323 |                         const connector = isLast ? '   ' : `${COLORS.CYAN}│${COLORS.RESET}  `;
 324 |                         const prefix = isLastMemory
 325 |                             ? `${connector}${COLORS.CYAN}└─${COLORS.RESET} `
 326 |                             : `${connector}${COLORS.CYAN}├─${COLORS.RESET} `;
 327 | 
 328 |                         // Calculate tree prefix for continuation lines
 329 |                         let treePrefix;
 330 |                         if (isLastMemory) {
 331 |                             // Last memory in category - no vertical line after └─
 332 |                             treePrefix = isLast ? '   ' : connector;
 333 |                         } else {
 334 |                             // Not last memory - maintain vertical tree structure
 335 |                             treePrefix = isLast
 336 |                                 ? `   ${COLORS.CYAN}│${COLORS.RESET}  `
 337 |                                 : `${COLORS.CYAN}│${COLORS.RESET}  ${COLORS.CYAN}│${COLORS.RESET}  `;
 338 |                         }
 339 | 
 340 |                         // Wrap long content lines with tree prefix for continuation
 341 |                         const lines = wrapText(formatted, 70, 6, treePrefix);
 342 | 
 343 |                         // Output all lines (first line with prefix, continuation lines already have tree chars)
 344 |                         lines.forEach((line, lineIdx) => {
 345 |                             if (lineIdx === 0) {
 346 |                                 contextMessage += `${prefix}${line}\n`;
 347 |                             } else {
 348 |                                 contextMessage += `${line}\n`;
 349 |                             }
 350 |                         });
 351 |                     }
 352 |                 });
 353 |                 if (!isLast) contextMessage += `${COLORS.CYAN}│${COLORS.RESET}\n`;
 354 |             }
 355 |         });
 356 | 
 357 |         if (!hasContent) {
 358 |             // Fallback to linear format
 359 |             validMemories.forEach(({ formatted }, idx) => {
 360 |                 const isLast = idx === validMemories.length - 1;
 361 |                 const connector = isLast ? '   ' : `${COLORS.CYAN}│${COLORS.RESET}  `;
 362 |                 const lines = wrapText(formatted, 76, 3, connector);
 363 | 
 364 |                 // Output all lines (first with tree char, continuation with connector prefix)
 365 |                 lines.forEach((line, lineIdx) => {
 366 |                     if (lineIdx === 0) {
 367 |                         contextMessage += `${COLORS.CYAN}${isLast ? '└─' : '├─'}${COLORS.RESET} ${line}\n`;
 368 |                     } else {
 369 |                         contextMessage += `${line}\n`;
 370 |                     }
 371 |                 });
 372 |             });
 373 |         }
 374 |     } else {
 375 |         // Simple linear formatting with enhanced visual elements
 376 |         validMemories.forEach(({ formatted }, idx) => {
 377 |             const isLast = idx === validMemories.length - 1;
 378 |             const connector = isLast ? '   ' : `${COLORS.CYAN}│${COLORS.RESET}  `;
 379 |             const lines = wrapText(formatted, 76, 3, connector);
 380 | 
 381 |             // Output all lines (first with tree char, continuation with connector prefix)
 382 |             lines.forEach((line, lineIdx) => {
 383 |                 if (lineIdx === 0) {
 384 |                     contextMessage += `${COLORS.CYAN}${isLast ? '└─' : '├─'}${COLORS.RESET} ${line}\n`;
 385 |                 } else {
 386 |                     contextMessage += `${line}\n`;
 387 |                 }
 388 |             });
 389 |         });
 390 |     }
 391 | 
 392 |     // Tree structure ends naturally with └─, no need for separate closing frame
 393 |     return contextMessage;
 394 | }
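     | // Hypothetical usage (editor's sketch; field names follow this file's accessors):
     | //   formatMemoriesForCLI(
     | //       [{ content: 'Fixed login bug', created_at_iso: new Date().toISOString(), tags: ['bug'] }],
     | //       { name: 'my-app', frameworks: ['fastapi'], tools: ['uv'] }
     | //   )
     | //   → a "┌─ 🧠 Injected Memory Context → my-app, fastapi, uv" header,
     | //     "├─ 📚 1 memories loaded", then "└─ Fixed login bug 🕒 today" (ANSI-colored).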
 395 | 
 396 | /**
 397 |  * Wrap text to fit within specified width while maintaining tree structure
 398 |  */
 399 | function wrapTextForTree(text, maxWidth = 80, indentPrefix = '   ') {
 400 |     if (!text) return [];
 401 | 
 402 |     // Remove ANSI codes for width calculation
 403 |     const stripAnsi = (str) => str.replace(/\x1b\[[0-9;]*m/g, '');
 404 | 
 405 |     const lines = [];
 406 |     const words = text.split(/\s+/);
 407 |     let currentLine = '';
 408 | 
 409 |     for (const word of words) {
 410 |         const testLine = currentLine ? `${currentLine} ${word}` : word;
 411 |         const testLineStripped = stripAnsi(testLine);
 412 | 
 413 |         if (testLineStripped.length <= maxWidth) {
 414 |             currentLine = testLine;
 415 |         } else {
 416 |             if (currentLine) {
 417 |                 lines.push(currentLine);
 418 |             }
 419 |             currentLine = word;
 420 |         }
 421 |     }
 422 | 
 423 |     if (currentLine) {
 424 |         lines.push(currentLine);
 425 |     }
 426 | 
 427 |     return lines.length > 0 ? lines : [text];
 428 | }
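     | // Editor's note: the indentPrefix parameter is currently unused by the body above.
     | // Example (not part of the original file):
     | //   wrapTextForTree('one two three', 7)  → ['one two', 'three']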
 429 | 
 430 | /**
 431 |  * Format individual memory for CLI with color coding and proper line wrapping
 432 |  */
 433 | function formatMemoryForCLI(memory, index, options = {}) {
 434 |     try {
 435 |         const {
 436 |             maxContentLength = 400,
 437 |             includeDate = true,
 438 |             indent = false,
 439 |             maxLineWidth = 70
 440 |         } = options;
 441 | 
 442 |         // Extract meaningful content with markdown conversion enabled for CLI
 443 |         const content = extractMeaningfulContent(
 444 |             memory.content || 'No content available',
 445 |             maxContentLength,
 446 |             { convertMarkdown: true, stripMarkdown: false }
 447 |         );
 448 | 
 449 |         // Skip generic summaries
 450 |         if (isGenericSessionSummary(memory.content)) {
 451 |             return null;
 452 |         }
 453 | 
 454 |         // Format date with standardized recency indicators
 455 |         let dateStr = '';
 456 |         if (includeDate && memory.created_at_iso) {
 457 |             const date = new Date(memory.created_at_iso);
 458 |             const now = new Date();
 459 |             const daysDiff = (now - date) / (1000 * 60 * 60 * 24);
 460 | 
 461 |             if (daysDiff < 1) {
 462 |                 dateStr = ` ${COLORS.GREEN}🕒 today${COLORS.RESET}`;
 463 |             } else if (daysDiff < 2) {
 464 |                 dateStr = ` ${COLORS.CYAN}📅 yesterday${COLORS.RESET}`;
 465 |             } else if (daysDiff <= 7) {
 466 |                 const daysAgo = Math.floor(daysDiff);
 467 |                 dateStr = ` ${COLORS.CYAN}📅 ${daysAgo}d ago${COLORS.RESET}`;
 468 |             } else if (daysDiff <= 30) {
 469 |                 const formattedDate = date.toLocaleDateString('en-US', { month: 'short', day: 'numeric' });
 470 |                 dateStr = ` ${COLORS.CYAN}📅 ${formattedDate}${COLORS.RESET}`;
 471 |             } else {
 472 |                 const formattedDate = date.toLocaleDateString('en-US', { month: 'short', day: 'numeric' });
 473 |                 dateStr = ` ${COLORS.GRAY}📅 ${formattedDate}${COLORS.RESET}`;
 474 |             }
 475 |         }
 476 | 
 477 |         // Determine content color based on memory type and recency
 478 |         let contentColor = '';
 479 |         let contentReset = COLORS.RESET;
 480 | 
 481 |         // Keep recent memories prominent: they skip the type-based dimming below
 482 |         let isRecent = false;
 483 |         if (memory.created_at_iso) {
 484 |             const daysDiff = (new Date() - new Date(memory.created_at_iso)) / (1000 * 60 * 60 * 24);
 485 |             // Recent memory (under 7 days) - no special coloring, keep it prominent
 486 |             isRecent = daysDiff < 7;
 487 |         }
 488 | 
 489 |         // Apply type-based coloring only for non-recent memories
 490 |         // (recent memories keep their default, undimmed color)
 491 |         if (!isRecent) {
 492 |             if (memory.memory_type === 'decision' || (memory.tags && memory.tags.some(tag => tag.includes('decision')))) {
 493 |                 contentColor = COLORS.DIM; // Subtle for decisions
 494 |             } else if (memory.memory_type === 'insight') {
 495 |                 contentColor = COLORS.DIM;
 496 |             } else if (memory.memory_type === 'bug-fix') {
 497 |                 contentColor = COLORS.DIM;
 498 |             } else if (memory.memory_type === 'feature') {
 499 |                 contentColor = COLORS.DIM;
 500 |             }
 501 |         }
 502 | 
 503 |         return `${contentColor}${content}${contentReset}${dateStr}`;
 504 |     } catch (error) {
 505 |         return `${COLORS.GRAY}[Error formatting memory: ${error.message}]${COLORS.RESET}`;
 506 |     }
 507 | }
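     | // Editor's example (not part of the original file): a memory created today
     | // renders as its content followed by a green recency marker, e.g.
     | //   formatMemoryForCLI({ content: 'Fixed auth bug', created_at_iso: new Date().toISOString() }, 0)
     | //   → 'Fixed auth bug' + ' 🕒 today' (wrapped in green ANSI codes)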
 508 | 
 509 | /**
 510 |  * Extract meaningful content from session summaries and structured memories
 511 |  */
 512 | function extractMeaningfulContent(content, maxLength = 500, options = {}) {
 513 |     if (!content || typeof content !== 'string') {
 514 |         return 'No content available';
 515 |     }
 516 | 
 517 |     const {
 518 |         convertMarkdown = isCLIEnvironment(),  // Auto-convert in CLI mode
 519 |         stripMarkdown = false  // Just strip without ANSI colors
 520 |     } = options;
 521 | 
 522 |     // Sanitize content - remove embedded formatting characters that conflict with tree structure
 523 |     let sanitizedContent = content
 524 |         // Remove checkmarks and bullets
 525 |         .replace(/[✅✓✔]/g, '')
 526 |         .replace(/^[\s]*[•▪▫]\s*/gm, '')
 527 |         // Remove list markers at start of lines
 528 |         .replace(/^[\s]*[-*]\s*/gm, '')
 529 |         // Remove embedded Date: lines from old session summaries
 530 |         .replace(/\*\*Date\*\*:.*?\n/gi, '')
 531 |         .replace(/^Date:\s*\n\s*\d{1,2}\.\d{1,2}\.(\d{2,4})?\s*/gim, '')  // Multi-line: "Date:\n  9.11.2025"
 532 |         .replace(/^Date:.*?\n/gim, '')  // Single-line: "Date: 9.11.2025"
 533 |         .replace(/^\d{1,2}\.\d{1,2}\.(\d{2,4})?\s*$/gim, '')  // Standalone date lines
 534 |         // Clean up multiple spaces
 535 |         .replace(/[ \t]{2,}/g, ' ')  // collapse space/tab runs but keep newlines for the section parser below
 536 |         // Remove markdown bold/italic
 537 |         .replace(/\*\*([^*]+)\*\*/g, '$1')
 538 |         .replace(/\*([^*]+)\*/g, '$1')
 539 |         .replace(/__([^_]+)__/g, '$1')
 540 |         .replace(/_([^_]+)_/g, '$1')
 541 |         .trim();
 542 | 
 543 |     // Check if this is a session summary with structured sections
 544 |     if (sanitizedContent.includes('# Session Summary') || sanitizedContent.includes('## 🎯') || sanitizedContent.includes('## 🏛️') || sanitizedContent.includes('## 💡')) {
 545 |         const sections = {
 546 |             decisions: [],
 547 |             insights: [],
 548 |             codeChanges: [],
 549 |             nextSteps: [],
 550 |             topics: []
 551 |         };
 552 | 
 553 |         // Extract structured sections
 554 |         const lines = sanitizedContent.split('\n');
 555 |         let currentSection = null;
 556 |         
 557 |         for (const line of lines) {
 558 |             const trimmed = line.trim();
 559 |             
 560 |             if (trimmed.includes('🏛️') && trimmed.includes('Decision')) {
 561 |                 currentSection = 'decisions';
 562 |                 continue;
 563 |             } else if (trimmed.includes('💡') && (trimmed.includes('Insight') || trimmed.includes('Key'))) {
 564 |                 currentSection = 'insights';
 565 |                 continue;
 566 |             } else if (trimmed.includes('💻') && trimmed.includes('Code')) {
 567 |                 currentSection = 'codeChanges';
 568 |                 continue;
 569 |             } else if (trimmed.includes('📋') && trimmed.includes('Next')) {
 570 |                 currentSection = 'nextSteps';
 571 |                 continue;
 572 |             } else if (trimmed.includes('🎯') && trimmed.includes('Topic')) {
 573 |                 currentSection = 'topics';
 574 |                 continue;
 575 |             } else if (trimmed.startsWith('##') || trimmed.startsWith('#')) {
 576 |                 currentSection = null; // Reset on new major section
 577 |                 continue;
 578 |             }
 579 |             
 580 |             // Collect items under current section (leading '-' markers were stripped during sanitization)
 581 |             if (currentSection && trimmed.length > 2) {
 582 |                 const item = trimmed;
 583 |                 if (item.length > 5 && item !== 'implementation' && item !== '...') {
 584 |                     sections[currentSection].push(item);
 585 |                 }
 586 |             }
 587 |         }
 588 |         
 589 |         // Build meaningful summary from extracted sections
 590 |         const meaningfulParts = [];
 591 |         
 592 |         if (sections.decisions.length > 0) {
 593 |             meaningfulParts.push(`Decisions: ${sections.decisions.slice(0, 2).join('; ')}`);
 594 |         }
 595 |         if (sections.insights.length > 0) {
 596 |             meaningfulParts.push(`Insights: ${sections.insights.slice(0, 2).join('; ')}`);
 597 |         }
 598 |         if (sections.codeChanges.length > 0) {
 599 |             meaningfulParts.push(`Changes: ${sections.codeChanges.slice(0, 2).join('; ')}`);
 600 |         }
 601 |         if (sections.nextSteps.length > 0) {
 602 |             meaningfulParts.push(`Next: ${sections.nextSteps.slice(0, 2).join('; ')}`);
 603 |         }
 604 |         
 605 |         if (meaningfulParts.length > 0) {
 606 |             let extracted = meaningfulParts.join(' | ');
 607 | 
 608 |             // Re-sanitize to remove any Date: patterns that survived section extraction
 609 |             extracted = extracted
 610 |                 .replace(/Date:\s*\d{1,2}\.\d{1,2}\.(\d{2,4})?/gi, '')  // Remove "Date: 9.11.2025"
 611 |                 .replace(/\d{1,2}\.\d{1,2}\.(\d{2,4})?/g, '')  // Remove standalone dates
 612 |                 .replace(/\s{2,}/g, ' ')  // Clean up multiple spaces
 613 |                 .trim();
 614 | 
 615 |             const truncated = extracted.length > maxLength ? extracted.substring(0, maxLength - 3) + '...' : extracted;
 616 | 
 617 |             // Apply markdown conversion if requested
 618 |             if (convertMarkdown) {
 619 |                 return convertMarkdownToANSI(truncated, { stripOnly: stripMarkdown });
 620 |             }
 621 |             return truncated;
 622 |         }
 623 |     }
 624 |     
 625 |     // For non-structured content, use sanitized version
 626 |     let processedContent = sanitizedContent;
 627 |     if (convertMarkdown) {
 628 |         processedContent = convertMarkdownToANSI(sanitizedContent, { stripOnly: stripMarkdown });
 629 |     }
 630 | 
 631 |     // Smart first-sentence extraction for very short limits
 632 |     if (maxLength < 400) {
 633 |         // Try to get just the first 1-2 sentences
 634 |         const sentenceMatch = processedContent.match(/^[^.!?]+[.!?]\s*[^.!?]+[.!?]?/);
 635 |         if (sentenceMatch && sentenceMatch[0].length <= maxLength) {
 636 |             return sentenceMatch[0].trim();
 637 |         }
 638 |         // Try just first sentence
 639 |         const firstSentence = processedContent.match(/^[^.!?]+[.!?]/);
 640 |         if (firstSentence && firstSentence[0].length <= maxLength) {
 641 |             return firstSentence[0].trim();
 642 |         }
 643 |     }
 644 | 
 645 |     // Then use smart truncation
 646 |     if (processedContent.length <= maxLength) {
 647 |         return processedContent;
 648 |     }
 649 | 
 650 |     // Try to find a good breaking point (sentence, paragraph, or code block)
 651 |     const breakPoints = ['. ', '\n\n', '\n', '; '];
 652 | 
 653 |     for (const breakPoint of breakPoints) {
 654 |         const lastBreak = processedContent.lastIndexOf(breakPoint, maxLength - 3);
 655 |         if (lastBreak > maxLength * 0.7) { // Only use if we keep at least 70% of desired length
 656 |             return processedContent.substring(0, lastBreak + (breakPoint === '. ' ? 1 : 0)).trim();
 657 |         }
 658 |     }
 659 | 
 660 |     // Fallback to hard truncation
 661 |     return processedContent.substring(0, maxLength - 3).trim() + '...';
 662 | }
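     | // Editor's example (not part of the original file): short limits fall back to
     | // first-sentence extraction rather than hard truncation:
     | //   extractMeaningfulContent('Done - fixed bug. More details follow here.', 30,
     | //                            { convertMarkdown: false })
     | //   → 'Done - fixed bug.'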
 663 | 
 664 | /**
 665 |  * Check if memory content appears to be a generic/empty session summary
 666 |  */
 667 | function isGenericSessionSummary(content) {
 668 |     if (!content || typeof content !== 'string') {
 669 |         return true;
 670 |     }
 671 |     
 672 |     // Check for generic patterns
 673 |     const genericPatterns = [
 674 |         /## 🎯 Topics Discussed\s*-\s*implementation\s*-\s*\.\.\.?$/m,
 675 |         /Topics Discussed.*implementation.*\.\.\..*$/s,
 676 |         /Session Summary.*implementation.*\.\.\..*$/s
 677 |     ];
 678 |     
 679 |     return genericPatterns.some(pattern => pattern.test(content));
 680 | }
 681 | 
 682 | /**
 683 |  * Format a single memory for context display
 684 |  */
 685 | function formatMemory(memory, index = 0, options = {}) {
 686 |     try {
 687 |         const {
 688 |             includeScore = false,
 689 |             includeMetadata = false,
 690 |             maxContentLength = 500,
 691 |             includeDate = true,
 692 |             showOnlyRelevantTags = true
 693 |         } = options;
 694 |         
 695 |         // Extract meaningful content using smart parsing
 696 |         // For non-CLI, strip markdown without adding ANSI colors
 697 |         const content = extractMeaningfulContent(
 698 |             memory.content || 'No content available', 
 699 |             maxContentLength,
 700 |             { convertMarkdown: true, stripMarkdown: true }
 701 |         );
 702 |         
 703 |         // Skip generic/empty session summaries
 704 |         if (isGenericSessionSummary(memory.content) && !includeScore) {
 705 |             return null; // Signal to skip this memory
 706 |         }
 707 |         
 708 |         // Format date more concisely
 709 |         let dateStr = '';
 710 |         if (includeDate && memory.created_at_iso) {
 711 |             const date = new Date(memory.created_at_iso);
 712 |             dateStr = ` (${date.toLocaleDateString('en-US', { month: 'short', day: 'numeric' })})`;
 713 |         }
 714 |         
 715 |         // Build formatted memory
 716 |         let formatted = `${index + 1}. ${content}${dateStr}`;
 717 |         
 718 |         // Add only the most relevant tags
 719 |         if (showOnlyRelevantTags && memory.tags && memory.tags.length > 0) {
 720 |             const relevantTags = memory.tags.filter(tag => {
 721 |                 const tagLower = tag.toLowerCase();
 722 |                 return !tagLower.startsWith('source:') && 
 723 |                        !tagLower.startsWith('claude-code-session') &&
 724 |                        !tagLower.startsWith('session-consolidation') &&
 725 |                        tagLower !== 'claude-code' &&
 726 |                        tagLower !== 'auto-generated' &&
 727 |                        tagLower !== 'implementation' &&
 728 |                        tagLower.length > 2;
 729 |             });
 730 |             
  731 |             // Show up to 3 tags; skip entirely if more than 5 qualify, since that usually adds noise
  732 |             if (relevantTags.length > 0 && relevantTags.length <= 5) {
 733 |                 formatted += `\n   Tags: ${relevantTags.slice(0, 3).join(', ')}`;
 734 |             }
 735 |         }
 736 |         
 737 |         return formatted;
 738 |         
 739 |     } catch (error) {
  740 |         // Return an inline error marker instead of logging, to avoid console noise
 741 |         return `${index + 1}. [Error formatting memory: ${error.message}]`;
 742 |     }
 743 | }
 744 | 
 745 | /**
 746 |  * Deduplicate memories based on content similarity
 747 |  */
 748 | function deduplicateMemories(memories, options = {}) {
 749 |     if (!Array.isArray(memories) || memories.length <= 1) {
 750 |         return memories;
 751 |     }
 752 |     
 753 |     const deduplicated = [];
 754 |     const seenContent = new Set();
 755 |     
  756 |     // Sort a copy by relevance score (highest first) and recency, without mutating the caller's array
  757 |     const sorted = [...memories].sort((a, b) => {
 758 |         const scoreA = a.relevanceScore || 0;
 759 |         const scoreB = b.relevanceScore || 0;
 760 |         if (scoreA !== scoreB) return scoreB - scoreA;
 761 |         
 762 |         // If scores are equal, prefer more recent
 763 |         const dateA = new Date(a.created_at_iso || 0);
 764 |         const dateB = new Date(b.created_at_iso || 0);
 765 |         return dateB - dateA;
 766 |     });
 767 |     
 768 |     for (const memory of sorted) {
 769 |         const content = memory.content || '';
 770 |         
 771 |         // Create a normalized version for comparison
 772 |         let normalized = content.toLowerCase()
 773 |             .replace(/# session summary.*?\n/gi, '') // Remove session headers
 774 |             .replace(/\*\*date\*\*:.*?\n/gi, '')    // Remove date lines
 775 |             .replace(/\*\*project\*\*:.*?\n/gi, '') // Remove project lines
 776 |             .replace(/\s+/g, ' ')                   // Normalize whitespace
 777 |             .trim();
 778 |         
 779 |         // Skip if content is too generic or already seen
 780 |         if (normalized.length < 20 || isGenericSessionSummary(content)) {
 781 |             continue;
 782 |         }
 783 |         
 784 |         // Check for substantial similarity
 785 |         let isDuplicate = false;
 786 |         for (const seenNormalized of seenContent) {
 787 |             const similarity = calculateContentSimilarity(normalized, seenNormalized);
 788 |             if (similarity > 0.8) { // 80% similarity threshold
 789 |                 isDuplicate = true;
 790 |                 break;
 791 |             }
 792 |         }
 793 |         
 794 |         if (!isDuplicate) {
 795 |             seenContent.add(normalized);
 796 |             deduplicated.push(memory);
 797 |         }
 798 |     }
 799 |     
  800 |     // Log unless verbose is explicitly disabled via options
 801 |     if (options?.verbose !== false && memories.length !== deduplicated.length) {
 802 |         console.log(`[Context Formatter] Deduplicated ${memories.length} → ${deduplicated.length} memories`);
 803 |     }
 804 |     return deduplicated;
 805 | }
 806 | 
 807 | /**
 808 |  * Calculate content similarity between two normalized strings
 809 |  */
 810 | function calculateContentSimilarity(str1, str2) {
 811 |     if (!str1 || !str2) return 0;
 812 |     if (str1 === str2) return 1;
 813 |     
 814 |     // Use simple word overlap similarity
 815 |     const words1 = new Set(str1.split(/\s+/).filter(w => w.length > 3));
 816 |     const words2 = new Set(str2.split(/\s+/).filter(w => w.length > 3));
 817 |     
 818 |     if (words1.size === 0 && words2.size === 0) return 1;
 819 |     if (words1.size === 0 || words2.size === 0) return 0;
 820 |     
 821 |     const intersection = new Set([...words1].filter(w => words2.has(w)));
 822 |     const union = new Set([...words1, ...words2]);
 823 |     
 824 |     return intersection.size / union.size;
 825 | }
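// Worked example: "fixed memory sync bug today" vs. "fixed memory sync issue today"
// → words > 3 chars: {fixed, memory, sync, today} vs. {fixed, memory, sync, issue, today}
// → intersection 4 / union 5 = 0.8, which does not exceed the strict > 0.8 threshold
//   in deduplicateMemories(), so both memories would be kept.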
 826 | 
 827 | /**
 828 |  * Group memories by category for better organization
 829 |  */
 830 | function groupMemoriesByCategory(memories, options = {}) {
 831 |     try {
 832 |         // First deduplicate to remove redundant content
 833 |         const deduplicated = deduplicateMemories(memories, options);
 834 | 
 835 |         const categories = {
 836 |             'recent-work': [],
 837 |             'current-problems': [],
 838 |             'key-decisions': [],
 839 |             'additional-context': []
 840 |         };
 841 | 
 842 |         const now = new Date();
 843 | 
 844 |         deduplicated.forEach(memory => {
 845 |             const type = memory.memory_type?.toLowerCase() || 'other';
 846 |             const tags = memory.tags || [];
 847 |             const content = memory.content?.toLowerCase() || '';
 848 | 
 849 |             // Check if memory is recent (within last week)
 850 |             let isRecent = false;
 851 |             if (memory.created_at_iso) {
 852 |                 const memDate = new Date(memory.created_at_iso);
 853 |                 const daysDiff = (now - memDate) / (1000 * 60 * 60 * 24);
 854 |                 isRecent = daysDiff <= 7;
 855 |             }
 856 | 
 857 |             // Detect current problems (issues, bugs, blockers, TODOs)
 858 |             // Exclude session summaries which may mention fixes but aren't problems themselves
 859 |             const isSessionType = type === 'session' || type === 'session-summary' ||
 860 |                 tags.some(tag => tag.toLowerCase() === 'session-summary');
 861 |             const isProblem = !isSessionType && (
 862 |                 type === 'issue' || type === 'bug' || type === 'bug-fix' ||
 863 |                 tags.some(tag => ['issue', 'bug', 'blocked', 'todo', 'problem', 'blocker'].includes(tag.toLowerCase())) ||
 864 |                 content.includes('issue #') || content.includes('bug:') || content.includes('blocked')
 865 |             );
 866 | 
 867 |             // Detect key decisions (architecture, design, technical choices)
 868 |             const isKeyDecision =
 869 |                 type === 'decision' || type === 'architecture' ||
 870 |                 tags.some(tag => ['decision', 'architecture', 'design', 'key-decisions', 'why'].includes(tag.toLowerCase())) ||
 871 |                 content.includes('decided to') || content.includes('architecture:');
 872 | 
 873 |             // Categorize with priority: recent-work > current-problems > key-decisions > additional-context
 874 |             if (isRecent && memory._gitContextType) {
 875 |                 // Git context memories from recent development
 876 |                 categories['recent-work'].push(memory);
 877 |             } else if (isProblem) {
 878 |                 categories['current-problems'].push(memory);
 879 |             } else if (isRecent) {
 880 |                 categories['recent-work'].push(memory);
 881 |             } else if (isKeyDecision) {
 882 |                 categories['key-decisions'].push(memory);
 883 |             } else {
 884 |                 categories['additional-context'].push(memory);
 885 |             }
 886 |         });
 887 | 
 888 |         // Sort each category by creation date (newest first)
 889 |         Object.keys(categories).forEach(category => {
 890 |             categories[category].sort((a, b) => {
 891 |                 const dateA = a.created_at_iso ? new Date(a.created_at_iso) : new Date(0);
 892 |                 const dateB = b.created_at_iso ? new Date(b.created_at_iso) : new Date(0);
 893 |                 return dateB - dateA; // Newest first
 894 |             });
 895 |         });
 896 | 
 897 |         return categories;
 898 | 
 899 |     } catch (error) {
 900 |         if (options?.verbose !== false) {
 901 |             console.warn('[Context Formatter] Error grouping memories:', error.message);
 902 |         }
 903 |         return { 'additional-context': memories };
 904 |     }
 905 | }
 906 | 
 907 | /**
 908 |  * Create a context summary from project information
 909 |  */
 910 | function createProjectSummary(projectContext) {
 911 |     try {
 912 |         let summary = `**Project**: ${projectContext.name}`;
 913 |         
 914 |         if (projectContext.language && projectContext.language !== 'Unknown') {
 915 |             summary += ` (${projectContext.language})`;
 916 |         }
 917 |         
 918 |         if (projectContext.frameworks && projectContext.frameworks.length > 0) {
 919 |             summary += `\n**Frameworks**: ${projectContext.frameworks.join(', ')}`;
 920 |         }
 921 |         
 922 |         if (projectContext.tools && projectContext.tools.length > 0) {
 923 |             summary += `\n**Tools**: ${projectContext.tools.join(', ')}`;
 924 |         }
 925 |         
 926 |         if (projectContext.git && projectContext.git.isRepo) {
 927 |             summary += `\n**Branch**: ${projectContext.git.branch || 'unknown'}`;
 928 |             
 929 |             if (projectContext.git.lastCommit) {
 930 |                 summary += `\n**Last Commit**: ${projectContext.git.lastCommit}`;
 931 |             }
 932 |         }
 933 |         
 934 |         return summary;
 935 |         
 936 |     } catch (error) {
 937 |         // Silently fail with fallback summary
 938 |         return `**Project**: ${projectContext.name || 'Unknown Project'}`;
 939 |     }
 940 | }
 941 | 
 942 | /**
 943 |  * Format memories for Claude Code context injection
 944 |  */
 945 | function formatMemoriesForContext(memories, projectContext, options = {}) {
 946 |     try {
 947 |         // Use CLI formatting if in CLI environment
 948 |         if (isCLIEnvironment()) {
 949 |             return formatMemoriesForCLI(memories, projectContext, options);
 950 |         }
 951 |         
 952 |         const {
 953 |             includeProjectSummary = true,
 954 |             includeScore = false,
 955 |             groupByCategory = true,
 956 |             maxMemories = 8,
 957 |             includeTimestamp = true,
 958 |             maxContentLength = 500,
 959 |             storageInfo = null
 960 |         } = options;
 961 |         
 962 |         if (!memories || memories.length === 0) {
 963 |             return `## 📋 Memory Context\n\nNo relevant memories found for this session.\n`;
 964 |         }
 965 |         
 966 |         // Filter out null/generic memories and limit number
 967 |         const validMemories = [];
 968 |         let memoryIndex = 0;
 969 |         
 970 |         for (const memory of memories) {
 971 |             if (validMemories.length >= maxMemories) break;
 972 |             
 973 |             const formatted = formatMemory(memory, memoryIndex, {
 974 |                 includeScore,
 975 |                 maxContentLength: maxContentLength,
 976 |                 includeDate: includeTimestamp,
 977 |                 showOnlyRelevantTags: true
 978 |             });
 979 |             
 980 |             if (formatted) { // formatMemory returns null for generic summaries
 981 |                 validMemories.push({ memory, formatted });
 982 |                 memoryIndex++;
 983 |             }
 984 |         }
 985 |         
 986 |         if (validMemories.length === 0) {
 987 |             return `## 📋 Memory Context\n\nNo meaningful memories found for this session (filtered out generic content).\n`;
 988 |         }
 989 |         
 990 |         // Start building context message
 991 |         let contextMessage = '## 🧠 Memory Context Loaded\n\n';
 992 |         
 993 |         // Add project summary
 994 |         if (includeProjectSummary && projectContext) {
 995 |             contextMessage += createProjectSummary(projectContext) + '\n\n';
 996 |         }
 997 |         
 998 |         // Add storage information
 999 |         if (storageInfo) {
1000 |             contextMessage += `**Storage**: ${storageInfo.description}`;
1001 |             
1002 |             // Add health information if available
1003 |             if (storageInfo.health && storageInfo.health.totalMemories > 0) {
1004 |                 const memoryCount = storageInfo.health.totalMemories;
1005 |                 const dbSize = storageInfo.health.databaseSizeMB;
1006 |                 const uniqueTags = storageInfo.health.uniqueTags;
1007 |                 
1008 |                 contextMessage += ` - ${memoryCount} memories`;
1009 |                 if (dbSize > 0) contextMessage += `, ${dbSize}MB`;
1010 |                 if (uniqueTags > 0) contextMessage += `, ${uniqueTags} unique tags`;
1011 |             }
1012 |             contextMessage += '\n';
1013 |             
1014 |             if (storageInfo.location && !storageInfo.location.includes('Configuration Error') && !storageInfo.location.includes('Health parse error')) {
1015 |                 contextMessage += `**Location**: \`${storageInfo.location}\`\n`;
1016 |             }
1017 |             
1018 |             if (storageInfo.health && storageInfo.health.embeddingModel && storageInfo.health.embeddingModel !== 'Unknown') {
1019 |                 contextMessage += `**Embedding Model**: ${storageInfo.health.embeddingModel}\n`;
1020 |             }
1021 |             
1022 |             contextMessage += '\n';
1023 |         }
1024 |         
1025 |         contextMessage += `**Loaded ${validMemories.length} relevant memories from your project history:**\n\n`;
1026 |         
1027 |         if (groupByCategory && validMemories.length > 3) {
1028 |             // Group and format by category only if we have enough content
1029 |             const categories = groupMemoriesByCategory(validMemories.map(v => v.memory));
1030 |             
 1031 |             // Keys must match the category names produced by
 1032 |             // groupMemoriesByCategory() above ('recent-work',
 1033 |             // 'current-problems', 'key-decisions', 'additional-context');
 1034 |             // any mismatch would render "undefined" headings.
 1035 |             const categoryTitles = {
 1036 |                 'recent-work': '### 🕒 Recent Work (Last Week)',
 1037 |                 'current-problems': '### 🐛 Current Problems & Issues',
 1038 |                 'key-decisions': '### 🎯 Key Decisions',
 1039 |                 'additional-context': '### 📝 Additional Context'
 1040 |             };
1041 |             
1042 |             let hasContent = false;
1043 |             Object.entries(categories).forEach(([category, categoryMemories]) => {
1044 |                 if (categoryMemories.length > 0) {
1045 |                     contextMessage += `${categoryTitles[category]}\n`;
1046 |                     hasContent = true;
1047 |                     
1048 |                     categoryMemories.forEach((memory, index) => {
1049 |                         const formatted = formatMemory(memory, index, {
1050 |                             includeScore,
1051 |                             maxContentLength: maxContentLength,
1052 |                             includeDate: includeTimestamp,
1053 |                             showOnlyRelevantTags: true
1054 |                         });
1055 |                         if (formatted) {
1056 |                             contextMessage += `${formatted}\n\n`;
1057 |                         }
1058 |                     });
1059 |                 }
1060 |             });
1061 |             
1062 |             if (!hasContent) {
1063 |                 // Fallback to linear format
1064 |                 validMemories.forEach(({ formatted }) => {
1065 |                     contextMessage += `${formatted}\n\n`;
1066 |                 });
1067 |             }
1068 |             
1069 |         } else {
1070 |             // Simple linear formatting for small lists
1071 |             validMemories.forEach(({ formatted }) => {
1072 |                 contextMessage += `${formatted}\n\n`;
1073 |             });
1074 |         }
1075 |         
1076 |         // Add concise footer
1077 |         contextMessage += '---\n';
1078 |         contextMessage += '*This context was automatically loaded based on your project and recent activities. ';
1079 |         contextMessage += 'Use this information to maintain continuity with your previous work and decisions.*';
1080 |         
1081 |         return contextMessage;
1082 |         
1083 |     } catch (error) {
1084 |         // Return error context without logging to avoid noise
1085 |         return `## 📋 Memory Context\n\n*Error loading context: ${error.message}*\n`;
1086 |     }
1087 | }
1088 | 
1089 | /**
1090 |  * Format memory for session-end consolidation
1091 |  */
1092 | function formatSessionConsolidation(sessionData, projectContext) {
1093 |     try {
1094 |         const timestamp = new Date().toISOString();
1095 |         
1096 |         let consolidation = `# Session Summary - ${projectContext.name}\n`;
1097 |         consolidation += `**Project**: ${projectContext.name} (${projectContext.language})\n\n`;
1098 |         
1099 |         if (sessionData.topics && sessionData.topics.length > 0) {
1100 |             consolidation += `## 🎯 Topics Discussed\n`;
1101 |             sessionData.topics.forEach(topic => {
1102 |                 consolidation += `- ${topic}\n`;
1103 |             });
1104 |             consolidation += '\n';
1105 |         }
1106 |         
1107 |         if (sessionData.decisions && sessionData.decisions.length > 0) {
1108 |             consolidation += `## 🏛️ Decisions Made\n`;
1109 |             sessionData.decisions.forEach(decision => {
1110 |                 consolidation += `- ${decision}\n`;
1111 |             });
1112 |             consolidation += '\n';
1113 |         }
1114 |         
1115 |         if (sessionData.insights && sessionData.insights.length > 0) {
1116 |             consolidation += `## 💡 Key Insights\n`;
1117 |             sessionData.insights.forEach(insight => {
1118 |                 consolidation += `- ${insight}\n`;
1119 |             });
1120 |             consolidation += '\n';
1121 |         }
1122 |         
1123 |         if (sessionData.codeChanges && sessionData.codeChanges.length > 0) {
1124 |             consolidation += `## 💻 Code Changes\n`;
1125 |             sessionData.codeChanges.forEach(change => {
1126 |                 consolidation += `- ${change}\n`;
1127 |             });
1128 |             consolidation += '\n';
1129 |         }
1130 |         
1131 |         if (sessionData.nextSteps && sessionData.nextSteps.length > 0) {
1132 |             consolidation += `## 📋 Next Steps\n`;
1133 |             sessionData.nextSteps.forEach(step => {
1134 |                 consolidation += `- ${step}\n`;
1135 |             });
1136 |             consolidation += '\n';
1137 |         }
1138 |         
1139 |         consolidation += `---\n*Session captured by Claude Code Memory Awareness at ${timestamp}*`;
1140 |         
1141 |         return consolidation;
1142 |         
1143 |     } catch (error) {
1144 |         // Return error without logging to avoid noise
1145 |         return `Session Summary Error: ${error.message}`;
1146 |     }
1147 | }
1148 | 
1149 | module.exports = {
1150 |     formatMemoriesForContext,
1151 |     formatMemoriesForCLI,
1152 |     formatMemory,
1153 |     formatMemoryForCLI,
1154 |     groupMemoriesByCategory,
1155 |     createProjectSummary,
1156 |     formatSessionConsolidation,
1157 |     isCLIEnvironment,
1158 |     convertMarkdownToANSI
1159 | };
1160 | 
1161 | // Direct execution support for testing
1162 | if (require.main === module) {
1163 |     // Test with mock data
1164 |     const mockMemories = [
1165 |         {
1166 |             content: 'Decided to use SQLite-vec for better performance, 10x faster than ChromaDB',
1167 |             tags: ['mcp-memory-service', 'decision', 'sqlite-vec', 'performance'],
1168 |             memory_type: 'decision',
1169 |             created_at_iso: '2025-08-19T10:00:00Z',
1170 |             relevanceScore: 0.95
1171 |         },
1172 |         {
1173 |             content: 'Implemented Claude Code hooks system for automatic memory awareness. Created session-start, session-end, and topic-change hooks.',
1174 |             tags: ['claude-code', 'hooks', 'architecture', 'memory-awareness'],
1175 |             memory_type: 'architecture',
1176 |             created_at_iso: '2025-08-19T09:30:00Z',
1177 |             relevanceScore: 0.87
1178 |         },
1179 |         {
1180 |             content: 'Fixed critical bug in project detector - was not handling pyproject.toml files correctly',
1181 |             tags: ['bug-fix', 'project-detector', 'python'],
1182 |             memory_type: 'bug-fix',
1183 |             created_at_iso: '2025-08-18T15:30:00Z',
1184 |             relevanceScore: 0.72
1185 |         },
1186 |         {
1187 |             content: 'Added new feature: Claude Code hooks with session lifecycle management',
1188 |             tags: ['feature', 'claude-code', 'hooks'],
1189 |             memory_type: 'feature',
1190 |             created_at_iso: '2025-08-17T12:00:00Z',
1191 |             relevanceScore: 0.85
1192 |         },
1193 |         {
1194 |             content: 'Key insight: Memory deduplication prevents information overload in context',
1195 |             tags: ['insight', 'memory-management', 'optimization'],
1196 |             memory_type: 'insight',
1197 |             created_at_iso: '2025-08-16T14:00:00Z',
1198 |             relevanceScore: 0.78
1199 |         }
1200 |     ];
1201 |     
1202 |     const mockProjectContext = {
1203 |         name: 'mcp-memory-service',
1204 |         language: 'JavaScript',
1205 |         frameworks: ['Node.js'],
1206 |         tools: ['npm'],
 1207 |         git: { isRepo: true, branch: 'main',
 1208 |             lastCommit: 'cdabc9a feat: enhance deduplication script' }
1209 |     };
1210 |     
1211 |     console.log('\n=== CONTEXT FORMATTING TEST ===');
1212 |     const formatted = formatMemoriesForContext(mockMemories, mockProjectContext, {
1213 |         includeScore: true,
1214 |         groupByCategory: true
1215 |     });
1216 |     
1217 |     console.log(formatted);
1218 |     console.log('\n=== END TEST ===');
1219 | }
```

--------------------------------------------------------------------------------
/scripts/quality/phase2_complexity_analysis.md:
--------------------------------------------------------------------------------

```markdown
   1 | # Issue #240 Phase 2: Low-Hanging Complexity Reductions
   2 | 
   3 | ## Executive Summary
   4 | 
   5 | **Current State:**
   6 | - Overall Health: 63/100 (Grade C)
   7 | - Cyclomatic Complexity Score: 40/100
   8 | - Average complexity: 9.5
   9 | - High-risk functions (>7): 28 functions
  10 | - Maximum complexity: 62 (install.py::main)
  11 | 
  12 | **Phase 2 Goals:**
  13 | - Target functions: 5 main targets (complexity 10-15) + 5 quick wins
  14 | - Target complexity improvement: +10-15 points (40 → 50-55)
  15 | - Expected overall health improvement: +3-5 points (63 → 66-68)
  16 | - Strategy: Extract methods, guard clauses, dict lookups (no architectural changes)
  17 | 
  18 | **Total Estimated Effort:** 12-15 hours
  19 | 
  20 | **Functions Analyzed:** 5 target functions + 5 quick wins
  21 | 
  22 | ---
  23 | 
  24 | ## Target Function 1: install.py::configure_paths() (Complexity: 15)
  25 | 
  26 | ### Current Implementation
  27 | **Purpose:** Configure storage paths for memory service based on platform and backend type.
  28 | 
  29 | **Location:** Lines 1287-1445 (158 lines)
  30 | 
  31 | ### Complexity Breakdown
  32 | ```
  33 | Lines 1287-1306: +3 complexity (platform-specific path detection)
  34 | Lines 1306-1347: +5 complexity (storage backend conditional setup)
  35 | Lines 1349-1358: +2 complexity (backup directory test with error handling)
  36 | Lines 1359-1443: +5 complexity (Claude Desktop config update nested logic)
  37 | Total Base: 15
  38 | ```
  39 | 
  40 | **Primary Contributors:**
  41 | 1. Platform detection branching (macOS/Windows/Linux) - 3 branches
  42 | 2. Storage backend type branching (sqlite_vec/hybrid/cloudflare/chromadb) - 4 branches
  43 | 3. Nested Claude config file discovery and JSON manipulation
  44 | 4. Error handling for directory creation and file operations
  45 | 
  46 | ### Refactoring Proposal #1: Extract Platform Path Detection
  47 | **Risk:** Low | **Impact:** -3 complexity | **Time:** 1 hour
  48 | 
  49 | **Before:**
  50 | ```python
  51 | def configure_paths(args):
  52 |     print_step("4", "Configuring paths")
  53 |     system_info = detect_system()
  54 |     home_dir = Path.home()
  55 | 
  56 |     # Determine base directory based on platform
  57 |     if platform.system() == 'Darwin':  # macOS
  58 |         base_dir = home_dir / 'Library' / 'Application Support' / 'mcp-memory'
  59 |     elif platform.system() == 'Windows':  # Windows
  60 |         base_dir = Path(os.environ.get('LOCALAPPDATA', '')) / 'mcp-memory'
  61 |     else:  # Linux and others
  62 |         base_dir = home_dir / '.local' / 'share' / 'mcp-memory'
  63 | 
  64 |     storage_backend = os.environ.get('MCP_MEMORY_STORAGE_BACKEND', 'chromadb')
  65 |     ...
  66 | ```
  67 | 
  68 | **After:**
  69 | ```python
  70 | def get_platform_base_dir() -> Path:
  71 |     """Get platform-specific base directory for MCP Memory storage.
  72 | 
  73 |     Returns:
  74 |         Path: Platform-appropriate base directory
  75 |     """
  76 |     home_dir = Path.home()
  77 | 
  78 |     PLATFORM_PATHS = {
  79 |         'Darwin': home_dir / 'Library' / 'Application Support' / 'mcp-memory',
  80 |         'Windows': Path(os.environ.get('LOCALAPPDATA', '')) / 'mcp-memory',
  81 |     }
  82 | 
  83 |     system = platform.system()
  84 |     return PLATFORM_PATHS.get(system, home_dir / '.local' / 'share' / 'mcp-memory')
  85 | 
  86 | def configure_paths(args):
  87 |     print_step("4", "Configuring paths")
  88 |     system_info = detect_system()
  89 |     base_dir = get_platform_base_dir()
  90 |     storage_backend = os.environ.get('MCP_MEMORY_STORAGE_BACKEND', 'chromadb')
  91 |     ...
  92 | ```
  93 | 
  94 | **Complexity Impact:** 15 → 12 (-3)
  95 | - Removes platform branching from main function
  96 | - Uses dict lookup instead of if/elif/else chain
  97 | 
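The extracted helper is also easy to unit-test by monkeypatching `platform.system` (a minimal pytest sketch; test names are illustrative, and importing the helper from install.py is an assumption):

```python
import platform
from pathlib import Path

# assumption: from install import get_platform_base_dir

def test_unknown_platform_falls_back_to_xdg(monkeypatch):
    # Platforms missing from PLATFORM_PATHS fall through to the Linux default
    monkeypatch.setattr(platform, "system", lambda: "FreeBSD")
    assert get_platform_base_dir() == Path.home() / ".local" / "share" / "mcp-memory"

def test_macos_uses_application_support(monkeypatch):
    monkeypatch.setattr(platform, "system", lambda: "Darwin")
    assert get_platform_base_dir() == (
        Path.home() / "Library" / "Application Support" / "mcp-memory"
    )
```
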
  98 | ### Refactoring Proposal #2: Extract Storage Path Setup
  99 | **Risk:** Low | **Impact:** -4 complexity | **Time:** 1.5 hours
 100 | 
 101 | **Before:**
 102 | ```python
 103 | def configure_paths(args):
 104 |     ...
 105 |     if storage_backend in ['sqlite_vec', 'hybrid', 'cloudflare']:
 106 |         storage_path = args.chroma_path or (base_dir / 'sqlite_vec.db')
 107 |         storage_dir = storage_path.parent if storage_path.name.endswith('.db') else storage_path
 108 |         backups_path = args.backups_path or (base_dir / 'backups')
 109 | 
 110 |         try:
 111 |             os.makedirs(storage_dir, exist_ok=True)
 112 |             os.makedirs(backups_path, exist_ok=True)
 113 |             print_info(f"SQLite-vec database: {storage_path}")
 114 |             print_info(f"Backups path: {backups_path}")
 115 | 
 116 |             # Test if directory is writable
 117 |             test_file = os.path.join(storage_dir, '.write_test')
 118 |             with open(test_file, 'w') as f:
 119 |                 f.write('test')
 120 |             os.remove(test_file)
 121 |         except Exception as e:
 122 |             print_error(f"Failed to configure SQLite-vec paths: {e}")
 123 |             return False
 124 |     else:
 125 |         chroma_path = args.chroma_path or (base_dir / 'chroma_db')
 126 |         backups_path = args.backups_path or (base_dir / 'backups')
 127 |         storage_path = chroma_path
 128 |         ...
 129 | ```
 130 | 
 131 | **After:**
 132 | ```python
 133 | def setup_storage_directories(backend: str, base_dir: Path, args) -> Tuple[Path, Path, bool]:
 134 |     """Setup storage and backup directories for the specified backend.
 135 | 
 136 |     Args:
 137 |         backend: Storage backend type
 138 |         base_dir: Base directory for storage
 139 |         args: Command line arguments
 140 | 
 141 |     Returns:
 142 |         Tuple of (storage_path, backups_path, success)
 143 |     """
 144 |     if backend in ['sqlite_vec', 'hybrid', 'cloudflare']:
 145 |         storage_path = args.chroma_path or (base_dir / 'sqlite_vec.db')
 146 |         storage_dir = storage_path.parent if storage_path.name.endswith('.db') else storage_path
 147 |     else:  # chromadb
 148 |         storage_path = args.chroma_path or (base_dir / 'chroma_db')
 149 |         storage_dir = storage_path
 150 | 
 151 |     backups_path = args.backups_path or (base_dir / 'backups')
 152 | 
 153 |     try:
 154 |         os.makedirs(storage_dir, exist_ok=True)
 155 |         os.makedirs(backups_path, exist_ok=True)
 156 | 
 157 |         # Test writability
 158 |         test_file = storage_dir / '.write_test'
 159 |         test_file.write_text('test')
 160 |         test_file.unlink()
 161 | 
 162 |         print_info(f"Storage path: {storage_path}")
 163 |         print_info(f"Backups path: {backups_path}")
 164 |         return storage_path, backups_path, True
 165 | 
 166 |     except Exception as e:
 167 |         print_error(f"Failed to configure storage paths: {e}")
 168 |         return storage_path, backups_path, False
 169 | 
 170 | def configure_paths(args):
 171 |     print_step("4", "Configuring paths")
 172 |     system_info = detect_system()
 173 |     base_dir = get_platform_base_dir()
 174 |     storage_backend = os.environ.get('MCP_MEMORY_STORAGE_BACKEND', 'chromadb')
 175 | 
 176 |     storage_path, backups_path, success = setup_storage_directories(
 177 |         storage_backend, base_dir, args
 178 |     )
 179 |     if not success:
 180 |         print_warning("Continuing with Claude Desktop configuration despite storage setup failure")
 181 |     ...
 182 | ```
 183 | 
 184 | **Complexity Impact:** 12 → 8 (-4)
 185 | - Removes nested storage backend setup logic
 186 | - Early return pattern for error handling
 187 | 
 188 | ### Refactoring Proposal #3: Extract Claude Config Update
 189 | **Risk:** Medium | **Impact:** -3 complexity | **Time:** 1.5 hours
 190 | 
 191 | **Before:**
 192 | ```python
 193 | def configure_paths(args):
 194 |     ...
 195 |     # Configure Claude Desktop if available
 196 |     import json
 197 |     claude_config_paths = [...]
 198 | 
 199 |     for config_path in claude_config_paths:
 200 |         if config_path.exists():
 201 |             print_info(f"Found Claude Desktop config at {config_path}")
 202 |             try:
 203 |                 config_text = config_path.read_text()
 204 |                 config = json.loads(config_text)
 205 | 
 206 |                 # Validate config structure
 207 |                 if not isinstance(config, dict):
 208 |                     print_warning(f"Invalid config format...")
 209 |                     continue
 210 | 
 211 |                 # Update or add MCP Memory configuration
 212 |                 if 'mcpServers' not in config:
 213 |                     config['mcpServers'] = {}
 214 | 
 215 |                 # Create environment configuration based on storage backend
 216 |                 env_config = {...}
 217 | 
 218 |                 if storage_backend in ['sqlite_vec', 'hybrid']:
 219 |                     env_config["MCP_MEMORY_SQLITE_PATH"] = str(storage_path)
 220 |                     ...
 221 | ```
 222 | 
 223 | **After:**
 224 | ```python
 225 | def build_mcp_env_config(storage_backend: str, storage_path: Path,
 226 |                         backups_path: Path) -> Dict[str, str]:
 227 |     """Build MCP environment configuration for Claude Desktop.
 228 | 
 229 |     Args:
 230 |         storage_backend: Type of storage backend
 231 |         storage_path: Path to storage directory/file
 232 |         backups_path: Path to backups directory
 233 | 
 234 |     Returns:
 235 |         Dict of environment variables for MCP configuration
 236 |     """
 237 |     env_config = {
 238 |         "MCP_MEMORY_BACKUPS_PATH": str(backups_path),
 239 |         "MCP_MEMORY_STORAGE_BACKEND": storage_backend
 240 |     }
 241 | 
 242 |     if storage_backend in ['sqlite_vec', 'hybrid']:
 243 |         env_config["MCP_MEMORY_SQLITE_PATH"] = str(storage_path)
 244 |         env_config["MCP_MEMORY_SQLITE_PRAGMAS"] = "busy_timeout=15000,cache_size=20000"
 245 | 
 246 |     if storage_backend in ['hybrid', 'cloudflare']:
 247 |         cloudflare_vars = [
 248 |             'CLOUDFLARE_API_TOKEN',
 249 |             'CLOUDFLARE_ACCOUNT_ID',
 250 |             'CLOUDFLARE_D1_DATABASE_ID',
 251 |             'CLOUDFLARE_VECTORIZE_INDEX'
 252 |         ]
 253 |         for var in cloudflare_vars:
 254 |             value = os.environ.get(var)
 255 |             if value:
 256 |                 env_config[var] = value
 257 | 
 258 |     if storage_backend == 'chromadb':
 259 |         env_config["MCP_MEMORY_CHROMA_PATH"] = str(storage_path)
 260 | 
 261 |     return env_config
 262 | 
 263 | def update_claude_config_file(config_path: Path, env_config: Dict[str, str],
 264 |                               project_root: Path, is_windows: bool) -> bool:
 265 |     """Update Claude Desktop configuration file with MCP Memory settings.
 266 | 
 267 |     Args:
 268 |         config_path: Path to Claude config file
 269 |         env_config: Environment configuration dictionary
 270 |         project_root: Root directory of the project
 271 |         is_windows: Whether running on Windows
 272 | 
 273 |     Returns:
 274 |         bool: True if update succeeded
 275 |     """
 276 |     try:
 277 |         config_text = config_path.read_text()
 278 |         config = json.loads(config_text)
 279 | 
 280 |         if not isinstance(config, dict):
 281 |             print_warning(f"Invalid config format in {config_path}")
 282 |             return False
 283 | 
 284 |         if 'mcpServers' not in config:
 285 |             config['mcpServers'] = {}
 286 | 
 287 |         # Create server configuration
 288 |         if is_windows:
 289 |             script_path = str((project_root / "memory_wrapper.py").resolve())
 290 |             config['mcpServers']['memory'] = {
 291 |                 "command": "python",
 292 |                 "args": [script_path],
 293 |                 "env": env_config
 294 |             }
 295 |         else:
 296 |             config['mcpServers']['memory'] = {
 297 |                 "command": "uv",
 298 |                 "args": ["--directory", str(project_root.resolve()), "run", "memory"],
 299 |                 "env": env_config
 300 |             }
 301 | 
 302 |         config_path.write_text(json.dumps(config, indent=2))
 303 |         print_success("Updated Claude Desktop configuration")
 304 |         return True
 305 | 
 306 |     except (OSError, PermissionError, json.JSONDecodeError) as e:
 307 |         print_warning(f"Failed to update Claude Desktop configuration: {e}")
 308 |         return False
 309 | 
 310 | def configure_paths(args):
 311 |     print_step("4", "Configuring paths")
 312 |     system_info = detect_system()
 313 |     base_dir = get_platform_base_dir()
 314 |     storage_backend = os.environ.get('MCP_MEMORY_STORAGE_BACKEND', 'chromadb')
 315 | 
 316 |     storage_path, backups_path, success = setup_storage_directories(
 317 |         storage_backend, base_dir, args
 318 |     )
 319 |     if not success:
 320 |         print_warning("Continuing with Claude Desktop configuration")
 321 | 
 322 |     # Configure Claude Desktop
 323 |     env_config = build_mcp_env_config(storage_backend, storage_path, backups_path)
 324 |     project_root = Path(__file__).parent.parent.parent
 325 | 
 326 |     claude_config_paths = [
 327 |         Path.home() / 'Library' / 'Application Support' / 'Claude' / 'claude_desktop_config.json',
 328 |         Path.home() / '.config' / 'Claude' / 'claude_desktop_config.json',
 329 |         Path('claude_config') / 'claude_desktop_config.json'
 330 |     ]
 331 | 
 332 |     for config_path in claude_config_paths:
 333 |         if config_path.exists():
 334 |             print_info(f"Found Claude Desktop config at {config_path}")
 335 |             if update_claude_config_file(config_path, env_config, project_root,
 336 |                                         system_info["is_windows"]):
 337 |                 break
 338 | 
 339 |     return True
 340 | ```
 341 | 
 342 | **Complexity Impact:** 8 → 5 (-3)
 343 | - Removes nested config update logic
 344 | - Separates env config building from file I/O
 345 | - Early return pattern in update function
 346 | 
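Separating the pure `build_mcp_env_config()` from file I/O also makes the backend matrix directly assertable without touching any config file. A sketch (assumes POSIX-style paths and that the helper is importable):

```python
from pathlib import Path

env = build_mcp_env_config("hybrid", Path("/tmp/sqlite_vec.db"), Path("/tmp/backups"))
assert env["MCP_MEMORY_STORAGE_BACKEND"] == "hybrid"
assert env["MCP_MEMORY_SQLITE_PATH"] == "/tmp/sqlite_vec.db"
assert env["MCP_MEMORY_SQLITE_PRAGMAS"] == "busy_timeout=15000,cache_size=20000"

env = build_mcp_env_config("chromadb", Path("/tmp/chroma_db"), Path("/tmp/backups"))
assert env["MCP_MEMORY_CHROMA_PATH"] == "/tmp/chroma_db"
```
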
 347 | ### Implementation Plan
 348 | 1. **Extract platform detection** (1 hour, low risk) - Simple dict lookup
 349 | 2. **Extract storage setup** (1.5 hours, low risk) - Straightforward extraction
 350 | 3. **Extract Claude config** (1.5 hours, medium risk) - Requires careful testing
 351 | 
 352 | **Total Complexity Reduction:** 15 → 5 (-10 points)
 353 | **Total Time:** 4 hours
 354 | 
 355 | ---
 356 | 
 357 | ## Target Function 2: cloudflare.py::_execute_batch() (Complexity: 14)
 358 | 
 359 | ### Current Implementation
 360 | **Purpose:** Execute batched D1 SQL queries with retry logic.
 361 | 
 362 | **Note:** After examining the cloudflare.py file, I found that `_execute_batch()` does not exist. The complexity report may be outdated, or the function may have been refactored. Instead, I'll analyze `_search_by_tags_internal()`, which shows similar complexity patterns (lines 583-667, complexity ~13).
 363 | 
 364 | ### Complexity Breakdown (_search_by_tags_internal)
 365 | ```
 366 | Lines 590-610: +4 complexity (tag normalization and operation validation)
 367 | Lines 612-636: +5 complexity (SQL query construction with time filtering)
 368 | Lines 638-667: +4 complexity (result processing with error handling)
 369 | Total: 13
 370 | ```
 371 | 
 372 | ### Refactoring Proposal #1: Extract Tag Normalization
 373 | **Risk:** Low | **Impact:** -2 complexity | **Time:** 45 minutes
 374 | 
 375 | **Before:**
 376 | ```python
 377 | async def _search_by_tags_internal(self, tags, operation=None, time_start=None, time_end=None):
 378 |     try:
 379 |         if not tags:
 380 |             return []
 381 | 
 382 |         # Normalize tags (deduplicate, drop empty strings)
 383 |         deduped_tags = list(dict.fromkeys([tag for tag in tags if tag]))
 384 |         if not deduped_tags:
 385 |             return []
 386 | 
 387 |         if isinstance(operation, str):
 388 |             normalized_operation = operation.strip().upper() or "AND"
 389 |         else:
 390 |             normalized_operation = "AND"
 391 | 
 392 |         if normalized_operation not in {"AND", "OR"}:
 393 |             logger.warning("Unsupported tag search operation '%s'; defaulting to AND", operation)
 394 |             normalized_operation = "AND"
 395 | ```
 396 | 
 397 | **After:**
 398 | ```python
 399 | def normalize_tags_for_search(tags: List[str]) -> List[str]:
 400 |     """Deduplicate and filter empty tag strings.
 401 | 
 402 |     Args:
 403 |         tags: List of tag strings (may contain duplicates or empty strings)
 404 | 
 405 |     Returns:
 406 |         Deduplicated list of non-empty tags
 407 |     """
 408 |     return list(dict.fromkeys([tag for tag in tags if tag]))
 409 | 
 410 | def normalize_operation(operation: Optional[str]) -> str:
 411 |     """Normalize tag search operation to AND or OR.
 412 | 
 413 |     Args:
 414 |         operation: Raw operation string (case-insensitive)
 415 | 
 416 |     Returns:
 417 |         Normalized operation: "AND" or "OR"
 418 |     """
 419 |     if isinstance(operation, str):
 420 |         normalized = operation.strip().upper() or "AND"
 421 |     else:
 422 |         normalized = "AND"
 423 | 
 424 |     if normalized not in {"AND", "OR"}:
 425 |         logger.warning(f"Unsupported operation '{operation}'; defaulting to AND")
 426 |         normalized = "AND"
 427 | 
 428 |     return normalized
 429 | 
 430 | async def _search_by_tags_internal(self, tags, operation=None, time_start=None, time_end=None):
 431 |     try:
 432 |         if not tags:
 433 |             return []
 434 | 
 435 |         deduped_tags = normalize_tags_for_search(tags)
 436 |         if not deduped_tags:
 437 |             return []
 438 | 
 439 |         normalized_operation = normalize_operation(operation)
 440 | ```
 441 | 
 442 | **Complexity Impact:** 13 → 11 (-2)
 443 | 
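The normalization contract is small enough to pin down with plain assertions (a sketch):

```python
assert normalize_operation(" or ") == "OR"
assert normalize_operation(None) == "AND"      # non-string input defaults to AND
assert normalize_operation("XOR") == "AND"     # unsupported value logs a warning, falls back
assert normalize_tags_for_search(["a", "", "b", "a"]) == ["a", "b"]
```
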
 444 | ### Refactoring Proposal #2: Extract SQL Query Builder
 445 | **Risk:** Low | **Impact:** -3 complexity | **Time:** 1 hour
 446 | 
 447 | **Before:**
 448 | ```python
 449 | async def _search_by_tags_internal(self, tags, operation=None, time_start=None, time_end=None):
 450 |     ...
 451 |     placeholders = ",".join(["?"] * len(deduped_tags))
 452 |     params: List[Any] = list(deduped_tags)
 453 | 
 454 |     sql = (
 455 |         "SELECT m.* FROM memories m "
 456 |         "JOIN memory_tags mt ON m.id = mt.memory_id "
 457 |         "JOIN tags t ON mt.tag_id = t.id "
 458 |         f"WHERE t.name IN ({placeholders})"
 459 |     )
 460 | 
 461 |     if time_start is not None:
 462 |         sql += " AND m.created_at >= ?"
 463 |         params.append(time_start)
 464 |     if time_end is not None:
 465 |         sql += " AND m.created_at <= ?"
 466 |         params.append(time_end)
 467 | 
 468 |     sql += " GROUP BY m.id"
 469 | 
 470 |     if normalized_operation == "AND":
 471 |         sql += " HAVING COUNT(DISTINCT t.name) = ?"
 472 |         params.append(len(deduped_tags))
 473 | 
 474 |     sql += " ORDER BY m.created_at DESC"
 475 | ```
 476 | 
 477 | **After:**
 478 | ```python
 479 | def build_tag_search_query(tags: List[str], operation: str,
 480 |                           time_start: Optional[float] = None,
 481 |                           time_end: Optional[float] = None) -> Tuple[str, List[Any]]:
 482 |     """Build SQL query for tag-based search with time filtering.
 483 | 
 484 |     Args:
 485 |         tags: List of deduplicated tags
 486 |         operation: Search operation ("AND" or "OR")
 487 |         time_start: Optional start timestamp filter
 488 |         time_end: Optional end timestamp filter
 489 | 
 490 |     Returns:
 491 |         Tuple of (sql_query, parameters_list)
 492 |     """
 493 |     placeholders = ",".join(["?"] * len(tags))
 494 |     params: List[Any] = list(tags)
 495 | 
 496 |     sql = (
 497 |         "SELECT m.* FROM memories m "
 498 |         "JOIN memory_tags mt ON m.id = mt.memory_id "
 499 |         "JOIN tags t ON mt.tag_id = t.id "
 500 |         f"WHERE t.name IN ({placeholders})"
 501 |     )
 502 | 
 503 |     if time_start is not None:
 504 |         sql += " AND m.created_at >= ?"
 505 |         params.append(time_start)
 506 | 
 507 |     if time_end is not None:
 508 |         sql += " AND m.created_at <= ?"
 509 |         params.append(time_end)
 510 | 
 511 |     sql += " GROUP BY m.id"
 512 | 
 513 |     if operation == "AND":
 514 |         sql += " HAVING COUNT(DISTINCT t.name) = ?"
 515 |         params.append(len(tags))
 516 | 
 517 |     sql += " ORDER BY m.created_at DESC"
 518 | 
 519 |     return sql, params
 520 | 
 521 | async def _search_by_tags_internal(self, tags, operation=None, time_start=None, time_end=None):
 522 |     try:
 523 |         if not tags:
 524 |             return []
 525 | 
 526 |         deduped_tags = normalize_tags_for_search(tags)
 527 |         if not deduped_tags:
 528 |             return []
 529 | 
 530 |         normalized_operation = normalize_operation(operation)
 531 |         sql, params = build_tag_search_query(deduped_tags, normalized_operation,
 532 |                                             time_start, time_end)
 533 | ```
 534 | 
 535 | **Complexity Impact:** 11 → 8 (-3)
 536 | 
 537 | ### Implementation Plan
 538 | 1. **Extract tag normalization** (45 min, low risk) - Pure functions, easy to test
 539 | 2. **Extract SQL builder** (1 hour, low risk) - Testable without database
 540 | 
 541 | **Total Complexity Reduction:** 13 → 8 (-5 points)
 542 | **Total Time:** 1.75 hours
 543 | 
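Because `build_tag_search_query()` returns a plain `(sql, params)` tuple, the AND/OR semantics can be locked in by unit tests with no D1 connection at all (a sketch; test names are illustrative):

```python
def test_and_search_requires_all_tags():
    sql, params = build_tag_search_query(["alpha", "beta"], "AND")
    assert "HAVING COUNT(DISTINCT t.name) = ?" in sql
    assert params == ["alpha", "beta", 2]   # trailing 2 = number of distinct tags

def test_or_search_has_no_having_clause():
    sql, params = build_tag_search_query(["alpha", "beta"], "OR")
    assert "HAVING" not in sql
    assert params == ["alpha", "beta"]
```
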
 544 | ---
 545 | 
 546 | ## Target Function 3: consolidator.py::_compress_redundant_memories() (Complexity: 13)
 547 | 
 548 | ### Current Implementation
 549 | **Purpose:** Identify and compress semantically similar memory clusters.
 550 | 
 551 | **Note:** After examining consolidator.py (556 lines), I found that `_compress_redundant_memories()` does not exist in the current codebase. The function was likely refactored into the consolidation pipeline. The most complex function in this file is `consolidate()` at lines 80-210 (complexity ~12).
 552 | 
 553 | ### Complexity Breakdown (consolidate method)
 554 | ```
 555 | Lines 99-110: +2 complexity (hybrid backend sync pause logic)
 556 | Lines 112-120: +2 complexity (memory retrieval and validation)
 557 | Lines 125-150: +4 complexity (association discovery conditional logic)
 558 | Lines 155-181: +4 complexity (compression and forgetting conditional logic)
 559 | Total: 12
 560 | ```
 561 | 
 562 | ### Refactoring Proposal #1: Extract Hybrid Sync Management
 563 | **Risk:** Low | **Impact:** -2 complexity | **Time:** 45 minutes
 564 | 
 565 | **Before:**
 566 | ```python
 567 | async def consolidate(self, time_horizon: str, **kwargs) -> ConsolidationReport:
 568 |     ...
 569 |     # Check if hybrid backend and pause sync during consolidation
 570 |     sync_was_paused = False
 571 |     is_hybrid = hasattr(self.storage, 'pause_sync') and hasattr(self.storage, 'resume_sync')
 572 | 
 573 |     try:
 574 |         self.logger.info(f"Starting {time_horizon} consolidation...")
 575 | 
 576 |         # Pause hybrid sync to eliminate bottleneck during metadata updates
 577 |         if is_hybrid:
 578 |             self.logger.info("Pausing hybrid backend sync during consolidation")
 579 |             await self.storage.pause_sync()
 580 |             sync_was_paused = True
 581 |         ...
 582 |     finally:
 583 |         # Resume hybrid sync after consolidation
 584 |         if sync_was_paused:
 585 |             try:
 586 |                 self.logger.info("Resuming hybrid backend sync after consolidation")
 587 |                 await self.storage.resume_sync()
 588 |             except Exception as e:
 589 |                 self.logger.error(f"Failed to resume sync after consolidation: {e}")
 590 | ```
 591 | 
 592 | **After:**
 593 | ```python
 594 | class SyncPauseContext:
 595 |     """Context manager for pausing hybrid backend sync during consolidation."""
 596 | 
 597 |     def __init__(self, storage, logger):
 598 |         self.storage = storage
 599 |         self.logger = logger
 600 |         self.is_hybrid = hasattr(storage, 'pause_sync') and hasattr(storage, 'resume_sync')
 601 |         self.was_paused = False
 602 | 
 603 |     async def __aenter__(self):
 604 |         if self.is_hybrid:
 605 |             self.logger.info("Pausing hybrid backend sync during consolidation")
 606 |             await self.storage.pause_sync()
 607 |             self.was_paused = True
 608 |         return self
 609 | 
 610 |     async def __aexit__(self, exc_type, exc_val, exc_tb):
 611 |         if self.was_paused:
 612 |             try:
 613 |                 self.logger.info("Resuming hybrid backend sync")
 614 |                 await self.storage.resume_sync()
 615 |             except Exception as e:
 616 |                 self.logger.error(f"Failed to resume sync: {e}")
 617 | 
 618 | async def consolidate(self, time_horizon: str, **kwargs) -> ConsolidationReport:
 619 |     start_time = datetime.now()
 620 |     report = ConsolidationReport(...)
 621 | 
 622 |     async with SyncPauseContext(self.storage, self.logger):
 623 |         try:
 624 |             self.logger.info(f"Starting {time_horizon} consolidation...")
 625 |             # ... rest of consolidation logic
 626 | ```
 627 | 
 628 | **Complexity Impact:** 12 → 10 (-2)
 629 | - Removes nested sync management logic
 630 | - Async context manager handles cleanup automatically
 631 | 
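The property worth a regression test is that sync resumes even when consolidation raises mid-run. A minimal sketch with a stub storage (stub names are hypothetical; `SyncPauseContext` is the class proposed above):

```python
import asyncio
import logging

class StubStorage:
    def __init__(self):
        self.paused = False
    async def pause_sync(self):
        self.paused = True
    async def resume_sync(self):
        self.paused = False

async def demo():
    storage = StubStorage()
    try:
        async with SyncPauseContext(storage, logging.getLogger(__name__)):
            raise RuntimeError("consolidation failed mid-run")
    except RuntimeError:
        pass
    assert storage.paused is False  # __aexit__ resumed sync despite the error

asyncio.run(demo())
```
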
 632 | ### Refactoring Proposal #2: Extract Phase-Specific Processing Guard
 633 | **Risk:** Low | **Impact:** -2 complexity | **Time:** 30 minutes
 634 | 
 635 | **Before:**
 636 | ```python
 637 | async def consolidate(self, time_horizon: str, **kwargs) -> ConsolidationReport:
 638 |     ...
 639 |     # 3. Cluster by semantic similarity (if enabled and appropriate)
 640 |     clusters = []
 641 |     if self.config.clustering_enabled and time_horizon in ['weekly', 'monthly', 'quarterly']:
 642 |         self.logger.info(f"🔗 Phase 2/6: Clustering memories...")
 643 |         clusters = await self.clustering_engine.process(memories)
 644 |         report.clusters_created = len(clusters)
 645 | 
 646 |     # 4. Run creative associations (if enabled and appropriate)
 647 |     associations = []
 648 |     if self.config.associations_enabled and time_horizon in ['weekly', 'monthly']:
 649 |         self.logger.info(f"🧠 Phase 3/6: Discovering associations...")
 650 |         existing_associations = await self._get_existing_associations()
 651 |         associations = await self.association_engine.process(memories, existing_associations)
 652 |         report.associations_discovered = len(associations)
 653 | ```
 654 | 
 655 | **After:**
 656 | ```python
 657 | def should_run_clustering(self, time_horizon: str) -> bool:
 658 |     """Check if clustering should run for this time horizon."""
 659 |     return self.config.clustering_enabled and time_horizon in ['weekly', 'monthly', 'quarterly']
 660 | 
 661 | def should_run_associations(self, time_horizon: str) -> bool:
 662 |     """Check if association discovery should run for this time horizon."""
 663 |     return self.config.associations_enabled and time_horizon in ['weekly', 'monthly']
 664 | 
 665 | def should_run_compression(self, time_horizon: str) -> bool:
 666 |     """Check if compression should run for this time horizon."""
 667 |     return self.config.compression_enabled
 668 | 
 669 | def should_run_forgetting(self, time_horizon: str) -> bool:
 670 |     """Check if controlled forgetting should run for this time horizon."""
 671 |     return self.config.forgetting_enabled and time_horizon in ['monthly', 'quarterly', 'yearly']
 672 | 
 673 | async def consolidate(self, time_horizon: str, **kwargs) -> ConsolidationReport:
 674 |     ...
 675 |     # 3. Cluster by semantic similarity
 676 |     clusters = []
 677 |     if self.should_run_clustering(time_horizon):
 678 |         self.logger.info(f"🔗 Phase 2/6: Clustering memories...")
 679 |         clusters = await self.clustering_engine.process(memories)
 680 |         report.clusters_created = len(clusters)
 681 | 
 682 |     # 4. Run creative associations
 683 |     associations = []
 684 |     if self.should_run_associations(time_horizon):
 685 |         self.logger.info(f"🧠 Phase 3/6: Discovering associations...")
 686 |         existing_associations = await self._get_existing_associations()
 687 |         associations = await self.association_engine.process(memories, existing_associations)
 688 |         report.associations_discovered = len(associations)
 689 | ```
 690 | 
 691 | **Complexity Impact:** 10 → 8 (-2)
 692 | - Extracts multi-condition guards to named methods
 693 | - Improves readability and testability
 694 | 
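The testability claim is easy to cash out: the guards read only `self.config`, so they can be exercised across every horizon with a stand-in host (a sketch; `GuardHost` duplicates the proposed guard for illustration):

```python
import pytest

class GuardHost:
    """Minimal stand-in exposing only .config, which is all the guards read."""
    def __init__(self, **flags):
        self.config = type("Cfg", (), flags)()

    def should_run_clustering(self, time_horizon: str) -> bool:
        return self.config.clustering_enabled and time_horizon in ['weekly', 'monthly', 'quarterly']

@pytest.mark.parametrize("horizon,expected", [
    ("daily", False), ("weekly", True), ("monthly", True),
    ("quarterly", True), ("yearly", False),
])
def test_clustering_guard(horizon, expected):
    assert GuardHost(clustering_enabled=True).should_run_clustering(horizon) == expected

def test_clustering_guard_respects_disable_flag():
    assert GuardHost(clustering_enabled=False).should_run_clustering("weekly") is False
```
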
 695 | ### Implementation Plan
 696 | 1. **Extract sync context manager** (45 min, low risk) - Standard async pattern
 697 | 2. **Extract phase guards** (30 min, low risk) - Simple boolean methods
 698 | 
 699 | **Total Complexity Reduction:** 12 → 8 (-4 points)
 700 | **Total Time:** 1.25 hours
 701 | 
 702 | ---
 703 | 
 704 | ## Target Function 4: analytics.py::get_analytics() (Complexity: 12)
 705 | 
 706 | ### Current Implementation
 707 | **Purpose:** Aggregate analytics overview from storage backend.
 708 | 
 709 | **Note:** After examining analytics.py, I found that `get_analytics()` does not exist. The most complex function there is `get_memory_growth()` at lines 267-363 (complexity ~11).
 710 | 
 711 | ### Complexity Breakdown (get_memory_growth)
 712 | ```
 713 | Lines 279-293: +4 complexity (period validation and interval calculation)
 714 | Lines 304-334: +5 complexity (date grouping and interval aggregation loops)
 715 | Lines 336-353: +2 complexity (label generation and data point creation)
 716 | Total: 11
 717 | ```
 718 | 
 719 | ### Refactoring Proposal #1: Extract Period Configuration
 720 | **Risk:** Low | **Impact:** -3 complexity | **Time:** 45 minutes
 721 | 
 722 | **Before:**
 723 | ```python
 724 | @router.get("/memory-growth", response_model=MemoryGrowthData, tags=["analytics"])
 725 | async def get_memory_growth(period: str = Query("month", ...), ...):
 726 |     try:
 727 |         # Define the period
 728 |         if period == "week":
 729 |             days = 7
 730 |             interval_days = 1
 731 |         elif period == "month":
 732 |             days = 30
 733 |             interval_days = 7
 734 |         elif period == "quarter":
 735 |             days = 90
 736 |             interval_days = 7
 737 |         elif period == "year":
 738 |             days = 365
 739 |             interval_days = 30
 740 |         else:
 741 |             raise HTTPException(status_code=400, detail="Invalid period...")
 742 | ```
 743 | 
 744 | **After:**
 745 | ```python
 746 | @dataclass
 747 | class PeriodConfig:
 748 |     """Configuration for time period analysis."""
 749 |     days: int
 750 |     interval_days: int
 751 |     label_format: str
 752 | 
 753 | PERIOD_CONFIGS = {
 754 |     "week": PeriodConfig(days=7, interval_days=1, label_format="daily"),
 755 |     "month": PeriodConfig(days=30, interval_days=7, label_format="weekly"),
 756 |     "quarter": PeriodConfig(days=90, interval_days=7, label_format="weekly"),
 757 |     "year": PeriodConfig(days=365, interval_days=30, label_format="monthly"),
 758 | }
 759 | 
 760 | def get_period_config(period: str) -> PeriodConfig:
 761 |     """Get configuration for the specified time period.
 762 | 
 763 |     Args:
 764 |         period: Time period identifier (week, month, quarter, year)
 765 | 
 766 |     Returns:
 767 |         PeriodConfig for the specified period
 768 | 
 769 |     Raises:
 770 |         HTTPException: If period is invalid
 771 |     """
 772 |     config = PERIOD_CONFIGS.get(period)
 773 |     if not config:
 774 |         raise HTTPException(
 775 |             status_code=400,
 776 |             detail=f"Invalid period. Use: {', '.join(PERIOD_CONFIGS.keys())}"
 777 |         )
 778 |     return config
 779 | 
 780 | @router.get("/memory-growth", response_model=MemoryGrowthData, tags=["analytics"])
 781 | async def get_memory_growth(period: str = Query("month", ...), ...):
 782 |     try:
 783 |         config = get_period_config(period)
 784 |         days = config.days
 785 |         interval_days = config.interval_days
 786 | ```
 787 | 
 788 | **Complexity Impact:** 11 → 8 (-3)
 789 | - Replaces if/elif chain with dict lookup
 790 | - Configuration is data-driven and easily extensible
 791 | 
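As a usage note, supporting a new period becomes a one-line, data-only change; the `half_year` entry below is hypothetical:

```python
# Hypothetical new period: get_period_config() and its 400 error message pick it up automatically.
PERIOD_CONFIGS["half_year"] = PeriodConfig(days=182, interval_days=14, label_format="biweekly")
```
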
 792 | ### Refactoring Proposal #2: Extract Interval Aggregation
 793 | **Risk:** Low | **Impact:** -2 complexity | **Time:** 1 hour
 794 | 
 795 | **Before:**
 796 | ```python
 797 | async def get_memory_growth(...):
 798 |     ...
 799 |     # Create data points
 800 |     current_date = start_date.date()
 801 |     while current_date <= end_date.date():
 802 |         # For intervals > 1 day, sum counts across the entire interval
 803 |         interval_end = current_date + timedelta(days=interval_days)
 804 |         count = 0
 805 | 
 806 |         # Sum all memories within this interval
 807 |         check_date = current_date
 808 |         while check_date < interval_end and check_date <= end_date.date():
 809 |             count += date_counts.get(check_date, 0)
 810 |             check_date += timedelta(days=1)
 811 | 
 812 |         cumulative += count
 813 | 
 814 |         # Convert date to datetime for label generation
 815 |         current_datetime = datetime.combine(current_date, datetime.min.time())
 816 |         label = _generate_interval_label(current_datetime, period)
 817 | 
 818 |         data_points.append(MemoryGrowthPoint(...))
 819 | 
 820 |         current_date += timedelta(days=interval_days)
 821 | ```
 822 | 
 823 | **After:**
 824 | ```python
 825 | def aggregate_interval_counts(date_counts: Dict[date, int],
 826 |                              start_date: date,
 827 |                              end_date: date,
 828 |                              interval_days: int) -> List[Tuple[date, int]]:
 829 |     """Aggregate memory counts over time intervals.
 830 | 
 831 |     Args:
 832 |         date_counts: Map of dates to memory counts
 833 |         start_date: Start date for aggregation
 834 |         end_date: End date for aggregation
 835 |         interval_days: Number of days per interval
 836 | 
 837 |     Returns:
 838 |         List of (interval_start_date, count) tuples
 839 |     """
 840 |     intervals = []
 841 |     current_date = start_date
 842 | 
 843 |     while current_date <= end_date:
 844 |         interval_end = current_date + timedelta(days=interval_days)
 845 | 
 846 |         # Sum all memories within this interval
 847 |         count = 0
 848 |         check_date = current_date
 849 |         while check_date < interval_end and check_date <= end_date:
 850 |             count += date_counts.get(check_date, 0)
 851 |             check_date += timedelta(days=1)
 852 | 
 853 |         intervals.append((current_date, count))
 854 |         current_date += timedelta(days=interval_days)
 855 | 
 856 |     return intervals
 857 | 
 858 | def build_growth_data_points(intervals: List[Tuple[date, int]],
 859 |                             period: str) -> List[MemoryGrowthPoint]:
 860 |     """Build MemoryGrowthPoint objects from interval data.
 861 | 
 862 |     Args:
 863 |         intervals: List of (date, count) tuples
 864 |         period: Time period for label generation
 865 | 
 866 |     Returns:
 867 |         List of MemoryGrowthPoint objects with labels
 868 |     """
 869 |     data_points = []
 870 |     cumulative = 0
 871 | 
 872 |     for current_date, count in intervals:
 873 |         cumulative += count
 874 |         current_datetime = datetime.combine(current_date, datetime.min.time())
 875 |         label = _generate_interval_label(current_datetime, period)
 876 | 
 877 |         data_points.append(MemoryGrowthPoint(
 878 |             date=current_date.isoformat(),
 879 |             count=count,
 880 |             cumulative=cumulative,
 881 |             label=label
 882 |         ))
 883 | 
 884 |     return data_points
 885 | 
 886 | async def get_memory_growth(...):
 887 |     ...
 888 |     intervals = aggregate_interval_counts(date_counts, start_date.date(),
 889 |                                          end_date.date(), interval_days)
 890 |     data_points = build_growth_data_points(intervals, period)
 891 | ```
 892 | 
 893 | **Complexity Impact:** 8 → 6 (-2)
 894 | - Separates data aggregation from presentation
 895 | - Nested loops extracted to dedicated function
 896 | 
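Because `aggregate_interval_counts()` is a pure function, it can be exercised with plain dates and no storage backend; a minimal test sketch:

```python
from datetime import date

def test_aggregate_interval_counts_weekly():
    counts = {date(2025, 1, 1): 2, date(2025, 1, 5): 3}
    intervals = aggregate_interval_counts(counts, date(2025, 1, 1), date(2025, 1, 14), interval_days=7)
    # The first week picks up both entries; the second week is empty but still emitted.
    assert intervals == [(date(2025, 1, 1), 5), (date(2025, 1, 8), 0)]
```
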
 897 | ### Implementation Plan
 898 | 1. **Extract period config** (45 min, low risk) - Dict lookup pattern
 899 | 2. **Extract interval aggregation** (1 hour, low risk) - Pure function extraction
 900 | 
 901 | **Total Complexity Reduction:** 11 → 6 (-5 points)
 902 | **Total Time:** 1.75 hours
 903 | 
 904 | ---
 905 | 
 906 | ## Target Function 5: analytics.py::get_tag_usage_analytics() (Complexity: 10)
 907 | 
 908 | ### Current Implementation
 909 | **Purpose:** Aggregate tag usage statistics (counts and percentages) from the storage backend.
 910 | 
 911 | **Note:** The original Phase 2 target was the bash quality-check functions in `/scripts/pr/` (quality_gate.sh). Bash complexity stems from conditional branches and loops and is not measured like Python cyclomatic complexity, so this section instead analyzes a Python function with a comparable complexity pattern: `get_tag_usage_analytics()` from analytics.py (lines 366-428, complexity ~10).
 916 | 
 917 | ### Complexity Breakdown (get_tag_usage_analytics)
 918 | ```
 919 | Lines 379-395: +3 complexity (storage method availability checks and fallbacks)
 920 | Lines 397-410: +4 complexity (tag data processing with total memory calculation)
 921 | Lines 412-421: +3 complexity (tag stats calculation loop)
 922 | Total: 10
 923 | ```
 924 | 
 925 | ### Refactoring Proposal #1: Extract Storage Stats Retrieval
 926 | **Risk:** Low | **Impact:** -2 complexity | **Time:** 30 minutes
 927 | 
 928 | **Before:**
 929 | ```python
 930 | async def get_tag_usage_analytics(...):
 931 |     try:
 932 |         # Get all tags with counts
 933 |         if hasattr(storage, 'get_all_tags_with_counts'):
 934 |             tag_data = await storage.get_all_tags_with_counts()
 935 |         else:
 936 |             raise HTTPException(status_code=501, detail="Tag analytics not supported...")
 937 | 
 938 |         # Get total memories for accurate percentage calculation
 939 |         if hasattr(storage, 'get_stats'):
 940 |             try:
 941 |                 stats = await storage.get_stats()
 942 |                 total_memories = stats.get("total_memories", 0)
 943 |             except Exception as e:
 944 |                 logger.warning(f"Failed to retrieve storage stats: {e}")
 945 |                 stats = {}
 946 |                 total_memories = 0
 947 |         else:
 948 |             total_memories = 0
 949 | 
 950 |         if total_memories == 0:
 951 |             # Fallback: calculate from all tag data
 952 |             all_tags = tag_data.copy()
 953 |             total_memories = sum(tag["count"] for tag in all_tags)
 954 | ```
 955 | 
 956 | **After:**
 957 | ```python
 958 | async def get_total_memory_count(storage: MemoryStorage,
 959 |                                 tag_data: List[Dict]) -> int:
 960 |     """Get total memory count from storage or calculate from tag data.
 961 | 
 962 |     Args:
 963 |         storage: Storage backend
 964 |         tag_data: Tag count data for fallback calculation
 965 | 
 966 |     Returns:
 967 |         Total memory count
 968 |     """
 969 |     if hasattr(storage, 'get_stats'):
 970 |         try:
 971 |             stats = await storage.get_stats()
 972 |             total = stats.get("total_memories", 0)
 973 |             if total > 0:
 974 |                 return total
 975 |         except Exception as e:
 976 |             logger.warning(f"Failed to retrieve storage stats: {e}")
 977 | 
 978 |     # Fallback: calculate from tag data
 979 |     return sum(tag["count"] for tag in tag_data)
 980 | 
 981 | async def get_tag_usage_analytics(...):
 982 |     try:
 983 |         # Get all tags with counts
 984 |         if hasattr(storage, 'get_all_tags_with_counts'):
 985 |             tag_data = await storage.get_all_tags_with_counts()
 986 |         else:
 987 |             raise HTTPException(status_code=501,
 988 |                               detail="Tag analytics not supported by storage backend")
 989 | 
 990 |         total_memories = await get_total_memory_count(storage, tag_data)
 991 | ```
 992 | 
 993 | **Complexity Impact:** 10 → 8 (-2)
 994 | 
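The fallback path is now trivially testable with a stub that lacks `get_stats` (a sketch; `_StubStorage` is hypothetical):

```python
import asyncio

class _StubStorage:
    """Deliberately has no get_stats attribute, forcing the tag-data fallback."""

tag_data = [{"tag": "python", "count": 3}, {"tag": "rust", "count": 2}]
assert asyncio.run(get_total_memory_count(_StubStorage(), tag_data)) == 5
```
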
 995 | ### Refactoring Proposal #2: Extract Tag Stats Calculation
 996 | **Risk:** Low | **Impact:** -2 complexity | **Time:** 30 minutes
 997 | 
 998 | **Before:**
 999 | ```python
1000 | async def get_tag_usage_analytics(...):
1001 |     ...
1002 |     # Convert to response format
1003 |     tags = []
1004 |     for tag_item in tag_data:
1005 |         percentage = (tag_item["count"] / total_memories * 100) if total_memories > 0 else 0
1006 | 
1007 |         tags.append(TagUsageStats(
1008 |             tag=tag_item["tag"],
1009 |             count=tag_item["count"],
1010 |             percentage=round(percentage, 1),
1011 |             growth_rate=None  # Would need historical data to calculate
1012 |         ))
1013 | 
1014 |     return TagUsageData(
1015 |         tags=tags,
1016 |         total_memories=total_memories,
1017 |         period=period
1018 |     )
1019 | ```
1020 | 
1021 | **After:**
1022 | ```python
1023 | def calculate_tag_percentage(count: int, total: int) -> float:
1024 |     """Calculate percentage safely handling division by zero.
1025 | 
1026 |     Args:
1027 |         count: Tag usage count
1028 |         total: Total memory count
1029 | 
1030 |     Returns:
1031 |         Rounded percentage (1 decimal place)
1032 |     """
1033 |     return round((count / total * 100) if total > 0 else 0, 1)
1034 | 
1035 | def build_tag_usage_stats(tag_data: List[Dict], total_memories: int) -> List[TagUsageStats]:
1036 |     """Build TagUsageStats objects from raw tag data.
1037 | 
1038 |     Args:
1039 |         tag_data: Raw tag count data
1040 |         total_memories: Total memory count for percentage calculation
1041 | 
1042 |     Returns:
1043 |         List of TagUsageStats objects
1044 |     """
1045 |     return [
1046 |         TagUsageStats(
1047 |             tag=tag_item["tag"],
1048 |             count=tag_item["count"],
1049 |             percentage=calculate_tag_percentage(tag_item["count"], total_memories),
1050 |             growth_rate=None  # Would need historical data
1051 |         )
1052 |         for tag_item in tag_data
1053 |     ]
1054 | 
1055 | async def get_tag_usage_analytics(...):
1056 |     ...
1057 |     tags = build_tag_usage_stats(tag_data, total_memories)
1058 | 
1059 |     return TagUsageData(
1060 |         tags=tags,
1061 |         total_memories=total_memories,
1062 |         period=period
1063 |     )
1064 | ```
1065 | 
1066 | **Complexity Impact:** 8 → 6 (-2)
1067 | 
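Both helpers are pure, so the percentage math can be verified without a storage backend (a sketch, assuming `TagUsageStats` exposes the fields shown in its constructor):

```python
stats = build_tag_usage_stats(
    [{"tag": "python", "count": 3}, {"tag": "rust", "count": 1}],
    total_memories=4,
)
assert [s.percentage for s in stats] == [75.0, 25.0]
assert calculate_tag_percentage(1, 0) == 0  # division-by-zero guard
```
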
1068 | ### Implementation Plan
1069 | 1. **Extract memory count retrieval** (30 min, low risk) - Simple extraction
1070 | 2. **Extract tag stats calculation** (30 min, low risk) - Pure function
1071 | 
1072 | **Total Complexity Reduction:** 10 → 6 (-4 points)
1073 | **Total Time:** 1 hour
1074 | 
1075 | ---
1076 | 
1077 | ## Quick Wins Summary
1078 | 
1079 | ### Quick Win 1: install.py::detect_gpu() (Complexity: 10 → 7)
1080 | **Refactoring:** Extract platform-specific GPU detection to separate functions
1081 | **Time:** 1 hour | **Risk:** Low
1082 | 
1083 | **Before:**
1084 | ```python
1085 | def detect_gpu():
1086 |     system_info = detect_system()
1087 | 
1088 |     # Check for CUDA
1089 |     has_cuda = False
1090 |     cuda_version = None
1091 |     if system_info["is_windows"]:
1092 |         cuda_path = os.environ.get('CUDA_PATH')
1093 |         if cuda_path and os.path.exists(cuda_path):
1094 |             has_cuda = True
1095 |             # ... 15 lines of version detection
1096 |     elif system_info["is_linux"]:
1097 |         cuda_paths = ['/usr/local/cuda', os.environ.get('CUDA_HOME')]
1098 |         for path in cuda_paths:
1099 |             if path and os.path.exists(path):
1100 |                 has_cuda = True
1101 |                 # ... 15 lines of version detection
1102 | ```
1103 | 
1104 | **After:**
1105 | ```python
1106 | def detect_cuda_windows() -> Tuple[bool, Optional[str]]:
1107 |     """Detect CUDA on Windows systems."""
1108 |     cuda_path = os.environ.get('CUDA_PATH')
1109 |     if not (cuda_path and os.path.exists(cuda_path)):
1110 |         return False, None
1111 | 
1112 |     # ... version detection logic
1113 |     return True, cuda_version
1114 | 
1115 | def detect_cuda_linux() -> Tuple[bool, Optional[str]]:
1116 |     """Detect CUDA on Linux systems."""
1117 |     cuda_paths = ['/usr/local/cuda', os.environ.get('CUDA_HOME')]
1118 |     for path in cuda_paths:
1119 |         if path and os.path.exists(path):
1120 |             # ... version detection logic
1121 |             return True, cuda_version
1122 |     return False, None
1123 | 
1124 | CUDA_DETECTORS = {
1125 |     'windows': detect_cuda_windows,
1126 |     'linux': detect_cuda_linux,
1127 | }
1128 | 
1129 | def detect_gpu():
1130 |     system_info = detect_system()
1131 | 
1132 |     # Dispatch to the platform-specific detector; other platforms report no CUDA
1133 |     detector_key = 'windows' if system_info["is_windows"] else 'linux' if system_info["is_linux"] else None
1134 |     detector = CUDA_DETECTORS.get(detector_key, lambda: (False, None))
1135 |     has_cuda, cuda_version = detector()
1136 | ```
1137 | 
1138 | **Impact:** -3 complexity
1139 | - Platform-specific logic extracted
1140 | - Dict dispatch replaces if/elif chain
1141 | 
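Both detectors elide the same version probe, which could also be shared. A best-effort sketch (the `version.json`/`version.txt` layout is an assumption about typical CUDA toolkit installs, not project code):

```python
import json
import os
from typing import Optional

def read_cuda_version(cuda_home: str) -> Optional[str]:
    """Probe a CUDA install directory for its version, newest layout first."""
    json_path = os.path.join(cuda_home, 'version.json')
    if os.path.exists(json_path):
        try:
            with open(json_path) as f:
                return json.load(f).get('cuda', {}).get('version')
        except (OSError, json.JSONDecodeError):
            return None
    txt_path = os.path.join(cuda_home, 'version.txt')
    if os.path.exists(txt_path):
        with open(txt_path) as f:
            # Line looks like "CUDA Version 10.2.89"
            return f.read().strip().rsplit(' ', 1)[-1]
    return None
```
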
1142 | ---
1143 | 
1144 | ### Quick Win 2: cloudflare.py::get_memory_timestamps() (Complexity: 9 → 7)
1145 | **Refactoring:** Extract SQL query building and result processing
1146 | **Time:** 45 minutes | **Risk:** Low
1147 | 
1148 | **Before:**
1149 | ```python
1150 | async def get_memory_timestamps(self, days: Optional[int] = None) -> List[float]:
1151 |     try:
1152 |         if days is not None:
1153 |             cutoff = datetime.now(timezone.utc) - timedelta(days=days)
1154 |             cutoff_timestamp = cutoff.timestamp()
1155 | 
1156 |             sql = "SELECT created_at FROM memories WHERE created_at >= ? ORDER BY created_at DESC"
1157 |             payload = {"sql": sql, "params": [cutoff_timestamp]}
1158 |         else:
1159 |             sql = "SELECT created_at FROM memories ORDER BY created_at DESC"
1160 |             payload = {"sql": sql, "params": []}
1161 | 
1162 |         response = await self._retry_request("POST", f"{self.d1_url}/query", json=payload)
1163 |         result = response.json()
1164 | 
1165 |         timestamps = []
1166 |         if result.get("success") and result.get("result", [{}])[0].get("results"):
1167 |             for row in result["result"][0]["results"]:
1168 |                 if row.get("created_at") is not None:
1169 |                     timestamps.append(float(row["created_at"]))
1170 | ```
1171 | 
1172 | **After:**
1173 | ```python
1174 | def build_timestamp_query(days: Optional[int]) -> Tuple[str, List[Any]]:
1175 |     """Build SQL query for fetching memory timestamps.
1176 | 
1177 |     Args:
1178 |         days: Optional day limit for filtering
1179 | 
1180 |     Returns:
1181 |         Tuple of (sql_query, parameters)
1182 |     """
1183 |     if days is not None:
1184 |         cutoff = datetime.now(timezone.utc) - timedelta(days=days)
1185 |         return (
1186 |             "SELECT created_at FROM memories WHERE created_at >= ? ORDER BY created_at DESC",
1187 |             [cutoff.timestamp()]
1188 |         )
1189 |     return (
1190 |         "SELECT created_at FROM memories ORDER BY created_at DESC",
1191 |         []
1192 |     )
1193 | 
1194 | def extract_timestamps(result: Dict) -> List[float]:
1195 |     """Extract timestamp values from D1 query result.
1196 | 
1197 |     Args:
1198 |         result: D1 query response JSON
1199 | 
1200 |     Returns:
1201 |         List of Unix timestamps
1202 |     """
1203 |     if not (result.get("success") and result.get("result", [{}])[0].get("results")):
1204 |         return []
1205 | 
1206 |     return [
1207 |         float(row["created_at"])
1208 |         for row in result["result"][0]["results"]
1209 |         if row.get("created_at") is not None
1210 |     ]
1211 | 
1212 | async def get_memory_timestamps(self, days: Optional[int] = None) -> List[float]:
1213 |     try:
1214 |         sql, params = build_timestamp_query(days)
1215 |         payload = {"sql": sql, "params": params}
1216 | 
1217 |         response = await self._retry_request("POST", f"{self.d1_url}/query", json=payload)
1218 |         result = response.json()
1219 | 
1220 |         timestamps = extract_timestamps(result)
1221 | ```
1222 | 
1223 | **Impact:** -2 complexity
1224 | - Query building extracted
1225 | - Result processing extracted
1226 | 
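Both helpers are pure, so they can be tested without any Cloudflare round-trip (a sketch):

```python
sql, params = build_timestamp_query(days=None)
assert params == [] and "WHERE" not in sql

fake_result = {"success": True, "result": [{"results": [{"created_at": 1700000000.0}]}]}
assert extract_timestamps(fake_result) == [1700000000.0]
assert extract_timestamps({"success": False}) == []
```
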
1227 | ---
1228 | 
1229 | ### Quick Win 3: consolidator.py::_get_memories_for_horizon() (Complexity: 10 → 8)
1230 | **Refactoring:** Extract time range calculation and incremental mode sorting
1231 | **Time:** 45 minutes | **Risk:** Low
1232 | 
1233 | **Before:**
1234 | ```python
1235 | async def _get_memories_for_horizon(self, time_horizon: str, **kwargs) -> List[Memory]:
1236 |     now = datetime.now()
1237 | 
1238 |     # Define time ranges for different horizons
1239 |     time_ranges = {
1240 |         'daily': timedelta(days=1),
1241 |         'weekly': timedelta(days=7),
1242 |         'monthly': timedelta(days=30),
1243 |         'quarterly': timedelta(days=90),
1244 |         'yearly': timedelta(days=365)
1245 |     }
1246 | 
1247 |     if time_horizon not in time_ranges:
1248 |         raise ConsolidationError(f"Unknown time horizon: {time_horizon}")
1249 | 
1250 |     # For daily processing, get recent memories (no change - already efficient)
1251 |     if time_horizon == 'daily':
1252 |         start_time = (now - timedelta(days=2)).timestamp()
1253 |         end_time = now.timestamp()
1254 |         memories = await self.storage.get_memories_by_time_range(start_time, end_time)
1255 |     else:
1256 |         # ... complex incremental logic
1257 | ```
1258 | 
1259 | **After:**
1260 | ```python
1261 | TIME_HORIZON_CONFIGS = {
1262 |     'daily': {'days': 1, 'use_time_range': True, 'range_days': 2},
1263 |     'weekly': {'days': 7, 'use_time_range': False},
1264 |     'monthly': {'days': 30, 'use_time_range': False},
1265 |     'quarterly': {'days': 90, 'use_time_range': False},
1266 |     'yearly': {'days': 365, 'use_time_range': False}
1267 | }
1268 | 
1269 | def get_consolidation_sort_key(memory: Memory) -> float:
1270 |     """Get sort key for incremental consolidation (oldest first).
1271 | 
1272 |     Args:
1273 |         memory: Memory object to get sort key for
1274 | 
1275 |     Returns:
1276 |         Sort key (timestamp, lower = older)
1277 |     """
1278 |     if memory.metadata and 'last_consolidated_at' in memory.metadata:
1279 |         return float(memory.metadata['last_consolidated_at'])
1280 |     return memory.created_at if memory.created_at else 0.0
1281 | 
1282 | async def _get_memories_for_horizon(self, time_horizon: str, **kwargs) -> List[Memory]:
1283 |     config = TIME_HORIZON_CONFIGS.get(time_horizon)
1284 |     if not config:
1285 |         raise ConsolidationError(f"Unknown time horizon: {time_horizon}")
1286 | 
1287 |     now = datetime.now()
1288 | 
1289 |     if config['use_time_range']:
1290 |         start_time = (now - timedelta(days=config['range_days'])).timestamp()
1291 |         end_time = now.timestamp()
1292 |         return await self.storage.get_memories_by_time_range(start_time, end_time)
1293 | 
1294 |     # ... simplified incremental logic using extracted functions
1295 | ```
1296 | 
1297 | **Impact:** -2 complexity
1298 | - Config-driven time range selection
1299 | - Sort key extraction to separate function
1300 | 
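The elided incremental branch can then lean on the extracted sort key. A rough sketch of its shape (`get_all_memories()` and `incremental_batch_size` are placeholders, not confirmed storage APIs):

```python
# Inside _get_memories_for_horizon(), non-daily branch (sketch):
memories = await self.storage.get_all_memories()
memories.sort(key=get_consolidation_sort_key)  # least recently consolidated first
batch_size = getattr(self, 'incremental_batch_size', 500)  # hypothetical default
return memories[:batch_size]
```
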
1301 | ---
1302 | 
1303 | ### Quick Win 4: analytics.py::get_activity_breakdown() (Complexity: 9 → 7)
1304 | **Refactoring:** Extract granularity-specific aggregation functions
1305 | **Time:** 1 hour | **Risk:** Low
1306 | 
1307 | **Before:**
1308 | ```python
1309 | async def get_activity_breakdown(granularity: str = Query("daily", ...)):
1310 |     ...
1311 |     if granularity == "hourly":
1312 |         hour_counts = defaultdict(int)
1313 |         for timestamp in timestamps:
1314 |             dt = datetime.fromtimestamp(timestamp, tz=timezone.utc)
1315 |             hour_counts[dt.hour] += 1
1316 |             active_days.add(dt.date())
1317 |             activity_dates.append(dt.date())
1318 |         # ... 10 lines of breakdown building
1319 |     elif granularity == "daily":
1320 |         day_counts = defaultdict(int)
1321 |         day_names = ["Monday", "Tuesday", ...]
1322 |         # ... 15 lines of breakdown building
1323 |     else:  # weekly
1324 |         week_counts = defaultdict(int)
1325 |         # ... 20 lines of breakdown building
1326 | ```
1327 | 
1328 | **After:**
1329 | ```python
1330 | def aggregate_hourly(timestamps: List[float]) -> Tuple[List[ActivityBreakdown], Set[date], List[date]]:
1331 |     """Aggregate activity data by hour."""
1332 |     hour_counts = defaultdict(int)
1333 |     active_days = set()
1334 |     activity_dates = []
1335 | 
1336 |     for timestamp in timestamps:
1337 |         dt = datetime.fromtimestamp(timestamp, tz=timezone.utc)
1338 |         hour_counts[dt.hour] += 1
1339 |         active_days.add(dt.date())
1340 |         activity_dates.append(dt.date())
1341 | 
1342 |     breakdown = [
1343 |         ActivityBreakdown(period="hourly", count=hour_counts.get(hour, 0), label=f"{hour:02d}:00")
1344 |         for hour in range(24)
1345 |     ]
1346 |     return breakdown, active_days, activity_dates
1347 | 
1348 | GRANULARITY_AGGREGATORS = {
1349 |     'hourly': aggregate_hourly,
1350 |     'daily': aggregate_daily,
1351 |     'weekly': aggregate_weekly
1352 | }
1353 | 
1354 | async def get_activity_breakdown(granularity: str = Query("daily", ...)):
1355 |     ...
1356 |     aggregator = GRANULARITY_AGGREGATORS.get(granularity, aggregate_daily)
1357 |     breakdown, active_days, activity_dates = aggregator(timestamps)
1358 | ```
1359 | 
1360 | **Impact:** -2 complexity
1361 | - Granularity-specific logic extracted
1362 | - Dict dispatch replaces if/elif chain
1363 | 
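A quick sanity check for the extracted aggregator (a sketch, assuming `ActivityBreakdown` exposes the fields used in its constructor):

```python
from datetime import datetime, timezone

ts = datetime(2025, 1, 1, 9, 30, tzinfo=timezone.utc).timestamp()
breakdown, active_days, activity_dates = aggregate_hourly([ts])
assert len(breakdown) == 24 and breakdown[9].count == 1
assert active_days == {datetime(2025, 1, 1).date()} and len(activity_dates) == 1
```
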
1364 | ---
1365 | 
1366 | ### Quick Win 5: analytics.py::get_memory_type_distribution() (Complexity: 9 → 7)
1367 | **Refactoring:** Extract storage backend type detection and query building
1368 | **Time:** 45 minutes | **Risk:** Low
1369 | 
1370 | **Before:**
1371 | ```python
1372 | async def get_memory_type_distribution(storage: MemoryStorage = Depends(get_storage), ...):
1373 |     try:
1374 |         # Try multiple approaches based on storage backend
1375 |         if hasattr(storage, 'get_type_counts'):
1376 |             type_counts_data = await storage.get_type_counts()
1377 |             type_counts = dict(type_counts_data)
1378 |             total_memories = sum(type_counts.values())
1379 |         elif hasattr(storage, 'primary') and hasattr(storage.primary, 'conn'):
1380 |             # Hybrid storage - access underlying SQLite
1381 |             cursor = storage.primary.conn.cursor()
1382 |             cursor.execute("""SELECT ... FROM memories GROUP BY mem_type""")
1383 |             type_counts = {row[0]: row[1] for row in cursor.fetchall()}
1384 |             ...
1385 |         elif hasattr(storage, 'conn') and storage.conn:
1386 |             # Direct SQLite storage
1387 |             cursor = storage.conn.cursor()
1388 |             cursor.execute("""SELECT ... FROM memories GROUP BY mem_type""")
1389 |             ...
1390 | ```
1391 | 
1392 | **After:**
1393 | ```python
1394 | async def get_type_counts_from_storage(storage: MemoryStorage) -> Tuple[Dict[str, int], int]:
1395 |     """Get memory type counts from storage backend.
1396 | 
1397 |     Returns:
1398 |         Tuple of (type_counts_dict, total_memories)
1399 |     """
1400 |     # Native support
1401 |     if hasattr(storage, 'get_type_counts'):
1402 |         type_counts_data = await storage.get_type_counts()
1403 |         type_counts = dict(type_counts_data)
1404 |         return type_counts, sum(type_counts.values())
1405 | 
1406 |     # Direct SQLite query (hybrid or direct)
1407 |     conn = None
1408 |     if hasattr(storage, 'primary') and hasattr(storage.primary, 'conn'):
1409 |         conn = storage.primary.conn
1410 |     elif hasattr(storage, 'conn'):
1411 |         conn = storage.conn
1412 | 
1413 |     if conn:
1414 |         cursor = conn.cursor()
1415 |         cursor.execute("""
1416 |             SELECT
1417 |                 CASE WHEN memory_type IS NULL OR memory_type = '' THEN 'untyped'
1418 |                      ELSE memory_type END as mem_type,
1419 |                 COUNT(*) as count
1420 |             FROM memories GROUP BY mem_type
1421 |         """)
1422 |         type_counts = {row[0]: row[1] for row in cursor.fetchall()}
1423 |         cursor.execute("SELECT COUNT(*) FROM memories")
1424 |         return type_counts, cursor.fetchone()[0]
1425 | 
1426 |     # Fallback to sampling
1427 |     logger.warning("Using sampling approach - results may be incomplete")
1428 |     memories = await storage.get_recent_memories(n=1000)
1429 |     type_counts = defaultdict(int)
1430 |     for memory in memories:
1431 |         type_counts[memory.memory_type or "untyped"] += 1
1432 |     return dict(type_counts), len(memories)
1433 | 
1434 | async def get_memory_type_distribution(storage: MemoryStorage = Depends(get_storage), ...):
1435 |     try:
1436 |         type_counts, total_memories = await get_type_counts_from_storage(storage)
1437 |         # ... build response
1438 | ```
1439 | 
1440 | **Impact:** -2 complexity
1441 | - Backend detection logic extracted
1442 | - Early return pattern in extraction function
1443 | 
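The duck-typed dispatch also makes the happy path easy to fake (a sketch; `_FakeStorage` is hypothetical):

```python
import asyncio

class _FakeStorage:
    async def get_type_counts(self):
        return [("note", 7), ("task", 3)]

counts, total = asyncio.run(get_type_counts_from_storage(_FakeStorage()))
assert counts == {"note": 7, "task": 3} and total == 10
```
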
1444 | ---
1445 | 
1446 | ## Implementation Roadmap
1447 | 
1448 | ### Phase 2A: Core Functions (Week 1)
1449 | **Target:** configure_paths, cloudflare tag search, consolidator.consolidate
1450 | 
1451 | | Function | Priority | Time | Dependency | Parallel? |
1452 | |----------|----------|------|------------|-----------|
1453 | | install.py::configure_paths() | High | 4h | None | Yes |
1454 | | cloudflare.py::_search_by_tags_internal() | High | 1.75h | None | Yes |
1455 | | consolidator.py::consolidate() | High | 1.25h | None | Yes |
1456 | 
1457 | **Subtotal:** 7 hours (can be done in parallel)
1458 | 
1459 | ### Phase 2B: Analytics Functions (Week 2)
1460 | **Target:** analytics endpoints optimization
1461 | 
1462 | | Function | Priority | Time | Dependency | Parallel? |
1463 | |----------|----------|------|------------|-----------|
1464 | | analytics.py::get_memory_growth() | Medium | 1.75h | None | Yes |
1465 | | analytics.py::get_tag_usage_analytics() | Medium | 1h | None | Yes |
1466 | 
1467 | **Subtotal:** 2.75 hours (can be done in parallel)
1468 | 
1469 | ### Phase 2C: Quick Wins (Week 2-3)
1470 | **Target:** Low-risk, high-impact improvements
1471 | 
1472 | | Function | Priority | Time | Dependency | Parallel? |
1473 | |----------|----------|------|------------|-----------|
1474 | | install.py::detect_gpu() | Low | 1h | None | Yes |
1475 | | cloudflare.py::get_memory_timestamps() | Low | 45m | None | Yes |
1476 | | consolidator.py::_get_memories_for_horizon() | Low | 45m | None | Yes |
1477 | | analytics.py::get_activity_breakdown() | Low | 1h | None | Yes |
1478 | | analytics.py::get_memory_type_distribution() | Low | 45m | None | Yes |
1479 | 
1480 | **Subtotal:** 4.25 hours (can be done in parallel)
1481 | 
1482 | ### Total Time Estimate
1483 | - **Sequential execution:** 14 hours
1484 | - **Parallel execution (with team):** 4h (Phase 2A) + 1.75h (Phase 2B) + 1h (Phase 2C), each phase bounded by its longest task = **~7 hours wall-clock**
1485 | - **Recommended:** 12-15 hours (including testing and documentation)
1486 | 
1487 | ---
1488 | 
1489 | ## Expected Health Impact
1490 | 
1491 | ### Complexity Score Improvement
1492 | **Current:** 40/100
1493 | - 5 main target functions: -28 complexity points total
1494 | - 5 quick wins: -11 complexity points total
1495 | - **Total reduction:** -39 complexity points across 10 functions
1496 | 
1497 | **Projected:** 50-55/100 (+10-15 points)
1498 | 
1499 | ### Overall Health Score Improvement
1500 | **Current:** 63/100 (Grade C)
1501 | **Projected (Phase 2 alone):** 66/100 (Grade C+)
1502 | 
1503 | **Calculation (cumulative across phases):**
1504 | - Phase 1 (dead code): +5-9 points → 68-72
1505 | - Phase 2 (complexity): +3 points → 71-75 combined
1506 | 
1507 | ---
1508 | 
1509 | ## Success Criteria
1510 | 
1511 | ### Quantitative
1512 | - [ ] All 5 main functions reduced by 3+ complexity points each
1513 | - [ ] All 5 quick wins implemented successfully
1514 | - [ ] Total complexity reduction: 30+ points
1515 | - [ ] No breaking changes (all tests passing)
1516 | - [ ] No performance regressions
1517 | 
1518 | ### Qualitative
1519 | - [ ] Code readability improved (subjective review)
1520 | - [ ] Functions easier to understand and maintain
1521 | - [ ] Better separation of concerns
1522 | - [ ] Improved testability (isolated functions)
1523 | 
1524 | ---
1525 | 
1526 | ## Risk Assessment Matrix
1527 | 
1528 | | Function | Risk | Testing Requirements | Critical Path | Priority |
1529 | |----------|------|---------------------|---------------|----------|
1530 | | configure_paths | Low | Unit + integration | No (setup only) | High |
1531 | | _search_by_tags_internal | Low | Unit + DB tests | Yes (core search) | High |
1532 | | consolidate | Medium | Integration tests | Yes (consolidation) | High |
1533 | | get_memory_growth | Low | Unit + API tests | No (analytics) | Medium |
1534 | | get_tag_usage_analytics | Low | Unit + API tests | No (analytics) | Medium |
1535 | | detect_gpu | Low | Unit tests | No (setup only) | Low |
1536 | | get_memory_timestamps | Low | Unit + DB tests | No (analytics) | Low |
1537 | | _get_memories_for_horizon | Medium | Integration tests | Yes (consolidation) | Medium |
1538 | | get_activity_breakdown | Low | Unit + API tests | No (analytics) | Low |
1539 | | get_memory_type_distribution | Low | Unit + API tests | No (analytics) | Low |
1540 | 
1541 | **Critical Path Functions (require careful testing):**
1542 | 1. _search_by_tags_internal - Core search functionality
1543 | 2. consolidate - Memory consolidation pipeline
1544 | 3. _get_memories_for_horizon - Consolidation memory selection
1545 | 
1546 | **Low-Risk Functions (easier to refactor):**
1547 | - All analytics endpoints (read-only, non-critical)
1548 | - Setup functions (configure_paths, detect_gpu)
1549 | 
1550 | ---
1551 | 
1552 | ## Testing Strategy
1553 | 
1554 | ### Unit Tests (per function)
1555 | - Test extracted functions independently
1556 | - Verify input/output contracts
1557 | - Test edge cases and error handling
1558 | 
1559 | ### Integration Tests
1560 | - Test critical path functions with real storage
1561 | - Verify no behavioral changes
1562 | - Performance benchmarks (before/after)
1563 | 
1564 | ### Regression Tests
1565 | - Run full test suite after each refactoring
1566 | - Verify API contracts unchanged
1567 | - Check performance hasn't degraded
1568 | 
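A contract-style regression test can pin the analytics API before refactoring begins (a sketch; `app` and the route prefix are assumptions about the project wiring):

```python
import pytest
from fastapi.testclient import TestClient

@pytest.mark.parametrize("period", ["week", "month", "quarter", "year"])
def test_memory_growth_contract(period):
    client = TestClient(app)  # hypothetical FastAPI app import
    assert client.get(f"/analytics/memory-growth?period={period}").status_code == 200

def test_memory_growth_rejects_unknown_period():
    assert TestClient(app).get("/analytics/memory-growth?period=decade").status_code == 400
```
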
1569 | ---
1570 | 
1571 | ## Next Steps
1572 | 
1573 | 1. **Review and approve** this Phase 2 analysis
1574 | 2. **Select implementation approach:**
1575 |    - Option A: Sequential (14 hours, single developer)
1576 |    - Option B: Parallel (~7 hours wall-clock, multiple developers)
1577 |    - Option C: Prioritized (7 hours for critical functions only)
1578 | 
1579 | 3. **Set up tracking:**
1580 |    - Create GitHub issues for each function
1581 |    - Track complexity reduction progress
1582 |    - Monitor test coverage
1583 | 
1584 | 4. **Begin Phase 2A** (highest priority functions)
1585 | 
1586 | ---
1587 | 
1588 | ## Appendix: Refactoring Patterns Used
1589 | 
1590 | ### Pattern 1: Extract Method
1591 | **Purpose:** Reduce function length and improve testability
1592 | **Used in:** All functions analyzed
1593 | **Example:** Platform detection, SQL query building
1594 | 
1595 | ### Pattern 2: Guard Clause
1596 | **Purpose:** Reduce nesting and improve readability
1597 | **Used in:** Tag search, config updates
1598 | **Example:** Early returns for validation
1599 | 
1600 | ### Pattern 3: Dict Lookup
1601 | **Purpose:** Replace if/elif chains with data-driven logic
1602 | **Used in:** Period configs, platform detection
1603 | **Example:** `PERIOD_CONFIGS[period]` instead of if/elif
1604 | 
1605 | ### Pattern 4: Context Manager
1606 | **Purpose:** Simplify resource management and cleanup
1607 | **Used in:** Consolidation sync management
1608 | **Example:** `async with SyncPauseContext(...)`
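
A functional equivalent of that context manager can be built with `contextlib` (a sketch; the `pause_sync`/`resume_sync` hook names are assumptions):

```python
from contextlib import asynccontextmanager

@asynccontextmanager
async def sync_paused(storage):
    """Pause backend sync while consolidation runs, resuming even on failure."""
    can_pause = hasattr(storage, 'pause_sync')
    if can_pause:
        await storage.pause_sync()
    try:
        yield
    finally:
        if can_pause:
            await storage.resume_sync()
```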
1609 | 
1610 | ### Pattern 5: Configuration Object
1611 | **Purpose:** Centralize related configuration data
1612 | **Used in:** Period analysis, time horizons
1613 | **Example:** `@dataclass PeriodConfig`
1614 | 
1615 | ---
1616 | 
1617 | ## Lessons from Phase 1
1618 | 
1619 | **What worked well:**
1620 | - Clear complexity scoring and prioritization
1621 | - Incremental approach (low-risk first)
1622 | - Automated testing validation
1623 | 
1624 | **Improvements for Phase 2:**
1625 | - More explicit refactoring examples (✅ done)
1626 | - Better risk assessment (✅ done)
1627 | - Parallel execution planning (✅ done)
1628 | 
1629 | ---
1630 | 
1631 | **End of Phase 2 Analysis**
1632 | **Total Functions Analyzed:** 10 (5 main + 5 quick wins)
1633 | **Total Complexity Reduction:** -39 points
1634 | **Total Time Estimate:** 12-15 hours
1635 | 