This is page 37 of 45. Use http://codebase.md/dicklesworthstone/llm_gateway_mcp_server?lines=true&page={x} to view the full context.

# Directory Structure

```
├── .cursorignore
├── .env.example
├── .envrc
├── .gitignore
├── additional_features.md
├── check_api_keys.py
├── completion_support.py
├── comprehensive_test.py
├── docker-compose.yml
├── Dockerfile
├── empirically_measured_model_speeds.json
├── error_handling.py
├── example_structured_tool.py
├── examples
│   ├── __init__.py
│   ├── advanced_agent_flows_using_unified_memory_system_demo.py
│   ├── advanced_extraction_demo.py
│   ├── advanced_unified_memory_system_demo.py
│   ├── advanced_vector_search_demo.py
│   ├── analytics_reporting_demo.py
│   ├── audio_transcription_demo.py
│   ├── basic_completion_demo.py
│   ├── cache_demo.py
│   ├── claude_integration_demo.py
│   ├── compare_synthesize_demo.py
│   ├── cost_optimization.py
│   ├── data
│   │   ├── sample_event.txt
│   │   ├── Steve_Jobs_Introducing_The_iPhone_compressed.md
│   │   └── Steve_Jobs_Introducing_The_iPhone_compressed.mp3
│   ├── docstring_refiner_demo.py
│   ├── document_conversion_and_processing_demo.py
│   ├── entity_relation_graph_demo.py
│   ├── filesystem_operations_demo.py
│   ├── grok_integration_demo.py
│   ├── local_text_tools_demo.py
│   ├── marqo_fused_search_demo.py
│   ├── measure_model_speeds.py
│   ├── meta_api_demo.py
│   ├── multi_provider_demo.py
│   ├── ollama_integration_demo.py
│   ├── prompt_templates_demo.py
│   ├── python_sandbox_demo.py
│   ├── rag_example.py
│   ├── research_workflow_demo.py
│   ├── sample
│   │   ├── article.txt
│   │   ├── backprop_paper.pdf
│   │   ├── buffett.pdf
│   │   ├── contract_link.txt
│   │   ├── legal_contract.txt
│   │   ├── medical_case.txt
│   │   ├── northwind.db
│   │   ├── research_paper.txt
│   │   ├── sample_data.json
│   │   └── text_classification_samples
│   │       ├── email_classification.txt
│   │       ├── news_samples.txt
│   │       ├── product_reviews.txt
│   │       └── support_tickets.txt
│   ├── sample_docs
│   │   └── downloaded
│   │       └── attention_is_all_you_need.pdf
│   ├── sentiment_analysis_demo.py
│   ├── simple_completion_demo.py
│   ├── single_shot_synthesis_demo.py
│   ├── smart_browser_demo.py
│   ├── sql_database_demo.py
│   ├── sse_client_demo.py
│   ├── test_code_extraction.py
│   ├── test_content_detection.py
│   ├── test_ollama.py
│   ├── text_classification_demo.py
│   ├── text_redline_demo.py
│   ├── tool_composition_examples.py
│   ├── tournament_code_demo.py
│   ├── tournament_text_demo.py
│   ├── unified_memory_system_demo.py
│   ├── vector_search_demo.py
│   ├── web_automation_instruction_packs.py
│   └── workflow_delegation_demo.py
├── LICENSE
├── list_models.py
├── marqo_index_config.json.example
├── mcp_protocol_schema_2025-03-25_version.json
├── mcp_python_lib_docs.md
├── mcp_tool_context_estimator.py
├── model_preferences.py
├── pyproject.toml
├── quick_test.py
├── README.md
├── resource_annotations.py
├── run_all_demo_scripts_and_check_for_errors.py
├── storage
│   └── smart_browser_internal
│       ├── locator_cache.db
│       ├── readability.js
│       └── storage_state.enc
├── test_client.py
├── test_connection.py
├── TEST_README.md
├── test_sse_client.py
├── test_stdio_client.py
├── tests
│   ├── __init__.py
│   ├── conftest.py
│   ├── integration
│   │   ├── __init__.py
│   │   └── test_server.py
│   ├── manual
│   │   ├── test_extraction_advanced.py
│   │   └── test_extraction.py
│   └── unit
│       ├── __init__.py
│       ├── test_cache.py
│       ├── test_providers.py
│       └── test_tools.py
├── TODO.md
├── tool_annotations.py
├── tools_list.json
├── ultimate_mcp_banner.webp
├── ultimate_mcp_logo.webp
├── ultimate_mcp_server
│   ├── __init__.py
│   ├── __main__.py
│   ├── cli
│   │   ├── __init__.py
│   │   ├── __main__.py
│   │   ├── commands.py
│   │   ├── helpers.py
│   │   └── typer_cli.py
│   ├── clients
│   │   ├── __init__.py
│   │   ├── completion_client.py
│   │   └── rag_client.py
│   ├── config
│   │   └── examples
│   │       └── filesystem_config.yaml
│   ├── config.py
│   ├── constants.py
│   ├── core
│   │   ├── __init__.py
│   │   ├── evaluation
│   │   │   ├── base.py
│   │   │   └── evaluators.py
│   │   ├── providers
│   │   │   ├── __init__.py
│   │   │   ├── anthropic.py
│   │   │   ├── base.py
│   │   │   ├── deepseek.py
│   │   │   ├── gemini.py
│   │   │   ├── grok.py
│   │   │   ├── ollama.py
│   │   │   ├── openai.py
│   │   │   └── openrouter.py
│   │   ├── server.py
│   │   ├── state_store.py
│   │   ├── tournaments
│   │   │   ├── manager.py
│   │   │   ├── tasks.py
│   │   │   └── utils.py
│   │   └── ums_api
│   │       ├── __init__.py
│   │       ├── ums_database.py
│   │       ├── ums_endpoints.py
│   │       ├── ums_models.py
│   │       └── ums_services.py
│   ├── exceptions.py
│   ├── graceful_shutdown.py
│   ├── services
│   │   ├── __init__.py
│   │   ├── analytics
│   │   │   ├── __init__.py
│   │   │   ├── metrics.py
│   │   │   └── reporting.py
│   │   ├── cache
│   │   │   ├── __init__.py
│   │   │   ├── cache_service.py
│   │   │   ├── persistence.py
│   │   │   ├── strategies.py
│   │   │   └── utils.py
│   │   ├── cache.py
│   │   ├── document.py
│   │   ├── knowledge_base
│   │   │   ├── __init__.py
│   │   │   ├── feedback.py
│   │   │   ├── manager.py
│   │   │   ├── rag_engine.py
│   │   │   ├── retriever.py
│   │   │   └── utils.py
│   │   ├── prompts
│   │   │   ├── __init__.py
│   │   │   ├── repository.py
│   │   │   └── templates.py
│   │   ├── prompts.py
│   │   └── vector
│   │       ├── __init__.py
│   │       ├── embeddings.py
│   │       └── vector_service.py
│   ├── tool_token_counter.py
│   ├── tools
│   │   ├── __init__.py
│   │   ├── audio_transcription.py
│   │   ├── base.py
│   │   ├── completion.py
│   │   ├── docstring_refiner.py
│   │   ├── document_conversion_and_processing.py
│   │   ├── enhanced-ums-lookbook.html
│   │   ├── entity_relation_graph.py
│   │   ├── excel_spreadsheet_automation.py
│   │   ├── extraction.py
│   │   ├── filesystem.py
│   │   ├── html_to_markdown.py
│   │   ├── local_text_tools.py
│   │   ├── marqo_fused_search.py
│   │   ├── meta_api_tool.py
│   │   ├── ocr_tools.py
│   │   ├── optimization.py
│   │   ├── provider.py
│   │   ├── pyodide_boot_template.html
│   │   ├── python_sandbox.py
│   │   ├── rag.py
│   │   ├── redline-compiled.css
│   │   ├── sentiment_analysis.py
│   │   ├── single_shot_synthesis.py
│   │   ├── smart_browser.py
│   │   ├── sql_databases.py
│   │   ├── text_classification.py
│   │   ├── text_redline_tools.py
│   │   ├── tournament.py
│   │   ├── ums_explorer.html
│   │   └── unified_memory_system.py
│   ├── utils
│   │   ├── __init__.py
│   │   ├── async_utils.py
│   │   ├── display.py
│   │   ├── logging
│   │   │   ├── __init__.py
│   │   │   ├── console.py
│   │   │   ├── emojis.py
│   │   │   ├── formatter.py
│   │   │   ├── logger.py
│   │   │   ├── panels.py
│   │   │   ├── progress.py
│   │   │   └── themes.py
│   │   ├── parse_yaml.py
│   │   ├── parsing.py
│   │   ├── security.py
│   │   └── text.py
│   └── working_memory_api.py
├── unified_memory_system_technical_analysis.md
└── uv.lock
```

# Files

--------------------------------------------------------------------------------
/unified_memory_system_technical_analysis.md:
--------------------------------------------------------------------------------

```markdown
   1 | # Technical Analysis of Unified Agent Memory and Cognitive System
   2 | 
   3 | ## System Overview and Architecture
   4 | 
   5 | The provided code implements a sophisticated `Unified Agent Memory and Cognitive System` designed for LLM agents. The system combines a structured memory hierarchy with process tracking, reasoning capabilities, and knowledge management. It is built as an asynchronous Python module that uses SQLite for persistence and layers rich memory-organization patterns on top.
   6 | 
   7 | ### Core Architecture
   8 | 
   9 | The system implements a cognitive architecture with four distinct memory levels:
  10 | 
  11 | 1. **Working Memory**: Temporarily active information (30-minute default TTL)
  12 | 2. **Episodic Memory**: Experiences and event records (7-day default TTL)
  13 | 3. **Semantic Memory**: Knowledge, facts, and insights (30-day default TTL)
  14 | 4. **Procedural Memory**: Skills and procedures (90-day default TTL)
  15 | 
  16 | These are implemented through a SQLite database using `aiosqlite` for asynchronous operations, with optimized configuration:
  17 | 
  18 | ```python
  19 | DEFAULT_DB_PATH = os.environ.get("AGENT_MEMORY_DB_PATH", "unified_agent_memory.db")
  20 | MAX_TEXT_LENGTH = 64000  # Maximum for text fields
  21 | CONNECTION_TIMEOUT = 10.0  # seconds
  22 | ISOLATION_LEVEL = None  # SQLite autocommit mode
  23 | 
  24 | # Memory management parameters
  25 | MAX_WORKING_MEMORY_SIZE = int(os.environ.get("MAX_WORKING_MEMORY_SIZE", "20"))
  26 | DEFAULT_TTL = {
  27 |     "working": 60 * 30,       # 30 minutes
  28 |     "episodic": 60 * 60 * 24 * 7, # 7 days
  29 |     "semantic": 60 * 60 * 24 * 30, # 30 days
  30 |     "procedural": 60 * 60 * 24 * 90 # 90 days
  31 | }
  32 | MEMORY_DECAY_RATE = float(os.environ.get("MEMORY_DECAY_RATE", "0.01"))  # Per hour
  33 | ```
  34 | 
  35 | The system uses various SQLite optimizations through pragmas:
  36 | 
  37 | ```python
  38 | SQLITE_PRAGMAS = [
  39 |     "PRAGMA journal_mode=WAL",  # Write-Ahead Logging
  40 |     "PRAGMA synchronous=NORMAL",  # Balance durability and performance
  41 |     "PRAGMA foreign_keys=ON",
  42 |     "PRAGMA temp_store=MEMORY",
  43 |     "PRAGMA cache_size=-32000",  # ~32MB cache
  44 |     "PRAGMA mmap_size=2147483647",  # Memory-mapped I/O
  45 |     "PRAGMA busy_timeout=30000"  # 30-second timeout
  46 | ]
  47 | ```
  48 | 
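The pragmas are applied per connection when it is opened. A minimal sketch of that step, assuming the constants above are in scope (the module's actual initialization helper may be named and structured differently):

```python
import aiosqlite

async def open_configured_connection(db_path: str = DEFAULT_DB_PATH) -> aiosqlite.Connection:
    """Open an aiosqlite connection in autocommit mode and apply the pragmas listed above (sketch)."""
    conn = await aiosqlite.connect(db_path, timeout=CONNECTION_TIMEOUT, isolation_level=ISOLATION_LEVEL)
    for pragma in SQLITE_PRAGMAS:
        await conn.execute(pragma)
    return conn
```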
  49 | ## Type System and Enumerations
  50 | 
  51 | The code defines comprehensive type hierarchies through enumerations:
  52 | 
  53 | ### Workflow and Action Status
  54 | ```python
  55 | class WorkflowStatus(str, Enum):
  56 |     ACTIVE = "active"
  57 |     PAUSED = "paused"
  58 |     COMPLETED = "completed"
  59 |     FAILED = "failed"
  60 |     ABANDONED = "abandoned"
  61 | 
  62 | class ActionStatus(str, Enum):
  63 |     PLANNED = "planned"
  64 |     IN_PROGRESS = "in_progress"
  65 |     COMPLETED = "completed"
  66 |     FAILED = "failed"
  67 |     SKIPPED = "skipped"
  68 | ```
  69 | 
  70 | ### Content Classification
  71 | ```python
  72 | class ActionType(str, Enum):
  73 |     TOOL_USE = "tool_use"
  74 |     REASONING = "reasoning"
  75 |     PLANNING = "planning"
  76 |     RESEARCH = "research"
  77 |     ANALYSIS = "analysis"
  78 |     DECISION = "decision"
  79 |     OBSERVATION = "observation"
  80 |     REFLECTION = "reflection"
  81 |     SUMMARY = "summary"
  82 |     CONSOLIDATION = "consolidation"
  83 |     MEMORY_OPERATION = "memory_operation"
  84 | 
  85 | class ArtifactType(str, Enum):
  86 |     FILE = "file"
  87 |     TEXT = "text"
  88 |     IMAGE = "image"
  89 |     TABLE = "table"
  90 |     CHART = "chart"
  91 |     CODE = "code"
  92 |     DATA = "data"
  93 |     JSON = "json" 
  94 |     URL = "url"
  95 | 
  96 | class ThoughtType(str, Enum):
  97 |     GOAL = "goal"
  98 |     QUESTION = "question"
  99 |     HYPOTHESIS = "hypothesis"
 100 |     INFERENCE = "inference"
 101 |     EVIDENCE = "evidence"
 102 |     CONSTRAINT = "constraint"
 103 |     PLAN = "plan"
 104 |     DECISION = "decision"
 105 |     REFLECTION = "reflection"
 106 |     CRITIQUE = "critique"
 107 |     SUMMARY = "summary"
 108 | ```
 109 | 
 110 | ### Memory System Types
 111 | ```python
 112 | class MemoryLevel(str, Enum):
 113 |     WORKING = "working"
 114 |     EPISODIC = "episodic"
 115 |     SEMANTIC = "semantic"
 116 |     PROCEDURAL = "procedural"
 117 | 
 118 | class MemoryType(str, Enum):
 119 |     OBSERVATION = "observation"
 120 |     ACTION_LOG = "action_log"
 121 |     TOOL_OUTPUT = "tool_output"
 122 |     ARTIFACT_CREATION = "artifact_creation"
 123 |     REASONING_STEP = "reasoning_step"
 124 |     FACT = "fact"
 125 |     INSIGHT = "insight"
 126 |     PLAN = "plan"
 127 |     QUESTION = "question"
 128 |     SUMMARY = "summary"
 129 |     REFLECTION = "reflection"
 130 |     SKILL = "skill"
 131 |     PROCEDURE = "procedure"
 132 |     PATTERN = "pattern"
 133 |     CODE = "code"
 134 |     JSON = "json"
 135 |     URL = "url"
 136 |     TEXT = "text"
 137 | 
 138 | class LinkType(str, Enum):
 139 |     RELATED = "related"
 140 |     CAUSAL = "causal"
 141 |     SEQUENTIAL = "sequential"
 142 |     HIERARCHICAL = "hierarchical"
 143 |     CONTRADICTS = "contradicts"
 144 |     SUPPORTS = "supports"
 145 |     GENERALIZES = "generalizes"
 146 |     SPECIALIZES = "specializes"
 147 |     FOLLOWS = "follows"
 148 |     PRECEDES = "precedes"
 149 |     TASK = "task"
 150 |     REFERENCES = "references"
 151 | ```
 152 | 
 153 | ## Database Schema
 154 | 
 155 | The system uses a sophisticated relational database schema with 15+ tables and numerous indices:
 156 | 
 157 | 1. **workflows**: Tracks high-level workflow containers
 158 | 2. **actions**: Records agent actions and tool executions
 159 | 3. **artifacts**: Stores outputs and files created during workflows
 160 | 4. **thought_chains**: Groups related thoughts (reasoning processes)
 161 | 5. **thoughts**: Individual reasoning steps and insights
 162 | 6. **memories**: Core memory storage with metadata and classification
 163 | 7. **memory_links**: Associative connections between memories
 164 | 8. **embeddings**: Vector embeddings for semantic search
 165 | 9. **cognitive_states**: Snapshots of agent cognitive state
 166 | 10. **reflections**: Meta-cognitive analysis outputs
 167 | 11. **memory_operations**: Audit log of memory system operations
 168 | 12. **tags, workflow_tags, action_tags, artifact_tags**: Tagging system
 169 | 13. **dependencies**: Tracks dependencies between actions
 170 | 14. **memory_fts**: Virtual FTS5 table for full-text search
 171 | 
 172 | Each table has appropriate foreign key constraints and indexes for performance optimization. The schema includes circular references between memories and thoughts, implemented with deferred constraints.
 173 | 
 174 | ## Connection Management
 175 | 
 176 | The database connection is managed through a sophisticated singleton pattern:
 177 | 
 178 | ```python
 179 | class DBConnection:
 180 |     """Context manager for database connections using aiosqlite."""
 181 | 
 182 |     _instance: Optional[aiosqlite.Connection] = None 
 183 |     _lock = asyncio.Lock()
 184 |     _db_path_used: Optional[str] = None
 185 |     _init_lock_timeout = 15.0  # seconds
 186 |     
 187 |     # Methods for connection management, initialization, transaction handling, etc.
 188 | ```
 189 | 
 190 | Key features include:
 191 | - Asynchronous context manager pattern with `__aenter__` and `__aexit__`
 192 | - Lock-protected singleton initialization with timeout
 193 | - Transaction context manager with automatic commit/rollback
 194 | - Schema initialization on first connection
 195 | - Custom SQLite function registration
 196 | 
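A simplified sketch of the context-manager and singleton behavior described above (schema initialization, transaction helpers, and custom function registration are omitted; attribute names mirror the snippet above):

```python
import asyncio
from typing import Optional

import aiosqlite

class DBConnection:
    """Shares a single aiosqlite connection across async contexts (illustrative sketch)."""

    _instance: Optional[aiosqlite.Connection] = None
    _lock = asyncio.Lock()
    _init_lock_timeout = 15.0  # seconds

    def __init__(self, db_path: str = DEFAULT_DB_PATH):
        self.db_path = db_path

    async def __aenter__(self) -> aiosqlite.Connection:
        # Lock-protected singleton initialization with a timeout.
        await asyncio.wait_for(DBConnection._lock.acquire(), timeout=self._init_lock_timeout)
        try:
            if DBConnection._instance is None:
                conn = await aiosqlite.connect(self.db_path, timeout=CONNECTION_TIMEOUT)
                # Pragma application and schema initialization happen here in the real class.
                DBConnection._instance = conn
        finally:
            DBConnection._lock.release()
        return DBConnection._instance

    async def __aexit__(self, exc_type, exc, tb) -> None:
        # The shared connection is deliberately kept open for reuse by later contexts.
        return None
```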
 197 | ## Utility Functions
 198 | 
 199 | The system includes several utility classes and functions:
 200 | 
 201 | ```python
 202 | def to_iso_z(ts: float) -> str:
 203 |     """Converts Unix timestamps to ISO-8601 with Z suffix."""
 204 |     # Implementation
 205 | 
 206 | class MemoryUtils:
 207 |     """Utility methods for memory operations."""
 208 |     
 209 |     @staticmethod
 210 |     def generate_id() -> str:
 211 |         """Generate a unique UUID V4 string for database records."""
 212 |         return str(uuid.uuid4())
 213 |     
 214 |     # Methods for serialization, validation, sequence generation, etc.
 215 | ```
 216 | 
 217 | Additional utility methods include:
 218 | - JSON serialization with robust error handling and truncation
 219 | - SQL identifier validation to prevent injection
 220 | - Tag processing to maintain taxonomies
 221 | - Access tracking to update statistics
 222 | - Operation logging for audit trails
 223 | 
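The timestamp helper has a straightforward implementation; a minimal sketch consistent with its docstring (the module's version may add validation or use a different precision):

```python
from datetime import datetime, timezone

def to_iso_z(ts: float) -> str:
    """Convert a Unix timestamp to an ISO-8601 UTC string with a trailing 'Z'."""
    return datetime.fromtimestamp(ts, tz=timezone.utc).isoformat().replace("+00:00", "Z")
```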
 224 | ## Vector Embeddings and Semantic Search
 225 | 
 226 | The system integrates with an external embedding service:
 227 | 
 228 | ```python
 229 | # Embedding configuration
 230 | DEFAULT_EMBEDDING_MODEL = "text-embedding-3-small"
 231 | EMBEDDING_DIMENSION = 384  # For the default model
 232 | SIMILARITY_THRESHOLD = 0.75
 233 | ```
 234 | 
 235 | Implementation includes:
 236 | - `_store_embedding()`: Generates and stores vector embeddings with error handling
 237 | - `_find_similar_memories()`: Performs semantic search with cosine similarity and filtering
 238 | - Integration with scikit-learn for similarity calculations
 239 | 
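The similarity step behind `_find_similar_memories()` can be sketched as follows; `rank_similar` is a hypothetical name, and the real helper also applies workflow, level, and type filters before ranking:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def rank_similar(query_vec, candidate_vecs, threshold=SIMILARITY_THRESHOLD, limit=5):
    """Return (index, score) pairs for candidates whose cosine similarity clears the threshold."""
    sims = cosine_similarity(np.asarray([query_vec]), np.asarray(candidate_vecs))[0]
    ranked = sorted(enumerate(sims), key=lambda pair: pair[1], reverse=True)
    return [(i, float(s)) for i, s in ranked if s >= threshold][:limit]
```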
 240 | ## Memory Relevance Calculation
 241 | 
 242 | The system implements a sophisticated memory relevance scoring algorithm:
 243 | 
 244 | ```python
 245 | def _compute_memory_relevance(importance, confidence, created_at, access_count, last_accessed):
 246 |     """Computes a relevance score based on multiple factors."""
 247 |     now = time.time()
 248 |     age_hours = (now - created_at) / 3600 if created_at else 0
 249 |     recency_factor = 1.0 / (1.0 + (now - (last_accessed or created_at)) / 86400)
 250 |     decayed_importance = max(0, importance * (1.0 - MEMORY_DECAY_RATE * age_hours))
 251 |     usage_boost = min(1.0 + (access_count / 10.0), 2.0) if access_count else 1.0
 252 |     relevance = (decayed_importance * usage_boost * confidence * recency_factor)
 253 |     return min(max(relevance, 0.0), 10.0)
 254 | ```
 255 | 
 256 | This function factors in:
 257 | - Base importance score (1-10 scale)
 258 | - Time-based decay of importance
 259 | - Usage frequency boost
 260 | - Confidence weighting
 261 | - Recency bias
 262 | 
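As a concrete illustration (hypothetical values, not a test case from the module), a memory with importance 6.0 and confidence 0.9 that was created 10 hours ago, accessed 4 times, and last touched an hour ago scores roughly 6.5:

```python
import time

score = _compute_memory_relevance(
    importance=6.0,
    confidence=0.9,
    created_at=time.time() - 10 * 3600,   # created 10 hours ago
    access_count=4,
    last_accessed=time.time() - 3600,     # last accessed an hour ago
)
# decayed importance 5.4 * usage boost 1.4 * confidence 0.9 * recency ~0.96 ≈ 6.5
```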
 263 | 
 264 | ## Core Memory Operations
 265 | 
 266 | The system implements a comprehensive set of operations for memory management through tool functions, each designed with standardized error handling and metrics tracking via decorators (`@with_tool_metrics`, `@with_error_handling`).
 267 | 
 268 | ### Memory Creation and Storage
 269 | 
 270 | The primary function for creating memories is `store_memory()`:
 271 | 
 272 | ```python
 273 | async def store_memory(
 274 |     workflow_id: str,
 275 |     content: str,
 276 |     memory_type: str,
 277 |     memory_level: str = MemoryLevel.EPISODIC.value,
 278 |     importance: float = 5.0,
 279 |     confidence: float = 1.0,
 280 |     description: Optional[str] = None,
 281 |     reasoning: Optional[str] = None,
 282 |     source: Optional[str] = None,
 283 |     tags: Optional[List[str]] = None,
 284 |     ttl: Optional[int] = None,
 285 |     context_data: Optional[Dict[str, Any]] = None,
 286 |     generate_embedding: bool = True,
 287 |     suggest_links: bool = True,
 288 |     link_suggestion_threshold: float = SIMILARITY_THRESHOLD,
 289 |     max_suggested_links: int = 3,
 290 |     action_id: Optional[str] = None,
 291 |     thought_id: Optional[str] = None,
 292 |     artifact_id: Optional[str] = None,
 293 |     db_path: str = DEFAULT_DB_PATH
 294 | ) -> Dict[str, Any]:
 295 | ```
 296 | 
 297 | This function:
 298 | 1. Validates input parameters (checking enum values, numeric ranges)
 299 | 2. Generates a UUID for the memory
 300 | 3. Records a timestamp
 301 | 4. Establishes database connections
 302 | 5. Performs existence checks for foreign keys
 303 | 6. Inserts the memory record with all metadata
 304 | 7. Optionally generates and stores vector embeddings for semantic search
 305 | 8. Identifies and suggests semantic links to related memories
 306 | 9. Updates workflow timestamps and logs the operation
 307 | 10. Returns a structured result with memory details and suggested links
 308 | 
 309 | Key parameters include:
 310 | - `workflow_id`: Required container for the memory
 311 | - `content`: The actual memory content text
 312 | - `memory_type`: Classification (e.g., "observation", "fact", "insight")
 313 | - `memory_level`: Cognitive level (e.g., "episodic", "semantic")
 314 | - `importance`/`confidence`: Scoring for relevance calculations (importance 1.0-10.0, confidence 0.0-1.0)
 315 | - `generate_embedding`: Whether to create vector embeddings for semantic search
 316 | - `suggest_links`: Whether to automatically find related memories
 317 | 
 318 | Memory creation automatically handles:
 319 | - Tag normalization and storage
 320 | - TTL determination (using defaults if not specified)
 321 | - Importance and confidence validation
 322 | - Creation of bidirectional links to similar memories
 323 | 
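An illustrative call (the argument values and the `workflow_id` variable are hypothetical, and the `memory_id` result key is assumed from the description above):

```python
result = await store_memory(
    workflow_id=workflow_id,                  # an existing workflow (hypothetical)
    content="The API rate limit is 60 requests per minute per key.",
    memory_type="fact",
    memory_level="semantic",
    importance=7.0,
    confidence=0.95,
    tags=["api", "limits"],
    generate_embedding=True,
    suggest_links=True,
)
new_memory_id = result["memory_id"]           # assumed result key
```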
 324 | ### Memory Retrieval and Search
 325 | 
 326 | The system offers multiple retrieval mechanisms:
 327 | 
 328 | #### Direct Retrieval by ID
 329 | 
 330 | ```python
 331 | async def get_memory_by_id(
 332 |     memory_id: str,
 333 |     include_links: bool = True,
 334 |     include_context: bool = True,
 335 |     context_limit: int = 5,
 336 |     db_path: str = DEFAULT_DB_PATH
 337 | ) -> Dict[str, Any]:
 338 | ```
 339 | 
 340 | This function:
 341 | 1. Fetches specific memory by ID
 342 | 2. Updates access statistics
 343 | 3. Optionally includes outgoing and incoming links
 344 | 4. Optionally includes semantically similar memories as context
 345 | 5. Checks TTL expiration
 346 | 
 347 | #### Keyword/Criteria-Based Search
 348 | 
 349 | ```python
 350 | async def query_memories(
 351 |     workflow_id: Optional[str] = None,
 352 |     memory_level: Optional[str] = None,
 353 |     memory_type: Optional[str] = None,
 354 |     search_text: Optional[str] = None,
 355 |     tags: Optional[List[str]] = None,
 356 |     min_importance: Optional[float] = None,
 357 |     max_importance: Optional[float] = None,
 358 |     min_confidence: Optional[float] = None,
 359 |     min_created_at_unix: Optional[int] = None,
 360 |     max_created_at_unix: Optional[int] = None,
 361 |     sort_by: str = "relevance",
 362 |     sort_order: str = "DESC",
 363 |     include_content: bool = True,
 364 |     include_links: bool = False,
 365 |     link_direction: str = "outgoing",
 366 |     limit: int = 10,
 367 |     offset: int = 0,
 368 |     db_path: str = DEFAULT_DB_PATH
 369 | ) -> Dict[str, Any]:
 370 | ```
 371 | 
 372 | This function provides powerful filtering capabilities:
 373 | - Workflow, level, type filters
 374 | - Full-text search via SQLite FTS5
 375 | - Tag filtering with array containment
 376 | - Importance/confidence ranges
 377 | - Creation time ranges
 378 | - Custom sorting options (relevance, importance, created_at, updated_at, etc.)
 379 | - Pagination via limit/offset
 380 | - Link inclusion options
 381 | 
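For example, a filtered query might look like this (values are illustrative):

```python
results = await query_memories(
    workflow_id=workflow_id,        # hypothetical existing workflow
    memory_level="semantic",
    search_text="rate limit",       # matched via the FTS5 index
    tags=["api"],
    min_importance=5.0,
    sort_by="relevance",
    limit=10,
)
```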
 382 | #### Semantic/Vector Search
 383 | 
 384 | ```python
 385 | async def search_semantic_memories(
 386 |     query: str,
 387 |     workflow_id: Optional[str] = None,
 388 |     limit: int = 5,
 389 |     threshold: float = SIMILARITY_THRESHOLD,
 390 |     memory_level: Optional[str] = None,
 391 |     memory_type: Optional[str] = None,
 392 |     include_content: bool = True,
 393 |     db_path: str = DEFAULT_DB_PATH
 394 | ) -> Dict[str, Any]:
 395 | ```
 396 | 
 397 | This implements vector similarity search:
 398 | 1. Generates embeddings for the query
 399 | 2. Finds memories with similar embeddings using cosine similarity
 400 | 3. Applies threshold and filters
 401 | 4. Updates access statistics for retrieved memories
 402 | 
 403 | #### Hybrid Search (Keyword + Vector)
 404 | 
 405 | ```python
 406 | async def hybrid_search_memories(
 407 |     query: str,
 408 |     workflow_id: Optional[str] = None,
 409 |     limit: int = 10,
 410 |     offset: int = 0,
 411 |     semantic_weight: float = 0.6,
 412 |     keyword_weight: float = 0.4,
 413 |     memory_level: Optional[str] = None,
 414 |     memory_type: Optional[str] = None,
 415 |     tags: Optional[List[str]] = None,
 416 |     min_importance: Optional[float] = None,
 417 |     max_importance: Optional[float] = None,
 418 |     min_confidence: Optional[float] = None,
 419 |     min_created_at_unix: Optional[int] = None,
 420 |     max_created_at_unix: Optional[int] = None,
 421 |     include_content: bool = True,
 422 |     include_links: bool = False,
 423 |     link_direction: str = "outgoing",
 424 |     db_path: str = DEFAULT_DB_PATH
 425 | ) -> Dict[str, Any]:
 426 | ```
 427 | 
 428 | This sophisticated search function:
 429 | 1. Combines semantic and keyword search results
 430 | 2. Normalizes and weights scores from both approaches
 431 | 3. Applies comprehensive filtering options
 432 | 4. Performs efficient batched database operations for large result sets
 433 | 5. Returns hybrid-scored results with detailed metadata
 434 | 
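The scoring boils down to a weighted sum of the normalized semantic and keyword scores; a minimal sketch of that blending step (the module's normalization details may differ):

```python
def combine_scores(semantic: float, keyword: float,
                   semantic_weight: float = 0.6, keyword_weight: float = 0.4) -> float:
    """Blend normalized (0-1) semantic and keyword scores into a single hybrid score."""
    total_weight = semantic_weight + keyword_weight
    if total_weight <= 0:
        return 0.0
    return (semantic_weight * semantic + keyword_weight * keyword) / total_weight
```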
 435 | ### Memory Updating and Maintenance
 436 | 
 437 | ```python
 438 | async def update_memory(
 439 |     memory_id: str,
 440 |     content: Optional[str] = None,
 441 |     importance: Optional[float] = None,
 442 |     confidence: Optional[float] = None,
 443 |     description: Optional[str] = None,
 444 |     reasoning: Optional[str] = None,
 445 |     tags: Optional[List[str]] = None,
 446 |     ttl: Optional[int] = None,
 447 |     memory_level: Optional[str] = None,
 448 |     regenerate_embedding: bool = False,
 449 |     db_path: str = DEFAULT_DB_PATH
 450 | ) -> Dict[str, Any]:
 451 | ```
 452 | 
 453 | This function allows updating memory attributes:
 454 | 1. Dynamically builds SQL UPDATE clauses for changed fields
 455 | 2. Optionally regenerates embeddings when content changes
 456 | 3. Maintains timestamps and history
 457 | 4. Returns detailed update information
 458 | 
 459 | ```python
 460 | async def delete_expired_memories(db_path: str = DEFAULT_DB_PATH) -> Dict[str, Any]:
 461 | ```
 462 | 
 463 | This maintenance function:
 464 | 1. Identifies memories that have reached their TTL
 465 | 2. Removes them in efficient batches
 466 | 3. Handles cascading deletions via foreign key constraints
 467 | 4. Logs operations for each affected workflow
 468 | 
 469 | ### Memory Linking and Relationships
 470 | 
 471 | ```python
 472 | async def create_memory_link(
 473 |     source_memory_id: str,
 474 |     target_memory_id: str,
 475 |     link_type: str,
 476 |     strength: float = 1.0,
 477 |     description: Optional[str] = None,
 478 |     db_path: str = DEFAULT_DB_PATH
 479 | ) -> Dict[str, Any]:
 480 | ```
 481 | 
 482 | This function creates directional associations between memories:
 483 | 1. Prevents self-linking
 484 | 2. Validates link types against `LinkType` enum
 485 | 3. Ensures link strength is in valid range (0.0-1.0)
 486 | 4. Uses UPSERT pattern for idempotency
 487 | 5. Returns link details
 488 | 
 489 | ```python
 490 | async def get_linked_memories(
 491 |     memory_id: str,
 492 |     direction: str = "both",
 493 |     link_type: Optional[str] = None,
 494 |     limit: int = 10,
 495 |     include_memory_details: bool = True,
 496 |     db_path: str = DEFAULT_DB_PATH
 497 | ) -> Dict[str, Any]:
 498 | ```
 499 | 
 500 | This retrieval function:
 501 | 1. Gets outgoing and/or incoming links
 502 | 2. Optionally filters by link type
 503 | 3. Includes detailed information about linked memories
 504 | 4. Updates access statistics
 505 | 5. Returns structured link information
 506 | 
 507 | ## Thought Chains and Reasoning
 508 | 
 509 | The system implements a sophisticated thought chain mechanism for tracking reasoning:
 510 | 
 511 | ### Thought Chain Creation and Management
 512 | 
 513 | ```python
 514 | async def create_thought_chain(
 515 |     workflow_id: str,
 516 |     title: str,
 517 |     initial_thought: Optional[str] = None,
 518 |     initial_thought_type: str = "goal",
 519 |     db_path: str = DEFAULT_DB_PATH
 520 | ) -> Dict[str, Any]:
 521 | ```
 522 | 
 523 | This function:
 524 | 1. Creates a container for related thoughts
 525 | 2. Optionally adds an initial thought (goal, hypothesis, etc.)
 526 | 3. Ensures atomicity through transaction management
 527 | 4. Returns chain details with ID and creation timestamp
 528 | 
 529 | ```python
 530 | async def record_thought(
 531 |     workflow_id: str,
 532 |     content: str,
 533 |     thought_type: str = "inference",
 534 |     thought_chain_id: Optional[str] = None,
 535 |     parent_thought_id: Optional[str] = None,
 536 |     relevant_action_id: Optional[str] = None,
 537 |     relevant_artifact_id: Optional[str] = None,
 538 |     relevant_memory_id: Optional[str] = None,
 539 |     db_path: str = DEFAULT_DB_PATH,
 540 |     conn: Optional[aiosqlite.Connection] = None
 541 | ) -> Dict[str, Any]:
 542 | ```
 543 | 
 544 | This function records individual reasoning steps:
 545 | 1. Validates thought type against `ThoughtType` enum
 546 | 2. Handles complex foreign key relationships
 547 | 3. Automatically determines target thought chain if not specified
 548 | 4. Manages parent-child relationships for hierarchical reasoning
 549 | 5. Creates links to related actions, artifacts, and memories
 550 | 6. Automatically creates semantic memory entries for important thoughts
 551 | 7. Supports transaction nesting through optional connection parameter
 552 | 
 553 | ```python
 554 | async def get_thought_chain(
 555 |     thought_chain_id: str,
 556 |     include_thoughts: bool = True,
 557 |     db_path: str = DEFAULT_DB_PATH
 558 | ) -> Dict[str, Any]:
 559 | ```
 560 | 
 561 | This retrieval function:
 562 | 1. Fetches chain metadata
 563 | 2. Optionally includes all thoughts in sequence
 564 | 3. Returns formatted timestamps and structured data
 565 | 
 566 | ### Thought Chain Visualization
 567 | 
 568 | ```python
 569 | async def visualize_reasoning_chain(
 570 |     thought_chain_id: str,
 571 |     output_format: str = "mermaid",
 572 |     db_path: str = DEFAULT_DB_PATH
 573 | ) -> Dict[str, Any]:
 574 | ```
 575 | 
 576 | This function generates visualizations:
 577 | 1. Retrieves the complete thought chain
 578 | 2. For Mermaid format:
 579 |    - Generates a directed graph representation
 580 |    - Creates node definitions with appropriate shapes based on thought types
 581 |    - Handles parent-child relationships with connections
 582 |    - Adds external links to related entities
 583 |    - Implements CSS styling for different thought types
 584 | 3. For JSON format:
 585 |    - Creates a hierarchical tree structure
 586 |    - Maps parent-child relationships
 587 |    - Includes all metadata
 588 | 4. Returns the visualization content in the requested format
 589 | 
 590 | The Mermaid generation happens through a helper function `_generate_thought_chain_mermaid()` that constructs a detailed graph with styling:
 591 | 
 592 | ```python
 593 | async def _generate_thought_chain_mermaid(thought_chain: Dict[str, Any]) -> str:
 594 |     # Implementation creates a complex Mermaid diagram with:
 595 |     # - Header node for the chain
 596 |     # - Nodes for each thought with type-specific styling
 597 |     # - Parent-child connections
 598 |     # - External links to actions, artifacts, memories
 599 |     # - Comprehensive styling definitions
 600 | ```
 601 | 
 602 | ## Working Memory Management
 603 | 
 604 | The system implements a working memory with explicit capacity management:
 605 | 
 606 | ### Working Memory Operations
 607 | 
 608 | ```python
 609 | async def get_working_memory(
 610 |     context_id: str,
 611 |     include_content: bool = True,
 612 |     include_links: bool = True,
 613 |     db_path: str = DEFAULT_DB_PATH
 614 | ) -> Dict[str, Any]:
 615 | ```
 616 | 
 617 | This function:
 618 | 1. Retrieves the current active memory set for a context
 619 | 2. Updates access statistics
 620 | 3. Optionally includes memory content
 621 | 4. Optionally includes links between memories
 622 | 5. Returns a structured view of working memory
 623 | 
 624 | ```python
 625 | async def focus_memory(
 626 |     memory_id: str,
 627 |     context_id: str,
 628 |     add_to_working: bool = True,
 629 |     db_path: str = DEFAULT_DB_PATH
 630 | ) -> Dict[str, Any]:
 631 | ```
 632 | 
 633 | This function:
 634 | 1. Sets a specific memory as the current focus of attention
 635 | 2. Optionally adds the memory to working memory if not present
 636 | 3. Ensures memory and context workflow consistency
 637 | 4. Updates cognitive state records
 638 | 5. Returns focus update confirmation
 639 | 
 640 | ```python
 641 | async def _add_to_active_memories(conn: aiosqlite.Connection, context_id: str, memory_id: str) -> bool:
 642 | ```
 643 | 
 644 | This internal helper function implements working memory capacity management:
 645 | 1. Checks if memory is already in working memory
 646 | 2. Enforces the `MAX_WORKING_MEMORY_SIZE` limit
 647 | 3. When capacity is reached, computes relevance scores for all memories
 648 | 4. Removes least relevant memory to make space
 649 | 5. Returns success/failure status
 650 | 
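The eviction choice can be sketched as follows, assuming `working` is a list of memory rows already loaded for the context (`pick_eviction_candidate` is a hypothetical name; the real helper operates on the cognitive state tables directly):

```python
def pick_eviction_candidate(working: list) -> str:
    """Return the memory_id of the least relevant working-memory entry."""
    scored = [
        (
            _compute_memory_relevance(
                m["importance"], m["confidence"], m["created_at"],
                m["access_count"], m["last_accessed"],
            ),
            m["memory_id"],
        )
        for m in working
    ]
    return min(scored)[1]   # lowest relevance score gets evicted to free a slot
```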
 651 | ```python
 652 | async def optimize_working_memory(
 653 |     context_id: str,
 654 |     target_size: int = MAX_WORKING_MEMORY_SIZE,
 655 |     strategy: str = "balanced",
 656 |     db_path: str = DEFAULT_DB_PATH
 657 | ) -> Dict[str, Any]:
 658 | ```
 659 | 
 660 | This function performs optimization:
 661 | 1. Implements multiple strategies:
 662 |    - `balanced`: Considers all relevance factors
 663 |    - `importance`: Prioritizes importance scores
 664 |    - `recency`: Prioritizes recently accessed memories
 665 |    - `diversity`: Ensures variety of memory types
 666 | 2. Scores memories based on strategy
 667 | 3. Selects optimal subset to retain
 668 | 4. Updates the cognitive state
 669 | 5. Returns detailed optimization results
 670 | 
 671 | ```python
 672 | async def auto_update_focus(
 673 |     context_id: str,
 674 |     recent_actions_count: int = 3,
 675 |     db_path: str = DEFAULT_DB_PATH
 676 | ) -> Dict[str, Any]:
 677 | ```
 678 | 
 679 | This function implements automatic attention shifting:
 680 | 1. Analyzes memories currently in working memory
 681 | 2. Scores them based on relevance and recent activity
 682 | 3. Uses the `_calculate_focus_score()` helper with sophisticated heuristics
 683 | 4. Updates focus to the highest-scoring memory
 684 | 5. Returns details of the focus shift
 685 | 
 686 | The focus scoring implements multiple weight factors:
 687 | 
 688 | ```python
 689 | def _calculate_focus_score(memory: Dict, recent_action_ids: List[str], now_unix: int) -> float:
 690 |     """Calculate focus priority score based on multiple factors."""
 691 |     score = 0.0
 692 |     
 693 |     # Base relevance (importance, confidence, recency, usage)
 694 |     relevance = _compute_memory_relevance(...)
 695 |     score += relevance * 0.6  # Heavily weighted
 696 |     
 697 |     # Boost for recent action relationship
 698 |     if memory.get("action_id") in recent_action_ids:
 699 |         score += 3.0  # Significant boost
 700 |     
 701 |     # Type-based boosts for attention-worthy types
 702 |     if memory.get("memory_type") in ["question", "plan", "insight"]:
 703 |         score += 1.5
 704 |     
 705 |     # Memory level boosts
 706 |     if memory.get("memory_level") == MemoryLevel.SEMANTIC.value:
 707 |         score += 0.5
 708 |     elif memory.get("memory_level") == MemoryLevel.PROCEDURAL.value:
 709 |         score += 0.7
 710 |         
 711 |     return max(0.0, score)
 712 | ```
 713 | 
 714 | ## Cognitive State Management
 715 | 
 716 | The system implements cognitive state persistence for context restoration:
 717 | 
 718 | ```python
 719 | async def save_cognitive_state(
 720 |     workflow_id: str,
 721 |     title: str,
 722 |     working_memory_ids: List[str],
 723 |     focus_area_ids: Optional[List[str]] = None,
 724 |     context_action_ids: Optional[List[str]] = None,
 725 |     current_goal_thought_ids: Optional[List[str]] = None,
 726 |     db_path: str = DEFAULT_DB_PATH
 727 | ) -> Dict[str, Any]:
 728 | ```
 729 | 
 730 | This function:
 731 | 1. Validates that all provided IDs exist and belong to the workflow
 732 | 2. Marks previous states as not latest
 733 | 3. Serializes state components
 734 | 4. Records a timestamped cognitive state snapshot
 735 | 5. Returns confirmation with state ID
 736 | 
 737 | ```python
 738 | async def load_cognitive_state(
 739 |     workflow_id: str,
 740 |     state_id: Optional[str] = None,
 741 |     db_path: str = DEFAULT_DB_PATH
 742 | ) -> Dict[str, Any]:
 743 | ```
 744 | 
 745 | This function:
 746 | 1. Loads either a specific state or the latest state
 747 | 2. Deserializes state components
 748 | 3. Logs the operation
 749 | 4. Returns full state details
 750 | 
 751 | ```python
 752 | async def get_workflow_context(
 753 |     workflow_id: str,
 754 |     recent_actions_limit: int = 10,
 755 |     important_memories_limit: int = 5,
 756 |     key_thoughts_limit: int = 5,
 757 |     db_path: str = DEFAULT_DB_PATH
 758 | ) -> Dict[str, Any]:
 759 | ```
 760 | 
 761 | This function builds a comprehensive context summary:
 762 | 1. Fetches workflow metadata (title, goal, status)
 763 | 2. Gets latest cognitive state
 764 | 3. Retrieves recent actions
 765 | 4. Includes important memories
 766 | 5. Adds key thoughts (goals, decisions, reflections)
 767 | 6. Returns a structured context overview
 768 | 
 769 | ## Action and Artifact Tracking
 770 | 
 771 | The system tracks all agent actions and created artifacts:
 772 | 
 773 | ### Action Management
 774 | 
 775 | ```python
 776 | async def record_action_start(
 777 |     workflow_id: str,
 778 |     action_type: str,
 779 |     reasoning: str,
 780 |     tool_name: Optional[str] = None,
 781 |     tool_args: Optional[Dict[str, Any]] = None,
 782 |     title: Optional[str] = None,
 783 |     parent_action_id: Optional[str] = None,
 784 |     tags: Optional[List[str]] = None,
 785 |     related_thought_id: Optional[str] = None,
 786 |     db_path: str = DEFAULT_DB_PATH
 787 | ) -> Dict[str, Any]:
 788 | ```
 789 | 
 790 | This function:
 791 | 1. Validates action type against `ActionType` enum
 792 | 2. Requires reasoning explanation
 793 | 3. Validates references to workflow, parent action, and related thought
 794 | 4. Auto-generates title if not provided
 795 | 5. Creates a corresponding episodic memory entry
 796 | 6. Returns action details with ID and start time
 797 | 
 798 | ```python
 799 | async def record_action_completion(
 800 |     action_id: str,
 801 |     status: str = "completed",
 802 |     tool_result: Optional[Any] = None,
 803 |     summary: Optional[str] = None,
 804 |     conclusion_thought: Optional[str] = None,
 805 |     conclusion_thought_type: str = "inference",
 806 |     db_path: str = DEFAULT_DB_PATH
 807 | ) -> Dict[str, Any]:
 808 | ```
 809 | 
 810 | This function:
 811 | 1. Validates completion status (completed, failed, skipped)
 812 | 2. Records tool execution result
 813 | 3. Updates the action record
 814 | 4. Optionally adds a concluding thought
 815 | 5. Updates the linked episodic memory with outcome
 816 | 6. Returns completion confirmation
 817 | 
 818 | ```python
 819 | async def get_action_details(
 820 |     action_id: Optional[str] = None,
 821 |     action_ids: Optional[List[str]] = None,
 822 |     include_dependencies: bool = False,
 823 |     db_path: str = DEFAULT_DB_PATH
 824 | ) -> Dict[str, Any]:
 825 | ```
 826 | 
 827 | This function:
 828 | 1. Retrieves details for one or more actions
 829 | 2. Deserializes tool args and results
 830 | 3. Includes associated tags
 831 | 4. Optionally includes dependency relationships
 832 | 5. Returns comprehensive action information
 833 | 
 834 | ```python
 835 | async def get_recent_actions(
 836 |     workflow_id: str,
 837 |     limit: int = 5,
 838 |     action_type: Optional[str] = None,
 839 |     status: Optional[str] = None,
 840 |     include_tool_results: bool = True,
 841 |     include_reasoning: bool = True,
 842 |     db_path: str = DEFAULT_DB_PATH
 843 | ) -> Dict[str, Any]:
 844 | ```
 845 | 
 846 | This function:
 847 | 1. Gets the most recent actions for a workflow
 848 | 2. Applies type and status filters
 849 | 3. Controls inclusion of potentially large fields (tool results, reasoning)
 850 | 4. Returns a time-ordered action list
 851 | 
 852 | ### Action Dependencies
 853 | 
 854 | ```python
 855 | async def add_action_dependency(
 856 |     source_action_id: str,
 857 |     target_action_id: str,
 858 |     dependency_type: str = "requires",
 859 |     db_path: str = DEFAULT_DB_PATH
 860 | ) -> Dict[str, Any]:
 861 | ```
 862 | 
 863 | This function:
 864 | 1. Creates an explicit dependency relationship between actions
 865 | 2. Ensures actions belong to the same workflow
 866 | 3. Handles duplicate dependency declarations
 867 | 4. Returns dependency details
 868 | 
 869 | ```python
 870 | async def get_action_dependencies(
 871 |     action_id: str,
 872 |     direction: str = "downstream",
 873 |     dependency_type: Optional[str] = None,
 874 |     include_details: bool = False,
 875 |     db_path: str = DEFAULT_DB_PATH
 876 | ) -> Dict[str, Any]:
 877 | ```
 878 | 
 879 | This function:
 880 | 1. Retrieves actions that depend on this one (downstream) or
 881 | 2. Retrieves actions this one depends on (upstream)
 882 | 3. Optionally filters by dependency type
 883 | 4. Optionally includes full action details
 884 | 5. Returns structured dependency information
 885 | 
 886 | ### Artifact Management
 887 | 
 888 | ```python
 889 | async def record_artifact(
 890 |     workflow_id: str,
 891 |     name: str,
 892 |     artifact_type: str,
 893 |     action_id: Optional[str] = None,
 894 |     description: Optional[str] = None,
 895 |     path: Optional[str] = None,
 896 |     content: Optional[str] = None,
 897 |     metadata: Optional[Dict[str, Any]] = None,
 898 |     is_output: bool = False,
 899 |     tags: Optional[List[str]] = None,
 900 |     db_path: str = DEFAULT_DB_PATH
 901 | ) -> Dict[str, Any]:
 902 | ```
 903 | 
 904 | This function:
 905 | 1. Validates artifact type against `ArtifactType` enum
 906 | 2. Handles content truncation for large text artifacts
 907 | 3. Creates a corresponding episodic memory entry
 908 | 4. Records relationships to creating action
 909 | 5. Applies tags and metadata
 910 | 6. Returns artifact details with ID
 911 | 
 912 | ```python
 913 | async def get_artifacts(
 914 |     workflow_id: str,
 915 |     artifact_type: Optional[str] = None,
 916 |     tag: Optional[str] = None,
 917 |     is_output: Optional[bool] = None,
 918 |     include_content: bool = False,
 919 |     limit: int = 10,
 920 |     db_path: str = DEFAULT_DB_PATH
 921 | ) -> Dict[str, Any]:
 922 | ```
 923 | 
 924 | This function:
 925 | 1. Lists artifacts for a workflow with filtering
 926 | 2. Controls inclusion of potentially large content
 927 | 3. Deserializes metadata
 928 | 4. Returns artifact list with details
 929 | 
 930 | ```python
 931 | async def get_artifact_by_id(
 932 |     artifact_id: str,
 933 |     include_content: bool = True,
 934 |     db_path: str = DEFAULT_DB_PATH
 935 | ) -> Dict[str, Any]:
 936 | ```
 937 | 
 938 | This function:
 939 | 1. Retrieves a specific artifact by ID
 940 | 2. Updates access stats for related memory
 941 | 3. Returns complete artifact details
 942 | 
 943 | ## Meta-Cognitive Capabilities
 944 | 
 945 | The system implements sophisticated meta-cognitive functions:
 946 | 
 947 | ### Memory Consolidation
 948 | 
 949 | ```python
 950 | async def consolidate_memories(
 951 |     workflow_id: Optional[str] = None,
 952 |     target_memories: Optional[List[str]] = None,
 953 |     consolidation_type: str = "summary",
 954 |     query_filter: Optional[Dict[str, Any]] = None,
 955 |     max_source_memories: int = 20,
 956 |     prompt_override: Optional[str] = None,
 957 |     provider: str = LLMGatewayProvider.OPENAI.value,
 958 |     model: Optional[str] = None,
 959 |     store_result: bool = True,
 960 |     store_as_level: str = MemoryLevel.SEMANTIC.value,
 961 |     store_as_type: Optional[str] = None,
 962 |     max_tokens: int = 1000,
 963 |     db_path: str = DEFAULT_DB_PATH
 964 | ) -> Dict[str, Any]:
 965 | ```
 966 | 
 967 | This function implements memory consolidation:
 968 | 1. Allows selecting source memories:
 969 |    - Explicit memory ID list or
 970 |    - Query-based filtering or
 971 |    - Recent important memories from workflow
 972 | 2. Supports multiple consolidation types:
 973 |    - `summary`: Comprehensive integration of information
 974 |    - `insight`: Pattern recognition and implications
 975 |    - `procedural`: Generalized steps or methods
 976 |    - `question`: Key information gaps or uncertainties
 977 | 3. Generates LLM prompts with detailed instructions
 978 | 4. Makes external LLM API calls to process memories
 979 | 5. Automatically stores the result as a new memory
 980 | 6. Creates bidirectional links to source memories
 981 | 7. Returns consolidated content and details
 982 | 
 983 | The consolidation prompt generation is handled by `_generate_consolidation_prompt()`:
 984 | 
 985 | ```python
 986 | def _generate_consolidation_prompt(memories: List[Dict], consolidation_type: str) -> str:
 987 |     # Formats memory details with truncation
 988 |     # Adds type-specific instruction templates:
 989 |     # - summary: comprehensive integration
 990 |     # - insight: pattern identification
 991 |     # - procedural: generalized methods
 992 |     # - question: information gaps
 993 | ```
 994 | 
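A hedged sketch of that prompt assembly (the instruction wording and helper name here are illustrative, not the module's actual template):

```python
def sketch_consolidation_prompt(memories: list, consolidation_type: str) -> str:
    instructions = {
        "summary": "Write a comprehensive summary that integrates the memories below.",
        "insight": "Identify patterns and non-obvious implications across the memories below.",
        "procedural": "Distill the memories below into generalized, reusable steps.",
        "question": "List the most important open questions these memories leave unanswered.",
    }
    lines = [instructions[consolidation_type], ""]
    for i, mem in enumerate(memories, 1):
        content = (mem.get("content") or "")[:1000]       # truncate long entries
        lines.append(f"Memory {i} ({mem.get('memory_type', 'unknown')}): {content}")
    return "\n".join(lines)
```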
 995 | ### Reflection Generation
 996 | 
 997 | ```python
 998 | async def generate_reflection(
 999 |     workflow_id: str,
1000 |     reflection_type: str = "summary",
1001 |     recent_ops_limit: int = 30,
1002 |     provider: str = LLMGatewayProvider.OPENAI.value,
1003 |     model: Optional[str] = None,
1004 |     max_tokens: int = 1000,
1005 |     db_path: str = DEFAULT_DB_PATH
1006 | ) -> Dict[str, Any]:
1007 | ```
1008 | 
1009 | This meta-cognitive function:
1010 | 1. Analyzes recent memory operations (from the operation log)
1011 | 2. Supports multiple reflection types:
1012 |    - `summary`: Overview of recent activity
1013 |    - `progress`: Analysis of goal advancement
1014 |    - `gaps`: Knowledge and understanding deficits
1015 |    - `strengths`: Effective patterns and insights
1016 |    - `plan`: Strategic next steps
1017 | 3. Generates sophisticated prompts using `_generate_reflection_prompt()`
1018 | 4. Makes external LLM calls to perform analysis
1019 | 5. Stores the reflection in the reflection table
1020 | 6. Returns reflection content and metadata
1021 | 
1022 | ### Memory Promotion and Evolution
1023 | 
1024 | ```python
1025 | async def promote_memory_level(
1026 |     memory_id: str,
1027 |     target_level: Optional[str] = None,
1028 |     min_access_count_episodic: int = 5,
1029 |     min_confidence_episodic: float = 0.8,
1030 |     min_access_count_semantic: int = 10,
1031 |     min_confidence_semantic: float = 0.9,
1032 |     db_path: str = DEFAULT_DB_PATH
1033 | ) -> Dict[str, Any]:
1034 | ```
1035 | 
1036 | This function implements memory evolution:
1037 | 1. Checks if a memory meets criteria for promotion to a higher level
1038 | 2. Implements promotion paths:
1039 |    - Episodic → Semantic (experiences to knowledge)
1040 |    - Semantic → Procedural (knowledge to skills, with type constraints)
1041 | 3. Applies configurable criteria based on:
1042 |    - Access frequency (demonstrates importance)
1043 |    - Confidence level (demonstrates reliability)
1044 |    - Memory type (suitability for procedural level)
1045 | 4. Updates the memory level if criteria are met
1046 | 5. Returns promotion status with reason
1047 | 
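The criteria check itself is a set of threshold comparisons; a simplified sketch of the episodic-to-semantic path (hypothetical helper name; the real function also handles explicit target levels and the semantic-to-procedural type constraints):

```python
def meets_semantic_promotion_criteria(memory: dict,
                                      min_access_count: int = 5,
                                      min_confidence: float = 0.8) -> bool:
    """Episodic memories qualify for promotion once they are used often enough and trusted enough."""
    return (
        memory["memory_level"] == "episodic"
        and memory["access_count"] >= min_access_count
        and memory["confidence"] >= min_confidence
    )
```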
1048 | ### Text Summarization
1049 | 
1050 | ```python
1051 | async def summarize_text(
1052 |     text_to_summarize: str,
1053 |     target_tokens: int = 500,
1054 |     prompt_template: Optional[str] = None,
1055 |     provider: str = "openai",
1056 |     model: Optional[str] = None,
1057 |     workflow_id: Optional[str] = None,
1058 |     record_summary: bool = False,
1059 |     db_path: str = DEFAULT_DB_PATH
1060 | ) -> Dict[str, Any]:
1061 | ```
1062 | 
1063 | This utility function:
1064 | 1. Summarizes text content using LLM
1065 | 2. Uses configurable prompt templates
1066 | 3. Controls summary length via token targeting
1067 | 4. Optionally stores summary as memory
1068 | 5. Returns summary text and metadata
1069 | 
1070 | ### Context Summarization
1071 | 
1072 | ```python
1073 | async def summarize_context_block(
1074 |     text_to_summarize: str,
1075 |     target_tokens: int = 500,
1076 |     context_type: str = "actions",
1077 |     workflow_id: Optional[str] = None,
1078 |     provider: str = LLMGatewayProvider.ANTHROPIC.value,
1079 |     model: Optional[str] = "claude-3-5-haiku-20241022",
1080 |     db_path: str = DEFAULT_DB_PATH
1081 | ) -> Dict[str, Any]:
1082 | ```
1083 | 
1084 | This specialized function:
1085 | 1. Summarizes specific types of context (actions, memories, thoughts)
1086 | 2. Uses custom prompts optimized for each context type
1087 | 3. Designed for agent context window management
1088 | 4. Returns focused summaries with compression ratio
1089 | 
1090 | ## Reporting and Visualization
1091 | 
1092 | The system implements sophisticated reporting capabilities:
1093 | 
1094 | ```python
1095 | async def generate_workflow_report(
1096 |     workflow_id: str,
1097 |     report_format: str = "markdown",
1098 |     include_details: bool = True,
1099 |     include_thoughts: bool = True,
1100 |     include_artifacts: bool = True,
1101 |     style: Optional[str] = "professional",
1102 |     db_path: str = DEFAULT_DB_PATH
1103 | ) -> Dict[str, Any]:
1104 | ```
1105 | 
1106 | This function creates comprehensive reports:
1107 | 1. Fetches complete workflow details
1108 | 2. Supports multiple formats:
1109 |    - `markdown`: Text-based structured report
1110 |    - `html`: Web-viewable report with CSS
1111 |    - `json`: Machine-readable structured data
1112 |    - `mermaid`: Diagrammatic representation
1113 | 3. Implements multiple styling options:
1114 |    - `professional`: Formal business report style
1115 |    - `concise`: Brief summary focused on key points
1116 |    - `narrative`: Story-like descriptive format
1117 |    - `technical`: Data-oriented technical format
1118 | 4. Uses helper functions for specific formats:
1119 |    - `_generate_professional_report()`
1120 |    - `_generate_concise_report()`
1121 |    - `_generate_narrative_report()`
1122 |    - `_generate_technical_report()`
1123 |    - `_generate_mermaid_diagram()`
1124 | 5. Returns report content with metadata
1125 | 
1126 | Memory network visualization is implemented through:
1127 | 
1128 | ```python
1129 | async def visualize_memory_network(
1130 |     workflow_id: Optional[str] = None,
1131 |     center_memory_id: Optional[str] = None,
1132 |     depth: int = 1,
1133 |     max_nodes: int = 30,
1134 |     memory_level: Optional[str] = None,
1135 |     memory_type: Optional[str] = None,
1136 |     output_format: str = "mermaid",
1137 |     db_path: str = DEFAULT_DB_PATH
1138 | ) -> Dict[str, Any]:
1139 | ```
1140 | 
1141 | This function:
1142 | 1. Creates a visual representation of memory relationships
1143 | 2. Supports workflow-wide view or centered on specific memory
1144 | 3. Uses breadth-first search to explore links to depth limit
1145 | 4. Applies memory type and level filters
1146 | 5. Generates Mermaid diagram with:
1147 |    - Nodes styled by memory level
1148 |    - Links showing relationship types
1149 |    - Center node highlighting
1150 | 6. Returns complete diagram code
1151 | 
1152 | ## Detailed Key Tool Functions (Additional Core Functionality)
1153 | 
1154 | Below I'll cover, in detail, several more important tool functions that implement key functionality:
1155 | 
1156 | ## LLM Integration
1157 | 
1158 | The system integrates with external LLM providers through the `ultimate_mcp_server` package:
1159 | 
1160 | ```python
1161 | from ultimate_mcp_server.constants import Provider as LLMGatewayProvider
1162 | from ultimate_mcp_server.core.providers.base import get_provider
1163 | ```
1164 | 
1165 | This enables:
1166 | 1. Dynamic provider selection (OpenAI, Anthropic, etc.)
1167 | 2. Model specification
1168 | 3. Standardized prompting
1169 | 4. Response handling
1170 | 
1171 | Example of the LLM call pattern (the same pattern is used by consolidation and reflection):
1172 | 
1173 | ```python
1174 | provider_instance = await get_provider(provider)
1175 | llm_result = await provider_instance.generate_completion(
1176 |     prompt=prompt, model=model_to_use, max_tokens=max_tokens, temperature=0.7
1177 | )
1178 | reflection_content = llm_result.text.strip()
1179 | ```
1180 | 
1181 | ## System Statistics and Metrics
1182 | 
1183 | ```python
1184 | async def compute_memory_statistics(
1185 |     workflow_id: Optional[str] = None,
1186 |     db_path: str = DEFAULT_DB_PATH
1187 | ) -> Dict[str, Any]:
1188 | ```
1189 | 
1190 | This function:
1191 | 1. Computes comprehensive system statistics
1192 | 2. Supports global or workflow-specific scope
1193 | 3. Collects metrics on:
1194 |    - Total memory counts
1195 |    - Distribution by level and type
1196 |    - Confidence and importance averages
1197 |    - Temporal metrics (newest/oldest)
1198 |    - Link statistics by type
1199 |    - Tag frequencies
1200 |    - Workflow statuses
1201 | 4. Returns structured statistical data
1202 | 
1203 | ## Workflow Listing and Management
1204 | 
1205 | ```python
1206 | async def list_workflows(
1207 |     status: Optional[str] = None,
1208 |     tag: Optional[str] = None,
1209 |     after_date: Optional[str] = None,
1210 |     before_date: Optional[str] = None,
1211 |     limit: int = 10,
1212 |     offset: int = 0,
1213 |     db_path: str = DEFAULT_DB_PATH
1214 | ) -> Dict[str, Any]:
1215 | ```
1216 | 
1217 | This function:
1218 | 1. Lists workflows with filtering options
1219 | 2. Supports status, tag, and date range filters
1220 | 3. Includes pagination
1221 | 4. Returns workflow list with counts
1222 | 
1223 | ```python
1224 | async def create_workflow(
1225 |     title: str,
1226 |     description: Optional[str] = None,
1227 |     goal: Optional[str] = None,
1228 |     tags: Optional[List[str]] = None,
1229 |     metadata: Optional[Dict[str, Any]] = None,
1230 |     parent_workflow_id: Optional[str] = None,
1231 |     db_path: str = DEFAULT_DB_PATH
1232 | ) -> Dict[str, Any]:
1233 | ```
1234 | 
1235 | This function:
1236 | 1. Creates a new workflow container
1237 | 2. Creates default thought chain
1238 | 3. Adds initial goal thought if provided
1239 | 4. Supports workflow hierarchies via parent reference
1240 | 5. Returns workflow details with IDs
1241 | 
1242 | ```python
1243 | async def update_workflow_status(
1244 |     workflow_id: str,
1245 |     status: str,
1246 |     completion_message: Optional[str] = None,
1247 |     update_tags: Optional[List[str]] = None,
1248 |     db_path: str = DEFAULT_DB_PATH
1249 | ) -> Dict[str, Any]:
1250 | ```
1251 | 
1252 | This function:
1253 | 1. Updates workflow status (active, paused, completed, failed, abandoned)
1254 | 2. Adds completion thought for terminal statuses
1255 | 3. Updates tags
1256 | 4. Returns status update confirmation
1257 | 
1258 | 
1259 | ## Database Schema Details and Implementation
1260 | 
1261 | The system's database schema represents a sophisticated cognitive architecture designed for tracking agent workflows, actions, thoughts, and memories. Let's examine its detailed structure:
1262 | 
1263 | ### Schema Creation and Initialization
1264 | 
1265 | The schema is defined in the `SCHEMA_SQL` constant, which contains all DDL statements. The system uses a transactional approach to schema initialization:
1266 | 
1267 | ```python
1268 | # Initialize schema if needed
1269 | cursor = await conn.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='workflows'")
1270 | table_exists = await cursor.fetchone()
1271 | await cursor.close()
1272 | if not table_exists:
1273 |     logger.info("Database schema not found. Initializing...", emoji_key="gear")
1274 |     await conn.execute("PRAGMA foreign_keys = ON;")
1275 |     await conn.executescript(SCHEMA_SQL)
1276 |     logger.success("Database schema initialized successfully.", emoji_key="white_check_mark")
1277 | ```
1278 | 
1279 | The schema includes several critical components:
1280 | 
1281 | ### Base Tables
1282 | 
1283 | 1. **`workflows`**: The top-level container
1284 |    ```sql
1285 |    CREATE TABLE IF NOT EXISTS workflows (
1286 |        workflow_id TEXT PRIMARY KEY,
1287 |        title TEXT NOT NULL,
1288 |        description TEXT,
1289 |        goal TEXT,
1290 |        status TEXT NOT NULL,
1291 |        created_at INTEGER NOT NULL,
1292 |        updated_at INTEGER NOT NULL,
1293 |        completed_at INTEGER,
1294 |        parent_workflow_id TEXT,
1295 |        metadata TEXT,
1296 |        last_active INTEGER
1297 |    );
1298 |    ```
1299 | 
1300 | 2. **`actions`**: Records of agent activities
1301 |    ```sql
1302 |    CREATE TABLE IF NOT EXISTS actions (
1303 |        action_id TEXT PRIMARY KEY,
1304 |        workflow_id TEXT NOT NULL,
1305 |        parent_action_id TEXT,
1306 |        action_type TEXT NOT NULL,
1307 |        title TEXT,
1308 |        reasoning TEXT,
1309 |        tool_name TEXT,
1310 |        tool_args TEXT,
1311 |        tool_result TEXT,
1312 |        status TEXT NOT NULL,
1313 |        started_at INTEGER NOT NULL,
1314 |        completed_at INTEGER,
1315 |        sequence_number INTEGER,
1316 |        FOREIGN KEY (workflow_id) REFERENCES workflows(workflow_id) ON DELETE CASCADE,
1317 |        FOREIGN KEY (parent_action_id) REFERENCES actions(action_id) ON DELETE SET NULL
1318 |    );
1319 |    ```
1320 | 
1321 | 3. **`artifacts`**: Outputs and files created during workflows
1322 |    ```sql
1323 |    CREATE TABLE IF NOT EXISTS artifacts (
1324 |        artifact_id TEXT PRIMARY KEY,
1325 |        workflow_id TEXT NOT NULL,
1326 |        action_id TEXT,
1327 |        artifact_type TEXT NOT NULL,
1328 |        name TEXT NOT NULL,
1329 |        description TEXT,
1330 |        path TEXT,
1331 |        content TEXT,
1332 |        metadata TEXT,
1333 |        created_at INTEGER NOT NULL,
1334 |        is_output BOOLEAN DEFAULT FALSE,
1335 |        FOREIGN KEY (workflow_id) REFERENCES workflows(workflow_id) ON DELETE CASCADE,
1336 |        FOREIGN KEY (action_id) REFERENCES actions(action_id) ON DELETE SET NULL
1337 |    );
1338 |    ```
1339 | 
1340 | 4. **`memories`**: Core memory storage
1341 |    ```sql
1342 |    CREATE TABLE IF NOT EXISTS memories (
1343 |        memory_id TEXT PRIMARY KEY,
1344 |        workflow_id TEXT NOT NULL,
1345 |        content TEXT NOT NULL,
1346 |        memory_level TEXT NOT NULL,
1347 |        memory_type TEXT NOT NULL,
1348 |        importance REAL DEFAULT 5.0,
1349 |        confidence REAL DEFAULT 1.0,
1350 |        description TEXT,
1351 |        reasoning TEXT,
1352 |        source TEXT,
1353 |        context TEXT,
1354 |        tags TEXT,
1355 |        created_at INTEGER NOT NULL,
1356 |        updated_at INTEGER NOT NULL,
1357 |        last_accessed INTEGER,
1358 |        access_count INTEGER DEFAULT 0,
1359 |        ttl INTEGER DEFAULT 0,
1360 |        embedding_id TEXT,
1361 |        action_id TEXT,
1362 |        thought_id TEXT,
1363 |        artifact_id TEXT,
1364 |        FOREIGN KEY (workflow_id) REFERENCES workflows(workflow_id) ON DELETE CASCADE,
1365 |        FOREIGN KEY (embedding_id) REFERENCES embeddings(id) ON DELETE SET NULL,
1366 |        FOREIGN KEY (action_id) REFERENCES actions(action_id) ON DELETE SET NULL,
1367 |        FOREIGN KEY (artifact_id) REFERENCES artifacts(artifact_id) ON DELETE SET NULL
1368 |    );
1369 |    ```
1370 | 
1371 | 5. **`thought_chains`** and **`thoughts`**: Reasoning structure
1372 |    ```sql
1373 |    CREATE TABLE IF NOT EXISTS thought_chains (
1374 |        thought_chain_id TEXT PRIMARY KEY,
1375 |        workflow_id TEXT NOT NULL,
1376 |        action_id TEXT,
1377 |        title TEXT NOT NULL,
1378 |        created_at INTEGER NOT NULL,
1379 |        FOREIGN KEY (workflow_id) REFERENCES workflows(workflow_id) ON DELETE CASCADE,
1380 |        FOREIGN KEY (action_id) REFERENCES actions(action_id) ON DELETE SET NULL
1381 |    );
1382 | 
1383 |    CREATE TABLE IF NOT EXISTS thoughts (
1384 |        thought_id TEXT PRIMARY KEY,
1385 |        thought_chain_id TEXT NOT NULL,
1386 |        parent_thought_id TEXT,
1387 |        thought_type TEXT NOT NULL,
1388 |        content TEXT NOT NULL,
1389 |        sequence_number INTEGER NOT NULL,
1390 |        created_at INTEGER NOT NULL,
1391 |        relevant_action_id TEXT,
1392 |        relevant_artifact_id TEXT,
1393 |        relevant_memory_id TEXT,
1394 |        FOREIGN KEY (thought_chain_id) REFERENCES thought_chains(thought_chain_id) ON DELETE CASCADE,
1395 |        FOREIGN KEY (parent_thought_id) REFERENCES thoughts(thought_id) ON DELETE SET NULL,
1396 |        FOREIGN KEY (relevant_action_id) REFERENCES actions(action_id) ON DELETE SET NULL,
1397 |        FOREIGN KEY (relevant_artifact_id) REFERENCES artifacts(artifact_id) ON DELETE SET NULL
1398 |    );
1399 |    ```
1400 | 
1401 | ### Advanced Features
1402 | 
1403 | 1. **Circular Foreign Key Constraints**: The schema implements circular references between memories and thoughts using deferred constraints:
1404 | 
1405 |    ```sql
1406 |    -- Deferrable Circular Foreign Key Constraints for thoughts <-> memories
1407 |    BEGIN IMMEDIATE TRANSACTION;
1408 |    PRAGMA defer_foreign_keys = ON;
1409 | 
1410 |    ALTER TABLE thoughts ADD CONSTRAINT fk_thoughts_memory
1411 |        FOREIGN KEY (relevant_memory_id) REFERENCES memories(memory_id)
1412 |        ON DELETE SET NULL DEFERRABLE INITIALLY DEFERRED;
1413 | 
1414 |    ALTER TABLE memories ADD CONSTRAINT fk_memories_thought
1415 |        FOREIGN KEY (thought_id) REFERENCES thoughts(thought_id)
1416 |        ON DELETE SET NULL DEFERRABLE INITIALLY DEFERRED;
1417 | 
1418 |    COMMIT;
1419 |    ```
1420 | 
1421 |    This pattern allows creating memories that reference thoughts and thoughts that reference memories, resolving the chicken-and-egg problem typically encountered with circular foreign keys.
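   The mechanism can be demonstrated in isolation with a minimal sketch using `aiosqlite` and a deliberately simplified two-table schema with inline deferrable constraints (the real tables carry many more columns): both rows are inserted inside one transaction, and the circular references are only checked at commit.

   ```python
   import asyncio
   import aiosqlite

   async def demo_deferred_circular_fks():
       async with aiosqlite.connect(":memory:") as conn:
           await conn.execute("PRAGMA foreign_keys = ON;")
           await conn.executescript("""
               CREATE TABLE thoughts (
                   thought_id TEXT PRIMARY KEY,
                   relevant_memory_id TEXT REFERENCES memories(memory_id)
                       DEFERRABLE INITIALLY DEFERRED
               );
               CREATE TABLE memories (
                   memory_id TEXT PRIMARY KEY,
                   thought_id TEXT REFERENCES thoughts(thought_id)
                       DEFERRABLE INITIALLY DEFERRED
               );
           """)
           # Both inserts run inside the same implicit transaction; each row references
           # the other, and the deferred constraints are verified at commit time.
           await conn.execute(
               "INSERT INTO thoughts (thought_id, relevant_memory_id) VALUES (?, ?)",
               ("t1", "m1"),
           )
           await conn.execute(
               "INSERT INTO memories (memory_id, thought_id) VALUES (?, ?)",
               ("m1", "t1"),
           )
           await conn.commit()

   asyncio.run(demo_deferred_circular_fks())
   ```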
1422 | 
1423 | 2. **Full-Text Search**: The system implements sophisticated text search through SQLite's FTS5 virtual table:
1424 | 
1425 |    ```sql
1426 |    CREATE VIRTUAL TABLE IF NOT EXISTS memory_fts USING fts5(
1427 |        content, description, reasoning, tags,
1428 |        workflow_id UNINDEXED,
1429 |        memory_id UNINDEXED,
1430 |        content='memories',
1431 |        content_rowid='rowid',
1432 |        tokenize='porter unicode61'
1433 |    );
1434 |    ```
1435 | 
1436 |    With synchronized triggers:
1437 | 
1438 |    ```sql
1439 |    CREATE TRIGGER IF NOT EXISTS memories_after_insert AFTER INSERT ON memories BEGIN
1440 |        INSERT INTO memory_fts(rowid, content, description, reasoning, tags, workflow_id, memory_id)
1441 |        VALUES (new.rowid, new.content, new.description, new.reasoning, new.tags, new.workflow_id, new.memory_id);
1442 |    END;
1443 |    ```
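
   A hedged sketch of querying the FTS table directly from Python (assuming an open `aiosqlite` connection); the search phrase is illustrative and `bm25()` is FTS5's built-in ranking function, where lower scores rank higher:

   ```python
   async def search_memory_text(conn, workflow_id: str, phrase: str, limit: int = 10):
       # Full-text match against content/description/reasoning/tags, restricted to one workflow.
       sql = """
           SELECT m.memory_id, m.description, bm25(memory_fts) AS rank
           FROM memory_fts
           JOIN memories m ON m.memory_id = memory_fts.memory_id
           WHERE memory_fts MATCH ? AND memory_fts.workflow_id = ?
           ORDER BY rank
           LIMIT ?
       """
       async with conn.execute(sql, (phrase, workflow_id, limit)) as cursor:
           return await cursor.fetchall()
   ```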
1444 | 
1445 | 3. **Vector Embeddings**: The schema includes an `embeddings` table for storing vector representations:
1446 | 
1447 |    ```sql
1448 |    CREATE TABLE IF NOT EXISTS embeddings (
1449 |        id TEXT PRIMARY KEY,
1450 |        memory_id TEXT UNIQUE,
1451 |        model TEXT NOT NULL,
1452 |        embedding BLOB NOT NULL,
1453 |        dimension INTEGER NOT NULL,
1454 |        created_at INTEGER NOT NULL
1455 |    );
1456 |    ```
1457 | 
1458 |    With a back-reference from embeddings to memories:
1459 | 
1460 |    ```sql
1461 |    ALTER TABLE embeddings ADD CONSTRAINT fk_embeddings_memory FOREIGN KEY (memory_id) REFERENCES memories(memory_id) ON DELETE CASCADE;
1462 |    ```
1463 | 
1464 | 4. **Memory Links**: Associative connections between memories:
1465 | 
1466 |    ```sql
1467 |    CREATE TABLE IF NOT EXISTS memory_links (
1468 |        link_id TEXT PRIMARY KEY,
1469 |        source_memory_id TEXT NOT NULL,
1470 |        target_memory_id TEXT NOT NULL,
1471 |        link_type TEXT NOT NULL,
1472 |        strength REAL DEFAULT 1.0,
1473 |        description TEXT,
1474 |        created_at INTEGER NOT NULL,
1475 |        FOREIGN KEY (source_memory_id) REFERENCES memories(memory_id) ON DELETE CASCADE,
1476 |        FOREIGN KEY (target_memory_id) REFERENCES memories(memory_id) ON DELETE CASCADE,
1477 |        UNIQUE(source_memory_id, target_memory_id, link_type)
1478 |    );
1479 |    ```
1480 | 
1481 | 5. **Cognitive States**: Persistence of cognitive context:
1482 | 
1483 |    ```sql
1484 |    CREATE TABLE IF NOT EXISTS cognitive_states (
1485 |        state_id TEXT PRIMARY KEY,
1486 |        workflow_id TEXT NOT NULL,
1487 |        title TEXT NOT NULL,
1488 |        working_memory TEXT,
1489 |        focus_areas TEXT,
1490 |        context_actions TEXT,
1491 |        current_goals TEXT,
1492 |        created_at INTEGER NOT NULL,
1493 |        is_latest BOOLEAN NOT NULL,
1494 |        FOREIGN KEY (workflow_id) REFERENCES workflows(workflow_id) ON DELETE CASCADE
1495 |    );
1496 |    ```
1497 | 
1498 | 6. **Meta-Cognitive Components**:
1499 | 
1500 |    ```sql
1501 |    CREATE TABLE IF NOT EXISTS reflections (
1502 |        reflection_id TEXT PRIMARY KEY,
1503 |        workflow_id TEXT NOT NULL,
1504 |        title TEXT NOT NULL,
1505 |        content TEXT NOT NULL,
1506 |        reflection_type TEXT NOT NULL,
1507 |        created_at INTEGER NOT NULL,
1508 |        referenced_memories TEXT,
1509 |        FOREIGN KEY (workflow_id) REFERENCES workflows(workflow_id) ON DELETE CASCADE
1510 |    );
1511 | 
1512 |    CREATE TABLE IF NOT EXISTS memory_operations (
1513 |        operation_log_id TEXT PRIMARY KEY,
1514 |        workflow_id TEXT NOT NULL,
1515 |        memory_id TEXT,
1516 |        action_id TEXT,
1517 |        operation TEXT NOT NULL,
1518 |        operation_data TEXT,
1519 |        timestamp INTEGER NOT NULL,
1520 |        FOREIGN KEY (workflow_id) REFERENCES workflows(workflow_id) ON DELETE CASCADE,
1521 |        FOREIGN KEY (memory_id) REFERENCES memories(memory_id) ON DELETE SET NULL,
1522 |        FOREIGN KEY (action_id) REFERENCES actions(action_id) ON DELETE SET NULL
1523 |    );
1524 |    ```
1525 | 
1526 | 7. **Tagging System**: Comprehensive tagging with junction tables:
1527 | 
1528 |    ```sql
1529 |    CREATE TABLE IF NOT EXISTS tags (
1530 |        tag_id INTEGER PRIMARY KEY AUTOINCREMENT,
1531 |        name TEXT NOT NULL UNIQUE,
1532 |        description TEXT,
1533 |        category TEXT,
1534 |        created_at INTEGER NOT NULL
1535 |    );
1536 | 
1537 |    CREATE TABLE IF NOT EXISTS workflow_tags (
1538 |        workflow_id TEXT NOT NULL,
1539 |        tag_id INTEGER NOT NULL,
1540 |        PRIMARY KEY (workflow_id, tag_id),
1541 |        FOREIGN KEY (workflow_id) REFERENCES workflows(workflow_id) ON DELETE CASCADE,
1542 |        FOREIGN KEY (tag_id) REFERENCES tags(tag_id) ON DELETE CASCADE
1543 |    );
1544 |    ```
1545 | 
1546 |    With similar structures for `action_tags` and `artifact_tags`.
1547 | 
1548 | 8. **Dependencies**: Structured action dependencies:
1549 | 
1550 |    ```sql
1551 |    CREATE TABLE IF NOT EXISTS dependencies (
1552 |        dependency_id INTEGER PRIMARY KEY AUTOINCREMENT,
1553 |        source_action_id TEXT NOT NULL,
1554 |        target_action_id TEXT NOT NULL,
1555 |        dependency_type TEXT NOT NULL,
1556 |        created_at INTEGER NOT NULL,
1557 |        FOREIGN KEY (source_action_id) REFERENCES actions (action_id) ON DELETE CASCADE,
1558 |        FOREIGN KEY (target_action_id) REFERENCES actions (action_id) ON DELETE CASCADE,
1559 |        UNIQUE(source_action_id, target_action_id, dependency_type)
1560 |    );
1561 |    ```
1562 | 
1563 | ### Schema Optimization
1564 | 
1565 | The schema includes comprehensive indexing for performance optimization:
1566 | 
1567 | ```sql
1568 | -- Workflow indices
1569 | CREATE INDEX IF NOT EXISTS idx_workflows_status ON workflows(status);
1570 | CREATE INDEX IF NOT EXISTS idx_workflows_parent ON workflows(parent_workflow_id);
1571 | CREATE INDEX IF NOT EXISTS idx_workflows_last_active ON workflows(last_active DESC);
1572 | -- Action indices
1573 | CREATE INDEX IF NOT EXISTS idx_actions_workflow_id ON actions(workflow_id);
1574 | CREATE INDEX IF NOT EXISTS idx_actions_parent ON actions(parent_action_id);
1575 | CREATE INDEX IF NOT EXISTS idx_actions_sequence ON actions(workflow_id, sequence_number);
1576 | CREATE INDEX IF NOT EXISTS idx_actions_type ON actions(action_type);
1577 | ```
1578 | 
1579 | The schema defines over 25 indices covering the most common query patterns: foreign keys and frequently searched fields are indexed, and compound indices (for example, `(workflow_id, sequence_number)`) support multi-column lookups.
1580 | 
1581 | ## Custom SQLite Functions
1582 | 
1583 | The system extends SQLite with custom functions for advanced querying capabilities:
1584 | 
1585 | ```python
1586 | await conn.create_function("json_contains", 2, _json_contains, deterministic=True)
1587 | await conn.create_function("json_contains_any", 2, _json_contains_any, deterministic=True)
1588 | await conn.create_function("json_contains_all", 2, _json_contains_all, deterministic=True)
1589 | await conn.create_function("compute_memory_relevance", 5, _compute_memory_relevance, deterministic=True)
1590 | ```
1591 | 
1592 | These functions enable:
1593 | 
1594 | 1. **JSON Array Operations**:
1595 | 
1596 |    ```python
1597 |    def _json_contains(json_text, search_value):
1598 |        """Check if a JSON array contains a specific value."""
1599 |        if not json_text: 
1600 |            return False
1601 |        try: 
1602 |            return search_value in json.loads(json_text) if isinstance(json.loads(json_text), list) else False
1603 |        except Exception: 
1604 |            return False
1605 |    ```
1606 | 
1607 |    With similar functions for checking if any or all values from a list are present in a JSON array.
1608 | 
1609 | 2. **Memory Relevance Calculation**:
1610 | 
1611 |    ```python
1612 |    def _compute_memory_relevance(importance, confidence, created_at, access_count, last_accessed):
1613 |        """Computes a relevance score based on multiple factors. Uses Unix Timestamps."""
1614 |        now = time.time()
1615 |        age_hours = (now - created_at) / 3600 if created_at else 0
1616 |        recency_factor = 1.0 / (1.0 + (now - (last_accessed or created_at)) / 86400)
1617 |        decayed_importance = max(0, importance * (1.0 - MEMORY_DECAY_RATE * age_hours))
1618 |        usage_boost = min(1.0 + (access_count / 10.0), 2.0) if access_count else 1.0
1619 |        relevance = (decayed_importance * usage_boost * confidence * recency_factor)
1620 |        return min(max(relevance, 0.0), 10.0)
1621 |    ```
1622 | 
1623 |    This function is central to memory prioritization, implementing:
1624 |    - Time-based decay of importance
1625 |    - Recency boost for recently accessed memories
1626 |    - Usage frequency boost
1627 |    - Confidence weighting
1628 |    - Bounded output range (0.0-10.0)
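
   A small worked example of the formula, assuming a `MEMORY_DECAY_RATE` of 0.025 per hour (the real constant may differ):

   ```python
   import time

   now = time.time()
   importance, confidence = 7.0, 0.9
   created_at, last_accessed, access_count = now - 12 * 3600, now - 3600, 3

   age_hours = (now - created_at) / 3600                                # 12.0
   recency_factor = 1.0 / (1.0 + (now - last_accessed) / 86400)         # ~0.96
   decayed_importance = max(0, importance * (1.0 - 0.025 * age_hours))  # 7.0 * 0.7 = 4.9
   usage_boost = min(1.0 + access_count / 10.0, 2.0)                    # 1.3
   relevance = min(max(decayed_importance * usage_boost * confidence * recency_factor, 0.0), 10.0)
   print(round(relevance, 2))  # ~5.5
   ```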
1629 | 
1630 | ## Error Handling and Decorators
1631 | 
1632 | The system implements consistent error handling through decorators:
1633 | 
1634 | ```python
1635 | @with_tool_metrics
1636 | @with_error_handling
1637 | async def function_name(...):
1638 |     # Implementation
1639 | ```
1640 | 
1641 | These decorators provide:
1642 | 
1643 | 1. **Error Standardization**:
1644 |    - `ToolInputError`: For invalid parameters
1645 |    - `ToolError`: For operational/system failures
1646 |    - Robust exception conversion and logging
1647 | 
1648 | 2. **Performance Metrics**:
1649 |    - Timing for each operation
1650 |    - Success/failure tracking
1651 |    - Consistent result formatting
1652 | 
1653 | 3. **Logging Integration**:
1654 |    - Standardized log format with emojis
1655 |    - Differentiated log levels (info, warning, error)
1656 |    - Performance timing included
1657 | 
1658 | The pattern ensures all tool functions have consistent behavior:
1659 | 
1660 | ```python
1661 | # Example decorator patterns:
1662 | def with_error_handling(func):
1663 |     """Wrapper for standardized error handling in tool functions."""
1664 |     @functools.wraps(func)
1665 |     async def wrapper(*args, **kwargs):
1666 |         try:
1667 |             return await func(*args, **kwargs)
1668 |         except ToolInputError:
1669 |             # Re-raise with input validation errors
1670 |             raise
1671 |         except Exception as e:
1672 |             # Convert other exceptions to ToolError
1673 |             logger.error(f"Error in {func.__name__}: {e}", exc_info=True)
1674 |             raise ToolError(f"Operation failed: {str(e)}") from e
1675 |     return wrapper
1676 | 
1677 | def with_tool_metrics(func):
1678 |     """Wrapper for tracking metrics and standardizing tool function results."""
1679 |     @functools.wraps(func)
1680 |     async def wrapper(*args, **kwargs):
1681 |         start_time = time.time()
1682 |         result = await func(*args, **kwargs)
1683 |         processing_time = time.time() - start_time
1684 |         
1685 |         # Add standardized fields if result is a dict
1686 |         if isinstance(result, dict):
1687 |             result["success"] = True
1688 |             result["processing_time"] = processing_time
1689 |             
1690 |         logger.info(f"{func.__name__} completed in {processing_time:.3f}s")
1691 |         return result
1692 |     return wrapper
1693 | ```
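
Applied to a hypothetical tool function, the stacking order matches the pattern above (metrics outermost, error handling innermost):

```python
@with_tool_metrics
@with_error_handling
async def get_workflow_title(workflow_id: str) -> dict:
    # Hypothetical tool body: ToolInputError passes through unchanged, any other
    # exception becomes ToolError, and the returned dict gains "success" and
    # "processing_time" via with_tool_metrics.
    if not workflow_id:
        raise ToolInputError("workflow_id is required")
    return {"workflow_id": workflow_id, "title": "Example workflow"}
```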
1694 | 
1695 | ## Transaction Management
1696 | 
1697 | The system implements sophisticated transaction management through a context manager:
1698 | 
1699 | ```python
1700 | @contextlib.asynccontextmanager
1701 | async def transaction(self) -> AsyncIterator[aiosqlite.Connection]:
1702 |     """Provides an atomic transaction block using the singleton connection."""
1703 |     conn = await self.__aenter__()  # Acquire the connection instance
1704 |     try:
1705 |         await conn.execute("BEGIN DEFERRED TRANSACTION")
1706 |         logger.debug("DB Transaction Started.")
1707 |         yield conn  # Provide the connection to the 'async with' block
1708 |     except Exception as e:
1709 |         logger.error(f"Exception during transaction, rolling back: {e}", exc_info=True)
1710 |         await conn.rollback()
1711 |         logger.warning("DB Transaction Rolled Back.", emoji_key="rewind")
1712 |         raise  # Re-raise the exception after rollback
1713 |     else:
1714 |         await conn.commit()
1715 |         logger.debug("DB Transaction Committed.")
1716 | ```
1717 | 
1718 | This allows operations to be grouped atomically:
1719 | 
1720 | ```python
1721 | # Usage example
1722 | db_manager = DBConnection(db_path)
1723 | async with db_manager.transaction() as conn:
1724 |     # Multiple operations that should succeed or fail together
1725 |     await conn.execute("INSERT INTO ...")
1726 |     await conn.execute("UPDATE ...")
1727 |     # Auto-commits on success, rolls back on exception
1728 | ```
1729 | 
1730 | The transaction manager is used extensively throughout the codebase to ensure data integrity, particularly for:
1731 | - Creating workflow and initial thought chain
1732 | - Recording actions and linked memories
1733 | - Creating thoughts with associated memory entries
1734 | - Complex dependency operations
1735 | 
1736 | ## Vector Embedding and Semantic Search Implementation
1737 | 
1738 | ### Embedding Storage
1739 | 
1740 | The system implements vector embedding storage:
1741 | 
1742 | ```python
1743 | async def _store_embedding(conn: aiosqlite.Connection, memory_id: str, text: str) -> Optional[str]:
1744 |     """Generates and stores an embedding for a memory using the EmbeddingService."""
1745 |     try:
1746 |         embedding_service = get_embedding_service()  # Get singleton instance
1747 |         if not embedding_service.client:
1748 |              logger.warning("EmbeddingService client not available. Cannot generate embedding.")
1749 |              return None
1750 | 
1751 |         # Generate embedding using the service (handles caching internally)
1752 |         embedding_list = await embedding_service.create_embeddings(texts=[text])
1753 |         if not embedding_list or not embedding_list[0]:
1754 |              logger.warning(f"Failed to generate embedding for memory {memory_id}")
1755 |              return None
1756 |         embedding_array = np.array(embedding_list[0], dtype=np.float32)
1757 |         if embedding_array.size == 0:
1758 |              logger.warning(f"Generated embedding is empty for memory {memory_id}")
1759 |              return None
1760 | 
1761 |         # Get the embedding dimension
1762 |         embedding_dimension = embedding_array.shape[0]
1763 | 
1764 |         # Generate a unique ID for this embedding entry
1765 |         embedding_db_id = MemoryUtils.generate_id()
1766 |         embedding_bytes = embedding_array.tobytes()
1767 |         model_used = embedding_service.default_model
1768 | 
1769 |         # Store embedding in DB
1770 |         await conn.execute(
1771 |             """
1772 |             INSERT INTO embeddings (id, memory_id, model, embedding, dimension, created_at)
1773 |             VALUES (?, ?, ?, ?, ?, ?)
1774 |             ON CONFLICT(memory_id) DO UPDATE SET
1775 |                 id = excluded.id,
1776 |                 model = excluded.model,
1777 |                 embedding = excluded.embedding,
1778 |                 dimension = excluded.dimension,
1779 |                 created_at = excluded.created_at
1780 |             """,
1781 |             (embedding_db_id, memory_id, model_used, embedding_bytes, embedding_dimension, int(time.time()))
1782 |         )
1783 |         
1784 |         # Update memory record to link to embedding
1785 |         await conn.execute(
1786 |             "UPDATE memories SET embedding_id = ? WHERE memory_id = ?",
1787 |             (embedding_db_id, memory_id)
1788 |         )
1789 | 
1790 |         return embedding_db_id
1791 |     except Exception as e:
1792 |         logger.error(f"Failed to store embedding for memory {memory_id}: {e}", exc_info=True)
1793 |         return None
1794 | ```
1795 | 
1796 | Key aspects:
1797 | 1. Integration with external embedding service
1798 | 2. Numpy array serialization to binary BLOB
1799 | 3. Dimension tracking for compatibility
1800 | 4. UPSERT pattern for idempotent updates
1801 | 5. Error handling for service failures
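
The binary round trip can be verified with a short sketch; the dimension value is illustrative:

```python
import numpy as np

original = np.random.rand(384).astype(np.float32)  # 384 dims chosen only for illustration
blob = original.tobytes()                          # what gets written to embeddings.embedding

# Reading back needs only the dtype; the stored `dimension` column lets callers
# sanity-check the length before reshaping or comparing vectors.
restored = np.frombuffer(blob, dtype=np.float32)
assert restored.shape[0] == 384
assert np.array_equal(original, restored)
```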
1802 | 
1803 | ### Semantic Search Implementation
1804 | 
1805 | ```python
1806 | async def _find_similar_memories(
1807 |     conn: aiosqlite.Connection,
1808 |     query_text: str,
1809 |     workflow_id: Optional[str] = None,
1810 |     limit: int = 5,
1811 |     threshold: float = SIMILARITY_THRESHOLD,
1812 |     memory_level: Optional[str] = None,
1813 |     memory_type: Optional[str] = None
1814 | ) -> List[Tuple[str, float]]:
1815 |     """Finds memories with similar semantic meaning using embeddings."""
1816 |     try:
1817 |         embedding_service = get_embedding_service()
1818 |         if not embedding_service.client:
1819 |             logger.warning("EmbeddingService client not available.")
1820 |             return []
1821 | 
1822 |         # 1. Generate query embedding
1823 |         query_embedding_list = await embedding_service.create_embeddings(texts=[query_text])
1824 |         if not query_embedding_list or not query_embedding_list[0]:
1825 |             logger.warning(f"Failed to generate query embedding")
1826 |             return []
1827 |         query_embedding = np.array(query_embedding_list[0], dtype=np.float32)
1828 |         query_dimension = query_embedding.shape[0]
1829 |         query_embedding_2d = query_embedding.reshape(1, -1)
1830 | 
1831 |         # 2. Build query for candidate embeddings with filters
1832 |         sql = """
1833 |         SELECT m.memory_id, e.embedding
1834 |         FROM memories m
1835 |         JOIN embeddings e ON m.embedding_id = e.id
1836 |         WHERE e.dimension = ?
1837 |         """ 
1838 |         params: List[Any] = [query_dimension]
1839 | 
1840 |         # Add filters
1841 |         if workflow_id:
1842 |             sql += " AND m.workflow_id = ?"
1843 |             params.append(workflow_id)
1844 |         if memory_level:
1845 |             sql += " AND m.memory_level = ?"
1846 |             params.append(memory_level.lower())
1847 |         if memory_type:
1848 |             sql += " AND m.memory_type = ?"
1849 |             params.append(memory_type.lower())
1850 | 
1851 |         # Add TTL check
1852 |         now_unix = int(time.time())
1853 |         sql += " AND (m.ttl = 0 OR m.created_at + m.ttl > ?)"
1854 |         params.append(now_unix)
1855 | 
1856 |         # Optimize with pre-filtering and candidate limit
1857 |         candidate_limit = max(limit * 5, 50)
1858 |         sql += " ORDER BY m.last_accessed DESC NULLS LAST LIMIT ?"
1859 |         params.append(candidate_limit)
1860 | 
1861 |         # 3. Fetch candidate embeddings with matching dimension
1862 |         candidates: List[Tuple[str, bytes]] = []
1863 |         async with conn.execute(sql, params) as cursor:
1864 |             candidates = await cursor.fetchall()
1865 | 
1866 |         if not candidates:
1867 |             logger.debug(f"No candidate memories found matching filters")
1868 |             return []
1869 | 
1870 |         # 4. Calculate similarities using scikit-learn
1871 |         similarities: List[Tuple[str, float]] = []
1872 |         for memory_id, embedding_bytes in candidates:
1873 |             try:
1874 |                 # Deserialize embedding from bytes
1875 |                 memory_embedding = np.frombuffer(embedding_bytes, dtype=np.float32)
1876 |                 if memory_embedding.size == 0:
1877 |                     continue
1878 | 
1879 |                 memory_embedding_2d = memory_embedding.reshape(1, -1)
1880 |                 
1881 |                 # Safety check for dimension mismatch
1882 |                 if query_embedding_2d.shape[1] != memory_embedding_2d.shape[1]:
1883 |                     continue
1884 | 
1885 |                 # Calculate cosine similarity
1886 |                 similarity = sk_cosine_similarity(query_embedding_2d, memory_embedding_2d)[0][0]
1887 | 
1888 |                 # 5. Filter by threshold
1889 |                 if similarity >= threshold:
1890 |                     similarities.append((memory_id, float(similarity)))
1891 |             except Exception as e:
1892 |                 logger.warning(f"Error processing embedding for memory {memory_id}: {e}")
1893 |                 continue
1894 | 
1895 |         # 6. Sort by similarity (descending) and limit
1896 |         similarities.sort(key=lambda x: x[1], reverse=True)
1897 |         return similarities[:limit]
1898 | 
1899 |     except Exception as e:
1900 |         logger.error(f"Failed to find similar memories: {e}", exc_info=True)
1901 |         return []
1902 | ```
1903 | 
1904 | Key aspects:
1905 | 1. Integration with embedding service API
1906 | 2. Efficient querying with dimension matching
1907 | 3. Candidate pre-filtering before similarity calculation
1908 | 4. Serialized binary embedding handling
1909 | 5. Scikit-learn integration for cosine similarity
1910 | 6. Threshold filtering and result ranking
1911 | 7. Comprehensive error handling for edge cases
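
A hedged usage sketch, assuming an open `aiosqlite` connection and a configured embedding service:

```python
async def related_memories(conn, query: str, workflow_id: str):
    # Returns (memory_id, similarity) pairs at or above the similarity threshold,
    # best match first. The memory_level filter is illustrative.
    matches = await _find_similar_memories(
        conn,
        query_text=query,
        workflow_id=workflow_id,
        limit=5,
        memory_level="semantic",
    )
    for memory_id, similarity in matches:
        print(f"{memory_id}: {similarity:.3f}")
    return matches
```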
1912 | 
1913 | ## Mermaid Diagram Generation
1914 | 
1915 | The system generates sophisticated visualization diagrams:
1916 | 
1917 | ### Workflow Diagram Generation
1918 | 
1919 | ```python
1920 | async def _generate_mermaid_diagram(workflow: Dict[str, Any]) -> str:
1921 |     """Generates a detailed Mermaid flowchart representation of the workflow."""
1922 | 
1923 |     def sanitize_mermaid_id(uuid_str: Optional[str], prefix: str) -> str:
1924 |         """Creates a valid Mermaid node ID from a UUID, handling None."""
1925 |         if not uuid_str:
1926 |              return f"{prefix}_MISSING_{MemoryUtils.generate_id().replace('-', '_')}"
1927 |         sanitized = uuid_str.replace("-", "_")  # Hyphens cause issues in Mermaid
1928 |         return f"{prefix}_{sanitized}"
1929 | 
1930 |     diagram = ["```mermaid", "flowchart TD"]  # Top-Down flowchart
1931 | 
1932 |     # --- Generate Workflow Node ---
1933 |     wf_node_id = sanitize_mermaid_id(workflow.get('workflow_id'), "W")
1934 |     wf_title = _mermaid_escape(workflow.get('title', 'Workflow'))
1935 |     wf_status_class = f":::{workflow.get('status', 'active')}"
1936 |     diagram.append(f'    {wf_node_id}("{wf_title}"){wf_status_class}')
1937 |     
1938 |     # --- Generate Action Nodes ---
1939 |     action_nodes = {}  # Map action_id to mermaid_node_id
1940 |     parent_links = {}  # Map child_action_id to parent_action_id
1941 |     sequential_links = {}  # Map sequence_number to action_id
1942 | 
1943 |     for action in sorted(workflow.get("actions", []), key=lambda a: a.get("sequence_number", 0)):
1944 |         action_id = action.get("action_id")
1945 |         if not action_id: 
1946 |             continue
1947 | 
1948 |         node_id = sanitize_mermaid_id(action_id, "A")
1949 |         action_nodes[action_id] = node_id
1950 |         
1951 |         # Create node label with type, title, and tool info
1952 |         action_type = action.get('action_type', 'Action').capitalize()
1953 |         action_title = _mermaid_escape(action.get('title', action_type))
1954 |         sequence_number = action.get("sequence_number", 0)
1955 |         label = f"<b>{action_type} #{sequence_number}</b><br/>{action_title}"
1956 |         if action.get('tool_name'):
1957 |             label += f"<br/><i>Tool: {_mermaid_escape(action['tool_name'])}</i>"
1958 | 
1959 |         # Style node based on status
1960 |         status = action.get('status', ActionStatus.PLANNED.value)
1961 |         node_style = f":::{status}"
1962 | 
1963 |         diagram.append(f'    {node_id}["{label}"]{node_style}')
1964 | 
1965 |         # Record parent relationship
1966 |         parent_action_id = action.get("parent_action_id")
1967 |         if parent_action_id:
1968 |             parent_links[action_id] = parent_action_id
1969 |         else:
1970 |             sequential_links[sequence_number] = action_id
1971 |     
1972 |     # --- Generate Action Links ---
1973 |     linked_actions = set()
1974 |     
1975 |     # Parent->Child links
1976 |     for child_id, parent_id in parent_links.items():
1977 |         if child_id in action_nodes and parent_id in action_nodes:
1978 |             child_node = action_nodes[child_id]
1979 |             parent_node = action_nodes[parent_id]
1980 |             diagram.append(f"    {parent_node} --> {child_node}")
1981 |             linked_actions.add(child_id)
1982 | 
1983 |     # Sequential links for actions without explicit parents
1984 |     last_sequential_node = wf_node_id
1985 |     for seq_num in sorted(sequential_links.keys()):
1986 |         action_id = sequential_links[seq_num]
1987 |         if action_id in action_nodes:
1988 |              node_id = action_nodes[action_id]
1989 |              diagram.append(f"    {last_sequential_node} --> {node_id}")
1990 |              last_sequential_node = node_id
1991 |              linked_actions.add(action_id)
1992 |     
1993 |     # --- Generate Artifact Nodes ---
1994 |     for artifact in workflow.get("artifacts", []):
1995 |         artifact_id = artifact.get("artifact_id")
1996 |         if not artifact_id: 
1997 |             continue
1998 | 
1999 |         node_id = sanitize_mermaid_id(artifact_id, "F")
2000 |         artifact_name = _mermaid_escape(artifact.get('name', 'Artifact'))
2001 |         artifact_type = _mermaid_escape(artifact.get('artifact_type', 'file'))
2002 |         label = f"📄<br/><b>{artifact_name}</b><br/>({artifact_type})"
2003 | 
2004 |         node_shape_start, node_shape_end = "[(", ")]"  # Database/capsule shape
2005 |         node_style = ":::artifact"
2006 |         if artifact.get('is_output'):
2007 |             node_style = ":::artifact_output"  # Special style for outputs
2008 | 
2009 |         diagram.append(f'    {node_id}{node_shape_start}"{label}"{node_shape_end}{node_style}')
2010 | 
2011 |         # Link from creating action
2012 |         creator_action_id = artifact.get("action_id")
2013 |         if creator_action_id and creator_action_id in action_nodes:
2014 |             creator_node = action_nodes[creator_action_id]
2015 |             diagram.append(f"    {creator_node} -- Creates --> {node_id}")
2016 |         else:
2017 |             # Link to workflow if no specific action
2018 |             diagram.append(f"    {wf_node_id} -.-> {node_id}")
2019 |     
2020 |     # --- Add Class Definitions for Styling ---
2021 |     diagram.append("\n    %% Stylesheets")
2022 |     diagram.append("    classDef workflow fill:#e7f0fd,stroke:#0056b3,stroke-width:2px,color:#000")
2023 |     diagram.append("    classDef completed fill:#d4edda,stroke:#155724,stroke-width:1px,color:#155724")
2024 |     diagram.append("    classDef failed fill:#f8d7da,stroke:#721c24,stroke-width:1px,color:#721c24")
2025 |     # ... many more style definitions ...
2026 | 
2027 |     diagram.append("```")
2028 |     return "\n".join(diagram)
2029 | ```
2030 | 
2031 | This intricate function:
2032 | 1. Sanitizes UUIDs for Mermaid compatibility
2033 | 2. Constructs a flowchart with workflow, actions, and artifacts
2034 | 3. Creates hierarchical relationships
2035 | 4. Handles parent-child and sequential relationships
2036 | 5. Implements detailed styling based on status
2037 | 6. Escapes special characters for Mermaid compatibility
2038 | 
2039 | ### Memory Network Diagram Generation
2040 | 
2041 | ```python
2042 | async def _generate_memory_network_mermaid(memories: List[Dict], links: List[Dict], center_memory_id: Optional[str] = None) -> str:
2043 |     """Helper function to generate Mermaid graph syntax for a memory network."""
2044 | 
2045 |     def sanitize_mermaid_id(uuid_str: Optional[str], prefix: str) -> str:
2046 |         """Creates a valid Mermaid node ID from a UUID, handling None."""
2047 |         if not uuid_str:
2048 |              return f"{prefix}_MISSING_{MemoryUtils.generate_id().replace('-', '_')}"
2049 |         sanitized = uuid_str.replace("-", "_")
2050 |         return f"{prefix}_{sanitized}"
2051 | 
2052 |     diagram = ["```mermaid", "graph TD"]  # Top-Down graph direction
2053 | 
2054 |     # --- Memory Node Definitions ---
2055 |     memory_id_to_node_id = {}  # Map full memory ID to sanitized Mermaid node ID
2056 |     for memory in memories:
2057 |         mem_id = memory.get("memory_id")
2058 |         if not mem_id: 
2059 |             continue
2060 | 
2061 |         node_id = sanitize_mermaid_id(mem_id, "M")
2062 |         memory_id_to_node_id[mem_id] = node_id
2063 | 
2064 |         # Create node label with type, description, importance
2065 |         mem_type = memory.get("memory_type", "memory").capitalize()
2066 |         desc = _mermaid_escape(memory.get("description", mem_id))
2067 |         if len(desc) > 40:
2068 |             desc = desc[:37] + "..."
2069 |         importance = memory.get('importance', 5.0)
2070 |         label = f"<b>{mem_type}</b><br/>{desc}<br/><i>(I: {importance:.1f})</i>"
2071 | 
2072 |         # Choose node shape based on memory level
2073 |         level = memory.get("memory_level", MemoryLevel.EPISODIC.value)
2074 |         shape_start, shape_end = "[", "]"  # Default rectangle (Semantic)
2075 |         if level == MemoryLevel.EPISODIC.value:
2076 |             shape_start, shape_end = "(", ")"  # Round (Episodic)
2077 |         elif level == MemoryLevel.PROCEDURAL.value:
2078 |             shape_start, shape_end = "[[", "]]"  # Subroutine (Procedural)
2079 |         elif level == MemoryLevel.WORKING.value:
2080 |              shape_start, shape_end = "([", "])"  # Capsule (Working)
2081 | 
2082 |         # Style node based on level + highlight center
2083 |         node_style = f":::level{level}"
2084 |         if mem_id == center_memory_id:
2085 |             node_style += " :::centerNode"  # Highlight center node
2086 | 
2087 |         diagram.append(f'    {node_id}{shape_start}"{label}"{shape_end}{node_style}')
2088 | 
2089 |     # --- Memory Link Definitions ---
2090 |     for link in links:
2091 |         source_mem_id = link.get("source_memory_id")
2092 |         target_mem_id = link.get("target_memory_id")
2093 |         link_type = link.get("link_type", "related")
2094 | 
2095 |         # Only draw links where both ends are in the visualization
2096 |         if source_mem_id in memory_id_to_node_id and target_mem_id in memory_id_to_node_id:
2097 |             source_node = memory_id_to_node_id[source_mem_id]
2098 |             target_node = memory_id_to_node_id[target_mem_id]
2099 |             diagram.append(f"    {source_node} -- {link_type} --> {target_node}")
2100 | 
2101 |     # --- Add Class Definitions for Styling ---
2102 |     diagram.append("\n    %% Stylesheets")
2103 |     diagram.append("    classDef levelworking fill:#e3f2fd,stroke:#2196f3,color:#1e88e5,stroke-width:1px;")
2104 |     diagram.append("    classDef levelepisodic fill:#e8f5e9,stroke:#4caf50,color:#388e3c,stroke-width:1px;")
2105 |     # ... additional style definitions ...
2106 |     diagram.append("    classDef centerNode stroke-width:3px,stroke:#0d47a1,font-weight:bold;")
2107 | 
2108 |     diagram.append("```")
2109 |     return "\n".join(diagram)
2110 | ```
2111 | 
2112 | This visualization:
2113 | 1. Displays memories with level-specific shapes
2114 | 2. Shows relationship types on connection lines
2115 | 3. Provides visual cues for importance and type
2116 | 4. Highlights the center node when specified
2117 | 5. Implements sophisticated styling based on memory levels
2118 | 
2119 | ## Character Escaping for Mermaid
2120 | 
2121 | The system implements robust character escaping for Mermaid compatibility:
2122 | 
2123 | ```python
2124 | def _mermaid_escape(text: str) -> str:
2125 |     """Escapes characters problematic for Mermaid node labels."""
2126 |     if not isinstance(text, str):
2127 |         text = str(text)
2128 |     # Replace quotes first, then other potentially problematic characters
2129 |     text = text.replace('"', '#quot;')
2130 |     text = text.replace('(', '#40;')
2131 |     text = text.replace(')', '#41;')
2132 |     text = text.replace('[', '#91;')
2133 |     text = text.replace(']', '#93;')
2134 |     text = text.replace('{', '#123;')
2135 |     text = text.replace('}', '#125;')
2136 |     text = text.replace(':', '#58;')
2137 |     text = text.replace(';', '#59;')
2138 |     text = text.replace('<', '#lt;')
2139 |     text = text.replace('>', '#gt;')
2140 |     # Replace newline with <br> for multiline labels
2141 |     text = text.replace('\n', '<br>')
2142 |     return text
2143 | ```
2144 | 
2145 | This function escapes the characters most likely to break Mermaid node labels and converts newlines to `<br>` so multi-line labels render correctly.
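
For example (output shown assuming the escaping rules above):

```python
label = _mermaid_escape("Check x < y;\nthen y > 0")
print(label)  # Check x #lt; y#59;<br>then y #gt; 0
```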
2146 | 
2147 | ## Serialization and Data Handling
2148 | 
2149 | The system implements sophisticated serialization with robust error handling:
2150 | 
2151 | ```python
2152 | async def serialize(obj: Any) -> Optional[str]:
2153 |     """Safely serialize an arbitrary Python object to a JSON string.
2154 | 
2155 |     Handles potential serialization errors and very large objects.
2156 |     Attempts to represent complex objects that fail direct serialization.
2157 |     If the final JSON string exceeds MAX_TEXT_LENGTH, it returns a
2158 |     JSON object indicating truncation.
2159 |     """
2160 |     if obj is None:
2161 |         return None
2162 | 
2163 |     json_str = None
2164 | 
2165 |     try:
2166 |         # Attempt direct JSON serialization
2167 |         json_str = json.dumps(obj, ensure_ascii=False, default=str)
2168 | 
2169 |     except TypeError as e:
2170 |         # Handle objects that are not directly serializable
2171 |         logger.debug(f"Direct JSON serialization failed for type {type(obj)}: {e}")
2172 |         try:
2173 |             # Fallback using string representation
2174 |             fallback_repr = str(obj)
2175 |             fallback_bytes = fallback_repr.encode('utf-8')
2176 |             
2177 |             if len(fallback_bytes) > MAX_TEXT_LENGTH:
2178 |                 # Truncate if too large
2179 |                 truncated_bytes = fallback_bytes[:MAX_TEXT_LENGTH]
2180 |                 truncated_repr = truncated_bytes.decode('utf-8', errors='replace')
2181 |                 
2182 |                 # Advanced handling for multi-byte character truncation
2183 |                 if truncated_repr.endswith('\ufffd') and MAX_TEXT_LENGTH > 1:
2184 |                      shorter_repr = fallback_bytes[:MAX_TEXT_LENGTH-1].decode('utf-8', errors='replace')
2185 |                      if not shorter_repr.endswith('\ufffd'):
2186 |                           truncated_repr = shorter_repr
2187 |                 
2188 |                 truncated_repr += "[TRUNCATED]"
2189 |                 logger.warning(f"Fallback string representation truncated for type {type(obj)}.")
2190 |             else:
2191 |                 truncated_repr = fallback_repr
2192 | 
2193 |             # Create structured representation of the error
2194 |             json_str = json.dumps({
2195 |                 "error": f"Serialization failed for type {type(obj)}.",
2196 |                 "fallback_repr": truncated_repr
2197 |             }, ensure_ascii=False)
2198 |             
2199 |         except Exception as fallback_e:
2200 |             # Final fallback if even string conversion fails
2201 |             logger.error(f"Could not serialize object of type {type(obj)} even with fallback: {fallback_e}")
2202 |             json_str = json.dumps({
2203 |                 "error": f"Unserializable object type {type(obj)}. Fallback failed.",
2204 |                 "critical_error": str(fallback_e)
2205 |             }, ensure_ascii=False)
2206 | 
2207 |     # Check final length regardless of serialization path
2208 |     if json_str is None:
2209 |          logger.error(f"Internal error: json_str is None after serialization attempt for object of type {type(obj)}")
2210 |          return json.dumps({
2211 |              "error": "Internal serialization error occurred.",
2212 |              "original_type": str(type(obj))
2213 |          }, ensure_ascii=False)
2214 | 
2215 |     # Check if final result exceeds max length
2216 |     final_bytes = json_str.encode('utf-8')
2217 |     if len(final_bytes) > MAX_TEXT_LENGTH:
2218 |         logger.warning(f"Serialized JSON string exceeds max length ({MAX_TEXT_LENGTH} bytes)")
2219 |         preview_str = json_str[:200] + ("..." if len(json_str) > 200 else "")
2220 |         return json.dumps({
2221 |             "error": "Serialized content exceeded maximum length.",
2222 |             "original_type": str(type(obj)),
2223 |             "preview": preview_str
2224 |         }, ensure_ascii=False)
2225 |     else:
2226 |         return json_str
2227 | ```
2228 | 
2229 | This highly sophisticated serialization function:
2230 | 1. Handles arbitrary Python objects
2231 | 2. Implements multiple fallback strategies
2232 | 3. Properly handles UTF-8 encoding and truncation
2233 | 4. Preserves information about serialization failures
2234 | 5. Returns structured error information
2235 | 6. Enforces maximum content length limits
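
A hedged usage sketch inside an async context (the oversized payload is only meant to exceed `MAX_TEXT_LENGTH`):

```python
async def serialization_examples():
    # Plain data serializes directly to a JSON string.
    metadata_json = await MemoryUtils.serialize({"step": 3, "tags": ["analysis", "q3"]})

    # None is passed through as None rather than the string "null".
    assert await MemoryUtils.serialize(None) is None

    # Payloads larger than MAX_TEXT_LENGTH come back as a structured record
    # with an error message, the original type, and a short preview.
    huge = {"content": "x" * (50 * 1024 * 1024)}  # size is illustrative
    truncated_record = await MemoryUtils.serialize(huge)
    return metadata_json, truncated_record
```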
2236 | 
2237 | ## LLM Prompt Templates for Meta-Cognition
2238 | 
2239 | The system uses sophisticated prompt templates for LLM-based reflection:
2240 | 
2241 | ### Consolidation Prompts
2242 | 
2243 | ```python
2244 | def _generate_consolidation_prompt(memories: List[Dict], consolidation_type: str) -> str:
2245 |     """Generates a prompt for memory consolidation."""
2246 |     # Format memories with metadata
2247 |     memory_texts = []
2248 |     for i, memory in enumerate(memories[:20], 1):
2249 |         desc = memory.get("description") or ""
2250 |         content_preview = (memory.get("content", "") or "")[:300]
2251 |         mem_type = memory.get("memory_type", "N/A")
2252 |         importance = memory.get("importance", 5.0)
2253 |         confidence = memory.get("confidence", 1.0)
2254 |         created_ts = memory.get("created_at", 0)
2255 |         created_dt_str = datetime.fromtimestamp(created_ts).strftime('%Y-%m-%d %H:%M') if created_ts else "Unknown Date"
2256 |         mem_id_short = memory.get("memory_id", "UNKNOWN")[:8]
2257 | 
2258 |         formatted = f"--- MEMORY #{i} (ID: {mem_id_short}..., Type: {mem_type}, Importance: {importance:.1f}, Confidence: {confidence:.1f}, Date: {created_dt_str}) ---\n"
2259 |         if desc:
2260 |             formatted += f"Description: {desc}\n"
2261 |         formatted += f"Content Preview: {content_preview}"
2262 |         # Indicate truncation
2263 |         if len(memory.get("content", "")) > 300:
2264 |             formatted += "...\n"
2265 |         else:
2266 |             formatted += "\n"
2267 |         memory_texts.append(formatted)
2268 | 
2269 |     memories_str = "\n".join(memory_texts)
2270 | 
2271 |     # Base prompt template
2272 |     base_prompt = f"""You are an advanced cognitive system processing and consolidating memories for an AI agent. Below are {len(memories)} memory items containing information, observations, and insights relevant to a task. Your goal is to perform a specific type of consolidation: '{consolidation_type}'.
2273 | 
2274 | Analyze the following memories carefully:
2275 | 
2276 | {memories_str}
2277 | --- END OF MEMORIES ---
2278 | """
2279 | 
2280 |     # Add type-specific instructions
2281 |     if consolidation_type == "summary":
2282 |         base_prompt += """TASK: Create a comprehensive and coherent summary...
2283 |         [detailed instructions for summarization]
2284 |         """
2285 |     elif consolidation_type == "insight":
2286 |         base_prompt += """TASK: Generate high-level insights...
2287 |         [detailed instructions for insight generation]
2288 |         """
2289 |     # Additional consolidation types...
2290 | 
2291 |     return base_prompt
2292 | ```
2293 | 
2294 | ### Reflection Prompts
2295 | 
2296 | ```python
2297 | def _generate_reflection_prompt(
2298 |     workflow_name: str,
2299 |     workflow_desc: Optional[str],
2300 |     operations: List[Dict],
2301 |     memories: Dict[str, Dict],
2302 |     reflection_type: str
2303 | ) -> str:
2304 |     """Generates a prompt for reflective analysis."""
2305 |     # Format operations with context
2306 |     op_texts = []
2307 |     for i, op_data in enumerate(operations[:30], 1):
2308 |         op_ts_unix = op_data.get("timestamp", 0)
2309 |         op_ts_str = datetime.fromtimestamp(op_ts_unix).strftime('%Y-%m-%d %H:%M:%S') if op_ts_unix else "Unknown Time"
2310 |         op_type = op_data.get('operation', 'UNKNOWN').upper()
2311 |         mem_id = op_data.get('memory_id')
2312 |         action_id = op_data.get('action_id')
2313 | 
2314 |         # Extract operation details
2315 |         op_details_dict = {}
2316 |         op_data_raw = op_data.get('operation_data')
2317 |         if op_data_raw:
2318 |              try:
2319 |                   op_details_dict = json.loads(op_data_raw)
2320 |              except (json.JSONDecodeError, TypeError):
2321 |                   op_details_dict = {"raw_data": str(op_data_raw)[:50]}
2322 | 
2323 |         # Build rich description
2324 |         desc_parts = [f"OP #{i} ({op_ts_str})", f"Type: {op_type}"]
2325 |         if mem_id:
2326 |             mem_info = memories.get(mem_id)
2327 |             mem_desc_text = f"Mem({mem_id[:6]}..)"
2328 |             if mem_info:
2329 |                  mem_desc_text += f" Desc: {mem_info.get('description', 'N/A')[:40]}"
2330 |                  if mem_info.get('memory_type'):
2331 |                       mem_desc_text += f" Type: {mem_info['memory_type']}"
2332 |             desc_parts.append(mem_desc_text)
2333 | 
2334 |         if action_id:
2335 |             desc_parts.append(f"Action({action_id[:6]}..)")
2336 | 
2337 |         # Add operation data details
2338 |         detail_items = []
2339 |         for k, v in op_details_dict.items():
2340 |              if k not in ['content', 'description', 'embedding', 'prompt']:
2341 |                   detail_items.append(f"{k}={str(v)[:30]}")
2342 |         if detail_items:
2343 |             desc_parts.append(f"Data({', '.join(detail_items)})")
2344 | 
2345 |         op_texts.append(" | ".join(desc_parts))
2346 | 
2347 |     operations_str = "\n".join(op_texts)
2348 | 
2349 |     # Base prompt template
2350 |     base_prompt = f"""You are an advanced meta-cognitive system analyzing an AI agent's workflow: "{workflow_name}".
2351 | Workflow Description: {workflow_desc or 'N/A'}
2352 | Your task is to perform a '{reflection_type}' reflection based on the recent memory operations listed below. Analyze these operations to understand the agent's process, progress, and knowledge state.
2353 | 
2354 | RECENT OPERATIONS (Up to 30):
2355 | {operations_str}
2356 | """
2357 | 
2358 |     # Add type-specific instructions
2359 |     if reflection_type == "summary":
2360 |         base_prompt += """TASK: Create a reflective summary...
2361 |         [detailed instructions for reflective summarization]
2362 |         """
2363 |     elif reflection_type == "progress":
2364 |         base_prompt += """TASK: Analyze the progress...
2365 |         [detailed instructions for progress analysis]
2366 |         """
2367 |     # Additional reflection types...
2368 | 
2369 |     return base_prompt
2370 | ```
2371 | 
2372 | These templates implement:
2373 | 1. Rich context formatting with metadata
2374 | 2. Type-specific detailed instructions
2375 | 3. Structured memory representation
2376 | 4. Operation history formatting with context
2377 | 5. Guidance tailored to different meta-cognitive tasks
2378 | 
2379 | ## Integration Patterns for Complex Operations
2380 | 
2381 | The system implements several integration patterns for complex operations:
2382 | 
2383 | ### Workflow Creation with Initial Thought
2384 | 
2385 | ```python
2386 | async def create_workflow(
2387 |     title: str,
2388 |     description: Optional[str] = None,
2389 |     goal: Optional[str] = None,
2390 |     tags: Optional[List[str]] = None,
2391 |     metadata: Optional[Dict[str, Any]] = None,
2392 |     parent_workflow_id: Optional[str] = None,
2393 |     db_path: str = DEFAULT_DB_PATH
2394 | ) -> Dict[str, Any]:
2395 |     """Creates a new workflow, including a default thought chain and initial goal thought if specified."""
2396 |     # Validation and initialization...
2397 |     
2398 |     try:
2399 |         async with DBConnection(db_path) as conn:
2400 |             # Check parent workflow existence...
2401 |             
2402 |             # Serialize metadata
2403 |             metadata_json = await MemoryUtils.serialize(metadata)
2404 | 
2405 |             # Insert the main workflow record
2406 |             await conn.execute("""INSERT INTO workflows...""")
2407 | 
2408 |             # Process and associate tags
2409 |             await MemoryUtils.process_tags(conn, workflow_id, tags or [], "workflow")
2410 | 
2411 |             # Create the default thought chain associated with this workflow
2412 |             thought_chain_id = MemoryUtils.generate_id()
2413 |             chain_title = f"Main reasoning for: {title}"
2414 |             await conn.execute("""INSERT INTO thought_chains...""")
2415 | 
2416 |             # If a goal was provided, add it as the first thought in the default chain
2417 |             if goal:
2418 |                 thought_id = MemoryUtils.generate_id()
2419 |                 seq_no = await MemoryUtils.get_next_sequence_number(conn, thought_chain_id, "thoughts", "thought_chain_id")
2420 |                 await conn.execute("""INSERT INTO thoughts...""")
2421 | 
2422 |             # Commit the transaction
2423 |             await conn.commit()
2424 | 
2425 |             # Prepare and return result
2426 |             # ...
2427 |     except ToolInputError:
2428 |         raise
2429 |     except Exception as e:
2430 |         # Log the error and raise a generic ToolError
2431 |         logger.error(f"Error creating workflow: {e}", exc_info=True)
2432 |         raise ToolError(f"Failed to create workflow: {str(e)}") from e
2433 | ```
2434 | 
2435 | This pattern:
2436 | 1. Creates multiple related objects in one transaction
2437 | 2. Establishes default chain for reasoning
2438 | 3. Optionally adds initial thought/goal
2439 | 4. Ensures atomicity through transaction management
2440 | 
2441 | ### Action Recording with Episodic Memory
2442 | 
2443 | ```python
2444 | async def record_action_start(
2445 |     workflow_id: str,
2446 |     action_type: str,
2447 |     reasoning: str,
2448 |     tool_name: Optional[str] = None,
2449 |     tool_args: Optional[Dict[str, Any]] = None,
2450 |     title: Optional[str] = None,
2451 |     parent_action_id: Optional[str] = None,
2452 |     tags: Optional[List[str]] = None,
2453 |     related_thought_id: Optional[str] = None,
2454 |     db_path: str = DEFAULT_DB_PATH
2455 | ) -> Dict[str, Any]:
2456 |     """Records the start of an action within a workflow and creates a corresponding episodic memory."""
2457 |     # Validation and initialization...
2458 |     
2459 |     try:
2460 |         async with DBConnection(db_path) as conn:
2461 |             # Existence checks...
2462 |             
2463 |             # Determine sequence and auto-title...
2464 |             
2465 |             # Insert action record
2466 |             tool_args_json = await MemoryUtils.serialize(tool_args)
2467 |             await conn.execute("""INSERT INTO actions...""")
2468 | 
2469 |             # Process tags
2470 |             await MemoryUtils.process_tags(conn, action_id, tags or [], "action")
2471 | 
2472 |             # Link to related thought
2473 |             if related_thought_id:
2474 |                 await conn.execute("UPDATE thoughts SET relevant_action_id = ? WHERE thought_id = ?", 
2475 |                                  (action_id, related_thought_id))
2476 | 
2477 |             # Create linked episodic memory
2478 |             memory_id = MemoryUtils.generate_id()
2479 |             memory_content = f"Started action [{sequence_number}] '{auto_title}' ({action_type_enum.value}). Reasoning: {reasoning}"
2480 |             if tool_name:
2481 |                  memory_content += f" Tool: {tool_name}."
2482 |             mem_tags = ["action_start", action_type_enum.value] + (tags or [])
2483 |             mem_tags_json = json.dumps(list(set(mem_tags)))
2484 | 
2485 |             await conn.execute("""INSERT INTO memories...""")
2486 |             await MemoryUtils._log_memory_operation(conn, workflow_id, "create_from_action_start", memory_id, action_id)
2487 | 
2488 |             # Update workflow timestamp
2489 |             await conn.execute("UPDATE workflows SET updated_at = ?, last_active = ? WHERE workflow_id = ?", 
2490 |                              (now_unix, now_unix, workflow_id))
2491 | 
2492 |             # Commit transaction
2493 |             await conn.commit()
2494 | 
2495 |             # Prepare and return result
2496 |             # ...
2497 |     except ToolInputError:
2498 |         raise
2499 |     except Exception as e:
2500 |         logger.error(f"Error recording action start: {e}", exc_info=True)
2501 |         raise ToolError(f"Failed to record action start: {str(e)}") from e
2502 | ```
2503 | 
2504 | This pattern:
2505 | 1. Records action details
2506 | 2. Automatically creates linked episodic memory
2507 | 3. Updates related entities (thoughts, workflow)
2508 | 4. Maintains bidirectional references
2509 | 5. Ensures proper tagging and categorization
2510 | 
2511 | ### Thought Recording with Optional Memory Creation
2512 | 
2513 | ```python
2514 | async def record_thought(
2515 |     workflow_id: str,
2516 |     content: str,
2517 |     thought_type: str = "inference",
2518 |     thought_chain_id: Optional[str] = None,
2519 |     parent_thought_id: Optional[str] = None,
2520 |     relevant_action_id: Optional[str] = None,
2521 |     relevant_artifact_id: Optional[str] = None,
2522 |     relevant_memory_id: Optional[str] = None,
2523 |     db_path: str = DEFAULT_DB_PATH,
2524 |     conn: Optional[aiosqlite.Connection] = None
2525 | ) -> Dict[str, Any]:
2526 |     """Records a thought in a reasoning chain, potentially linking to memory and creating an associated memory entry."""
2527 |     # Validation...
2528 |     
2529 |     thought_id = MemoryUtils.generate_id()
2530 |     now_unix = int(time.time())
2531 |     linked_memory_id = None
2532 | 
2533 |     async def _perform_db_operations(db_conn: aiosqlite.Connection):
2534 |         """Inner function to perform DB ops using the provided connection."""
2535 |         nonlocal linked_memory_id
2536 | 
2537 |         # Existence checks...
2538 |         
2539 |         # Determine target thought chain...
2540 |         
2541 |         # Get sequence number...
2542 |         
2543 |         # Insert thought record...
2544 |         
2545 |         # Update workflow timestamp...
2546 |         
2547 |         # Create linked memory for important thoughts
2548 |         important_thought_types = [
2549 |             ThoughtType.GOAL.value, ThoughtType.DECISION.value, ThoughtType.SUMMARY.value,
2550 |             ThoughtType.REFLECTION.value, ThoughtType.HYPOTHESIS.value
2551 |         ]
2552 | 
2553 |         if thought_type_enum.value in important_thought_types:
2554 |             linked_memory_id = MemoryUtils.generate_id()
2555 |             mem_content = f"Thought [{sequence_number}] ({thought_type_enum.value.capitalize()}): {content}"
2556 |             mem_tags = ["reasoning", thought_type_enum.value]
2557 |             mem_importance = 7.5 if thought_type_enum.value in [ThoughtType.GOAL.value, ThoughtType.DECISION.value] else 6.5
2558 | 
2559 |             await db_conn.execute("""INSERT INTO memories...""")
2560 |             await MemoryUtils._log_memory_operation(db_conn, workflow_id, "create_from_thought", linked_memory_id, None)
2561 |             
2562 |         return target_thought_chain_id, sequence_number
2563 |     
2564 |     try:
2565 |         target_thought_chain_id_res = None
2566 |         sequence_number_res = None
2567 | 
2568 |         if conn:
2569 |             # Use provided connection (transaction nesting)
2570 |             target_thought_chain_id_res, sequence_number_res = await _perform_db_operations(conn)
2571 |             # No commit - handled by outer transaction
2572 |         else:
2573 |             # Manage local transaction
2574 |             db_manager = DBConnection(db_path)
2575 |             async with db_manager.transaction() as local_conn:
2576 |                 target_thought_chain_id_res, sequence_number_res = await _perform_db_operations(local_conn)
2577 |             # Commit handled by transaction manager
2578 | 
2579 |         # Prepare and return result
2580 |         # ...
2581 |     except ToolInputError:
2582 |         raise
2583 |     except Exception as e:
2584 |         logger.error(f"Error recording thought: {e}", exc_info=True)
2585 |         raise ToolError(f"Failed to record thought: {str(e)}") from e
2586 | ```
2587 | 
2588 | This pattern:
2589 | 1. Supports transaction nesting via optional connection parameter
2590 | 2. Conditionally creates memory entries for important thoughts
2591 | 3. Implements comprehensive linking between entities
2592 | 4. Uses inner functions for encapsulation
2593 | 5. Determines correct thought chain automatically
2594 | 
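The optional-connection design can be sketched as follows. The import path is an assumption, but the `conn` parameter mirrors the signature above:

```python
# Sketch of the two call modes; the import path is an assumption.
from unified_memory import DBConnection, record_thought, DEFAULT_DB_PATH

async def record_goal_standalone(workflow_id: str) -> dict:
    # Standalone call: record_thought opens and commits its own transaction.
    return await record_thought(
        workflow_id=workflow_id,
        content="Goal: produce a cleaned dataset with documented schema.",
        thought_type="goal",
    )

async def record_goal_nested(workflow_id: str) -> dict:
    # Nested call: pass an open connection so the thought (and its linked
    # memory) commit or roll back together with the caller's other writes.
    async with DBConnection(DEFAULT_DB_PATH).transaction() as conn:
        result = await record_thought(
            workflow_id=workflow_id,
            content="Decision: use median imputation for missing values.",
            thought_type="decision",
            conn=conn,          # no commit inside; the outer transaction owns it
        )
        # ...other writes on the same conn...
        return result
```
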
2595 | ### Memory Consolidation with Linking
2596 | 
2597 | ```python
2598 | async def consolidate_memories(
2599 |     workflow_id: Optional[str] = None,
2600 |     target_memories: Optional[List[str]] = None,
2601 |     consolidation_type: str = "summary",
2602 |     query_filter: Optional[Dict[str, Any]] = None,
2603 |     max_source_memories: int = 20,
2604 |     prompt_override: Optional[str] = None,
2605 |     provider: str = LLMGatewayProvider.OPENAI.value,
2606 |     model: Optional[str] = None,
2607 |     store_result: bool = True,
2608 |     store_as_level: str = MemoryLevel.SEMANTIC.value,
2609 |     store_as_type: Optional[str] = None,
2610 |     max_tokens: int = 1000,
2611 |     db_path: str = DEFAULT_DB_PATH
2612 | ) -> Dict[str, Any]:
2613 |     """Consolidates multiple memories using an LLM to generate summaries, insights, etc."""
2614 |     # Validation...
2615 |     
2616 |     source_memories_list = []
2617 |     source_memory_ids = []
2618 |     effective_workflow_id = workflow_id
2619 | 
2620 |     try:
2621 |         async with DBConnection(db_path) as conn:
2622 |             # Select source memories (full logic)...
2623 |             
2624 |             # Generate consolidation prompt...
2625 |             
2626 |             # Call LLM via Gateway...
2627 |             provider_instance = await get_provider(provider)
2628 |             llm_result = await provider_instance.generate_completion(
2629 |                 prompt=prompt, model=final_model, max_tokens=max_tokens, temperature=0.6
2630 |             )
2631 |             consolidated_content = llm_result.text.strip()
2632 |             
2633 |             # Store result as new memory...
2634 |             if store_result and consolidated_content:
2635 |                 # Use derived importance and confidence...
2636 |                 derived_importance = min(max(source_importances) + 0.5, 10.0)
2637 |                 derived_confidence = min(sum(source_confidences) / len(source_confidences), 1.0)
2638 |                 derived_confidence *= (1.0 - min(0.2, (len(source_memories_list) - 1) * 0.02))
2639 |                 
2640 |                 # Store the new memory...
2641 |                 store_result_dict = await store_memory(
2642 |                     workflow_id=effective_workflow_id,
2643 |                     content=consolidated_content,
2644 |                     memory_type=result_type.value,
2645 |                     memory_level=result_level.value,
2646 |                     importance=round(derived_importance, 2),
2647 |                     confidence=round(derived_confidence, 3),
2648 |                     description=result_desc,
2649 |                     source=f"consolidation_{consolidation_type}",
2650 |                     tags=result_tags, context_data=result_context,
2651 |                     generate_embedding=True, db_path=db_path
2652 |                 )
2653 |                 stored_memory_id = store_result_dict.get("memory_id")
2654 |                 
2655 |                 # Link result to sources...
2656 |                 if stored_memory_id:
2657 |                     link_tasks = []
2658 |                     for source_id in source_memory_ids:
2659 |                          link_task = create_memory_link(
2660 |                              source_memory_id=stored_memory_id,
2661 |                              target_memory_id=source_id,
2662 |                              link_type=LinkType.GENERALIZES.value,
2663 |                              description=f"Source for consolidated {consolidation_type}",
2664 |                              db_path=db_path
2665 |                          )
2666 |                          link_tasks.append(link_task)
2667 |                     await asyncio.gather(*link_tasks, return_exceptions=True)
2668 |             
2669 |             # Log operation...
2670 |             
2671 |             # Commit...
2672 |             
2673 |             # Prepare and return result...
2674 |     except (ToolInputError, ToolError):
2675 |         raise
2676 |     except Exception as e:
2677 |         logger.error(f"Failed to consolidate memories: {str(e)}", exc_info=True)
2678 |         raise ToolError(f"Failed to consolidate memories: {str(e)}") from e
2679 | ```
2680 | 
2681 | This pattern:
2682 | 1. Integrates with external LLM services
2683 | 2. Implements sophisticated source memory selection
2684 | 3. Derives importance and confidence heuristically
2685 | 4. Creates bidirectional links to source memories
2686 | 5. Uses asynchronous link creation with gather
2687 | 
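A hedged usage sketch of the consolidation tool is shown below; the import path, provider string, and `query_filter` keys are illustrative assumptions:

```python
# Usage sketch; import path, provider string, and filter keys are assumptions.
from unified_memory import consolidate_memories

async def summarize_observations(workflow_id: str) -> dict:
    result = await consolidate_memories(
        workflow_id=workflow_id,
        consolidation_type="insight",       # summary | insight | procedural | question
        query_filter={"memory_level": "episodic", "memory_type": "observation"},
        max_source_memories=15,
        provider="openai",
        store_result=True,
        store_as_level="semantic",          # consolidated knowledge is promoted
        max_tokens=800,
    )
    # The result is expected to contain the consolidated content and, if stored,
    # the new memory_id that now GENERALIZES each source memory.
    return result
```
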
2688 | ### Hybrid Search with Weighted Scoring
2689 | 
2690 | ```python
2691 | async def hybrid_search_memories(
2692 |     query: str,
2693 |     workflow_id: Optional[str] = None,
2694 |     limit: int = 10,
2695 |     offset: int = 0,
2696 |     semantic_weight: float = 0.6,
2697 |     keyword_weight: float = 0.4,
2698 |     # Additional parameters...
2699 | ) -> Dict[str, Any]:
2700 |     """Performs a hybrid search combining semantic similarity and keyword/filtered relevance."""
2701 |     # Validation...
2702 |     
2703 |     try:
2704 |         async with DBConnection(db_path) as conn:
2705 |             # --- Step 1: Semantic Search ---
2706 |             semantic_results: List[Tuple[str, float]] = []
2707 |             if norm_sem_weight > 0:
2708 |                 try:
2709 |                     semantic_candidate_limit = min(max(limit * 5, 50), MAX_SEMANTIC_CANDIDATES)
2710 |                     semantic_results = await _find_similar_memories(
2711 |                         conn=conn,
2712 |                         query_text=query,
2713 |                         workflow_id=workflow_id,
2714 |                         limit=semantic_candidate_limit,
2715 |                         threshold=0.1,  # Lower threshold for hybrid
2716 |                         memory_level=memory_level,
2717 |                         memory_type=memory_type
2718 |                     )
2719 |                     for mem_id, score in semantic_results:
2720 |                         combined_scores[mem_id]["semantic"] = score
2721 |                 except Exception as sem_err:
2722 |                     logger.warning(f"Semantic search part failed in hybrid search: {sem_err}")
2723 |             
2724 |             # --- Step 2: Keyword/Filtered Search ---
2725 |             if norm_key_weight > 0:
2726 |                 # Build query with filters...
2727 |                 # Execute query...
2728 |                 # Calculate raw scores...
2729 |                 # Normalize keyword scores...
2730 |                 for mem_id, raw_score in raw_keyword_scores.items():
2731 |                     normalized_kw_score = min(max(raw_score / normalization_factor, 0.0), 1.0)
2732 |                     combined_scores[mem_id]["keyword"] = normalized_kw_score
2733 |             
2734 |             # --- Step 3: Calculate Hybrid Score ---
2735 |             if combined_scores:
2736 |                 for _mem_id, scores in combined_scores.items():
2737 |                     scores["hybrid"] = (scores["semantic"] * norm_sem_weight) + (scores["keyword"] * norm_key_weight)
2738 | 
2739 |                 # Sort by hybrid score
2740 |                 sorted_ids_scores = sorted(combined_scores.items(), key=lambda item: item[1]["hybrid"], reverse=True)
2741 | 
2742 |                 # Apply pagination after ranking
2743 |                 paginated_ids_scores = sorted_ids_scores[offset : offset + limit]
2744 |                 final_ranked_ids = [item[0] for item in paginated_ids_scores]
2745 |                 final_scores_map = {item[0]: item[1] for item in paginated_ids_scores}
2746 |             
2747 |             # --- Step 4-7: Fetch details, links, reconstruct results, update access ---
2748 |             # ...
2749 |             
2750 |             # Return final results...
2751 |     except ToolInputError:
2752 |         raise
2753 |     except Exception as e:
2754 |         logger.error(f"Hybrid search failed: {str(e)}", emoji_key="x", exc_info=True)
2755 |         raise ToolError(f"Hybrid search failed: {str(e)}") from e
2756 | ```
2757 | 
2758 | This pattern:
2759 | 1. Combines vector similarity and keyword search
2760 | 2. Implements weighted scoring with normalization
2761 | 3. Applies filters and pagination efficiently
2762 | 4. Handles score normalization for different ranges
2763 | 5. Optimizes database access with batched operations
2764 | 
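The weighting logic can be illustrated with a small standalone sketch (not the actual implementation), showing how the weights are normalized and the two score components fused:

```python
# Standalone sketch of weight normalization and score fusion.
from collections import defaultdict
from typing import Dict, List, Tuple

def fuse_scores(
    semantic: List[Tuple[str, float]],      # (memory_id, cosine similarity 0..1)
    keyword: Dict[str, float],              # memory_id -> raw keyword/filter score
    semantic_weight: float = 0.6,
    keyword_weight: float = 0.4,
) -> List[Tuple[str, float]]:
    # Normalize weights so they always sum to 1.0, even if the caller passes e.g. 3 and 1.
    total = semantic_weight + keyword_weight
    w_sem, w_key = semantic_weight / total, keyword_weight / total

    scores: Dict[str, Dict[str, float]] = defaultdict(lambda: {"semantic": 0.0, "keyword": 0.0})
    for mem_id, sim in semantic:
        scores[mem_id]["semantic"] = sim

    # Keyword scores are normalized into 0..1 before mixing so both
    # components share the same range.
    max_kw = max(keyword.values(), default=1.0) or 1.0
    for mem_id, raw in keyword.items():
        scores[mem_id]["keyword"] = min(max(raw / max_kw, 0.0), 1.0)

    hybrid = [(mem_id, s["semantic"] * w_sem + s["keyword"] * w_key) for mem_id, s in scores.items()]
    return sorted(hybrid, key=lambda item: item[1], reverse=True)

# Example: a memory found by both components outranks one found by only one.
print(fuse_scores([("m1", 0.82), ("m2", 0.40)], {"m1": 3.0, "m3": 5.0}))
```
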
2765 | ## System Initialization and Configuration
2766 | 
2767 | The system includes comprehensive initialization:
2768 | 
2769 | ```python
2770 | async def initialize_memory_system(db_path: str = DEFAULT_DB_PATH) -> Dict[str, Any]:
2771 |     """Initializes the Unified Agent Memory system and checks embedding service status."""
2772 |     start_time = time.time()
2773 |     logger.info("Initializing Unified Memory System...", emoji_key="rocket")
2774 |     embedding_service_warning = None
2775 | 
2776 |     try:
2777 |         # Initialize/Verify Database Schema
2778 |         async with DBConnection(db_path) as conn:
2779 |             # Test connection with simple query
2780 |             cursor = await conn.execute("SELECT count(*) FROM workflows")
2781 |             _ = await cursor.fetchone()
2782 |             await cursor.close()
2783 |         logger.success("Unified Memory System database connection verified.", emoji_key="database")
2784 | 
2785 |         # Verify EmbeddingService functionality
2786 |         try:
2787 |             embedding_service = get_embedding_service()
2788 |             if embedding_service.client is not None:
2789 |                 logger.info("EmbeddingService initialized and functional.", emoji_key="brain")
2790 |             else:
2791 |                 embedding_service_warning = "EmbeddingService client not available. Embeddings disabled."
2792 |                 logger.error(embedding_service_warning, emoji_key="warning")
2793 |                 raise ToolError(embedding_service_warning)
2794 |         except Exception as embed_init_err:
2795 |             if not isinstance(embed_init_err, ToolError):
2796 |                 embedding_service_warning = f"Failed to initialize EmbeddingService: {str(embed_init_err)}"
2797 |                 logger.error(embedding_service_warning, emoji_key="error", exc_info=True)
2798 |                 raise ToolError(embedding_service_warning) from embed_init_err
2799 |             else:
2800 |                 raise embed_init_err
2801 | 
2802 |         # Return success status
2803 |         processing_time = time.time() - start_time
2804 |         logger.success("Unified Memory System initialized successfully.", emoji_key="white_check_mark", time=processing_time)
2805 | 
2806 |         return {
2807 |             "success": True,
2808 |             "message": "Unified Memory System initialized successfully.",
2809 |             "db_path": os.path.abspath(db_path),
2810 |             "embedding_service_functional": True,
2811 |             "embedding_service_warning": None,
2812 |             "processing_time": processing_time
2813 |         }
2814 |     except Exception as e:
2815 |         processing_time = time.time() - start_time
2816 |         logger.error(f"Failed to initialize memory system: {str(e)}", emoji_key="x", exc_info=True, time=processing_time)
2817 |         if isinstance(e, ToolError):
2818 |             raise e
2819 |         else:
2820 |             raise ToolError(f"Memory system initialization failed: {str(e)}") from e
2821 | ```
2822 | 
2823 | This initialization:
2824 | 1. Verifies database connection and schema
2825 | 2. Checks embedding service functionality
2826 | 3. Provides detailed diagnostics
2827 | 4. Implements robust error handling
2828 | 5. Returns comprehensive status information
2829 | 
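A typical startup call might look like the following sketch; the import path and database location are assumptions:

```python
# Startup sketch; import path and db_path value are assumptions.
import asyncio

from unified_memory import initialize_memory_system

async def startup() -> None:
    # Raises ToolError if the database or embedding service cannot be verified.
    status = await initialize_memory_system(db_path="./agent_memory.db")
    print(f"Memory DB at {status['db_path']} "
          f"(embeddings functional: {status['embedding_service_functional']})")

asyncio.run(startup())
```
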
2830 | ## System Architecture Summary
2831 | 
2832 | The Unified Agent Memory and Cognitive System represents a sophisticated architecture for LLM agent cognitive modeling and workflow tracking. Its key architectural components include:
2833 | 
2834 | 1. **Multi-Level Memory Hierarchy**:
2835 |    - Working memory for active processing
2836 |    - Episodic memory for experiences and events
2837 |    - Semantic memory for knowledge and facts
2838 |    - Procedural memory for skills and procedures
2839 | 
2840 | 2. **Workflow Tracking Structure**:
2841 |    - Workflows as top-level containers
2842 |    - Actions for agent activities and tool use
2843 |    - Artifacts for outputs and files
2844 |    - Thought chains for reasoning processes
2845 | 
2846 | 3. **Associative Memory Graph**:
2847 |    - Bidirectional links between memories
2848 |    - Type-classified relationships
2849 |    - Weighted link strengths
2850 |    - Hierarchical organization
2851 | 
2852 | 4. **Cognitive State Management**:
2853 |    - Working memory management with capacity limits
2854 |    - Focus tracking and automatic updating
2855 |    - State persistence for context recovery
2856 |    - Workflow context summarization
2857 | 
2858 | 5. **Meta-Cognitive Capabilities**:
2859 |    - Memory consolidation (summary, insight, procedural)
2860 |    - Reflection generation (summary, progress, gaps, strengths, plan)
2861 |    - Memory promotion based on usage patterns
2862 |    - Complex visualization generation
2863 | 
2864 | 6. **Vector-Based Semantic Search**:
2865 |    - Integration with embedding services
2866 |    - Cosine similarity calculation
2867 |    - Hybrid search combining vector and keyword approaches
2868 |    - Optimized candidate selection
2869 | 
2870 | 7. **Operation Audit and Analytics**:
2871 |    - Comprehensive operation logging
2872 |    - Statistical analysis and reporting
2873 |    - Performance measurement
2874 |    - Memory access tracking
2875 | 
2876 | This architecture enables advanced agent cognition through:
2877 | 1. Systematic knowledge organization
2878 | 2. Context-aware reasoning
2879 | 3. Memory evolution and refinement
2880 | 4. Meta-cognitive reflection
2881 | 5. Structured workflow management
2882 | 6. Rich visualization and reporting
2883 | 
2884 | The system provides a comprehensive foundation for sophisticated AI agent development with human-like memory organization and cognitive processes.
2885 | 
2886 | ## Architectural Motivation and Design Philosophy
2887 | 
2888 | The Unified Agent Memory and Cognitive System emerges from a fundamental challenge in AI agent development: creating systems that can maintain context, learn from experiences, understand patterns, and exhibit increasingly human-like cognitive capabilities. Traditional approaches to LLM agent architecture frequently suffer from several limitations:
2889 | 
2890 | 1. **Context Window Constraints**: LLMs have finite context windows, making long-term memory management essential
2891 | 2. **Memory Organization**: Flat memory structures lack the nuanced organization that enables efficient retrieval
2892 | 3. **Cognitive Continuity**: Difficulty maintaining a coherent agent identity and carrying learning across sessions
2893 | 4. **Metacognitive Capabilities**: Limited support for self-reflection and knowledge consolidation
2894 | 
2895 | This memory system addresses these challenges through a cognitive architecture inspired by human memory models while being optimized for computational implementation. The four-tiered memory hierarchy (working, episodic, semantic, procedural) draws from established psychological frameworks but adapts them for practical AI implementation:
2896 | 
2897 | ```
2898 | Working Memory  → Episodic Memory  → Semantic Memory  → Procedural Memory
2899 | (Active focus)    (Experiences)       (Knowledge)        (Skills)
2900 | TTL: 30 minutes   TTL: 7 days         TTL: 30 days       TTL: 90 days
2901 | ```
2902 | 
2903 | This progression models how information flows through and evolves within the system, mimicking how human cognition transforms experiences into knowledge and eventually into skills.
2904 | 
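For illustration, this progression could be captured with a small lookup table; the constant names below are hypothetical and the real defaults are configurable:

```python
# Illustrative defaults matching the diagram above; the names are hypothetical.
MEMORY_LEVEL_TTL_SECONDS = {
    "working":    30 * 60,            # 30 minutes of active focus
    "episodic":   7 * 24 * 3600,      # 7 days of raw experience
    "semantic":   30 * 24 * 3600,     # 30 days of distilled knowledge
    "procedural": 90 * 24 * 3600,     # 90 days of learned skills
}

def default_ttl(memory_level: str) -> int:
    """Return the default time-to-live for a memory level, in seconds."""
    return MEMORY_LEVEL_TTL_SECONDS.get(memory_level, 7 * 24 * 3600)
```
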
2905 | ## Integration with Agent Architecture
2906 | 
2907 | While not explicitly detailed in the code, the memory system is designed to integrate with a comprehensive agent architecture:
2908 | 
2909 | ```
2910 | ┌───────────────────────────────────────┐
2911 | │           Agent Architecture          │
2912 | ├───────────┬─────────────┬─────────────┤
2913 | │ Perception│  Reasoning  │   Action    │
2914 | │           │             │  Generation │
2915 | ├───────────┴─────────────┴─────────────┤
2916 | │       Unified Memory System           │
2917 | ├─────────────────────────────────────┬─┤
2918 | │         Working Memory              │ │
2919 | ├─────────────────────────────────────┤ │
2920 | │  Episodic │ Semantic │ Procedural   │M│
2921 | │  Memory   │ Memory   │ Memory       │e│
2922 | ├─────────────────────────────────────┤t│
2923 | │         Memory Operations           │a│
2924 | ├─────────────────────────────────────┤c│
2925 | │  Associative Memory Network         │o│
2926 | ├─────────────────────────────────────┤g│
2927 | │  Thought Chains & Reasoning         │n│
2928 | ├─────────────────────────────────────┤i│
2929 | │  Workflow & Action Tracking         │t│
2930 | ├─────────────────────────────────────┤i│
2931 | │  Cognitive State Management         │o│
2932 | ├─────────────────────────────────────┤n│
2933 | │  Structured Knowledge Storage       │ │
2934 | └─────────────────────────────────────┴─┘
2935 | ```
2936 | 
2937 | The system functions as the cognitive backbone of an agent, with:
2938 | 
2939 | 1. **Input Integration**: Perceptions, observations, and inputs flow into episodic memory
2940 | 2. **Reasoning Support**: Thought chains and semantic memory support reasoning processes
2941 | 3. **Action Context**: Actions are recorded with reasoning and outcomes for future reference
2942 | 4. **Metacognition**: Consolidation and reflection processes enable higher-order cognition
2943 | 
2944 | Every part of the agent's functioning creates corresponding memory entries, allowing for persistent cognitive continuity across interactions.
2945 | 
2946 | ## Biomimetic Design and Cognitive Science Foundations
2947 | 
2948 | The system incorporates several principles from cognitive science:
2949 | 
2950 | ### Spreading Activation and Associative Networks
2951 | 
2952 | The memory link structure and semantic search implement a form of spreading activation, where retrieving one memory activates related memories. Through functions like `get_linked_memories()` and the focus updating in `auto_update_focus()`, the system propagates attention and retrieval along associative pathways.
2953 | 
2954 | ### Memory Decay and Reinforcement
2955 | 
2956 | The implementation of importance decay and access-based reinforcement mirrors human memory dynamics:
2957 | 
2958 | ```python
2959 | def _compute_memory_relevance(importance, confidence, created_at, access_count, last_accessed):
2960 |     now = time.time()
2961 |     age_hours = (now - created_at) / 3600 if created_at else 0
2962 |     recency_factor = 1.0 / (1.0 + (now - (last_accessed or created_at)) / 86400)
2963 |     decayed_importance = max(0, importance * (1.0 - MEMORY_DECAY_RATE * age_hours))
2964 |     usage_boost = min(1.0 + (access_count / 10.0), 2.0) if access_count else 1.0
2965 |     relevance = (decayed_importance * usage_boost * confidence * recency_factor)
2966 |     return min(max(relevance, 0.0), 10.0)
2967 | ```
2968 | 
2969 | This function incorporates multiple cognitive principles:
2970 | - Memories decay over time with a configurable rate
2971 | - Frequently accessed memories remain relevant longer
2972 | - Recently accessed memories are prioritized
2973 | - Confidence acts as a weighting factor for reliability
2974 | 
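A short worked example makes the interaction of these factors visible. The decay constant below is assumed purely for illustration; only the formula itself comes from the code above:

```python
import time

MEMORY_DECAY_RATE = 0.001   # assumed value purely for this walk-through

def _compute_memory_relevance(importance, confidence, created_at, access_count, last_accessed):
    now = time.time()
    age_hours = (now - created_at) / 3600 if created_at else 0
    recency_factor = 1.0 / (1.0 + (now - (last_accessed or created_at)) / 86400)
    decayed_importance = max(0, importance * (1.0 - MEMORY_DECAY_RATE * age_hours))
    usage_boost = min(1.0 + (access_count / 10.0), 2.0) if access_count else 1.0
    relevance = (decayed_importance * usage_boost * confidence * recency_factor)
    return min(max(relevance, 0.0), 10.0)

now = time.time()
# Memory created two days ago, read five times, last touched two hours ago:
# decay trims importance 6.0 -> ~5.71, five accesses boost it 1.5x, and the
# recent access keeps the recency factor near 0.92.
score = _compute_memory_relevance(
    importance=6.0, confidence=0.9,
    created_at=now - 48 * 3600, access_count=5, last_accessed=now - 2 * 3600,
)
print(round(score, 2))   # ~7.1 with the assumed decay rate
```
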
2975 | ### Memory Evolution Pathways
2976 | 
2977 | The system models how information evolves through cognitive processing:
2978 | 
2979 | 1. **Observation → Episodic**: Direct experiences and inputs enter as episodic memories
2980 | 2. **Episodic → Semantic**: Through `promote_memory_level()`, frequently accessed episodic memories evolve into semantic knowledge
2981 | 3. **Semantic → Procedural**: Knowledge that represents skills or procedures can be further promoted
2982 | 4. **Consolidation**: Through `consolidate_memories()`, multiple related memories synthesize into higher-order insights
2983 | 
2984 | This progression mimics human learning processes where repeated experiences transform into consolidated knowledge and eventually into skills and habits.
2985 | 
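A simplified sketch of such a promotion check is shown below; the thresholds and names are illustrative, not the values used by `promote_memory_level()`:

```python
from typing import Dict, Optional, Tuple

# Illustrative promotion check; thresholds and names are assumptions.
PROMOTION_ORDER = ["episodic", "semantic", "procedural"]
PROMOTION_CRITERIA: Dict[str, Tuple[int, float]] = {
    # level to promote FROM: (min access_count, min confidence)
    "episodic": (5, 0.8),
    "semantic": (10, 0.9),
}

def next_level_if_eligible(level: str, access_count: int, confidence: float) -> Optional[str]:
    """Return the next memory level if usage justifies promotion, else None."""
    criteria = PROMOTION_CRITERIA.get(level)
    if criteria is None:
        return None  # procedural memories are already at the top of this ladder
    min_access, min_confidence = criteria
    if access_count >= min_access and confidence >= min_confidence:
        return PROMOTION_ORDER[PROMOTION_ORDER.index(level) + 1]
    return None

print(next_level_if_eligible("episodic", access_count=7, confidence=0.85))  # -> semantic
```
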
2986 | ## Architectural Implementation Details
2987 | 
2988 | The system implements these cognitive principles through sophisticated database design and processing logic:
2989 | 
2990 | ### Circular References and Advanced SQL Techniques
2991 | 
2992 | One unique aspect not fully explored in previous sections is the handling of circular references between memories and thoughts:
2993 | 
2994 | ```sql
2995 | -- Deferrable circular foreign key constraints for thoughts <-> memories.
2996 | -- SQLite cannot add constraints via ALTER TABLE, so both columns are declared
2997 | -- DEFERRABLE INITIALLY DEFERRED when the tables are created (other columns elided):
2998 | --   thoughts.relevant_memory_id REFERENCES memories(memory_id)
2999 | --       ON DELETE SET NULL DEFERRABLE INITIALLY DEFERRED
3000 | --   memories.thought_id REFERENCES thoughts(thought_id)
3001 | --       ON DELETE SET NULL DEFERRABLE INITIALLY DEFERRED
3002 | 
3003 | -- Inside a transaction, constraint enforcement is postponed until COMMIT:
3004 | BEGIN IMMEDIATE TRANSACTION;
3005 | PRAGMA defer_foreign_keys = ON;
3006 | -- ... insert mutually referencing thought and memory rows ...
3007 | COMMIT;
3008 | ```
3009 | 
3010 | This implementation uses SQLite's deferred constraints to solve the chicken-and-egg problem of bidirectional references. This enables the creation of thoughts that reference memories, and memories that reference thoughts, without circular dependency issues during insertion.
3011 | 
3012 | ### Embedding Integration and Vector Search
3013 | 
3014 | The vector embedding system represents a crucial advancement in semantic retrieval. The code implements:
3015 | 
3016 | 1. **Dimension-Aware Storage**: Embeddings include dimension metadata for compatibility checking
3017 | 2. **Binary BLOB Storage**: Vectors are efficiently stored as binary blobs
3018 | 3. **Model Tracking**: Embedding model information is preserved for future compatibility
3019 | 4. **Optimized Retrieval**: Candidate pre-filtering happens before similarity calculation
3020 | 5. **Hybrid Retrieval**: Combined vector and keyword search for robust memory access
3021 | 
3022 | This sophisticated approach enables the "remembering-by-meaning" capability essential for human-like memory retrieval.
3023 | 
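A standalone sketch of points 1-3 (using numpy, with an illustrative column layout rather than the real schema) looks like this:

```python
# Sketch of BLOB storage and cosine scoring; the table layout is illustrative.
import sqlite3

import numpy as np

def to_blob(vector: np.ndarray) -> bytes:
    """Serialize a float32 embedding for storage in a BLOB column."""
    return np.asarray(vector, dtype=np.float32).tobytes()

def from_blob(blob: bytes, dimension: int) -> np.ndarray:
    """Deserialize, checking that the stored dimension metadata still matches."""
    vector = np.frombuffer(blob, dtype=np.float32)
    if vector.shape[0] != dimension:
        raise ValueError(f"Embedding dimension mismatch: {vector.shape[0]} != {dimension}")
    return vector

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    denom = float(np.linalg.norm(a) * np.linalg.norm(b)) or 1.0
    return float(np.dot(a, b) / denom)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE embeddings (memory_id TEXT, model TEXT, dimension INT, vector BLOB)")
vec = np.random.default_rng(0).random(8).astype(np.float32)
conn.execute("INSERT INTO embeddings VALUES (?, ?, ?, ?)",
             ("m1", "text-embedding-3-small", 8, to_blob(vec)))
row = conn.execute("SELECT dimension, vector FROM embeddings WHERE memory_id = 'm1'").fetchone()
print(cosine_similarity(vec, from_blob(row[1], row[0])))   # ~1.0
```
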
3024 | ## LLM Integration for Meta-Cognitive Functions
3025 | 
3026 | A distinctive aspect of this architecture is its use of LLMs for meta-cognitive processes:
3027 | 
3028 | ### Prompt Engineering for Cognitive Functions
3029 | 
3030 | The system includes carefully crafted prompts for various cognitive operations:
3031 | 
3032 | ```python
3033 | def _generate_consolidation_prompt(memories: List[Dict], consolidation_type: str) -> str:
3034 |     # Format memory details...
3035 |     base_prompt = f"""You are an advanced cognitive system processing and consolidating 
3036 |     memories for an AI agent. Below are {len(memories)} memory items containing 
3037 |     information, observations, and insights relevant to a task. Your goal is to 
3038 |     perform a specific type of consolidation: '{consolidation_type}'...
3039 |     """
3040 |     
3041 |     if consolidation_type == "summary":
3042 |         base_prompt += """TASK: Create a comprehensive and coherent summary that 
3043 |         synthesizes the key information and context from ALL the provided memories...
3044 |         """
3045 |     # Additional consolidation types...
3046 | ```
3047 | 
3048 | These prompts implement different cognitive functions by leveraging the LLM's capabilities within structured contexts:
3049 | 
3050 | 1. **Summary**: Integration of information across memories
3051 | 2. **Insight**: Pattern recognition and implication detection
3052 | 3. **Procedural**: Extraction of generalizable procedures and methods
3053 | 4. **Question**: Identification of knowledge gaps and uncertainties
3054 | 
3055 | Similarly, the reflection system analyzes agent behavior through targeted prompts:
3056 | 
3057 | ```python
3058 | def _generate_reflection_prompt(workflow_name, workflow_desc, operations, memories, reflection_type):
3059 |     # Format operations with memory context...
3060 |     base_prompt = f"""You are an advanced meta-cognitive system analyzing an AI agent's 
3061 |     workflow: "{workflow_name}"...
3062 |     """
3063 |     
3064 |     if reflection_type == "summary":
3065 |         base_prompt += """TASK: Create a reflective summary of this workflow's 
3066 |         progress and current state...
3067 |         """
3068 |     # Additional reflection types...
3069 | ```
3070 | 
3071 | These meta-cognitive capabilities represent an emergent property when LLMs are used to analyze the agent's own memory and behavior.
3072 | 
3073 | ## Cognitive State Management
3074 | 
3075 | An essential aspect of the system is its sophisticated cognitive state management:
3076 | 
3077 | ### Working Memory Optimization
3078 | 
3079 | The working memory implements capacity-constrained optimization:
3080 | 
3081 | ```python
3082 | async def optimize_working_memory(
3083 |     context_id: str,
3084 |     target_size: int = MAX_WORKING_MEMORY_SIZE,
3085 |     strategy: str = "balanced",  # balanced, importance, recency, diversity
3086 |     db_path: str = DEFAULT_DB_PATH
3087 | ) -> Dict[str, Any]:
3088 | ```
3089 | 
3090 | This function implements multiple strategies for managing limited attentional capacity:
3091 | 
3092 | 1. **Balanced**: Considers all relevance factors
3093 | 2. **Importance**: Prioritizes important memories
3094 | 3. **Recency**: Prioritizes recent memories
3095 | 4. **Diversity**: Ensures varied memory types for broader context
3096 | 
3097 | These strategies mirror different cognitive styles and attentional priorities in human cognition.
3098 | 
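As a standalone sketch (not the tool's actual algorithm), a diversity pass might cycle across memory types like this:

```python
# Round-robin diversity selection; the real tool also mixes in relevance scores.
from collections import defaultdict
from typing import Dict, List

def select_diverse(memories: List[Dict], target_size: int) -> List[Dict]:
    """Pick up to target_size memories, cycling across memory types so no
    single type dominates working memory."""
    by_type: Dict[str, List[Dict]] = defaultdict(list)
    for memory in sorted(memories, key=lambda m: m.get("relevance", 0.0), reverse=True):
        by_type[memory.get("memory_type", "unknown")].append(memory)

    selected: List[Dict] = []
    while len(selected) < target_size and any(by_type.values()):
        for bucket in list(by_type.values()):
            if bucket and len(selected) < target_size:
                selected.append(bucket.pop(0))   # best remaining memory of this type
    return selected

working_set = select_diverse(
    [{"memory_type": "observation", "relevance": 8.1},
     {"memory_type": "observation", "relevance": 7.9},
     {"memory_type": "question", "relevance": 6.5},
     {"memory_type": "plan", "relevance": 6.0}],
    target_size=3,
)
print([m["memory_type"] for m in working_set])   # one of each type is chosen first
```
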
3099 | ### Focus Management and Attention
3100 | 
3101 | The system implements attentional mechanisms through focus management:
3102 | 
3103 | ```python
3104 | async def auto_update_focus(
3105 |     context_id: str,
3106 |     recent_actions_count: int = 3,
3107 |     db_path: str = DEFAULT_DB_PATH
3108 | ) -> Dict[str, Any]:
3109 | ```
3110 | 
3111 | This function models automatic attention shifting through sophisticated heuristics:
3112 | - Relevance to recent actions (recency bias)
3113 | - Memory type (questions and plans get priority)
3114 | - Memory level (semantic/procedural knowledge gets higher priority)
3115 | - Base relevance (importance, confidence)
3116 | 
3117 | This dynamic focus management creates an emergent attentional system resembling human cognitive focus.
3118 | 
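These heuristics can be sketched as additive boosts on top of base relevance; the boost values below are illustrative assumptions, not the tool's actual weights:

```python
# Illustrative focus scoring; the boost values are assumptions.
def focus_score(memory: dict, recent_action_ids: set) -> float:
    score = float(memory.get("relevance", 0.0))            # base relevance
    if memory.get("action_id") in recent_action_ids:
        score += 3.0                                        # tied to recent activity
    if memory.get("memory_type") in {"question", "plan"}:
        score += 2.0                                        # open questions and plans pull attention
    if memory.get("memory_level") in {"semantic", "procedural"}:
        score += 1.0                                        # distilled knowledge gets a smaller boost
    return score

candidates = [
    {"memory_id": "m1", "relevance": 5.0, "memory_type": "observation",
     "memory_level": "episodic", "action_id": "a9"},
    {"memory_id": "m2", "relevance": 4.0, "memory_type": "question",
     "memory_level": "episodic", "action_id": "a1"},
]
new_focus = max(candidates, key=lambda m: focus_score(m, recent_action_ids={"a9"}))
print(new_focus["memory_id"])   # "m1": recency outweighs the question boost here
```
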
3119 | ## Practical System Applications
3120 | 
3121 | The unified memory system enables several practical capabilities for AI agents:
3122 | 
3123 | ### Persistent Context Across Sessions
3124 | 
3125 | Through `save_cognitive_state()` and `load_cognitive_state()`, the system enables agents to maintain cognitive continuity across sessions. This allows for:
3126 | 
3127 | 1. Persistent user relationships that evolve over time
3128 | 2. Long-running projects with progress maintained between interactions
3129 | 3. Incremental knowledge accumulation and refinement
3130 | 
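A session-boundary sketch might look like the following; the parameter names and return keys are assumptions, since the exact signatures are not shown here:

```python
# Hypothetical sketch; argument names and return keys are assumptions.
from unified_memory import load_cognitive_state, save_cognitive_state

async def end_of_session(workflow_id: str) -> str:
    state = await save_cognitive_state(
        workflow_id=workflow_id,
        title="End of session: schema cleanup finished",   # assumed parameter
    )
    return state["state_id"]                               # assumed return key

async def start_of_next_session(workflow_id: str) -> dict:
    # Restores working memory, focus, and context so the agent resumes where
    # it left off instead of rebuilding context from scratch.
    return await load_cognitive_state(workflow_id=workflow_id)
```
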
3131 | ### Knowledge Evolution and Refinement
3132 | 
3133 | The memory evolution pathways (episodic → semantic → procedural) enable knowledge maturation. Key applications include:
3134 | 
3135 | 1. Learning from repeated experiences
3136 | 2. Developing expertise through information refinement
3137 | 3. Converting learned patterns into reusable skills
3138 | 4. Building increasingly sophisticated domain understanding
3139 | 
3140 | ### Meta-Cognitive Self-Improvement
3141 | 
3142 | Through reflection and consolidation, the system enables emergent self-improvement capabilities:
3143 | 
3144 | 1. Identifying knowledge gaps through reflection
3145 | 2. Consolidating fragmented observations into coherent insights
3146 | 3. Recognizing patterns in its own problem-solving approaches
3147 | 4. Refining strategies based on past successes and failures
3148 | 
3149 | These capabilities are stepping stones toward more sophisticated cognitive agents with emergent meta-learning.
3150 | 
3151 | ## Performance Optimization and Scaling
3152 | 
3153 | The system incorporates numerous optimizations for practical deployment:
3154 | 
3155 | ### Database Performance Tuning
3156 | 
3157 | ```python
3158 | SQLITE_PRAGMAS = [
3159 |     "PRAGMA journal_mode=WAL",
3160 |     "PRAGMA synchronous=NORMAL",
3161 |     "PRAGMA foreign_keys=ON",
3162 |     "PRAGMA temp_store=MEMORY",
3163 |     "PRAGMA cache_size=-32000",
3164 |     "PRAGMA mmap_size=2147483647",
3165 |     "PRAGMA busy_timeout=30000"
3166 | ]
3167 | ```
3168 | 
3169 | These pragmas optimize SQLite for:
3170 | 1. Write-Ahead Logging for concurrency
3171 | 2. Memory-based temporary storage
3172 | 3. Large cache size (32MB)
3173 | 4. Memory-mapped I/O for performance
3174 | 5. Extended busy timeout for reliability
3175 | 
3176 | ### Query Optimization
3177 | 
3178 | The schema includes comprehensive indexing:
3179 | 
3180 | ```sql
3181 | -- Workflow indices
3182 | CREATE INDEX IF NOT EXISTS idx_workflows_status ON workflows(status);
3183 | CREATE INDEX IF NOT EXISTS idx_workflows_parent ON workflows(parent_workflow_id);
3184 | CREATE INDEX IF NOT EXISTS idx_workflows_last_active ON workflows(last_active DESC);
3185 | -- Action indices
3186 | CREATE INDEX IF NOT EXISTS idx_actions_workflow_id ON actions(workflow_id);
3187 | -- Memory indices
3188 | CREATE INDEX IF NOT EXISTS idx_memories_workflow ON memories(workflow_id);
3189 | CREATE INDEX IF NOT EXISTS idx_memories_level ON memories(memory_level);
3190 | CREATE INDEX IF NOT EXISTS idx_memories_type ON memories(memory_type);
3191 | CREATE INDEX IF NOT EXISTS idx_memories_importance ON memories(importance DESC);
3192 | -- Many more indices...
3193 | ```
3194 | 
3195 | With over 30 carefully designed indices covering the most common query patterns, the system keeps database access efficient even for complex queries.
3196 | 
3197 | ### Memory Management
3198 | 
3199 | The system implements sophisticated memory lifecycle management:
3200 | 
3201 | 1. **Time-To-Live (TTL)**: Different memory levels have appropriate default lifespans
3202 | 2. **Expiration Management**: `delete_expired_memories()` handles cleanup
3203 | 3. **Importance-Based Prioritization**: More important memories persist longer
3204 | 4. **Access Reinforcement**: Frequently used memories remain accessible
3205 | 
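A standalone sketch of TTL-based cleanup (on a toy table, not the full schema) illustrates the expiration rule; treating `ttl = 0` as "never expires" is an assumed convention here:

```python
# Toy cleanup sketch; the real delete_expired_memories() tool also logs the
# operation and works against the full schema.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE memories (memory_id TEXT, created_at INT, ttl INT)")
now = int(time.time())
conn.executemany(
    "INSERT INTO memories VALUES (?, ?, ?)",
    [("fresh", now - 60, 3600),          # 1 minute old, 1 hour TTL -> kept
     ("stale", now - 7200, 3600),        # 2 hours old, 1 hour TTL -> deleted
     ("pinned", now - 7200, 0)],         # ttl = 0 treated as "never expires" (assumed)
)

# Expired when a positive TTL has elapsed since creation.
deleted = conn.execute(
    "DELETE FROM memories WHERE ttl > 0 AND created_at + ttl <= ?", (now,)
).rowcount
print(deleted, [r[0] for r in conn.execute("SELECT memory_id FROM memories")])
# -> 1 ['fresh', 'pinned']
```
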
3206 | For large-scale deployments, the system could be extended with:
3207 | - Archival mechanisms for cold storage
3208 | - Distributed database backends for horizontal scaling
3209 | - Memory sharding across workflows
3210 | 
3211 | ## Visualization and Reporting Capabilities
3212 | 
3213 | The system includes sophisticated visualization capabilities that were not fully explored in previous sections:
3214 | 
3215 | ### Interactive Mermaid Diagrams
3216 | 
3217 | The `visualize_memory_network()` and `visualize_reasoning_chain()` functions generate interactive Mermaid diagrams that represent complex cognitive structures:
3218 | 
3219 | ```mermaid
3220 | graph TD
3221 |     M_abc123["Observation<br/>Column A is numerical<br/><i>(I: 6.0)</i>"]:::levelepisodic
3222 |     M_def456["Observation<br/>Column B is categorical<br/><i>(I: 6.0)</i>"]:::levelepisodic
3223 |     M_ghi789["Insight<br/>Data requires mixed analysis<br/><i>(I: 7.5)</i>"]:::levelsemantic
3224 |     
3225 |     M_ghi789 -- generalizes --> M_abc123
3226 |     M_ghi789 -- generalizes --> M_def456
3227 |     
3228 |     classDef levelepisodic fill:#e8f5e9,stroke:#4caf50,color:#388e3c,stroke-width:1px;
3229 |     classDef levelsemantic fill:#fffde7,stroke:#ffc107,color:#ffa000,stroke-width:1px;
3230 | ```
3231 | 
3232 | These visualizations enable:
3233 | 1. Understanding complex memory relationships
3234 | 2. Tracing reasoning pathways
3235 | 3. Identifying key knowledge structures
3236 | 4. Visualizing the agent's cognitive evolution
3237 | 
3238 | ### Comprehensive Reports
3239 | 
3240 | The `generate_workflow_report()` function creates detailed reports in multiple formats and styles:
3241 | 
3242 | 1. **Professional**: Formal business-style reporting
3243 | 2. **Concise**: Brief executive summaries
3244 | 3. **Narrative**: Story-based explanations
3245 | 4. **Technical**: Data-oriented technical documentation
3246 | 
3247 | These reporting capabilities make the agent's internal processes transparent and understandable to human collaborators.
3248 | 
3249 | ## Integration Examples and Workflow
3250 | 
3251 | Let's examine a complete workflow to understand how all components integrate:
3252 | 
3253 | 1. **Workflow Creation**: Agent creates a workflow container for a data analysis task with `create_workflow()`
3254 | 2. **Initial Goals**: Records initial goals as thoughts with `record_thought()`
3255 | 3. **Action Planning**: Plans data loading as an action with `record_action_start()`
3256 | 4. **Tool Execution**: Executes the data loading tool and records results with `record_action_completion()`
3257 | 5. **Artifact Creation**: Saves loaded data as an artifact with `record_artifact()`
3258 | 6. **Observation Creation**: Records observations about data as memories with `store_memory()`
3259 | 7. **Memory Linking**: Creates associations between related observations with `create_memory_link()`
3260 | 8. **Insight Generation**: Consolidates observations into insights with `consolidate_memories()`
3261 | 9. **Action Planning (Continued)**: Plans analysis methods based on insights
3262 | 10. **Execution and Recording**: Continues execution, recording results
3263 | 11. **Reflection**: Periodically reflects on progress with `generate_reflection()`
3264 | 12. **Focus Management**: Shifts focus based on current priorities with `auto_update_focus()`
3265 | 13. **Memory Evolution**: Frequently accessed observations evolve into semantic knowledge with `promote_memory_level()`
3266 | 14. **State Preservation**: Saves cognitive state with `save_cognitive_state()` for later continuation
3267 | 
3268 | This integrated workflow demonstrates how the memory system supports sophisticated cognitive processes while maintaining continuity, evolving knowledge, and enabling metacognition.
3269 | 
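Condensed into code, the first half of this walkthrough might look like the sketch below; the import path and many argument names are assumptions rather than the exact tool signatures:

```python
# Hedged end-to-end sketch; import path and argument names are assumptions.
from unified_memory import (
    consolidate_memories, create_workflow, record_action_completion,
    record_action_start, record_thought, store_memory,
)

async def analyze_dataset() -> None:
    # 1-2. Create the workflow container and record the initial goal.
    wf = await create_workflow(title="Quarterly sales analysis")
    wf_id = wf["workflow_id"]
    await record_thought(workflow_id=wf_id, thought_type="goal",
                         content="Goal: explain the Q3 revenue dip.")

    # 3-5. Plan and execute the data-loading action, then record its outcome.
    action = await record_action_start(
        workflow_id=wf_id, action_type="tool_use", tool_name="load_csv",
        reasoning="Need the raw data before any analysis.",
    )
    # ...invoke the actual tool here...
    await record_action_completion(action_id=action["action_id"], status="completed")

    # 6-8. Record an observation, then consolidate observations into an insight,
    # which automatically links the new memory back to its sources.
    await store_memory(
        workflow_id=wf_id, memory_level="episodic", memory_type="observation",
        content="Revenue dipped 12% in weeks with shipping delays.", importance=6.0,
    )
    await consolidate_memories(workflow_id=wf_id, consolidation_type="insight")
```
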
3270 | ## Future Extensions and Research Directions
3271 | 
3272 | The architecture lays groundwork for several advanced capabilities:
3273 | 
3274 | ### Multi-Agent Memory Sharing
3275 | 
3276 | The system could be extended for knowledge sharing between agents through:
3277 | - Standardized memory export/import
3278 | - Selective memory sharing protocols
3279 | - Cross-agent memory linking
3280 | - Collaborative knowledge building
3281 | 
3282 | ### Emotional and Motivational Components
3283 | 
3284 | Cognitive architectures could incorporate:
3285 | - Affective tagging of memories
3286 | - Motivation-based memory prioritization
3287 | - Emotional context for memory formation
3288 | - Value-aligned memory evolution
3289 | 
3290 | ### Neural-Symbolic Integration
3291 | 
3292 | Future versions might incorporate:
3293 | - Structured knowledge representations
3294 | - Logical reasoning over memory networks
3295 | - Constraint satisfaction for memory consistency
3296 | - Rule-based memory consolidation
3297 | 
3298 | ### Learning Optimizations
3299 | 
3300 | The system could be enhanced with:
3301 | - Adaptive memory promotion thresholds
3302 | - Personalized decay rates
3303 | - Learning rate parameters for different domains
3304 | - Automated memory organization optimization
3305 | 
3306 | ## Conclusion: Toward Emergent Cognitive Systems
3307 | 
3308 | The Unified Agent Memory and Cognitive System represents a sophisticated architecture that bridges traditional database systems with cognitive science-inspired memory models. By implementing a structured yet flexible memory architecture with meta-cognitive capabilities, it creates a foundation for increasingly sophisticated AI agents that can:
3309 | 
3310 | 1. Learn from experiences through structured memory evolution
3311 | 2. Maintain cognitive continuity across sessions
3312 | 3. Develop increasingly refined understanding through consolidation
3313 | 4. Engage in self-reflection and improvement
3314 | 5. Organize and prioritize information effectively
3315 | 
3316 | As LLM-based agents continue to evolve, sophisticated memory architectures like this one will become increasingly essential for overcoming the limitations of context windows and enabling truly persistent, learning agents with emergent cognitive capabilities.
3317 | 
3318 | The system ultimately aims to address a core challenge in AI development: creating agents that don't just simulate intelligence in the moment, but that accumulate, refine, and evolve knowledge over time - a crucial stepping stone toward more capable and general artificial intelligence.
```