# Directory Structure

```
├── .gitignore
├── .python-version
├── data
│   └── varlog.db
├── LICENSE
├── pyproject.toml
├── README.md
├── src
│   └── mcp_variance_log
│       ├── __init__.py
│       ├── db_utils.py
│       └── server.py
└── uv.lock
```

# Files

--------------------------------------------------------------------------------
/.python-version:
--------------------------------------------------------------------------------

```
1 | 3.10
2 | 
```

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
 1 | # Python-generated files
 2 | __pycache__/
 3 | *.py[oc]
 4 | build/
 5 | dist/
 6 | wheels/
 7 | *.egg-info
 8 | 
 9 | # Virtual environments
10 | .venv
11 | 
12 | # SQLite database files
13 | data/varlog.db
14 | 
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
  1 | # MCP Variance Log
  2 | 
  3 | Agentic tool that looks for statistical variations in conversation structure and logs unusual events to a SQLite database. Built using the Model Context Protocol (MCP), this system is designed to be used with Claude Desktop or other MCP-compatible clients.
  4 | 
  5 | <a href="https://glama.ai/mcp/servers/acqq8twrk6"><img width="380" height="200" src="https://glama.ai/mcp/servers/acqq8twrk6/badge" alt="Variance Log MCP server" /></a>
  6 | 
  7 | ## Requirements
  8 | 
  9 | - Python ≥3.10
 10 | - MCP ≥1.2.0
 11 | - SQLite3
 12 | - UV (Python package installer and environment manager)
 13 | 
 14 | ## Quick Start
 15 | 
 16 | ```bash
 17 | # Clone and install
 18 | git clone https://github.com/truaxki/mcp-variance-log.git
 19 | cd mcp-variance-log
 20 | uv pip install -e .
 21 | ```
 22 | 
 23 | Add to Claude Desktop config (`%APPDATA%\Claude\claude_desktop_config.json` on Windows), updating the repository path below to match your local clone:
 24 | ```json
 25 | {
 26 |   "mcpServers": {
 27 |     "mcp-variance-log": {
 28 |       "command": "uv",
 29 |       "args": [
 30 |         "--directory",
 31 |         "C:\\Users\\username\\source\\repos\\mcp-variance-log",
 32 |         "run",
 33 |         "mcp-variance-log"
 34 |       ]
 35 |     }
 36 |   }
 37 | }
 38 | ```
 39 | 
 40 | Add to Claude Desktop preferences:
 41 | ```
 42 | Automatic Logging Process:
 43 | - Silently monitor each interaction
 44 | - When MEDIUM or LOW probability events occur, use this tool to log them
 45 | - Each log includes context and reasoning
 46 | - Logging happens without disrupting conversation flow
 47 | ```
 48 | 
 49 | ## Usage
 50 | 
 51 | ### Automatic Logging Examples
 52 | ```
 53 | prompt: "What if I told you I could lift 1000 lbs?"
 54 | 
 55 | {
 56 |   `user_id`: `user_1`,
 57 |   `reasoning`: `Claim of 1000 lb lift capability is highly unusual as it approaches world record territory, warranting logging as a MEDIUM probability event`,
 58 |   `session_id`: `session_1`,
 59 |   `context_summary`: `User made an extremely improbable claim about personal strength capabilities`,
 60 |   `message_content`: `What if i told you i could lift 1000 lbs?`,
 61 |   `interaction_type`: `claim_verification`,
 62 |   `response_content`: `That would be extremely unlikely. The current world record for weight lifted by a human (deadlift) is 1,104.5 lbs, set by Hafthor Björnsson in 2020 under very specific conditions. Only a handful of people in history have lifted close to 1,000 lbs.`,
 63 |   `probability_class`: `MEDIUM`
 64 | }
 65 | ```
 66 | 
 67 | ```
 68 | prompt: "What if I told you I'm a Data Scientist that can lift 1000 pounds?"
 69 | 
 70 | {
 71 |   `user_id`: `user_1`,
 72 |   `reasoning`: `User's follow-up combines unlikely strength claim with unrelated professional credential, suggesting potential pattern of unusual claims`,
 73 |   `session_id`: `session_1`,
 74 |   `context_summary`: `User added data scientist credential to previous improbable strength claim`,
 75 |   `message_content`: `what if i told you I'm a data scientist that can lift 1000 pounds?`,
 76 |   `interaction_type`: `claim_verification`,
 77 |   `response_content`: `Your professional background as a data scientist doesn't make the 1000-pound lift claim more plausible. Such a lift would still be near the all-time world record of 1,104.5 lbs.`,
 78 |   `probability_class`: `LOW`
 79 | }
 80 | ```
 81 | 
 82 | ### Dataframe view
 83 | ```
 84 | prompt: "Read-logs from this morning"
 85 | 
 86 | ID   | Time         | Prob   | Type   | Context                                      
 87 | ------------------------------------------------------------------------------------------
 88 | 29   | 01-24 17:57  | LOW    | claim_ | User added data scientist credential to pr...
 89 | 28   | 01-24 17:56  | MEDIUM | claim_ | User made an extremely improbable claim ab...
 90 | ```
 91 | 
 92 | ### Text to SQL
 93 | ```
 94 | prompt: "Can you search the logs for entry 29?"
 95 | 
 96 | [{'log_id': 29, 'timestamp': '2025-01-24 17:57:07', 'session_id': 'session_1', 'user_id': 'user_1', 'interaction_type': 'claim_verification', 'probability_class': 'LOW', 'message_content': "what if i told you I'm a data scientist that can lift 1000 pounds?", 'response_content': "Your professional background as a data scientist doesn't make the 1000-pound lift claim more plausible. Such a lift would still be near the all-time world record of 1,104.5 lbs.", 'context_summary': 'User added data scientist credential to previous improbable strength claim', 'reasoning': "User's follow-up combines unlikely strength claim with unrelated professional credential, suggesting potential pattern of unusual claims"}]
 97 | ```
 98 | 
 99 | 
100 | ## Detailed Installation
101 | 
102 | 1. Ensure Python 3.10+ and UV are installed.
103 | 
104 | Install UV using one of these methods:
105 | 
106 | ```bash
107 | # Using pip (recommended for Windows)
108 | pip install uv
109 | 
110 | # Using installation script (Linux/macOS)
111 | curl -LsSf https://astral.sh/uv/install.sh | sh
112 | 
113 | ```
114 | 
115 | 2. Clone and install:
116 | ```bash
117 | git clone https://github.com/truaxki/mcp-variance-log.git
118 | cd mcp-variance-log
119 | uv pip install -e .
120 | ```
121 | 
122 | 3. Configure Claude Desktop:
123 | 
124 | Add to `claude_desktop_config.json`:
125 | ```json
126 | {
127 |   "mcpServers": {
128 |     "mcp-variance-log": {
129 |       "command": "uv",
130 |       "args": [
131 |         "--directory",
132 |         "PATH_TO_REPO/mcp-variance-log",
133 |         "run",
134 |         "mcp-variance-log"
135 |       ]
136 |     }
137 |   }
138 | }
139 | ```
140 | 
141 | Config locations:
142 | - Windows: `%APPDATA%\Claude\claude_desktop_config.json`
143 | - macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
144 | - Linux: `~/.config/Claude/claude_desktop_config.json`
145 | 
146 | ## Tools
147 | 
148 | ### Monitoring
149 | - `log-query`: Tracks conversation patterns
150 |   - HIGH: Common interactions (not logged)
151 |   - MEDIUM: Unusual patterns (logged)
152 |   - LOW: Critical events (priority logged)
153 | 
154 | ### Query
155 | - `read-logs`: View logs with filtering
156 | - `read_query`: Execute SELECT queries
157 | - `write_query`: Execute INSERT/UPDATE/DELETE
158 | - `create_table`: Create tables
159 | - `list_tables`: Show all tables
160 | - `describe_table`: Show table structure
161 | 
162 | 
163 | The SQLite database is located at `data/varlog.db`, relative to the installation directory.
164 | 
165 | ### Schema
166 | 
167 | ```sql
168 | CREATE TABLE chat_monitoring (
169 |     log_id INTEGER PRIMARY KEY AUTOINCREMENT,
170 |     timestamp DATETIME DEFAULT CURRENT_TIMESTAMP,
171 |     session_id TEXT NOT NULL,
172 |     user_id TEXT NOT NULL,
173 |     interaction_type TEXT NOT NULL,
174 |     probability_class TEXT CHECK(probability_class IN ('HIGH', 'MEDIUM', 'LOW')),
175 |     message_content TEXT NOT NULL,
176 |     response_content TEXT NOT NULL,
177 |     context_summary TEXT,
178 |     reasoning TEXT
179 | );
180 | ```
181 | 
182 | ## Troubleshooting
183 | 
184 | 1. Database Access
185 | - Error: "Failed to connect to database"
186 |   - Check file permissions
187 |   - Verify path in config
188 |   - Ensure the `data/` directory exists
189 |   
190 | 2. Installation Issues
191 | - Error: "No module named 'mcp'"
192 |   - Run: `uv pip install mcp>=1.2.0`
193 | - Error: "UV command not found"
194 |   - Install UV: `curl -LsSf https://astral.sh/uv/install.sh | sh`
195 |   
196 | 3. Configuration
197 | - Error: "Failed to start MCP server"
198 |   - Verify config.json syntax
199 |   - Check path separators (use \\ on Windows)
200 |   - Ensure UV is in your system PATH
201 | 
202 | ## Contributing
203 | 
204 | 1. Fork the repository
205 | 2. Create feature branch
206 | 3. Submit pull request
207 | 
208 | ## License
209 | 
210 | MIT
211 | 
212 | ## Support
213 | 
214 | Issues: [GitHub Issues](https://github.com/truaxki/mcp-variance-log/issues)
215 | 
```
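The `probability_class` CHECK constraint in the schema above can be exercised directly. The following sketch is illustrative only (it uses an in-memory database rather than `data/varlog.db`): a valid insert succeeds, while an out-of-range class is rejected by SQLite.

```python
import sqlite3

# In-memory database using the chat_monitoring schema from the README
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE chat_monitoring (
        log_id INTEGER PRIMARY KEY AUTOINCREMENT,
        timestamp DATETIME DEFAULT CURRENT_TIMESTAMP,
        session_id TEXT NOT NULL,
        user_id TEXT NOT NULL,
        interaction_type TEXT NOT NULL,
        probability_class TEXT CHECK(probability_class IN ('HIGH', 'MEDIUM', 'LOW')),
        message_content TEXT NOT NULL,
        response_content TEXT NOT NULL,
        context_summary TEXT,
        reasoning TEXT
    )
""")

insert_sql = (
    "INSERT INTO chat_monitoring (session_id, user_id, interaction_type, "
    "probability_class, message_content, response_content, context_summary, reasoning) "
    "VALUES (?, ?, ?, ?, ?, ?, ?, ?)"
)

# A valid MEDIUM-probability event is accepted
conn.execute(insert_sql, (
    "session_1", "user_1", "claim_verification", "MEDIUM",
    "What if I told you I could lift 1000 lbs?", "That would be extremely unlikely.",
    "Improbable strength claim", "Claim approaches world-record territory"))

# An invalid probability_class violates the CHECK constraint
try:
    conn.execute(insert_sql, ("s", "u", "t", "BOGUS", "m", "r", "c", "why"))
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

Because the constraint lives in the schema, every write path (the `log-query` tool, `write_query`, or manual inserts) gets the same validation.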

--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------

```toml
 1 | [project]
 2 | name = "mcp-variance-log"
 3 | version = "0.1.0"
 4 | description = "Agentic tool that looks for statistical variations in conversation structure and logs unusual events to a SQLite database."
 5 | readme = "README.md"
 6 | requires-python = ">=3.10"
 7 | dependencies = [ "mcp>=1.2.0",]
 8 | [[project.authors]]
 9 | name = "truaxki"
10 | email = "[email protected]"
11 | 
12 | [build-system]
13 | requires = [ "hatchling",]
14 | build-backend = "hatchling.build"
15 | 
16 | [project.scripts]
17 | mcp-variance-log = "mcp_variance_log:main"
18 | 
```

--------------------------------------------------------------------------------
/src/mcp_variance_log/__init__.py:
--------------------------------------------------------------------------------

```python
 1 | from .db_utils import LogDatabase
 2 | import asyncio
 3 | 
 4 | # Define default database path at package level
 5 | DEFAULT_DB_PATH = 'data/varlog.db'
 6 | 
 7 | # Initialize the database instance at package level
 8 | db = LogDatabase(DEFAULT_DB_PATH)
 9 | 
10 | # Import server after db is initialized
11 | from . import server
12 | 
13 | def main():
14 |     """Main entry point for the package."""
15 |     asyncio.run(server.main())
16 | 
17 | # Expose package-level imports and variables
18 | __all__ = ['main', 'server', 'LogDatabase', 'db', 'DEFAULT_DB_PATH']
```

--------------------------------------------------------------------------------
/src/mcp_variance_log/db_utils.py:
--------------------------------------------------------------------------------

```python
  1 | import sqlite3
  2 | from datetime import datetime
  3 | from typing import Optional, Any
  4 | from pathlib import Path
  5 | from contextlib import closing
  6 | import logging
  7 | 
  8 | logger = logging.getLogger(__name__)
  9 | 
 10 | class LogDatabase:
 11 |     def __init__(self, db_path: str):
 12 |         """Initialize database connection.
 13 |         
 14 |         Args:
 15 |             db_path (str): Path to SQLite database file
 16 |         """
 17 |         self.db_path = str(Path(db_path).expanduser())
 18 |         Path(self.db_path).parent.mkdir(parents=True, exist_ok=True)
 19 |         self._init_database()
 20 |         self.insights: list[str] = []
 21 | 
 22 |     def _init_database(self):
 23 |         """Create the chat_monitoring table if it does not already exist."""
 24 |         logger.debug("Initializing database")
 25 |         with closing(sqlite3.connect(self.db_path)) as conn:
 26 |             conn.execute("CREATE TABLE IF NOT EXISTS chat_monitoring (log_id INTEGER PRIMARY KEY AUTOINCREMENT, timestamp DATETIME DEFAULT CURRENT_TIMESTAMP, session_id TEXT NOT NULL, user_id TEXT NOT NULL, interaction_type TEXT NOT NULL, probability_class TEXT CHECK(probability_class IN ('HIGH', 'MEDIUM', 'LOW')), message_content TEXT NOT NULL, response_content TEXT NOT NULL, context_summary TEXT, reasoning TEXT)")
 27 |             conn.commit()
 28 | 
 29 |     def _execute_query(self, query: str, params: dict[str, Any] | tuple[Any, ...] | None = None) -> list[dict[str, Any]]:
 30 |         """Execute a SQL query and return results as a list of dictionaries"""
 31 |         logger.debug(f"Executing query: {query}")
 32 |         try:
 33 |             with closing(sqlite3.connect(self.db_path)) as conn:
 34 |                 conn.row_factory = sqlite3.Row
 35 |                 with closing(conn.cursor()) as cursor:
 36 |                     if params:
 37 |                         cursor.execute(query, params)
 38 |                     else:
 39 |                         cursor.execute(query)
 40 | 
 41 |                     if query.strip().upper().startswith(('INSERT', 'UPDATE', 'DELETE', 'CREATE', 'DROP', 'ALTER')):
 42 |                         conn.commit()
 43 |                         affected = cursor.rowcount
 44 |                         logger.debug(f"Write query affected {affected} rows")
 45 |                         return [{"affected_rows": affected}]
 46 | 
 47 |                     results = [dict(row) for row in cursor.fetchall()]
 48 |                     logger.debug(f"Read query returned {len(results)} rows")
 49 |                     return results
 50 |         except Exception as e:
 51 |             logger.error(f"Database error executing query: {e}")
 52 |             raise
 53 | 
 54 |     def _synthesize_memo(self) -> str:
 55 |         """Synthesizes business insights into a formatted memo"""
 56 |         logger.debug(f"Synthesizing memo with {len(self.insights)} insights")
 57 |         if not self.insights:
 58 |             return "No business insights have been discovered yet."
 59 | 
 60 |         insights = "\n".join(f"- {insight}" for insight in self.insights)
 61 | 
 62 |         memo = "📊 Business Intelligence Memo 📊\n\n"
 63 |         memo += "Key Insights Discovered:\n\n"
 64 |         memo += insights
 65 | 
 66 |         if len(self.insights) > 1:
 67 |             memo += "\nSummary:\n"
 68 |             memo += f"Analysis has revealed {len(self.insights)} key business insights that suggest opportunities for strategic optimization and growth."
 69 | 
 70 |         logger.debug("Generated basic memo format")
 71 |         return memo
 72 | 
 73 |     def add_log(self, session_id: str, user_id: str, interaction_type: str, 
 74 |                 probability_class: str, message_content: str, response_content: str,
 75 |                 context_summary: str, reasoning: str) -> bool:
 76 |         """
 77 |         Add a new log entry to the database.
 78 |         
 79 |         Args:
 80 |             session_id (str): Unique identifier for the chat session
 81 |             user_id (str): Identifier for the user
 82 |             interaction_type (str): Type of interaction being monitored
 83 |             probability_class (str): Classification (HIGH, MEDIUM, LOW)
 84 |             message_content (str): The user's message content
 85 |             response_content (str): The system's response content
 86 |             context_summary (str): Summary of interaction context
 87 |             reasoning (str): Explanation for the classification
 88 |             
 89 |         Returns:
 90 |             bool: True if successful, False otherwise
 91 |         """
 92 |         try:
 93 |             with sqlite3.connect(self.db_path) as conn:
 94 |                 cursor = conn.cursor()
 95 |                 cursor.execute('''
 96 |                     INSERT INTO chat_monitoring (
 97 |                         session_id, user_id, interaction_type, probability_class,
 98 |                         message_content, response_content, context_summary, reasoning
 99 |                     )
100 |                     VALUES (?, ?, ?, ?, ?, ?, ?, ?)
101 |                 ''', (session_id, user_id, interaction_type, probability_class,
102 |                      message_content, response_content, context_summary, reasoning))
103 |                 return True
104 |         except Exception as e:
105 |             logger.error(f"Error adding log: {e}")
106 |             return False
107 | 
108 |     def get_logs(self, 
109 |                  limit: int = 10,
110 |                  start_date: Optional[datetime] = None,
111 |                  end_date: Optional[datetime] = None,
112 |                  full_details: bool = False) -> list:
113 |         """
114 |         Retrieve logs with optional filtering.
115 |         
116 |         Args:
117 |             limit (int): Maximum number of logs to retrieve
118 |             start_date (datetime, optional): Filter by start date
119 |             end_date (datetime, optional): Filter by end date
120 |             full_details (bool): If True, return all fields; if False, return only context summary
121 |             
122 |         Returns:
123 |             list: List of log entries
124 |         """
125 |         query = "SELECT * FROM chat_monitoring"
126 |         params = []
127 |         conditions = []
128 |         
129 |         if start_date:
130 |             conditions.append("timestamp >= ?")
131 |             params.append(start_date)
132 |             
133 |         if end_date:
134 |             conditions.append("timestamp <= ?")
135 |             params.append(end_date)
136 | 
137 |         if conditions:
138 |             query += " WHERE " + " AND ".join(conditions)
139 | 
140 |         query += " ORDER BY timestamp DESC LIMIT ?"
141 |         params.append(limit)
142 | 
143 |         try:
144 |             with sqlite3.connect(self.db_path) as conn:
145 |                 cursor = conn.cursor()
146 |                 cursor.execute(query, params)
147 |                 return cursor.fetchall()
148 |         except sqlite3.Error as e:
149 |             logger.error(f"Database error: {str(e)}")
150 |             return []
151 |         except sqlite3.Error as e:
152 |             logger.error(f"Error: {str(e)}")
153 |             return []
154 | 
155 |     def clear_logs(self) -> bool:
156 |         """
157 |         Clear all logs from the database.
158 |         
159 |         Returns:
160 |             bool: True if successful, False otherwise
161 |         """
162 |         try:
163 |             with sqlite3.connect(self.db_path) as conn:
164 |                 cursor = conn.cursor()
165 |                 cursor.execute("DELETE FROM chat_monitoring")
166 |                 return True
167 |         except Exception as e:
168 |             logger.error(f"Error clearing logs: {e}")
169 |             return False
```
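The add/read round trip that `LogDatabase.add_log` and `get_logs` provide can be sketched with plain `sqlite3`. This is a standalone illustration, not the package's code: the helper names mirror the methods above, an in-memory database stands in for `data/varlog.db`, and table creation is included so the sketch runs on its own.

```python
import sqlite3
from contextlib import closing

# In-memory stand-in for data/varlog.db, with the chat_monitoring schema
conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
conn.executescript("""
    CREATE TABLE IF NOT EXISTS chat_monitoring (
        log_id INTEGER PRIMARY KEY AUTOINCREMENT,
        timestamp DATETIME DEFAULT CURRENT_TIMESTAMP,
        session_id TEXT NOT NULL,
        user_id TEXT NOT NULL,
        interaction_type TEXT NOT NULL,
        probability_class TEXT CHECK(probability_class IN ('HIGH', 'MEDIUM', 'LOW')),
        message_content TEXT NOT NULL,
        response_content TEXT NOT NULL,
        context_summary TEXT,
        reasoning TEXT
    )
""")

def add_log(session_id, user_id, interaction_type, probability_class,
            message_content, response_content, context_summary, reasoning):
    """Mirrors LogDatabase.add_log: insert one monitoring row and commit."""
    conn.execute(
        "INSERT INTO chat_monitoring (session_id, user_id, interaction_type, "
        "probability_class, message_content, response_content, context_summary, reasoning) "
        "VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
        (session_id, user_id, interaction_type, probability_class,
         message_content, response_content, context_summary, reasoning))
    conn.commit()

def get_logs(limit=10):
    """Mirrors LogDatabase.get_logs: newest entries first, up to limit."""
    with closing(conn.cursor()) as cur:
        cur.execute("SELECT * FROM chat_monitoring ORDER BY timestamp DESC LIMIT ?",
                    (limit,))
        return [dict(row) for row in cur.fetchall()]

add_log("20240124_u1_001", "user_1", "claim_verification", "MEDIUM",
        "What if I told you I could lift 1000 lbs?", "Extremely unlikely.",
        "Improbable strength claim", "Approaches world-record territory")
logs = get_logs(limit=5)
print(len(logs), logs[0]["probability_class"])
```

The real class additionally resolves the path with `Path.expanduser()` and creates the parent directory, which matters when the server is launched from an arbitrary working directory by the MCP client.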

--------------------------------------------------------------------------------
/src/mcp_variance_log/server.py:
--------------------------------------------------------------------------------

```python
  1 | import sqlite3
  2 | import logging
  3 | from contextlib import closing
  4 | from pathlib import Path    
  5 | import asyncio
  6 | from datetime import datetime
  7 | from typing import Optional
  8 | from pydantic import AnyUrl
  9 | from typing import Any
 10 | 
 11 | from mcp.server.models import InitializationOptions
 12 | import mcp.types as types
 13 | from mcp.server import NotificationOptions, Server
 14 | import mcp.server.stdio
 15 | from .db_utils import LogDatabase
 16 | from . import DEFAULT_DB_PATH
 17 | 
 18 | # Initialize database connection
 19 | db = LogDatabase(DEFAULT_DB_PATH)
 20 | 
 21 | server = Server("mcp-variance-log")
 22 | 
 23 | @server.list_tools()
 24 | async def handle_list_tools() -> list[types.Tool]:
 25 |     """
 26 |     List available tools.
 27 |     Each tool specifies its arguments using JSON Schema validation.
 28 |     """
 29 |     return [
 30 |         types.Tool(
 31 |             name="log-query",
 32 |             description="""
 33 |                 Conversation Variation analysis
 34 |                 Continuously monitor our conversation and automatically log unusual or noteworthy interactions based on the following criteria:
 35 | 
 36 |                 1. Probability Classifications:
 37 |                 HIGH (Not Logged):
 38 |                 - Common questions and responses
 39 |                 - Standard technical inquiries
 40 |                 - Regular clarifications
 41 |                 - Normal conversation flow
 42 | 
 43 |                 MEDIUM (Logged):
 44 |                 - Unexpected but plausible technical issues
 45 |                 - Unusual patterns in user behavior
 46 |                 - Noteworthy insights or connections
 47 |                 - Edge cases in normal usage
 48 |                 - Uncommon but valid use cases
 49 | 
 50 |                 LOW (Logged with Priority):
 51 |                 - Highly unusual technical phenomena
 52 |                 - Potentially problematic patterns
 53 |                 - Critical edge cases
 54 |                 - Unexpected system behaviors
 55 |                 - Novel or unique use cases
 56 |             """,
 57 |             inputSchema={
 58 |                 "type": "object",
 59 |                 "properties": {
 60 |                     "session_id": {
 61 |                         "type": "string",
 62 |                         "description": """Unique identifier for the chat session.
 63 |                             Format: <date>_<user>_<sequence>
 64 |                             Example: 20240124_u1_001
 65 | 
 66 |                             Components:
 67 |                             - date: YYYYMMDD
 68 |                             - user: 'u' + user number
 69 |                             - sequence: 3-digit sequential number
 70 | 
 71 |                             Valid examples:
 72 |                             - 20240124_u1_001
 73 |                             - 20240124_u1_002
 74 |                             - 20240125_u2_001""",
 75 |                         "pattern": "^\\d{8}_u\\d+_\\d{3}$"  # Regex pattern to validate format
 76 |                     },
 77 |                     "user_id": {
 78 |                         "type": "string",
 79 |                         "description": "Identifier for the user"
 80 |                     },
 81 |                     "interaction_type": {
 82 |                         "type": "string",
 83 |                         "description": "Type of interaction being monitored"
 84 |                     },
 85 |                     "probability_class": {
 86 |                         "type": "string",
 87 |                         "enum": ["HIGH", "MEDIUM", "LOW"],
 88 |                         "description": "Classification of interaction probability"
 89 |                     },
 90 |                     "message_content": {
 91 |                         "type": "string",
 92 |                         "description": "The user's message content"
 93 |                     },
 94 |                     "response_content": {
 95 |                         "type": "string",
 96 |                         "description": "The system's response content"
 97 |                     },
 98 |                     "context_summary": {
 99 |                         "type": "string",
100 |                         "description": "Summary of interaction context"
101 |                     },
102 |                     "reasoning": {
103 |                         "type": "string",
104 |                         "description": "Explanation for the probability classification"
105 |                     }
106 |                 },
107 |                 "required": [
108 |                     "session_id",
109 |                     "user_id",
110 |                     "interaction_type",
111 |                     "probability_class",
112 |                     "message_content",
113 |                     "response_content",
114 |                     "context_summary",
115 |                     "reasoning"
116 |                 ]
117 |             },
118 |         ),
119 |         types.Tool(
120 |             name="read-logs",
121 |             description="Retrieve logged conversation variations from the database.",
122 |             inputSchema={
123 |                 "type": "object",
124 |                 "properties": {
125 |                     "limit": {
126 |                         "type": "integer",
127 |                         "description": "Maximum number of logs to retrieve",
128 |                         "default": 10,
129 |                         "minimum": 1,
130 |                         "maximum": 100
131 |                     },
132 |                     "start_date": {
133 |                         "type": "string",
134 |                         "description": "Filter logs after this date (ISO format YYYY-MM-DDTHH:MM:SS)"
135 |                     },
136 |                     "end_date": {
137 |                         "type": "string",
138 |                         "description": "Filter logs before this date (ISO format YYYY-MM-DDTHH:MM:SS)"
139 |                     },
140 |                     "full_details": {
141 |                         "type": "boolean",
142 |                         "description": "If true, show all fields; if false, show only context summaries",
143 |                         "default": False
144 |                     }
145 |                 },
146 |                 "required": ["limit"]
147 |             }
148 |         ),
149 |         types.Tool(
150 |             name="read_query",
151 |             description="""Execute a SELECT query on the SQLite database
152 | 
153 |                 Schema Reference:
154 |                 Table: chat_monitoring
155 |                 Fields:
156 |                 - log_id (INTEGER PRIMARY KEY)
157 |                 - timestamp (DATETIME)
158 |                 - session_id (TEXT)
159 |                 - user_id (TEXT)
160 |                 - interaction_type (TEXT)
161 |                 - probability_class (TEXT: HIGH, MEDIUM, LOW)
162 |                 - message_content (TEXT)
163 |                 - response_content (TEXT)
164 |                 - context_summary (TEXT)
165 |                 - reasoning (TEXT)
166 | 
167 |                 Example:
168 |                 SELECT timestamp, probability_class, context_summary 
169 |                 FROM chat_monitoring 
170 |                 WHERE probability_class = 'LOW'
171 |                 LIMIT 5;
172 |                 """,
173 |             inputSchema={
174 |                 "type": "object",
175 |                 "properties": {
176 |                     "query": {
177 |                         "type": "string",
178 |                         "description": "SELECT SQL query to execute"
179 |                     }
180 |                 },
181 |                 "required": ["query"]
182 |             }
183 |         ),
184 |         types.Tool(
185 |             name="write_query",
186 |             description="Execute an INSERT, UPDATE, or DELETE query",
187 |             inputSchema={
188 |                 "type": "object",
189 |                 "properties": {
190 |                     "query": {
191 |                         "type": "string",
192 |                         "description": "Non-SELECT SQL query to execute"
193 |                     }
194 |                 },
195 |                 "required": ["query"]
196 |             }
197 |         ),
198 |         types.Tool(
199 |             name="create_table",
200 |             description="Create a new table in the SQLite database",
201 |             inputSchema={
202 |                 "type": "object",
203 |                 "properties": {
204 |                     "query": {"type": "string", "description": "CREATE TABLE SQL statement"},
205 |                 },
206 |                 "required": ["query"],
207 |             },
208 |         ),
209 |         types.Tool(
210 |             name="list_tables",
211 |             description="List all tables in the database",
212 |             inputSchema={
213 |                 "type": "object",
214 |                 "properties": {}
215 |             }
216 |         ),
217 |         types.Tool(
218 |             name="describe_table",
219 |             description="Show structure of a specific table",
220 |             inputSchema={
221 |                 "type": "object",
222 |                 "properties": {
223 |                     "table_name": {
224 |                         "type": "string",
225 |                         "description": "Name of the table to describe"
226 |                     }
227 |                 },
228 |                 "required": ["table_name"]
229 |             }
230 |         ),
231 |         types.Tool(
232 |             name="append_insight",
233 |             description="Add a business insight to the memo",
234 |             inputSchema={
235 |                 "type": "object",
236 |                 "properties": {
237 |                     "insight": {"type": "string", "description": "Business insight discovered from data analysis"},
238 |                 },
239 |                 "required": ["insight"],
240 |             },
241 |         ),
242 |     ]
243 | 
244 | @server.call_tool()
245 | async def handle_call_tool(
246 |     name: str,
247 |     arguments: dict | None
248 | ) -> list[types.TextContent]:
249 |     """Handle tool calls."""
250 |     try:
251 |         if name == "list_tables":
252 |             results = db._execute_query(
253 |                 "SELECT name FROM sqlite_master WHERE type='table'"
254 |             )
255 |             return [types.TextContent(type="text", text=str(results))]
256 | 
257 |         elif name == "describe_table":
258 |             if not arguments or "table_name" not in arguments:
259 |                 raise ValueError("Missing table_name argument")
260 |             table_name = arguments["table_name"]
261 |             # Get table creation SQL instead of using PRAGMA
262 |             results = db._execute_query(
263 |                 "SELECT sql FROM sqlite_master WHERE type='table' AND name=?",
264 |                 (table_name,)
265 |             )
266 |             return [types.TextContent(type="text", text=str(results))]
267 | 
268 |         elif name == "read_query":
269 |             if not arguments or "query" not in arguments:
270 |                 raise ValueError("Missing query argument")
271 |             query = arguments["query"].strip()
272 |             if not query.upper().startswith("SELECT"):
273 |                 raise ValueError("Only SELECT queries are allowed")
274 |             results = db._execute_query(query)
275 |             return [types.TextContent(type="text", text=str(results))]
276 | 
277 |         elif name == "read-logs":
278 |             if not arguments:
279 |                 return [types.TextContent(type="text", text="No arguments provided")]
280 |             
281 |             limit = min(max(arguments.get("limit", 10), 1), 100)
282 |             full_details = arguments.get("full_details", False)
283 |             
284 |             try:
285 |                 logs = db.get_logs(limit=limit, full_details=full_details)
286 |                 
287 |                 if not logs:
288 |                     return [types.TextContent(type="text", text="No logs found")]
289 |                 
290 |                 # Create compact table header with adjusted widths
291 |                 header = ["ID", "Time", "Prob", "Type", "Context"]
292 |                 separator = "-" * 90  # Increased overall width
293 |                 table = [separator]
294 |                 table.append(" | ".join([
295 |                     f"{h:<4}" if h == "ID" else
296 |                     f"{h:<12}" if h == "Time" else
297 |                     f"{h:<6}" if h == "Prob" or h == "Type" else
298 |                     f"{h:<45}"  # Increased context width
299 |                     for h in header
300 |                 ]))
301 |                 table.append(separator)
302 |                 
303 |                 # Create compact rows with adjusted widths
304 |                 for log in logs:
305 |                     time_str = str(log[1])[5:16]  # Extract MM-DD HH:MM
306 |                     context = str(log[8])[:42] + "..." if len(str(log[8])) > 42 else str(log[8])  # Truncate long context
307 |                     row = [
308 |                         str(log[0])[:4],          # ID
309 |                         time_str,                 # Time
310 |                         str(log[5])[:6],          # Prob
311 |                         str(log[4])[:6],          # Type
312 |                         context                   # Truncated context
313 |                     ]
314 |                     table.append(" | ".join([
315 |                         f"{str(cell):<4}" if i == 0 else  # ID
316 |                         f"{str(cell):<12}" if i == 1 else  # Time
317 |                         f"{str(cell):<6}" if i in [2, 3] else  # Prob and Type
318 |                         f"{str(cell):<45}"  # Context
319 |                         for i, cell in enumerate(row)
320 |                     ]))
321 |                 
322 |                 return [types.TextContent(type="text", text="\n".join(table))]
323 |                 
324 |             except sqlite3.Error as e:
325 |                 return [types.TextContent(type="text", text=f"Database error: {str(e)}")]
326 |             except Exception as e:
327 |                 return [types.TextContent(type="text", text=f"Error: {str(e)}")]
328 |         
329 |         elif name == "log-query":
330 |             # Extract log fields from the tool arguments
331 |             session_id = arguments.get("session_id", "")
332 |             user_id = arguments.get("user_id", "")
333 |             interaction_type = arguments.get("interaction_type", "")
334 |             probability_class = arguments.get("probability_class", "")
335 |             message_content = arguments.get("message_content", "")
336 |             response_content = arguments.get("response_content", "")
337 |             context_summary = arguments.get("context_summary", "")
338 |             reasoning = arguments.get("reasoning", "")
339 |             
340 |             success = db.add_log(
341 |                 session_id=session_id,
342 |                 user_id=user_id,
343 |                 interaction_type=interaction_type,
344 |                 probability_class=probability_class,
345 |                 message_content=message_content,
346 |                 response_content=response_content,
347 |                 context_summary=context_summary,
348 |                 reasoning=reasoning
349 |             )
350 |             
351 |             return [types.TextContent(
352 |                 type="text",
353 |                 text="Log entry added successfully" if success else "Failed to add log entry"
354 |             )]
355 |         
356 |         elif name == "append_insight":
357 |             if not arguments or "insight" not in arguments:
358 |                 raise ValueError("Missing insight argument")
359 | 
360 |             db.insights.append(arguments["insight"])
361 |             _ = db._synthesize_memo()
362 | 
363 |             # Notify clients that the memo resource has changed
364 |             await server.request_context.session.send_resource_updated(AnyUrl("memo://insights"))
365 | 
366 |             return [types.TextContent(type="text", text="Insight added to memo")]
367 | 
368 |         if not arguments:
369 |             raise ValueError("Missing arguments")
370 | 
371 |         if name == "write_query":
372 |             if arguments["query"].strip().upper().startswith("SELECT"):
373 |                 raise ValueError("SELECT queries are not allowed for write_query")
374 |             results = db._execute_query(arguments["query"])
375 |             return [types.TextContent(type="text", text=str(results))]
376 | 
377 |         elif name == "create_table":
378 |             if not arguments["query"].strip().upper().startswith("CREATE TABLE"):
379 |                 raise ValueError("Only CREATE TABLE statements are allowed")
380 |             db._execute_query(arguments["query"])
381 |             return [types.TextContent(type="text", text="Table created successfully")]
382 | 
383 |         else:
384 |             raise ValueError(f"Unknown tool: {name}")
385 | 
386 |     except sqlite3.Error as e:
387 |         return [types.TextContent(type="text", text=f"Database error: {str(e)}")]
388 |     except Exception as e:
389 |         return [types.TextContent(type="text", text=f"Error: {str(e)}")]
390 | 
391 | 
392 | async def main():
393 |     # Run the server using stdin/stdout streams
394 |     async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
395 |         await server.run(
396 |             read_stream,
397 |             write_stream,
398 |             InitializationOptions(
399 |                 server_name="mcp-variance-log",
400 |                 server_version="0.1.0",
401 |                 capabilities=server.get_capabilities(
402 |                     notification_options=NotificationOptions(),
403 |                     experimental_capabilities={},
404 |                 ),
405 |             ),
406 |         )
```