# Directory Structure

```
├── .gitignore
├── .python-version
├── data
│   └── varlog.db
├── LICENSE
├── pyproject.toml
├── README.md
├── src
│   └── mcp_variance_log
│       ├── __init__.py
│       ├── db_utils.py
│       └── server.py
└── uv.lock
```

# Files

--------------------------------------------------------------------------------
/.python-version:
--------------------------------------------------------------------------------

```
3.10

```

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
# Python-generated files
__pycache__/
*.py[oc]
build/
dist/
wheels/
*.egg-info

# Virtual environments
.venv

# SQLite database files
data/varlog.db

```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
# MCP Variance Log

Agentic tool that looks for statistical variations in conversation structure and logs unusual events to a SQLite database. Built using the Model Context Protocol (MCP), this system is designed to be used with Claude Desktop or other MCP-compatible clients.

<a href="https://glama.ai/mcp/servers/acqq8twrk6"><img width="380" height="200" src="https://glama.ai/mcp/servers/acqq8twrk6/badge" alt="Variance Log MCP server" /></a>

## Requirements

- Python ≥3.10
- MCP ≥1.2.0
- SQLite3
- UV (Python package installer and environment manager)

## Quick Start

```bash
# Clone and install
git clone https://github.com/truaxki/mcp-variance-log.git
cd mcp-variance-log
uv pip install -e .
```

Add to Claude Desktop config (`%APPDATA%\Claude\claude_desktop_config.json` on Windows), replacing the `--directory` path with the location of your local clone:
```json
{
  "mcpServers": {
    "mcp-variance-log": {
      "command": "uv",
      "args": [
        "--directory",
        "C:\\Users\\username\\source\\repos\\mcp-variance-log",
        "run",
        "mcp-variance-log"
      ]
    }
  }
}
```

Add to Claude Desktop preferences:
```
Automatic Logging Process:
- Silently monitor each interaction
- When MEDIUM or LOW probability events occur, use this tool to log them
- Each log includes context and reasoning
- Logging happens without disrupting conversation flow
```

## Usage

### Automatic Logging Examples
```
prompt: "What if I told you I could lift 1000 lbs?"

{
  `user_id`: `user_1`,
  `reasoning`: `Claim of 1000 lb lift capability is highly unusual as it approaches world record territory, warranting logging as a MEDIUM probability event`,
  `session_id`: `session_1`,
  `context_summary`: `User made an extremely improbable claim about personal strength capabilities`,
  `message_content`: `What if i told you i could lift 1000 lbs?`,
  `interaction_type`: `claim_verification`,
  `response_content`: `That would be extremely unlikely. The current world record for weight lifted by a human (deadlift) is 1,104.5 lbs, set by Hafthor Björnsson in 2020 under very specific conditions. Only a handful of people in history have lifted close to 1,000 lbs.`,
  `probability_class`: `MEDIUM`
}
```

```
prompt: "What if I told you I'm a Data Scientist that can lift 1000 pounds?"

{
  `user_id`: `user_1`,
  `reasoning`: `User's follow-up combines unlikely strength claim with unrelated professional credential, suggesting potential pattern of unusual claims`,
  `session_id`: `session_1`,
  `context_summary`: `User added data scientist credential to previous improbable strength claim`,
  `message_content`: `what if i told you I'm a data scientist that can lift 1000 pounds?`,
  `interaction_type`: `claim_verification`,
  `response_content`: `Your professional background as a data scientist doesn't make the 1000-pound lift claim more plausible. Such a lift would still be near the all-time world record of 1,104.5 lbs.`,
  `probability_class`: `LOW`
}
```

### Dataframe view
```
prompt: "Read-logs from this morning"

ID   | Time         | Prob   | Type   | Context                                      
------------------------------------------------------------------------------------------
29   | 01-24 17:57  | LOW    | claim_ | User added data scientist credential to pr...
28   | 01-24 17:56  | MEDIUM | claim_ | User made an extremely improbable claim ab...
```

### Text-to-SQL
```
prompt: "Can you search the logs for entry 29?"

[{'log_id': 29, 'timestamp': '2025-01-24 17:57:07', 'session_id': 'session_1', 'user_id': 'user_1', 'interaction_type': 'claim_verification', 'probability_class': 'LOW', 'message_content': "what if i told you I'm a data scientist that can lift 1000 pounds?", 'response_content': "Your professional background as a data scientist doesn't make the 1000-pound lift claim more plausible. Such a lift would still be near the all-time world record of 1,104.5 lbs.", 'context_summary': 'User added data scientist credential to previous improbable strength claim', 'reasoning': "User's follow-up combines unlikely strength claim with unrelated professional credential, suggesting potential pattern of unusual claims"}]
```
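
Under the hood, a request like this reduces to a parameterized SELECT against the `chat_monitoring` table (see the schema below). A minimal sketch, using an in-memory database with a cut-down table rather than the real `data/varlog.db`:

```python
import sqlite3

# Cut-down stand-in for data/varlog.db so the sketch is self-contained
conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row  # rows become dict-like, matching the output above
conn.execute("CREATE TABLE chat_monitoring (log_id INTEGER PRIMARY KEY, context_summary TEXT)")
conn.execute("INSERT INTO chat_monitoring VALUES (29, 'User added data scientist credential')")

# The read_query tool runs the same kind of parameterized SELECT
rows = conn.execute("SELECT * FROM chat_monitoring WHERE log_id = ?", (29,)).fetchall()
print([dict(r) for r in rows])
```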


## Detailed Installation

1. Ensure Python 3.10+ and UV are installed.

Install UV using one of these methods:

```bash
# Using pip (recommended for Windows)
pip install uv

# Using the installation script (Linux/macOS)
curl -LsSf https://astral.sh/uv/install.sh | sh
```

2. Clone and install:
```bash
git clone https://github.com/truaxki/mcp-variance-log.git
cd mcp-variance-log
uv pip install -e .
```

3. Configure Claude Desktop:

Add to `claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "mcp-variance-log": {
      "command": "uv",
      "args": [
        "--directory",
        "PATH_TO_REPO/mcp-variance-log",
        "run",
        "mcp-variance-log"
      ]
    }
  }
}
```

Config locations:
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Linux: `~/.config/Claude/claude_desktop_config.json`

## Tools

### Monitoring
- `log-query`: Tracks conversation patterns
  - HIGH: Common interactions (not logged)
  - MEDIUM: Unusual patterns (logged)
  - LOW: Critical events (priority logged)

### Query
- `read-logs`: View logs with filtering
- `read_query`: Execute SELECT queries
- `write_query`: Execute INSERT/UPDATE/DELETE
- `create_table`: Create tables
- `list_tables`: Show all tables
- `describe_table`: Show table structure


The SQLite database is stored at `data/varlog.db`, relative to the installation directory.

### Schema

```sql
CREATE TABLE chat_monitoring (
    log_id INTEGER PRIMARY KEY AUTOINCREMENT,
    timestamp DATETIME DEFAULT CURRENT_TIMESTAMP,
    session_id TEXT NOT NULL,
    user_id TEXT NOT NULL,
    interaction_type TEXT NOT NULL,
    probability_class TEXT CHECK(probability_class IN ('HIGH', 'MEDIUM', 'LOW')),
    message_content TEXT NOT NULL,
    response_content TEXT NOT NULL,
    context_summary TEXT,
    reasoning TEXT
);
```
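
The `CHECK` constraint on `probability_class` rejects writes outside the three classes. As a quick sanity check, the constraint can be exercised against an in-memory copy of the column (stdlib `sqlite3` only; the table here is trimmed to the relevant fields):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE chat_monitoring ("
    "log_id INTEGER PRIMARY KEY AUTOINCREMENT, "
    "probability_class TEXT CHECK(probability_class IN ('HIGH', 'MEDIUM', 'LOW')))"
)
conn.execute("INSERT INTO chat_monitoring (probability_class) VALUES ('MEDIUM')")  # accepted
try:
    conn.execute("INSERT INTO chat_monitoring (probability_class) VALUES ('BOGUS')")
except sqlite3.IntegrityError as exc:
    print(f"rejected: {exc}")
```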

## Troubleshooting

1. Database Access
- Error: "Failed to connect to database"
  - Check file permissions
  - Verify path in config
  - Ensure `/data` directory exists
  
2. Installation Issues
- Error: "No module named 'mcp'"
  - Run: `uv pip install mcp>=1.2.0`
- Error: "UV command not found"
  - Install UV: `curl -LsSf https://astral.sh/uv/install.sh | sh`
  
3. Configuration
- Error: "Failed to start MCP server"
  - Verify config.json syntax
  - Check path separators (use \\ on Windows)
  - Ensure UV is in your system PATH

## Contributing

1. Fork the repository
2. Create feature branch
3. Submit pull request

## License

MIT

## Support

Issues: [GitHub Issues](https://github.com/truaxki/mcp-variance-log/issues)

```

--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------

```toml
[project]
name = "mcp-variance-log"
version = "0.1.0"
description = "Agentic tool that looks for statistical variations in conversation structure and logs unusual events to a SQLite database."
readme = "README.md"
requires-python = ">=3.10"
dependencies = [ "mcp>=1.2.0",]
[[project.authors]]
name = "truaxki"
email = "[email protected]"

[build-system]
requires = [ "hatchling",]
build-backend = "hatchling.build"

[project.scripts]
mcp-variance-log = "mcp_variance_log:main"

```

--------------------------------------------------------------------------------
/src/mcp_variance_log/__init__.py:
--------------------------------------------------------------------------------

```python
from .db_utils import LogDatabase
import asyncio

# Define default database path at package level
DEFAULT_DB_PATH = 'data/varlog.db'

# Initialize the database instance at package level
db = LogDatabase(DEFAULT_DB_PATH)

# Import server after db is initialized
from . import server

def main():
    """Main entry point for the package."""
    asyncio.run(server.main())

# Expose package-level imports and variables
__all__ = ['main', 'server', 'LogDatabase', 'db', 'DEFAULT_DB_PATH']
```

--------------------------------------------------------------------------------
/src/mcp_variance_log/db_utils.py:
--------------------------------------------------------------------------------

```python
import sqlite3
from datetime import datetime
from typing import Optional, Any
from pathlib import Path
from contextlib import closing
import logging

logger = logging.getLogger(__name__)

class LogDatabase:
    def __init__(self, db_path: str):
        """Initialize database connection.
        
        Args:
            db_path (str): Path to SQLite database file
        """
        self.db_path = str(Path(db_path).expanduser())
        Path(self.db_path).parent.mkdir(parents=True, exist_ok=True)
        self._init_database()
        self.insights: list[str] = []

    def _init_database(self):
        """Create the chat_monitoring table if it does not already exist.

        The database file is gitignored, so a fresh clone has no schema yet.
        """
        logger.debug("Initializing database")
        with closing(sqlite3.connect(self.db_path)) as conn:
            conn.execute(
                "CREATE TABLE IF NOT EXISTS chat_monitoring ("
                "log_id INTEGER PRIMARY KEY AUTOINCREMENT, "
                "timestamp DATETIME DEFAULT CURRENT_TIMESTAMP, "
                "session_id TEXT NOT NULL, user_id TEXT NOT NULL, "
                "interaction_type TEXT NOT NULL, "
                "probability_class TEXT CHECK(probability_class IN ('HIGH', 'MEDIUM', 'LOW')), "
                "message_content TEXT NOT NULL, response_content TEXT NOT NULL, "
                "context_summary TEXT, reasoning TEXT)"
            )
            conn.commit()

    def _execute_query(self, query: str, params: tuple | dict[str, Any] | None = None) -> list[dict[str, Any]]:
        """Execute a SQL query and return results as a list of dictionaries"""
        logger.debug(f"Executing query: {query}")
        try:
            with closing(sqlite3.connect(self.db_path)) as conn:
                conn.row_factory = sqlite3.Row
                with closing(conn.cursor()) as cursor:
                    if params:
                        cursor.execute(query, params)
                    else:
                        cursor.execute(query)

                    if query.strip().upper().startswith(('INSERT', 'UPDATE', 'DELETE', 'CREATE', 'DROP', 'ALTER')):
                        conn.commit()
                        affected = cursor.rowcount
                        logger.debug(f"Write query affected {affected} rows")
                        return [{"affected_rows": affected}]

                    results = [dict(row) for row in cursor.fetchall()]
                    logger.debug(f"Read query returned {len(results)} rows")
                    return results
        except Exception as e:
            logger.error(f"Database error executing query: {e}")
            raise

    def _synthesize_memo(self) -> str:
        """Synthesizes business insights into a formatted memo"""
        logger.debug(f"Synthesizing memo with {len(self.insights)} insights")
        if not self.insights:
            return "No business insights have been discovered yet."

        insights = "\n".join(f"- {insight}" for insight in self.insights)

        memo = "📊 Business Intelligence Memo 📊\n\n"
        memo += "Key Insights Discovered:\n\n"
        memo += insights

        if len(self.insights) > 1:
            memo += "\nSummary:\n"
            memo += f"Analysis has revealed {len(self.insights)} key business insights that suggest opportunities for strategic optimization and growth."

        logger.debug("Generated basic memo format")
        return memo

    def add_log(self, session_id: str, user_id: str, interaction_type: str, 
                probability_class: str, message_content: str, response_content: str,
                context_summary: str, reasoning: str) -> bool:
        """
        Add a new log entry to the database.
        
        Args:
            session_id (str): Unique identifier for the chat session
            user_id (str): Identifier for the user
            interaction_type (str): Type of interaction being monitored
            probability_class (str): Classification (HIGH, MEDIUM, LOW)
            message_content (str): The user's message content
            response_content (str): The system's response content
            context_summary (str): Summary of interaction context
            reasoning (str): Explanation for the classification
            
        Returns:
            bool: True if successful, False otherwise
        """
        try:
            with sqlite3.connect(self.db_path) as conn:
                cursor = conn.cursor()
                cursor.execute('''
                    INSERT INTO chat_monitoring (
                        session_id, user_id, interaction_type, probability_class,
                        message_content, response_content, context_summary, reasoning
                    )
                    VALUES (?, ?, ?, ?, ?, ?, ?, ?)
                ''', (session_id, user_id, interaction_type, probability_class,
                     message_content, response_content, context_summary, reasoning))
                return True
        except Exception as e:
            print(f"Error adding log: {e}")
            return False

    def get_logs(self, 
                 limit: int = 10,
                 start_date: Optional[datetime] = None,
                 end_date: Optional[datetime] = None,
                 full_details: bool = False) -> list:
        """
        Retrieve logs with optional filtering.
        
        Args:
            limit (int): Maximum number of logs to retrieve
            start_date (datetime, optional): Filter by start date
            end_date (datetime, optional): Filter by end date
            full_details (bool): If True, return all fields; if False, return only context summary
            
        Returns:
            list: List of log entries
        """
        query = "SELECT * FROM chat_monitoring"
        params = []
        conditions = []
        
        if start_date:
            conditions.append("timestamp >= ?")
            params.append(start_date)
            
        if end_date:
            conditions.append("timestamp <= ?")
            params.append(end_date)

        if conditions:
            query += " WHERE " + " AND ".join(conditions)

        query += " ORDER BY timestamp DESC LIMIT ?"
        params.append(limit)

        try:
            with sqlite3.connect(self.db_path) as conn:
                cursor = conn.cursor()
                cursor.execute(query, params)
                return cursor.fetchall()
        except sqlite3.Error as e:
            print(f"Database error: {str(e)}")
            return []
        except Exception as e:
            print(f"Error: {str(e)}")
            return []

    def clear_logs(self) -> bool:
        """
        Clear all logs from the database.
        
        Returns:
            bool: True if successful, False otherwise
        """
        try:
            with sqlite3.connect(self.db_path) as conn:
                cursor = conn.cursor()
                cursor.execute("DELETE FROM chat_monitoring")
                return True
        except Exception as e:
            print(f"Error clearing logs: {e}")
            return False
```

--------------------------------------------------------------------------------
/src/mcp_variance_log/server.py:
--------------------------------------------------------------------------------

```python
import sqlite3
import logging
from contextlib import closing
from pathlib import Path    
import asyncio
from datetime import datetime
from typing import Optional
from pydantic import AnyUrl
from typing import Any

from mcp.server.models import InitializationOptions
import mcp.types as types
from mcp.server import NotificationOptions, Server
import mcp.server.stdio
from .db_utils import LogDatabase
from . import DEFAULT_DB_PATH

# Initialize database connection
db = LogDatabase(DEFAULT_DB_PATH)

server = Server("mcp-variance-log")

@server.list_tools()
async def handle_list_tools() -> list[types.Tool]:
    """
    List available tools.
    Each tool specifies its arguments using JSON Schema validation.
    """
    return [
        types.Tool(
            name="log-query",
            description="""
                Conversation Variation analysis
                Continuously monitor our conversation and automatically log unusual or noteworthy interactions based on the following criteria:

                1. Probability Classifications:
                HIGH (Not Logged):
                - Common questions and responses
                - Standard technical inquiries
                - Regular clarifications
                - Normal conversation flow

                MEDIUM (Logged):
                - Unexpected but plausible technical issues
                - Unusual patterns in user behavior
                - Noteworthy insights or connections
                - Edge cases in normal usage
                - Uncommon but valid use cases

                LOW (Logged with Priority):
                - Highly unusual technical phenomena
                - Potentially problematic patterns
                - Critical edge cases
                - Unexpected system behaviors
                - Novel or unique use cases
            """,
            inputSchema={
                "type": "object",
                "properties": {
                    "session_id": {
                        "type": "string",
                        "description": """Unique identifier for the chat session.
                            Format: <date>_<user>_<sequence>
                            Example: 20240124_u1_001

                            Components:
                            - date: YYYYMMDD
                            - user: 'u' + user number
                            - sequence: 3-digit sequential number

                            Valid examples:
                            - 20240124_u1_001
                            - 20240124_u1_002
                            - 20240125_u2_001""",
                        "pattern": "^\\d{8}_u\\d+_\\d{3}$"  # Regex pattern to validate format
                    },
                    "user_id": {
                        "type": "string",
                        "description": "Identifier for the user"
                    },
                    "interaction_type": {
                        "type": "string",
                        "description": "Type of interaction being monitored"
                    },
                    "probability_class": {
                        "type": "string",
                        "enum": ["HIGH", "MEDIUM", "LOW"],
                        "description": "Classification of interaction probability"
                    },
                    "message_content": {
                        "type": "string",
                        "description": "The user's message content"
                    },
                    "response_content": {
                        "type": "string",
                        "description": "The system's response content"
                    },
                    "context_summary": {
                        "type": "string",
                        "description": "Summary of interaction context"
                    },
                    "reasoning": {
                        "type": "string",
                        "description": "Explanation for the probability classification"
                    }
                },
                "required": [
                    "session_id",
                    "user_id",
                    "interaction_type",
                    "probability_class",
                    "message_content",
                    "response_content",
                    "context_summary",
                    "reasoning"
                ]
            },
        ),
        types.Tool(
            name="read-logs",
            description="Retrieve logged conversation variations from the database.",
            inputSchema={
                "type": "object",
                "properties": {
                    "limit": {
                        "type": "integer",
                        "description": "Maximum number of logs to retrieve",
                        "default": 10,
                        "minimum": 1,
                        "maximum": 100
                    },
                    "start_date": {
                        "type": "string",
                        "description": "Filter logs after this date (ISO format YYYY-MM-DDTHH:MM:SS)"
                    },
                    "end_date": {
                        "type": "string",
                        "description": "Filter logs before this date (ISO format YYYY-MM-DDTHH:MM:SS)"
                    },
                    "full_details": {
                        "type": "boolean",
                        "description": "If true, show all fields; if false, show only context summaries",
                        "default": False
                    }
                },
                "required": ["limit"]
            }
        ),
        types.Tool(
            name="read_query",
            description="""Execute a SELECT query on the SQLite database

                Schema Reference:
                Table: chat_monitoring
                Fields:
                - log_id (INTEGER PRIMARY KEY)
                - timestamp (DATETIME)
                - session_id (TEXT)
                - user_id (TEXT)
                - interaction_type (TEXT)
                - probability_class (TEXT: HIGH, MEDIUM, LOW)
                - message_content (TEXT)
                - response_content (TEXT)
                - context_summary (TEXT)
                - reasoning (TEXT)

                Example:
                SELECT timestamp, probability_class, context_summary 
                FROM chat_monitoring 
                WHERE probability_class = 'LOW'
                LIMIT 5;
                """,
            inputSchema={
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "SELECT SQL query to execute"
                    }
                },
                "required": ["query"]
            }
        ),
        types.Tool(
            name="write_query",
            description="Execute an INSERT, UPDATE, or DELETE query",
            inputSchema={
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "Non-SELECT SQL query to execute"
                    }
                },
                "required": ["query"]
            }
        ),
        types.Tool(
            name="create_table",
            description="Create a new table in the SQLite database",
            inputSchema={
                "type": "object",
                "properties": {
                    "query": {"type": "string", "description": "CREATE TABLE SQL statement"},
                },
                "required": ["query"],
            },
        ),
        types.Tool(
            name="list_tables",
            description="List all tables in the database",
            inputSchema={
                "type": "object",
                "properties": {}
            }
        ),
        types.Tool(
            name="describe_table",
            description="Show structure of a specific table",
            inputSchema={
                "type": "object",
                "properties": {
                    "table_name": {
                        "type": "string",
                        "description": "Name of the table to describe"
                    }
                },
                "required": ["table_name"]
            }
        ),
        types.Tool(
            name="append_insight",
            description="Add a business insight to the memo",
            inputSchema={
                "type": "object",
                "properties": {
                    "insight": {"type": "string", "description": "Business insight discovered from data analysis"},
                },
                "required": ["insight"],
            },
        ),
    ]

@server.call_tool()
async def handle_call_tool(
    name: str,
    arguments: dict | None
) -> list[types.TextContent]:
    """Handle tool calls."""
    try:
        if name == "list_tables":
            results = db._execute_query(
                "SELECT name FROM sqlite_master WHERE type='table'"
            )
            return [types.TextContent(type="text", text=str(results))]

        elif name == "describe_table":
            if not arguments or "table_name" not in arguments:
                raise ValueError("Missing table_name argument")
            table_name = arguments["table_name"]
            # Get table creation SQL instead of using PRAGMA
            results = db._execute_query(
                "SELECT sql FROM sqlite_master WHERE type='table' AND name=?",
                (table_name,)
            )
            return [types.TextContent(type="text", text=str(results))]

        elif name == "read_query":
            if not arguments or "query" not in arguments:
                raise ValueError("Missing query argument")
            query = arguments["query"].strip()
            if not query.upper().startswith("SELECT"):
                raise ValueError("Only SELECT queries are allowed")
            results = db._execute_query(query)
            return [types.TextContent(type="text", text=str(results))]

        elif name == "read-logs":
            if not arguments:
                return [types.TextContent(type="text", text="No arguments provided")]
            
            limit = min(max(arguments.get("limit", 10), 1), 100)
            full_details = arguments.get("full_details", False)
            
            try:
                logs = db.get_logs(limit=limit, full_details=full_details)
                
                if not logs:
                    return [types.TextContent(type="text", text="No logs found")]
                
                # Create compact table header with adjusted widths
                header = ["ID", "Time", "Prob", "Type", "Context"]
                separator = "-" * 90  # Increased overall width
                table = [separator]
                table.append(" | ".join([
                    f"{h:<4}" if h == "ID" else
                    f"{h:<12}" if h == "Time" else
                    f"{h:<6}" if h == "Prob" or h == "Type" else
                    f"{h:<45}"  # Increased context width
                    for h in header
                ]))
                table.append(separator)
                
                # Create compact rows with adjusted widths
                for log in logs:
                    time_str = str(log[1])[5:16]  # Extract MM-DD HH:MM
                    context = str(log[8])[:42] + "..." if len(str(log[8])) > 42 else str(log[8])  # Increased context length
                    row = [
                        str(log[0])[:4],          # ID
                        time_str,                 # Time
                        str(log[5])[:6],          # Prob
                        str(log[4])[:6],          # Type
                        context                   # Truncated context
                    ]
                    table.append(" | ".join([
                        f"{str(cell):<4}" if i == 0 else  # ID
                        f"{str(cell):<12}" if i == 1 else  # Time
                        f"{str(cell):<6}" if i in [2, 3] else  # Prob and Type
                        f"{str(cell):<45}"  # Context
                        for i, cell in enumerate(row)
                    ]))
                
                return [types.TextContent(type="text", text="\n".join(table))]
                
            except sqlite3.Error as e:
                return [types.TextContent(type="text", text=f"Database error: {str(e)}")]
            except Exception as e:
                return [types.TextContent(type="text", text=f"Error: {str(e)}")]
        
        elif name == "log-query":
            if not arguments:
                raise ValueError("Missing arguments")
            session_id = arguments.get("session_id", "")
            user_id = arguments.get("user_id", "")
            interaction_type = arguments.get("interaction_type", "")
            probability_class = arguments.get("probability_class", "")
            message_content = arguments.get("message_content", "")
            response_content = arguments.get("response_content", "")
            context_summary = arguments.get("context_summary", "")
            reasoning = arguments.get("reasoning", "")
            
            success = db.add_log(
                session_id=session_id,
                user_id=user_id,
                interaction_type=interaction_type,
                probability_class=probability_class,
                message_content=message_content,
                response_content=response_content,
                context_summary=context_summary,
                reasoning=reasoning
            )
            
            return [types.TextContent(
                type="text",
                text="Log entry added successfully" if success else "Failed to add log entry"
            )]
        
        elif name == "append_insight":
            if not arguments or "insight" not in arguments:
                raise ValueError("Missing insight argument")

            db.insights.append(arguments["insight"])
            _ = db._synthesize_memo()

            # Notify clients that the memo resource has changed
            await server.request_context.session.send_resource_updated(AnyUrl("memo://insights"))

            return [types.TextContent(type="text", text="Insight added to memo")]

        if not arguments:
            raise ValueError("Missing arguments")

        if name == "write_query":
            if arguments["query"].strip().upper().startswith("SELECT"):
                raise ValueError("SELECT queries are not allowed for write_query")
            results = db._execute_query(arguments["query"])
            return [types.TextContent(type="text", text=str(results))]

        elif name == "create_table":
            if not arguments["query"].strip().upper().startswith("CREATE TABLE"):
                raise ValueError("Only CREATE TABLE statements are allowed")
            db._execute_query(arguments["query"])
            return [types.TextContent(type="text", text="Table created successfully")]

        else:
            raise ValueError(f"Unknown tool: {name}")

    except sqlite3.Error as e:
        return [types.TextContent(type="text", text=f"Database error: {str(e)}")]
    except Exception as e:
        return [types.TextContent(type="text", text=f"Error: {str(e)}")]


async def main():
    # Run the server using stdin/stdout streams
    async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
        await server.run(
            read_stream,
            write_stream,
            InitializationOptions(
                server_name="mcp-variance-log",
                server_version="0.1.0",
                capabilities=server.get_capabilities(
                    notification_options=NotificationOptions(),
                    experimental_capabilities={},
                ),
            ),
        )
```