# Directory Structure

```
├── .gitignore
├── assets
│   └── mcp_logs.png
├── Dockerfile
├── LICENSE
├── package-lock.json
├── package.json
├── README.md
├── smithery.yaml
├── src
│   └── index.ts
└── tsconfig.json
```

# Files

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
build
node_modules
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
# MCP Server: Analyze & Debug MCP Logs

[![smithery badge](https://smithery.ai/badge/@klara-research/MCP-Analyzer)](https://smithery.ai/server/@klara-research/MCP-Analyzer)

<div align="center">
  <img src="assets/mcp_logs.png" width="400">
  
  <br>
  <br>
  <br>
  🔍 <b>Read logs from standard locations across all platforms</b>
  <br>
  <br>
  🔎 <b>Filter, paginate, and analyze large log collections</b>
  <br>
  <br>
</div>

## 🎯 Overview

MCP Log Reader is a specialized MCP server that helps you analyze and debug Model Context Protocol logs. It provides Claude with direct access to log files, making it easy to troubleshoot MCP integrations and understand how Claude interacts with your tools.

- **Multi-platform Support**: Works on macOS, Windows, and Linux with platform-specific log paths (default locations sketched below)
- **Smart Filtering**: Find specific log entries with case-insensitive text search
- **Paginated Browsing**: Navigate large log collections efficiently
- **Size Management**: Handles large log files with intelligent truncation
- **Seamless Claude Integration**: Works directly with Claude Desktop
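
By default the server reads from the standard Claude Desktop log directory for your platform. A minimal sketch of the lookup, mirroring `src/index.ts` (the function name is illustrative):

```typescript
import os from "node:os";
import path from "node:path";

// Default Claude Desktop log directories used by this server.
function defaultLogDir(): string {
  const home = os.homedir();
  switch (process.platform) {
    case "darwin":
      return path.join(home, "Library/Logs/Claude");          // macOS
    case "win32":
      return path.join(home, "AppData/Roaming/Claude/logs");  // Windows
    default:
      return path.join(home, ".config/Claude/logs");          // Linux and others
  }
}
```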

## 🚀 Quick Start

### Installing via Smithery

To install MCP Log Reader for Claude Desktop automatically via [Smithery](https://smithery.ai/server/@klara-research/MCP-Analyzer):

```bash
npx -y @smithery/cli install @klara-research/MCP-Analyzer --client claude
```

### Installing Manually

Install directly from GitHub:
```bash
# Clone the repository
git clone https://github.com/klara-research/MCP-Analyzer.git
cd MCP-Analyzer

# Install dependencies
npm i
```

Build and run:
```bash
# Compile TypeScript
npx tsc

# Optionally start the server directly (it communicates over stdio)
node build/index.js
```

## 🔌 Connecting to Claude

Add the server to your Claude Desktop configuration:

```json
{
  "mcpServers": {
    "log-reader": {
      "command": "node",
      "args": [
        "/absolute/path/MCP-Analyzer/build"
      ]
    }
  }
}
```

Then restart Claude Desktop.

## 📋 Available Parameters

The log reader supports these parameters:

| Parameter | Description | Default |
|-----------|-------------|---------|
| `lines` | Number of lines to read from each log file | 100 |
| `filter` | Text to filter log entries by (case-insensitive) | "" |
| `customPath` | Custom path to log directory | OS-specific |
| `fileLimit` | Maximum number of files to read per page | 5 |
| `page` | Page number for pagination | 1 |
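
Defaults are applied server-side when a parameter is omitted. A filled-in argument object for the `read_mcp_logs` tool might look like this (shape only, matching the schema above; how the arguments reach the server depends on your MCP client):

```typescript
// Example arguments for the read_mcp_logs tool (values follow the table above).
const exampleArgs = {
  lines: 50,       // tail the last 50 lines of each log file
  filter: "error", // keep only lines containing "error" (case-insensitive)
  customPath: "",  // empty string -> use the OS-specific default directory
  fileLimit: 5,    // at most 5 files per page
  page: 1,         // first page, newest files first
};
```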

## 💡 Example Usage

Ask Claude to use the log reader tool:

```
Can you check my MCP logs for any connection errors in the last day?
```

Or with specific parameters:

```
Can you look through MCP logs with filter="error" and lines=50 to find initialization issues?
```

## ⚙️ How It Works

1. The server automatically detects your OS and finds the appropriate log directory
2. It locates all MCP log files and sorts them by modification time (newest first)
3. The requested page of log files is retrieved based on pagination settings (see the sketch after this list)
4. Files are processed with size limits to prevent overwhelming responses
5. Filtered content is returned in a structured format with pagination details
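
Steps 2–3 boil down to a sort and a slice; this sketch is consistent with the implementation in `src/index.ts` (the helper name and interface are illustrative):

```typescript
// Pick the log files for one page, newest first.
interface LogFileInfo {
  path: string;
  mtime: Date;
}

function pageOfLogFiles(files: LogFileInfo[], page: number, fileLimit: number): string[] {
  const sorted = [...files].sort((a, b) => b.mtime.getTime() - a.mtime.getTime());
  const startIndex = (page - 1) * fileLimit;
  return sorted.slice(startIndex, startIndex + fileLimit).map((f) => f.path);
}
```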

## 📄 License

MIT License

```

--------------------------------------------------------------------------------
/tsconfig.json:
--------------------------------------------------------------------------------

```json
{
    "compilerOptions": {
      "target": "ES2022",
      "module": "Node16",
      "moduleResolution": "Node16",
      "outDir": "./build",
      "rootDir": "./src",
      "strict": true,
      "esModuleInterop": true,
      "skipLibCheck": true,
      "forceConsistentCasingInFileNames": true
    },
    "include": ["src/**/*"],
    "exclude": ["node_modules"]
  }
```

--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------

```dockerfile
# Generated by https://smithery.ai. See: https://smithery.ai/docs/config#dockerfile
# Use Node LTS on Alpine
FROM node:lts-alpine AS builder

# Set working directory
WORKDIR /app

# Copy dependency definitions
COPY package.json package-lock.json tsconfig.json ./
# Copy source
COPY src ./src
COPY assets ./assets

# Install dependencies (skip prepare scripts), then build
RUN npm ci --ignore-scripts
RUN npm run build

# Final image
FROM node:lts-alpine AS runtime
WORKDIR /app

# Copy built files and assets
COPY --from=builder /app/build ./build
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/assets ./assets

# Expose any needed port (none by default)

# Entry point
ENTRYPOINT ["node", "build/index.js"]

```

--------------------------------------------------------------------------------
/package.json:
--------------------------------------------------------------------------------

```json
{
    "name": "mcp-log-debug",
    "version": "0.1.0",
    "description": "A Model Context Protocol server to retrieve MCP logs for debugging",
    "private": true,
    "type": "module",
    "bin": {
        "mcp-server": "./build/index.js"
    },
    "files": [
        "build"
    ],
    "scripts": {
        "build": "tsc && node -e \"require('fs').chmodSync('build/index.js', '755')\"",
        "prepare": "npm run build",
        "watch": "tsc --watch",
        "inspector": "npx @modelcontextprotocol/inspector build/index.js"
    },
    "dependencies": {
        "@modelcontextprotocol/sdk": "0.6.0",
        "axios": "^1.8.4",
        "glob": "^11.0.1",
        "node-fetch": "^3.3.2",
        "replicate": "^1.0.1",
        "screenshot-desktop": "^1.15.1"
    },
    "devDependencies": {
        "@types/node": "^20.11.24",
        "@types/screenshot-desktop": "^1.12.3",
        "typescript": "^5.3.3"
    }
}

```

--------------------------------------------------------------------------------
/smithery.yaml:
--------------------------------------------------------------------------------

```yaml
# Smithery configuration file: https://smithery.ai/docs/config#smitheryyaml

startCommand:
  type: stdio
  configSchema:
    # JSON Schema defining the configuration options for the MCP.
    type: object
    properties:
      lines:
        type: number
        default: 100
        description: Number of lines to read from each log file
      filter:
        type: string
        default: ""
        description: Text to filter log entries (case-insensitive)
      customPath:
        type: string
        default: ""
        description: Custom path to log directory
      fileLimit:
        type: number
        default: 5
        description: Maximum number of files per page
      page:
        type: number
        default: 1
        description: Page number for pagination
  commandFunction:
    # A JS function that produces the CLI command based on the given config to start the MCP on stdio.
    |-
    (config) => ({ command: 'node', args: ['build/index.js'] })
  exampleConfig:
    lines: 50
    filter: error
    customPath: /var/log
    fileLimit: 5
    page: 1

```

--------------------------------------------------------------------------------
/src/index.ts:
--------------------------------------------------------------------------------

```typescript
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { 
  CallToolRequestSchema, 
  ErrorCode, 
  ListToolsRequestSchema, 
  McpError 
} from "@modelcontextprotocol/sdk/types.js";
import { readFile, readdir } from "node:fs/promises";
import path from "path";
import fs from "fs";
import os from "os";
import { glob } from "glob";

// Function to save logs safely without interfering with JSON-RPC
async function saveLog(message: string) {
  try {
    const logPath = path.join(process.cwd(), "mcp_debug.log");
    await fs.promises.appendFile(logPath, `${new Date().toISOString()}: ${message}\n`);
  } catch (error) {
    // Ignore errors in logging
  }
}

const server = new Server({
    name: "mcp-server",
    version: "1.0.0",
}, {
    capabilities: {
        tools: {}
    }
});

server.setRequestHandler(ListToolsRequestSchema, async () => {
    return {
        tools: [
            {
                name: "read_mcp_logs",
                description: "Read MCP logs from the standard location",
                inputSchema: {
                    type: "object",
                    properties: {
                        lines: {
                            type: "number",
                            description: "Number of lines to read from the end of each log file (default: 100)"
                        },
                        filter: {
                            type: "string",
                            description: "Optional text to filter log entries by (case-insensitive)"
                        },
                        customPath: {
                            type: "string", 
                            description: "Optional custom path to log directory (default is system-specific)"
                        },
                        fileLimit: {
                            type: "number",
                            description: "Maximum number of files to read per page (default: 5)"
                        },
                        page: {
                            type: "number",
                            description: "Page number for pagination (default: 1)"
                        }
                    }
                }
            }
        ]
    };
});

server.setRequestHandler(CallToolRequestSchema, async (request) => {
    if (request.params.name === "read_mcp_logs") {
        const args = request.params.arguments || {};
        const { 
            lines = 100, 
            filter = "", 
            customPath,
            fileLimit = 5, // Limit number of files to process
            page = 1       // For pagination
        } = args as { 
            lines?: number, 
            filter?: string,
            customPath?: string,
            fileLimit?: number,
            page?: number
        };

        try {
            // Get default log directory based on operating system
            let logDir: string;
            
            if (customPath) {
                logDir = customPath;
            } else {
                const homedir = os.homedir();
                
                if (process.platform === 'darwin') {
                    // macOS log path
                    logDir = path.join(homedir, 'Library/Logs/Claude');
                } else if (process.platform === 'win32') {
                    // Windows log path
                    logDir = path.join(homedir, 'AppData/Roaming/Claude/logs');
                } else {
                    // Linux/other OS log path (might need adjustment)
                    logDir = path.join(homedir, '.config/Claude/logs');
                }
            }
            
            await saveLog(`Looking for MCP logs in: ${logDir}`);
            
            let allLogFiles: string[] = [];
            try {
                // Use glob to find all mcp log files
                allLogFiles = await glob(`${logDir}/mcp*.log`);
                
                // If no files found, try a more general pattern as fallback
                if (allLogFiles.length === 0) {
                    allLogFiles = await glob(`${logDir}/*.log`);
                }
            } catch (globError) {
                // If glob fails, try using readdir as a fallback
                const dirFiles = await readdir(logDir);
                allLogFiles = dirFiles
                    .filter(file => file.startsWith('mcp') && file.endsWith('.log'))
                    .map(file => path.join(logDir, file));
                
                // If still no files, try any log file
                if (allLogFiles.length === 0) {
                    allLogFiles = dirFiles
                        .filter(file => file.endsWith('.log'))
                        .map(file => path.join(logDir, file));
                }
            }
            
            // Sort files by modification time (newest first)
            const filesWithStats = await Promise.all(
                allLogFiles.map(async (file) => {
                    const stats = await fs.promises.stat(file);
                    return { 
                        path: file, 
                        mtime: stats.mtime 
                    };
                })
            );
            
            filesWithStats.sort((a, b) => b.mtime.getTime() - a.mtime.getTime());
            
            // Paginate the files
            const totalFiles = filesWithStats.length;
            const startIndex = (page - 1) * fileLimit;
            const endIndex = Math.min(startIndex + fileLimit, totalFiles);
            
            // Get the files for the current page
            const logFiles = filesWithStats
                .slice(startIndex, endIndex)
                .map(file => file.path);
            
            if (logFiles.length === 0) {
                return {
                    toolResult: {
                        success: false,
                        message: `No log files found in ${logDir}`,
                        logDirectory: logDir
                    }
                };
            }
            
            const results: Record<string, string> = {};
            
            // Process each log file - with size limiting
            const maxBytesPerFile = 100 * 1024; // 100KB per file max
            const maxTotalBytes = 500 * 1024; // 500KB total max
            let totalSize = 0;
            
            for (const logFile of logFiles) {
                try {
                    // Check if we've already exceeded total max size
                    if (totalSize >= maxTotalBytes) {
                        const filename = path.basename(logFile);
                        results[filename] = "[Log content skipped due to total size limits]";
                        continue;
                    }
                    
                    const filename = path.basename(logFile);
                    const content = await readFile(logFile, 'utf8');
                    
                    // Split content into lines
                    let logLines = content.split(/\r?\n/);
                    
                    // Apply filter if provided
                    if (filter) {
                        const filterLower = filter.toLowerCase();
                        logLines = logLines.filter(line => 
                            line.toLowerCase().includes(filterLower)
                        );
                    }
                    
                    // Get the specified number of lines from the end
                    const selectedLines = logLines.slice(-lines);
                    const selectedContent = selectedLines.join('\n');
                    
                    // Check if this file would exceed per-file limit
                    if (Buffer.from(selectedContent).length > maxBytesPerFile) {
                        // Take just enough lines to stay under the limit
                        let truncatedContent = '';
                        let truncatedLines = [];
                        for (let i = selectedLines.length - 1; i >= 0; i--) {
                            const newLine = selectedLines[i] + '\n';
                            if (Buffer.from(newLine + truncatedContent).length <= maxBytesPerFile) {
                                truncatedLines.unshift(selectedLines[i]);
                                truncatedContent = truncatedLines.join('\n');
                            } else {
                                break;
                            }
                        }
                        
                        results[filename] = '[Content truncated due to size limits]\n' + truncatedContent;
                        totalSize += Buffer.from(results[filename]).length;
                    } else {
                        // Store the results if under limit
                        results[filename] = selectedContent;
                        totalSize += Buffer.from(selectedContent).length;
                    }
                } catch (readError) {
                    const errorMessage = readError instanceof Error ? readError.message : String(readError);
                    results[path.basename(logFile)] = `Error reading log: ${errorMessage}`;
                    totalSize += Buffer.from(results[path.basename(logFile)]).length;
                }
            }
            
            return {
                toolResult: {
                    success: true,
                    message: `Read logs from ${logFiles.length} file(s)`,
                    logDirectory: logDir,
                    logs: results,
                    pagination: {
                        currentPage: page,
                        filesPerPage: fileLimit,
                        totalFiles: totalFiles,
                        totalPages: Math.ceil(totalFiles / fileLimit),
                        hasNextPage: endIndex < totalFiles,
                        hasPreviousPage: page > 1
                    }
                }
            };
        } catch (error) {
            const errorMessage = error instanceof Error ? error.message : String(error);
            await saveLog(`Error reading MCP logs: ${errorMessage}`);
            
            throw new McpError(ErrorCode.InternalError, `Failed to read MCP logs: ${errorMessage}`);
        }
    }
    
    throw new McpError(ErrorCode.MethodNotFound, "Tool not found");
});

// Connect only after all request handlers are registered.
const transport = new StdioServerTransport();
await server.connect(transport);
```