# Directory Structure

```
├── .gitignore
├── assets
│   └── mcp_logs.png
├── Dockerfile
├── LICENSE
├── package-lock.json
├── package.json
├── README.md
├── smithery.yaml
├── src
│   └── index.ts
└── tsconfig.json
```

# Files

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
1 | build
2 | node_modules
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
  1 | # MCP Server: Analyze & Debug MCP Logs
  2 | 
  3 | [![smithery badge](https://smithery.ai/badge/@klara-research/MCP-Analyzer)](https://smithery.ai/server/@klara-research/MCP-Analyzer)
  4 | 
  5 | <div align="center">
  6 |   <img src="assets/mcp_logs.png" width="400">
  7 |   
  8 |   <br>
  9 |   <br>
 10 |   <br>
 11 |   🔍 <b>Read logs from standard locations across all platforms</b>
 12 |   <br>
 13 |   <br>
 14 |   🔎 <b>Filter, paginate, and analyze large log collections</b>
 15 |   <br>
 16 |   <br>
 17 | </div>
 18 | 
 19 | ## 🎯 Overview
 20 | 
 21 | MCP Log Reader is a specialized MCP server that helps you analyze and debug Model Context Protocol logs. It provides Claude with direct access to log files, making it easy to troubleshoot MCP integrations and understand how Claude interacts with your tools.
 22 | 
 23 | - **Multi-platform Support**: Works on macOS, Windows, and Linux with platform-specific log paths
 24 | - **Smart Filtering**: Find specific log entries with case-insensitive text search
 25 | - **Paginated Browsing**: Navigate large log collections efficiently
 26 | - **Size Management**: Handles large log files with intelligent truncation
 27 | - **Seamless Claude Integration**: Works directly with Claude Desktop
 28 | 
 29 | ## 🚀 Quick Start
 30 | 
 31 | ### Installing via Smithery
 32 | 
 33 | To install MCP Log Reader for Claude Desktop automatically via [Smithery](https://smithery.ai/server/@klara-research/MCP-Analyzer):
 34 | 
 35 | ```bash
 36 | npx -y @smithery/cli install @klara-research/MCP-Analyzer --client claude
 37 | ```
 38 | 
 39 | ### Installing Manually
 40 | 
 41 | Install directly from GitHub:
 42 | ```bash
 43 | # Clone the repository
 44 | git clone https://github.com/klara-research/MCP-Analyzer.git
 45 | cd MCP-Analyzer
 46 | 
 47 | # Install dependencies
 48 | npm i
 49 | ```
 50 | 
 51 | Build the server:
 52 | ```bash
 53 | # Compile TypeScript
 54 | npx tsc
 55 | ```
 56 | 
 57 | ## 🔌 Connecting to Claude
 58 | 
 59 | Add the server to your Claude Desktop configuration:
 60 | 
 61 | ```json
 62 | {
 63 |   "mcpServers": {
 64 |     "log-reader": {
 65 |       "command": "node",
 66 |       "args": [
 67 |         "/absolute/path/MCP-Analyzer/build/index.js"
 68 |       ]
 69 |     }
 70 |   }
 71 | }
 72 | ```
 73 | 
 74 | Then restart Claude Desktop.
 75 | 
 76 | ## 📋 Available Parameters
 77 | 
 78 | The log reader supports these parameters:
 79 | 
 80 | | Parameter | Description | Default |
 81 | |-----------|-------------|---------|
 82 | | `lines` | Number of lines to read from the end of each log file | 100 |
 83 | | `filter` | Text to filter log entries by (case-insensitive) | "" |
 84 | | `customPath` | Custom path to log directory | OS-specific |
 85 | | `fileLimit` | Maximum number of files to read per page | 5 |
 86 | | `page` | Page number for pagination | 1 |
 87 | 
 88 | ## 💡 Example Usage
 89 | 
 90 | Ask Claude to use the log reader tool:
 91 | 
 92 | ```
 93 | Can you check my MCP logs for any connection errors in the last day?
 94 | ```
 95 | 
 96 | Or with specific parameters:
 97 | 
 98 | ```
 99 | Can you look through MCP logs with filter="error" and lines=50 to find initialization issues?
100 | ```
101 | 
102 | ## ⚙️ How It Works
103 | 
104 | 1. The server automatically detects your OS and finds the appropriate log directory
105 | 2. It locates all MCP log files and sorts them by modification time (newest first)
106 | 3. The requested page of log files is retrieved based on pagination settings
107 | 4. Files are processed with size limits to prevent overwhelming responses
108 | 5. Filtered content is returned in a structured format with pagination details
109 | 
110 | ## 📄 License
111 | 
112 | MIT License
113 | 
```
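
The README's parameter table maps directly onto the `arguments` object of an MCP `tools/call` request. As a rough, hypothetical illustration (Claude Desktop builds this envelope itself, so treat this as a sketch of the generic MCP wire format rather than anything documented in the repository):

```typescript
// Hypothetical example: the JSON-RPC message an MCP client would send on stdio
// to invoke the tool. Only the tool name and argument names come from this repo;
// the envelope is the generic MCP "tools/call" shape.
const readLogsRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "read_mcp_logs",
    arguments: {
      lines: 50,        // last 50 lines of each file (default 100)
      filter: "error",  // case-insensitive substring filter
      fileLimit: 5,     // files per page (default 5)
      page: 1,          // page of files, newest first
    },
  },
};

console.log(JSON.stringify(readLogsRequest));
```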

--------------------------------------------------------------------------------
/tsconfig.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |     "compilerOptions": {
 3 |       "target": "ES2022",
 4 |       "module": "Node16",
 5 |       "moduleResolution": "Node16",
 6 |       "outDir": "./build",
 7 |       "rootDir": "./src",
 8 |       "strict": true,
 9 |       "esModuleInterop": true,
10 |       "skipLibCheck": true,
11 |       "forceConsistentCasingInFileNames": true
12 |     },
13 |     "include": ["src/**/*"],
14 |     "exclude": ["node_modules"]
15 |   }
```

--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------

```dockerfile
 1 | # Generated by https://smithery.ai. See: https://smithery.ai/docs/config#dockerfile
 2 | # Use Node LTS on Alpine
 3 | FROM node:lts-alpine AS builder
 4 | 
 5 | # Set working directory
 6 | WORKDIR /app
 7 | 
 8 | # Copy dependency definitions
 9 | COPY package.json package-lock.json tsconfig.json ./
10 | # Copy source
11 | COPY src ./src
12 | COPY assets ./assets
13 | 
14 | # Install dependencies (skip prepare scripts), then build
15 | RUN npm ci --ignore-scripts
16 | RUN npm run build
17 | 
18 | # Final image
19 | FROM node:lts-alpine AS runtime
20 | WORKDIR /app
21 | 
22 | # Copy built files and assets
23 | COPY --from=builder /app/build ./build
24 | COPY --from=builder /app/node_modules ./node_modules
25 | COPY --from=builder /app/assets ./assets
26 | # Copy package.json so Node runs the ESM build output as a module ("type": "module")
27 | COPY --from=builder /app/package.json ./package.json
28 | 
29 | # Expose any needed port (none by default)
30 | 
31 | # Entry point
32 | ENTRYPOINT ["node", "build/index.js"]
33 | 
```

--------------------------------------------------------------------------------
/package.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |     "name": "mcp-log-debug",
 3 |     "version": "0.1.0",
 4 |     "description": "A Model Context Protocol server to retrieve MCP logs for debugging",
 5 |     "private": true,
 6 |     "type": "module",
 7 |     "bin": {
 8 |         "mcp-server": "./build/index.js"
 9 |     },
10 |     "files": [
11 |         "build"
12 |     ],
13 |     "scripts": {
14 |         "build": "tsc && node -e \"require('fs').chmodSync('build/index.js', '755')\"",
15 |         "prepare": "npm run build",
16 |         "watch": "tsc --watch",
17 |         "inspector": "npx @modelcontextprotocol/inspector build/index.js"
18 |     },
19 |     "dependencies": {
20 |         "@modelcontextprotocol/sdk": "0.6.0",
21 |         "axios": "^1.8.4",
22 |         "glob": "^11.0.1",
23 |         "node-fetch": "^3.3.2",
24 |         "replicate": "^1.0.1",
25 |         "screenshot-desktop": "^1.15.1"
26 |     },
27 |     "devDependencies": {
28 |         "@types/node": "^20.11.24",
29 |         "@types/screenshot-desktop": "^1.12.3",
30 |         "typescript": "^5.3.3"
31 |     }
32 | }
33 | 
```

--------------------------------------------------------------------------------
/smithery.yaml:
--------------------------------------------------------------------------------

```yaml
 1 | # Smithery configuration file: https://smithery.ai/docs/config#smitheryyaml
 2 | 
 3 | startCommand:
 4 |   type: stdio
 5 |   configSchema:
 6 |     # JSON Schema defining the configuration options for the MCP.
 7 |     type: object
 8 |     properties:
 9 |       lines:
10 |         type: number
11 |         default: 100
12 |         description: Number of lines to read from each log file
13 |       filter:
14 |         type: string
15 |         default: ""
16 |         description: Text to filter log entries (case-insensitive)
17 |       customPath:
18 |         type: string
19 |         default: ""
20 |         description: Custom path to log directory
21 |       fileLimit:
22 |         type: number
23 |         default: 5
24 |         description: Maximum number of files per page
25 |       page:
26 |         type: number
27 |         default: 1
28 |         description: Page number for pagination
29 |   commandFunction:
30 |     # A JS function that produces the CLI command based on the given config to start the MCP on stdio.
31 |     |-
32 |     (config) => ({ command: 'node', args: ['build/index.js'] })
33 |   exampleConfig:
34 |     lines: 50
35 |     filter: error
36 |     customPath: /var/log
37 |     fileLimit: 5
38 |     page: 1
39 | 
```

--------------------------------------------------------------------------------
/src/index.ts:
--------------------------------------------------------------------------------

```typescript
  1 | import { Server } from "@modelcontextprotocol/sdk/server/index.js";
  2 | import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
  3 | import { 
  4 |   CallToolRequestSchema, 
  5 |   ErrorCode, 
  6 |   ListToolsRequestSchema, 
  7 |   McpError 
  8 | } from "@modelcontextprotocol/sdk/types.js";
  9 | import { readFile, readdir } from "node:fs/promises";
 10 | import path from "path";
 11 | import fs from "fs";
 12 | import os from "os";
 13 | import { glob } from "glob";
 14 | 
 15 | // Function to save logs safely without interfering with JSON-RPC
 16 | async function saveLog(message: string) {
 17 |   try {
 18 |     const logPath = path.join(process.cwd(), "mcp_debug.log");
 19 |     await fs.promises.appendFile(logPath, `${new Date().toISOString()}: ${message}\n`);
 20 |   } catch (error) {
 21 |     // Ignore errors in logging
 22 |   }
 23 | }
 24 | 
 25 | const server = new Server({
 26 |     name: "mcp-server",
 27 |     version: "1.0.0",
 28 | }, {
 29 |     capabilities: {
 30 |         tools: {}
 31 |     }
 32 | });
 33 | 
 34 | const transport = new StdioServerTransport();
 35 | await server.connect(transport);
 36 | 
 37 | server.setRequestHandler(ListToolsRequestSchema, async () => {
 38 |     return {
 39 |         tools: [
 40 |             {
 41 |                 name: "read_mcp_logs",
 42 |                 description: "Read MCP logs from the standard location",
 43 |                 inputSchema: {
 44 |                     type: "object",
 45 |                     properties: {
 46 |                         lines: {
 47 |                             type: "number",
 48 |                             description: "Number of lines to read from the end of each log file (default: 100)"
 49 |                         },
 50 |                         filter: {
 51 |                             type: "string",
 52 |                             description: "Optional text to filter log entries by (case-insensitive)"
 53 |                         },
 54 |                         customPath: {
 55 |                             type: "string", 
 56 |                             description: "Optional custom path to log directory (default is system-specific)"
 57 |                         },
 58 |                         fileLimit: {
 59 |                             type: "number",
 60 |                             description: "Maximum number of files to read per page (default: 5)"
 61 |                         },
 62 |                         page: {
 63 |                             type: "number",
 64 |                             description: "Page number for pagination (default: 1)"
 65 |                         }
 66 |                     }
 67 |                 }
 68 |             }
 69 |         ]
 70 |     };
 71 | });
 72 | 
 73 | server.setRequestHandler(CallToolRequestSchema, async (request) => {
 74 |     if (request.params.name === "read_mcp_logs") {
 75 |         const args = request.params.arguments || {};
 76 |         const { 
 77 |             lines = 100, 
 78 |             filter = "", 
 79 |             customPath,
 80 |             fileLimit = 5, // Limit number of files to process
 81 |             page = 1       // For pagination
 82 |         } = args as { 
 83 |             lines?: number, 
 84 |             filter?: string,
 85 |             customPath?: string,
 86 |             fileLimit?: number,
 87 |             page?: number
 88 |         };
 89 | 
 90 |         try {
 91 |             // Get default log directory based on operating system
 92 |             let logDir: string;
 93 |             
 94 |             if (customPath) {
 95 |                 logDir = customPath;
 96 |             } else {
 97 |                 const homedir = os.homedir();
 98 |                 
 99 |                 if (process.platform === 'darwin') {
100 |                     // macOS log path
101 |                     logDir = path.join(homedir, 'Library/Logs/Claude');
102 |                 } else if (process.platform === 'win32') {
103 |                     // Windows log path
104 |                     logDir = path.join(homedir, 'AppData/Roaming/Claude/logs');
105 |                 } else {
106 |                     // Linux/other OS log path (might need adjustment)
107 |                     logDir = path.join(homedir, '.config/Claude/logs');
108 |                 }
109 |             }
110 |             
111 |             await saveLog(`Looking for MCP logs in: ${logDir}`);
112 |             
113 |             let allLogFiles: string[] = [];
114 |             try {
115 |                 // Use glob to find all mcp log files
116 |                 allLogFiles = await glob(`${logDir}/mcp*.log`);
117 |                 
118 |                 // If no files found, try a more general pattern as fallback
119 |                 if (allLogFiles.length === 0) {
120 |                     allLogFiles = await glob(`${logDir}/*.log`);
121 |                 }
122 |             } catch (globError) {
123 |                 // If glob fails, try using readdir as a fallback
124 |                 const dirFiles = await readdir(logDir);
125 |                 allLogFiles = dirFiles
126 |                     .filter(file => file.startsWith('mcp') && file.endsWith('.log'))
127 |                     .map(file => path.join(logDir, file));
128 |                 
129 |                 // If still no files, try any log file
130 |                 if (allLogFiles.length === 0) {
131 |                     allLogFiles = dirFiles
132 |                         .filter(file => file.endsWith('.log'))
133 |                         .map(file => path.join(logDir, file));
134 |                 }
135 |             }
136 |             
137 |             // Sort files by modification time (newest first)
138 |             const filesWithStats = await Promise.all(
139 |                 allLogFiles.map(async (file) => {
140 |                     const stats = await fs.promises.stat(file);
141 |                     return { 
142 |                         path: file, 
143 |                         mtime: stats.mtime 
144 |                     };
145 |                 })
146 |             );
147 |             
148 |             filesWithStats.sort((a, b) => b.mtime.getTime() - a.mtime.getTime());
149 |             
150 |             // Paginate the files
151 |             const totalFiles = filesWithStats.length;
152 |             const startIndex = (page - 1) * fileLimit;
153 |             const endIndex = Math.min(startIndex + fileLimit, totalFiles);
154 |             
155 |             // Get the files for the current page
156 |             const logFiles = filesWithStats
157 |                 .slice(startIndex, endIndex)
158 |                 .map(file => file.path);
159 |             
160 |             if (logFiles.length === 0) {
161 |                 return {
162 |                     toolResult: {
163 |                         success: false,
164 |                         message: `No log files found in ${logDir}`,
165 |                         logDirectory: logDir
166 |                     }
167 |                 };
168 |             }
169 |             
170 |             const results: Record<string, string> = {};
171 |             
172 |             // Process each log file - with size limiting
173 |             const maxBytesPerFile = 100 * 1024; // 100KB per file max
174 |             const maxTotalBytes = 500 * 1024; // 500KB total max
175 |             let totalSize = 0;
176 |             
177 |             for (const logFile of logFiles) {
178 |                 try {
179 |                     // Check if we've already exceeded total max size
180 |                     if (totalSize >= maxTotalBytes) {
181 |                         const filename = path.basename(logFile);
182 |                         results[filename] = "[Log content skipped due to total size limits]";
183 |                         continue;
184 |                     }
185 |                     
186 |                     const filename = path.basename(logFile);
187 |                     const content = await readFile(logFile, 'utf8');
188 |                     
189 |                     // Split content into lines
190 |                     let logLines = content.split(/\r?\n/);
191 |                     
192 |                     // Apply filter if provided
193 |                     if (filter) {
194 |                         const filterLower = filter.toLowerCase();
195 |                         logLines = logLines.filter(line => 
196 |                             line.toLowerCase().includes(filterLower)
197 |                         );
198 |                     }
199 |                     
200 |                     // Get the specified number of lines from the end
201 |                     const selectedLines = logLines.slice(-lines);
202 |                     const selectedContent = selectedLines.join('\n');
203 |                     
204 |                     // Check if this file would exceed per-file limit
205 |                     if (Buffer.from(selectedContent).length > maxBytesPerFile) {
206 |                         // Take just enough lines to stay under the limit
207 |                         let truncatedContent = '';
208 |                         let truncatedLines = [];
209 |                         for (let i = selectedLines.length - 1; i >= 0; i--) {
210 |                             const newLine = selectedLines[i] + '\n';
211 |                             if (Buffer.from(newLine + truncatedContent).length <= maxBytesPerFile) {
212 |                                 truncatedLines.unshift(selectedLines[i]);
213 |                                 truncatedContent = truncatedLines.join('\n');
214 |                             } else {
215 |                                 break;
216 |                             }
217 |                         }
218 |                         
219 |                         results[filename] = '[Content truncated due to size limits]\n' + truncatedContent;
220 |                         totalSize += Buffer.from(results[filename]).length;
221 |                     } else {
222 |                         // Store the results if under limit
223 |                         results[filename] = selectedContent;
224 |                         totalSize += Buffer.from(selectedContent).length;
225 |                     }
226 |                 } catch (readError) {
227 |                     const errorMessage = readError instanceof Error ? readError.message : String(readError);
228 |                     results[path.basename(logFile)] = `Error reading log: ${errorMessage}`;
229 |                     totalSize += Buffer.from(results[path.basename(logFile)]).length;
230 |                 }
231 |             }
232 |             
233 |             return {
234 |                 toolResult: {
235 |                     success: true,
236 |                     message: `Read logs from ${logFiles.length} file(s)`,
237 |                     logDirectory: logDir,
238 |                     logs: results,
239 |                     pagination: {
240 |                         currentPage: page,
241 |                         filesPerPage: fileLimit,
242 |                         totalFiles: totalFiles,
243 |                         totalPages: Math.ceil(totalFiles / fileLimit),
244 |                         hasNextPage: endIndex < totalFiles,
245 |                         hasPreviousPage: page > 1
246 |                     }
247 |                 }
248 |             };
249 |         } catch (error) {
250 |             const errorMessage = error instanceof Error ? error.message : String(error);
251 |             await saveLog(`Error reading MCP logs: ${errorMessage}`);
252 |             
253 |             throw new McpError(ErrorCode.InternalError, `Failed to read MCP logs: ${errorMessage}`);
254 |         }
255 |     }
256 |     
257 |     throw new McpError(ErrorCode.MethodNotFound, "Tool not found");
258 | });
```
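
The pagination behaviour in the handler above reduces to a few lines of arithmetic. The following standalone sketch (a hypothetical helper, not part of the repository) reproduces the page-selection math and the metadata returned in `toolResult.pagination`, assuming the file list is already sorted newest-first:

```typescript
// Standalone sketch of the pagination arithmetic used by read_mcp_logs.
// Not part of this repository; shown only to illustrate the handler's logic.
interface Pagination {
  currentPage: number;
  filesPerPage: number;
  totalFiles: number;
  totalPages: number;
  hasNextPage: boolean;
  hasPreviousPage: boolean;
}

function paginate<T>(sortedFiles: T[], page = 1, fileLimit = 5): { items: T[]; pagination: Pagination } {
  const totalFiles = sortedFiles.length;
  const startIndex = (page - 1) * fileLimit;                     // first file on this page
  const endIndex = Math.min(startIndex + fileLimit, totalFiles); // one past the last file
  return {
    items: sortedFiles.slice(startIndex, endIndex),
    pagination: {
      currentPage: page,
      filesPerPage: fileLimit,
      totalFiles,
      totalPages: Math.ceil(totalFiles / fileLimit),
      hasNextPage: endIndex < totalFiles,
      hasPreviousPage: page > 1,
    },
  };
}

// With 12 log files, page 2 at the default fileLimit of 5 selects files 6-10.
const demo = paginate(Array.from({ length: 12 }, (_, i) => `mcp-server-${i + 1}.log`), 2, 5);
console.log(demo.items, demo.pagination); // totalPages: 3, hasNextPage: true, hasPreviousPage: true
```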