# Directory Structure

```
├── .gitignore
├── Dockerfile
├── LICENSE
├── package.json
├── README.ja.md
├── README.md
├── smithery.yaml
├── src
│   └── index.ts
└── tsconfig.json
```

# Files

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
  1 | # image-mcp-server
  2 | 
  3 | [日本語の README](README.ja.md)
  4 | 
  5 | <a href="https://glama.ai/mcp/servers/@champierre/image-mcp-server">
  6 |   <img width="380" height="200" src="https://glama.ai/mcp/servers/@champierre/image-mcp-server/badge" alt="Image Analysis MCP Server" />
  7 | </a>
  8 | 
  9 | [![smithery badge](https://smithery.ai/badge/@champierre/image-mcp-server)](https://smithery.ai/server/@champierre/image-mcp-server)
 10 | An MCP server that receives image URLs or local file paths and analyzes image content using the GPT-4o-mini model.
 11 | 
 12 | ## Features
 13 | 
 14 | - Receives image URLs or local file paths as input and provides detailed analysis of the image content
 15 | - High-precision image recognition and description using the GPT-4o-mini model
 16 | - Image URL validity checking
 17 | - Image loading from local files and Base64 encoding
 18 | 
 19 | ## Installation
 20 | 
 21 | ### Installing via Smithery
 22 | 
 23 | To install Image Analysis Server for Claude Desktop automatically via [Smithery](https://smithery.ai/server/@champierre/image-mcp-server):
 24 | 
 25 | ```bash
 26 | npx -y @smithery/cli install @champierre/image-mcp-server --client claude
 27 | ```
 28 | 
 29 | ### Manual Installation
 30 | 
 31 | ```bash
 32 | # Clone the repository
 33 | git clone https://github.com/champierre/image-mcp-server.git # or your forked repository
 34 | cd image-mcp-server
 35 | 
 36 | # Install dependencies
 37 | npm install
 38 | 
 39 | # Compile TypeScript
 40 | npm run build
 41 | ```
 42 | 
 43 | ## Configuration
 44 | 
 45 | To use this server, you need an OpenAI API key. Set the following environment variable:
 46 | 
 47 | ```
 48 | OPENAI_API_KEY=your_openai_api_key
 49 | ```
 50 | 
 51 | ## MCP Server Configuration
 52 | 
 53 | To use with tools like Cline, add the following settings to your MCP server configuration file:
 54 | 
 55 | ### For Cline
 56 | 
 57 | Add the following to `cline_mcp_settings.json`:
 58 | 
 59 | ```json
 60 | {
 61 |   "mcpServers": {
 62 |     "image-analysis": {
 63 |       "command": "node",
 64 |       "args": ["/path/to/image-mcp-server/dist/index.js"],
 65 |       "env": {
 66 |         "OPENAI_API_KEY": "your_openai_api_key"
 67 |       }
 68 |     }
 69 |   }
 70 | }
 71 | ```
 72 | 
 73 | ### For Claude Desktop App
 74 | 
 75 | Add the following to `claude_desktop_config.json`:
 76 | 
 77 | ```json
 78 | {
 79 |   "mcpServers": {
 80 |     "image-analysis": {
 81 |       "command": "node",
 82 |       "args": ["/path/to/image-mcp-server/dist/index.js"],
 83 |       "env": {
 84 |         "OPENAI_API_KEY": "your_openai_api_key"
 85 |       }
 86 |     }
 87 |   }
 88 | }
 89 | ```
 90 | 
 91 | ## Usage
 92 | 
 93 | Once the MCP server is configured, the following tools become available:
 94 | 
 95 | - `analyze_image`: Receives an image URL and analyzes its content.
 96 | - `analyze_image_from_path`: Receives a local file path and analyzes its content.
 97 | 
 98 | ### Usage Examples
 99 | 
100 | **Analyzing from URL:**
101 | 
102 | ```
103 | Please analyze this image URL: https://example.com/image.jpg
104 | ```
105 | 
106 | **Analyzing from local file path:**
107 | 
108 | ```
109 | Please analyze this image: /path/to/your/image.jpg
110 | ```
111 | 
112 | ### Note: Specifying Local File Paths
113 | 
114 | When using the `analyze_image_from_path` tool, the AI assistant (client) must specify a **valid file path in the environment where this server is running**.
115 | 
116 | - **If the server is running on WSL:**
117 |   - If the AI assistant has a Windows path (e.g., `C:\...`), it needs to convert it to a WSL path (e.g., `/mnt/c/...`) before passing it to the tool.
118 |   - If the AI assistant has a WSL path, it can pass it as is.
119 | - **If the server is running on Windows:**
120 |   - If the AI assistant has a WSL path (e.g., `/home/user/...`), it needs to convert it to a UNC path (e.g., `\\wsl$\Distro\...`) before passing it to the tool.
121 |   - If the AI assistant has a Windows path, it can pass it as is.
122 | 
123 | **Path conversion is the responsibility of the AI assistant (or its execution environment).** The server will try to interpret the received path as is.
124 | 
125 | ### Note: Type Errors During Build
126 | 
127 | When running `npm run build`, you may see an error (TS7016) about missing TypeScript type definitions for the `mime-types` module.
128 | 
129 | ```
130 | src/index.ts:16:23 - error TS7016: Could not find a declaration file for module 'mime-types'. ...
131 | ```
132 | 
133 | This is a type checking error, and since the JavaScript compilation itself succeeds, it **does not affect the server's execution**. If you want to resolve this error, install the type definition file as a development dependency.
134 | 
135 | ```bash
136 | npm install --save-dev @types/mime-types
137 | # or
138 | yarn add --dev @types/mime-types
139 | ```
140 | 
141 | ## Development
142 | 
143 | ```bash
144 | # Run in development mode
145 | npm run dev
146 | ```
147 | 
148 | ## License
149 | 
150 | MIT
151 | 
```
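
The README's note on local file paths describes the Windows ↔ WSL conversion in prose only. Below is a minimal TypeScript sketch of that conversion, as a client might perform it before calling `analyze_image_from_path`; the helper names (`windowsToWslPath`, `wslToUncPath`) and the `Ubuntu` distro default are illustrative assumptions, not code from this repository.

```typescript
// Hypothetical helpers illustrating the conversion the README describes.
// The server itself does no conversion; it uses whatever path it receives.

/** Convert a Windows path (C:\Users\me\img.png) to a WSL path (/mnt/c/Users/me/img.png). */
function windowsToWslPath(windowsPath: string): string {
  const match = windowsPath.match(/^([A-Za-z]):[\\/](.*)$/);
  if (!match) return windowsPath; // already a POSIX-style path
  const [, drive, rest] = match;
  return `/mnt/${drive.toLowerCase()}/${rest.replace(/\\/g, '/')}`;
}

/** Convert a WSL path (/home/me/img.png) to a UNC path (\\wsl$\Ubuntu\home\me\img.png). */
function wslToUncPath(wslPath: string, distro = 'Ubuntu'): string {
  return `\\\\wsl$\\${distro}${wslPath.replace(/\//g, '\\')}`;
}

// Example: a client on Windows passing paths to a server running inside WSL, and vice versa.
console.log(windowsToWslPath('C:\\Users\\me\\Pictures\\cat.jpg')); // /mnt/c/Users/me/Pictures/cat.jpg
console.log(wslToUncPath('/home/me/cat.jpg'));                     // \\wsl$\Ubuntu\home\me\cat.jpg
```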

--------------------------------------------------------------------------------
/tsconfig.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "compilerOptions": {
 3 |     "target": "ES2020",
 4 |     "module": "NodeNext",
 5 |     "moduleResolution": "NodeNext",
 6 |     "esModuleInterop": true,
 7 |     "strict": true,
 8 |     "outDir": "dist",
 9 |     "declaration": true,
10 |     "sourceMap": true
11 |   },
12 |   "include": ["src/**/*"],
13 |   "exclude": ["node_modules", "dist"]
14 | }
15 | 
```

--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------

```dockerfile
 1 | # Generated by https://smithery.ai. See: https://smithery.ai/docs/config#dockerfile
 2 | FROM node:lts-alpine
 3 | 
 4 | # Set working directory
 5 | WORKDIR /app
 6 | 
 7 | # Copy package files
 8 | COPY package.json package-lock.json* ./
 9 | 
10 | # Install dependencies without running scripts
11 | RUN npm install --ignore-scripts
12 | 
13 | # Copy rest of the source code
14 | COPY . .
15 | 
16 | # Build the project (TypeScript compilation)
17 | RUN npm run build
18 | 
19 | # Expose port if necessary (not specified in MCP, so optional)
20 | # EXPOSE 3000
21 | 
22 | # Start the MCP server
23 | CMD ["npm", "run", "start"]
24 | 
```

--------------------------------------------------------------------------------
/smithery.yaml:
--------------------------------------------------------------------------------

```yaml
 1 | # Smithery configuration file: https://smithery.ai/docs/config#smitheryyaml
 2 | 
 3 | startCommand:
 4 |   type: stdio
 5 |   configSchema:
 6 |     # JSON Schema defining the configuration options for the MCP.
 7 |     type: object
 8 |     required:
 9 |       - openaiApiKey
10 |     properties:
11 |       openaiApiKey:
12 |         type: string
13 |         description: Your OpenAI API key required for image analysis.
14 |   commandFunction:
15 |     # A JS function that produces the CLI command based on the given config to start the MCP on stdio.
16 |     |-
17 |     (config) => ({
18 |       command: 'node',
19 |       args: ['dist/index.js'],
20 |       env: { OPENAI_API_KEY: config.openaiApiKey }
21 |     })
22 |   exampleConfig:
23 |     openaiApiKey: your_openai_api_key_here
24 | 
```

--------------------------------------------------------------------------------
/package.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "name": "image-mcp-server",
 3 |   "version": "1.0.0",
 4 |   "description": "MCP server for image analysis using GPT-4o-mini",
 5 |   "main": "dist/index.js",
 6 |   "type": "module",
 7 |   "scripts": {
 8 |     "build": "tsc",
 9 |     "start": "node dist/index.js",
10 |     "dev": "ts-node --esm src/index.ts",
11 |     "test": "echo \"Error: no test specified\" && exit 1"
12 |   },
13 |   "repository": {
14 |     "type": "git",
15 |     "url": "git+https://github.com/champierre/image-mcp-server.git"
16 |   },
17 |   "keywords": [],
18 |   "author": "Junya Ishihara",
19 |   "license": "MIT",
20 |   "bugs": {
21 |     "url": "https://github.com/champierre/image-mcp-server/issues"
22 |   },
23 |   "homepage": "https://github.com/champierre/image-mcp-server#readme",
24 |   "dependencies": {
25 |     "@modelcontextprotocol/sdk": "^1.7.0",
26 |     "@types/node": "^22.13.13",
27 |     "axios": "^1.8.4",
28 |     "dotenv": "^16.4.7",
29 |     "mime-types": "^3.0.1",
30 |     "openai": "^4.89.0",
31 |     "ts-node": "^10.9.2",
32 |     "typescript": "^5.8.2"
33 |   },
34 |   "devDependencies": {
35 |     "@types/mime-types": "^2.1.4"
36 |   }
37 | }
38 | 
```

--------------------------------------------------------------------------------
/src/index.ts:
--------------------------------------------------------------------------------

```typescript
  1 | #!/usr/bin/env node
  2 | import { Server } from '@modelcontextprotocol/sdk/server/index.js';
  3 | import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
  4 | import {
  5 |   CallToolRequestSchema,
  6 |   ErrorCode,
  7 |   ListToolsRequestSchema,
  8 |   McpError,
  9 | } from '@modelcontextprotocol/sdk/types.js';
 10 | import { OpenAI } from 'openai';
 11 | import axios from 'axios';
 12 | import * as dotenv from 'dotenv';
 13 | import * as fs from 'fs'; // Import fs for file reading
 14 | import * as path from 'path'; // Import path for path operations
 15 | import * as os from 'os'; // Import os module
 16 | import * as mime from 'mime-types'; // Revert to import statement
 17 | 
 18 | // Load environment variables from .env file
 19 | dotenv.config();
 20 | 
 21 | // Get OpenAI API key from environment variables
 22 | const OPENAI_API_KEY = process.env.OPENAI_API_KEY;
 23 | if (!OPENAI_API_KEY) {
 24 |   throw new Error('OPENAI_API_KEY environment variable is required');
 25 | }
 26 | 
 27 | // Initialize OpenAI client
 28 | const openai = new OpenAI({
 29 |   apiKey: OPENAI_API_KEY,
 30 | });
 31 | 
 32 | // --- Argument Type Guards ---
 33 | const isValidAnalyzeImageArgs = (
 34 |   args: any
 35 | ): args is { imageUrl: string } =>
 36 |   typeof args === 'object' &&
 37 |   args !== null &&
 38 |   typeof args.imageUrl === 'string';
 39 | 
 40 | const isValidAnalyzeImagePathArgs = (
 41 |   args: any
 42 | ): args is { imagePath: string } => // New type guard for path tool
 43 |   typeof args === 'object' &&
 44 |   args !== null &&
 45 |   typeof args.imagePath === 'string';
 46 | // --- End Argument Type Guards ---
 47 | 
 48 | class ImageAnalysisServer {
 49 |   private server: Server;
 50 | 
 51 |   constructor() {
 52 |     this.server = new Server(
 53 |       {
 54 |         name: 'image-analysis-server',
 55 |         version: '1.1.0', // Version bump
 56 |       },
 57 |       {
 58 |         capabilities: {
 59 |           tools: {},
 60 |         },
 61 |       }
 62 |     );
 63 | 
 64 |     this.setupToolHandlers();
 65 | 
 66 |     // Error handling
 67 |     this.server.onerror = (error) => console.error('[MCP Error]', error);
 68 |     process.on('SIGINT', async () => {
 69 |       await this.server.close();
 70 |       process.exit(0);
 71 |     });
 72 |   }
 73 | 
 74 |   private setupToolHandlers() {
 75 |     // Define tool list
 76 |     this.server.setRequestHandler(ListToolsRequestSchema, async () => ({
 77 |       tools: [
 78 |         {
 79 |           name: 'analyze_image',
 80 |           description: 'Receives an image URL and analyzes the image content using GPT-4o-mini',
 81 |           inputSchema: {
 82 |             type: 'object',
 83 |             properties: {
 84 |               imageUrl: {
 85 |                 type: 'string',
 86 |                 description: 'URL of the image to analyze',
 87 |               },
 88 |             },
 89 |             required: ['imageUrl'],
 90 |           },
 91 |         },
 92 |         // --- New Tool Definition ---
 93 |         {
 94 |           name: 'analyze_image_from_path',
 95 |           description: 'Loads an image from a local file path and analyzes its content using GPT-4o-mini. AI assistants need to provide a valid path for the server execution environment (e.g., Linux path if the server is running on WSL).',
 96 |           inputSchema: {
 97 |             type: 'object',
 98 |             properties: {
 99 |               imagePath: {
100 |                 type: 'string',
101 |                 description: 'Local file path of the image to analyze (must be accessible from the server execution environment)',
102 |               },
103 |             },
104 |             required: ['imagePath'],
105 |           },
106 |         },
107 |         // --- End New Tool Definition ---
108 |       ],
109 |     }));
110 | 
111 |     // Tool execution handler
112 |     this.server.setRequestHandler(CallToolRequestSchema, async (request) => {
113 |       const toolName = request.params.name;
114 |       const args = request.params.arguments;
115 | 
116 |       try {
117 |         let analysis: string;
118 | 
119 |         if (toolName === 'analyze_image') {
120 |           if (!isValidAnalyzeImageArgs(args)) {
121 |             throw new McpError(
122 |               ErrorCode.InvalidParams,
123 |               'Invalid arguments for analyze_image: imageUrl (string) is required'
124 |             );
125 |           }
126 |           const imageUrl = args.imageUrl;
127 |           await this.validateImageUrl(imageUrl); // Validate URL accessibility
128 |           analysis = await this.analyzeImageWithGpt4({ type: 'url', data: imageUrl });
129 | 
130 |         } else if (toolName === 'analyze_image_from_path') {
131 |           if (!isValidAnalyzeImagePathArgs(args)) {
132 |             throw new McpError(
133 |               ErrorCode.InvalidParams,
134 |               'Invalid arguments for analyze_image_from_path: imagePath (string) is required'
135 |             );
136 |           }
137 |           const imagePath = args.imagePath;
138 |           // Basic security check: prevent absolute paths trying to escape common roots (adjust as needed)
139 |           // This is a VERY basic check and might need refinement based on security requirements.
140 |           if (path.isAbsolute(imagePath) && !imagePath.startsWith(process.cwd()) && !imagePath.startsWith(os.homedir()) && !imagePath.startsWith('/mnt/')) {
141 |              // Allow relative paths, paths within cwd, home, or WSL mounts. Adjust if needed.
142 |              console.warn(`Potential unsafe path access attempt blocked: ${imagePath}`);
143 |              throw new McpError(ErrorCode.InvalidParams, 'Invalid or potentially unsafe imagePath provided.');
144 |           }
145 | 
146 |           const resolvedPath = path.resolve(imagePath); // Resolve relative paths
147 |           if (!fs.existsSync(resolvedPath)) {
148 |             throw new McpError(ErrorCode.InvalidParams, `File not found at path: ${resolvedPath}`);
149 |           }
150 |           const imageDataBuffer = fs.readFileSync(resolvedPath);
151 |           const base64String = imageDataBuffer.toString('base64');
152 |           const mimeType = mime.lookup(resolvedPath) || 'application/octet-stream'; // Detect MIME type or default
153 | 
154 |           if (!mimeType.startsWith('image/')) {
155 |              throw new McpError(ErrorCode.InvalidParams, `File is not an image: ${mimeType}`);
156 |           }
157 | 
158 |           analysis = await this.analyzeImageWithGpt4({ type: 'base64', data: base64String, mimeType: mimeType });
159 | 
160 |         } else {
161 |           throw new McpError(
162 |             ErrorCode.MethodNotFound,
163 |             `Unknown tool: ${toolName}`
164 |           );
165 |         }
166 | 
167 |         // Return successful analysis
168 |         return {
169 |           content: [
170 |             {
171 |               type: 'text',
172 |               text: analysis,
173 |             },
174 |           ],
175 |         };
176 | 
177 |       } catch (error) {
178 |         console.error(`Error calling tool ${toolName}:`, error);
179 |         // Return error content
180 |         return {
181 |           content: [
182 |             {
183 |               type: 'text',
184 |               text: `Tool execution error (${toolName}): ${error instanceof Error ? error.message : String(error)}`,
185 |             },
186 |           ],
187 |           isError: true,
188 |         };
189 |       }
190 |     });
191 |   }
192 | 
193 |   // Method to check if the image URL is valid (existing)
194 |   private async validateImageUrl(url: string): Promise<void> {
195 |     try {
196 |       const response = await axios.head(url);
197 |       const contentType = response.headers['content-type'];
198 |       if (!contentType || !contentType.startsWith('image/')) {
199 |         throw new Error(`URL is not an image: ${contentType}`);
200 |       }
201 |     } catch (error) {
202 |       if (axios.isAxiosError(error)) {
203 |         throw new Error(`Cannot access image URL: ${error.message}`);
204 |       }
205 |       throw error;
206 |     }
207 |   }
208 | 
209 |   // Method to analyze images with GPT-4o-mini (modified: accepts URL or Base64)
210 |   private async analyzeImageWithGpt4(
211 |      imageData: { type: 'url', data: string } | { type: 'base64', data: string, mimeType: string }
212 |    ): Promise<string> {
213 |     try {
214 |       let imageInput: any;
215 |       if (imageData.type === 'url') {
216 |         imageInput = { type: 'image_url', image_url: { url: imageData.data } };
217 |       } else {
218 |         // Construct data URI for OpenAI API
219 |         imageInput = { type: 'image_url', image_url: { url: `data:${imageData.mimeType};base64,${imageData.data}` } };
220 |       }
221 | 
222 |       const response = await openai.chat.completions.create({
223 |         model: 'gpt-4o-mini',
224 |         messages: [
225 |           {
226 |             role: 'system',
227 |             content: 'Analyze the image content in detail and provide an explanation in English.',
228 |           },
229 |           {
230 |             role: 'user',
231 |             content: [
232 |               { type: 'text', text: 'Please analyze the following image and explain its content in detail.' },
233 |               imageInput, // Use the constructed image input
234 |             ],
235 |           },
236 |         ],
237 |         max_tokens: 1000,
238 |       });
239 | 
240 |       return response.choices[0]?.message?.content || 'Could not retrieve analysis results.';
241 |     } catch (error) {
242 |       console.error('OpenAI API error:', error);
243 |       throw new Error(`OpenAI API error: ${error instanceof Error ? error.message : String(error)}`);
244 |     }
245 |   }
246 | 
247 |   async run() {
248 |     const transport = new StdioServerTransport();
249 |     await this.server.connect(transport);
250 |     console.error('Image Analysis MCP server (v1.1.0) running on stdio'); // Updated version
251 |   }
252 | }
253 | 
254 | const server = new ImageAnalysisServer();
255 | server.run().catch(console.error);
256 | 
```
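
For illustration, a client could exercise the two tools exposed by `src/index.ts` over stdio roughly as follows. This is a minimal sketch assuming the `Client` and `StdioClientTransport` classes from `@modelcontextprotocol/sdk`; the server path, client name, and example image locations are placeholders, not values taken from this repository.

```typescript
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

async function main() {
  // Spawn the built server as a child process and communicate over stdio.
  const transport = new StdioClientTransport({
    command: 'node',
    args: ['/path/to/image-mcp-server/dist/index.js'], // placeholder path
    env: { OPENAI_API_KEY: process.env.OPENAI_API_KEY ?? '' },
  });

  const client = new Client({ name: 'example-client', version: '0.0.1' }, { capabilities: {} });
  await client.connect(transport);

  // Call the URL-based tool.
  const fromUrl = await client.callTool({
    name: 'analyze_image',
    arguments: { imageUrl: 'https://example.com/image.jpg' },
  });
  console.log(JSON.stringify(fromUrl, null, 2));

  // Call the local-path tool (the path must be valid on the server side).
  const fromPath = await client.callTool({
    name: 'analyze_image_from_path',
    arguments: { imagePath: '/path/to/your/image.jpg' },
  });
  console.log(JSON.stringify(fromPath, null, 2));

  await client.close();
}

main().catch(console.error);
```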