# Directory Structure

```
├── .github
│   └── workflows
│       └── publish.yml
├── .gitignore
├── biome.json
├── Dockerfile
├── index.ts
├── LICENSE
├── package-lock.json
├── package.json
├── README.md
├── smithery.yaml
├── tsconfig.json
└── types.d.ts
```

# Files

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
1 | dist
2 | node_modules
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
  1 | # MCP Fetch
  2 | 
  3 | [![smithery badge](https://smithery.ai/badge/@kazuph/mcp-fetch)](https://smithery.ai/server/@kazuph/mcp-fetch)
  4 | 
  5 | Model Context Protocol server for fetching web content and processing images. This allows Claude Desktop (or any MCP client) to fetch web content and handle images appropriately.
  6 | 
  7 | ## Quick Start (For Users)
  8 | 
  9 | To use this tool with Claude Desktop, simply add the following to your Claude Desktop configuration (`~/Library/Application Support/Claude/claude_desktop_config.json`):
 10 | 
 11 | ```json
 12 | {
 13 |   "mcpServers": {
 14 |     "fetch": {
 15 |       "command": "npx",
 16 |       "args": ["-y", "@kazuph/mcp-fetch"]
 17 |     }
 18 |   }
 19 | }
 20 | ```
 21 | 
 22 | This will automatically download and run the latest version of the tool when needed.
 23 | 
 24 | ### Required Setup
 25 | 
 26 | 1. Enable Accessibility for Claude:
 27 |    - Open System Settings
 28 |    - Go to Privacy & Security > Accessibility
 29 |    - Click the "+" button
 30 |    - Add Claude from your Applications folder
 31 |    - Turn ON the toggle for Claude
 32 | 
 33 | This accessibility setting is required for automated clipboard operations (Cmd+V) to work properly.
 34 | 
 35 | ## For Developers
 36 | 
 37 | The following sections are for those who want to develop or modify the tool.
 38 | 
 39 | ## Prerequisites
 40 | 
 41 | - Node.js 18+
 42 | - macOS (for clipboard operations)
 43 | - Claude Desktop (install from https://claude.ai/desktop)
 44 | - tsx (install via `npm install -g tsx`)
 45 | 
 46 | ## Installation
 47 | 
 48 | ### Installing via Smithery
 49 | 
 50 | To install MCP Fetch for Claude Desktop automatically via [Smithery](https://smithery.ai/server/@kazuph/mcp-fetch):
 51 | 
 52 | ```bash
 53 | npx -y @smithery/cli install @kazuph/mcp-fetch --client claude
 54 | ```
 55 | 
 56 | ### Manual Installation
 57 | ```bash
 58 | git clone https://github.com/kazuph/mcp-fetch.git
 59 | cd mcp-fetch
 60 | npm install
 61 | npm run build
 62 | ```
 63 | 
 64 | ## Image Processing Specifications
 65 | 
 66 | When processing images from web content, the following limits are applied:
 67 | 
 68 | - Maximum 6 images per group
 69 | - Maximum height of 8000 pixels per group
 70 | - Maximum size of 30MB per group
 71 | 
 72 | If content exceeds these limits, images will be automatically split into multiple groups, and you'll need to paste (Cmd+V) multiple times.
 73 | 
 74 | ## Configuration
 75 | 
 76 | 1. Make sure Claude Desktop is installed and running.
 77 | 
 78 | 2. Install tsx globally if you haven't:
 79 | ```bash
 80 | npm install -g tsx
 81 | # or
 82 | pnpm add -g tsx
 83 | ```
 84 | 
 85 | 3. Modify your Claude Desktop config located at:
 86 | `~/Library/Application Support/Claude/claude_desktop_config.json`
 87 | 
 88 | You can easily find this through the Claude Desktop menu:
 89 | 1. Open Claude Desktop
 90 | 2. Click Claude on the Mac menu bar
 91 | 3. Click "Settings"
 92 | 4. Click "Developer"
 93 | 
 94 | Add the following to your MCP client's configuration:
 95 | 
 96 | ```json
 97 | {
 98 |   "mcpServers": {
 99 |     "fetch": {
100 |       "command": "tsx", "args": ["/path/to/mcp-fetch/index.ts"]
101 |     }
102 |   }
103 | }
104 | ```
105 | 
106 | ## Available Tools
107 | 
108 | - `fetch`: Retrieves URLs from the Internet and extracts their content as markdown. Images are automatically processed and prepared for clipboard operations.
109 | 
110 | ## Notes
111 | 
112 | - This tool is designed for macOS only due to its dependency on macOS-specific clipboard operations.
113 | - Images are processed using Sharp for optimal performance and quality.
114 | - When multiple images are found, they are merged vertically with consideration for size limits.
115 | - Animated GIFs are automatically handled by extracting their first frame.
116 | 
```
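The README's grouping rules (at most 6 images, 8000 px of total height, and 30 MB per group) amount to a greedy split over the image list. A minimal TypeScript sketch of that policy, with hypothetical names (`groupImages`, `Dim`) that are not part of the repo:

```typescript
interface Dim {
	height: number
	size: number // bytes
}

const MAX_IMAGES_PER_GROUP = 6
const MAX_HEIGHT = 8000
const MAX_SIZE_BYTES = 30 * 1024 * 1024 // 30MB

// Greedily pack images into groups, starting a new group whenever
// adding the next image would exceed any of the three limits.
function groupImages(dims: Dim[]): Dim[][] {
	const groups: Dim[][] = []
	let current: Dim[] = []
	let height = 0
	let size = 0
	for (const d of dims) {
		if (
			current.length >= MAX_IMAGES_PER_GROUP ||
			height + d.height > MAX_HEIGHT ||
			size + d.size > MAX_SIZE_BYTES
		) {
			if (current.length > 0) groups.push(current)
			current = [d]
			height = d.height
			size = d.size
		} else {
			current.push(d)
			height += d.height
			size += d.size
		}
	}
	if (current.length > 0) groups.push(current)
	return groups
}
```

Each resulting group is what gets merged into one clipboard image, which is why a long article can require several Cmd+V pastes.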

--------------------------------------------------------------------------------
/biome.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "formatter": {
 3 |     "enabled": true,
 4 |     "indentStyle": "space",
 5 |     "indentWidth": 2,
 6 |     "lineWidth": 80
 7 |   },
 8 |   "linter": {
 9 |     "enabled": true,
10 |     "rules": {
11 |       "recommended": true
12 |     }
13 |   },
14 |   "javascript": {
15 |     "formatter": {
16 |       "quoteStyle": "double",
17 |       "trailingComma": "es5"
18 |     }
19 |   }
20 | }
21 | 
```

--------------------------------------------------------------------------------
/smithery.yaml:
--------------------------------------------------------------------------------

```yaml
 1 | startCommand:
 2 |   type: stdio
 3 |   configSchema:
 4 |     # JSON Schema defining the configuration options for the MCP.
 5 |     {}
 6 |   commandFunction:
 7 |     # A function that produces the CLI command to start the MCP on stdio.
 8 |     |-
 9 |     (config) => ({
10 |       "command": "node",
11 |       "args": [
12 |         "dist/index.js"
13 |       ]
14 |     })
15 | 
```

--------------------------------------------------------------------------------
/types.d.ts:
--------------------------------------------------------------------------------

```typescript
 1 | declare module "applescript" {
 2 | 	export function execString(
 3 | 		script: string,
 4 | 		callback: (err: Error | null, result: unknown) => void,
 5 | 	): void;
 6 | }
 7 | 
 8 | declare module "robots-parser" {
 9 | 	interface RobotsParser {
10 | 		isAllowed(url: string, userAgent: string): boolean;
11 | 	}
12 | 	export default function (robotsUrl: string, robotsTxt: string): RobotsParser;
13 | }
14 | 
```
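The ambient declaration above only pins down the shape the server relies on. As an illustration of that shape, here is a hypothetical minimal stand-in (not the real `robots-parser` package, which implements far more of the robots.txt spec) that honors `Disallow` prefixes under `User-agent: *`:

```typescript
interface RobotsParser {
	isAllowed(url: string, userAgent: string): boolean
}

// Hypothetical stand-in matching the declared interface: collects Disallow
// path prefixes listed under "User-agent: *" and checks URL paths against them.
function parseRobots(_robotsUrl: string, robotsTxt: string): RobotsParser {
	const disallowed: string[] = []
	let applies = false
	for (const raw of robotsTxt.split("\n")) {
		const line = raw.trim()
		const [key, ...rest] = line.split(":")
		const value = rest.join(":").trim()
		if (/^user-agent$/i.test(key.trim())) {
			applies = value === "*"
		} else if (applies && /^disallow$/i.test(key.trim()) && value) {
			disallowed.push(value)
		}
	}
	return {
		isAllowed(url: string, _userAgent: string): boolean {
			const path = new URL(url).pathname
			return !disallowed.some((prefix) => path.startsWith(prefix))
		},
	}
}
```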

--------------------------------------------------------------------------------
/tsconfig.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 | 	"compilerOptions": {
 3 | 		"target": "ES2022",
 4 | 		"strict": true,
 5 | 		"esModuleInterop": true,
 6 | 		"skipLibCheck": true,
 7 | 		"forceConsistentCasingInFileNames": true,
 8 | 		"resolveJsonModule": true,
 9 | 		"outDir": "./dist",
10 | 		"rootDir": ".",
11 | 		"moduleResolution": "NodeNext",
12 | 		"module": "NodeNext"
13 | 	},
14 | 	"exclude": ["node_modules"],
15 | 	"include": ["./**/*.ts"]
16 | }
17 | 
```

--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------

```dockerfile
 1 | FROM node:22.12-alpine AS builder
 2 | 
 3 | # Must be entire project because `prepare` script is run during `npm install` and requires all files.
 4 | COPY . /app
 5 | WORKDIR /app
 6 | 
 7 | RUN --mount=type=cache,target=/root/.npm npm install
 8 | 
 9 | FROM node:22-alpine AS release
10 | 
11 | WORKDIR /app
12 | COPY --from=builder /app/dist /app/dist
13 | COPY --from=builder /app/package.json /app/package.json
14 | COPY --from=builder /app/package-lock.json /app/package-lock.json
15 | 
16 | ENV NODE_ENV=production
17 | 
18 | RUN npm ci --ignore-scripts --omit=dev
19 | 
20 | ENTRYPOINT ["node", "dist/index.js"]
21 | 
```

--------------------------------------------------------------------------------
/.github/workflows/publish.yml:
--------------------------------------------------------------------------------

```yaml
 1 | name: Publish to npm
 2 | 
 3 | on:
 4 |   push:
 5 |     branches:
 6 |       - main
 7 |   workflow_dispatch:  # Allows manual triggering
 8 | 
 9 | permissions:
10 |   contents: write
11 | 
12 | concurrency:
13 |   group: ${{ github.workflow }}-${{ github.ref }}
14 |   cancel-in-progress: true
15 | 
16 | jobs:
17 |   publish:
18 |     runs-on: ubuntu-latest
19 | 
20 |     steps:
21 |       - name: Check out repository
22 |         uses: actions/checkout@v4
23 |         with:
24 |           fetch-depth: 0
25 | 
26 |       - name: Set up Node.js
27 |         uses: actions/setup-node@v4
28 |         with:
29 |           node-version: '18'
30 |           registry-url: 'https://registry.npmjs.org'
31 | 
32 |       - name: Install dependencies
33 |         run: npm ci
34 | 
35 |       - name: Build project
36 |         run: npm run build
37 | 
38 |       - name: Configure Git
39 |         run: |
40 |           git config --local user.email "[email protected]"
41 |           git config --local user.name "GitHub Action"
42 | 
43 |       - name: Bump version
44 |         run: |
45 |           npm version patch -m "chore: bump version to %s [skip ci]"
46 |           git push
47 |           git push --tags
48 | 
49 |       - name: Publish to npm
50 |         run: npm publish --access public
51 |         env:
52 |           NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
53 | 
```

--------------------------------------------------------------------------------
/package.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 | 	"name": "@kazuph/mcp-fetch",
 3 | 	"version": "0.8.11",
 4 | 	"type": "module",
 5 | 	"description": "A Model Context Protocol server that provides web content fetching capabilities",
 6 | 	"main": "dist/index.js",
 7 | 	"scripts": {
 8 | 		"prepare": "npm run build",
 9 | 		"build": "tsc",
10 | 		"start": "node dist/index.js",
11 | 		"dev": "tsc && node dist/index.js",
12 | 		"check": "biome check .",
13 | 		"format": "biome format . --write",
14 | 		"lint": "biome lint .",
15 | 		"typecheck": "tsc --noEmit",
16 | 		"test": "npm run typecheck && npm run check"
17 | 	},
18 | 	"dependencies": {
19 | 		"@modelcontextprotocol/sdk": "^1.0.0",
20 | 		"@mozilla/readability": "^0.5.0",
21 | 		"@types/sharp": "^0.31.1",
22 | 		"jsdom": "^24.0.0",
23 | 		"node-fetch": "^3.3.2",
24 | 		"robots-parser": "^3.0.1",
25 | 		"sharp": "^0.33.5",
26 | 		"turndown": "^7.1.2",
27 | 		"zod": "^3.22.4",
28 | 		"zod-to-json-schema": "^3.22.4"
29 | 	},
30 | 	"devDependencies": {
31 | 		"@types/jsdom": "^21.1.6",
32 | 		"@types/node": "^20.10.5",
33 | 		"@types/turndown": "^5.0.4",
34 | 		"typescript": "^5.3.3"
35 | 	},
36 | 	"author": "kazuph",
37 | 	"license": "MIT",
38 | 	"publishConfig": {
39 | 		"access": "public"
40 | 	},
41 | 	"files": [
42 | 		"dist",
43 | 		"dist/**/*.map",
44 | 		"README.md"
45 | 	],
46 | 	"repository": {
47 | 		"type": "git",
48 | 		"url": "git+https://github.com/kazuph/mcp-fetch.git"
49 | 	},
50 | 	"keywords": [
51 | 		"mcp",
52 | 		"fetch",
53 | 		"web",
54 | 		"content"
55 | 	],
56 | 	"bugs": {
57 | 		"url": "https://github.com/kazuph/mcp-fetch/issues"
58 | 	},
59 | 	"homepage": "https://github.com/kazuph/mcp-fetch#readme",
60 | 	"bin": {
61 | 		"mcp-fetch": "./dist/index.js"
62 | 	}
63 | }
64 | 
```

--------------------------------------------------------------------------------
/index.ts:
--------------------------------------------------------------------------------

```typescript
  1 | #!/usr/bin/env node
  2 | 
  3 | import { Server } from "@modelcontextprotocol/sdk/server/index.js"
  4 | import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js"
  5 | import { z } from "zod"
  6 | import { zodToJsonSchema } from "zod-to-json-schema"
  7 | import fetch from "node-fetch"
  8 | import { JSDOM } from "jsdom"
  9 | import { Readability } from "@mozilla/readability"
 10 | import TurndownService from "turndown"
 11 | import { exec } from "node:child_process"
 12 | import { promisify } from "node:util"
 13 | import sharp from "sharp"
 14 | 
 15 | const execAsync = promisify(exec)
 16 | 
 17 | function sleep(ms: number) {
 18 | 	return new Promise((resolve) => setTimeout(resolve, ms))
 19 | }
 20 | 
 21 | interface Image {
 22 | 	src: string
 23 | 	alt: string
 24 | 	data?: Buffer
 25 | }
 26 | 
 27 | interface ExtractedContent {
 28 | 	markdown: string
 29 | 	images: Image[]
 30 | }
 31 | 
 32 | const DEFAULT_USER_AGENT_AUTONOMOUS =
 33 | 	"ModelContextProtocol/1.0 (Autonomous; +https://github.com/modelcontextprotocol/servers)"
 34 | const DEFAULT_USER_AGENT_MANUAL =
 35 | 	"ModelContextProtocol/1.0 (User-Specified; +https://github.com/modelcontextprotocol/servers)"
 36 | 
 37 | const FetchArgsSchema = z.object({
 38 | 	url: z.string().url(),
 39 | 	maxLength: z.number().positive().max(1000000).default(20000),
 40 | 	startIndex: z.number().min(0).default(0),
 41 | 	raw: z.boolean().default(false),
 42 | })
 43 | 
 44 | const ListToolsSchema = z.object({
 45 | 	method: z.literal("tools/list"),
 46 | })
 47 | 
 48 | const CallToolSchema = z.object({
 49 | 	method: z.literal("tools/call"),
 50 | 	params: z.object({
 51 | 		name: z.string(),
 52 | 		arguments: z.record(z.unknown()).optional(),
 53 | 	}),
 54 | })
 55 | 
 56 | function extractContentFromHtml(
 57 | 	html: string,
 58 | 	url: string,
 59 | ): ExtractedContent | string {
 60 | 	const dom = new JSDOM(html, { url })
 61 | 	const reader = new Readability(dom.window.document)
 62 | 	const article = reader.parse()
 63 | 
 64 | 	if (!article || !article.content) {
 65 | 		return "<e>Page failed to be simplified from HTML</e>"
 66 | 	}
 67 | 
 68 | 	// Extract images from the article content only
 69 | 	const articleDom = new JSDOM(article.content)
 70 | 	const imgElements = Array.from(
 71 | 		articleDom.window.document.querySelectorAll("img"),
 72 | 	)
 73 | 
 74 | 	const images: Image[] = imgElements.map((img) => {
 75 | 		const src = img.src
 76 | 		const alt = img.alt || ""
 77 | 		return { src, alt }
 78 | 	})
 79 | 
 80 | 	const turndownService = new TurndownService({
 81 | 		headingStyle: "atx",
 82 | 		codeBlockStyle: "fenced",
 83 | 	})
 84 | 	const markdown = turndownService.turndown(article.content)
 85 | 
 86 | 	return { markdown, images }
 87 | }
 88 | 
 89 | async function fetchImages(
 90 | 	images: Image[],
 91 | ): Promise<(Image & { data: Buffer })[]> {
 92 | 	const fetchedImages = []
 93 | 	for (const img of images) {
 94 | 		const response = await fetch(img.src)
 95 | 		if (!response.ok) {
 96 | 			throw new Error(
 97 | 				`Failed to fetch image ${img.src}: status ${response.status}`,
 98 | 			)
 99 | 		}
100 | 		const buffer = await response.arrayBuffer()
101 | 		const imageBuffer = Buffer.from(buffer)
102 | 
103 | 		// Check if the image is a GIF and extract first frame if animated
104 | 		if (img.src.toLowerCase().endsWith(".gif")) {
105 | 			try {
106 | 				const metadata = await sharp(imageBuffer).metadata()
107 | 				if (metadata.pages && metadata.pages > 1) {
108 | 					// Extract first frame of animated GIF
109 | 					const firstFrame = await sharp(imageBuffer, { page: 0 })
110 | 						.png()
111 | 						.toBuffer()
112 | 					fetchedImages.push({
113 | 						...img,
114 | 						data: firstFrame,
115 | 					})
116 | 					continue
117 | 				}
118 | 			} catch (error) {
119 | 				console.warn(`Warning: Failed to process GIF image ${img.src}:`, error)
120 | 			}
121 | 		}
122 | 
123 | 		fetchedImages.push({
124 | 			...img,
125 | 			data: imageBuffer,
126 | 		})
127 | 	}
128 | 	return fetchedImages
129 | }
130 | 
131 | async function commandExists(cmd: string): Promise<boolean> {
132 | 	try {
133 | 		await execAsync(`which ${cmd}`)
134 | 		return true
135 | 	} catch {
136 | 		return false
137 | 	}
138 | }
139 | 
140 | async function getImageDimensions(
141 | 	buffer: Buffer,
142 | ): Promise<{ width: number; height: number; size: number }> {
143 | 	const metadata = await sharp(buffer).metadata()
144 | 	return {
145 | 		width: metadata.width || 0,
146 | 		height: metadata.height || 0,
147 | 		size: buffer.length,
148 | 	}
149 | }
150 | 
151 | async function addImagesToClipboard(
152 | 	images: (Image & { data: Buffer })[],
153 | ): Promise<void> {
154 | 	if (images.length === 0) return
155 | 
156 | 	const hasPbcopy = await commandExists("pbcopy")
157 | 	const hasOsascript = await commandExists("osascript")
158 | 	if (!hasPbcopy) {
159 | 		throw new Error(
160 | 			"'pbcopy' command not found. This tool works on macOS only by default.",
161 | 		)
162 | 	}
163 | 	if (!hasOsascript) {
164 | 		throw new Error(
165 | 			"'osascript' command not found. Required to set clipboard with images.",
166 | 		)
167 | 	}
168 | 
169 | 	const MAX_HEIGHT = 8000
170 | 	const MAX_SIZE_BYTES = 30 * 1024 * 1024 // 30MB
171 | 	const MAX_IMAGES_PER_GROUP = 6 // Maximum number of images per group
172 | 
173 | 	const tempDir = "/tmp/mcp-fetch-images"
174 | 	await execAsync(`mkdir -p ${tempDir} && rm -f ${tempDir}/*.png`)
175 | 
176 | 	// Group the images and process each group
177 | 	let currentGroup: Buffer[] = []
178 | 	let currentHeight = 0
179 | 	let currentSize = 0
180 | 
181 | 	const processGroup = async (group: Buffer[]) => {
182 | 		if (group.length === 0) return
183 | 
184 | 	// Merge the images vertically
185 | 		const mergedImagePath = `${tempDir}/merged_${Date.now()}.png`
186 | 		await sharp({
187 | 			create: {
188 | 				width: Math.max(
189 | 					...(await Promise.all(
190 | 						group.map(async (buffer) => {
191 | 							const metadata = await sharp(buffer).metadata()
192 | 							return metadata.width || 0
193 | 						}),
194 | 					)),
195 | 				),
196 | 				height: (
197 | 					await Promise.all(
198 | 						group.map(async (buffer) => {
199 | 							const metadata = await sharp(buffer).metadata()
200 | 							return metadata.height || 0
201 | 						}),
202 | 					)
203 | 				).reduce((a, b) => a + b, 0),
204 | 				channels: 4,
205 | 				background: { r: 255, g: 255, b: 255, alpha: 1 },
206 | 			},
207 | 		})
208 | 			.composite(
209 | 				await Promise.all(
210 | 					group.map(async (buffer, index) => {
211 | 						const previousHeights = await Promise.all(
212 | 							group.slice(0, index).map(async (b) => {
213 | 								const metadata = await sharp(b).metadata()
214 | 								return metadata.height || 0
215 | 							}),
216 | 						)
217 | 						const top = previousHeights.reduce((a, b) => a + b, 0)
218 | 						return {
219 | 							input: buffer,
220 | 							top,
221 | 							left: 0,
222 | 						}
223 | 					}),
224 | 				),
225 | 			)
226 | 			.png()
227 | 			.toFile(mergedImagePath)
228 | 
229 | 		const { stderr } = await execAsync(
230 | 			`osascript -e 'set the clipboard to (read (POSIX file "${mergedImagePath}") as «class PNGf»)'`,
231 | 		)
232 | 		if (stderr?.trim()) {
233 | 			const lines = stderr.trim().split("\n")
234 | 			const nonWarningLines = lines.filter((line) => !line.includes("WARNING:"))
235 | 			if (nonWarningLines.length > 0) {
236 | 				throw new Error("Failed to copy merged image to clipboard.")
237 | 			}
238 | 		}
239 | 
240 | 		await sleep(500)
241 | 		const pasteScript = `osascript -e 'tell application "System Events" to keystroke "v" using command down'`
242 | 		const { stderr: pasteStderr } = await execAsync(pasteScript)
243 | 		if (pasteStderr?.trim()) {
244 | 			const lines = pasteStderr.trim().split("\n")
245 | 			const nonWarningLines = lines.filter((line) => !line.includes("WARNING:"))
246 | 			if (nonWarningLines.length > 0) {
247 | 				console.warn("Failed to paste merged image.")
248 | 			}
249 | 		}
250 | 		await sleep(500)
251 | 	}
252 | 
253 | 	for (const img of images) {
254 | 		const { height, size } = await getImageDimensions(img.data)
255 | 
256 | 		if (
257 | 			currentGroup.length >= MAX_IMAGES_PER_GROUP ||
258 | 			currentHeight + height > MAX_HEIGHT ||
259 | 			currentSize + size > MAX_SIZE_BYTES
260 | 		) {
261 | 			// Process the current group
262 | 			await processGroup(currentGroup)
263 | 			// Start a new group
264 | 			currentGroup = [img.data]
265 | 			currentHeight = height
266 | 			currentSize = size
267 | 		} else {
268 | 			currentGroup.push(img.data)
269 | 			currentHeight += height
270 | 			currentSize += size
271 | 		}
272 | 	}
273 | 
274 | 	// Process the remaining group
275 | 	await processGroup(currentGroup)
276 | 
277 | 	await execAsync(`rm -rf ${tempDir}`)
278 | }
279 | 
280 | interface FetchResult {
281 | 	content: string
282 | 	prefix: string
283 | 	imageUrls?: string[]
284 | }
285 | 
286 | async function fetchUrl(
287 | 	url: string,
288 | 	userAgent: string,
289 | 	forceRaw = false,
290 | ): Promise<FetchResult> {
291 | 	const response = await fetch(url, {
292 | 		headers: { "User-Agent": userAgent },
293 | 	})
294 | 
295 | 	if (!response.ok) {
296 | 		throw new Error(`Failed to fetch ${url} - status code ${response.status}`)
297 | 	}
298 | 
299 | 	const contentType = response.headers.get("content-type") || ""
300 | 	const text = await response.text()
301 | 	const isHtml =
302 | 		text.toLowerCase().includes("<html") || contentType.includes("text/html")
303 | 
304 | 	if (isHtml && !forceRaw) {
305 | 		const result = extractContentFromHtml(text, url)
306 | 		if (typeof result === "string") {
307 | 			return {
308 | 				content: result,
309 | 				prefix: "",
310 | 			}
311 | 		}
312 | 
313 | 		const { markdown, images } = result
314 | 		const fetchedImages = await fetchImages(images)
315 | 		const imageUrls = fetchedImages.map((img) => img.src)
316 | 
317 | 		if (fetchedImages.length > 0) {
318 | 			try {
319 | 				await addImagesToClipboard(fetchedImages)
320 | 				return {
321 | 					content: markdown,
322 | 					prefix: `Found and processed ${fetchedImages.length} images. Images have been merged vertically (max 6 images per group) and copied to your clipboard. Please paste (Cmd+V) to combine with the retrieved content.\n`,
323 | 					imageUrls,
324 | 				}
325 | 			} catch (err) {
326 | 				return {
327 | 					content: markdown,
328 | 					prefix: `Found ${fetchedImages.length} images but failed to copy them to the clipboard.\nError: ${err instanceof Error ? err.message : String(err)}\n`,
329 | 					imageUrls,
330 | 				}
331 | 			}
332 | 		}
333 | 		return {
334 | 			content: markdown,
335 | 			prefix: "",
336 | 			imageUrls,
337 | 		}
338 | 	}
339 | 
340 | 	return {
341 | 		content: text,
342 | 		prefix: `Content type ${contentType} cannot be simplified to markdown, but here is the raw content:\n`,
343 | 	}
344 | }
345 | 
346 | // Server setup
347 | const server = new Server(
348 | 	{
349 | 		name: "mcp-fetch",
350 | 		version: "1.0.0",
351 | 	},
352 | 	{
353 | 		capabilities: {
354 | 			tools: {},
355 | 		},
356 | 	},
357 | )
358 | 
359 | interface RequestHandlerExtra {
360 | 	signal: AbortSignal
361 | }
362 | 
363 | server.setRequestHandler(
364 | 	ListToolsSchema,
365 | 	async (request: { method: "tools/list" }, extra: RequestHandlerExtra) => {
366 | 		const tools = [
367 | 			{
368 | 				name: "fetch",
369 | 				description:
370 | 					"Retrieves URLs from the Internet and extracts their content as markdown. If images are found, they are merged vertically (max 6 images per group, max height 8000px, max size 30MB per group) and copied to the clipboard of the user's host machine. You will need to paste (Cmd+V) to insert the images.",
371 | 				inputSchema: zodToJsonSchema(FetchArgsSchema),
372 | 			},
373 | 		]
374 | 		return { tools }
375 | 	},
376 | )
377 | 
378 | server.setRequestHandler(
379 | 	CallToolSchema,
380 | 	async (
381 | 		request: {
382 | 			method: "tools/call"
383 | 			params: { name: string; arguments?: Record<string, unknown> }
384 | 		},
385 | 		extra: RequestHandlerExtra,
386 | 	) => {
387 | 		try {
388 | 			const { name, arguments: args } = request.params
389 | 
390 | 			if (name !== "fetch") {
391 | 				throw new Error(`Unknown tool: ${name}`)
392 | 			}
393 | 
394 | 			const parsed = FetchArgsSchema.safeParse(args)
395 | 			if (!parsed.success) {
396 | 				throw new Error(`Invalid arguments: ${parsed.error}`)
397 | 			}
398 | 
399 | 			const { content, prefix, imageUrls } = await fetchUrl(
400 | 				parsed.data.url,
401 | 				DEFAULT_USER_AGENT_AUTONOMOUS,
402 | 				parsed.data.raw,
403 | 			)
404 | 
405 | 			// Apply the startIndex window even when content is short so paging works
406 | 			let finalContent = content.slice(
407 | 				parsed.data.startIndex,
408 | 				parsed.data.startIndex + parsed.data.maxLength,
409 | 			)
410 | 			if (parsed.data.startIndex + parsed.data.maxLength < content.length) {
411 | 				finalContent += `\n\n<e>Content truncated. Call the fetch tool with a startIndex of ${
412 | 					parsed.data.startIndex + parsed.data.maxLength
413 | 				} to get more content.</e>`
414 | 			}
415 | 
416 | 			let imagesSection = ""
417 | 			if (imageUrls && imageUrls.length > 0) {
418 | 				imagesSection =
419 | 					"\n\nImages found in article:\n" +
420 | 					imageUrls.map((url) => `- ${url}`).join("\n")
421 | 			}
422 | 
423 | 			return {
424 | 				content: [
425 | 					{
426 | 						type: "text",
427 | 						text: `${prefix}Contents of ${parsed.data.url}:\n${finalContent}${imagesSection}`,
428 | 					},
429 | 				],
430 | 			}
431 | 		} catch (error) {
432 | 			return {
433 | 				content: [
434 | 					{
435 | 						type: "text",
436 | 						text: `Error: ${error instanceof Error ? error.message : String(error)}`,
437 | 					},
438 | 				],
439 | 				isError: true,
440 | 			}
441 | 		}
442 | 	},
443 | )
444 | 
445 | // Start server
446 | async function runServer() {
447 | 	const transport = new StdioServerTransport()
448 | 	await server.connect(transport)
449 | }
450 | 
451 | runServer().catch((error) => {
452 | 	process.stderr.write(`Fatal error running server: ${error}\n`)
453 | 	process.exit(1)
454 | })
455 | 
```
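The `maxLength`/`startIndex` handling in the call handler is effectively cursor-based pagination over the extracted text. A standalone sketch of that windowing, with a hypothetical helper name (`paginate` is not part of the repo), mirroring the handler's slice logic:

```typescript
interface Page {
	text: string
	nextStartIndex?: number // set only when more content remains
}

// Hypothetical helper: window `content` the way the fetch tool's
// maxLength/startIndex arguments do, and report where the next page starts.
function paginate(content: string, startIndex: number, maxLength: number): Page {
	const text = content.slice(startIndex, startIndex + maxLength)
	const next = startIndex + maxLength
	return next < content.length ? { text, nextStartIndex: next } : { text }
}
```

A client pages through a long article by calling the tool again with `startIndex` set to the value named in the truncation notice, until no notice is appended.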