# Directory Structure

```
├── .github
│   └── workflows
│       └── npm-publish.yml
├── .gitignore
├── .npmignore
├── docker-compose.yml
├── Dockerfile
├── package-lock.json
├── package.json
├── README.md
├── smithery.yaml
├── src
│   ├── index.ts
│   └── test-client.ts
├── swagger.json
└── tsconfig.json
```

# Files

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
 1 | # Dependency directories
 2 | node_modules/
 3 | 
 4 | # Build outputs
 5 | dist/
 6 | build/
 7 | *.tsbuildinfo
 8 | 
 9 | # Logs
10 | logs
11 | *.log
12 | npm-debug.log*
13 | yarn-debug.log*
14 | yarn-error.log*
15 | lerna-debug.log*
16 | 
17 | # Environment variables
18 | .env
19 | .env.local
20 | .env.development
21 | .env.test
22 | .env.production
23 | 
24 | # IDE specific files
25 | .idea/
26 | .vscode/
27 | *.swp
28 | *.swo
29 | .DS_Store
30 | .history/
31 | 
32 | # Caches
33 | .npm
34 | .eslintcache
35 | 
36 | # Test coverage
37 | coverage/
38 | 
39 | # Temporary files
40 | tmp/
41 | temp/
42 | 
43 | # Docker volumes from testing
44 | .docker-volumes/
45 | 
49 | # Windows files
50 | Thumbs.db
51 | ehthumbs.db
52 | Desktop.ini
53 | $RECYCLE.BIN/ 
```

--------------------------------------------------------------------------------
/.npmignore:
--------------------------------------------------------------------------------

```
 1 | # Source files (we only want to publish the dist folder with compiled code)
 2 | src/
 3 | 
 4 | # TypeScript config files
 5 | tsconfig.json
 6 | tsconfig.*.json
 7 | 
 8 | # Test files
 9 | test/
10 | *.test.ts
11 | *.spec.ts
12 | __tests__/
13 | __mocks__/
14 | 
15 | # Git files
16 | .git/
17 | .github/
18 | .gitignore
19 | .gitattributes
20 | 
21 | # CI/CD
22 | .travis.yml
23 | .circleci/
24 | .gitlab-ci.yml
26 | .azure-pipelines.yml
27 | 
28 | # IDE specific files
29 | .idea/
30 | .vscode/
31 | *.swp
32 | *.swo
33 | .DS_Store
34 | 
35 | # Docker files
36 | docker-compose.yml
37 | Dockerfile
38 | .docker-volumes/
39 | 
40 | # Development tools and configs
41 | .prettierrc
42 | .eslintrc
43 | .eslintignore
44 | .editorconfig
45 | jest.config.js
46 | rollup.config.js
47 | webpack.config.js
48 | babel.config.js
49 | .babelrc
50 | .nvmrc
51 | 
52 | # Documentation that's not necessary for npm
53 | CONTRIBUTING.md
54 | CHANGELOG.md
55 | swagger.json
56 | docs/
57 | 
58 | # Examples not needed for npm
59 | examples/
60 | 
61 | # Logs
62 | logs
63 | *.log
64 | npm-debug.log*
65 | yarn-debug.log*
66 | yarn-error.log*
67 | lerna-debug.log*
68 | 
69 | # Misc
70 | .DS_Store
71 | *.env
72 | coverage/
73 | tmp/
74 | .nyc_output/
75 | 
76 | # Don't ignore these files
77 | !dist/
78 | !LICENSE
79 | !README.md
80 | !package.json 
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
  1 | # WebSearch-MCP
  2 | 
  3 | [![smithery badge](https://smithery.ai/badge/@mnhlt/WebSearch-MCP)](https://smithery.ai/server/@mnhlt/WebSearch-MCP)
  4 | 
  5 | A Model Context Protocol (MCP) server implementation that provides a web search capability over stdio transport. This server integrates with a WebSearch Crawler API to retrieve search results.
  6 | 
  7 | ## Table of Contents
  8 | 
  9 | - [About](#about)
 10 | - [Installation](#installation)
 11 | - [Configuration](#configuration)
 12 | - [Setup & Integration](#setup--integration)
 13 |   - [Setting Up the Crawler Service](#setting-up-the-crawler-service)
 14 |     - [Prerequisites](#prerequisites)
 15 |     - [Starting the Crawler Service](#starting-the-crawler-service)
 16 |     - [Testing the Crawler API](#testing-the-crawler-api)
 17 |     - [Custom Configuration](#custom-configuration)
 18 |   - [Integrating with MCP Clients](#integrating-with-mcp-clients)
 19 |     - [Quick Reference: MCP Configuration](#quick-reference-mcp-configuration)
 23 | - [Usage](#usage)
 24 |   - [Parameters](#parameters)
 25 |   - [Example Search Response](#example-search-response)
 26 |   - [Testing Locally](#testing-locally)
 27 |   - [As a Library](#as-a-library)
 28 | - [Troubleshooting](#troubleshooting)
 29 |   - [Crawler Service Issues](#crawler-service-issues)
 30 |   - [MCP Server Issues](#mcp-server-issues)
 31 | - [Development](#development)
 32 |   - [Project Structure](#project-structure)
 33 |   - [Publishing to npm](#publishing-to-npm)
 34 | - [Contributing](#contributing)
 35 | - [License](#license)
 36 | 
 37 | ## About
 38 | 
 39 | WebSearch-MCP is a Model Context Protocol server that provides web search capabilities to AI assistants that support MCP. It allows AI models like Claude to search the web in real-time, retrieving up-to-date information about any topic.
 40 | 
 41 | The server integrates with a Crawler API service that handles the actual web searches, and communicates with AI assistants using the standardized Model Context Protocol.
 42 | 
 43 | ## Installation
 44 | 
 45 | ### Installing via Smithery
 46 | 
 47 | To install WebSearch for Claude Desktop automatically via [Smithery](https://smithery.ai/server/@mnhlt/WebSearch-MCP):
 48 | 
 49 | ```bash
 50 | npx -y @smithery/cli install @mnhlt/WebSearch-MCP --client claude
 51 | ```
 52 | 
 53 | ### Manual Installation
 54 | ```bash
 55 | npm install -g websearch-mcp
 56 | ```
 57 | 
 58 | Or use without installing:
 59 | 
 60 | ```bash
 61 | npx websearch-mcp
 62 | ```
 63 | 
 64 | ## Configuration
 65 | 
 66 | The WebSearch MCP server can be configured using environment variables:
 67 | 
 68 | - `API_URL`: The URL of the WebSearch Crawler API (default: `http://localhost:3001`)
 69 | - `MAX_SEARCH_RESULT`: Maximum number of search results to return when not specified in the request (default: `5`)
 70 | 
 71 | Examples:
 72 | ```bash
 73 | # Configure API URL
 74 | API_URL=https://crawler.example.com npx websearch-mcp
 75 | 
 76 | # Configure maximum search results
 77 | MAX_SEARCH_RESULT=10 npx websearch-mcp
 78 | 
 79 | # Configure both
 80 | API_URL=https://crawler.example.com MAX_SEARCH_RESULT=10 npx websearch-mcp
 81 | ```
 82 | 
 83 | ## Setup & Integration
 84 | 
 85 | Setting up WebSearch-MCP involves two main parts: configuring the crawler service that performs the actual web searches, and integrating the MCP server with your AI client applications.
 86 | 
 87 | ### Setting Up the Crawler Service
 88 | 
 89 | The WebSearch MCP server requires a crawler service to perform the actual web searches. You can easily set up the crawler service using Docker Compose.
 90 | 
 91 | #### Prerequisites
 92 | 
 93 | - [Docker](https://www.docker.com/get-started) and [Docker Compose](https://docs.docker.com/compose/install/)
 94 | 
 95 | #### Starting the Crawler Service
 96 | 
 97 | 1. Create a file named `docker-compose.yml` with the following content:
 98 | 
 99 | ```yaml
100 | version: '3.8'
101 | 
102 | services:
103 |   crawler:
104 |     image: laituanmanh/websearch-crawler:latest
105 |     container_name: websearch-api
106 |     restart: unless-stopped
107 |     ports:
108 |       - "3001:3001"
109 |     environment:
110 |       - NODE_ENV=production
111 |       - PORT=3001
112 |       - LOG_LEVEL=info
113 |       - FLARESOLVERR_URL=http://flaresolverr:8191/v1
114 |     depends_on:
115 |       - flaresolverr
116 |     volumes:
117 |       - crawler_storage:/app/storage
118 | 
119 |   flaresolverr:
120 |     image: 21hsmw/flaresolverr:nodriver
121 |     container_name: flaresolverr
122 |     restart: unless-stopped
123 |     environment:
124 |       - LOG_LEVEL=info
125 |       - TZ=UTC
126 | 
127 | volumes:
128 |   crawler_storage:
129 | ```
130 | Workaround for Apple Silicon Macs (pin the crawler image to `linux/amd64` emulation and use an arm64 FlareSolverr image):
131 | ```yaml
132 | version: '3.8'
133 | 
134 | services:
135 |   crawler:
136 |     image: laituanmanh/websearch-crawler:latest
137 |     container_name: websearch-api
138 |     platform: "linux/amd64"
139 |     restart: unless-stopped
140 |     ports:
141 |       - "3001:3001"
142 |     environment:
143 |       - NODE_ENV=production
144 |       - PORT=3001
145 |       - LOG_LEVEL=info
146 |       - FLARESOLVERR_URL=http://flaresolverr:8191/v1
147 |     depends_on:
148 |       - flaresolverr
149 |     volumes:
150 |       - crawler_storage:/app/storage
151 | 
152 |   flaresolverr:
153 |     image: 21hsmw/flaresolverr:nodriver
154 |     platform: "linux/arm64"
155 |     container_name: flaresolverr
156 |     restart: unless-stopped
157 |     environment:
158 |       - LOG_LEVEL=info
159 |       - TZ=UTC
160 | 
161 | volumes:
162 |   crawler_storage:
163 | ```
164 | 
165 | 2. Start the services:
166 | 
167 | ```bash
168 | docker-compose up -d
169 | ```
170 | 
171 | 3. Verify that the services are running:
172 | 
173 | ```bash
174 | docker-compose ps
175 | ```
176 | 
177 | 4. Test the crawler API health endpoint:
178 | 
179 | ```bash
180 | curl http://localhost:3001/health
181 | ```
182 | 
183 | Expected response:
184 | ```json
185 | {
186 |   "status": "ok",
187 |   "details": {
188 |     "status": "ok",
189 |     "flaresolverr": true,
190 |     "google": true,
191 |     "message": null
192 |   }
193 | }
194 | ```
195 | 
196 | The crawler API will be available at `http://localhost:3001`.
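
The crawler also documents a `/quota` endpoint in `swagger.json`. Assuming your crawler build exposes it, you can check usage limits:

```bash
curl http://localhost:3001/quota
```

Response shape (fields from `swagger.json`; values illustrative):

```json
{
  "dailyQuota": 1000,
  "usedToday": 150,
  "remaining": 850,
  "resetTime": "2023-01-02T00:00:00Z"
}
```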
197 | 
198 | #### Testing the Crawler API
199 | 
200 | You can test the crawler API directly using curl:
201 | 
202 | ```bash
203 | curl -X POST http://localhost:3001/crawl \
204 |   -H "Content-Type: application/json" \
205 |   -d '{
206 |     "query": "typescript best practices",
207 |     "numResults": 2,
208 |     "language": "en",
209 |     "filters": {
210 |       "excludeDomains": ["youtube.com"],
211 |       "resultType": "all" 
212 |     }
213 |   }'
214 | ```
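
The response follows the `CrawlResponse` schema in `swagger.json`. A trimmed, illustrative example (all values are placeholders):

```json
{
  "query": "typescript best practices",
  "results": [
    {
      "url": "https://example.com/article",
      "title": "Understanding TypeScript",
      "excerpt": "A brief overview of TypeScript best practices...",
      "text": "TypeScript is a strongly typed superset of JavaScript...",
      "siteName": "Example.com",
      "byline": "John Doe",
      "error": null
    }
  ],
  "error": null
}
```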
215 | 
216 | #### Custom Configuration
217 | 
218 | You can customize the crawler service by modifying the environment variables in the `docker-compose.yml` file:
219 | 
220 | - `PORT`: The port on which the crawler API listens (default: 3001)
221 | - `LOG_LEVEL`: Logging level (options: debug, info, warn, error)
222 | - `FLARESOLVERR_URL`: URL of the FlareSolverr service (for bypassing Cloudflare protection)
223 | 
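As a sketch, to serve the crawler on a different host port with verbose logging, you could override the service like this (host port `8080` is an arbitrary choice; point the MCP server's `API_URL` at it afterwards):

```yaml
services:
  crawler:
    ports:
      - "8080:3001"   # host port 8080 -> container port 3001
    environment:
      - NODE_ENV=production
      - PORT=3001
      - LOG_LEVEL=debug
      - FLARESOLVERR_URL=http://flaresolverr:8191/v1
```
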
224 | ### Integrating with MCP Clients
225 | 
226 | #### Quick Reference: MCP Configuration
227 | 
228 | Here's a quick reference for MCP configuration across different clients. Set `MAX_SEARCH_RESULT` low to save tokens, or higher for wider information gain:
229 | 
230 | ```json
231 | {
232 |     "mcpServers": {
233 |         "websearch": {
234 |             "command": "npx",
235 |             "args": [
236 |                 "websearch-mcp"
237 |             ],
238 |             "environment": {
239 |                 "API_URL": "http://localhost:3001",
240 |                 "MAX_SEARCH_RESULT": "5"
241 |             }
242 |         }
243 |     }
244 | }
245 | ```
246 | 
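For Claude Desktop, this block goes in `claude_desktop_config.json`; other MCP clients (Cursor, Cline, etc.) accept the same `mcpServers` shape in their own settings files.
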
247 | Workaround for Windows, due to a known npx spawning [issue](https://github.com/smithery-ai/mcp-obsidian/issues/19):
248 | ```json
249 | {
250 |     "mcpServers": {
251 |         "websearch": {
252 |             "command": "cmd",
253 |             "args": [
254 |                 "/c",
255 |                 "npx",
256 |                 "websearch-mcp"
257 |             ],
258 |             "environment": {
259 |                 "API_URL": "http://localhost:3001",
260 |                 "MAX_SEARCH_RESULT": "1"
261 |             }
262 |         }
263 |     }
264 | }
265 | ```
266 | 
267 | ## Usage
268 | 
269 | This package implements an MCP server using stdio transport that exposes a `web_search` tool with the following parameters:
270 | 
271 | ### Parameters
272 | 
273 | - `query` (required): The search query to look up
274 | - `numResults` (optional): Number of results to return (default: 5)
275 | - `language` (optional): Language code for search results (e.g., 'en')
276 | - `region` (optional): Region code for search results (e.g., 'us')
277 | - `excludeDomains` (optional): Domains to exclude from results
278 | - `includeDomains` (optional): Only include these domains in results
279 | - `excludeTerms` (optional): Terms to exclude from results
280 | - `resultType` (optional): Type of results to return ('all', 'news', or 'blogs')
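
For example, assuming an MCP client that speaks raw JSON-RPC over stdio, a `tools/call` request for this tool could look like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "web_search",
    "arguments": {
      "query": "typescript best practices",
      "numResults": 3,
      "language": "en",
      "resultType": "news"
    }
  }
}
```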
281 | 
282 | ### Example Search Response
283 | 
284 | Here's an example of a search response:
285 | 
286 | ```json
287 | {
288 |   "query": "machine learning trends",
289 |   "results": [
290 |     {
291 |       "title": "Top Machine Learning Trends in 2025",
292 |       "snippet": "The key machine learning trends for 2025 include multimodal AI, generative models, and quantum machine learning applications in enterprise...",
293 |       "url": "https://example.com/machine-learning-trends-2025",
294 |       "siteName": "AI Research Today",
295 |       "byline": "Dr. Jane Smith"
296 |     },
297 |     {
298 |       "title": "The Evolution of Machine Learning: 2020-2025",
299 |       "snippet": "Over the past five years, machine learning has evolved from primarily supervised learning approaches to more sophisticated self-supervised and reinforcement learning paradigms...",
300 |       "url": "https://example.com/ml-evolution",
301 |       "siteName": "Tech Insights",
302 |       "byline": "John Doe"
303 |     }
304 |   ]
305 | }
306 | ```
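
Results may also include a `text` field with the full extracted page content (the server passes it through from the crawler); it is omitted above for brevity.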
307 | 
308 | ### Testing Locally
309 | 
310 | To test the WebSearch MCP server locally, you can use the included test client:
311 | 
312 | ```bash
313 | npm run test-client
314 | ```
315 | 
316 | This will start the MCP server and a simple command-line interface that allows you to enter search queries and see the results.
317 | 
318 | You can also configure the API_URL for the test client:
319 | 
320 | ```bash
321 | API_URL=https://crawler.example.com npm run test-client
322 | ```
323 | 
324 | ### As a Library
325 | 
326 | You can use this package programmatically:
327 | 
328 | ```typescript
329 | import { Client } from "@modelcontextprotocol/sdk/client/index.js";
330 | import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
331 | 
332 | // Spawn the MCP server as a subprocess over stdio
333 | const transport = new StdioClientTransport({
334 |   command: "npx",
335 |   args: ["websearch-mcp"]
336 | });
337 | const client = new Client({ name: "example-client", version: "1.0.0" });
338 | await client.connect(transport);
339 | 
340 | // Call the web_search tool
341 | const result = await client.callTool({
342 |   name: "web_search",
343 |   arguments: {
344 |     query: "your search query",
345 |     numResults: 5,
346 |     language: "en"
347 |   }
348 | });
349 | console.log(result.content);
350 | ```
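
When you are finished, `await client.close()` closes the transport and shuts down the spawned server process.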
351 | 
352 | ## Troubleshooting
353 | 
354 | ### Crawler Service Issues
355 | 
356 | - **API Unreachable**: Ensure that the crawler service is running and accessible at the configured API_URL.
357 | - **Search Results Not Available**: Check the logs of the crawler service to see if there are any errors:
358 |   ```bash
359 |   docker-compose logs crawler
360 |   ```
361 | - **FlareSolverr Issues**: Some websites use Cloudflare protection. If you see errors related to this, check if FlareSolverr is working:
362 |   ```bash
363 |   docker-compose logs flaresolverr
364 |   ```
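
FlareSolverr's port is not published to the host in the compose files above. To probe it directly, temporarily add a `ports` mapping (e.g. `- "8191:8191"`) to the `flaresolverr` service, then send a test command (the target URL here is arbitrary):

```bash
curl -X POST http://localhost:8191/v1 \
  -H "Content-Type: application/json" \
  -d '{"cmd": "request.get", "url": "https://example.com", "maxTimeout": 60000}'
```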
365 | 
366 | ### MCP Server Issues
367 | 
368 | - **Import Errors**: Reinstall the package so it pulls in a current version of the MCP SDK:
369 |   ```bash
370 |   npm install -g websearch-mcp@latest
371 |   ```
372 | - **Connection Issues**: Make sure the stdio transport is properly configured for your client.
373 | 
374 | ## Development
375 | 
376 | To work on this project:
377 | 
378 | 1. Clone the repository
379 | 2. Install dependencies: `npm install`
380 | 3. Build the project: `npm run build`
381 | 4. Run in development mode: `npm run dev`
382 | 
383 | The server expects a WebSearch Crawler API as defined in the included swagger.json file. Make sure the API is running at the configured API_URL.
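
For example, to run the development server (via `ts-node`) against a crawler on your machine:

```bash
API_URL=http://localhost:3001 MAX_SEARCH_RESULT=5 npm run dev
```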
384 | 
385 | ### Project Structure
386 | 
387 | - `.gitignore`: Specifies files that Git should ignore (node_modules, dist, logs, etc.)
388 | - `.npmignore`: Specifies files that shouldn't be included when publishing to npm
389 | - `package.json`: Project metadata and dependencies
390 | - `src/`: Source TypeScript files
391 | - `dist/`: Compiled JavaScript files (generated when building)
392 | 
393 | ### Publishing to npm
394 | 
395 | To publish this package to npm:
396 | 
397 | 1. Make sure you have an npm account and are logged in (`npm login`)
398 | 2. Update the version in package.json (`npm version patch|minor|major`)
399 | 3. Run `npm publish`
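
The `prepublishOnly` script in `package.json` runs `npm run build` automatically, so the published `dist/` output is rebuilt at publish time.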
400 | 
401 | The `files` field in `package.json` (with `.npmignore` as a backstop) ensures that only the necessary files are included in the published package:
402 | - The compiled code in `dist/`
403 | - README.md and LICENSE files
404 | - package.json
405 | 
406 | ## Contributing
407 | 
408 | Contributions are welcome! Please feel free to submit a Pull Request.
409 | 
410 | ## License
411 | 
412 | ISC
413 | 
```

--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------

```dockerfile
 1 | # Generated by https://smithery.ai. See: https://smithery.ai/docs/config#dockerfile
 2 | FROM node:lts-alpine
 3 | 
 4 | # Create app directory
 5 | WORKDIR /app
 6 | 
 7 | # Copy package files
 8 | COPY package*.json ./
 9 | 
10 | # Install dependencies without running scripts
11 | RUN npm install --ignore-scripts
12 | 
13 | # Copy the rest of the application
14 | COPY . ./
15 | 
16 | # Build the project
17 | RUN npm run build
18 | 
19 | # Expose port if necessary (not strictly required for stdio MCP server)
20 | 
21 | # Command to run the server
22 | CMD ["node", "dist/index.js"]
23 | 
```

--------------------------------------------------------------------------------
/docker-compose.yml:
--------------------------------------------------------------------------------

```yaml
 1 | version: '3.8'
 2 | 
 3 | services:
 4 |   crawler:
 5 |     image: laituanmanh/websearch-crawler:latest
 6 |     container_name: websearch-api
 7 |     restart: unless-stopped
 8 |     ports:
 9 |       - "3001:3001"
10 |     environment:
11 |       - NODE_ENV=production
12 |       - PORT=3001
13 |       - LOG_LEVEL=info
14 |       - FLARESOLVERR_URL=http://flaresolverr:8191/v1
15 |     depends_on:
16 |       - flaresolverr
17 |     volumes:
18 |       - crawler_storage:/app/storage
19 | 
20 |   flaresolverr:
21 |     image: 21hsmw/flaresolverr:nodriver
22 |     container_name: flaresolverr
23 |     restart: unless-stopped
24 |     environment:
25 |       - LOG_LEVEL=info
26 |       - TZ=UTC
27 | 
28 | volumes:
29 |   crawler_storage:
```

--------------------------------------------------------------------------------
/.github/workflows/npm-publish.yml:
--------------------------------------------------------------------------------

```yaml
 1 | # This workflow runs a build with Node and then publishes the package to npm when a release is published
 2 | # For more information see: https://docs.github.com/en/actions/publishing-packages/publishing-nodejs-packages
 3 | 
 4 | name: Node.js Package
 5 | 
 6 | on:
 7 |   release:
 8 |     types: [published]
 9 | 
10 | jobs:
11 |   build:
12 |     runs-on: ubuntu-latest
13 |     steps:
14 |       - uses: actions/checkout@v4
15 |       - uses: actions/setup-node@v4
16 |         with:
17 |           node-version: 20
18 |       - run: npm ci
19 |       # - run: npm test
20 | 
21 |   publish-npm:
22 |     needs: build
23 |     runs-on: ubuntu-latest
24 |     steps:
25 |       - uses: actions/checkout@v4
26 |       - uses: actions/setup-node@v4
27 |         with:
28 |           node-version: 20
29 |           registry-url: https://registry.npmjs.org/
30 |       - run: npm ci
31 |       - run: npm publish
32 |         env:
33 |           NODE_AUTH_TOKEN: ${{secrets.NPM_TOKEN}}
34 | 
```

--------------------------------------------------------------------------------
/smithery.yaml:
--------------------------------------------------------------------------------

```yaml
 1 | # Smithery configuration file: https://smithery.ai/docs/config#smitheryyaml
 2 | 
 3 | startCommand:
 4 |   type: stdio
 5 |   configSchema:
 6 |     # JSON Schema defining the configuration options for the MCP.
 7 |     type: object
 8 |     required: []
 9 |     properties:
10 |       apiUrl:
11 |         type: string
12 |         default: http://localhost:3001
13 |         description: The URL of the WebSearch Crawler API.
14 |       maxSearchResult:
15 |         type: number
16 |         default: 5
17 |         description: Maximum number of search results to return.
18 |   commandFunction:
19 |     # A JS function that produces the CLI command based on the given config to start the MCP on stdio.
20 |     |-
21 |     (config) => ({
22 |       command: 'node',
23 |       args: ['dist/index.js'],
24 |       env: {
25 |         API_URL: config.apiUrl,
26 |         MAX_SEARCH_RESULT: String(config.maxSearchResult)
27 |       }
28 |     })
29 |   exampleConfig:
30 |     apiUrl: http://localhost:3001
31 |     maxSearchResult: 5
32 | 
```

--------------------------------------------------------------------------------
/package.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "name": "websearch-mcp",
 3 |   "version": "1.0.2",
 4 |   "description": "A Model Context Protocol (MCP) server implementation that provides real-time web search capabilities through a simple API",
 5 |   "main": "dist/index.js",
 6 |   "bin": {
 7 |     "websearch-mcp": "./dist/index.js"
 8 |   },
 9 |   "scripts": {
10 |     "build": "tsc",
11 |     "start": "node dist/index.js",
12 |     "dev": "ts-node src/index.ts",
13 |     "test-client": "ts-node src/test-client.ts",
14 |     "test": "echo \"Error: no test specified\" && exit 1",
15 |     "prepare": "npm run build",
16 |     "prepublishOnly": "npm run build"
17 |   },
18 |   "author": {
19 |     "name": "Manh La",
20 |     "url": "https://github.com/mnhlt"
21 |   },
22 |   "license": "ISC",
23 |   "dependencies": {
24 |     "@modelcontextprotocol/sdk": "^1.7.0",
25 |     "axios": "^1.6.7",
26 |     "zod": "^3.24.2"
27 |   },
28 |   "devDependencies": {
29 |     "@types/node": "^20.10.5",
30 |     "ts-node": "^10.9.2",
31 |     "typescript": "^5.3.3"
32 |   },
33 |   "keywords": [
34 |     "mcp",
35 |     "websearch",
36 |     "claude",
37 |     "ai",
38 |     "llm",
39 |     "model-context-protocol",
40 |     "search",
41 |     "web-search",
42 |     "crawler",
43 |     "anthropic",
44 |     "cursor"
45 |   ],
46 |   "files": [
47 |     "dist/**/*",
48 |     "README.md",
49 |     "LICENSE"
50 |   ],
51 |   "repository": {
52 |     "type": "git",
53 |     "url": "https://github.com/mnhlt/WebSearch-MCP.git"
54 |   },
55 |   "homepage": "https://github.com/mnhlt/WebSearch-MCP",
56 |   "bugs": {
57 |     "url": "https://github.com/mnhlt/WebSearch-MCP/issues"
58 |   },
59 |   "engines": {
60 |     "node": ">=16.0.0"
61 |   }
62 | }
63 | 
```

--------------------------------------------------------------------------------
/src/test-client.ts:
--------------------------------------------------------------------------------

```typescript
  1 | #!/usr/bin/env node
  2 | 
  3 | import { spawn } from 'child_process';
  4 | import { createInterface } from 'readline';
  5 | import * as path from 'path';
  6 | 
  7 | interface MCPMessage {
  8 |   jsonrpc: string;
  9 |   id: number;
 10 |   method?: string;
 11 |   params?: any;
 12 |   result?: any;
 13 |   error?: {
 14 |     code: number;
 15 |     message: string;
 16 |   };
 17 | }
 18 | 
 19 | async function main() {
 20 |   // Start the MCP server as a child process
 21 |   const serverProcess = spawn('ts-node', [path.join(__dirname, 'index.ts')], {
 22 |     stdio: ['pipe', 'pipe', process.stderr],
 23 |     env: {
 24 |       ...process.env,
 25 |       API_URL: process.env.API_URL || 'http://localhost:3001',
 26 |       MAX_SEARCH_RESULT: process.env.MAX_SEARCH_RESULT || '5'
 27 |     }
 28 |   });
 29 | 
 30 |   // Set up readline for the interactive prompt
 31 |   const readline = createInterface({ input: process.stdin, output: process.stdout });
 32 |   let messageId = 1;
 33 | 
 34 |   // Complete the MCP initialize handshake before issuing tool calls
 35 |   serverProcess.stdin.write(JSON.stringify({ jsonrpc: '2.0', id: messageId++, method: 'initialize', params: { protocolVersion: '2024-11-05', capabilities: {}, clientInfo: { name: 'test-client', version: '1.0.0' } } }) + '\n');
 36 |   serverProcess.stdin.write(JSON.stringify({ jsonrpc: '2.0', method: 'notifications/initialized' }) + '\n');
 37 | 
 38 |   // Handle server output
 39 |   serverProcess.stdout.on('data', (data) => {
 40 |     try {
 41 |       const message = JSON.parse(data.toString()) as MCPMessage;
 42 |       console.log('\nReceived from server:');
 43 |       console.log(JSON.stringify(message, null, 2));
 44 |       
 45 |       if (message.result) {
 46 |         try {
 47 |           const resultContent = JSON.parse(message.result.content[0].text);
 48 |           console.log('\nSearch Results:');
 49 |           console.log(JSON.stringify(resultContent, null, 2));
 50 |         } catch (e) {
 51 |           console.log('\nResult Content:');
 52 |           console.log(message.result.content[0].text);
 53 |         }
 54 |       }
 55 |       
 56 |       readline.prompt();
 57 |     } catch (error) {
 58 |       console.error('Error parsing server response:', error);
 59 |       console.log('Raw response:', data.toString());
 60 |       readline.prompt();
 61 |     }
 62 |   });
 63 | 
 64 |   // Initial prompt
 65 |   console.log('WebSearch MCP Test Client');
 66 |   console.log('------------------------');
 67 |   console.log('Type a search query and press Enter to search.');
 68 |   console.log('Type \'exit\' to quit.');
 69 |   console.log('');
 70 | 
 71 |   readline.setPrompt('Search> ');
 72 |   readline.prompt();
 73 | 
 74 |   // Handle user input
 75 |   readline.on('line', (line) => {
 76 |     const input = line.trim();
 77 |     
 78 |     if (input.toLowerCase() === 'exit') {
 79 |       console.log('Exiting...');
 80 |       serverProcess.kill();
 81 |       process.exit(0);
 82 |     }
 83 | 
 84 |     // Create a web_search tool call (MCP method "tools/call")
 85 |     const request: MCPMessage = {
 86 |       jsonrpc: '2.0',
 87 |       id: messageId++,
 88 |       method: 'tools/call',
 89 |       params: {
 90 |         name: 'web_search',
 91 |         arguments: {
 92 |           query: input,
 93 |           numResults: parseInt(process.env.MAX_SEARCH_RESULT || '5', 10)
 94 |         }
 95 |       }
 96 |     };
 97 | 
 98 |     console.log('\nSending request:');
 99 |     console.log(JSON.stringify(request, null, 2));
100 |     
101 |     // Send the request to the server
102 |     serverProcess.stdin.write(JSON.stringify(request) + '\n');
103 |   });
104 | 
105 |   // Handle exit
106 |   readline.on('close', () => {
107 |     console.log('Exiting...');
108 |     serverProcess.kill();
109 |     process.exit(0);
110 |   });
111 | 
112 |   // Handle server exit
113 |   serverProcess.on('exit', (code) => {
114 |     console.log(`Server exited with code ${code}`);
115 |     process.exit(code || 0);
116 |   });
117 | }
118 | 
119 | main().catch(error => {
120 |   console.error('Error running test client:', error);
121 |   process.exit(1);
122 | }); 
```

--------------------------------------------------------------------------------
/src/index.ts:
--------------------------------------------------------------------------------

```typescript
  1 | #!/usr/bin/env node
  2 | 
  3 | import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
  4 | import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
  5 | import { z } from "zod";
  6 | import axios from "axios";
  7 | 
  8 | // Configuration
  9 | const API_URL = process.env.API_URL || "http://localhost:3001";
 10 | const MAX_SEARCH_RESULT = parseInt(process.env.MAX_SEARCH_RESULT || "5", 10);
 11 | 
 12 | // Interface definitions based on swagger.json
 13 | interface CrawlRequest {
 14 |   query: string;
 15 |   numResults?: number;
 16 |   language?: string;
 17 |   region?: string;
 18 |   filters?: {
 19 |     excludeDomains?: string[];
 20 |     includeDomains?: string[];
 21 |     excludeTerms?: string[];
 22 |     resultType?: "all" | "news" | "blogs";
 23 |   };
 24 | }
 25 | 
 26 | interface CrawlResult {
 27 |   url: string;
 28 |   title: string;
 29 |   excerpt: string;
 30 |   text?: string;
 31 |   html?: string;
 32 |   siteName?: string;
 33 |   byline?: string;
 34 |   error?: string | null;
 35 | }
 36 | 
 37 | interface CrawlResponse {
 38 |   query: string;
 39 |   results: CrawlResult[];
 40 |   error: string | null;
 41 | }
 42 | 
 43 | // Main function to set up and run the MCP server
 44 | async function main() {
 45 |   // Create an MCP server
 46 |   const server = new McpServer({
 47 |     name: "WebSearch-MCP",
 48 |     version: "1.0.2",
 49 |   });
 50 | 
 51 |   // Add a web_search tool
 52 |   server.tool(
 53 |     "web_search",
 54 |     "Search the web for information.\n"
 55 |     + "Use this tool whenever you need current information from the web:\n"
 56 |     + "news, blogs, or general results about companies, products, people,\n"
 57 |     + "events, locations, and other topics.\n"
 58 |     + "If a search requesting only one result fails, retry with a larger numResults.",
 65 |     {
 66 |       query: z.string().describe("The search query to look up"),
 67 |       numResults: z
 68 |         .number()
 69 |         .optional()
 70 |         .describe(
 71 |           `Number of results to return (default: ${MAX_SEARCH_RESULT})`
 72 |         ),
 73 |       language: z
 74 |         .string()
 75 |         .optional()
 76 |         .describe("Language code for search results (e.g., 'en')"),
 77 |       region: z
 78 |         .string()
 79 |         .optional()
 80 |         .describe("Region code for search results (e.g., 'us')"),
 81 |       excludeDomains: z
 82 |         .array(z.string())
 83 |         .optional()
 84 |         .describe("Domains to exclude from results"),
 85 |       includeDomains: z
 86 |         .array(z.string())
 87 |         .optional()
 88 |         .describe("Only include these domains in results"),
 89 |       excludeTerms: z
 90 |         .array(z.string())
 91 |         .optional()
 92 |         .describe("Terms to exclude from results"),
 93 |       resultType: z
 94 |         .enum(["all", "news", "blogs"])
 95 |         .optional()
 96 |         .describe("Type of results to return"),
 97 |     },
 98 |     async (params) => {
 99 |       try {
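        // Note: diagnostics go to stderr because stdout carries the MCP protocol stream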
100 |         console.error(`Performing web search for: ${params.query}`);
101 | 
102 |         // Prepare request payload for crawler API
103 |         const requestPayload: CrawlRequest = {
104 |           query: params.query,
105 |           numResults: params.numResults ?? MAX_SEARCH_RESULT,
106 |           language: params.language,
107 |           region: params.region,
108 |           filters: {
109 |             excludeDomains: params.excludeDomains,
110 |             includeDomains: params.includeDomains,
111 |             excludeTerms: params.excludeTerms,
112 |             resultType: params.resultType as "all" | "news" | "blogs",
113 |           },
114 |         };
115 | 
116 |         // Call the crawler API
117 |         console.error(`Sending request to ${API_URL}/crawl`);
118 |         const response = await axios.post<CrawlResponse>(
119 |           `${API_URL}/crawl`,
120 |           requestPayload
121 |         );
122 | 
123 |         // Format the response for the MCP client
124 |         const results = response.data.results.map((result) => ({
125 |           title: result.title,
126 |           snippet: result.excerpt,
127 |           text: result.text,
128 |           url: result.url,
129 |           siteName: result.siteName || "",
130 |           byline: result.byline || "",
131 |         }));
132 | 
133 |         return {
134 |           content: [
135 |             {
136 |               type: "text",
137 |               text: JSON.stringify(
138 |                 {
139 |                   query: response.data.query,
140 |                   results: results,
141 |                 },
142 |                 null,
143 |                 2
144 |               ),
145 |             },
146 |           ],
147 |         };
148 |       } catch (error) {
149 |         console.error("Error performing web search:", error);
150 | 
151 |         if (axios.isAxiosError(error)) {
152 |           const errorMessage = error.response?.data?.error || error.message;
153 |           return {
154 |             content: [{ type: "text", text: `Error: ${errorMessage}` }],
155 |             isError: true,
156 |           };
157 |         }
158 | 
159 |         return {
160 |           content: [
161 |             {
162 |               type: "text",
163 |               text: `Error: ${
164 |                 error instanceof Error ? error.message : "Unknown error"
165 |               }`,
166 |             },
167 |           ],
168 |           isError: true,
169 |         };
170 |       }
171 |     }
172 |   );
173 | 
174 |   // Start receiving messages on stdin and sending messages on stdout
175 |   console.error("Starting WebSearch MCP server...");
176 |   console.error(`Using API_URL: ${API_URL}`);
177 |   const transport = new StdioServerTransport();
178 |   await server.connect(transport);
179 |   console.error("WebSearch MCP server started");
180 | }
181 | 
182 | // Start the server
183 | main().catch((error) => {
184 |   console.error("Failed to start WebSearch MCP server:", error);
185 |   process.exit(1);
186 | });
187 | 
```

--------------------------------------------------------------------------------
/swagger.json:
--------------------------------------------------------------------------------

```json
  1 | {
  2 |   "openapi": "3.0.0",
  3 |   "info": {
  4 |     "title": "WebSearch API - Crawler Service",
  5 |     "version": "1.0.0",
  6 |     "description": "API documentation for the WebSearch API crawler service",
  7 |     "license": {
  8 |       "name": "ISC",
  9 |       "url": "https://opensource.org/licenses/ISC"
 10 |     },
 11 |     "contact": {
 12 |       "name": "WebSearch API Support",
 13 |       "url": "https://github.com/yourusername/WebSearchAPI",
 14 |       "email": "[email protected]"
 15 |     }
 16 |   },
 17 |   "servers": [
 18 |     {
 19 |       "url": "/",
 20 |       "description": "Development server"
 21 |     },
 22 |     {
 23 |       "url": "https://crawler.example.com",
 24 |       "description": "Production server"
 25 |     }
 26 |   ],
 27 |   "components": {
 28 |     "schemas": {
 29 |       "Error": {
 30 |         "type": "object",
 31 |         "properties": {
 32 |           "error": {
 33 |             "type": "string",
 34 |             "example": "Error message"
 35 |           }
 36 |         }
 37 |       },
 38 |       "CrawlRequest": {
 39 |         "type": "object",
 40 |         "required": [
 41 |           "query"
 42 |         ],
 43 |         "properties": {
 44 |           "query": {
 45 |             "type": "string",
 46 |             "example": "artificial intelligence"
 47 |           },
 48 |           "numResults": {
 49 |             "type": "integer",
 50 |             "example": 5,
 51 |             "description": "Maximum number of results to return"
 52 |           },
 53 |           "language": {
 54 |             "type": "string",
 55 |             "example": "en",
 56 |             "description": "Language code for search results"
 57 |           },
 58 |           "region": {
 59 |             "type": "string",
 60 |             "example": "us",
 61 |             "description": "Region code for search results"
 62 |           },
 63 |           "filters": {
 64 |             "type": "object",
 65 |             "properties": {
 66 |               "excludeDomains": {
 67 |                 "type": "array",
 68 |                 "items": {
 69 |                   "type": "string"
 70 |                 },
 71 |                 "example": [
 72 |                   "youtube.com",
 73 |                   "facebook.com"
 74 |                 ],
 75 |                 "description": "Domains to exclude from results"
 76 |               },
 77 |               "includeDomains": {
 78 |                 "type": "array",
 79 |                 "items": {
 80 |                   "type": "string"
 81 |                 },
 82 |                 "example": [
 83 |                   "example.com",
 84 |                   "blog.example.com"
 85 |                 ],
 86 |                 "description": "Only include these domains in results"
 87 |               },
 88 |               "excludeTerms": {
 89 |                 "type": "array",
 90 |                 "items": {
 91 |                   "type": "string"
 92 |                 },
 93 |                 "example": [
 94 |                   "video",
 95 |                   "course"
 96 |                 ],
 97 |                 "description": "Terms to exclude from results"
 98 |               },
 99 |               "resultType": {
100 |                 "type": "string",
101 |                 "enum": [
102 |                   "all",
103 |                   "news",
104 |                   "blogs"
105 |                 ],
106 |                 "example": "all",
107 |                 "description": "Type of results to return"
108 |               }
109 |             }
110 |           }
111 |         }
112 |       },
113 |       "CrawlResponse": {
114 |         "type": "object",
115 |         "properties": {
116 |           "query": {
117 |             "type": "string",
118 |             "example": "artificial intelligence"
119 |           },
120 |           "results": {
121 |             "type": "array",
122 |             "items": {
123 |               "type": "object",
124 |               "properties": {
125 |                 "url": {
126 |                   "type": "string",
127 |                   "example": "https://example.com/article"
128 |                 },
129 |                 "html": {
130 |                   "type": "string",
131 |                   "example": "<div>Article content...</div>"
132 |                 },
133 |                 "text": {
134 |                   "type": "string",
135 |                   "example": "Artificial intelligence (AI) is intelligence—perceiving..."
136 |                 },
137 |                 "title": {
138 |                   "type": "string",
139 |                   "example": "Understanding AI"
140 |                 },
141 |                 "excerpt": {
142 |                   "type": "string",
143 |                   "example": "A brief overview of artificial intelligence..."
144 |                 },
145 |                 "siteName": {
146 |                   "type": "string",
147 |                   "example": "Example.com"
148 |                 },
149 |                 "byline": {
150 |                   "type": "string",
151 |                   "example": "John Doe"
152 |                 },
153 |                 "error": {
154 |                   "type": "string",
155 |                   "example": null
156 |                 }
157 |               }
158 |             }
159 |           },
160 |           "error": {
161 |             "type": "string",
162 |             "example": null
163 |           }
164 |         }
165 |       },
166 |       "HealthCheckResponse": {
167 |         "type": "object",
168 |         "properties": {
169 |           "status": {
170 |             "type": "string",
171 |             "enum": [
172 |               "ok",
173 |               "degraded",
174 |               "down"
175 |             ],
176 |             "example": "ok"
177 |           },
178 |           "details": {
179 |             "type": "object",
180 |             "properties": {
181 |               "status": {
182 |                 "type": "string",
183 |                 "enum": [
184 |                   "ok",
185 |                   "degraded",
186 |                   "down"
187 |                 ],
188 |                 "example": "ok"
189 |               },
190 |               "flaresolverr": {
191 |                 "type": "boolean",
192 |                 "example": true
193 |               },
194 |               "google": {
195 |                 "type": "boolean",
196 |                 "example": true
197 |               },
198 |               "message": {
199 |                 "type": "string",
200 |                 "example": null
201 |               }
202 |             }
203 |           }
204 |         }
205 |       },
206 |       "QuotaResponse": {
207 |         "type": "object",
208 |         "properties": {
209 |           "dailyQuota": {
210 |             "type": "integer",
211 |             "example": 1000
212 |           },
213 |           "usedToday": {
214 |             "type": "integer",
215 |             "example": 150
216 |           },
217 |           "remaining": {
218 |             "type": "integer",
219 |             "example": 850
220 |           },
221 |           "resetTime": {
222 |             "type": "string",
223 |             "format": "date-time",
224 |             "example": "2023-01-02T00:00:00Z"
225 |           }
226 |         }
227 |       }
228 |     }
229 |   },
230 |   "tags": [
231 |     {
232 |       "name": "Crawling",
233 |       "description": "API endpoints for web crawling operations"
234 |     },
235 |     {
236 |       "name": "Health",
237 |       "description": "Health check and monitoring endpoints"
238 |     }
239 |   ],
240 |   "paths": {
241 |     "/crawl": {
242 |       "post": {
243 |         "summary": "Crawl web pages based on a search query",
244 |         "description": "Searches the web for results matching the given query and returns the content of those pages",
245 |         "tags": [
246 |           "Crawling"
247 |         ],
248 |         "requestBody": {
249 |           "required": true,
250 |           "content": {
251 |             "application/json": {
252 |               "schema": {
253 |                 "allOf": [
254 |                   {
255 |                     "$ref": "#/components/schemas/CrawlRequest"
256 |                   },
257 |                   {
258 |                     "type": "object",
259 |                     "properties": {
260 |                       "debug": {
261 |                         "type": "boolean",
262 |                         "description": "When true, include HTML content in the response",
263 |                         "default": true
264 |                       }
265 |                     }
266 |                   }
267 |                 ]
268 |               }
280 |             }
281 |           }
282 |         },
283 |         "responses": {
284 |           "200": {
285 |             "description": "Successfully crawled web pages",
286 |             "content": {
287 |               "application/json": {
288 |                 "schema": {
289 |                   "$ref": "#/components/schemas/CrawlResponse"
290 |                 }
291 |               }
292 |             }
293 |           },
294 |           "400": {
295 |             "description": "Missing query parameter",
296 |             "content": {
297 |               "application/json": {
298 |                 "schema": {
299 |                   "$ref": "#/components/schemas/Error"
300 |                 }
301 |               }
302 |             }
303 |           },
304 |           "500": {
305 |             "description": "Server error",
306 |             "content": {
307 |               "application/json": {
308 |                 "schema": {
309 |                   "$ref": "#/components/schemas/Error"
310 |                 }
311 |               }
312 |             }
313 |           }
314 |         }
315 |       }
316 |     },
317 |     "/health": {
318 |       "get": {
319 |         "summary": "Check crawler service health",
320 |         "description": "Returns the health status of the crawler service and its dependencies",
321 |         "tags": [
322 |           "Health"
323 |         ],
324 |         "responses": {
325 |           "200": {
326 |             "description": "Health status (OK or degraded)",
327 |             "content": {
328 |               "application/json": {
329 |                 "schema": {
330 |                   "$ref": "#/components/schemas/HealthCheckResponse"
331 |                 }
332 |               }
333 |             }
334 |           },
335 |           "503": {
336 |             "description": "Service unavailable",
337 |             "content": {
338 |               "application/json": {
339 |                 "schema": {
340 |                   "$ref": "#/components/schemas/HealthCheckResponse"
341 |                 }
342 |               }
343 |             }
344 |           }
345 |         }
346 |       }
347 |     },
348 |     "/quota": {
349 |       "get": {
350 |         "summary": "Get crawling quota status",
351 |         "description": "Returns information about the current usage and limits of the crawling quota",
352 |         "tags": [
353 |           "Crawling"
354 |         ],
355 |         "responses": {
356 |           "200": {
357 |             "description": "Quota information",
358 |             "content": {
359 |               "application/json": {
360 |                 "schema": {
361 |                   "$ref": "#/components/schemas/QuotaResponse"
362 |                 }
363 |               }
364 |             }
365 |           },
366 |           "500": {
367 |             "description": "Server error",
368 |             "content": {
369 |               "application/json": {
370 |                 "schema": {
371 |                   "$ref": "#/components/schemas/Error"
372 |                 }
373 |               }
374 |             }
375 |           }
376 |         }
377 |       }
378 |     }
379 |   }
380 | }
```

--------------------------------------------------------------------------------
/tsconfig.json:
--------------------------------------------------------------------------------

```json
  1 | {
  2 |   "compilerOptions": {
  3 |     /* Visit https://aka.ms/tsconfig to read more about this file */
  4 | 
  5 |     /* Projects */
  6 |     // "incremental": true,                              /* Save .tsbuildinfo files to allow for incremental compilation of projects. */
  7 |     // "composite": true,                                /* Enable constraints that allow a TypeScript project to be used with project references. */
  8 |     // "tsBuildInfoFile": "./.tsbuildinfo",              /* Specify the path to .tsbuildinfo incremental compilation file. */
  9 |     // "disableSourceOfProjectReferenceRedirect": true,  /* Disable preferring source files instead of declaration files when referencing composite projects. */
 10 |     // "disableSolutionSearching": true,                 /* Opt a project out of multi-project reference checking when editing. */
 11 |     // "disableReferencedProjectLoad": true,             /* Reduce the number of projects loaded automatically by TypeScript. */
 12 | 
 13 |     /* Language and Environment */
 14 |     "target": "es2016",                                  /* Set the JavaScript language version for emitted JavaScript and include compatible library declarations. */
 15 |     // "lib": [],                                        /* Specify a set of bundled library declaration files that describe the target runtime environment. */
 16 |     // "jsx": "preserve",                                /* Specify what JSX code is generated. */
 17 |     // "libReplacement": true,                           /* Enable lib replacement. */
 18 |     // "experimentalDecorators": true,                   /* Enable experimental support for legacy experimental decorators. */
 19 |     // "emitDecoratorMetadata": true,                    /* Emit design-type metadata for decorated declarations in source files. */
 20 |     // "jsxFactory": "",                                 /* Specify the JSX factory function used when targeting React JSX emit, e.g. 'React.createElement' or 'h'. */
 21 |     // "jsxFragmentFactory": "",                         /* Specify the JSX Fragment reference used for fragments when targeting React JSX emit e.g. 'React.Fragment' or 'Fragment'. */
 22 |     // "jsxImportSource": "",                            /* Specify module specifier used to import the JSX factory functions when using 'jsx: react-jsx*'. */
 23 |     // "reactNamespace": "",                             /* Specify the object invoked for 'createElement'. This only applies when targeting 'react' JSX emit. */
 24 |     // "noLib": true,                                    /* Disable including any library files, including the default lib.d.ts. */
 25 |     // "useDefineForClassFields": true,                  /* Emit ECMAScript-standard-compliant class fields. */
 26 |     // "moduleDetection": "auto",                        /* Control what method is used to detect module-format JS files. */
 27 | 
 28 |     /* Modules */
 29 |     "module": "commonjs",                                /* Specify what module code is generated. */
 30 |     "outDir": "./dist",
 31 |     "rootDir": "./src",
 32 |     // "moduleResolution": "node10",                     /* Specify how TypeScript looks up a file from a given module specifier. */
 33 |     // "baseUrl": "./",                                  /* Specify the base directory to resolve non-relative module names. */
 34 |     // "paths": {},                                      /* Specify a set of entries that re-map imports to additional lookup locations. */
 35 |     // "rootDirs": [],                                   /* Allow multiple folders to be treated as one when resolving modules. */
 36 |     // "typeRoots": [],                                  /* Specify multiple folders that act like './node_modules/@types'. */
 37 |     // "types": [],                                      /* Specify type package names to be included without being referenced in a source file. */
 38 |     // "allowUmdGlobalAccess": true,                     /* Allow accessing UMD globals from modules. */
 39 |     // "moduleSuffixes": [],                             /* List of file name suffixes to search when resolving a module. */
 40 |     // "allowImportingTsExtensions": true,               /* Allow imports to include TypeScript file extensions. Requires '--moduleResolution bundler' and either '--noEmit' or '--emitDeclarationOnly' to be set. */
 41 |     // "rewriteRelativeImportExtensions": true,          /* Rewrite '.ts', '.tsx', '.mts', and '.cts' file extensions in relative import paths to their JavaScript equivalent in output files. */
 42 |     // "resolvePackageJsonExports": true,                /* Use the package.json 'exports' field when resolving package imports. */
 43 |     // "resolvePackageJsonImports": true,                /* Use the package.json 'imports' field when resolving imports. */
 44 |     // "customConditions": [],                           /* Conditions to set in addition to the resolver-specific defaults when resolving imports. */
 45 |     // "noUncheckedSideEffectImports": true,             /* Check side effect imports. */
 46 |     // "resolveJsonModule": true,                        /* Enable importing .json files. */
 47 |     // "allowArbitraryExtensions": true,                 /* Enable importing files with any extension, provided a declaration file is present. */
 48 |     // "noResolve": true,                                /* Disallow 'import's, 'require's or '<reference>'s from expanding the number of files TypeScript should add to a project. */
 49 | 
 50 |     /* JavaScript Support */
 51 |     // "allowJs": true,                                  /* Allow JavaScript files to be a part of your program. Use the 'checkJS' option to get errors from these files. */
 52 |     // "checkJs": true,                                  /* Enable error reporting in type-checked JavaScript files. */
 53 |     // "maxNodeModuleJsDepth": 1,                        /* Specify the maximum folder depth used for checking JavaScript files from 'node_modules'. Only applicable with 'allowJs'. */
 54 | 
 55 |     /* Emit */
 56 |     // "declaration": true,                              /* Generate .d.ts files from TypeScript and JavaScript files in your project. */
 57 |     // "declarationMap": true,                           /* Create sourcemaps for d.ts files. */
 58 |     // "emitDeclarationOnly": true,                      /* Only output d.ts files and not JavaScript files. */
 59 |     // "sourceMap": true,                                /* Create source map files for emitted JavaScript files. */
 60 |     // "inlineSourceMap": true,                          /* Include sourcemap files inside the emitted JavaScript. */
 61 |     // "noEmit": true,                                   /* Disable emitting files from a compilation. */
 62 |     // "outFile": "./",                                  /* Specify a file that bundles all outputs into one JavaScript file. If 'declaration' is true, also designates a file that bundles all .d.ts output. */
 63 |     // "removeComments": true,                           /* Disable emitting comments. */
 64 |     // "importHelpers": true,                            /* Allow importing helper functions from tslib once per project, instead of including them per-file. */
 65 |     // "downlevelIteration": true,                       /* Emit more compliant, but verbose and less performant JavaScript for iteration. */
 66 |     // "sourceRoot": "",                                 /* Specify the root path for debuggers to find the reference source code. */
 67 |     // "mapRoot": "",                                    /* Specify the location where debugger should locate map files instead of generated locations. */
 68 |     // "inlineSources": true,                            /* Include source code in the sourcemaps inside the emitted JavaScript. */
 69 |     // "emitBOM": true,                                  /* Emit a UTF-8 Byte Order Mark (BOM) in the beginning of output files. */
 70 |     // "newLine": "crlf",                                /* Set the newline character for emitting files. */
 71 |     // "stripInternal": true,                            /* Disable emitting declarations that have '@internal' in their JSDoc comments. */
 72 |     // "noEmitHelpers": true,                            /* Disable generating custom helper functions like '__extends' in compiled output. */
 73 |     // "noEmitOnError": true,                            /* Disable emitting files if any type checking errors are reported. */
 74 |     // "preserveConstEnums": true,                       /* Disable erasing 'const enum' declarations in generated code. */
 75 |     // "declarationDir": "./",                           /* Specify the output directory for generated declaration files. */
 76 | 
 77 |     /* Interop Constraints */
 78 |     // "isolatedModules": true,                          /* Ensure that each file can be safely transpiled without relying on other imports. */
 79 |     // "verbatimModuleSyntax": true,                     /* Do not transform or elide any imports or exports not marked as type-only, ensuring they are written in the output file's format based on the 'module' setting. */
 80 |     // "isolatedDeclarations": true,                     /* Require sufficient annotation on exports so other tools can trivially generate declaration files. */
 81 |     // "erasableSyntaxOnly": true,                       /* Do not allow runtime constructs that are not part of ECMAScript. */
 82 |     // "allowSyntheticDefaultImports": true,             /* Allow 'import x from y' when a module doesn't have a default export. */
 83 |     "esModuleInterop": true,                             /* Emit additional JavaScript to ease support for importing CommonJS modules. This enables 'allowSyntheticDefaultImports' for type compatibility. */
 84 |     // "preserveSymlinks": true,                         /* Disable resolving symlinks to their realpath. This correlates to the same flag in node. */
 85 |     "forceConsistentCasingInFileNames": true,            /* Ensure that casing is correct in imports. */
 86 | 
 87 |     /* Type Checking */
 88 |     "strict": true,                                      /* Enable all strict type-checking options. */
 89 |     // "noImplicitAny": true,                            /* Enable error reporting for expressions and declarations with an implied 'any' type. */
 90 |     // "strictNullChecks": true,                         /* When type checking, take into account 'null' and 'undefined'. */
 91 |     // "strictFunctionTypes": true,                      /* When assigning functions, check to ensure parameters and the return values are subtype-compatible. */
 92 |     // "strictBindCallApply": true,                      /* Check that the arguments for 'bind', 'call', and 'apply' methods match the original function. */
 93 |     // "strictPropertyInitialization": true,             /* Check for class properties that are declared but not set in the constructor. */
 94 |     // "strictBuiltinIteratorReturn": true,              /* Built-in iterators are instantiated with a 'TReturn' type of 'undefined' instead of 'any'. */
 95 |     // "noImplicitThis": true,                           /* Enable error reporting when 'this' is given the type 'any'. */
 96 |     // "useUnknownInCatchVariables": true,               /* Default catch clause variables as 'unknown' instead of 'any'. */
 97 |     // "alwaysStrict": true,                             /* Ensure 'use strict' is always emitted. */
 98 |     // "noUnusedLocals": true,                           /* Enable error reporting when local variables aren't read. */
 99 |     // "noUnusedParameters": true,                       /* Raise an error when a function parameter isn't read. */
100 |     // "exactOptionalPropertyTypes": true,               /* Interpret optional property types as written, rather than adding 'undefined'. */
101 |     // "noImplicitReturns": true,                        /* Enable error reporting for codepaths that do not explicitly return in a function. */
102 |     // "noFallthroughCasesInSwitch": true,               /* Enable error reporting for fallthrough cases in switch statements. */
103 |     // "noUncheckedIndexedAccess": true,                 /* Add 'undefined' to a type when accessed using an index. */
104 |     // "noImplicitOverride": true,                       /* Ensure overriding members in derived classes are marked with an override modifier. */
105 |     // "noPropertyAccessFromIndexSignature": true,       /* Enforces using indexed accessors for keys declared using an indexed type. */
106 |     // "allowUnusedLabels": true,                        /* Disable error reporting for unused labels. */
107 |     // "allowUnreachableCode": true,                     /* Disable error reporting for unreachable code. */
108 | 
109 |     /* Completeness */
110 |     // "skipDefaultLibCheck": true,                      /* Skip type checking .d.ts files that are included with TypeScript. */
111 |     "skipLibCheck": true                                 /* Skip type checking all .d.ts files. */
112 |   },
113 |   "include": ["src/**/*"],
114 |   "exclude": ["node_modules", "dist"]
115 | }
116 | 
```