# Directory Structure

```
├── .github
│   └── workflows
│       └── npm-publish.yml
├── .gitignore
├── Dockerfile
├── LICENSE
├── package-lock.json
├── package.json
├── README.md
├── smithery.yaml
├── src
│   └── index.ts
└── tsconfig.json
```

# Files

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
node_modules/
build/
*.log
.env*
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
# mcp-google-server

An MCP server for Google Custom Search and webpage reading.
[![smithery badge](https://smithery.ai/badge/@adenot/mcp-google-search)](https://smithery.ai/server/@adenot/mcp-google-search)

A Model Context Protocol server that provides web search capabilities using Google Custom Search API and webpage content extraction functionality.

## Setup

### Getting Google API Key and Search Engine ID

1. Create a Google Cloud Project:
   - Go to [Google Cloud Console](https://console.cloud.google.com/)
   - Create a new project or select an existing one
   - Enable billing for your project

2. Enable Custom Search API:
   - Go to [API Library](https://console.cloud.google.com/apis/library)
   - Search for "Custom Search API"
   - Click "Enable"

3. Get API Key:
   - Go to [Credentials](https://console.cloud.google.com/apis/credentials)
   - Click "Create Credentials" > "API Key"
   - Copy your API key
   - (Optional) Restrict the API key to only Custom Search API

4. Create Custom Search Engine:
   - Go to [Programmable Search Engine](https://programmablesearchengine.google.com/create/new)
   - Enter the sites you want to search (use www.google.com for general web search)
   - Click "Create"
   - On the next page, click "Customize"
   - In the settings, enable "Search the entire web"
   - Copy your Search Engine ID (cx)
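
With the key and engine ID in hand, you can sanity-check them with a direct request to the Custom Search API before configuring the server (replace the placeholders with your own values):

```bash
# Placeholders: substitute your actual API key and Search Engine ID
curl "https://www.googleapis.com/customsearch/v1?key=YOUR_API_KEY&cx=YOUR_SEARCH_ENGINE_ID&q=test"
```

A working pair returns a JSON object containing an `items` array of results; an error object indicates a bad key, a disabled API, or a wrong `cx`.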

## Development

Install dependencies:
```bash
npm install
```

Build the server:
```bash
npm run build
```

For development with auto-rebuild:
```bash
npm run watch
```

## Features

### Search Tool
Perform web searches using Google Custom Search API:
- Search the entire web or specific sites
- Control number of results (1-10)
- Get structured results with title, link, and snippet
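
Each search returns a JSON array of result objects (illustrative values shown):

```json
[
  {
    "title": "Example Domain",
    "link": "https://example.com/",
    "snippet": "This domain is for use in illustrative examples in documents..."
  }
]
```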

### Webpage Reader Tool
Extract content from any webpage:
- Fetch and parse webpage content
- Extract page title and main text
- Clean content by removing scripts and styles
- Return structured data with title, text, and URL
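
Conceptually, the cleaning step behaves like the sketch below. This is a simplified illustration; the actual server parses the DOM with cheerio rather than regexes:

```typescript
// Simplified sketch of the webpage-cleaning step:
// drop <script>/<style> blocks, strip remaining tags, collapse whitespace.
function cleanHtml(html: string): string {
  return html
    .replace(/<(script|style)[\s\S]*?<\/\1>/gi, '') // remove script/style blocks
    .replace(/<[^>]+>/g, ' ')                       // replace tags with spaces
    .replace(/\s+/g, ' ')                           // collapse runs of whitespace
    .trim();
}

console.log(cleanHtml('<html><body><script>x()</script><p>Hello  world</p></body></html>'));
```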

## Installation

### Installing via Smithery

To install Google Custom Search Server for Claude Desktop automatically via [Smithery](https://smithery.ai/server/@adenot/mcp-google-search):

```bash
npx -y @smithery/cli install @adenot/mcp-google-search --client claude
```

To use with Claude Desktop, add the server config with your Google API credentials:

- On macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- On Windows: `%APPDATA%/Claude/claude_desktop_config.json`

```json
{
  "mcpServers": {
    "google-search": {
      "command": "npx",
      "args": [
        "-y",
        "@adenot/mcp-google-search"
      ],
      "env": {
        "GOOGLE_API_KEY": "your-api-key-here",
        "GOOGLE_SEARCH_ENGINE_ID": "your-search-engine-id-here"
      }
    }
  }
}
```

## Usage

### Search Tool
```json
{
  "name": "search",
  "arguments": {
    "query": "your search query",
    "num": 5
  }
}
```

`num` is optional (default 5, maximum 10).

### Webpage Reader Tool
```json
{
  "name": "read_webpage",
  "arguments": {
    "url": "https://example.com"
  }
}
```

Example response from webpage reader:
```json
{
  "title": "Example Domain",
  "text": "Extracted and cleaned webpage content...",
  "url": "https://example.com"
}
```

### Debugging

Since MCP servers communicate over stdio, debugging can be challenging. We recommend using the [MCP Inspector](https://github.com/modelcontextprotocol/inspector), which is available as a package script:

```bash
npm run inspector
```

The Inspector will provide a URL to access debugging tools in your browser.

```

--------------------------------------------------------------------------------
/tsconfig.json:
--------------------------------------------------------------------------------

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "outDir": "./build",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules"]
}

```

--------------------------------------------------------------------------------
/.github/workflows/npm-publish.yml:
--------------------------------------------------------------------------------

```yaml
# This workflow builds the package and publishes it to npm when a GitHub release is created
# For more information see: https://docs.github.com/en/actions/publishing-packages/publishing-nodejs-packages

name: NPM Publish

on:
  release:
    types: [created]

jobs:
  publish-npm:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          registry-url: https://registry.npmjs.org/
      - run: npm ci
      - run: npm run build
      - run: npm publish --access public
        env:
          NODE_AUTH_TOKEN: ${{ secrets.npm_token }}

```

--------------------------------------------------------------------------------
/package.json:
--------------------------------------------------------------------------------

```json
{
  "name": "@adenot/mcp-google-search",
  "version": "0.3.1",
  "description": "A Model Context Protocol server for Google Search",
  "type": "module",
  "bin": {
    "mcp-google-search": "./build/index.js"
  },
  "files": [
    "build"
  ],
  "scripts": {
    "build": "npx tsc && node -e \"require('fs').chmodSync('build/index.js', '755')\"",
    "prepare": "npm run build",
    "watch": "npx tsc --watch",
    "inspector": "npx @modelcontextprotocol/inspector build/index.js",
    "prepublishOnly": "npm run build"
  },
  "dependencies": {
    "@modelcontextprotocol/sdk": "0.6.0",
    "axios": "^1.7.9",
    "cheerio": "^1.0.0"
  },
  "devDependencies": {
    "@types/node": "^20.11.24",
    "typescript": "^5.3.3"
  }
}

```

--------------------------------------------------------------------------------
/smithery.yaml:
--------------------------------------------------------------------------------

```yaml
# Smithery configuration file: https://smithery.ai/docs/config#smitheryyaml

startCommand:
  type: stdio
  configSchema:
    # JSON Schema defining the configuration options for the MCP.
    type: object
    required:
      - googleApiKey
      - googleSearchEngineId
    properties:
      googleApiKey:
        type: string
        description: The API key for Google Custom Search.
      googleSearchEngineId:
        type: string
        description: The Search Engine ID for Google Custom Search.
  # A function that produces the CLI command to start the MCP on stdio.
  commandFunction: |-
    (config) => ({command: 'node', args: ['build/index.js'], env: {GOOGLE_API_KEY: config.googleApiKey, GOOGLE_SEARCH_ENGINE_ID: config.googleSearchEngineId}})

```

--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------

```dockerfile
# Generated by https://smithery.ai. See: https://smithery.ai/docs/config#dockerfile
# Use an official Node.js image as a parent image
FROM node:18-alpine AS builder

# Set the working directory
WORKDIR /app

# Copy package.json and package-lock.json
COPY package.json package-lock.json ./

# Install dependencies
RUN npm install --ignore-scripts

# Copy the TypeScript source code
COPY src ./src
COPY tsconfig.json ./

# Build the TypeScript code
RUN npm run build

# Use a lightweight image for the final build
FROM node:18-alpine

# Set the working directory
WORKDIR /app

# Copy the build files from the builder stage
COPY --from=builder /app/build ./build
COPY package.json package-lock.json ./

# Install only production dependencies
RUN npm ci --omit=dev --ignore-scripts

# Expose the port on which the server will run (if required)
# EXPOSE 8080

# Placeholder credentials; override at runtime, e.g.
# docker run -e GOOGLE_API_KEY=... -e GOOGLE_SEARCH_ENGINE_ID=...
ENV GOOGLE_API_KEY=your-api-key-here
ENV GOOGLE_SEARCH_ENGINE_ID=your-search-engine-id-here

# Run the application
ENTRYPOINT ["node", "build/index.js"]

```

--------------------------------------------------------------------------------
/src/index.ts:
--------------------------------------------------------------------------------

```typescript
#!/usr/bin/env node
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import {
  CallToolRequestSchema,
  ErrorCode,
  ListToolsRequestSchema,
  McpError,
} from '@modelcontextprotocol/sdk/types.js';
import axios, { AxiosProxyConfig } from 'axios';
import * as cheerio from 'cheerio';
import { URL } from 'url';

const API_KEY = process.env.GOOGLE_API_KEY;
const SEARCH_ENGINE_ID = process.env.GOOGLE_SEARCH_ENGINE_ID;

// Create proxy configuration from environment variables
function createProxyConfig(): AxiosProxyConfig | false {
  const httpsProxy = process.env.HTTPS_PROXY || process.env.https_proxy;
  const httpProxy = process.env.HTTP_PROXY || process.env.http_proxy;

  const proxyUrl = httpsProxy || httpProxy;

  if (!proxyUrl) {
    return false;
  }

  try {
    const url = new URL(proxyUrl);
    return {
      protocol: url.protocol.replace(':', ''),
      host: url.hostname,
      port: parseInt(url.port) || (url.protocol === 'https:' ? 443 : 80),
      auth: url.username && url.password ? {
        username: url.username,
        password: url.password
      } : undefined
    };
  } catch (error) {
    console.warn(`Invalid proxy URL: ${proxyUrl}`);
    return false;
  }
}

if (!API_KEY) {
  throw new Error('GOOGLE_API_KEY environment variable is required');
}

if (!SEARCH_ENGINE_ID) {
  throw new Error('GOOGLE_SEARCH_ENGINE_ID environment variable is required');
}

interface SearchResult {
  title: string;
  link: string;
  snippet: string;
}

interface WebpageContent {
  title: string;
  text: string;
  url: string;
}

const isValidSearchArgs = (
  args: any
): args is { query: string; num?: number } =>
  typeof args === 'object' &&
  args !== null &&
  typeof args.query === 'string' &&
  (args.num === undefined || typeof args.num === 'number');

const isValidWebpageArgs = (
  args: any
): args is { url: string } =>
  typeof args === 'object' &&
  args !== null &&
  typeof args.url === 'string';

class SearchServer {
  private server: Server;
  private axiosInstance;

  constructor() {
    this.server = new Server(
      {
        name: 'search-server',
        version: '0.3.1',
      },
      {
        capabilities: {
          tools: {},
        },
      }
    );

    const proxyConfig = createProxyConfig();
    this.axiosInstance = axios.create({
      baseURL: 'https://www.googleapis.com/customsearch/v1',
      params: {
        key: API_KEY,
        cx: SEARCH_ENGINE_ID,
      },
      proxy: proxyConfig,
    });

    this.setupToolHandlers();

    // Error handling
    this.server.onerror = (error) => console.error('[MCP Error]', error);
    process.on('SIGINT', async () => {
      await this.server.close();
      process.exit(0);
    });
  }

  private setupToolHandlers() {
    this.server.setRequestHandler(ListToolsRequestSchema, async () => ({
      tools: [
        {
          name: 'search',
          description: 'Perform a web search query',
          inputSchema: {
            type: 'object',
            properties: {
              query: {
                type: 'string',
                description: 'Search query',
              },
              num: {
                type: 'number',
                description: 'Number of results (1-10)',
                minimum: 1,
                maximum: 10,
              },
            },
            required: ['query'],
          },
        },
        {
          name: 'read_webpage',
          description: 'Fetch and extract text content from a webpage',
          inputSchema: {
            type: 'object',
            properties: {
              url: {
                type: 'string',
                description: 'URL of the webpage to read',
              },
            },
            required: ['url'],
          },
        },
      ],
    }));

    this.server.setRequestHandler(CallToolRequestSchema, async (request) => {
      if (request.params.name === 'search') {
        if (!isValidSearchArgs(request.params.arguments)) {
          throw new McpError(
            ErrorCode.InvalidParams,
            'Invalid search arguments'
          );
        }

        const { query, num = 5 } = request.params.arguments;

        try {
          const response = await this.axiosInstance.get('', {
            params: {
              q: query,
              num: Math.min(num, 10),
            },
          });

          // items is absent when the query returns no results
          const results: SearchResult[] = (response.data.items ?? []).map((item: any) => ({
            title: item.title,
            link: item.link,
            snippet: item.snippet,
          }));

          return {
            content: [
              {
                type: 'text',
                text: JSON.stringify(results, null, 2),
              },
            ],
          };
        } catch (error) {
          if (axios.isAxiosError(error)) {
            return {
              content: [
                {
                  type: 'text',
                  text: `Search API error: ${
                    error.response?.data?.error?.message ?? error.message
                  }`,
                },
              ],
              isError: true,
            };
          }
          throw error;
        }
      } else if (request.params.name === 'read_webpage') {
        if (!isValidWebpageArgs(request.params.arguments)) {
          throw new McpError(
            ErrorCode.InvalidParams,
            'Invalid webpage arguments'
          );
        }

        const { url } = request.params.arguments;

        try {
          const proxyConfig = createProxyConfig();
          const response = await axios.get(url, {
            proxy: proxyConfig,
          });
          const $ = cheerio.load(response.data);

          // Remove script and style elements
          $('script, style').remove();

          const content: WebpageContent = {
            title: $('title').text().trim(),
            text: $('body').text().trim().replace(/\s+/g, ' '),
            url: url,
          };

          return {
            content: [
              {
                type: 'text',
                text: JSON.stringify(content, null, 2),
              },
            ],
          };
        } catch (error) {
          if (axios.isAxiosError(error)) {
            return {
              content: [
                {
                  type: 'text',
                  text: `Webpage fetch error: ${error.message}`,
                },
              ],
              isError: true,
            };
          }
          throw error;
        }
      }

      throw new McpError(
        ErrorCode.MethodNotFound,
        `Unknown tool: ${request.params.name}`
      );
    });
  }

  async run() {
    const transport = new StdioServerTransport();
    await this.server.connect(transport);
    // Log to stderr: stdout is reserved for the MCP stdio protocol
    console.error('Search MCP server running on stdio');
  }
}

const server = new SearchServer();
server.run().catch(console.error);

```
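
The proxy handling in `src/index.ts` can be exercised in isolation. The sketch below mirrors `createProxyConfig`, but takes the proxy URL as an argument instead of reading `HTTPS_PROXY`/`HTTP_PROXY` from the environment (the function name is ours, not part of the server):

```typescript
// Sketch: parse a proxy URL into an axios-style proxy config,
// mirroring the server's createProxyConfig.
function parseProxy(proxyUrl: string) {
  const u = new URL(proxyUrl);
  return {
    protocol: u.protocol.replace(':', ''),
    host: u.hostname,
    // Fall back to the scheme's default port when none is given
    port: u.port ? Number(u.port) : (u.protocol === 'https:' ? 443 : 80),
    auth: u.username && u.password
      ? { username: u.username, password: u.password }
      : undefined,
  };
}

console.log(parseProxy('http://user:pass@proxy.local:3128'));
```

Note that axios expects exactly this object shape (or `false` to disable proxying) in its `proxy` option, which is why the server converts the URL rather than passing it through.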