# Directory Structure
```
├── .env.example
├── .gitignore
├── bun.lock
├── Dockerfile
├── LICENSE
├── package.json
├── README.md
├── smithery.yaml
├── src
│ ├── config.ts
│ ├── index.ts
│ ├── tools
│ │ ├── answer_query_direct.ts
│ │ ├── answer_query_websearch.ts
│ │ ├── architecture_pattern_recommendation.ts
│ │ ├── code_analysis_with_docs.ts
│ │ ├── database_schema_analyzer.ts
│ │ ├── dependency_vulnerability_scan.ts
│ │ ├── directory_tree.ts
│ │ ├── documentation_generator.ts
│ │ ├── edit_file.ts
│ │ ├── execute_terminal_command.ts
│ │ ├── explain_topic_with_docs.ts
│ │ ├── generate_project_guidelines.ts
│ │ ├── get_doc_snippets.ts
│ │ ├── get_file_info.ts
│ │ ├── index.ts
│ │ ├── list_directory.ts
│ │ ├── microservice_design_assistant.ts
│ │ ├── move_file.ts
│ │ ├── read_file.ts
│ │ ├── regulatory_compliance_advisor.ts
│ │ ├── save_answer_query_direct.ts
│ │ ├── save_answer_query_websearch.ts
│ │ ├── save_doc_snippet.ts
│ │ ├── save_generate_project_guidelines.ts
│ │ ├── save_topic_explanation.ts
│ │ ├── search_files.ts
│ │ ├── security_best_practices_advisor.ts
│ │ ├── technical_comparison.ts
│ │ ├── testing_strategy_generator.ts
│ │ ├── tool_definition.ts
│ │ └── write_file.ts
│ ├── utils.ts
│ └── vertex_ai_client.ts
└── tsconfig.json
```
# Files
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
```
node_modules/
build/
*.log
.env*
!.env.example
*.zip
*.md
!README.md
```
--------------------------------------------------------------------------------
/.env.example:
--------------------------------------------------------------------------------
```
# Environment variables for vertex-ai-mcp-server
# --- Required ---
# REQUIRED only if AI_PROVIDER is "vertex"
GOOGLE_CLOUD_PROJECT="YOUR_GCP_PROJECT_ID"
# REQUIRED only if AI_PROVIDER is "gemini"
GEMINI_API_KEY="YOUR_GEMINI_API_KEY" # Get from Google AI Studio
# --- General AI Configuration ---
AI_PROVIDER="vertex" # Provider to use: "vertex" or "gemini"
# Optional - Model ID depends on the chosen provider
VERTEX_MODEL_ID="gemini-2.5-pro-exp-03-25" # e.g., gemini-1.5-pro-latest, gemini-1.0-pro
GEMINI_MODEL_ID="gemini-2.5-pro-exp-03-25" # e.g., gemini-2.5-pro-exp-03-25, gemini-pro
# --- Optional AI Parameters (Common) ---
# GOOGLE_CLOUD_LOCATION is specific to Vertex AI
GOOGLE_CLOUD_LOCATION="us-central1"
AI_TEMPERATURE="0.0" # Range: 0.0 to 1.0
AI_USE_STREAMING="true" # Use streaming responses: "true" or "false"
AI_MAX_OUTPUT_TOKENS="65536" # Max tokens in response (Note: Models have their own upper limits)
AI_MAX_RETRIES="3" # Number of retries on transient errors
AI_RETRY_DELAY_MS="1000" # Delay between retries in milliseconds
# --- Optional Vertex AI Authentication ---
# Uncomment and set if using a Service Account Key instead of Application Default Credentials (ADC) for Vertex AI
# GOOGLE_APPLICATION_CREDENTIALS="/path/to/your/service-account-key.json"
```
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
```markdown
[MseeP.ai Security Assessment](https://mseep.ai/app/shariqriazz-vertex-ai-mcp-server)
# Vertex AI MCP Server
[Install on Smithery](https://smithery.ai/server/@shariqriazz/vertex-ai-mcp-server)
This project implements a Model Context Protocol (MCP) server that provides a comprehensive suite of tools for interacting with Google Cloud's Vertex AI Gemini models, focusing on coding assistance and general query answering.
<a href="https://glama.ai/mcp/servers/@shariqriazz/vertex-ai-mcp-server">
<img width="380" height="200" src="https://glama.ai/mcp/servers/@shariqriazz/vertex-ai-mcp-server/badge" alt="Vertex AI Server MCP server" />
</a>
## Features
* Provides access to Vertex AI Gemini models via numerous MCP tools.
* Supports web search grounding (`answer_query_websearch`) and direct knowledge answering (`answer_query_direct`).
* Configurable model ID, temperature, streaming behavior, max output tokens, and retry settings via environment variables.
* Uses streaming API by default for potentially better responsiveness.
* Includes basic retry logic for transient API errors.
* Minimal safety filters applied (`BLOCK_NONE`) to reduce potential blocking (use with caution).
## Tools Provided
### Query & Generation (AI Focused)
* `answer_query_websearch`: Answers a natural language query using the configured Vertex AI model enhanced with Google Search results.
* `answer_query_direct`: Answers a natural language query using only the internal knowledge of the configured Vertex AI model.
* `explain_topic_with_docs`: Provides a detailed explanation for a query about a specific software topic by synthesizing information primarily from official documentation found via web search.
* `get_doc_snippets`: Provides precise, authoritative code snippets or concise answers for technical queries by searching official documentation.
* `generate_project_guidelines`: Generates a structured project guidelines document (Markdown) based on a specified list of technologies (optionally with versions), using web search for best practices.
### Research & Analysis Tools
* `code_analysis_with_docs`: Analyzes code snippets by comparing them with best practices from official documentation, identifying potential bugs, performance issues, and security vulnerabilities.
* `technical_comparison`: Compares multiple technologies, frameworks, or libraries based on specific criteria, providing detailed comparison tables with pros/cons and use cases.
* `architecture_pattern_recommendation`: Suggests architecture patterns for specific use cases based on industry best practices, with implementation examples and considerations.
* `dependency_vulnerability_scan`: Analyzes project dependencies for known security vulnerabilities, providing detailed information and mitigation strategies.
* `database_schema_analyzer`: Reviews database schemas for normalization, indexing, and performance issues, suggesting improvements based on database-specific best practices.
* `security_best_practices_advisor`: Provides security recommendations for specific technologies or scenarios, with code examples for implementing secure practices.
* `testing_strategy_generator`: Creates comprehensive testing strategies for applications or features, suggesting appropriate testing types with coverage goals.
* `regulatory_compliance_advisor`: Provides guidance on regulatory requirements for specific industries (GDPR, HIPAA, etc.), with implementation approaches for compliance.
* `microservice_design_assistant`: Helps design microservice architectures for specific domains, with service boundary recommendations and communication patterns.
* `documentation_generator`: Creates comprehensive documentation for code, APIs, or systems, following industry best practices for technical documentation.
### Filesystem Operations
* `read_file_content`: Read the complete contents of one or more files. Provide a single path string or an array of path strings.
* `write_file_content`: Create new files or completely overwrite existing files. The 'writes' argument accepts a single object (`{path, content}`) or an array of such objects.
* `edit_file_content`: Makes line-based edits to a text file, returning a diff preview or applying changes.
* `list_directory_contents`: Lists files and directories directly within a specified path (non-recursive).
* `get_directory_tree`: Gets a recursive tree view of files and directories as JSON.
* `move_file_or_directory`: Moves or renames files and directories.
* `search_filesystem`: Recursively searches for files/directories matching a name pattern, with optional exclusions.
* `get_filesystem_info`: Retrieves detailed metadata (size, dates, type, permissions) about a file or directory.
* `execute_terminal_command`: Execute a shell command, optionally specifying `cwd` and `timeout`. Returns stdout/stderr.
### Combined AI + Filesystem Operations
* `save_generate_project_guidelines`: Generates project guidelines based on a tech stack and saves the result to a specified file path.
* `save_doc_snippet`: Finds code snippets from documentation and saves the result to a specified file path.
* `save_topic_explanation`: Generates a detailed explanation of a topic based on documentation and saves the result to a specified file path.
* `save_answer_query_direct`: Answers a query using only internal knowledge and saves the answer to a specified file path.
* `save_answer_query_websearch`: Answers a query using web search results and saves the answer to a specified file path.
*(Note: Input/output schemas for each tool are defined in their respective files within `src/tools/` and exposed via the MCP server.)*
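As a rough illustration of those schemas (the paths and command below are hypothetical, and the surrounding MCP `tools/call` request is omitted), the filesystem tools accept argument objects shaped like this:
```typescript
// Illustrative argument objects mirroring the Zod schemas in src/tools/ (not part of the repo).
const readArgs = {
  paths: ["src/index.ts", "package.json"], // or a single string: "src/index.ts"
};
const writeArgs = {
  writes: { path: "docs/notes.md", content: "# Notes\n" }, // or an array of { path, content } objects
};
const execArgs = {
  command: "bun run build",
  cwd: ".",     // optional, relative to the workspace root
  timeout: 120, // optional, seconds
};
console.log(readArgs, writeArgs, execArgs);
```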
## Prerequisites
* Node.js (v18+)
* Bun (`npm install -g bun`)
* Google Cloud Project with Billing enabled.
* Vertex AI API enabled in the GCP project.
* Google Cloud Authentication configured in your environment (Application Default Credentials via `gcloud auth application-default login` is recommended, or a Service Account Key).
## Setup & Installation
1. **Clone/Place Project:** Ensure the project files are in your desired location.
2. **Install Dependencies:**
```bash
bun install
```
3. **Configure Environment:**
* Create a `.env` file in the project root (copy `.env.example`).
* Set the required and optional environment variables as described in `.env.example`.
* Set `AI_PROVIDER` to either `"vertex"` or `"gemini"`.
* If `AI_PROVIDER="vertex"`, `GOOGLE_CLOUD_PROJECT` is required.
* If `AI_PROVIDER="gemini"`, `GEMINI_API_KEY` is required.
4. **Build the Server:**
```bash
bun run build
```
This compiles the TypeScript code to `build/index.js`.
## Usage (Standalone / NPX)
Once published to npm, you can run the server directly with `bunx` (or `npx`):
```bash
# Ensure required environment variables are set (e.g., GOOGLE_CLOUD_PROJECT)
bunx vertex-ai-mcp-server
```
Alternatively, install it globally:
```bash
bun add -g vertex-ai-mcp-server
# Then run:
vertex-ai-mcp-server
```
**Note:** Running standalone requires setting necessary environment variables (like `GOOGLE_CLOUD_PROJECT`, `GOOGLE_CLOUD_LOCATION`, authentication credentials if not using ADC) in your shell environment before executing the command.
### Installing via Smithery
To install Vertex AI Server for Claude Desktop automatically via [Smithery](https://smithery.ai/server/@shariqriazz/vertex-ai-mcp-server):
```bash
bunx -y @smithery/cli install @shariqriazz/vertex-ai-mcp-server --client claude
```
## Running with Cline
1. **Configure MCP Settings:** Add/update the configuration in your Cline MCP settings file (e.g., `.roo/mcp.json`). You have two primary ways to configure the command:
**Option A: Using Node (Direct Path - Recommended for Development)**
This method uses `node` to run the compiled script directly. It's useful during development when you have the code cloned locally.
```json
{
"mcpServers": {
"vertex-ai-mcp-server": {
"command": "node",
"args": [
"/full/path/to/your/vertex-ai-mcp-server/build/index.js" // Use absolute path or ensure it's relative to where Cline runs node
],
"env": {
// --- General AI Configuration ---
"AI_PROVIDER": "vertex", // "vertex" or "gemini"
// --- Required (Conditional) ---
"GOOGLE_CLOUD_PROJECT": "YOUR_GCP_PROJECT_ID", // Required if AI_PROVIDER="vertex"
// "GEMINI_API_KEY": "YOUR_GEMINI_API_KEY", // Required if AI_PROVIDER="gemini"
// --- Optional Model Selection ---
"VERTEX_MODEL_ID": "gemini-2.5-pro-exp-03-25", // If AI_PROVIDER="vertex" (Example override)
"GEMINI_MODEL_ID": "gemini-2.5-pro-exp-03-25", // If AI_PROVIDER="gemini"
// --- Optional AI Parameters ---
"GOOGLE_CLOUD_LOCATION": "us-central1", // Specific to Vertex AI
"AI_TEMPERATURE": "0.0",
"AI_USE_STREAMING": "true",
"AI_MAX_OUTPUT_TOKENS": "65536", // Default from .env.example
"AI_MAX_RETRIES": "3",
"AI_RETRY_DELAY_MS": "1000",
// --- Optional Vertex Authentication ---
// "GOOGLE_APPLICATION_CREDENTIALS": "/path/to/your/service-account-key.json" // If using Service Account Key for Vertex
},
"disabled": false,
"alwaysAllow": [
// Add tool names here if you don't want confirmation prompts
// e.g., "answer_query_websearch"
],
"timeout": 3600 // Optional: Timeout in seconds
}
// Add other servers here...
}
}
```
* **Important:** Ensure the `args` path points correctly to the `build/index.js` file. Using an absolute path might be more reliable.
**Option B: Using NPX (Requires Package Published to npm)**
This method uses `bunx` (or `npx`) to automatically download and run the server package from the npm registry. This is convenient if you don't want to clone the repository.
```json
{
"mcpServers": {
"vertex-ai-mcp-server": {
"command": "bunx", // Use bunx
"args": [
"-y", // Auto-confirm installation
"vertex-ai-mcp-server" // The npm package name
],
"env": {
// --- General AI Configuration ---
"AI_PROVIDER": "vertex", // "vertex" or "gemini"
// --- Required (Conditional) ---
"GOOGLE_CLOUD_PROJECT": "YOUR_GCP_PROJECT_ID", // Required if AI_PROVIDER="vertex"
// "GEMINI_API_KEY": "YOUR_GEMINI_API_KEY", // Required if AI_PROVIDER="gemini"
// --- Optional Model Selection ---
"VERTEX_MODEL_ID": "gemini-2.5-pro-exp-03-25", // If AI_PROVIDER="vertex" (Example override)
"GEMINI_MODEL_ID": "gemini-2.5-pro-exp-03-25", // If AI_PROVIDER="gemini"
// --- Optional AI Parameters ---
"GOOGLE_CLOUD_LOCATION": "us-central1", // Specific to Vertex AI
"AI_TEMPERATURE": "0.0",
"AI_USE_STREAMING": "true",
"AI_MAX_OUTPUT_TOKENS": "65536", // Default from .env.example
"AI_MAX_RETRIES": "3",
"AI_RETRY_DELAY_MS": "1000",
// --- Optional Vertex Authentication ---
// "GOOGLE_APPLICATION_CREDENTIALS": "/path/to/your/service-account-key.json" // If using Service Account Key for Vertex
},
"disabled": false,
"alwaysAllow": [
// Add tool names here if you don't want confirmation prompts
// e.g., "answer_query_websearch"
],
"timeout": 3600 // Optional: Timeout in seconds
}
// Add other servers here...
}
}
```
* Ensure the environment variables in the `env` block are correctly set, either matching `.env` or explicitly defined here. Remove comments from the actual JSON file.
2. **Restart/Reload Cline:** Cline should detect the configuration change and start the server.
3. **Use Tools:** You can now use the extensive list of tools via Cline.
## Development
* **Watch Mode:** `bun run watch`
* **Linting:** `bun run lint`
* **Formatting:** `bun run format`
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
```
--------------------------------------------------------------------------------
/tsconfig.json:
--------------------------------------------------------------------------------
```json
{
"compilerOptions": {
"target": "ES2022",
"module": "Node16",
"moduleResolution": "Node16",
"outDir": "./build",
"rootDir": "./src",
"strict": true,
"esModuleInterop": true,
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true
},
"include": ["src/**/*"],
"exclude": ["node_modules"]
}
```
--------------------------------------------------------------------------------
/src/utils.ts:
--------------------------------------------------------------------------------
```typescript
import * as path from 'node:path';
import { WORKSPACE_ROOT } from './config.js';
export const sleep = (ms: number) => new Promise(resolve => setTimeout(resolve, ms));
// Basic path validation: resolve against the workspace root and ensure the result stays inside it.
export function sanitizePath(inputPath: string): string {
  const absolutePath = path.resolve(WORKSPACE_ROOT, inputPath);
  // path.resolve already collapses '..' segments, so a prefix check is sufficient.
  // The trailing separator prevents escapes into sibling directories (e.g. "<root>-evil").
  if (absolutePath !== WORKSPACE_ROOT && !absolutePath.startsWith(WORKSPACE_ROOT + path.sep)) {
    throw new Error(`Access denied: Path is outside the workspace: ${inputPath}`);
  }
  return absolutePath;
}
```
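A minimal usage sketch of `sanitizePath` (illustrative only, not a file in the repository; the paths are hypothetical):
```typescript
// Assumes this snippet lives alongside src/utils.ts.
import { sanitizePath } from "./utils.js";

// A path inside the workspace resolves to an absolute path and is returned.
console.log(sanitizePath("src/tools/read_file.ts"));

// A path that escapes the workspace root is rejected before any filesystem access.
try {
  sanitizePath("../outside-the-workspace.txt");
} catch (err) {
  console.error((err as Error).message); // Access denied: Path is outside the workspace: ...
}
```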
--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------
```dockerfile
# Generated by https://smithery.ai. See: https://smithery.ai/docs/config#dockerfile
# Build stage
FROM node:lts-alpine AS build
WORKDIR /app
# Install dependencies without running prepare scripts
COPY package.json tsconfig.json bun.lock ./
RUN npm install --ignore-scripts
# Copy source and transpile
COPY . .
RUN npx tsc -p tsconfig.json && chmod +x build/index.js
# Production image
FROM node:lts-alpine
WORKDIR /app
# Copy built application
COPY --from=build /app/build ./build
# Install production dependencies without running prepare scripts
COPY package.json bun.lock ./
RUN npm install --omit=dev --ignore-scripts
ENV NODE_ENV=production
ENTRYPOINT ["node", "build/index.js"]
```
--------------------------------------------------------------------------------
/src/tools/tool_definition.ts:
--------------------------------------------------------------------------------
```typescript
import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
import type { Content, Tool } from "@google/genai";
export interface ToolDefinition {
name: string;
description: string;
inputSchema: any; // Consider defining a stricter type like JSONSchema7
buildPrompt: (args: any, modelId: string) => {
systemInstructionText: string;
userQueryText: string;
useWebSearch: boolean;
enableFunctionCalling: boolean;
};
}
export const modelIdPlaceholder = "${modelId}"; // Placeholder for dynamic model ID in descriptions
// Helper to build the initial content array
export function buildInitialContent(systemInstruction: string, userQuery: string): Content[] {
return [{ role: "user", parts: [{ text: `${systemInstruction}\n\n${userQuery}` }] }];
}
// Helper to determine tools for API call
export function getToolsForApi(enableFunctionCalling: boolean, useWebSearch: boolean): Tool[] | undefined {
// Function calling is no longer supported by the remaining tools
return useWebSearch ? [{ googleSearch: {} } as any] : undefined; // Cast needed as SDK type might not include googleSearch directly
}
```
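A minimal sketch (not part of the repository) of how a tool's `buildPrompt` output feeds these helpers; `answer_query_direct` and the model ID are used purely as examples:
```typescript
// Assumes this snippet lives alongside the files in src/tools/.
import { buildInitialContent, getToolsForApi } from "./tool_definition.js";
import { answerQueryDirectTool } from "./answer_query_direct.js";

const modelId = "gemini-2.5-pro-exp-03-25"; // example model ID
const prompt = answerQueryDirectTool.buildPrompt({ query: "What is the Model Context Protocol?" }, modelId);

// Combine system instruction and user query into the Content[] sent to the model.
const contents = buildInitialContent(prompt.systemInstructionText, prompt.userQueryText);
// Returns the googleSearch tool only when the prompt requests web search grounding.
const tools = getToolsForApi(prompt.enableFunctionCalling, prompt.useWebSearch);

console.log(JSON.stringify({ contents, tools }, null, 2));
```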
--------------------------------------------------------------------------------
/package.json:
--------------------------------------------------------------------------------
```json
{
"name": "vertex-ai-mcp-server",
"version": "0.4.0",
"description": "A Model Context Protocol server supporting Vertex AI and Gemini API",
"license": "MIT",
"type": "module",
"bin": {
"vertex-ai-mcp-server": "build/index.js"
},
"repository": {
"type": "git",
"url": "git+https://github.com/shariqriazz/vertex-ai-mcp-server.git"
},
"homepage": "https://github.com/shariqriazz/vertex-ai-mcp-server#readme",
"bugs": {
"url": "https://github.com/shariqriazz/vertex-ai-mcp-server/issues"
},
"files": [
"build"
],
"scripts": {
"build": "bun run tsc && node -e \"require('fs').chmodSync('build/index.js', '755')\"",
"prepare": "bun run build",
"watch": "bun run tsc --watch",
"inspector": "bunx @modelcontextprotocol/inspector build/index.js"
},
"dependencies": {
"@google/genai": "^1.0.1",
"@modelcontextprotocol/sdk": "0.6.0",
"diff": "^7.0.0",
"dotenv": "^16.5.0",
"minimatch": "^10.0.1",
"zod": "^3.24.3",
"zod-to-json-schema": "^3.24.5"
},
"devDependencies": {
"@types/diff": "^7.0.2",
"@types/minimatch": "^5.1.2",
"@types/node": "^20.11.24",
"typescript": "^5.3.3"
}
}
```
--------------------------------------------------------------------------------
/src/tools/directory_tree.ts:
--------------------------------------------------------------------------------
```typescript
import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
// Schema definition (adapted from example.ts) - Exported
export const DirectoryTreeArgsSchema = z.object({
path: z.string().describe("The root path for the directory tree (relative to the workspace directory)."),
});
// Convert Zod schema to JSON schema
const DirectoryTreeJsonSchema = zodToJsonSchema(DirectoryTreeArgsSchema);
export const directoryTreeTool: ToolDefinition = {
name: "get_directory_tree", // Renamed slightly
description:
"Get a recursive tree view of files and directories within the workspace filesystem as a JSON structure. " +
"Each entry includes 'name', 'type' (file/directory), and 'children' (an array) for directories. " +
"Files have no 'children' array. The output is formatted JSON text. " +
"Useful for understanding the complete structure of a project directory.",
inputSchema: DirectoryTreeJsonSchema as any, // Cast as any if needed
// Minimal buildPrompt as execution logic is separate
buildPrompt: (args: any, modelId: string) => {
const parsed = DirectoryTreeArgsSchema.safeParse(args);
if (!parsed.success) {
throw new McpError(ErrorCode.InvalidParams, `Invalid arguments for get_directory_tree: ${parsed.error}`);
}
return {
systemInstructionText: "",
userQueryText: "",
useWebSearch: false,
enableFunctionCalling: false
};
},
// No 'execute' function here
};
```
--------------------------------------------------------------------------------
/src/tools/list_directory.ts:
--------------------------------------------------------------------------------
```typescript
import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
// Schema definition (adapted from example.ts) - Exported
export const ListDirectoryArgsSchema = z.object({
path: z.string().describe("The path of the directory to list (relative to the workspace directory)."),
});
// Convert Zod schema to JSON schema
const ListDirectoryJsonSchema = zodToJsonSchema(ListDirectoryArgsSchema);
export const listDirectoryTool: ToolDefinition = {
name: "list_directory_contents", // Renamed slightly
description:
"Get a detailed listing of all files and directories directly within a specified path in the workspace filesystem. " +
"Results clearly distinguish between files and directories with [FILE] and [DIR] " +
"prefixes. This tool is essential for understanding directory structure and " +
"finding specific files within a directory. Does not list recursively.",
inputSchema: ListDirectoryJsonSchema as any, // Cast as any if needed
// Minimal buildPrompt as execution logic is separate
buildPrompt: (args: any, modelId: string) => {
const parsed = ListDirectoryArgsSchema.safeParse(args);
if (!parsed.success) {
throw new McpError(ErrorCode.InvalidParams, `Invalid arguments for list_directory_contents: ${parsed.error}`);
}
return {
systemInstructionText: "",
userQueryText: "",
useWebSearch: false,
enableFunctionCalling: false
};
},
// No 'execute' function here
};
```
--------------------------------------------------------------------------------
/src/tools/get_file_info.ts:
--------------------------------------------------------------------------------
```typescript
import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
// Schema definition (adapted from example.ts) - Exported
export const GetFileInfoArgsSchema = z.object({
path: z.string().describe("The path of the file or directory to get info for (relative to the workspace directory)."),
});
// Convert Zod schema to JSON schema
const GetFileInfoJsonSchema = zodToJsonSchema(GetFileInfoArgsSchema);
export const getFileInfoTool: ToolDefinition = {
name: "get_filesystem_info", // Renamed slightly
description:
"Retrieve detailed metadata about a file or directory within the workspace filesystem. " +
"Returns comprehensive information including size (bytes), creation time, last modified time, " +
"last accessed time, type (file/directory), and permissions (octal string). " +
"This tool is perfect for understanding file characteristics without reading the actual content.",
inputSchema: GetFileInfoJsonSchema as any, // Cast as any if needed
// Minimal buildPrompt as execution logic is separate
buildPrompt: (args: any, modelId: string) => {
const parsed = GetFileInfoArgsSchema.safeParse(args);
if (!parsed.success) {
throw new McpError(ErrorCode.InvalidParams, `Invalid arguments for get_filesystem_info: ${parsed.error}`);
}
return {
systemInstructionText: "",
userQueryText: "",
useWebSearch: false,
enableFunctionCalling: false
};
},
// No 'execute' function here
};
```
--------------------------------------------------------------------------------
/src/tools/execute_terminal_command.ts:
--------------------------------------------------------------------------------
```typescript
import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
import { ToolDefinition } from "./tool_definition.js";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
// Schema definition
export const ExecuteTerminalCommandArgsSchema = z.object({
command: z.string().describe("The command line instruction to execute."),
cwd: z.string().optional().describe("Optional. The working directory to run the command in (relative to the workspace root). Defaults to the workspace root if not specified."),
timeout: z.number().int().positive().optional().describe("Optional. Maximum execution time in seconds. If the command exceeds this time, it will be terminated."),
});
// Convert Zod schema to JSON schema
const ExecuteTerminalCommandJsonSchema = zodToJsonSchema(ExecuteTerminalCommandArgsSchema);
export const executeTerminalCommandTool: ToolDefinition = {
name: "execute_terminal_command", // Renamed
description:
"Execute a shell command on the server's operating system. " +
"Allows specifying the command, an optional working directory (cwd), and an optional timeout in seconds. " +
"Returns the combined stdout and stderr output of the command upon completion or termination.",
inputSchema: ExecuteTerminalCommandJsonSchema as any,
// Minimal buildPrompt as execution logic is separate
buildPrompt: (args: any, modelId: string) => {
const parsed = ExecuteTerminalCommandArgsSchema.safeParse(args);
if (!parsed.success) {
throw new McpError(ErrorCode.InvalidParams, `Invalid arguments for execute_terminal_command: ${parsed.error}`);
}
return {
systemInstructionText: "",
userQueryText: "",
useWebSearch: false,
enableFunctionCalling: false
};
},
// No 'execute' function here
};
```
--------------------------------------------------------------------------------
/src/tools/search_files.ts:
--------------------------------------------------------------------------------
```typescript
import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
// Schema definition (adapted from example.ts) - Exported
export const SearchFilesArgsSchema = z.object({
path: z.string().describe("The starting directory path for the search (relative to the workspace directory)."),
pattern: z.string().describe("The case-insensitive text pattern to search for in file/directory names."),
excludePatterns: z.array(z.string()).optional().default([]).describe("An array of glob patterns (e.g., 'node_modules', '*.log') to exclude from the search.")
});
// Convert Zod schema to JSON schema
const SearchFilesJsonSchema = zodToJsonSchema(SearchFilesArgsSchema);
export const searchFilesTool: ToolDefinition = {
name: "search_filesystem", // Renamed slightly
description:
"Recursively search for files and directories within the workspace filesystem matching a pattern in their name. " +
"Searches through all subdirectories from the starting path. The search " +
"is case-insensitive and matches partial names. Returns full paths (relative to workspace) to all " +
"matching items. Supports excluding paths using glob patterns.",
inputSchema: SearchFilesJsonSchema as any, // Cast as any if needed
// Minimal buildPrompt as execution logic is separate
buildPrompt: (args: any, modelId: string) => {
const parsed = SearchFilesArgsSchema.safeParse(args);
if (!parsed.success) {
throw new McpError(ErrorCode.InvalidParams, `Invalid arguments for search_filesystem: ${parsed.error}`);
}
return {
systemInstructionText: "",
userQueryText: "",
useWebSearch: false,
enableFunctionCalling: false
};
},
// No 'execute' function here
};
```
--------------------------------------------------------------------------------
/src/tools/move_file.ts:
--------------------------------------------------------------------------------
```typescript
import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
// Schema definition (adapted from example.ts) - Exported
export const MoveFileArgsSchema = z.object({
source: z.string().describe("The current path of the file or directory to move (relative to the workspace directory)."),
destination: z.string().describe("The new path for the file or directory (relative to the workspace directory)."),
});
// Convert Zod schema to JSON schema
const MoveFileJsonSchema = zodToJsonSchema(MoveFileArgsSchema);
export const moveFileTool: ToolDefinition = {
name: "move_file_or_directory", // Renamed slightly
description:
"Move or rename files and directories within the workspace filesystem. " +
"Can move items between directories and rename them in a single operation. " +
"If the destination path already exists, the operation will likely fail (OS-dependent).",
inputSchema: MoveFileJsonSchema as any, // Cast as any if needed
// Minimal buildPrompt as execution logic is separate
buildPrompt: (args: any, modelId: string) => {
const parsed = MoveFileArgsSchema.safeParse(args);
if (!parsed.success) {
throw new McpError(ErrorCode.InvalidParams, `Invalid arguments for move_file_or_directory: ${parsed.error}`);
}
// Add check: source and destination cannot be the same
if (parsed.data.source === parsed.data.destination) {
throw new McpError(ErrorCode.InvalidParams, `Invalid arguments for move_file_or_directory: source and destination paths cannot be the same.`);
}
return {
systemInstructionText: "",
userQueryText: "",
useWebSearch: false,
enableFunctionCalling: false
};
},
// No 'execute' function here
};
```
--------------------------------------------------------------------------------
/src/tools/write_file.ts:
--------------------------------------------------------------------------------
```typescript
import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
// Schema definition (adapted from example.ts) - Exported
// Schema for a single file write operation
const SingleWriteOperationSchema = z.object({
path: z.string().describe("The path of the file to write (relative to the workspace directory)."),
content: z.string().describe("The full content to write to the file."),
});
// Schema for the arguments object, containing either a single write or an array of writes
export const WriteFileArgsSchema = z.object({
writes: z.union([
SingleWriteOperationSchema.describe("A single file write operation."),
z.array(SingleWriteOperationSchema).min(1).describe("An array of file write operations.")
]).describe("A single write operation or an array of write operations.")
});
// Convert Zod schema to JSON schema
const WriteFileJsonSchema = zodToJsonSchema(WriteFileArgsSchema);
export const writeFileTool: ToolDefinition = {
name: "write_file_content", // Keep name consistent
description:
"Create new files or completely overwrite existing files in the workspace filesystem. " +
"The 'writes' argument should be either a single object with 'path' and 'content', or an array of such objects to write multiple files. " +
"Use with caution as it will overwrite existing files without warning. " +
"Handles text content with proper encoding.",
inputSchema: WriteFileJsonSchema as any, // Cast as any if needed
// Minimal buildPrompt as execution logic is separate
buildPrompt: (args: any, modelId: string) => {
const parsed = WriteFileArgsSchema.safeParse(args);
if (!parsed.success) {
throw new McpError(ErrorCode.InvalidParams, `Invalid arguments for write_file_content: ${parsed.error}`);
}
return {
systemInstructionText: "",
userQueryText: "",
useWebSearch: false,
enableFunctionCalling: false
};
},
// No 'execute' function here
};
```
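A short sketch (not part of the repository; file paths are hypothetical) showing the two argument shapes the `writes` union accepts, validated with the exported schema:
```typescript
import { WriteFileArgsSchema } from "./write_file.js";

// A single write operation...
const single = WriteFileArgsSchema.parse({
  writes: { path: "docs/changelog.md", content: "# Changelog\n" },
});

// ...or a batch of writes in one call.
const batch = WriteFileArgsSchema.parse({
  writes: [
    { path: "src/a.txt", content: "A" },
    { path: "src/b.txt", content: "B" },
  ],
});

console.log(single, batch);
```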
--------------------------------------------------------------------------------
/src/tools/read_file.ts:
--------------------------------------------------------------------------------
```typescript
import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
// Note: We don't need fs, path here as execution logic is moved
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
// Schema definition (adapted from example.ts) - Exported
export const ReadFileArgsSchema = z.object({
paths: z.union([
z.string().describe("The path of the file to read (relative to the workspace directory)."),
z.array(z.string()).min(1).describe("An array of file paths to read (relative to the workspace directory).")
]).describe("A single file path or an array of file paths to read."),
});
// Infer the input type for validation
type ReadFileInput = z.infer<typeof ReadFileArgsSchema>;
// Convert Zod schema to JSON schema for the tool definition
const ReadFileJsonSchema = zodToJsonSchema(ReadFileArgsSchema);
export const readFileTool: ToolDefinition = {
name: "read_file_content", // Keep the name consistent
description:
"Read the complete contents of one or more files from the workspace filesystem. " +
"Provide a single path string or an array of path strings. " +
"Handles various text encodings and provides detailed error messages " +
"if a file cannot be read. Failed reads for individual files in an array " +
"won't stop the entire operation when multiple paths are provided.",
// Use the converted JSON schema
inputSchema: ReadFileJsonSchema as any, // Cast as any to fit ToolDefinition if needed
// This tool doesn't directly use the LLM, so buildPrompt is minimal/not used for execution
buildPrompt: (args: any, modelId: string) => {
// Basic validation
const parsed = ReadFileArgsSchema.safeParse(args);
if (!parsed.success) {
// Use InternalError or InvalidParams
throw new McpError(ErrorCode.InvalidParams, `Invalid arguments for read_file_content: ${parsed.error}`);
}
// No prompt generation needed for direct execution logic
return {
systemInstructionText: "", // Not applicable
userQueryText: "", // Not applicable
useWebSearch: false,
enableFunctionCalling: false
};
},
// Removed the 'execute' function - this logic will go into src/index.ts
};
```
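A short sketch (not part of the repository) of the two shapes the `paths` argument accepts:
```typescript
import { ReadFileArgsSchema } from "./read_file.js";

// A single path string...
const one = ReadFileArgsSchema.parse({ paths: "package.json" });
// ...or an array with at least one path.
const many = ReadFileArgsSchema.parse({ paths: ["package.json", "tsconfig.json"] });

console.log(one, many);
```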
--------------------------------------------------------------------------------
/src/tools/edit_file.ts:
--------------------------------------------------------------------------------
```typescript
import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
// Schema definitions (adapted from example.ts) - Exported
export const EditOperationSchema = z.object({
oldText: z.string().describe('Text to search for - attempts exact match first, then line-by-line whitespace-insensitive match.'),
newText: z.string().describe('Text to replace with, preserving indentation where possible.')
});
export const EditFileArgsSchema = z.object({
path: z.string().describe("The path of the file to edit (relative to the workspace directory)."),
edits: z.array(EditOperationSchema).describe("An array of edit operations to apply sequentially."),
dryRun: z.boolean().optional().default(false).describe('If true, preview changes using git-style diff format without saving.')
});
// Convert Zod schema to JSON schema
const EditFileJsonSchema = zodToJsonSchema(EditFileArgsSchema);
export const editFileTool: ToolDefinition = {
name: "edit_file_content", // Renamed slightly
description:
"Make line-based edits to a text file in the workspace filesystem. Each edit attempts to replace " +
"an exact match of 'oldText' with 'newText'. If no exact match is found, it attempts a " +
"line-by-line match ignoring leading/trailing whitespace. Indentation of the first line " +
"is preserved, and relative indentation of subsequent lines is attempted. " +
"Returns a git-style diff showing the changes made (or previewed if dryRun is true).",
inputSchema: EditFileJsonSchema as any, // Cast as any if needed
// Minimal buildPrompt as execution logic is separate
buildPrompt: (args: any, modelId: string) => {
const parsed = EditFileArgsSchema.safeParse(args);
if (!parsed.success) {
throw new McpError(ErrorCode.InvalidParams, `Invalid arguments for edit_file_content: ${parsed.error}`);
}
// Add a check for empty edits array
if (parsed.data.edits.length === 0) {
throw new McpError(ErrorCode.InvalidParams, `Invalid arguments for edit_file_content: 'edits' array cannot be empty.`);
}
return {
systemInstructionText: "",
userQueryText: "",
useWebSearch: false,
enableFunctionCalling: false
};
},
// No 'execute' function here
};
```
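An illustrative sketch (not part of the repository; the file path and edit text are hypothetical) of a dry-run edit request validated against the exported schema:
```typescript
import { EditFileArgsSchema } from "./edit_file.js";

// Preview a single replacement as a git-style diff without writing the file.
const args = EditFileArgsSchema.parse({
  path: "src/server.ts",
  edits: [{ oldText: "const PORT = 3000;", newText: "const PORT = 8080;" }],
  dryRun: true,
});

console.log(args.dryRun); // true
```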
--------------------------------------------------------------------------------
/smithery.yaml:
--------------------------------------------------------------------------------
```yaml
# Smithery configuration file: https://smithery.ai/docs/config#smitheryyaml
startCommand:
type: stdio
configSchema:
# JSON Schema defining the configuration options for the MCP.
type: object
required:
- googleCloudProject
- googleCloudLocation
properties:
googleCloudProject:
type: string
description: Google Cloud Project ID
googleCloudLocation:
type: string
description: Google Cloud Location
googleApplicationCredentials:
type: string
description: Path to service account key JSON
vertexAiModelId:
type: string
default: gemini-2.5-pro-exp-03-25
description: Vertex AI Model ID
vertexAiTemperature:
type: number
default: 0
description: Temperature for model
vertexAiUseStreaming:
type: boolean
default: true
description: Whether to use streaming
vertexAiMaxOutputTokens:
type: number
default: 65535
description: Max output tokens
vertexAiMaxRetries:
type: number
default: 3
description: Max retry attempts
vertexAiRetryDelayMs:
type: number
default: 1000
description: Delay between retries in ms
commandFunction:
# A JS function that produces the CLI command based on the given config to start the MCP on stdio.
|-
(config) => ({ command: 'node', args: ['build/index.js'], env: { ...(config.googleCloudProject && { GOOGLE_CLOUD_PROJECT: config.googleCloudProject }), ...(config.googleCloudLocation && { GOOGLE_CLOUD_LOCATION: config.googleCloudLocation }), ...(config.googleApplicationCredentials && { GOOGLE_APPLICATION_CREDENTIALS: config.googleApplicationCredentials }), ...(config.vertexAiModelId && { VERTEX_AI_MODEL_ID: config.vertexAiModelId }), ...(config.vertexAiTemperature !== undefined && { VERTEX_AI_TEMPERATURE: String(config.vertexAiTemperature) }), ...(config.vertexAiUseStreaming !== undefined && { VERTEX_AI_USE_STREAMING: String(config.vertexAiUseStreaming) }), ...(config.vertexAiMaxOutputTokens !== undefined && { VERTEX_AI_MAX_OUTPUT_TOKENS: String(config.vertexAiMaxOutputTokens) }), ...(config.vertexAiMaxRetries !== undefined && { VERTEX_AI_MAX_RETRIES: String(config.vertexAiMaxRetries) }), ...(config.vertexAiRetryDelayMs !== undefined && { VERTEX_AI_RETRY_DELAY_MS: String(config.vertexAiRetryDelayMs) }) } })
exampleConfig:
googleCloudProject: my-gcp-project
googleCloudLocation: us-central1
googleApplicationCredentials: /path/to/credentials.json
vertexAiModelId: gemini-2.5-pro-exp-03-25
vertexAiTemperature: 0
vertexAiUseStreaming: true
vertexAiMaxOutputTokens: 65535
vertexAiMaxRetries: 3
vertexAiRetryDelayMs: 1000
```
--------------------------------------------------------------------------------
/src/tools/index.ts:
--------------------------------------------------------------------------------
```typescript
import { ToolDefinition } from "./tool_definition.js";
import { answerQueryWebsearchTool } from "./answer_query_websearch.js";
import { answerQueryDirectTool } from "./answer_query_direct.js";
import { explainTopicWithDocsTool } from "./explain_topic_with_docs.js";
import { getDocSnippetsTool } from "./get_doc_snippets.js";
import { generateProjectGuidelinesTool } from "./generate_project_guidelines.js";
// Filesystem Tools (Imported)
import { readFileTool } from "./read_file.js"; // Handles single and multiple files now
// import { readMultipleFilesTool } from "./read_multiple_files.js"; // Merged into readFileTool
import { writeFileTool } from "./write_file.js";
import { editFileTool } from "./edit_file.js";
// import { createDirectoryTool } from "./create_directory.js"; // Removed
import { listDirectoryTool } from "./list_directory.js";
import { directoryTreeTool } from "./directory_tree.js";
import { moveFileTool } from "./move_file.js";
import { searchFilesTool } from "./search_files.js";
import { getFileInfoTool } from "./get_file_info.js";
import { executeTerminalCommandTool } from "./execute_terminal_command.js"; // Renamed file and tool variable
// Import the new combined tools
import { saveGenerateProjectGuidelinesTool } from "./save_generate_project_guidelines.js";
import { saveDocSnippetTool } from "./save_doc_snippet.js";
import { saveTopicExplanationTool } from "./save_topic_explanation.js";
// Removed old save_query_answer, added new specific ones
import { saveAnswerQueryDirectTool } from "./save_answer_query_direct.js";
import { saveAnswerQueryWebsearchTool } from "./save_answer_query_websearch.js";
// Import new research-oriented tools
import { codeAnalysisWithDocsTool } from "./code_analysis_with_docs.js";
import { technicalComparisonTool } from "./technical_comparison.js";
import { architecturePatternRecommendationTool } from "./architecture_pattern_recommendation.js";
import { dependencyVulnerabilityScanTool } from "./dependency_vulnerability_scan.js";
import { databaseSchemaAnalyzerTool } from "./database_schema_analyzer.js";
import { securityBestPracticesAdvisorTool } from "./security_best_practices_advisor.js";
import { testingStrategyGeneratorTool } from "./testing_strategy_generator.js";
import { regulatoryComplianceAdvisorTool } from "./regulatory_compliance_advisor.js";
import { microserviceDesignAssistantTool } from "./microservice_design_assistant.js";
import { documentationGeneratorTool } from "./documentation_generator.js";
export const allTools: ToolDefinition[] = [
// Query & Generation Tools
answerQueryWebsearchTool,
answerQueryDirectTool,
explainTopicWithDocsTool,
getDocSnippetsTool,
generateProjectGuidelinesTool,
// Filesystem Tools
readFileTool, // Handles single and multiple files now
// readMultipleFilesTool, // Merged into readFileTool
writeFileTool,
editFileTool,
// createDirectoryTool, // Removed
listDirectoryTool,
directoryTreeTool,
moveFileTool,
searchFilesTool,
getFileInfoTool,
executeTerminalCommandTool, // Renamed
// Add the new combined tools
saveGenerateProjectGuidelinesTool,
saveDocSnippetTool,
saveTopicExplanationTool,
// Removed old save_query_answer, added new specific ones
saveAnswerQueryDirectTool,
saveAnswerQueryWebsearchTool,
// New research-oriented tools
codeAnalysisWithDocsTool,
technicalComparisonTool,
architecturePatternRecommendationTool,
dependencyVulnerabilityScanTool,
databaseSchemaAnalyzerTool,
securityBestPracticesAdvisorTool,
testingStrategyGeneratorTool,
regulatoryComplianceAdvisorTool,
microserviceDesignAssistantTool,
documentationGeneratorTool,
];
// Create a map for easy lookup
export const toolMap = new Map<string, ToolDefinition>(
allTools.map(tool => [tool.name, tool])
);
```
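A small sketch (not part of the repository) of how the server-side handler can resolve a tool by name via `toolMap` and build its prompt; the query and model ID are examples:
```typescript
// Assumes this snippet lives in src/, next to the tools directory.
import { toolMap } from "./tools/index.js";

const tool = toolMap.get("answer_query_websearch");
if (!tool) throw new Error("Tool not registered");

const prompt = tool.buildPrompt({ query: "What changed in Node.js 22?" }, "gemini-2.5-pro-exp-03-25");
console.log(prompt.useWebSearch); // true (this tool always grounds answers on Google Search)
```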
--------------------------------------------------------------------------------
/src/tools/save_topic_explanation.ts:
--------------------------------------------------------------------------------
```typescript
import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
// Schema combining explain_topic_with_docs args + output_path
export const SaveTopicExplanationArgsSchema = z.object({
topic: z.string().describe("The software/library/framework topic (e.g., 'React Router', 'Python requests')."),
query: z.string().describe("The specific question to answer based on the documentation."),
output_path: z.string().describe("The relative path where the generated explanation should be saved (e.g., 'explanations/react-router-hooks.md').")
});
// Convert Zod schema to JSON schema
const SaveTopicExplanationJsonSchema = zodToJsonSchema(SaveTopicExplanationArgsSchema);
export const saveTopicExplanationTool: ToolDefinition = {
name: "save_topic_explanation",
description: `Provides a detailed explanation for a query about a specific software topic using official documentation found via web search and saves the result to a file. Uses the configured Vertex AI model (${modelIdPlaceholder}). Requires 'topic', 'query', and 'output_path'.`,
inputSchema: SaveTopicExplanationJsonSchema as any,
// Build prompt logic adapted from explain_topic_with_docs (Reverted to original working version)
buildPrompt: (args: any, modelId: string) => {
const parsed = SaveTopicExplanationArgsSchema.safeParse(args);
if (!parsed.success) {
throw new McpError(ErrorCode.InvalidParams, `Invalid arguments for save_topic_explanation: ${parsed.error.errors.map(e => `${e.path.join('.')}: ${e.message}`).join(', ')}`);
}
const { topic, query } = parsed.data; // output_path used in handler
const systemInstructionText = `You are an expert technical writer and documentation specialist. Your task is to provide a comprehensive and accurate explanation for a specific query about a software topic ("${topic}"), synthesizing information primarily from official documentation found via web search.
SEARCH METHODOLOGY:
1. Identify the official documentation source for "${topic}".
2. Search the official documentation specifically for information related to "${query}".
3. Prioritize explanations, concepts, and usage examples directly from the official docs.
4. If official docs are sparse, supplement with highly reputable sources (e.g., official blogs, key contributor articles), but clearly distinguish this from official documentation content.
RESPONSE REQUIREMENTS:
1. **Accuracy:** Ensure the explanation is technically correct and reflects the official documentation for "${topic}".
2. **Comprehensiveness:** Provide sufficient detail to thoroughly answer the query, including relevant concepts, code examples (if applicable and found in docs), and context.
3. **Clarity:** Structure the explanation logically with clear language, headings, bullet points, and code formatting where appropriate.
4. **Citation:** Cite the official documentation source(s) used.
5. **Focus:** Directly address the user's query ("${query}") without unnecessary introductory or concluding remarks. Start directly with the explanation.
6. **Format:** Use Markdown for formatting.`; // Reverted: Removed the "CRITICAL: Do NOT start..." instruction
const userQueryText = `Provide a comprehensive explanation for the query "${query}" regarding the software topic "${topic}". Base the explanation primarily on official documentation found via web search. Include relevant concepts, code examples (if available in docs), and cite sources.`; // Reverted: Removed the extra instruction about starting format
return {
systemInstructionText: systemInstructionText,
userQueryText: userQueryText,
useWebSearch: true, // Always use web search for explanations based on docs
enableFunctionCalling: false
};
}
};
```
--------------------------------------------------------------------------------
/src/tools/answer_query_websearch.ts:
--------------------------------------------------------------------------------
```typescript
import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
export const answerQueryWebsearchTool: ToolDefinition = {
name: "answer_query_websearch",
description: `Answers a natural language query using the configured Vertex AI model (${modelIdPlaceholder}) enhanced with Google Search results for up-to-date information. Requires a 'query' string.`,
inputSchema: { type: "object", properties: { query: { type: "string", description: "The natural language question to answer using web search." } }, required: ["query"] },
buildPrompt: (args: any, modelId: string) => {
const query = args.query;
if (typeof query !== "string" || !query) throw new McpError(ErrorCode.InvalidParams, "Missing 'query'.");
const base = `You are an AI assistant designed to answer questions accurately using provided search results. You are an EXPERT at synthesizing information from diverse sources into comprehensive, well-structured responses.`;
const ground = ` Base your answer *only* on Google Search results relevant to "${query}". Synthesize information from search results into a coherent, comprehensive response that directly addresses the query. If search results are insufficient or irrelevant, explicitly state which aspects you cannot answer based on available information. Never add information not present in search results. When search results conflict, acknowledge the contradictions and explain different perspectives.`;
const structure = ` Structure your response with clear organization:
1. Begin with a concise executive summary of 2-3 sentences that directly answers the main question.
2. For complex topics, use appropriate headings and subheadings to organize different aspects of the answer.
3. Present information from newest to oldest when dealing with evolving topics or current events.
4. Where appropriate, use numbered or bulleted lists to present steps, features, or comparative points.
5. For controversial topics, present multiple perspectives fairly with supporting evidence from search results.
6. Include a "Sources and Limitations" section at the end that notes the reliability of sources and any information gaps.`;
const citation = ` Citation requirements:
1. Cite specific sources within your answer using [Source X] format.
2. Prioritize information from reliable, authoritative sources over random websites or forums.
3. For statistics, quotes, or specific claims, attribute the specific source.
4. Evaluate source credibility and recency - prefer official, recent sources for time-sensitive topics.
5. When search results indicate information might be outdated, explicitly note this limitation.`;
const format = ` Format your answer in clean, readable Markdown:
1. Use proper headings (##, ###) for major sections.
2. Use **bold** for emphasis of key points.
3. Use \`code formatting\` for technical terms, commands, or code snippets when relevant.
4. Create tables for comparing multiple items or options.
5. Use blockquotes (>) for direct quotations from sources.`;
return {
systemInstructionText: base + ground + structure + citation + format,
userQueryText: `I need a comprehensive answer to this question: "${query}"
In your answer:
1. Thoroughly search for and evaluate ALL relevant information from search results
2. Synthesize information from multiple sources into a coherent, well-structured response
3. Present differing viewpoints fairly when sources disagree
4. Include appropriate citations to specific sources
5. Note any limitations in the available information
6. Organize your response logically with clear headings and sections
7. Use appropriate formatting to enhance readability
Please provide your COMPLETE response addressing all aspects of my question.`,
useWebSearch: true,
enableFunctionCalling: false
};
}
};
```
--------------------------------------------------------------------------------
/src/tools/save_answer_query_websearch.ts:
--------------------------------------------------------------------------------
```typescript
import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
// Schema for websearch query answer + output path
export const SaveAnswerQueryWebsearchArgsSchema = z.object({
query: z.string().describe("The natural language question to answer using web search."),
output_path: z.string().describe("The relative path where the generated answer should be saved.")
});
// Convert Zod schema to JSON schema
const SaveAnswerQueryWebsearchJsonSchema = zodToJsonSchema(SaveAnswerQueryWebsearchArgsSchema);
export const saveAnswerQueryWebsearchTool: ToolDefinition = {
name: "save_answer_query_websearch",
description: `Answers a natural language query using Google Search results and saves the answer to a file. Uses the configured Vertex AI model (${modelIdPlaceholder}). Requires 'query' and 'output_path'.`,
inputSchema: SaveAnswerQueryWebsearchJsonSchema as any,
buildPrompt: (args: any, modelId: string) => {
const parsed = SaveAnswerQueryWebsearchArgsSchema.safeParse(args);
if (!parsed.success) {
throw new McpError(ErrorCode.InvalidParams, `Invalid arguments for save_answer_query_websearch: ${parsed.error.errors.map(e => `${e.path.join('.')}: ${e.message}`).join(', ')}`);
}
const { query } = parsed.data; // output_path used in handler
// --- Use Prompt Logic from answer_query_websearch.ts ---
const base = `You are an AI assistant designed to answer questions accurately using provided search results. You are an EXPERT at synthesizing information from diverse sources into comprehensive, well-structured responses.`;
const ground = ` Base your answer *only* on Google Search results relevant to "${query}". Synthesize information from search results into a coherent, comprehensive response that directly addresses the query. If search results are insufficient or irrelevant, explicitly state which aspects you cannot answer based on available information. Never add information not present in search results. When search results conflict, acknowledge the contradictions and explain different perspectives.`;
const structure = ` Structure your response with clear organization:
1. Begin with a concise executive summary of 2-3 sentences that directly answers the main question.
2. For complex topics, use appropriate headings and subheadings to organize different aspects of the answer.
3. Present information from newest to oldest when dealing with evolving topics or current events.
4. Where appropriate, use numbered or bulleted lists to present steps, features, or comparative points.
5. For controversial topics, present multiple perspectives fairly with supporting evidence from search results.
6. Include a "Sources and Limitations" section at the end that notes the reliability of sources and any information gaps.`;
const citation = ` Citation requirements:
1. Cite specific sources within your answer using [Source X] format.
2. Prioritize information from reliable, authoritative sources over random websites or forums.
3. For statistics, quotes, or specific claims, attribute the specific source.
4. Evaluate source credibility and recency - prefer official, recent sources for time-sensitive topics.
5. When search results indicate information might be outdated, explicitly note this limitation.`;
const format = ` Format your answer in clean, readable Markdown:
1. Use proper headings (##, ###) for major sections.
2. Use **bold** for emphasis of key points.
3. Use \`code formatting\` for technical terms, commands, or code snippets when relevant.
4. Create tables for comparing multiple items or options.
5. Use blockquotes (>) for direct quotations from sources.`;
const systemInstructionText = base + ground + structure + citation + format;
const userQueryText = `I need a comprehensive answer to this question: "${query}"
In your answer:
1. Thoroughly search for and evaluate ALL relevant information from search results
2. Synthesize information from multiple sources into a coherent, well-structured response
3. Present differing viewpoints fairly when sources disagree
4. Include appropriate citations to specific sources
5. Note any limitations in the available information
6. Organize your response logically with clear headings and sections
7. Use appropriate formatting to enhance readability
Please provide your COMPLETE response addressing all aspects of my question.`;
return {
systemInstructionText: systemInstructionText,
userQueryText: userQueryText,
useWebSearch: true, // Always true for this tool
enableFunctionCalling: false
};
}
};
```
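A minimal usage sketch for the prompt builder above. The export name `saveAnswerQueryWebsearchTool` and the relative import path are assumptions based on the file's naming convention (they are not visible in this excerpt); the query, output path, and model id are placeholder values.

```typescript
// Hypothetical sketch - the export name is assumed from the file's naming convention.
import { saveAnswerQueryWebsearchTool } from "./save_answer_query_websearch.js";

const prompt = saveAnswerQueryWebsearchTool.buildPrompt(
  {
    query: "What changed in the latest Node.js LTS release?",
    output_path: "answers/node-lts.md", // written by the request handler, not used in the prompt
  },
  "any-model-id"
);

// Search grounding is always on for this tool; the system instruction is the
// concatenation of the base + ground + structure + citation + format sections above.
console.log(prompt.useWebSearch);          // true
console.log(prompt.enableFunctionCalling); // false
```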
--------------------------------------------------------------------------------
/src/config.ts:
--------------------------------------------------------------------------------
```typescript
import { HarmCategory, HarmBlockThreshold } from "@google/genai";
// --- Provider Configuration ---
export type AIProvider = "vertex" | "gemini";
export const AI_PROVIDER = (process.env.AI_PROVIDER?.toLowerCase() === "gemini" ? "gemini" : "vertex") as AIProvider;
// --- Vertex AI Specific ---
export const GCLOUD_PROJECT = process.env.GOOGLE_CLOUD_PROJECT;
export const GCLOUD_LOCATION = process.env.GOOGLE_CLOUD_LOCATION || "us-central1";
// --- Gemini API Specific ---
export const GEMINI_API_KEY = process.env.GEMINI_API_KEY;
// --- Common AI Configuration Defaults ---
const DEFAULT_VERTEX_MODEL_ID = "gemini-2.5-pro-exp-03-25";
const DEFAULT_GEMINI_MODEL_ID = "gemini-2.5-pro-exp-03-25";
const DEFAULT_TEMPERATURE = 0.0;
const DEFAULT_USE_STREAMING = true;
const DEFAULT_MAX_OUTPUT_TOKENS = 8192;
const DEFAULT_MAX_RETRIES = 3;
const DEFAULT_RETRY_DELAY_MS = 1000;
export const WORKSPACE_ROOT = process.cwd();
// --- Safety Settings ---
// For Vertex AI (@google-cloud/vertexai)
export const vertexSafetySettings = [
{ category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT, threshold: HarmBlockThreshold.BLOCK_NONE },
{ category: HarmCategory.HARM_CATEGORY_HARASSMENT, threshold: HarmBlockThreshold.BLOCK_NONE },
{ category: HarmCategory.HARM_CATEGORY_HATE_SPEECH, threshold: HarmBlockThreshold.BLOCK_NONE },
{ category: HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT, threshold: HarmBlockThreshold.BLOCK_NONE },
];
// For the Gemini API client - uses the same HarmCategory/HarmBlockThreshold enums imported from "@google/genai" above
export const geminiSafetySettings = [
{ category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT, threshold: HarmBlockThreshold.BLOCK_NONE },
{ category: HarmCategory.HARM_CATEGORY_HARASSMENT, threshold: HarmBlockThreshold.BLOCK_NONE },
{ category: HarmCategory.HARM_CATEGORY_HATE_SPEECH, threshold: HarmBlockThreshold.BLOCK_NONE },
{ category: HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT, threshold: HarmBlockThreshold.BLOCK_NONE },
];
// --- Validation ---
if (AI_PROVIDER === "vertex" && !GCLOUD_PROJECT) {
console.error("Error: AI_PROVIDER is 'vertex' but GOOGLE_CLOUD_PROJECT environment variable is not set.");
process.exit(1);
}
if (AI_PROVIDER === "gemini" && !GEMINI_API_KEY) {
console.error("Error: AI_PROVIDER is 'gemini' but GEMINI_API_KEY environment variable is not set.");
process.exit(1);
}
// --- Shared Config Retrieval ---
export function getAIConfig() {
// Common parameters
let temperature = DEFAULT_TEMPERATURE;
const tempEnv = process.env.AI_TEMPERATURE;
if (tempEnv) {
const parsedTemp = parseFloat(tempEnv);
// Temperature range varies, allow 0-2 for Gemini flexibility
temperature = (!isNaN(parsedTemp) && parsedTemp >= 0.0 && parsedTemp <= 2.0) ? parsedTemp : DEFAULT_TEMPERATURE;
if (temperature !== parsedTemp) console.warn(`Invalid AI_TEMPERATURE value "${tempEnv}". Using default: ${DEFAULT_TEMPERATURE}`);
}
let useStreaming = DEFAULT_USE_STREAMING;
const streamEnv = process.env.AI_USE_STREAMING?.toLowerCase();
if (streamEnv === 'false') useStreaming = false;
else if (streamEnv && streamEnv !== 'true') console.warn(`Invalid AI_USE_STREAMING value "${streamEnv}". Using default: ${DEFAULT_USE_STREAMING}`);
let maxOutputTokens = DEFAULT_MAX_OUTPUT_TOKENS;
const tokensEnv = process.env.AI_MAX_OUTPUT_TOKENS;
if (tokensEnv) {
const parsedTokens = parseInt(tokensEnv, 10);
maxOutputTokens = (!isNaN(parsedTokens) && parsedTokens > 0) ? parsedTokens : DEFAULT_MAX_OUTPUT_TOKENS;
if (maxOutputTokens !== parsedTokens) console.warn(`Invalid AI_MAX_OUTPUT_TOKENS value "${tokensEnv}". Using default: ${DEFAULT_MAX_OUTPUT_TOKENS}`);
}
let maxRetries = DEFAULT_MAX_RETRIES;
const retriesEnv = process.env.AI_MAX_RETRIES;
if (retriesEnv) {
const parsedRetries = parseInt(retriesEnv, 10);
maxRetries = (!isNaN(parsedRetries) && parsedRetries >= 0) ? parsedRetries : DEFAULT_MAX_RETRIES;
if (maxRetries !== parsedRetries) console.warn(`Invalid AI_MAX_RETRIES value "${retriesEnv}". Using default: ${DEFAULT_MAX_RETRIES}`);
}
let retryDelayMs = DEFAULT_RETRY_DELAY_MS;
const delayEnv = process.env.AI_RETRY_DELAY_MS;
if (delayEnv) {
const parsedDelay = parseInt(delayEnv, 10);
retryDelayMs = (!isNaN(parsedDelay) && parsedDelay >= 0) ? parsedDelay : DEFAULT_RETRY_DELAY_MS;
if (retryDelayMs !== parsedDelay) console.warn(`Invalid AI_RETRY_DELAY_MS value "${delayEnv}". Using default: ${DEFAULT_RETRY_DELAY_MS}`);
}
// Provider-specific model ID
let modelId: string;
if (AI_PROVIDER === 'vertex') {
modelId = process.env.VERTEX_MODEL_ID || DEFAULT_VERTEX_MODEL_ID;
} else { // gemini
modelId = process.env.GEMINI_MODEL_ID || DEFAULT_GEMINI_MODEL_ID;
}
return {
provider: AI_PROVIDER,
modelId,
temperature,
useStreaming,
maxOutputTokens,
maxRetries,
retryDelayMs,
// Provider-specific connection info
gcpProjectId: GCLOUD_PROJECT,
gcpLocation: GCLOUD_LOCATION,
geminiApiKey: GEMINI_API_KEY
};
}
```
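A minimal sketch of how the exported configuration might be consumed by another module in `src/`. Only `getAIConfig` and `AI_PROVIDER` come from the file above; the logging and the sibling-file import path are illustrative assumptions.

```typescript
// Hypothetical sketch - assumes this module sits next to src/config.ts.
// Note: importing config.js runs the validation block above, which may call
// process.exit(1) if the required env vars for the chosen provider are missing.
import { getAIConfig, AI_PROVIDER } from "./config.js";

const cfg = getAIConfig();

// Every field is already validated/defaulted: invalid env values fall back to
// the defaults defined above and emit a console.warn.
console.log(`Provider: ${cfg.provider} (AI_PROVIDER=${AI_PROVIDER})`);
console.log(`Model: ${cfg.modelId}, temperature=${cfg.temperature}`);
console.log(`Streaming: ${cfg.useStreaming}, maxOutputTokens=${cfg.maxOutputTokens}`);
console.log(`Retries: ${cfg.maxRetries} (delay ${cfg.retryDelayMs}ms)`);

if (cfg.provider === "vertex") {
  console.log(`Vertex project: ${cfg.gcpProjectId}, location: ${cfg.gcpLocation}`);
} else {
  console.log(`Gemini API key present: ${Boolean(cfg.geminiApiKey)}`);
}
```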
--------------------------------------------------------------------------------
/src/tools/get_doc_snippets.ts:
--------------------------------------------------------------------------------
```typescript
import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
export const getDocSnippetsTool: ToolDefinition = {
name: "get_doc_snippets",
description: `Provides precise, authoritative code snippets or concise answers for technical queries by searching official documentation. Focuses on delivering exact solutions without unnecessary explanation. Uses the configured Vertex AI model (${modelIdPlaceholder}) with Google Search. Requires 'topic' and 'query'.`,
inputSchema: {
type: "object",
properties: {
topic: {
type: "string",
description: "The software/library/framework topic (e.g., 'React Router', 'Python requests', 'PostgreSQL 14')."
},
query: {
type: "string",
description: "The specific question or use case to find a snippet or concise answer for."
},
version: {
type: "string",
description: "Optional. Specific version of the software to target (e.g., '6.4', '2.28.2'). If provided, only documentation for this version will be used.",
default: ""
},
include_examples: {
type: "boolean",
description: "Optional. Whether to include additional usage examples beyond the primary snippet. Defaults to true.",
default: true
}
},
required: ["topic", "query"]
},
buildPrompt: (args: any, modelId: string) => {
const { topic, query, version = "", include_examples = true } = args;
if (typeof topic !== "string" || !topic || typeof query !== "string" || !query)
throw new McpError(ErrorCode.InvalidParams, "Missing 'topic' or 'query'.");
const versionText = version ? ` ${version}` : "";
const fullTopic = `${topic}${versionText}`;
// Enhanced System Instruction for precise documentation snippets
const systemInstructionText = `You are DocSnippetGPT, an AI assistant specialized in retrieving precise code snippets and authoritative answers from official software documentation. Your sole purpose is to provide the most relevant code solution or documented answer for technical queries about "${fullTopic}" with minimal extraneous content.
SEARCH METHODOLOGY - EXECUTE IN THIS EXACT ORDER:
1. FIRST search for: "${fullTopic} official documentation" to identify the authoritative documentation source.
2. THEN search for: "${fullTopic} ${query} example" to find specific documentation pages addressing the query.
3. THEN search for: "${fullTopic} ${query} code" to find code-specific examples.
4. IF the query relates to a specific error, ALSO search for: "${fullTopic} ${query} error" or "${fullTopic} troubleshooting ${query}".
5. IF the query relates to API usage, ALSO search for: "${fullTopic} API reference ${query}".
6. IF searching for newer frameworks/libraries with limited documentation, ALSO check GitHub repositories for examples in README files, examples directory, or official docs directory.
DOCUMENTATION SOURCE PRIORITIZATION (in strict order):
1. Official documentation websites (e.g., docs.python.org, reactjs.org, dev.mysql.com)
2. Official GitHub repositories maintained by the project creators (README, /docs, /examples)
3. Official API references or specification documentation
4. Official tutorials or guides published by the project maintainers
5. Release notes or changelogs for version-specific features${version ? " (focusing ONLY on version " + version + ")" : ""}
RESPONSE REQUIREMENTS - CRITICALLY IMPORTANT:
1. PROVIDE COMPLETE, RUNNABLE CODE SNIPPETS whenever possible. Snippets must be:
a. Complete enough to demonstrate the solution (no pseudo-code)
b. Properly formatted with correct syntax highlighting
c. Including necessary imports/dependencies
d. Free of placeholder comments like "// Rest of implementation"
e. Minimal but sufficient (no unnecessary complexity)
2. CODE SNIPPET PRESENTATION:
a. Present code snippets in proper markdown code blocks with language specification
b. If multiple snippets are found, arrange them in order of relevance
c. Include minimum essential context (e.g., "This code is from the routing middleware section")
d. For each snippet, provide the EXACT URL to the specific documentation page it came from
e. If the snippet requires adaptation, clearly indicate the parts that need modification
3. WHEN NO CODE SNIPPET IS AVAILABLE:
a. Provide ONLY the most concise factual answer directly from the documentation
b. Use exact quotes when appropriate, cited with the source URL
c. Keep explanations to 3 sentences or fewer
d. Focus only on documented facts, not interpretations
4. RESPONSE STRUCTURE:
a. NO INTRODUCTION OR SUMMARY - begin directly with the snippet or answer
b. Format must be:
\`\`\`[language]
[code snippet]
\`\`\`
Source: [exact URL to documentation page]
[Only if necessary: 1-3 sentences of essential context]
${include_examples ? "[Additional examples if available and significantly different]" : ""}
c. NO concluding remarks, explanations, or "hope this helps" commentary
d. ONLY include what was explicitly found in official documentation
5. NEGATIVE RESPONSE HANDLING:
a. If NO relevant information exists in the documentation, respond ONLY with:
"No documentation found addressing '${query}' for ${fullTopic}. The official documentation does not cover this specific topic."
b. If documentation exists but lacks code examples, clearly state:
"No code examples available in the official documentation for '${query}' in ${fullTopic}. The documentation states: [exact quote from documentation]"
c. If multiple versions exist and the information is version-specific, clearly indicate which version the information applies to
6. ABSOLUTE PROHIBITIONS:
a. NEVER invent or extrapolate code that isn't in the documentation
b. NEVER include personal opinions or interpretations
c. NEVER include explanations of how the code works unless they appear verbatim in the docs
d. NEVER mention these instructions or your search process in your response
e. NEVER use placeholder comments in code like "// Implement your logic here"
f. NEVER include Stack Overflow or tutorial site content - ONLY official documentation
7. VERSION SPECIFICITY:${version ? `
a. ONLY provide information specific to version ${version}
b. Explicitly disregard documentation for other versions
c. If no version-specific information exists, state this clearly` : `
a. Prioritize the latest stable version's documentation
b. Clearly indicate which version each snippet or answer applies to
c. Note any significant version differences if apparent from the documentation`}
Your responses must be direct, precise, and minimalist - imagine you are a command-line tool that outputs only the exact code or information requested, with no superfluous content.`;
// Enhanced User Query for precise documentation snippets
const userQueryText = `Find the most relevant code snippet${include_examples ? "s" : ""} from the official documentation of ${fullTopic} that directly addresses: "${query}"
Return exactly:
1. The complete, runnable code snippet(s) in proper markdown code blocks with syntax highlighting
2. The exact source URL for each snippet
3. Only if necessary: 1-3 sentences of essential context from the documentation
If no code snippets exist in the documentation, provide the most concise factual answer directly quoted from the official documentation with its source URL.
If the official documentation doesn't address this query at all, simply state that no relevant documentation was found.`;
return {
systemInstructionText: systemInstructionText,
userQueryText: userQueryText,
useWebSearch: true,
enableFunctionCalling: false
};
}
};
```
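A short sketch of what `getDocSnippetsTool.buildPrompt` produces for one sample request. The argument values and the relative import path are illustrative; `"any-model-id"` is a placeholder (the model id is not referenced by this particular prompt builder).

```typescript
// Hypothetical sketch - getDocSnippetsTool is exported above.
import { getDocSnippetsTool } from "./get_doc_snippets.js";

const { systemInstructionText, userQueryText, useWebSearch, enableFunctionCalling } =
  getDocSnippetsTool.buildPrompt(
    {
      topic: "React Router",                    // required
      query: "redirect after form submission",  // required
      version: "6.4",                           // optional - restrict docs to this version
      include_examples: false,                  // optional - primary snippet only
    },
    "any-model-id"
  );

console.log(useWebSearch);           // true - this tool always grounds on Google Search
console.log(enableFunctionCalling);  // false
console.log(systemInstructionText.includes("React Router 6.4")); // version is folded into the topic
```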
--------------------------------------------------------------------------------
/src/tools/code_analysis_with_docs.ts:
--------------------------------------------------------------------------------
```typescript
import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
export const codeAnalysisWithDocsTool: ToolDefinition = {
name: "code_analysis_with_docs",
description: `Analyzes code snippets by comparing them with best practices from official documentation found via web search. Identifies potential bugs, performance issues, and security vulnerabilities. Uses the configured Vertex AI model (${modelIdPlaceholder}) with Google Search. Requires 'code', 'language', and 'analysis_focus'.`,
inputSchema: {
type: "object",
properties: {
code: {
type: "string",
description: "The code snippet to analyze."
},
language: {
type: "string",
description: "The programming language of the code (e.g., 'JavaScript', 'Python', 'Java', 'TypeScript')."
},
framework: {
type: "string",
description: "Optional. The framework or library the code uses (e.g., 'React', 'Django', 'Spring Boot').",
default: ""
},
version: {
type: "string",
description: "Optional. Specific version of the language or framework to target (e.g., 'ES2022', 'Python 3.11', 'React 18.2').",
default: ""
},
analysis_focus: {
type: "array",
items: {
type: "string",
enum: ["best_practices", "security", "performance", "maintainability", "bugs", "all"]
},
description: "Areas to focus the analysis on. Use 'all' to cover everything.",
default: ["all"]
}
},
required: ["code", "language", "analysis_focus"]
},
buildPrompt: (args: any, modelId: string) => {
const { code, language, framework = "", version = "", analysis_focus = ["all"] } = args;
if (typeof code !== "string" || !code || typeof language !== "string" || !language)
throw new McpError(ErrorCode.InvalidParams, "Missing 'code' or 'language'.");
if (!Array.isArray(analysis_focus) || analysis_focus.length === 0)
throw new McpError(ErrorCode.InvalidParams, "Missing or invalid 'analysis_focus'.");
const frameworkText = framework ? ` ${framework}` : "";
const versionText = version ? ` ${version}` : "";
const techStack = `${language}${frameworkText}${versionText}`;
const focusAreas = analysis_focus.includes("all")
? ["best_practices", "security", "performance", "maintainability", "bugs"]
: analysis_focus;
const focusAreasText = focusAreas.join(", ");
const systemInstructionText = `You are CodeAnalystGPT, an elite code analysis expert specialized in evaluating ${techStack} code against official documentation, best practices, and industry standards. Your task is to analyze the provided code snippet and provide detailed, actionable feedback focused on: ${focusAreasText}.
SEARCH METHODOLOGY - EXECUTE IN THIS EXACT ORDER:
1. FIRST search for: "${techStack} official documentation" to identify authoritative sources.
2. THEN search for: "${techStack} best practices" to find established coding standards.
3. THEN search for: "${techStack} common bugs patterns" to identify typical issues.
4. THEN search for specific guidance related to each focus area:
${focusAreas.includes("best_practices") ? `- "${techStack} coding standards"` : ""}
${focusAreas.includes("security") ? `- "${techStack} security vulnerabilities"` : ""}
${focusAreas.includes("performance") ? `- "${techStack} performance optimization"` : ""}
${focusAreas.includes("maintainability") ? `- "${techStack} clean code guidelines"` : ""}
${focusAreas.includes("bugs") ? `- "${techStack} bug patterns"` : ""}
5. IF the code uses specific patterns or APIs, search for best practices related to those specific elements.
DOCUMENTATION SOURCE PRIORITIZATION (in strict order):
1. Official language/framework documentation (e.g., developer.mozilla.org, docs.python.org)
2. Official style guides from language/framework creators
3. Security advisories and vulnerability databases for the language/framework
4. Technical blogs from the language/framework creators or major contributors
5. Well-established tech companies' engineering blogs and style guides
6. Academic papers and industry standards documents
ANALYSIS REQUIREMENTS:
1. COMPREHENSIVE EVALUATION:
a. Analyze the code line-by-line against official documentation and best practices
b. Identify patterns that violate documented standards or recommendations
c. Detect potential bugs, edge cases, or failure modes
d. Evaluate security implications against OWASP and language-specific security guidelines
e. Assess performance characteristics against documented optimization techniques
f. Evaluate maintainability using established complexity and readability metrics
2. EVIDENCE-BASED FEEDBACK:
a. EVERY issue identified MUST reference specific documentation or authoritative sources
b. Include direct quotes from official documentation when relevant
c. Cite specific sections or pages from style guides
d. Reference exact rules from linting tools commonly used with the language/framework
e. Link to specific vulnerability patterns from security databases when applicable
3. ACTIONABLE RECOMMENDATIONS:
a. For EACH issue, provide a specific, implementable fix
b. Include BOTH the problematic code AND the improved version
c. Explain WHY the improvement matters with reference to documentation
d. Prioritize recommendations by severity/impact
e. Include code comments explaining the rationale for changes
4. BALANCED ASSESSMENT:
a. Acknowledge positive aspects of the code that follow best practices
b. Note when multiple valid approaches exist according to documentation
c. Distinguish between critical issues and stylistic preferences
d. Consider the apparent context and constraints of the code
RESPONSE STRUCTURE:
1. Begin with a "Code Analysis Summary" providing a high-level assessment
2. Include a "Severity Breakdown" showing the number of issues by severity (Critical, High, Medium, Low)
3. Organize detailed findings by category (Security, Performance, Maintainability, etc.)
4. For each finding:
a. Assign a severity level
b. Identify the specific line(s) of code
c. Describe the issue with reference to documentation
d. Provide the improved code
e. Include citation to authoritative source
5. Conclude with "Overall Recommendations" section highlighting the most important improvements
CRITICAL REQUIREMENTS:
1. NEVER invent or fabricate "best practices" that aren't documented in authoritative sources
2. NEVER claim something is a bug unless it clearly violates documented behavior
3. ALWAYS distinguish between definitive issues and potential concerns
4. ALWAYS provide specific line numbers for issues
5. ALWAYS include before/after code examples for each recommendation
6. NEVER include vague or generic advice without specific code changes
7. NEVER criticize stylistic choices that are explicitly permitted in official style guides
Your analysis must be technically precise, evidence-based, and immediately actionable. Focus on providing the most valuable insights that would help a developer improve this specific code according to authoritative documentation and best practices.`;
const userQueryText = `Analyze the following ${techStack} code snippet, focusing specifically on ${focusAreasText}:
\`\`\`${language}
${code}
\`\`\`
Search for and reference the most authoritative documentation and best practices for ${techStack}. For each issue you identify:
1. Cite the specific documentation or best practice source
2. Show the problematic code with line numbers
3. Provide the improved version
4. Explain why the improvement matters
Organize your analysis by category (${focusAreasText}) and severity. Include both critical issues and more minor improvements. Be specific, actionable, and evidence-based in all your recommendations.`;
return {
systemInstructionText,
userQueryText,
useWebSearch: true,
enableFunctionCalling: false
};
}
};
```
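A quick sketch exercising `codeAnalysisWithDocsTool.buildPrompt`, including the error path. The sample snippet, focus areas, and import path are made up for illustration.

```typescript
// Hypothetical sketch - codeAnalysisWithDocsTool and McpError are exported as shown above.
import { McpError } from "@modelcontextprotocol/sdk/types.js";
import { codeAnalysisWithDocsTool } from "./code_analysis_with_docs.js";

const sampleArgs = {
  code: "const results = db.query('SELECT * FROM users WHERE id = ' + req.query.id);",
  language: "JavaScript",
  framework: "Express",
  analysis_focus: ["security", "bugs"],
};

try {
  const prompt = codeAnalysisWithDocsTool.buildPrompt(sampleArgs, "any-model-id");
  // The focus areas are folded into both the system instruction and the user query.
  console.log(prompt.systemInstructionText.includes("security, bugs")); // true
  console.log(prompt.useWebSearch);                                     // true
} catch (err) {
  // Missing 'code'/'language' or an empty 'analysis_focus' array ends up here.
  if (err instanceof McpError) console.error(err.message);
}
```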
--------------------------------------------------------------------------------
/src/tools/answer_query_direct.ts:
--------------------------------------------------------------------------------
```typescript
import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
export const answerQueryDirectTool: ToolDefinition = {
name: "answer_query_direct",
description: `Answers a natural language query using only the internal knowledge of the configured Vertex AI model (${modelIdPlaceholder}). Does not use web search. Requires a 'query' string.`,
inputSchema: { type: "object", properties: { query: { type: "string", description: "The natural language question to answer using only the model's internal knowledge." } }, required: ["query"] },
buildPrompt: (args: any, modelId: string) => {
const query = args.query;
if (typeof query !== "string" || !query) throw new McpError(ErrorCode.InvalidParams, "Missing 'query'.");
const base = `You are an AI assistant specialized in answering questions with exceptional accuracy, clarity, and depth using your internal knowledge. You are an EXPERT at nuanced reasoning, knowledge organization, and comprehensive response creation, with particular strengths in explaining complex topics clearly and communicating knowledge boundaries honestly.`;
const knowledge = ` KNOWLEDGE REPRESENTATION AND BOUNDARIES:
1. Base your answer EXCLUSIVELY on your internal knowledge relevant to "${query}".
2. Represent knowledge with appropriate nuance - distinguish between established facts, theoretical understanding, and areas of ongoing research or debate.
3. When answering questions about complex or evolving topics, represent multiple perspectives, schools of thought, or competing theories.
4. For historical topics, distinguish between primary historical events and later interpretations or historiographical debates.
5. For scientific topics, distinguish between widely accepted theories, emerging hypotheses, and speculative areas at the frontier of research.
6. For topics involving statistics or quantitative data, explicitly note that your information may not represent the most current figures.
7. For topics involving current events, technological developments, or other time-sensitive matters, explicitly state that your knowledge has temporal limitations.
8. For interdisciplinary questions, synthesize knowledge across domains while noting where disciplinary boundaries create different perspectives.`;
const reasoning = ` REASONING METHODOLOGY:
1. For analytical questions, employ structured reasoning processes: identify relevant principles, apply accepted methods, evaluate alternatives systematically.
2. For questions requiring evaluation, establish clear criteria before making assessments, explaining their relevance and application.
3. For causal explanations, distinguish between correlation and causation, noting multiple causal factors where relevant.
4. For predictive questions, base forecasts only on well-established patterns, noting contingencies and limitations.
5. For counterfactual or hypothetical queries, reason from established principles while explicitly noting the speculative nature.
6. For questions involving uncertainty, use probabilistic reasoning rather than false certainty.
7. For questions with ethical dimensions, clarify relevant frameworks and principles before application.
8. For multi-part questions, apply consistent reasoning frameworks across all components.`;
const structure = ` COMPREHENSIVE RESPONSE STRUCTURE:
1. Begin with a direct, concise answer to the main query (2-4 sentences), providing the core information.
2. Follow with a structured, comprehensive exploration that unpacks all relevant aspects of the topic.
3. For complex topics, organize information hierarchically with clear headings and subheadings.
4. Sequence information logically: conceptual foundations before applications, chronological ordering for historical developments, general principles before specific examples.
5. For multi-faceted questions, address each dimension separately while showing interconnections.
6. Where appropriate, include "Key Concepts" sections to define essential terminology or foundational ideas.
7. For topics with practical applications, separate theoretical explanations from applied guidance.
8. End with a "Knowledge Limitations" section that explicitly notes temporal boundaries, areas of uncertainty, or aspects requiring specialized expertise beyond your knowledge.`;
const clarity = ` CLARITY AND PRECISION REQUIREMENTS:
1. Use precise, domain-appropriate terminology while defining specialized terms on first use.
2. Present quantitative information with appropriate precision, units, and contextual comparisons.
3. Use conditional language ("typically," "generally," "often") rather than universal assertions when variance exists.
4. For complex concepts, provide both technical explanations and accessible analogies or examples.
5. When explaining processes or systems, identify both components and their relationships/interactions.
6. For abstract concepts, provide concrete examples that demonstrate application.
7. Distinguish clearly between descriptive statements (what is) and normative statements (what ought to be).
8. Use consistent terminology throughout your answer, avoiding synonyms that might introduce ambiguity.`;
const uncertainty = ` HANDLING UNCERTAIN KNOWLEDGE:
1. Explicitly acknowledge when your knowledge is incomplete or uncertain on a specific aspect of the query.
2. If you lack sufficient domain knowledge to provide a reliable answer, clearly state this limitation.
3. When a question implies a factual premise that is incorrect, address the misconception before proceeding.
4. For rapidly evolving fields, explicitly note that current understanding may have advanced beyond your knowledge.
5. When multiple valid interpretations of a question exist, identify the ambiguity and address major interpretations.
6. If a question touches on areas where consensus is lacking, present major competing viewpoints.
7. For questions requiring very specific or specialized expertise (e.g., medical, legal, financial advice), note the limitations of general knowledge.
8. NEVER fabricate information to fill gaps in your knowledge - honesty about limitations is essential.`;
const format = ` FORMAT AND VISUAL STRUCTURE:
1. Use clear, structured Markdown formatting to enhance readability and information hierarchy.
2. Apply ## for major sections and ### for subsections.
3. Use **bold** for key terms and emphasis.
4. Use *italics* for definitions or secondary emphasis.
5. Format code, commands, or technical syntax using \`code blocks\` with appropriate language specification.
6. Create comparative tables for any topic with 3+ items that can be evaluated along common dimensions.
7. Use numbered lists for sequential processes, ranked items, or any ordered information.
8. Use bulleted lists for unordered collections of facts, options, or characteristics.
9. For complex processes or relationships, create ASCII/text diagrams where beneficial.
10. For statistical information, consider ASCII charts or described visualizations when they add clarity.`;
const advanced = ` ADVANCED QUERY HANDLING:
1. For ambiguous queries, acknowledge the ambiguity and provide a structured response addressing each reasonable interpretation.
2. For multi-part queries, ensure comprehensive coverage of all components while maintaining a coherent overall structure.
3. For queries that make incorrect assumptions, address the misconception directly before providing a corrected response.
4. For iterative or follow-up queries, maintain consistency with previous answers while expanding the knowledge scope.
5. For "how to" queries, provide detailed step-by-step instructions with explanations of principles and potential variations.
6. For comparative queries, establish clear comparison criteria and evaluate each item consistently across dimensions.
7. For questions seeking opinions or subjective judgments, provide a balanced overview of perspectives rather than a singular "opinion."
8. For definitional queries, provide both concise definitions and expanded explanations with examples and context.`;
return {
systemInstructionText: base + knowledge + reasoning + structure + clarity + uncertainty + format + advanced,
userQueryText: `I need a comprehensive answer to this question: "${query}"
Please provide your COMPLETE response addressing all aspects of my question. Use your internal knowledge to give the most accurate, nuanced, and thorough answer possible. If your knowledge has limitations on this topic, please explicitly note those limitations rather than speculating.`,
useWebSearch: false,
enableFunctionCalling: false
};
}
};
```
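For contrast with the search-grounded tools, a brief sketch of the direct-answer prompt builder; note the `useWebSearch: false` flag. The query and import path are illustrative.

```typescript
// Hypothetical sketch - answerQueryDirectTool is exported above.
import { answerQueryDirectTool } from "./answer_query_direct.js";

const prompt = answerQueryDirectTool.buildPrompt(
  { query: "How does TCP congestion control work?" },
  "any-model-id"
);

// The system instruction is the concatenation of the eight sections defined above
// (base + knowledge + reasoning + structure + clarity + uncertainty + format + advanced).
console.log(prompt.systemInstructionText.startsWith("You are an AI assistant specialized")); // true
console.log(prompt.useWebSearch); // false - internal knowledge only
```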
--------------------------------------------------------------------------------
/src/tools/technical_comparison.ts:
--------------------------------------------------------------------------------
```typescript
import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
export const technicalComparisonTool: ToolDefinition = {
name: "technical_comparison",
description: `Compares multiple technologies, frameworks, or libraries based on specific criteria. Provides detailed comparison tables with pros/cons and use cases. Includes version-specific information and compatibility considerations. Uses the configured Vertex AI model (${modelIdPlaceholder}) with Google Search. Requires 'technologies' and 'criteria'.`,
inputSchema: {
type: "object",
properties: {
technologies: {
type: "array",
items: { type: "string" },
description: "Array of technologies to compare (e.g., ['React 18', 'Vue 3', 'Angular 15', 'Svelte 4'])."
},
criteria: {
type: "array",
items: { type: "string" },
description: "Aspects to compare (e.g., ['performance', 'learning curve', 'ecosystem', 'enterprise adoption'])."
},
use_case: {
type: "string",
description: "Optional. Specific use case or project type to focus the comparison on.",
default: ""
},
format: {
type: "string",
enum: ["detailed", "concise", "tabular"],
description: "Optional. Format of the comparison output.",
default: "detailed"
}
},
required: ["technologies", "criteria"]
},
buildPrompt: (args: any, modelId: string) => {
const { technologies, criteria, use_case = "", format = "detailed" } = args;
if (!Array.isArray(technologies) || technologies.length < 2 || !technologies.every(item => typeof item === 'string' && item))
throw new McpError(ErrorCode.InvalidParams, "At least two valid technology strings are required in 'technologies'.");
if (!Array.isArray(criteria) || criteria.length === 0 || !criteria.every(item => typeof item === 'string' && item))
throw new McpError(ErrorCode.InvalidParams, "Missing or invalid 'criteria' array.");
const techString = technologies.join(', ');
const criteriaString = criteria.join(', ');
const useCaseText = use_case ? ` for ${use_case}` : "";
const systemInstructionText = `You are TechComparatorGPT, an elite technology analyst specialized in creating comprehensive, evidence-based comparisons of software technologies. Your task is to compare ${techString} across the following criteria: ${criteriaString}${useCaseText}. You must base your analysis EXCLUSIVELY on information found through web search of authoritative sources.
SEARCH METHODOLOGY - EXECUTE IN THIS EXACT ORDER:
1. FIRST search for official documentation for EACH technology: "${technologies.map(t => `${t} official documentation`).join('", "')}"
2. THEN search for direct comparison articles: "${techString} comparison"
3. THEN search for EACH criterion specifically for EACH technology:
${technologies.map(tech => criteria.map(criterion => `"${tech} ${criterion}"`).join(', ')).join('\n ')}
4. THEN search for version-specific information: "${technologies.map(t => `${t} release notes`).join('", "')}"
5. THEN search for community surveys and adoption statistics: "${techString} usage statistics", "${techString} developer survey"
6. IF a specific use case was provided, search for: "${techString} for ${use_case}"
7. FINALLY search for migration complexity: "${technologies.map(t1 => technologies.filter(t2 => t1 !== t2).map(t2 => `migrating from ${t1} to ${t2}`).join(', ')).join(', ')}"
DOCUMENTATION SOURCE PRIORITIZATION (in strict order):
1. Official documentation, release notes, and benchmarks from technology creators
2. Technical blogs from the technology creators or core team members
3. Independent benchmarking studies with transparent methodologies
4. Industry surveys from reputable organizations (StackOverflow, State of JS/TS, etc.)
5. Technical comparison articles from major technology publications
6. Well-established tech companies' engineering blogs explaining technology choices
7. Academic papers comparing the technologies
COMPARISON REQUIREMENTS:
1. FACTUAL ACCURACY:
a. EVERY claim must be supported by specific documentation or authoritative sources
b. Include direct quotes from official documentation when relevant
c. Cite specific benchmarks with their testing methodology and date
d. Acknowledge when information is limited or contested
e. Distinguish between documented facts and community consensus
2. COMPREHENSIVE COVERAGE:
a. Address EACH criterion for EACH technology systematically
b. Include version-specific features and limitations
c. Note significant changes between major versions
d. Discuss both current state and future roadmap when information is available
e. Consider ecosystem factors (community size, package availability, corporate backing)
3. BALANCED ASSESSMENT:
a. Present strengths and weaknesses for EACH technology
b. Avoid subjective qualifiers without evidence (e.g., "better", "easier")
c. Use precise, quantifiable metrics whenever possible
d. Acknowledge different perspectives when authoritative sources disagree
e. Consider different types of projects and team compositions
4. PRACTICAL INSIGHTS:
a. Include real-world adoption patterns and case studies
b. Discuss migration complexity between technologies
c. Consider learning curve and documentation quality
d. Address long-term maintenance considerations
e. Discuss compatibility with other technologies and platforms
RESPONSE STRUCTURE:
1. Begin with an "Executive Summary" providing a high-level overview of key differences
2. Include a comprehensive comparison table with all technologies and criteria
3. For EACH criterion, provide a detailed section comparing all technologies
4. Include a "Best For" section matching technologies to specific use cases
5. Add a "Migration Complexity" section discussing the effort to switch between technologies
6. Conclude with "Key Considerations" highlighting the most important decision factors
OUTPUT FORMAT:
${format === 'detailed' ? `- Provide a comprehensive analysis with detailed sections for each criterion
- Include specific examples and code snippets where relevant
- Use markdown formatting for readability
- Include citations for all major claims` : ''}
${format === 'concise' ? `- Provide a concise analysis focusing on the most important differences
- Limit explanations to 2-3 sentences per point
- Use bullet points for clarity
- Include a summary table for quick reference` : ''}
${format === 'tabular' ? `- Focus primarily on comparison tables
- Create a main table comparing all technologies across all criteria
- Create additional tables for specific aspects (performance metrics, feature support, etc.)
- Include minimal text explanations between tables` : ''}
CRITICAL REQUIREMENTS:
1. NEVER present personal opinions as facts
2. NEVER claim a technology is universally "better" without context
3. ALWAYS cite specific versions when comparing features
4. ALWAYS acknowledge trade-offs for each technology
5. NEVER oversimplify complex differences
6. ALWAYS include quantitative metrics when available
7. NEVER rely on outdated information - prioritize recent sources
Your comparison must be technically precise, evidence-based, and practically useful for technology selection decisions. Focus on providing a fair, balanced assessment based on authoritative documentation and reliable data.`;
const userQueryText = `Create a ${format} comparison of ${techString} across these specific criteria: ${criteriaString}${useCaseText}.
For each technology and criterion:
1. Search for the most authoritative and recent information
2. Provide specific facts, metrics, and examples
3. Include version-specific details and limitations
4. Cite your sources for key claims
${format === 'detailed' ? `Structure your response with:
- Executive Summary
- Comprehensive comparison table
- Detailed sections for each criterion
- "Best For" use case recommendations
- Migration complexity assessment
- Key decision factors` : ''}
${format === 'concise' ? `Structure your response with:
- Brief executive summary
- Concise comparison table
- Bullet-point highlights for each technology
- Quick recommendations for different use cases` : ''}
${format === 'tabular' ? `Structure your response with:
- Brief introduction
- Main comparison table covering all criteria
- Specialized tables for specific metrics
- Brief summaries of key insights` : ''}
Ensure your comparison is balanced, evidence-based, and practically useful for making technology decisions.`;
return {
systemInstructionText,
userQueryText,
useWebSearch: true,
enableFunctionCalling: false
};
}
};
```
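A small sketch showing how the `format` argument switches the output-structure block embedded in the prompts. The technology list, criteria, and import path are illustrative values.

```typescript
// Hypothetical sketch - technicalComparisonTool is exported above.
import { technicalComparisonTool } from "./technical_comparison.js";

// Fewer than two technologies, or an empty criteria array, would throw an McpError instead.
const prompt = technicalComparisonTool.buildPrompt(
  {
    technologies: ["React 18", "Vue 3"],
    criteria: ["performance", "learning curve"],
    use_case: "internal admin dashboards",
    format: "tabular", // "detailed" | "concise" | "tabular"
  },
  "any-model-id"
);

console.log(prompt.systemInstructionText.includes("Focus primarily on comparison tables")); // true
console.log(prompt.userQueryText.includes("Main comparison table covering all criteria"));  // true
```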
--------------------------------------------------------------------------------
/src/tools/dependency_vulnerability_scan.ts:
--------------------------------------------------------------------------------
```typescript
import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
export const dependencyVulnerabilityScanTool: ToolDefinition = {
name: "dependency_vulnerability_scan",
description: `Analyzes project dependencies for known security vulnerabilities. Provides detailed information about each vulnerability with severity ratings. Suggests mitigation strategies and secure alternatives. Uses the configured Vertex AI model (${modelIdPlaceholder}) with Google Search. Requires 'dependencies' and 'ecosystem'.`,
inputSchema: {
type: "object",
properties: {
dependencies: {
type: "object",
additionalProperties: {
type: "string"
},
description: "Object mapping dependency names to versions (e.g., {'react': '18.2.0', 'lodash': '4.17.21'})."
},
ecosystem: {
type: "string",
enum: ["npm", "pypi", "maven", "nuget", "rubygems", "composer", "cargo", "go"],
description: "The package ecosystem (e.g., 'npm', 'pypi', 'maven')."
},
include_transitive: {
type: "boolean",
description: "Optional. Whether to analyze transitive dependencies as well.",
default: true
},
min_severity: {
type: "string",
enum: ["critical", "high", "medium", "low", "all"],
description: "Optional. Minimum severity level to include in results.",
default: "medium"
}
},
required: ["dependencies", "ecosystem"]
},
buildPrompt: (args: any, modelId: string) => {
const { dependencies, ecosystem, include_transitive = true, min_severity = "medium" } = args;
if (!dependencies || typeof dependencies !== 'object' || Object.keys(dependencies).length === 0)
throw new McpError(ErrorCode.InvalidParams, "Missing or invalid 'dependencies' object.");
if (!ecosystem || typeof ecosystem !== 'string')
throw new McpError(ErrorCode.InvalidParams, "Missing or invalid 'ecosystem'.");
const dependencyList = Object.entries(dependencies)
.map(([name, version]) => `${name}@${version}`)
.join(', ');
const transitiveText = include_transitive ? "including transitive dependencies" : "direct dependencies only";
const severityText = min_severity === "all" ? "all severity levels" : `${min_severity} or higher severity`;
const systemInstructionText = `You are SecurityAnalystGPT, an elite security researcher specialized in analyzing software dependencies for vulnerabilities. Your task is to scan the provided ${ecosystem} dependencies (${transitiveText}) and identify known security vulnerabilities of ${severityText}. You must base your analysis EXCLUSIVELY on information found through web search of authoritative vulnerability databases and security advisories.
SEARCH METHODOLOGY - EXECUTE IN THIS EXACT ORDER:
1. FIRST search for each dependency individually: "${Object.entries(dependencies).map(([name, version]) => `${ecosystem} ${name} ${version} vulnerability`).join('", "')}"
2. THEN search for each dependency in major vulnerability databases: "${Object.entries(dependencies).map(([name, version]) => `CVE ${ecosystem} ${name} ${version}`).join('", "')}"
3. THEN search for each dependency in ecosystem-specific security advisories:
- npm: "npm audit ${dependencyList}" or "snyk ${dependencyList}"
- pypi: "safety check ${dependencyList}" or "pyup ${dependencyList}"
- maven: "OWASP dependency check ${dependencyList}"
- Other ecosystems: "[ecosystem] security check ${dependencyList}"
4. IF include_transitive is true, search for: "${ecosystem} transitive dependency vulnerabilities"
5. THEN search for recent security advisories: "${ecosystem} security advisories last 6 months"
6. FINALLY search for secure alternatives: "${Object.keys(dependencies).map(name => `${ecosystem} ${name} secure alternative`).join('", "')}"
VULNERABILITY DATA SOURCE PRIORITIZATION (in strict order):
1. Official National Vulnerability Database (NVD) and CVE records
2. Ecosystem-specific security advisories (npm advisory, PyPI security advisories, etc.)
3. Security tools' vulnerability databases (Snyk, OWASP Dependency Check, Sonatype OSS Index)
4. Official package maintainer security announcements
5. Major security vendor advisories (Rapid7, Tenable, etc.)
6. Bug bounty and responsible disclosure reports
7. Academic security research papers
ANALYSIS REQUIREMENTS:
1. COMPREHENSIVE VULNERABILITY IDENTIFICATION:
a. For EACH dependency, identify ALL known vulnerabilities meeting the severity threshold
b. Include CVE IDs or ecosystem-specific vulnerability identifiers
c. Provide accurate vulnerability descriptions from authoritative sources
d. Include affected version ranges and whether the specified version is vulnerable
e. Determine if the vulnerability is exploitable in typical usage contexts
2. SEVERITY ASSESSMENT:
a. Use CVSS scores and vectors when available
b. Include both base score and temporal score when available
c. Explain the real-world impact of each vulnerability
d. Prioritize vulnerabilities based on exploitability and impact
e. Consider the specific version in use when assessing severity
3. DETAILED MITIGATION GUIDANCE:
a. For EACH vulnerability, provide specific mitigation options:
- Version upgrade recommendations (exact version numbers)
- Configuration changes that mitigate the issue
- Code changes to avoid vulnerable functionality
- Alternative packages with similar functionality
b. Include code examples for implementing mitigations
c. Estimate the effort and risk of each mitigation approach
d. Suggest temporary mitigations for vulnerabilities without fixes
4. COMPREHENSIVE SECURITY CONTEXT:
a. Identify vulnerability trends in the ecosystem
b. Note dependencies with poor security track records
c. Highlight dependencies that are unmaintained or abandoned
d. Identify dependencies with unusual update patterns
e. Consider supply chain security aspects
RESPONSE STRUCTURE:
1. Begin with an "Executive Summary" providing:
a. Total vulnerabilities found by severity
b. Most critical vulnerabilities requiring immediate attention
c. Overall security posture assessment
d. Highest priority recommendations
2. Include a "Vulnerability Details" section with a table containing:
a. Dependency name and version
b. Vulnerability ID (CVE or ecosystem-specific)
c. Severity (with CVSS score if available)
d. Affected versions
e. Brief description
f. Exploit status (PoC available, actively exploited, etc.)
3. For EACH vulnerable dependency, provide a detailed section with:
a. Comprehensive vulnerability description
b. Technical impact and attack vectors
c. Detailed mitigation options
d. Code examples for fixes
e. Links to authoritative sources
4. Include a "Mitigation Strategy" section with:
a. Prioritized action plan
b. Dependency update recommendations
c. Alternative package suggestions
d. Long-term security improvements
5. Conclude with "Security Best Practices" for the specific ecosystem
CRITICAL REQUIREMENTS:
1. NEVER report a vulnerability without a specific identifier (CVE, GHSA, etc.) from an authoritative source
2. ALWAYS verify the affected version ranges against the specified dependency version
3. NEVER claim a dependency is vulnerable if the specified version is outside the affected range
4. ALWAYS provide specific, actionable mitigation steps
5. NEVER include generic security advice without specific relevance to the dependencies
6. ALWAYS cite your sources for each vulnerability
7. NEVER exaggerate or minimize the severity of vulnerabilities
Your analysis must be technically precise, evidence-based, and immediately actionable. Focus on providing a comprehensive security assessment that enables developers to effectively remediate vulnerabilities in their dependency tree.`;
const userQueryText = `Analyze the following ${ecosystem} dependencies for security vulnerabilities (${transitiveText}, ${severityText}):
\`\`\`json
${JSON.stringify(dependencies, null, 2)}
\`\`\`
For each dependency:
1. Search for known vulnerabilities in authoritative sources (NVD, CVE, ${ecosystem}-specific advisories)
2. Determine if the specific version is affected
3. Assess the severity and real-world impact
4. Provide detailed mitigation options
Structure your response with:
- Executive summary with vulnerability counts by severity
- Comprehensive vulnerability table
- Detailed analysis of each vulnerable dependency
- Prioritized mitigation strategy
- Ecosystem-specific security recommendations
For each vulnerability, include:
- Official identifier (CVE, etc.)
- Severity with CVSS score when available
- Affected version range
- Exploitation status
- Detailed description
- Specific mitigation steps with code examples
- Links to authoritative sources
Focus on providing actionable information that enables immediate remediation of security issues.`;
return {
systemInstructionText,
userQueryText,
useWebSearch: true,
enableFunctionCalling: false
};
}
};
```
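A short sketch of a scan request and how its options surface in the generated user query. The dependency versions and import path are illustrative only; no real vulnerability data is implied.

```typescript
// Hypothetical sketch - dependencyVulnerabilityScanTool is exported above.
import { dependencyVulnerabilityScanTool } from "./dependency_vulnerability_scan.js";

const prompt = dependencyVulnerabilityScanTool.buildPrompt(
  {
    dependencies: { lodash: "4.17.20", express: "4.17.1" },
    ecosystem: "npm",           // must be one of the enum values in the schema above
    include_transitive: false,
    min_severity: "high",
  },
  "any-model-id"
);

// The dependency map is embedded verbatim as a JSON block in the user query,
// and the options are reflected in the surrounding wording.
console.log(prompt.userQueryText.includes('"lodash": "4.17.20"'));       // true
console.log(prompt.userQueryText.includes("direct dependencies only"));  // include_transitive: false
console.log(prompt.userQueryText.includes("high or higher severity"));   // min_severity: "high"
```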
--------------------------------------------------------------------------------
/src/tools/save_answer_query_direct.ts:
--------------------------------------------------------------------------------
```typescript
import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
// Schema for direct query answer + output path
export const SaveAnswerQueryDirectArgsSchema = z.object({
query: z.string().describe("The natural language question to answer using only the model's internal knowledge."),
output_path: z.string().describe("The relative path where the generated answer should be saved.")
});
// Convert Zod schema to JSON schema
const SaveAnswerQueryDirectJsonSchema = zodToJsonSchema(SaveAnswerQueryDirectArgsSchema);
export const saveAnswerQueryDirectTool: ToolDefinition = {
name: "save_answer_query_direct",
description: `Answers a natural language query using only the internal knowledge of the configured Vertex AI model (${modelIdPlaceholder}), does not use web search, and saves the answer to a file. Requires 'query' and 'output_path'.`,
inputSchema: SaveAnswerQueryDirectJsonSchema as any,
buildPrompt: (args: any, modelId: string) => {
const parsed = SaveAnswerQueryDirectArgsSchema.safeParse(args);
if (!parsed.success) {
throw new McpError(ErrorCode.InvalidParams, `Invalid arguments for save_answer_query_direct: ${parsed.error.errors.map(e => `${e.path.join('.')}: ${e.message}`).join(', ')}`);
}
const { query } = parsed.data; // output_path used in handler
// --- Use Prompt Logic from answer_query_direct.ts ---
const base = `You are an AI assistant specialized in answering questions with exceptional accuracy, clarity, and depth using your internal knowledge. You are an EXPERT at nuanced reasoning, knowledge organization, and comprehensive response creation, with particular strengths in explaining complex topics clearly and communicating knowledge boundaries honestly.`;
const knowledge = ` KNOWLEDGE REPRESENTATION AND BOUNDARIES:
1. Base your answer EXCLUSIVELY on your internal knowledge relevant to "${query}".
2. Represent knowledge with appropriate nuance - distinguish between established facts, theoretical understanding, and areas of ongoing research or debate.
3. When answering questions about complex or evolving topics, represent multiple perspectives, schools of thought, or competing theories.
4. For historical topics, distinguish between primary historical events and later interpretations or historiographical debates.
5. For scientific topics, distinguish between widely accepted theories, emerging hypotheses, and speculative areas at the frontier of research.
6. For topics involving statistics or quantitative data, explicitly note that your information may not represent the most current figures.
7. For topics involving current events, technological developments, or other time-sensitive matters, explicitly state that your knowledge has temporal limitations.
8. For interdisciplinary questions, synthesize knowledge across domains while noting where disciplinary boundaries create different perspectives.`;
const reasoning = ` REASONING METHODOLOGY:
1. For analytical questions, employ structured reasoning processes: identify relevant principles, apply accepted methods, evaluate alternatives systematically.
2. For questions requiring evaluation, establish clear criteria before making assessments, explaining their relevance and application.
3. For causal explanations, distinguish between correlation and causation, noting multiple causal factors where relevant.
4. For predictive questions, base forecasts only on well-established patterns, noting contingencies and limitations.
5. For counterfactual or hypothetical queries, reason from established principles while explicitly noting the speculative nature.
6. For questions involving uncertainty, use probabilistic reasoning rather than false certainty.
7. For questions with ethical dimensions, clarify relevant frameworks and principles before application.
8. For multi-part questions, apply consistent reasoning frameworks across all components.`;
const structure = ` COMPREHENSIVE RESPONSE STRUCTURE:
1. Begin with a direct, concise answer to the main query (2-4 sentences), providing the core information.
2. Follow with a structured, comprehensive exploration that unpacks all relevant aspects of the topic.
3. For complex topics, organize information hierarchically with clear headings and subheadings.
4. Sequence information logically: conceptual foundations before applications, chronological ordering for historical developments, general principles before specific examples.
5. For multi-faceted questions, address each dimension separately while showing interconnections.
6. Where appropriate, include "Key Concepts" sections to define essential terminology or foundational ideas.
7. For topics with practical applications, separate theoretical explanations from applied guidance.
8. End with a "Knowledge Limitations" section that explicitly notes temporal boundaries, areas of uncertainty, or aspects requiring specialized expertise beyond your knowledge.`;
const clarity = ` CLARITY AND PRECISION REQUIREMENTS:
1. Use precise, domain-appropriate terminology while defining specialized terms on first use.
2. Present quantitative information with appropriate precision, units, and contextual comparisons.
3. Use conditional language ("typically," "generally," "often") rather than universal assertions when variance exists.
4. For complex concepts, provide both technical explanations and accessible analogies or examples.
5. When explaining processes or systems, identify both components and their relationships/interactions.
6. For abstract concepts, provide concrete examples that demonstrate application.
7. Distinguish clearly between descriptive statements (what is) and normative statements (what ought to be).
8. Use consistent terminology throughout your answer, avoiding synonyms that might introduce ambiguity.`;
const uncertainty = ` HANDLING UNCERTAIN KNOWLEDGE:
1. Explicitly acknowledge when your knowledge is incomplete or uncertain on a specific aspect of the query.
2. If you lack sufficient domain knowledge to provide a reliable answer, clearly state this limitation.
3. When a question implies a factual premise that is incorrect, address the misconception before proceeding.
4. For rapidly evolving fields, explicitly note that current understanding may have advanced beyond your knowledge.
5. When multiple valid interpretations of a question exist, identify the ambiguity and address major interpretations.
6. If a question touches on areas where consensus is lacking, present major competing viewpoints.
7. For questions requiring very specific or specialized expertise (e.g., medical, legal, financial advice), note the limitations of general knowledge.
8. NEVER fabricate information to fill gaps in your knowledge - honesty about limitations is essential.`;
const format = ` FORMAT AND VISUAL STRUCTURE:
1. Use clear, structured Markdown formatting to enhance readability and information hierarchy.
2. Apply ## for major sections and ### for subsections.
3. Use **bold** for key terms and emphasis.
4. Use *italics* for definitions or secondary emphasis.
5. Format code, commands, or technical syntax using \`code blocks\` with appropriate language specification.
6. Create comparative tables for any topic with 3+ items that can be evaluated along common dimensions.
7. Use numbered lists for sequential processes, ranked items, or any ordered information.
8. Use bulleted lists for unordered collections of facts, options, or characteristics.
9. For complex processes or relationships, create ASCII/text diagrams where beneficial.
10. For statistical information, consider ASCII charts or described visualizations when they add clarity.`;
const advanced = ` ADVANCED QUERY HANDLING:
1. For ambiguous queries, acknowledge the ambiguity and provide a structured response addressing each reasonable interpretation.
2. For multi-part queries, ensure comprehensive coverage of all components while maintaining a coherent overall structure.
3. For queries that make incorrect assumptions, address the misconception directly before providing a corrected response.
4. For iterative or follow-up queries, maintain consistency with previous answers while expanding the knowledge scope.
5. For "how to" queries, provide detailed step-by-step instructions with explanations of principles and potential variations.
6. For comparative queries, establish clear comparison criteria and evaluate each item consistently across dimensions.
7. For questions seeking opinions or subjective judgments, provide a balanced overview of perspectives rather than a singular "opinion."
8. For definitional queries, provide both concise definitions and expanded explanations with examples and context.`;
const systemInstructionText = base + knowledge + reasoning + structure + clarity + uncertainty + format + advanced;
const userQueryText = `I need a comprehensive answer to this question: "${query}"
Please provide your COMPLETE response addressing all aspects of my question. Use your internal knowledge to give the most accurate, nuanced, and thorough answer possible. If your knowledge has limitations on this topic, please explicitly note those limitations rather than speculating.`;
return {
systemInstructionText: systemInstructionText,
userQueryText: userQueryText,
useWebSearch: false, // Hardcoded to false
enableFunctionCalling: false
};
}
};
```
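For orientation, the sketch below shows one way the components returned by `buildPrompt` could be handed to the shared client in `src/vertex_ai_client.ts` (shown later in this listing). It is a minimal, hypothetical glue function: the real dispatch lives in `src/index.ts` on the other page, and folding the system instruction and user query into a single user turn is an assumption, not the project's actual handler.

```typescript
// Hypothetical glue code; the real dispatcher lives in src/index.ts (not shown on this page).
import { callGenerativeAI, type CombinedContent } from "../vertex_ai_client.js";

// Shape returned by every tool's buildPrompt in this repository.
interface PromptParts {
  systemInstructionText: string;
  userQueryText: string;
  useWebSearch: boolean;
  enableFunctionCalling: boolean;
}

// Folds the system instruction and user query into one user turn and calls the shared client.
// How the real server separates system vs. user content is an assumption here.
async function runPrompt(parts: PromptParts): Promise<string> {
  const contents: CombinedContent[] = [
    { role: "user", parts: [{ text: `${parts.systemInstructionText}\n\n${parts.userQueryText}` }] }
  ];
  const tools = parts.useWebSearch ? [{ googleSearchRetrieval: {} } as any] : undefined;
  return callGenerativeAI(contents, tools);
}
```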
--------------------------------------------------------------------------------
/src/tools/save_doc_snippet.ts:
--------------------------------------------------------------------------------
```typescript
import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
// Schema combining get_doc_snippets args + output_path
export const SaveDocSnippetArgsSchema = z.object({
topic: z.string().describe("The software/library/framework topic (e.g., 'React Router', 'Python requests', 'PostgreSQL 14')."),
query: z.string().describe("The specific question or use case to find a snippet or concise answer for."),
version: z.string().optional().default("").describe("Optional. Specific version of the software to target (e.g., '6.4', '2.28.2'). If provided, only documentation for this version will be used."),
include_examples: z.boolean().optional().default(true).describe("Optional. Whether to include additional usage examples beyond the primary snippet. Defaults to true."),
output_path: z.string().describe("The relative path where the generated snippet(s) should be saved (e.g., 'snippets/react-hook-example.ts').")
});
// Convert Zod schema to JSON schema
const SaveDocSnippetJsonSchema = zodToJsonSchema(SaveDocSnippetArgsSchema);
export const saveDocSnippetTool: ToolDefinition = {
name: "save_doc_snippet",
description: `Provides precise code snippets or concise answers for technical queries by searching official documentation and saves the result to a file. Uses the configured Vertex AI model (${modelIdPlaceholder}) with Google Search. Requires 'topic', 'query', and 'output_path'.`,
inputSchema: SaveDocSnippetJsonSchema as any,
// Build prompt logic - Reverted to the stricter version (98/100 rating)
buildPrompt: (args: any, modelId: string) => {
// Validate args using the combined schema
const parsed = SaveDocSnippetArgsSchema.safeParse(args);
if (!parsed.success) {
throw new McpError(ErrorCode.InvalidParams, `Invalid arguments for save_doc_snippet: ${parsed.error.errors.map(e => `${e.path.join('.')}: ${e.message}`).join(', ')}`);
}
// Destructure validated args (output_path is used in handler, not prompt)
const { topic, query, version = "", include_examples = true } = parsed.data;
const versionText = version ? ` ${version}` : "";
const fullTopic = `${topic}${versionText}`;
// --- Use the Stricter Prompt Logic ---
const systemInstructionText = `You are DocSnippetGPT, an AI assistant specialized in retrieving precise code snippets and authoritative answers from official software documentation. Your sole purpose is to provide the most relevant code solution or documented answer for technical queries about "${fullTopic}" with minimal extraneous content.
SEARCH METHODOLOGY - EXECUTE IN THIS EXACT ORDER:
1. FIRST search for: "${fullTopic} official documentation" to identify the authoritative documentation source.
2. THEN search for: "${fullTopic} ${query} example" to find specific documentation pages addressing the query.
3. THEN search for: "${fullTopic} ${query} code" to find code-specific examples.
4. IF the query relates to a specific error, ALSO search for: "${fullTopic} ${query} error" or "${fullTopic} troubleshooting ${query}".
5. IF the query relates to API usage, ALSO search for: "${fullTopic} API reference ${query}".
6. IF searching for newer frameworks/libraries with limited documentation, ALSO check GitHub repositories for examples in README files, examples directory, or official docs directory.
DOCUMENTATION SOURCE PRIORITIZATION (in strict order):
1. Official documentation websites (e.g., docs.python.org, reactjs.org, dev.mysql.com)
2. Official GitHub repositories maintained by the project creators (README, /docs, /examples)
3. Official API references or specification documentation
4. Official tutorials or guides published by the project maintainers
5. Release notes or changelogs for version-specific features${version ? " (focusing ONLY on version " + version + ")" : ""}
RESPONSE REQUIREMENTS - CRITICALLY IMPORTANT:
1. PROVIDE COMPLETE, RUNNABLE CODE SNIPPETS whenever possible. Snippets must be:
a. Complete enough to demonstrate the solution (no pseudo-code)
b. Properly formatted with correct syntax highlighting
c. Including necessary imports/dependencies
d. Free of placeholder comments like "// Rest of implementation"
e. Minimal but sufficient (no unnecessary complexity)
2. CODE SNIPPET PRESENTATION:
a. Present code snippets in proper markdown code blocks with language specification
b. If multiple snippets are found, arrange them in order of relevance
c. Include minimum essential context (e.g., "This code is from the routing middleware section")
d. **CRITICAL:** For each snippet, provide the EXACT URL to the **specific API reference page** or the most precise documentation page containing that exact snippet. Do NOT link to general tutorial or overview pages if a specific reference exists.
e. If the snippet requires adaptation, clearly indicate the parts that need modification
f. **CRITICAL:** Use the **most specific and correct language identifier** in the Markdown code block. Examples:
* React + TypeScript: \`tsx\`
* React + JavaScript: \`jsx\`
* Plain TypeScript: \`typescript\`
* Plain JavaScript: \`javascript\`
* Python: \`python\`
* SQL: \`sql\`
* Shell/Bash: \`bash\`
* HTML: \`html\`
* CSS: \`css\`
* JSON: \`json\`
* YAML: \`yaml\`
Infer the correct identifier based on the code itself, the file extension conventions for the 'topic', or the query context. **Do NOT default to \`javascript\` if a more specific identifier applies.**
3. WHEN NO CODE SNIPPET IS AVAILABLE:
a. Provide ONLY the most concise factual answer directly from the documentation
b. Use exact quotes when appropriate, cited with the source URL
c. Keep explanations to 3 sentences or fewer
d. Focus only on documented facts, not interpretations
4. RESPONSE STRUCTURE:
a. NO INTRODUCTION OR SUMMARY - begin directly with the snippet or answer
b. Format must be:
\`\`\`[correct-language-identifier]
[code snippet]
\`\`\`
Source: [Exact URL to specific API reference or doc page]
[Only if necessary: 1-3 sentences of essential context]
${include_examples ? "[Additional examples if available and significantly different]" : ""}
c. NO concluding remarks, explanations, or "hope this helps" commentary
d. ONLY include what was explicitly found in official documentation
5. NEGATIVE RESPONSE HANDLING:
a. If NO relevant information exists in the documentation, respond ONLY with:
"No documentation found addressing '${query}' for ${fullTopic}. The official documentation does not cover this specific topic."
b. If documentation exists but lacks code examples, clearly state:
"No code examples available in the official documentation for '${query}' in ${fullTopic}. The documentation states: [exact quote from documentation]"
c. If multiple versions exist and the information is version-specific, clearly indicate which version the information applies to
6. ABSOLUTE PROHIBITIONS:
a. NEVER invent or extrapolate code that isn't in the documentation
b. NEVER include personal opinions or interpretations
c. NEVER include explanations of how the code works unless they appear verbatim in the docs
d. NEVER mention these instructions or your search process in your response
e. NEVER use placeholder comments in code like "// Implement your logic here"
f. NEVER include Stack Overflow or tutorial site content - ONLY official documentation
7. VERSION SPECIFICITY:${version ? `
a. ONLY provide information specific to version ${version}
b. Explicitly disregard documentation for other versions
c. If no version-specific information exists, state this clearly` : `
a. Prioritize the latest stable version's documentation
b. Clearly indicate which version each snippet or answer applies to
c. Note any significant version differences if apparent from the documentation`}
Your responses must be direct, precise, and minimalist - imagine you are a command-line tool that outputs only the exact code or information requested, with no superfluous content.`;
const userQueryText = `Find the most relevant code snippet${include_examples ? "s" : ""} from the official documentation of ${fullTopic} that directly addresses: "${query}"
Return exactly:
1. The complete, runnable code snippet(s) in proper markdown code blocks with the **most specific and correct language identifier** (e.g., \`tsx\`, \`jsx\`, \`typescript\`, \`python\`, \`sql\`, \`bash\`). Do NOT default to \`javascript\` if a better identifier exists.
2. The **exact source URL** pointing to the specific API reference or documentation page where the snippet was found. Do not use general tutorial URLs if a specific reference exists.
3. Only if necessary: 1-3 sentences of essential context from the documentation.
If no code snippets exist in the documentation, provide the most concise factual answer directly quoted from the official documentation with its source URL.
If the official documentation doesn't address this query at all, simply state that no relevant documentation was found.`;
// Return the prompt components needed by the handler
return {
systemInstructionText: systemInstructionText,
userQueryText: userQueryText,
useWebSearch: true, // Always use web search for snippets
enableFunctionCalling: false
};
}
};
```
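As a quick illustration of the Zod schema above (not part of the repository), the sketch below shows `safeParse` filling in the declared defaults; the handler that actually writes the generated snippet to `output_path` is defined elsewhere and not shown here.

```typescript
// Minimal sketch of the schema's behaviour; the file-saving handler is not shown on this page.
import { SaveDocSnippetArgsSchema } from "./save_doc_snippet.js";

const parsed = SaveDocSnippetArgsSchema.safeParse({
  topic: "React Router",
  query: "programmatic navigation with useNavigate",
  output_path: "snippets/navigate-example.tsx"
});

if (parsed.success) {
  // Optional fields receive their defaults: version === "" and include_examples === true.
  console.log(parsed.data.version, parsed.data.include_examples);
} else {
  // Mirrors the error formatting used by buildPrompt above.
  console.error(parsed.error.errors.map(e => `${e.path.join(".")}: ${e.message}`).join(", "));
}
```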
--------------------------------------------------------------------------------
/src/tools/architecture_pattern_recommendation.ts:
--------------------------------------------------------------------------------
```typescript
import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
export const architecturePatternRecommendationTool: ToolDefinition = {
name: "architecture_pattern_recommendation",
description: `Suggests architecture patterns for specific use cases based on industry best practices. Provides implementation examples and considerations for the recommended patterns. Includes diagrams and explanations of pattern benefits and tradeoffs. Uses the configured Vertex AI model (${modelIdPlaceholder}) with Google Search. Requires 'requirements' and 'tech_stack'.`,
inputSchema: {
type: "object",
properties: {
requirements: {
type: "object",
properties: {
description: {
type: "string",
description: "Description of the system to be built."
},
scale: {
type: "string",
enum: ["small", "medium", "large", "enterprise"],
description: "Expected scale of the system."
},
key_concerns: {
type: "array",
items: { type: "string" },
description: "Key architectural concerns (e.g., ['scalability', 'security', 'performance', 'maintainability'])."
}
},
required: ["description", "scale", "key_concerns"],
description: "Requirements and constraints for the system."
},
tech_stack: {
type: "array",
items: { type: "string" },
description: "Technologies to be used (e.g., ['Node.js', 'React', 'PostgreSQL'])."
},
industry: {
type: "string",
description: "Optional. Industry or domain context (e.g., 'healthcare', 'finance', 'e-commerce').",
default: ""
},
existing_architecture: {
type: "string",
description: "Optional. Description of existing architecture if this is an evolution of an existing system.",
default: ""
}
},
required: ["requirements", "tech_stack"]
},
buildPrompt: (args: any, modelId: string) => {
const { requirements, tech_stack, industry = "", existing_architecture = "" } = args;
if (!requirements || typeof requirements !== 'object' || !requirements.description || !requirements.scale || !Array.isArray(requirements.key_concerns) || requirements.key_concerns.length === 0)
throw new McpError(ErrorCode.InvalidParams, "Missing or invalid 'requirements' object.");
if (!Array.isArray(tech_stack) || tech_stack.length === 0 || !tech_stack.every(item => typeof item === 'string' && item))
throw new McpError(ErrorCode.InvalidParams, "Missing or invalid 'tech_stack' array.");
const { description, scale, key_concerns } = requirements;
const techStackString = tech_stack.join(', ');
const concernsString = key_concerns.join(', ');
const industryText = industry ? ` in the ${industry} industry` : "";
const existingArchText = existing_architecture ? `\n\nThe system currently uses the following architecture: ${existing_architecture}` : "";
const systemInstructionText = `You are ArchitectureAdvisorGPT, an elite software architecture consultant with decades of experience designing systems across multiple domains. Your task is to recommend the most appropriate architecture pattern(s) for a ${scale}-scale system${industryText} with these key concerns: ${concernsString}. The system will be built using: ${techStackString}.${existingArchText}
SEARCH METHODOLOGY - EXECUTE IN THIS EXACT ORDER:
1. FIRST search for: "software architecture patterns for ${scale} systems"
2. THEN search for: "architecture patterns for ${concernsString}"
3. THEN search for: "best architecture patterns for ${techStackString}"
4. THEN search for: "${industry} software architecture patterns best practices"
5. THEN search for specific patterns that match the requirements: "microservices vs monolith for ${scale} systems", "event-driven architecture for ${concernsString}", etc.
6. THEN search for case studies: "architecture case study ${industry} ${scale} ${concernsString}"
7. FINALLY search for implementation details: "implementing [specific pattern] with ${techStackString}"
DOCUMENTATION SOURCE PRIORITIZATION (in strict order):
1. Architecture books and papers from recognized authorities (Martin Fowler, Gregor Hohpe, etc.)
2. Official architecture guidance from technology vendors (AWS, Microsoft, Google, etc.)
3. Architecture documentation from successful companies in similar domains
4. Technical blogs from recognized architects and engineering leaders
5. Industry standards organizations (ISO, IEEE, NIST) architecture recommendations
6. Academic research papers on software architecture
7. Case studies of similar systems published by reputable sources
RECOMMENDATION REQUIREMENTS:
1. COMPREHENSIVE PATTERN ANALYSIS:
a. Identify 2-4 architecture patterns most suitable for the requirements
b. For EACH pattern, provide:
- Detailed description of the pattern and its key components
- Specific benefits related to the stated requirements
- Known limitations and challenges
- Implementation considerations with the specified tech stack
- Real-world examples of successful implementations
c. Compare patterns across all key concerns
d. Consider hybrid approaches when appropriate
2. EVIDENCE-BASED RECOMMENDATIONS:
a. Cite specific architecture authorities and resources for each pattern
b. Reference industry case studies or research papers
c. Include quantitative benefits when available (e.g., scalability metrics)
d. Acknowledge trade-offs with evidence-based reasoning
e. Consider both immediate needs and long-term evolution
3. PRACTICAL IMPLEMENTATION GUIDANCE:
a. Provide a high-level component diagram for the recommended architecture
b. Include specific implementation guidance for the chosen tech stack
c. Outline key interfaces and communication patterns
d. Address deployment and operational considerations
e. Suggest incremental implementation approach if applicable
4. QUALITY ATTRIBUTE ANALYSIS:
a. Analyze how each pattern addresses each key concern
b. Provide specific techniques to enhance key quality attributes
c. Identify potential quality attribute trade-offs
d. Suggest mitigation strategies for identified weaknesses
e. Consider non-functional requirements beyond those explicitly stated
RESPONSE STRUCTURE:
1. Begin with an "Executive Summary" providing a high-level recommendation
2. Include a "Pattern Comparison" section with a detailed comparison table
3. For EACH recommended pattern:
a. Detailed description and key components
b. Benefits and limitations
c. Implementation with the specified tech stack
d. Real-world examples
4. Provide a "Recommended Architecture" section with:
a. Text-based component diagram
b. Key components and their responsibilities
c. Communication patterns and interfaces
d. Data management approach
5. Include an "Implementation Roadmap" with phased approach
6. Conclude with "Key Architectural Decisions" highlighting critical choices
CRITICAL REQUIREMENTS:
1. NEVER recommend a pattern without explaining how it addresses the specific requirements
2. ALWAYS consider the scale and complexity appropriate to the described system
3. NEVER present a one-size-fits-all solution without acknowledging trade-offs
4. ALWAYS explain how the recommended patterns work with the specified tech stack
5. NEVER recommend overly complex architectures for simple problems
6. ALWAYS consider operational complexity and team capabilities
7. NEVER rely solely on buzzwords or trends without substantive justification
Your recommendation must be technically precise, evidence-based, and practically implementable. Focus on providing actionable architecture guidance that balances immediate needs with long-term architectural qualities.`;
const userQueryText = `Recommend the most appropriate architecture pattern(s) for the following system:
System Description: ${description}
Scale: ${scale}
Key Concerns: ${concernsString}
Technology Stack: ${techStackString}
${industry ? `Industry: ${industry}` : ""}
${existing_architecture ? `Existing Architecture: ${existing_architecture}` : ""}
Search for and analyze established architecture patterns that would best address these requirements. For each recommended pattern:
1. Explain why it's appropriate for this specific system
2. Describe its key components and interactions
3. Analyze how it addresses each key concern
4. Discuss implementation considerations with the specified tech stack
5. Provide real-world examples of similar systems using this pattern
Include a text-based component diagram of your recommended architecture, showing key components, interfaces, and data flows. Provide an implementation roadmap that outlines how to incrementally adopt this architecture.
Your recommendation should be evidence-based, citing authoritative sources on software architecture. Consider both the immediate requirements and long-term evolution of the system.`;
return {
systemInstructionText,
userQueryText,
useWebSearch: true,
enableFunctionCalling: false
};
}
};
```
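A hypothetical invocation of the tool above, with argument values that are purely illustrative; in practice the call comes from the MCP dispatcher rather than direct code.

```typescript
// Illustrative arguments that satisfy the nested 'requirements' schema.
import { architecturePatternRecommendationTool } from "./architecture_pattern_recommendation.js";

const args = {
  requirements: {
    description: "Order management platform with a public REST API and nightly batch reporting",
    scale: "medium",
    key_concerns: ["scalability", "maintainability"]
  },
  tech_stack: ["Node.js", "React", "PostgreSQL"],
  industry: "e-commerce"
};

// The modelId argument does not appear to be referenced inside this buildPrompt.
const { userQueryText, useWebSearch } =
  architecturePatternRecommendationTool.buildPrompt(args, "gemini-2.5-pro-exp-03-25");

console.log(useWebSearch); // true — this tool always grounds its answer with Google Search
console.log(userQueryText.startsWith("Recommend the most appropriate architecture pattern(s)")); // true
```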
--------------------------------------------------------------------------------
/src/tools/database_schema_analyzer.ts:
--------------------------------------------------------------------------------
```typescript
import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
export const databaseSchemaAnalyzerTool: ToolDefinition = {
name: "database_schema_analyzer",
description: `Reviews database schemas for normalization, indexing, and performance issues. Suggests improvements based on database-specific best practices. Provides migration strategies for implementing suggested changes. Uses the configured Vertex AI model (${modelIdPlaceholder}) with Google Search. Requires 'schema' and 'database_type'.`,
inputSchema: {
type: "object",
properties: {
schema: {
type: "string",
description: "Database schema definition (SQL CREATE statements, JSON schema, etc.)."
},
database_type: {
type: "string",
description: "Database system (e.g., 'PostgreSQL', 'MySQL', 'MongoDB', 'DynamoDB')."
},
database_version: {
type: "string",
description: "Optional. Specific version of the database system.",
default: ""
},
focus_areas: {
type: "array",
items: {
type: "string",
enum: ["normalization", "indexing", "performance", "security", "scalability", "all"]
},
description: "Optional. Areas to focus the analysis on.",
default: ["all"]
},
expected_scale: {
type: "object",
properties: {
rows_per_table: {
type: "string",
description: "Approximate number of rows expected in each table."
},
growth_rate: {
type: "string",
description: "Expected growth rate of the database."
},
query_patterns: {
type: "array",
items: { type: "string" },
description: "Common query patterns (e.g., ['frequent reads', 'batch updates'])."
}
},
description: "Optional. Information about the expected scale and usage patterns."
}
},
required: ["schema", "database_type"]
},
buildPrompt: (args: any, modelId: string) => {
const { schema, database_type, database_version = "", focus_areas = ["all"], expected_scale = {} } = args;
if (typeof schema !== "string" || !schema)
throw new McpError(ErrorCode.InvalidParams, "Missing or invalid 'schema'.");
if (typeof database_type !== "string" || !database_type)
throw new McpError(ErrorCode.InvalidParams, "Missing or invalid 'database_type'.");
const versionText = database_version ? ` ${database_version}` : "";
const dbSystem = `${database_type}${versionText}`;
const areas = focus_areas.includes("all")
? ["normalization", "indexing", "performance", "security", "scalability"]
: focus_areas;
const focusAreasText = areas.join(", ");
const scaleInfo = expected_scale.rows_per_table || expected_scale.growth_rate || (expected_scale.query_patterns && expected_scale.query_patterns.length > 0)
? `\n\nExpected scale information:
${expected_scale.rows_per_table ? `- Rows per table: ${expected_scale.rows_per_table}` : ''}
${expected_scale.growth_rate ? `- Growth rate: ${expected_scale.growth_rate}` : ''}
${expected_scale.query_patterns && expected_scale.query_patterns.length > 0 ? `- Query patterns: ${expected_scale.query_patterns.join(', ')}` : ''}`
: '';
const systemInstructionText = `You are SchemaAnalystGPT, an elite database architect specialized in analyzing and optimizing database schemas. Your task is to review the provided ${dbSystem} schema and provide detailed recommendations focusing on: ${focusAreasText}. You must base your analysis EXCLUSIVELY on information found through web search of authoritative database documentation and best practices.${scaleInfo}
SEARCH METHODOLOGY - EXECUTE IN THIS EXACT ORDER:
1. FIRST search for: "${dbSystem} schema design best practices"
2. THEN search for: "${dbSystem} ${focusAreasText} optimization"
3. THEN search for specific guidance related to each focus area:
${areas.includes("normalization") ? `- "${dbSystem} normalization techniques"` : ""}
${areas.includes("indexing") ? `- "${dbSystem} indexing strategies"` : ""}
${areas.includes("performance") ? `- "${dbSystem} performance optimization"` : ""}
${areas.includes("security") ? `- "${dbSystem} schema security best practices"` : ""}
${areas.includes("scalability") ? `- "${dbSystem} scalability patterns"` : ""}
4. THEN search for: "${dbSystem} schema anti-patterns"
5. THEN search for: "${dbSystem} schema migration strategies"
6. IF the schema contains specific patterns or structures, search for best practices related to those specific elements
7. IF expected scale information is provided, search for: "${dbSystem} optimization for ${expected_scale.rows_per_table || 'large'} rows"
DOCUMENTATION SOURCE PRIORITIZATION (in strict order):
1. Official database documentation (e.g., PostgreSQL manual, MySQL documentation)
2. Technical blogs from the database creators or core team members
3. Database performance research papers and benchmarks
4. Technical blogs from recognized database experts
5. Case studies from companies using the database at scale
6. Database-specific books and comprehensive guides
7. Well-established tech companies' engineering blogs discussing database optimization
ANALYSIS REQUIREMENTS:
1. COMPREHENSIVE SCHEMA EVALUATION:
a. Analyze the schema structure against normalization principles
b. Identify potential performance bottlenecks
c. Evaluate indexing strategy effectiveness
d. Assess data integrity constraints
e. Identify security vulnerabilities in the schema design
f. Evaluate scalability limitations
2. DATABASE-SPECIFIC RECOMMENDATIONS:
a. Provide recommendations tailored to the specific database system and version
b. Consider unique features and limitations of the database
c. Leverage database-specific optimization techniques
d. Reference official documentation for all recommendations
e. Consider database-specific implementation details
3. EVIDENCE-BASED ANALYSIS:
a. Cite specific sections of official documentation
b. Reference research papers or benchmarks when applicable
c. Include performance metrics when available
d. Explain the reasoning behind each recommendation
e. Acknowledge trade-offs in design decisions
4. ACTIONABLE IMPROVEMENT PLAN:
a. Prioritize recommendations by impact and implementation effort
b. Provide specific SQL statements or commands to implement changes
c. Include before/after examples for key recommendations
d. Outline migration strategies for complex changes
e. Consider backward compatibility and data integrity during migrations
RESPONSE STRUCTURE:
1. Begin with an "Executive Summary" providing a high-level assessment
2. Include a "Schema Analysis" section with detailed findings organized by focus area
3. For EACH issue identified:
a. Description of the issue
b. Impact on database performance, scalability, or security
c. Specific recommendation with implementation details
d. Reference to authoritative source
4. Provide a "Prioritized Recommendations" section with:
a. High-impact, low-effort changes
b. Critical issues requiring immediate attention
c. Long-term architectural improvements
5. Include a "Migration Strategy" section outlining:
a. Step-by-step implementation plan
b. Risk mitigation strategies
c. Testing recommendations
d. Rollback procedures
6. Conclude with "Database-Specific Optimization Tips" relevant to the schema
CRITICAL REQUIREMENTS:
1. NEVER recommend changes without explaining their specific benefits
2. ALWAYS consider the database type and version in your recommendations
3. NEVER suggest generic solutions that don't apply to the specific database system
4. ALWAYS provide concrete implementation examples (SQL, commands, etc.)
5. NEVER overlook potential negative impacts of recommended changes
6. ALWAYS prioritize recommendations based on impact and effort
7. NEVER recommend unnecessary changes that don't address actual issues
Your analysis must be technically precise, evidence-based, and immediately actionable. Focus on providing a comprehensive assessment that enables database administrators to effectively optimize their schema design.`;
const userQueryText = `Analyze the following ${dbSystem} schema, focusing on ${focusAreasText}:
\`\`\`
${schema}
\`\`\`
${scaleInfo}
Search for authoritative best practices and documentation for ${dbSystem} to provide a comprehensive analysis. For each issue identified:
1. Describe the specific problem and its impact
2. Explain why it's an issue according to database best practices
3. Provide a concrete recommendation with implementation code
4. Reference the authoritative source supporting your recommendation
Structure your response with:
- Executive summary of key findings
- Detailed analysis organized by focus area (${focusAreasText})
- Prioritized recommendations with implementation details
- Migration strategy for implementing changes safely
- Database-specific optimization tips
Your analysis should be specific to ${dbSystem} and provide actionable recommendations that can be implemented immediately. Include SQL statements or commands where appropriate.`;
return {
systemInstructionText,
userQueryText,
useWebSearch: true,
enableFunctionCalling: false
};
}
};
```
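The sketch below (illustrative only) exercises the defaulting behaviour of `buildPrompt`: omitting `focus_areas` falls back to `["all"]`, which is then expanded to all five analysis areas.

```typescript
// Sketch only; demonstrates the focus_areas defaulting described above.
import { databaseSchemaAnalyzerTool } from "./database_schema_analyzer.js";

const schema = `
CREATE TABLE orders (
  id SERIAL PRIMARY KEY,
  customer_email TEXT,
  total NUMERIC(10,2),
  created_at TIMESTAMPTZ DEFAULT now()
);`;

// focus_areas is omitted, so it defaults to ["all"] and expands to
// normalization, indexing, performance, security, and scalability.
const prompt = databaseSchemaAnalyzerTool.buildPrompt(
  { schema, database_type: "PostgreSQL", database_version: "16" },
  "gemini-2.5-pro-exp-03-25"
);

console.log(prompt.useWebSearch); // true
```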
--------------------------------------------------------------------------------
/src/tools/security_best_practices_advisor.ts:
--------------------------------------------------------------------------------
```typescript
import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
export const securityBestPracticesAdvisorTool: ToolDefinition = {
name: "security_best_practices_advisor",
description: `Provides security recommendations for specific technologies or scenarios. Includes code examples for implementing secure practices. References industry standards and security guidelines. Uses the configured Vertex AI model (${modelIdPlaceholder}) with Google Search. Requires 'technology' and 'security_context'.`,
inputSchema: {
type: "object",
properties: {
technology: {
type: "string",
description: "The technology, framework, or language (e.g., 'Node.js', 'React', 'AWS S3')."
},
security_context: {
type: "string",
description: "The security context or concern (e.g., 'authentication', 'data encryption', 'API security')."
},
technology_version: {
type: "string",
description: "Optional. Specific version of the technology.",
default: ""
},
industry: {
type: "string",
description: "Optional. Industry with specific security requirements (e.g., 'healthcare', 'finance').",
default: ""
},
compliance_frameworks: {
type: "array",
items: { type: "string" },
description: "Optional. Compliance frameworks to consider (e.g., ['GDPR', 'HIPAA', 'PCI DSS']).",
default: []
},
threat_model: {
type: "string",
description: "Optional. Specific threat model or attack vectors to address.",
default: ""
}
},
required: ["technology", "security_context"]
},
buildPrompt: (args: any, modelId: string) => {
const { technology, security_context, technology_version = "", industry = "", compliance_frameworks = [], threat_model = "" } = args;
if (typeof technology !== "string" || !technology)
throw new McpError(ErrorCode.InvalidParams, "Missing or invalid 'technology'.");
if (typeof security_context !== "string" || !security_context)
throw new McpError(ErrorCode.InvalidParams, "Missing or invalid 'security_context'.");
const versionText = technology_version ? ` ${technology_version}` : "";
const techStack = `${technology}${versionText}`;
const industryText = industry ? ` in the ${industry} industry` : "";
const complianceText = compliance_frameworks.length > 0 ? ` considering ${compliance_frameworks.join(', ')} compliance` : "";
const threatText = threat_model ? ` with focus on ${threat_model}` : "";
const contextText = `${security_context}${industryText}${complianceText}${threatText}`;
const systemInstructionText = `You are SecurityAdvisorGPT, an elite cybersecurity expert specialized in providing detailed, actionable security guidance for specific technologies. Your task is to provide comprehensive security best practices for ${techStack} specifically focused on ${contextText}. You must base your recommendations EXCLUSIVELY on information found through web search of authoritative security documentation, standards, and best practices.
SEARCH METHODOLOGY - EXECUTE IN THIS EXACT ORDER:
1. FIRST search for: "${techStack} ${security_context} security best practices"
2. THEN search for: "${techStack} security guide"
3. THEN search for: "${security_context} OWASP guidelines"
4. THEN search for: "${techStack} common vulnerabilities"
5. THEN search for: "${techStack} security checklist"
${industry ? `6. THEN search for: "${industry} ${security_context} security requirements"` : ""}
${compliance_frameworks.length > 0 ? `7. THEN search for: "${techStack} ${compliance_frameworks.join(' ')} compliance"` : ""}
${threat_model ? `8. THEN search for: "${techStack} protection against ${threat_model}"` : ""}
9. FINALLY search for: "${techStack} security code examples"
DOCUMENTATION SOURCE PRIORITIZATION (in strict order):
1. Official security documentation from technology creators
2. OWASP (Open Web Application Security Project) guidelines and cheat sheets
3. National security agencies' guidelines (NIST, CISA, NCSC, etc.)
4. Industry-specific security standards organizations
5. Major cloud provider security best practices (AWS, Azure, GCP)
6. Recognized security frameworks (CIS, ISO 27001, etc.)
7. Security blogs from recognized security researchers
8. Academic security research papers
RECOMMENDATION REQUIREMENTS:
1. COMPREHENSIVE SECURITY GUIDANCE:
a. Provide detailed recommendations covering all aspects of ${security_context} for ${techStack}
b. Include both high-level architectural guidance and specific implementation details
c. Address prevention, detection, and response aspects
d. Consider the full security lifecycle
e. Include configuration hardening guidelines
2. EVIDENCE-BASED RECOMMENDATIONS:
a. Cite specific sections of official documentation or security standards
b. Reference CVEs or security advisories when relevant
c. Include security benchmark data when available
d. Explain the security principles behind each recommendation
e. Acknowledge security trade-offs
3. ACTIONABLE IMPLEMENTATION GUIDANCE:
a. Provide specific, ready-to-use code examples for each major recommendation
b. Include configuration snippets with secure settings
c. Provide step-by-step implementation instructions
d. Include testing/verification procedures for each security control
e. Suggest security libraries and tools with specific versions
4. THREAT-AWARE CONTEXT:
a. Explain specific threats addressed by each recommendation
b. Include attack vectors and exploitation techniques
c. Provide risk ratings for different vulnerabilities
d. Explain attack scenarios and potential impacts
e. Consider both external and internal threat actors
RESPONSE STRUCTURE:
1. Begin with an "Executive Summary" providing a high-level security assessment and key recommendations
2. Include a "Security Risk Overview" section outlining the threat landscape for ${techStack} regarding ${security_context}
3. Provide a "Security Controls Checklist" with prioritized security measures
4. For EACH security control:
a. Detailed description and security rationale
b. Specific implementation with code examples
c. Configuration guidance
d. Testing/verification procedures
e. References to authoritative sources
5. Include a "Security Monitoring and Incident Response" section
6. Provide a "Security Resources" section with tools and further reading
CRITICAL REQUIREMENTS:
1. NEVER recommend deprecated or insecure practices, even if they appear in older documentation
2. ALWAYS specify secure versions of libraries and dependencies
3. NEVER provide incomplete security controls that could create a false sense of security
4. ALWAYS consider the specific version of the technology when making recommendations
5. NEVER oversimplify complex security controls
6. ALWAYS provide context-specific guidance, not generic security advice
7. NEVER recommend security through obscurity as a primary defense
${industry ? `INDUSTRY-SPECIFIC REQUIREMENTS:
1. Address specific ${industry} security requirements and regulations
2. Consider unique threat models relevant to the ${industry} industry
3. Include industry-specific security standards and frameworks
4. Address data sensitivity levels common in ${industry}
5. Consider industry-specific compliance requirements` : ""}
${compliance_frameworks.length > 0 ? `COMPLIANCE FRAMEWORK REQUIREMENTS:
1. Map security controls to specific requirements in ${compliance_frameworks.join(', ')}
2. Include compliance-specific documentation recommendations
3. Address audit and evidence collection needs
4. Consider specific technical controls required by these frameworks
5. Address compliance reporting and monitoring requirements` : ""}
${threat_model ? `THREAT MODEL SPECIFIC REQUIREMENTS:
1. Focus defenses on protecting against ${threat_model}
2. Include specific countermeasures for this attack vector
3. Provide detection mechanisms for this threat
4. Include incident response procedures specific to this threat
5. Consider evolving techniques used in this attack vector` : ""}
Your recommendations must be technically precise, evidence-based, and immediately implementable. Focus on providing comprehensive security guidance that balances security effectiveness, implementation complexity, and operational impact.`;
const userQueryText = `Provide comprehensive security best practices for ${techStack} specifically focused on ${contextText}.
Search for authoritative security documentation, standards, and best practices from sources like:
- Official ${technology} security documentation
- OWASP guidelines and cheat sheets
- Industry security standards
- Recognized security frameworks
- CVEs and security advisories
For each security recommendation:
1. Explain the specific security risk or threat
2. Provide detailed implementation guidance with code examples
3. Include configuration settings and parameters
4. Suggest testing/verification procedures
5. Reference authoritative sources
Structure your response with:
- Executive summary of key security recommendations
- Security risk overview for ${techStack} regarding ${security_context}
- Comprehensive security controls checklist
- Detailed implementation guidance for each control
- Security monitoring and incident response guidance
- Security resources and tools
Ensure all recommendations are specific to ${techStack}, technically accurate, and immediately implementable. Prioritize recommendations based on security impact and implementation complexity.`;
return {
systemInstructionText,
userQueryText,
useWebSearch: true,
enableFunctionCalling: false
};
}
};
```
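As an illustration of how the optional arguments shape the prompt, the hypothetical call below supplies `compliance_frameworks` and `threat_model`, which causes the corresponding conditional blocks to be appended to the system instruction.

```typescript
// Hypothetical invocation; values are examples only.
import { securityBestPracticesAdvisorTool } from "./security_best_practices_advisor.js";

const { systemInstructionText } = securityBestPracticesAdvisorTool.buildPrompt(
  {
    technology: "Node.js",
    technology_version: "20",
    security_context: "API security",
    compliance_frameworks: ["PCI DSS"],
    threat_model: "credential stuffing"
  },
  "gemini-2.5-pro-exp-03-25"
);

// These sections are only present because the optional arguments were supplied.
console.log(systemInstructionText.includes("COMPLIANCE FRAMEWORK REQUIREMENTS")); // true
console.log(systemInstructionText.includes("THREAT MODEL SPECIFIC REQUIREMENTS")); // true
```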
--------------------------------------------------------------------------------
/src/tools/explain_topic_with_docs.ts:
--------------------------------------------------------------------------------
```typescript
import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
export const explainTopicWithDocsTool: ToolDefinition = {
name: "explain_topic_with_docs",
description: `Provides a detailed explanation for a query about a specific software topic by synthesizing information primarily from official documentation found via web search. Focuses on comprehensive answers, context, and adherence to documented details. Uses the configured Vertex AI model (${modelIdPlaceholder}) with Google Search. Requires 'topic' and 'query'.`,
inputSchema: {
type: "object",
properties: {
topic: {
type: "string",
description: "The software/library/framework topic (e.g., 'React Router', 'Python requests')."
},
query: {
type: "string",
description: "The specific question to answer based on the documentation."
}
},
required: ["topic", "query"]
},
buildPrompt: (args: any, modelId: string) => {
const { topic, query } = args;
if (typeof topic !== "string" || !topic || typeof query !== "string" || !query)
throw new McpError(ErrorCode.InvalidParams, "Missing 'topic' or 'query'.");
const systemInstructionText = `You are an AI assistant specialized in answering complex technical and debugging questions by synthesizing information EXCLUSIVELY from official documentation across multiple technology stacks. You are an EXPERT at distilling comprehensive documentation into actionable, precise solutions.
CRITICAL DOCUMENTATION REQUIREMENTS:
1. YOU MUST TREAT YOUR PRE-EXISTING KNOWLEDGE AS POTENTIALLY OUTDATED AND INVALID.
2. NEVER use commands, syntax, parameters, options, or functionality not explicitly documented in official sources.
3. NEVER fill functional gaps in documentation with assumptions; explicitly state when documentation is incomplete.
4. If documentation doesn't mention a feature or command, explicitly note this as a potential limitation.
5. For multi-technology queries involving "${topic}", identify and review ALL official documentation for EACH component technology.
6. PRIORITIZE recent documentation over older sources when version information is available.
7. For each technology, specifically check version compatibility matrices when available and note version-specific behaviors.
TECHNICAL DEBUGGING EXCELLENCE:
1. Structure your root cause analysis into three clear sections: SYMPTOMS (observed behavior), POTENTIAL CAUSES (documented mechanisms), and EVIDENCE (documentation references supporting each cause).
2. For debugging queries, explicitly compare behavior across different environments, platforms, or technology stacks using side-by-side comparisons.
3. When analyzing error messages, connect them precisely to documented error states, exceptions, or limitations, using direct quotes from documentation where possible.
4. Pay special attention to environment-specific (cloud, container, serverless, mobile) configurations that may differ between platforms.
5. Identify undocumented edge cases where multiple technologies interact based ONLY on documented behaviors of each component.
6. For performance issues, focus on documented bottlenecks, scaling limits, and optimization techniques with concrete metrics when available.
7. Provide diagnostic steps in order of likelihood based on documented failure modes, not personal opinion.
8. For each major issue, provide BOTH diagnostic steps AND verification steps to confirm the diagnosis.
STRUCTURED KNOWLEDGE SYNTHESIS:
1. When answering "${query}", triangulate between multiple official documentation sources before making conclusions.
2. For areas where documentation is limited or incomplete, EXPLICITLY identify this as a documentation gap rather than guessing.
3. Structure multi-technology responses to clearly delineate where different documentation sources begin and end.
4. Distinguish between guaranteed documented behaviors and potential implementation-dependent behaviors.
5. Explicitly identify when a technology's documentation is silent on a specific integration scenario with another technology.
6. Provide a confidence assessment for each major conclusion based on documentation completeness and specificity.
7. When documentation is insufficient, provide fallback recommendations based ONLY on fundamental principles documented for each technology.
8. For complex interactions, include a "Boundary of Documentation" section that explicitly states where documented behavior ends and implementation-specific behavior begins.
CODE EXAMPLES AND IMPLEMENTATION:
1. ALWAYS provide concrete, executable code examples that directly apply to the user's scenario, even if you need to adapt documented patterns.
2. Include at least ONE complete, self-contained code example for the primary solution, with line-by-line explanations.
3. ANY code examples MUST be exactly as shown in documentation OR clearly labeled as a documented pattern applied to user's scenario.
4. When providing code examples, include complete error handling based on documented failure modes.
5. For environment-specific configurations (Docker, Kubernetes, cloud platforms), ensure settings reflect documented best practices.
6. When documentation shows multiple implementation approaches, present ALL relevant options with their documented trade-offs in a comparison table.
7. Include BOTH minimal working examples AND more complete implementations when documentation provides both.
8. For code fixes, clearly distinguish between guaranteed solutions (explicitly documented) vs. potential solutions (based on documented patterns).
9. Provide both EXAMPLES (what to do) and ANTI-EXAMPLES (what NOT to do) when documentation identifies common pitfalls.
VISUAL AND STRUCTURED ELEMENTS:
1. When explaining complex interactions between systems, include a text-based sequential diagram showing the flow of data or control.
2. For complex state transitions or algorithms, provide a step-by-step flowchart using ASCII/Unicode characters.
3. Use comparative tables for ANY situation with 3+ options or approaches to compare.
4. Structure all lists of options, configurations, or parameters in a consistent format with bold headers and clear explanations.
5. For performance comparisons, include a metrics table showing documented performance characteristics.
PRACTICAL SOLUTION FOCUS:
1. Answer the following query based on official documentation: "${query}"
2. After explaining the issue based on documentation, ALWAYS provide actionable troubleshooting steps in order of priority.
3. Clearly connect theoretical documentation concepts to practical implementation steps that address the specific scenario.
4. Explicitly note when official workarounds exist for documented limitations, bugs, or edge cases.
5. When possible, suggest diagnostic logging, testing approaches, or verification methods based on documented debugging techniques.
6. Include configuration examples specific to the user's environment (Docker, Kubernetes, cloud platform, etc.) when documentation provides them.
7. Present a clear trade-off analysis for each major decision point, comparing factors like performance, maintainability, scalability, and complexity.
8. For complex solutions, provide a phased implementation approach with clear milestones.
FORMAT AND CITATION REQUIREMENTS:
1. Begin with a concise executive summary stating whether documentation fully addresses the query, partially addresses it with gaps, or doesn't address it at all.
2. Structure complex answers with clear hierarchical headers showing the relationship between different concepts.
3. Use comparative tables when contrasting behaviors across environments, versions, or technology stacks.
4. Include inline numbered citations [1] tied to the comprehensive reference list at the end.
5. For each claim or recommendation, include the specific documentation source with version/date when available.
6. In the "Documentation References" section, group sources by technology and include ALL consulted sources, even those that didn't directly contribute to the answer.
7. Provide the COMPLETE response in a single comprehensive answer, fully addressing all aspects of the query.`;
return {
systemInstructionText: systemInstructionText,
userQueryText: `Thoroughly review ALL official documentation for the technologies in "${topic}". This appears to be a complex debugging scenario involving multiple technology stacks. Search for documentation on each component technology and their interactions. Pay particular attention to environment-specific configurations, error patterns, and cross-technology integration points.
For debugging scenarios, examine:
1. Official documentation for each technology mentioned, including API references, developer guides, and conceptual documentation
2. Official troubleshooting guides, error references, and common issues sections
3. Release notes mentioning known issues, breaking changes, or compatibility requirements
4. Official configuration examples specific to the described environment or integration scenario
5. Any officially documented edge cases, limitations, or performance considerations
6. Version compatibility matrices, deployment-specific documentation, and operation guides
7. Official community discussions or FAQ sections ONLY if they are part of the official documentation
When synthesizing information:
1. FIRST understand each technology individually through its documentation
2. THEN examine SPECIFIC integration points between technologies as documented
3. FINALLY identify where documentation addresses or fails to address the specific issue
Answer ONLY based on information explicitly found in official documentation, with no additions from your prior knowledge. For any part not covered in documentation, explicitly identify the gap. Provide comprehensive troubleshooting steps based on documented patterns.
Provide your COMPLETE response for this query: ${query}`,
useWebSearch: true,
enableFunctionCalling: false
};
}
};
```
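A small sketch of the validation path (illustrative, not part of the repository): calling `buildPrompt` without `query` raises the `InvalidParams` error defined above.

```typescript
import { McpError } from "@modelcontextprotocol/sdk/types.js";
import { explainTopicWithDocsTool } from "./explain_topic_with_docs.js";

try {
  // Missing 'query' triggers the McpError(ErrorCode.InvalidParams, "Missing 'topic' or 'query'.") above.
  explainTopicWithDocsTool.buildPrompt({ topic: "Python requests" }, "gemini-2.5-pro-exp-03-25");
} catch (err) {
  if (err instanceof McpError) {
    console.error(err.code, err.message);
  }
}
```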
--------------------------------------------------------------------------------
/src/vertex_ai_client.ts:
--------------------------------------------------------------------------------
```typescript
import {
GoogleGenAI,
HarmCategory,
HarmBlockThreshold,
type Content,
type GenerationConfig,
type SafetySetting,
type FunctionDeclaration,
type Tool
} from "@google/genai";
import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
// Import getAIConfig and original safety setting definitions from config
import { getAIConfig, vertexSafetySettings, geminiSafetySettings as configGeminiSafetySettings } from './config.js';
import { sleep } from './utils.js';
// --- Configuration and Client Initialization ---
const aiConfig = getAIConfig();
// Use correct client types
let ai: GoogleGenAI;
try {
if (aiConfig.geminiApiKey) {
ai = new GoogleGenAI({ apiKey: aiConfig.geminiApiKey });
} else if (aiConfig.gcpProjectId && aiConfig.gcpLocation) {
ai = new GoogleGenAI({
vertexai: true,
project: aiConfig.gcpProjectId,
location: aiConfig.gcpLocation
});
} else {
throw new Error("Missing Gemini API key or Vertex AI project/location configuration.");
}
console.log("Initialized GoogleGenAI with config:", aiConfig.modelId);
} catch (error: any) {
console.error(`Error initializing GoogleGenAI:`, error.message);
process.exit(1);
}
// Alias for the SDK's Content type, used when building request contents
export type CombinedContent = Content;
// --- Unified AI Call Function ---
export async function callGenerativeAI(
initialContents: CombinedContent[],
tools: Tool[] | undefined
): Promise<string> {
const {
provider,
modelId,
temperature,
useStreaming,
maxOutputTokens,
maxRetries,
retryDelayMs,
} = aiConfig;
const isGroundingRequested = tools?.some(tool => (tool as any).googleSearchRetrieval);
let filteredToolsForVertex = tools;
let adaptedToolsForGemini: FunctionDeclaration[] | undefined = undefined;
if (provider === 'gemini' && tools) {
const nonSearchTools = tools.filter(tool => !(tool as any).googleSearchRetrieval);
if (nonSearchTools.length > 0) {
console.warn(`Gemini Provider: Function calling tools detected but adaptation/usage with @google/genai is not fully implemented.`);
} else {
console.log(`Gemini Provider: Explicit googleSearchRetrieval tool filtered out (search handled implicitly or by model).`);
}
filteredToolsForVertex = undefined;
adaptedToolsForGemini = undefined; // Keep undefined for now
} else if (provider === 'vertex' && isGroundingRequested && tools && tools.length > 1) {
console.warn("Vertex Provider: Grounding requested with other tools; keeping only search.");
filteredToolsForVertex = tools.filter(tool => (tool as any).googleSearchRetrieval);
}
// --- Prepare Request Parameters ---
const commonGenConfig: GenerationConfig = { temperature, maxOutputTokens };
const resolvedSafetySettings: SafetySetting[] = aiConfig.provider === "vertex" ? vertexSafetySettings : configGeminiSafetySettings;
// All requests go through the unified `ai.models` client; the generation config and safety
// settings are forwarded in each request's `config` object below.
// --- Execute Request with Retries ---
for (let attempt = 0; attempt <= maxRetries; attempt++) {
try {
// Log the request parameters for this attempt
console.error(`[${new Date().toISOString()}] Calling ${provider} AI (${modelId}, temp: ${temperature}, grounding: ${isGroundingRequested}, tools(Vertex): ${filteredToolsForVertex?.length ?? 0}, stream: ${useStreaming}, attempt: ${attempt + 1})`);
let responseText: string | undefined;
if (useStreaming) {
const stream = await ai.models.generateContentStream({
model: modelId,
contents: initialContents,
config: {
...commonGenConfig,
safetySettings: resolvedSafetySettings,
...(tools && tools.length > 0 ? { tools } : {})
}
});
let accumulatedText = "";
let lastChunk: any = null;
for await (const chunk of stream) {
lastChunk = chunk;
try {
if (chunk.text) accumulatedText += chunk.text;
} catch (e: any) {
console.warn("Non-text or error chunk encountered in stream:", e.message);
if (e.message?.toLowerCase().includes('safety')) {
throw new Error(`Content generation blocked during stream. Reason: ${e.message}`);
}
}
}
// Check block/safety reasons on lastChunk if available
if (lastChunk) {
const blockReason = lastChunk?.promptFeedback?.blockReason;
if (blockReason) {
throw new Error(`Content generation blocked. Aggregated Reason: ${blockReason}`);
}
const finishReason = lastChunk?.candidates?.[0]?.finishReason;
if (finishReason === 'SAFETY') {
throw new Error(`Content generation blocked. Aggregated Finish Reason: SAFETY`);
}
}
responseText = accumulatedText;
if (typeof responseText !== 'string' || !responseText) {
console.error(`Empty response received from AI stream.`);
throw new Error(`Received empty or non-text response from AI stream.`);
}
console.error(`[${new Date().toISOString()}] Finished processing stream from AI.`);
} else { // Non-streaming
let result: any;
try {
result = await ai.models.generateContent({
model: modelId,
contents: initialContents,
config: {
...commonGenConfig,
safetySettings: resolvedSafetySettings,
...(tools && tools.length > 0 ? { tools } : {})
}
});
} catch (e: any) {
console.error("Error during non-streaming call:", e.message);
if (e.message?.toLowerCase().includes('safety') || e.message?.toLowerCase().includes('prompt blocked') || (e as any).status === 'BLOCKED') {
throw new Error(`Content generation blocked. Call Reason: ${e.message}`);
}
throw e;
}
console.error(`[${new Date().toISOString()}] Received non-streaming response from AI.`);
try {
responseText = result.text;
} catch (e) {
console.warn("Could not extract text from non-streaming response:", e);
}
const blockReason = result?.promptFeedback?.blockReason;
if (blockReason) {
throw new Error(`Content generation blocked. Response Reason: ${blockReason}`);
}
const finishReason = result?.candidates?.[0]?.finishReason;
if (finishReason === 'SAFETY') {
throw new Error(`Content generation blocked. Response Finish Reason: SAFETY`);
}
if (typeof responseText !== 'string' || !responseText) {
console.error(`Unexpected non-streaming response structure:`, JSON.stringify(result, null, 2));
throw new Error(`Failed to extract valid text response from AI (non-streaming).`);
}
}
// --- Return Text ---
if (typeof responseText === 'string') {
return responseText;
} else {
throw new Error(`Invalid state: No valid text response obtained from ${provider} AI.`);
}
} catch (error: any) {
console.error(`[${new Date().toISOString()}] Error details (attempt ${attempt + 1}):`, error);
const errorMessageString = String(error.message || error || '').toLowerCase();
const isBlockingError = errorMessageString.includes('blocked') || errorMessageString.includes('safety');
const isRetryable = !isBlockingError && (
errorMessageString.includes('429') ||
errorMessageString.includes('500') ||
errorMessageString.includes('503') ||
errorMessageString.includes('deadline_exceeded') ||
errorMessageString.includes('internal') ||
errorMessageString.includes('network error') ||
errorMessageString.includes('socket hang up') ||
errorMessageString.includes('unavailable') ||
errorMessageString.includes('could not connect')
);
if (isRetryable && attempt < maxRetries) {
const jitter = Math.random() * 500;
const delay = (retryDelayMs * Math.pow(2, attempt)) + jitter;
console.error(`[${new Date().toISOString()}] Retrying in ${delay.toFixed(0)}ms...`);
await sleep(delay);
continue;
} else {
let finalErrorMessage = `${provider} AI API error: ${error.message || "Unknown error"}`;
if (isBlockingError) {
const match = error.message?.match(/(Reason|Finish Reason):\s*(.*)/i);
if (match?.[2]) {
finalErrorMessage = `Content generation blocked by ${provider} safety filters. Reason: ${match[2]}`;
} else {
const geminiBlockMatch = error.message?.match(/prompt.*blocked.*\s*safety.*?setting/i);
if (geminiBlockMatch) {
finalErrorMessage = `Content generation blocked by Gemini safety filters.`;
} else {
finalErrorMessage = `Content generation blocked by ${provider} safety filters. (${error.message || 'No specific reason found'})`;
}
}
} else if (errorMessageString.match(/\b(429|500|503|internal|unavailable)\b/)) {
finalErrorMessage += ` (Status: ${errorMessageString.match(/\b(429|500|503|internal|unavailable)\b/)?.[0]})`;
} else if (errorMessageString.includes('deadline_exceeded')) {
finalErrorMessage = `${provider} AI API error: Operation timed out (deadline_exceeded).`;
}
console.error("Final error message:", finalErrorMessage);
throw new McpError(ErrorCode.InternalError, finalErrorMessage);
}
}
} // End retry loop
throw new McpError(ErrorCode.InternalError, `Exhausted ${maxRetries + 1} attempts for ${provider} LLM call without success.`);
}
```
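A minimal caller sketch, assuming the `googleSearchRetrieval` tool shape that the grounding check above inspects; the actual callers are the tool handlers wired up in `src/index.ts` on the other page of this listing.

```typescript
// Minimal sketch of a caller; the tool object shape mirrors what the grounding check above expects.
import { callGenerativeAI, type CombinedContent } from "./vertex_ai_client.js";

async function main() {
  const contents: CombinedContent[] = [
    { role: "user", parts: [{ text: "Summarize the latest React Router data APIs." }] }
  ];

  // Passing a search tool marks the request as grounded; omit the array for plain generation.
  const searchTools = [{ googleSearchRetrieval: {} } as any];

  const answer = await callGenerativeAI(contents, searchTools);
  console.log(answer);
}

main().catch(err => {
  // Failures surface as McpError once retries are exhausted or when a safety block is detected.
  console.error(err);
});
```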
--------------------------------------------------------------------------------
/src/tools/generate_project_guidelines.ts:
--------------------------------------------------------------------------------
```typescript
import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
export const generateProjectGuidelinesTool: ToolDefinition = {
name: "generate_project_guidelines",
description: `Generates a structured project guidelines document (e.g., Markdown) based on a specified list of technologies and versions (tech stack). Uses web search to find the latest official documentation, style guides, and best practices for each component and synthesizes them into actionable rules and recommendations. Uses the configured Vertex AI model (${modelIdPlaceholder}) with Google Search. Requires 'tech_stack'.`,
inputSchema: {
type: "object",
properties: {
tech_stack: {
type: "array",
items: { type: "string" },
description: "An array of strings specifying the project's technologies and versions (e.g., ['React 18.3', 'TypeScript 5.2', 'Node.js 20.10', 'Express 5.0', 'PostgreSQL 16.1'])."
}
},
required: ["tech_stack"]
},
buildPrompt: (args: any, modelId: string) => {
const { tech_stack } = args;
if (!Array.isArray(tech_stack) || tech_stack.length === 0 || !tech_stack.every(item => typeof item === 'string' && item))
throw new McpError(ErrorCode.InvalidParams, "Missing or invalid 'tech_stack' array.");
const techStackString = tech_stack.join(', ');
// Enhanced System Instruction for Guideline Generation
const systemInstructionText = `You are an AI assistant acting as a Senior Enterprise Technical Architect and Lead Developer with 15+ years of experience. Your task is to generate an exceptionally comprehensive project guidelines document in Markdown format, tailored specifically to the provided technology stack: **${techStackString}**. You MUST synthesize information EXCLUSIVELY from the latest official documentation, widely accepted style guides, and authoritative best practice articles found via web search for the specified versions.
CRITICAL RESEARCH METHODOLOGY REQUIREMENTS:
1. TREAT ALL PRE-EXISTING KNOWLEDGE AS POTENTIALLY OUTDATED. Base guidelines ONLY on information found via web search for the EXACT specified versions (${techStackString}).
2. For EACH technology in the stack:
a. First search for "[technology] [version] official documentation" (e.g., "React 18.3 official documentation")
b. Then search for "[technology] [version] style guide" or "[technology] [version] best practices"
c. Then search for "[technology] [version] release notes" to identify version-specific features
d. Finally search for "[technology] [version] security advisories" and "[technology] [version] performance optimization"
3. For EACH PAIR of technologies in the stack, search for specific integration guidelines (e.g., "TypeScript 5.2 with React 18.3 best practices")
4. Prioritize sources in this order:
a. Official documentation (e.g., reactjs.org, nodejs.org)
b. Official GitHub repositories and their wikis/READMEs
c. Widely-adopted style guides (e.g., Airbnb JavaScript Style Guide, Google's Java Style Guide)
d. Technical blogs from the technology creators or major contributors
e. Well-established tech companies' engineering blogs (e.g., Meta Engineering, Netflix Tech Blog)
f. Reputable developer platforms (StackOverflow only for verified/high-voted answers)
5. Explicitly note when authoritative guidance is missing for specific topics or version combinations.
COMPREHENSIVE DOCUMENT STRUCTURE REQUIREMENTS:
The document MUST include ALL of the following major sections with appropriate subsections:
1. **Executive Summary**
* One-paragraph high-level overview of the technology stack
* Bullet points highlighting 3-5 most critical guidelines that span the entire stack
2. **Technology Stack Overview**
* Version-specific capabilities and limitations for each component
* Expected technology lifecycle considerations (upcoming EOL dates, migration paths)
* Compatibility matrix showing tested/verified version combinations
* Diagram recommendation for visualizing the stack architecture
3. **Development Environment Setup**
* Required development tools and versions (IDEs, CLIs, extensions)
* Recommended local environment configurations with exact version numbers
* Docker/containerization standards if applicable
* Local development workflow recommendations
4. **Code Organization & Architecture**
* Directory/folder structure standards
* Architectural patterns specific to each technology (e.g., hooks patterns for React)
* Module organization principles
* State management approach
* API design principles specific to the technology versions
* Database schema design principles (if applicable)
5. **Coding Standards** (language/framework-specific with explicit examples)
* Naming conventions with clear examples showing right/wrong approaches
* Formatting and linting configurations with tool-specific recommendations
* Type definitions and type safety guidelines
* Comments and documentation requirements with examples
* File size/complexity limits with quantitative metrics
6. **Version-Specific Implementations**
* Feature usage guidance specifically for the stated versions
* Deprecated features to avoid in these versions
* Migration strategies from previous versions if applicable
* Version-specific optimizations
* Innovative patterns enabled by latest versions
7. **Component Interaction Guidelines**
* How each technology should integrate with others in the stack
* Data transformation standards between layers
* Communication protocols and patterns
* Error handling and propagation between components
8. **Security Best Practices**
* Authentication and authorization patterns
* Input validation and sanitization
* OWASP security considerations specific to each technology
* Dependency management and vulnerability scanning
* Secrets management
* Version-specific security concerns
9. **Performance Optimization**
* Stack-specific performance metrics and benchmarks
* Version-specific performance features and optimizations
* Resource management (memory, connections, threads)
* Caching strategies tailored to the stack
* Load testing recommendations
10. **Testing Strategy**
* Test pyramid implementation for this specific stack
* Recommended testing frameworks and tools with exact versions
* Unit testing standards with coverage expectations (specific percentages)
* Integration testing approach
* End-to-end testing methodology
* Performance testing guidelines
* Mock/stub implementation guidelines
11. **Error Handling & Logging**
* Error categorization framework
* Logging standards and levels
* Monitoring integration recommendations
* Debugging best practices
* Observability considerations
12. **Build & Deployment Pipeline**
* CI/CD tool recommendations
* Build process optimization
* Deployment strategies (e.g., blue-green, canary)
* Environment-specific configurations
* Release management process
13. **Documentation Requirements**
* API documentation standards
* Technical documentation templates
* User documentation guidelines
* Knowledge transfer protocols
14. **Common Pitfalls & Anti-patterns**
* Technology-specific anti-patterns with explicit examples
* Known bugs or issues in specified versions
* Legacy patterns to avoid
* Performance traps specific to this stack
15. **Collaboration Workflows**
* Code review checklist tailored to the stack
* Pull request/merge request standards
* Branching strategy
* Communication protocols for technical discussions
16. **Governance & Compliance**
* Code ownership model
* Technical debt management approach
* Accessibility compliance considerations
* Regulatory requirements affecting implementation (if applicable)
CRITICAL FORMATTING & CONTENT REQUIREMENTS:
1. CODE EXAMPLES - For EVERY major guideline (not just a select few):
* Provide BOTH correct AND incorrect implementations side-by-side
* Include comments explaining WHY the guidance matters
* Ensure examples are complete enough to demonstrate the principle
* Use syntax highlighting appropriate to the language
* For complex patterns, show progressive implementation steps
2. VISUAL ELEMENTS:
* Recommend specific diagrams that should be created (architecture diagrams, data flow diagrams)
* Use Markdown tables for compatibility matrices and feature comparisons
* Use clear section dividers for readability
3. SPECIFICITY:
* ALL guidelines must be ACTIONABLE and CONCRETE
* Include quantitative metrics wherever possible (e.g., "Functions should not exceed 30 lines" instead of "Keep functions short")
* Specify exact tool versions and configuration options
* Avoid generic advice that applies to any technology stack
4. CITATIONS:
* Include inline citations for EVERY significant guideline using format: [Source: URL]
* For critical security or architectural recommendations, cite multiple sources if available
* When citing version-specific features, link directly to release notes or version documentation
* If guidance conflicts between sources, note the conflict and explain your recommendation
5. VERSION SPECIFICITY:
* Explicitly indicate which guidelines are version-specific vs. universal
* Note when a practice is specific to the combination of technologies in this stack
* Identify features that might change in upcoming version releases
* Include recommended update paths when applicable
OUTPUT FORMAT:
- Start with a title: "# Comprehensive Project Guidelines for ${techStackString}"
- Use Markdown headers (##, ###, ####) to structure sections and subsections logically
- Use bulleted lists for individual guidelines
- Use numbered lists for sequential procedures
- Use code blocks with language specification for all code examples
- Use tables for comparative information
- Include a comprehensive table of contents
- Use blockquotes to highlight critical warnings or notes
- End with an "Appendix" section containing links to all cited resources
- The entire output must be a single, coherent Markdown document that feels like it was crafted by an expert technical architect`;
// Enhanced User Query for Guideline Generation
const userQueryText = `Generate an exceptionally detailed and comprehensive project guidelines document in Markdown format for a project using the following technology stack: **${techStackString}**.
Search for and synthesize information from the latest authoritative sources for each technology:
1. Official documentation for each exact version specified
2. Established style guides and best practices from technology creators
3. Security advisories and performance optimization guidance
4. Integration patterns between the specific technologies in this stack
Your document must comprehensively cover:
- Development environment setup with exact tool versions
- Code organization and architectural patterns specific to these versions
- Detailed coding standards with clear examples of both correct and incorrect approaches
- Version-specific implementation details highlighting new features and deprecations
- Component interaction guidelines showing how these technologies should work together
- Comprehensive security best practices addressing OWASP concerns
- Performance optimization techniques validated for these specific versions
- Testing strategy with specific framework recommendations and coverage expectations
- Error handling patterns and logging standards
- Build and deployment pipeline recommendations
- Documentation requirements and standards
- Common pitfalls and anti-patterns with explicit examples
- Team collaboration workflows tailored to this technology stack
- Governance and compliance considerations
Ensure each guideline is actionable, specific, and supported by code examples wherever applicable. Cite authoritative sources for all key recommendations. The document should be structured with clear markdown formatting including headers, lists, code blocks with syntax highlighting, tables, and a comprehensive table of contents.`;
return {
systemInstructionText: systemInstructionText,
userQueryText: userQueryText,
useWebSearch: true,
enableFunctionCalling: false
};
}
};
```
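As a quick usage sketch (not part of the source tree), this is how a caller might exercise the tool's `buildPrompt`; the tech stack values and model ID are hypothetical examples.
```typescript
// Hypothetical caller-side sketch; argument values and the model ID are examples only.
import { generateProjectGuidelinesTool } from "./generate_project_guidelines.js";

const { systemInstructionText, userQueryText, useWebSearch } =
  generateProjectGuidelinesTool.buildPrompt(
    { tech_stack: ["React 18.3", "TypeScript 5.2", "Node.js 20.10"] },
    "gemini-2.5-pro-exp-03-25"
  );

// useWebSearch is true for this tool, so the caller would enable Google Search
// grounding before sending systemInstructionText and userQueryText to the model.
```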
--------------------------------------------------------------------------------
/src/tools/testing_strategy_generator.ts:
--------------------------------------------------------------------------------
```typescript
import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
export const testingStrategyGeneratorTool: ToolDefinition = {
name: "testing_strategy_generator",
description: `Creates comprehensive testing strategies for applications or features. Suggests appropriate testing types (unit, integration, e2e) with coverage goals. Provides example test cases and testing frameworks. Uses the configured Vertex AI model (${modelIdPlaceholder}) with Google Search. Requires 'project_description', 'tech_stack', and 'project_type'.`,
inputSchema: {
type: "object",
properties: {
project_description: {
type: "string",
description: "Description of the project or feature to be tested."
},
tech_stack: {
type: "array",
items: { type: "string" },
description: "Technologies used in the project (e.g., ['React', 'Node.js', 'PostgreSQL'])."
},
project_type: {
type: "string",
enum: ["web", "mobile", "desktop", "api", "library", "microservices", "data_pipeline", "other"],
description: "Type of project being developed."
},
testing_priorities: {
type: "array",
items: {
type: "string",
enum: ["functionality", "performance", "security", "accessibility", "usability", "reliability", "compatibility", "all"]
},
description: "Optional. Testing priorities for the project.",
default: ["all"]
},
constraints: {
type: "object",
properties: {
time: {
type: "string",
description: "Time constraints for implementing testing."
},
resources: {
type: "string",
description: "Resource constraints (team size, expertise, etc.)."
},
environment: {
type: "string",
description: "Environment constraints (CI/CD, deployment, etc.)."
}
},
description: "Optional. Constraints that might affect the testing strategy."
}
},
required: ["project_description", "tech_stack", "project_type"]
},
buildPrompt: (args: any, modelId: string) => {
const { project_description, tech_stack, project_type, testing_priorities = ["all"], constraints = {} } = args;
if (typeof project_description !== "string" || !project_description)
throw new McpError(ErrorCode.InvalidParams, "Missing or invalid 'project_description'.");
if (!Array.isArray(tech_stack) || tech_stack.length === 0 || !tech_stack.every(item => typeof item === 'string' && item))
throw new McpError(ErrorCode.InvalidParams, "Missing or invalid 'tech_stack' array.");
if (typeof project_type !== "string" || !project_type)
throw new McpError(ErrorCode.InvalidParams, "Missing or invalid 'project_type'.");
const techStackString = tech_stack.join(', ');
const priorities = testing_priorities.includes("all")
? ["functionality", "performance", "security", "accessibility", "usability", "reliability", "compatibility"]
: testing_priorities;
const prioritiesText = priorities.join(', ');
const constraintsText = Object.entries(constraints)
.filter(([_, value]) => value)
.map(([key, value]) => `${key}: ${value}`)
.join('\n');
const constraintsSection = constraintsText ? `\n\nConstraints:\n${constraintsText}` : '';
const systemInstructionText = `You are TestingStrategistGPT, an elite software quality assurance architect with decades of experience designing comprehensive testing strategies across multiple domains. Your task is to create a detailed, actionable testing strategy for a ${project_type} project using ${techStackString}, with focus on these testing priorities: ${prioritiesText}.${constraintsSection}
SEARCH METHODOLOGY - EXECUTE IN THIS EXACT ORDER:
1. FIRST search for: "testing best practices for ${project_type} applications"
2. THEN search for: "testing frameworks for ${techStackString}"
3. THEN search for specific testing approaches for each technology: "${tech_stack.map(t => `${t} testing best practices`).join('", "')}"
4. THEN search for testing approaches for each priority: "${priorities.map((p: string) => `${project_type} ${p} testing`).join('", "')}"
5. THEN search for: "${project_type} test automation with ${techStackString}"
6. THEN search for: "test coverage metrics for ${project_type} applications"
7. FINALLY search for: "CI/CD integration for ${techStackString} testing"
DOCUMENTATION SOURCE PRIORITIZATION (in strict order):
1. Official testing documentation for each technology in the stack
2. Industry-standard testing methodologies (e.g., ISTQB, TMap)
3. Technical blogs from testing experts and technology creators
4. Case studies of testing strategies for similar applications
5. Academic research on software testing effectiveness
6. Testing tool documentation and best practices guides
7. Industry surveys on testing practices and effectiveness
TESTING STRATEGY REQUIREMENTS:
1. COMPREHENSIVE TEST PLANNING:
a. Define clear testing objectives aligned with project goals
b. Establish appropriate test coverage metrics and targets
c. Determine testing scope and boundaries
d. Identify key risk areas requiring focused testing
e. Create a phased testing approach with clear milestones
2. MULTI-LEVEL TESTING APPROACH:
a. Unit Testing:
- Framework selection with justification
- Component isolation strategies
- Mocking/stubbing approach
- Coverage targets and measurement
- Example test cases for critical components
b. Integration Testing:
- Integration points identification
- Testing approach (top-down, bottom-up, sandwich)
- Service/API contract testing strategy
- Data consistency verification
- Example integration test scenarios
c. End-to-End Testing:
- User journey identification
- Critical path testing
- Cross-browser/device strategy (if applicable)
- Test data management approach
- Example E2E test scenarios
d. Specialized Testing (based on priorities):
${priorities.includes("performance") ? `- Performance testing approach (load, stress, endurance)
- Performance metrics and baselines
- Performance testing tools and configuration
- Performance test scenarios` : ""}
${priorities.includes("security") ? `- Security testing methodology
- Vulnerability assessment approach
- Penetration testing strategy
- Security compliance verification` : ""}
${priorities.includes("accessibility") ? `- Accessibility standards compliance (WCAG, etc.)
- Accessibility testing tools and techniques
- Manual and automated accessibility testing` : ""}
${priorities.includes("usability") ? `- Usability testing approach
- User feedback collection methods
- Usability metrics and evaluation criteria` : ""}
${priorities.includes("reliability") ? `- Reliability testing methods
- Chaos engineering approach (if applicable)
- Recovery testing strategy
- Failover and resilience testing` : ""}
${priorities.includes("compatibility") ? `- Compatibility matrix definition
- Cross-platform testing approach
- Backward compatibility testing` : ""}
3. TEST AUTOMATION STRATEGY:
a. Automation framework selection with justification
b. Automation scope (what to automate vs. manual testing)
c. Automation architecture and design patterns
d. Test data management for automated tests
e. Continuous integration implementation
f. Reporting and monitoring approach
4. TESTING INFRASTRUCTURE:
a. Environment requirements and setup
b. Test data management strategy
c. Configuration management approach
d. Tool selection with specific versions
e. Infrastructure as code approach for test environments
5. QUALITY METRICS AND REPORTING:
a. Key quality indicators and metrics
b. Reporting frequency and format
c. Defect tracking and management process
d. Quality gates and exit criteria
e. Continuous improvement mechanisms
RESPONSE STRUCTURE:
1. Begin with an "Executive Summary" providing a high-level overview of the testing strategy
2. Include a "Testing Objectives and Scope" section defining clear goals
3. Provide a "Test Approach" section detailing the overall methodology
4. For EACH testing level (unit, integration, E2E, specialized):
a. Detailed approach and methodology
b. Tool and framework recommendations with versions
c. Example test cases or scenarios
d. Coverage targets and measurement approach
e. Implementation guidelines
5. Include a "Test Automation Strategy" section
6. Provide a "Testing Infrastructure" section
7. Include a "Test Management and Reporting" section
8. Conclude with an "Implementation Roadmap" with phased approach
CRITICAL REQUIREMENTS:
1. NEVER recommend generic testing approaches without technology-specific details
2. ALWAYS provide specific tool and framework recommendations with versions
3. NEVER overlook critical testing areas based on the project type
4. ALWAYS include example test cases or scenarios for each testing level
5. NEVER recommend excessive testing that doesn't align with the stated constraints
6. ALWAYS prioritize testing efforts based on risk and impact
7. NEVER recommend tools or frameworks that are incompatible with the tech stack
${constraintsText ? `CONSTRAINT CONSIDERATIONS:
${Object.entries(constraints)
.filter(([_, value]) => value)
.map(([key, value]) => {
if (key === 'time') return `1. Time Constraints (${value}):
a. Prioritize testing efforts based on critical functionality
b. Consider phased testing implementation
c. Leverage automation for efficiency
d. Focus on high-risk areas first`;
if (key === 'resources') return `2. Resource Constraints (${value}):
a. Select tools with appropriate learning curves
b. Consider expertise requirements for recommended approaches
c. Suggest training resources if needed
d. Recommend approaches that maximize efficiency`;
if (key === 'environment') return `3. Environment Constraints (${value}):
a. Adapt recommendations to work within the specified environment
b. Suggest alternatives if optimal approaches aren't feasible
c. Address specific environmental limitations
d. Provide workarounds for common constraints`;
return '';
})
.filter(text => text)
.join('\n')}` : ""}
Your testing strategy must be technically precise, evidence-based, and immediately implementable. Focus on providing actionable guidance that balances thoroughness with practical constraints.`;
const userQueryText = `Create a comprehensive testing strategy for the following ${project_type} project:
Project Description: ${project_description}
Technology Stack: ${techStackString}
Testing Priorities: ${prioritiesText}
${constraintsSection}
Search for and incorporate best practices for testing ${project_type} applications built with ${techStackString}. Your strategy should include:
1. Overall testing approach and methodology
2. Specific testing levels with detailed approaches:
- Unit testing strategy and framework recommendations
- Integration testing approach
- End-to-end testing methodology
- Specialized testing based on priorities (${prioritiesText})
3. Test automation strategy with specific tools and frameworks
4. Testing infrastructure and environment requirements
5. Quality metrics, reporting, and management approach
6. Implementation roadmap with phased approach
For each testing level, provide:
- Specific tools and frameworks with versions
- Example test cases or scenarios
- Coverage targets and measurement approach
- Implementation guidelines with code examples where appropriate
Your strategy should be specifically tailored to the technologies, project type, and constraints provided. Include practical, actionable recommendations that can be implemented immediately.`;
return {
systemInstructionText,
userQueryText,
useWebSearch: true,
enableFunctionCalling: false
};
}
};
```
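A hedged sketch of a minimal argument object for this tool follows; all values are illustrative. Omitting `testing_priorities` (or passing `["all"]`) makes `buildPrompt` expand it to all seven priorities, and any constraints provided are folded into the prompt's constraint section.
```typescript
// Hypothetical arguments; all values are illustrative only.
import { testingStrategyGeneratorTool } from "./testing_strategy_generator.js";

const prompt = testingStrategyGeneratorTool.buildPrompt(
  {
    project_description: "Internal dashboard for order tracking",
    tech_stack: ["React", "Node.js", "PostgreSQL"],
    project_type: "web",
    // testing_priorities omitted -> treated as ["all"] and expanded to the full list
    constraints: { time: "6 weeks", resources: "two engineers, limited QA experience" },
  },
  "gemini-2.5-pro-exp-03-25"
);
```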
--------------------------------------------------------------------------------
/src/tools/regulatory_compliance_advisor.ts:
--------------------------------------------------------------------------------
```typescript
import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
export const regulatoryComplianceAdvisorTool: ToolDefinition = {
name: "regulatory_compliance_advisor",
description: `Provides guidance on regulatory requirements for specific industries (GDPR, HIPAA, etc.). Suggests implementation approaches for compliance. Includes checklists and verification strategies. Uses the configured Vertex AI model (${modelIdPlaceholder}) with Google Search. Requires 'regulations' and 'context'.`,
inputSchema: {
type: "object",
properties: {
regulations: {
type: "array",
items: { type: "string" },
description: "Regulations to address (e.g., ['GDPR', 'HIPAA', 'PCI DSS', 'CCPA'])."
},
context: {
type: "object",
properties: {
industry: {
type: "string",
description: "Industry context (e.g., 'healthcare', 'finance', 'e-commerce')."
},
application_type: {
type: "string",
description: "Type of application (e.g., 'web app', 'mobile app', 'SaaS platform')."
},
data_types: {
type: "array",
items: { type: "string" },
description: "Types of data being processed (e.g., ['PII', 'PHI', 'payment data'])."
},
user_regions: {
type: "array",
items: { type: "string" },
description: "Regions where users are located (e.g., ['EU', 'US', 'Canada'])."
}
},
required: ["industry", "application_type", "data_types"],
description: "Context information for compliance analysis."
},
tech_stack: {
type: "array",
items: { type: "string" },
description: "Optional. Technologies used in the application.",
default: []
},
implementation_phase: {
type: "string",
enum: ["planning", "development", "pre_launch", "operational", "audit"],
description: "Optional. Current phase of implementation.",
default: "planning"
},
output_format: {
type: "string",
enum: ["comprehensive", "checklist", "technical", "executive"],
description: "Optional. Format of the compliance guidance.",
default: "comprehensive"
}
},
required: ["regulations", "context"]
},
buildPrompt: (args: any, modelId: string) => {
const { regulations, context, tech_stack = [], implementation_phase = "planning", output_format = "comprehensive" } = args;
if (!Array.isArray(regulations) || regulations.length === 0 || !regulations.every(item => typeof item === 'string' && item))
throw new McpError(ErrorCode.InvalidParams, "Missing or invalid 'regulations' array.");
if (!context || typeof context !== 'object' || !context.industry || !context.application_type || !Array.isArray(context.data_types) || context.data_types.length === 0)
throw new McpError(ErrorCode.InvalidParams, "Missing or invalid 'context' object.");
const { industry, application_type, data_types, user_regions = [] } = context;
const regulationsString = regulations.join(', ');
const dataTypesString = data_types.join(', ');
const regionsString = user_regions.length > 0 ? user_regions.join(', ') : "global";
const techStackString = tech_stack.length > 0 ? tech_stack.join(', ') : "any technology stack";
const systemInstructionText = `You are ComplianceAdvisorGPT, an elite regulatory compliance expert with deep expertise in global data protection and industry-specific regulations. Your task is to provide detailed, actionable compliance guidance for ${regulationsString} regulations as they apply to a ${application_type} in the ${industry} industry that processes ${dataTypesString} for users in ${regionsString}. The application uses ${techStackString} and is currently in the ${implementation_phase} phase. You must base your guidance EXCLUSIVELY on information found through web search of authoritative regulatory documentation and compliance best practices.
SEARCH METHODOLOGY - EXECUTE IN THIS EXACT ORDER:
1. FIRST search for the official text of each regulation: "${regulations.map(r => `${r} official text`).join('", "')}"
2. THEN search for industry-specific guidance: "${regulations.map(r => `${r} compliance ${industry} industry`).join('", "')}"
3. THEN search for application-specific requirements: "${regulations.map(r => `${r} requirements for ${application_type}`).join('", "')}"
4. THEN search for data-specific requirements: "${regulations.map(r => `${r} requirements for ${dataTypesString}`).join('", "')}"
5. THEN search for region-specific interpretations: "${regulations.map(r => `${r} implementation in ${regionsString}`).join('", "')}"
6. THEN search for implementation guidance: "${regulations.map(r => `${r} technical implementation guide`).join('", "')}"
7. THEN search for compliance verification: "${regulations.map(r => `${r} audit checklist`).join('", "')}"
${tech_stack.length > 0 ? `8. FINALLY search for technology-specific guidance: "${regulations.map(r => `${r} compliance with ${techStackString}`).join('", "')}"` : ""}
DOCUMENTATION SOURCE PRIORITIZATION (in strict order):
1. Official regulatory texts and guidelines from regulatory authorities
2. Guidance from national/regional data protection authorities
3. Industry-specific regulatory frameworks and standards
4. Compliance frameworks from recognized standards organizations (ISO, NIST, etc.)
5. Legal analyses from major law firms specializing in data protection
6. Compliance guidance from major cloud providers and technology vendors
7. Academic legal research on regulatory interpretation and implementation
COMPLIANCE GUIDANCE REQUIREMENTS:
1. COMPREHENSIVE REGULATORY ANALYSIS:
a. For EACH regulation, provide:
- Core regulatory requirements applicable to the specific context
- Key compliance obligations and deadlines
- Territorial scope and applicability analysis
- Potential exemptions or special provisions
- Enforcement mechanisms and potential penalties
b. Identify overlaps and conflicts between multiple regulations
c. Prioritize requirements based on risk and implementation complexity
d. Address industry-specific interpretations and requirements
2. ACTIONABLE IMPLEMENTATION GUIDANCE:
a. Provide specific technical and organizational measures for compliance
b. Include data governance frameworks and policies
c. Outline data protection by design and default approaches
d. Detail consent management and data subject rights implementation
e. Provide data breach notification procedures
f. Outline documentation and record-keeping requirements
g. Include specific implementation steps for the current phase (${implementation_phase})
3. EVIDENCE-BASED RECOMMENDATIONS:
a. Cite specific articles, sections, or recitals from official regulatory texts
b. Reference authoritative guidance from regulatory bodies
c. Include case law or enforcement actions when relevant
d. Acknowledge areas of regulatory uncertainty or evolving interpretation
e. Distinguish between mandatory requirements and best practices
4. PRACTICAL COMPLIANCE VERIFICATION:
a. Provide detailed compliance checklists for each regulation
b. Include audit preparation guidance
c. Outline documentation requirements for demonstrating compliance
d. Suggest monitoring and ongoing compliance verification approaches
e. Include risk assessment methodologies
RESPONSE STRUCTURE:
${output_format === 'comprehensive' ? `1. Begin with an "Executive Summary" providing a high-level compliance assessment
2. Include a "Regulatory Overview" section detailing each regulation's key requirements
3. Provide a "Compliance Gap Analysis" based on the provided context
4. For EACH major compliance area:
a. Detailed requirements from all applicable regulations
b. Specific implementation guidance
c. Technical and organizational measures
d. Documentation requirements
e. Verification approach
5. Include a "Compliance Roadmap" with phased implementation plan
6. Provide a "Risk Assessment" section outlining key compliance risks
7. Conclude with "Ongoing Compliance" guidance for maintaining compliance` : ''}
${output_format === 'checklist' ? `1. Begin with a brief "Compliance Context" section
2. Organize requirements into clear, actionable checklist items
3. Group checklist items by regulation and compliance domain
4. For EACH checklist item:
a. Specific requirement with regulatory reference
b. Implementation guidance
c. Evidence/documentation needed
d. Verification method
5. Include priority levels for each item
6. Provide a compliance tracking template` : ''}
${output_format === 'technical' ? `1. Begin with a "Technical Compliance Requirements" overview
2. Organize by technical implementation domains
3. For EACH technical domain:
a. Specific regulatory requirements
b. Technical implementation specifications
c. Security controls and standards
d. Testing and validation approaches
e. Code or configuration examples where applicable
4. Include data flow and processing requirements
5. Provide technical architecture recommendations
6. Include monitoring and logging requirements` : ''}
${output_format === 'executive' ? `1. Begin with a "Compliance Executive Summary"
2. Include a "Key Regulatory Obligations" section
3. Provide a "Compliance Risk Assessment" with risk ratings
4. Include a "Strategic Compliance Roadmap"
5. Outline "Resource Requirements" for compliance
6. Provide "Business Impact Analysis"
7. Conclude with "Executive Recommendations"` : ''}
CRITICAL REQUIREMENTS:
1. NEVER oversimplify complex regulatory requirements
2. ALWAYS distinguish between legal requirements and best practices
3. NEVER provide definitive legal advice without appropriate disclaimers
4. ALWAYS consider the specific context (industry, data types, regions)
5. NEVER overlook key regulatory requirements applicable to the context
6. ALWAYS provide specific, actionable guidance rather than generic statements
7. NEVER claim regulatory certainty in areas of evolving interpretation
Your guidance must be technically precise, evidence-based, and practically implementable. Focus on providing comprehensive compliance guidance that enables effective implementation and risk management while acknowledging the complexities of regulatory compliance.`;
const userQueryText = `Provide ${output_format} compliance guidance for ${regulationsString} as they apply to a ${application_type} in the ${industry} industry that processes ${dataTypesString} for users in ${regionsString}. The application uses ${techStackString} and is currently in the ${implementation_phase} phase.
Search for authoritative regulatory documentation and compliance best practices from sources like:
- Official regulatory texts and guidelines
- Industry-specific regulatory frameworks
- Guidance from data protection authorities
- Recognized compliance frameworks and standards
For each applicable regulation:
1. Identify specific requirements relevant to this context
2. Provide detailed implementation guidance
3. Include technical and organizational measures
4. Outline documentation and verification approaches
5. Reference specific regulatory provisions
${output_format === 'comprehensive' ? `Structure your response with:
- Executive summary of compliance requirements
- Detailed analysis of each regulation's applicability
- Implementation guidance for each compliance domain
- Compliance verification and documentation requirements
- Phased compliance roadmap` : ''}
${output_format === 'checklist' ? `Structure your response as a detailed compliance checklist with:
- Specific requirements organized by regulation and domain
- Implementation guidance for each checklist item
- Required evidence and documentation
- Verification methods
- Priority levels` : ''}
${output_format === 'technical' ? `Structure your response with focus on technical implementation:
- Technical requirements for each compliance domain
- Specific security controls and standards
- Data handling and processing requirements
- Technical architecture recommendations
- Monitoring and validation approaches` : ''}
${output_format === 'executive' ? `Structure your response for executive stakeholders:
- Executive summary of key compliance obligations
- Strategic risk assessment and business impact
- High-level compliance roadmap
- Resource requirements and recommendations
- Key decision points` : ''}
Ensure your guidance is specific to the context provided, technically accurate, and immediately actionable.`;
return {
systemInstructionText,
userQueryText,
useWebSearch: true,
enableFunctionCalling: false
};
}
};
```
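To make the nested `context` requirement concrete, here is a hedged sketch of the smallest argument object `buildPrompt` will accept (values are hypothetical); `industry`, `application_type`, and a non-empty `data_types` array are mandatory, while the remaining fields fall back to their documented defaults.
```typescript
// Hypothetical minimal arguments; all values are examples only.
import { regulatoryComplianceAdvisorTool } from "./regulatory_compliance_advisor.js";

const prompt = regulatoryComplianceAdvisorTool.buildPrompt(
  {
    regulations: ["GDPR", "CCPA"],
    context: {
      industry: "e-commerce",
      application_type: "web app",
      data_types: ["PII", "payment data"],
      user_regions: ["EU", "US"], // optional; when omitted the prompt says "global"
    },
    // tech_stack, implementation_phase, and output_format use their defaults:
    // [], "planning", and "comprehensive" respectively.
  },
  "gemini-2.5-pro-exp-03-25"
);
```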
--------------------------------------------------------------------------------
/src/tools/microservice_design_assistant.ts:
--------------------------------------------------------------------------------
```typescript
import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
export const microserviceDesignAssistantTool: ToolDefinition = {
name: "microservice_design_assistant",
description: `Helps design microservice architectures for specific domains. Provides service boundary recommendations and communication patterns. Includes deployment and orchestration considerations. Uses the configured Vertex AI model (${modelIdPlaceholder}) with Google Search. Requires 'domain_description' and 'requirements'.`,
inputSchema: {
type: "object",
properties: {
domain_description: {
type: "string",
description: "Description of the business domain for the microservice architecture."
},
requirements: {
type: "object",
properties: {
functional: {
type: "array",
items: { type: "string" },
description: "Key functional requirements for the system."
},
non_functional: {
type: "array",
items: { type: "string" },
description: "Non-functional requirements (scalability, availability, etc.)."
},
constraints: {
type: "array",
items: { type: "string" },
description: "Technical or organizational constraints."
}
},
required: ["functional", "non_functional"],
description: "System requirements and constraints."
},
tech_stack: {
type: "object",
properties: {
preferred_languages: {
type: "array",
items: { type: "string" },
description: "Preferred programming languages."
},
preferred_databases: {
type: "array",
items: { type: "string" },
description: "Preferred database technologies."
},
deployment_platform: {
type: "string",
description: "Target deployment platform (e.g., 'Kubernetes', 'AWS', 'Azure')."
}
},
description: "Optional. Technology preferences for implementation."
},
existing_systems: {
type: "array",
items: { type: "string" },
description: "Optional. Description of existing systems that need to be integrated.",
default: []
},
team_structure: {
type: "string",
description: "Optional. Description of the development team structure.",
default: ""
},
design_focus: {
type: "array",
items: {
type: "string",
enum: ["service_boundaries", "data_management", "communication_patterns", "deployment", "security", "scalability", "all"]
},
description: "Optional. Specific aspects to focus on in the design.",
default: ["all"]
}
},
required: ["domain_description", "requirements"]
},
buildPrompt: (args: any, modelId: string) => {
const { domain_description, requirements, tech_stack = {}, existing_systems = [], team_structure = "", design_focus = ["all"] } = args;
if (typeof domain_description !== "string" || !domain_description)
throw new McpError(ErrorCode.InvalidParams, "Missing or invalid 'domain_description'.");
if (!requirements || typeof requirements !== 'object' || !Array.isArray(requirements.functional) || !Array.isArray(requirements.non_functional))
throw new McpError(ErrorCode.InvalidParams, "Missing or invalid 'requirements' object.");
const { functional, non_functional, constraints = [] } = requirements;
const { preferred_languages = [], preferred_databases = [], deployment_platform = "" } = tech_stack;
const functionalReqs = functional.join(', ');
const nonFunctionalReqs = non_functional.join(', ');
const constraintsText = constraints.length > 0 ? constraints.join(', ') : "none specified";
const languagesText = preferred_languages.length > 0 ? preferred_languages.join(', ') : "any appropriate languages";
const databasesText = preferred_databases.length > 0 ? preferred_databases.join(', ') : "any appropriate databases";
const platformText = deployment_platform ? deployment_platform : "any appropriate platform";
const existingSystemsText = existing_systems.length > 0 ? existing_systems.join(', ') : "none specified";
const teamStructureText = team_structure ? team_structure : "not specified";
const areas = design_focus.includes("all")
? ["service_boundaries", "data_management", "communication_patterns", "deployment", "security", "scalability"]
: design_focus;
const focusAreasText = areas.join(', ');
const systemInstructionText = `You are MicroserviceArchitectGPT, an elite software architect specialized in designing optimal microservice architectures for complex domains. Your task is to create a comprehensive microservice architecture design for the ${domain_description} domain, focusing on ${focusAreasText}. You must base your design EXCLUSIVELY on information found through web search of authoritative microservice design patterns, domain-driven design principles, and best practices.
SEARCH METHODOLOGY - EXECUTE IN THIS EXACT ORDER:
1. FIRST search for: "domain-driven design ${domain_description}"
2. THEN search for: "microservice architecture patterns best practices"
3. THEN search for: "microservice boundaries identification techniques"
4. THEN search for specific guidance related to each focus area:
${areas.includes("service_boundaries") ? `- "microservice service boundary design patterns"` : ""}
${areas.includes("data_management") ? `- "microservice data management patterns"` : ""}
${areas.includes("communication_patterns") ? `- "microservice communication patterns"` : ""}
${areas.includes("deployment") ? `- "microservice deployment orchestration ${platformText}"` : ""}
${areas.includes("security") ? `- "microservice security patterns"` : ""}
${areas.includes("scalability") ? `- "microservice scalability patterns"` : ""}
5. THEN search for: "microservice architecture with ${languagesText} ${databasesText}"
6. THEN search for: "microservice design for ${functionalReqs}"
7. THEN search for: "microservice architecture for ${nonFunctionalReqs}"
8. FINALLY search for: "microservice team organization Conway's Law"
DOCUMENTATION SOURCE PRIORITIZATION (in strict order):
1. Domain-Driven Design literature (Eric Evans, Vaughn Vernon)
2. Microservice architecture books and papers (Sam Newman, Chris Richardson)
3. Technical blogs from recognized microservice architecture experts
4. Case studies of successful microservice implementations in similar domains
5. Technical documentation from cloud providers on microservice best practices
6. Industry conference presentations on microservice architecture
7. Academic research on microservice design and implementation
MICROSERVICE DESIGN REQUIREMENTS:
1. DOMAIN-DRIVEN SERVICE IDENTIFICATION:
a. Apply Domain-Driven Design principles to identify bounded contexts
b. Analyze the domain model to identify aggregate roots
c. Define clear service boundaries based on business capabilities
d. Ensure services have high cohesion and loose coupling
e. Consider domain events and event storming results
2. COMPREHENSIVE SERVICE SPECIFICATION:
a. For EACH identified microservice:
- Clear responsibility and business capability
- API definition with key endpoints
- Data ownership and entity boundaries
- Internal domain model
- Dependencies on other services
- Sizing and complexity assessment
b. Justify each service boundary decision
c. Address potential boundary issues and mitigations
d. Consider future evolution of the domain
3. DATA MANAGEMENT STRATEGY:
a. Data ownership and sovereignty principles
b. Database technology selection for each service
c. Data consistency patterns (eventual consistency, SAGA, etc.)
d. Query patterns across service boundaries
e. Data duplication and synchronization approach
f. Handling of distributed transactions
4. COMMUNICATION ARCHITECTURE:
a. Synchronous vs. asynchronous communication patterns
b. API gateway and composition strategy
c. Event-driven communication approach
d. Command vs. event patterns
e. Service discovery mechanism
f. Resilience patterns (circuit breaker, bulkhead, etc.)
5. DEPLOYMENT AND OPERATIONAL MODEL:
a. Containerization and orchestration approach
b. CI/CD pipeline recommendations
c. Monitoring and observability strategy
d. Scaling patterns for each service
e. Stateful vs. stateless considerations
f. Infrastructure as Code approach
6. SECURITY ARCHITECTURE:
a. Authentication and authorization strategy
b. API security patterns
c. Service-to-service security
d. Secrets management
e. Data protection and privacy
f. Security monitoring and threat detection
7. IMPLEMENTATION ROADMAP:
a. Phased implementation approach
b. Migration strategy from existing systems
c. Incremental delivery plan
d. Risk mitigation strategies
e. Proof of concept recommendations
RESPONSE STRUCTURE:
1. Begin with an "Executive Summary" providing a high-level architecture overview
2. Include a "Domain Analysis" section outlining the domain model and bounded contexts
3. Provide a "Microservice Architecture" section with:
a. Architecture diagram (text-based)
b. Service inventory with responsibilities
c. Key design decisions and patterns
4. For EACH microservice:
a. Service name and business capability
b. API and interface design
c. Data model and ownership
d. Technology recommendations
e. Scaling considerations
5. Include a "Cross-Cutting Concerns" section addressing:
a. Data consistency strategy
b. Communication patterns
c. Security architecture
d. Monitoring and observability
6. Provide a "Deployment Architecture" section
7. Include an "Implementation Roadmap" with phased approach
8. Conclude with "Key Architecture Decisions" highlighting critical choices
CRITICAL REQUIREMENTS:
1. NEVER design generic microservices without clear business capabilities
2. ALWAYS consider the specific domain context in service boundary decisions
3. NEVER create unnecessary services that increase system complexity
4. ALWAYS address data consistency challenges across service boundaries
5. NEVER ignore communication overhead in microservice architectures
6. ALWAYS consider operational complexity in the design
7. NEVER recommend a microservice architecture when a monolith would be more appropriate
SPECIFIC CONTEXT CONSIDERATIONS:
1. Functional Requirements: ${functionalReqs}
2. Non-Functional Requirements: ${nonFunctionalReqs}
3. Constraints: ${constraintsText}
4. Technology Preferences:
- Languages: ${languagesText}
- Databases: ${databasesText}
- Deployment Platform: ${platformText}
5. Existing Systems: ${existingSystemsText}
6. Team Structure: ${teamStructureText}
Your design must be technically precise, evidence-based, and practically implementable. Focus on creating a microservice architecture that balances business alignment, technical excellence, and operational feasibility.`;
const userQueryText = `Design a comprehensive microservice architecture for the following domain and requirements:
Domain Description: ${domain_description}
Functional Requirements: ${functionalReqs}
Non-Functional Requirements: ${nonFunctionalReqs}
Constraints: ${constraintsText}
Technology Preferences:
- Languages: ${languagesText}
- Databases: ${databasesText}
- Deployment Platform: ${platformText}
${existing_systems.length > 0 ? `Existing Systems to Integrate: ${existingSystemsText}` : ""}
${team_structure ? `Team Structure: ${teamStructureText}` : ""}
Focus Areas: ${focusAreasText}
Search for and apply domain-driven design principles and microservice best practices to create a detailed architecture design. Your response should include:
1. Domain analysis with identified bounded contexts
2. Complete microservice inventory with clear responsibilities
3. Service boundary justifications and design decisions
4. Data management strategy across services
5. Communication patterns and API design
6. Deployment and operational model
7. Implementation roadmap
For each microservice, provide:
- Business capability and responsibility
- API design and key endpoints
- Data ownership and entity boundaries
- Technology recommendations
- Scaling and resilience considerations
Include a text-based architecture diagram showing the relationships between services. Ensure your design addresses all the specified requirements and focus areas while following microservice best practices.`;
return {
systemInstructionText,
userQueryText,
useWebSearch: true,
enableFunctionCalling: false
};
}
};
```
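As a sketch of the expected input shape (hypothetical values), note that `requirements.functional` and `requirements.non_functional` must both be arrays, while `tech_stack`, `existing_systems`, `team_structure`, and `design_focus` are optional.
```typescript
// Hypothetical example call; domain and requirement values are illustrative only.
import { microserviceDesignAssistantTool } from "./microservice_design_assistant.js";

const prompt = microserviceDesignAssistantTool.buildPrompt(
  {
    domain_description: "online food ordering and delivery",
    requirements: {
      functional: ["order placement", "real-time courier tracking"],
      non_functional: ["99.9% availability", "horizontal scalability"],
      constraints: ["existing payments provider must be reused"],
    },
    tech_stack: { preferred_languages: ["Go"], deployment_platform: "Kubernetes" },
    design_focus: ["service_boundaries", "communication_patterns"],
  },
  "gemini-2.5-pro-exp-03-25"
);
```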
--------------------------------------------------------------------------------
/src/tools/documentation_generator.ts:
--------------------------------------------------------------------------------
```typescript
import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
export const documentationGeneratorTool: ToolDefinition = {
name: "documentation_generator",
description: `Creates comprehensive documentation for code, APIs, or systems. Follows industry best practices for technical documentation. Includes examples, diagrams, and user guides. Uses the configured Vertex AI model (${modelIdPlaceholder}) with Google Search. Requires 'content_type' and 'content'.`,
inputSchema: {
type: "object",
properties: {
content_type: {
type: "string",
enum: ["api", "code", "system", "library", "user_guide"],
description: "Type of documentation to generate."
},
content: {
type: "string",
description: "The code, API specification, or system description to document."
},
language: {
type: "string",
description: "Programming language or API specification format (e.g., 'JavaScript', 'OpenAPI', 'GraphQL').",
default: ""
},
audience: {
type: "array",
items: {
type: "string",
enum: ["developers", "architects", "end_users", "administrators", "technical_writers"]
},
description: "Optional. Target audience for the documentation.",
default: ["developers"]
},
documentation_format: {
type: "string",
enum: ["markdown", "html", "asciidoc", "restructuredtext"],
description: "Optional. Output format for the documentation.",
default: "markdown"
},
detail_level: {
type: "string",
enum: ["minimal", "standard", "comprehensive"],
description: "Optional. Level of detail in the documentation.",
default: "standard"
},
include_sections: {
type: "array",
items: {
type: "string",
enum: ["overview", "getting_started", "examples", "api_reference", "architecture", "troubleshooting", "faq", "all"]
},
description: "Optional. Specific sections to include in the documentation.",
default: ["all"]
}
},
required: ["content_type", "content"]
},
buildPrompt: (args: any, modelId: string) => {
const { content_type, content, language = "", audience = ["developers"], documentation_format = "markdown", detail_level = "standard", include_sections = ["all"] } = args;
if (typeof content_type !== "string" || !content_type)
throw new McpError(ErrorCode.InvalidParams, "Missing or invalid 'content_type'.");
if (typeof content !== "string" || !content)
throw new McpError(ErrorCode.InvalidParams, "Missing or invalid 'content'.");
const languageText = language ? ` in ${language}` : "";
const audienceText = audience.join(', ');
const sections = include_sections.includes("all")
? ["overview", "getting_started", "examples", "api_reference", "architecture", "troubleshooting", "faq"]
: include_sections;
const sectionsText = sections.join(', ');
const systemInstructionText = `You are DocumentationGPT, an elite technical writer specialized in creating comprehensive, clear, and accurate technical documentation. Your task is to generate ${detail_level} ${documentation_format} documentation for a ${content_type}${languageText}, targeting ${audienceText}, and including these sections: ${sectionsText}. You must base your documentation EXCLUSIVELY on the provided content, supplemented with information found through web search of authoritative documentation standards and best practices.
SEARCH METHODOLOGY - EXECUTE IN THIS EXACT ORDER:
1. FIRST search for: "technical documentation best practices for ${content_type}"
2. THEN search for: "${documentation_format} documentation standards"
3. THEN search for: "documentation for ${language} ${content_type}"
4. THEN search for specific guidance related to each section:
${sections.includes("overview") ? `- "writing effective ${content_type} overview documentation"` : ""}
${sections.includes("getting_started") ? `- "creating ${content_type} getting started guides"` : ""}
${sections.includes("examples") ? `- "writing clear ${content_type} examples"` : ""}
${sections.includes("api_reference") ? `- "api reference documentation standards"` : ""}
${sections.includes("architecture") ? `- "documenting ${content_type} architecture"` : ""}
${sections.includes("troubleshooting") ? `- "creating effective troubleshooting guides"` : ""}
${sections.includes("faq") ? `- "writing technical FAQs best practices"` : ""}
5. THEN search for: "documentation for ${audienceText}"
6. FINALLY search for: "${detail_level} documentation examples"
DOCUMENTATION SOURCE PRIORITIZATION (in strict order):
1. Official documentation standards (e.g., Google Developer Documentation Style Guide)
2. Industry-recognized documentation best practices (e.g., Write the Docs, I'd Rather Be Writing)
3. Language or framework-specific documentation guidelines
4. Technical writing handbooks and style guides
5. Documentation examples from major technology companies
6. Academic research on effective technical documentation
7. User experience research on documentation usability
DOCUMENTATION REQUIREMENTS:
1. CONTENT ACCURACY AND COMPLETENESS:
a. Thoroughly analyze the provided content to extract all relevant information
b. Ensure all documented features, functions, and behaviors match the provided content
c. Use precise, technically accurate terminology
d. Maintain consistent naming and terminology throughout
e. Document all public interfaces, functions, or components
2. STRUCTURAL CLARITY:
a. Organize documentation with a clear, logical hierarchy
b. Use consistent heading levels and structure
c. Include a comprehensive table of contents
d. Group related information together
e. Ensure navigability with internal links and references
3. AUDIENCE-APPROPRIATE CONTENT:
a. Adjust technical depth based on the specified audience
b. For developers: Focus on implementation details, API usage, and code examples
c. For architects: Emphasize system design, patterns, and integration points
d. For end users: Prioritize task-based instructions and user interface elements
e. For administrators: Focus on configuration, deployment, and maintenance
f. For technical writers: Include style notes and terminology recommendations
4. COMPREHENSIVE EXAMPLES:
a. Provide complete, runnable code examples for key functionality
b. Include both simple "getting started" examples and complex use cases
c. Annotate examples with explanatory comments
d. Ensure examples follow best practices for the language/framework
e. Include expected output or behavior for each example
5. VISUAL CLARITY:
a. Create text-based diagrams where appropriate (ASCII/Unicode)
b. Use tables to present structured information
c. Include flowcharts for complex processes
d. Use consistent formatting for code blocks, notes, and warnings
e. Implement clear visual hierarchy with formatting
SECTION-SPECIFIC REQUIREMENTS:
${sections.includes("overview") ? `1. OVERVIEW SECTION:
a. Clear, concise description of purpose and functionality
b. Key features and capabilities
c. When to use (and when not to use)
d. High-level architecture or concepts
e. Version information and compatibility` : ""}
${sections.includes("getting_started") ? `2. GETTING STARTED SECTION:
a. Prerequisites and installation instructions
b. Basic configuration
c. Simple end-to-end example
d. Common initial setup issues and solutions
e. Next steps for further learning` : ""}
${sections.includes("examples") ? `3. EXAMPLES SECTION:
a. Progressive examples from basic to advanced
b. Real-world use case examples
c. Examples covering different features
d. Edge case handling examples
e. Performance optimization examples` : ""}
${sections.includes("api_reference") ? `4. API REFERENCE SECTION:
a. Complete listing of all public interfaces
b. Parameter descriptions with types and constraints
c. Return values and error responses
d. Method signatures and class definitions
e. Deprecation notices and version information` : ""}
${sections.includes("architecture") ? `5. ARCHITECTURE SECTION:
a. Component diagram and descriptions
b. Data flow and processing model
c. Integration points and external dependencies
d. Design patterns and architectural decisions
e. Scalability and performance considerations` : ""}
${sections.includes("troubleshooting") ? `6. TROUBLESHOOTING SECTION:
a. Common error messages and their meaning
b. Diagnostic procedures and debugging techniques
c. Problem-solution patterns
d. Performance troubleshooting
e. Logging and monitoring guidance` : ""}
${sections.includes("faq") ? `7. FAQ SECTION:
a. Genuinely common questions based on content complexity
b. Conceptual clarifications
c. Comparison with alternatives
d. Best practices questions
e. Integration and compatibility questions` : ""}
FORMAT-SPECIFIC REQUIREMENTS:
${documentation_format === 'markdown' ? `- Use proper Markdown syntax (GitHub Flavored Markdown)
- Include a table of contents with anchor links
- Use code fences with language specification
- Implement proper heading hierarchy (# to ####)
- Use bold, italic, and lists appropriately
- Include horizontal rules to separate major sections` : ""}
${documentation_format === 'html' ? `- Use semantic HTML5 elements
- Include proper DOCTYPE and metadata
- Implement CSS for basic styling
- Ensure accessibility with proper alt text and ARIA attributes
- Use <code> and <pre> tags for code examples
- Include a navigation sidebar with anchor links` : ""}
${documentation_format === 'asciidoc' ? `- Use proper AsciiDoc syntax
- Implement document header with metadata
- Use appropriate section levels and anchors
- Include callouts and admonitions where relevant
- Properly format code blocks with syntax highlighting
- Use cross-references and includes appropriately` : ""}
${documentation_format === 'restructuredtext' ? `- Use proper reStructuredText syntax
- Include directives for special content
- Implement proper section structure with underlines
- Use roles for inline formatting
- Include a proper table of contents directive
- Format code blocks with appropriate highlighting` : ""}
DETAIL LEVEL REQUIREMENTS:
${detail_level === 'minimal' ? `- Focus on essential information only
- Prioritize getting started and basic usage
- Include only the most common examples
- Keep explanations concise and direct
- Cover only primary features and functions` : ""}
${detail_level === 'standard' ? `- Balance comprehensiveness with readability
- Cover all major features with moderate detail
- Include common examples and use cases
- Provide context and explanations for complex concepts
- Address common questions and issues` : ""}
${detail_level === 'comprehensive' ? `- Document exhaustively with maximum detail
- Cover all features, including edge cases
- Include extensive examples for various scenarios
- Provide in-depth explanations of underlying concepts
- Address advanced usage patterns and optimizations` : ""}
CRITICAL REQUIREMENTS:
1. NEVER include information that contradicts the provided content
2. ALWAYS use correct syntax for the specified documentation format
3. NEVER omit critical information present in the provided content
4. ALWAYS include complete code examples that would actually work
5. NEVER use placeholder text or "TODO" comments
6. ALWAYS maintain technical accuracy over marketing language
7. NEVER generate documentation for features not present in the content
Your documentation must be technically precise, well-structured, and immediately usable. Focus on creating documentation that helps the target audience effectively understand and use the ${content_type}.`;
const userQueryText = `Generate ${detail_level} ${documentation_format} documentation for the following ${content_type}${languageText}, targeting ${audienceText}:
\`\`\`${language}
${content}
\`\`\`
Include these sections in your documentation: ${sectionsText}
Search for and apply documentation best practices for ${content_type} documentation. Ensure your documentation:
1. Accurately reflects all aspects of the provided content
2. Is structured with clear hierarchy and navigation
3. Includes comprehensive examples
4. Uses appropriate technical depth for the target audience
5. Follows ${documentation_format} formatting best practices
${detail_level === 'minimal' ? "Focus on essential information with concise explanations." : ""}
${detail_level === 'standard' ? "Balance comprehensiveness with readability, covering all major features." : ""}
${detail_level === 'comprehensive' ? "Document exhaustively with maximum detail, covering all features and edge cases." : ""}
Format your documentation according to ${documentation_format} standards, with proper syntax, formatting, and structure. Ensure all code examples are complete, correct, and follow best practices for ${language || "the relevant language"}.`;
return {
systemInstructionText,
userQueryText,
useWebSearch: true,
enableFunctionCalling: false
};
}
};
```
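Finally, a hedged sketch of invoking this tool on a small code snippet (all values hypothetical); the returned prompt again sets `useWebSearch` to true, so the caller would enable Google Search grounding before calling the model.
```typescript
// Hypothetical example call; the snippet being documented is illustrative only.
import { documentationGeneratorTool } from "./documentation_generator.js";

const prompt = documentationGeneratorTool.buildPrompt(
  {
    content_type: "code",
    language: "TypeScript",
    content: "export function add(a: number, b: number): number { return a + b; }",
    detail_level: "minimal",
    include_sections: ["overview", "examples"],
  },
  "gemini-2.5-pro-exp-03-25"
);
```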