This is page 1 of 2. Use http://codebase.md/shariqriazz/vertex-ai-mcp-server?lines=true&page={x} to view the full context.

# Directory Structure

```
├── .env.example
├── .gitignore
├── bun.lock
├── Dockerfile
├── LICENSE
├── package.json
├── README.md
├── smithery.yaml
├── src
│   ├── config.ts
│   ├── index.ts
│   ├── tools
│   │   ├── answer_query_direct.ts
│   │   ├── answer_query_websearch.ts
│   │   ├── architecture_pattern_recommendation.ts
│   │   ├── code_analysis_with_docs.ts
│   │   ├── database_schema_analyzer.ts
│   │   ├── dependency_vulnerability_scan.ts
│   │   ├── directory_tree.ts
│   │   ├── documentation_generator.ts
│   │   ├── edit_file.ts
│   │   ├── execute_terminal_command.ts
│   │   ├── explain_topic_with_docs.ts
│   │   ├── generate_project_guidelines.ts
│   │   ├── get_doc_snippets.ts
│   │   ├── get_file_info.ts
│   │   ├── index.ts
│   │   ├── list_directory.ts
│   │   ├── microservice_design_assistant.ts
│   │   ├── move_file.ts
│   │   ├── read_file.ts
│   │   ├── regulatory_compliance_advisor.ts
│   │   ├── save_answer_query_direct.ts
│   │   ├── save_answer_query_websearch.ts
│   │   ├── save_doc_snippet.ts
│   │   ├── save_generate_project_guidelines.ts
│   │   ├── save_topic_explanation.ts
│   │   ├── search_files.ts
│   │   ├── security_best_practices_advisor.ts
│   │   ├── technical_comparison.ts
│   │   ├── testing_strategy_generator.ts
│   │   ├── tool_definition.ts
│   │   └── write_file.ts
│   ├── utils.ts
│   └── vertex_ai_client.ts
└── tsconfig.json
```

# Files

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
1 | node_modules/
2 | build/
3 | *.log
4 | .env*
5 | !.env.example
6 | *.zip
7 | *.md
8 | !README.md
9 | 
```

--------------------------------------------------------------------------------
/.env.example:
--------------------------------------------------------------------------------

```
 1 | # Environment variables for vertex-ai-mcp-server
 2 | # --- Required ---
 3 | # REQUIRED only if AI_PROVIDER is "vertex"
 4 | GOOGLE_CLOUD_PROJECT="YOUR_GCP_PROJECT_ID"
 5 | # REQUIRED only if AI_PROVIDER is "gemini"
 6 | GEMINI_API_KEY="YOUR_GEMINI_API_KEY" # Get from Google AI Studio
 7 | 
 8 | # --- General AI Configuration ---
 9 | AI_PROVIDER="vertex" # Provider to use: "vertex" or "gemini"
10 | # Optional - Model ID depends on the chosen provider
11 | VERTEX_MODEL_ID="gemini-2.5-pro-exp-03-25" # e.g., gemini-1.5-pro-latest, gemini-1.0-pro
12 | GEMINI_MODEL_ID="gemini-2.5-pro-exp-03-25" # e.g., gemini-2.5-pro-exp-03-25, gemini-pro
13 | 
14 | # --- Optional AI Parameters (Common) ---
15 | # GOOGLE_CLOUD_LOCATION is specific to Vertex AI
16 | GOOGLE_CLOUD_LOCATION="us-central1"
17 | AI_TEMPERATURE="0.0"         # Range: 0.0 to 1.0
18 | AI_USE_STREAMING="true"      # Use streaming responses: "true" or "false"
19 | AI_MAX_OUTPUT_TOKENS="65536" # Max tokens in response (Note: Models have their own upper limits)
20 | AI_MAX_RETRIES="3"           # Number of retries on transient errors
21 | AI_RETRY_DELAY_MS="1000"     # Delay between retries in milliseconds
22 | 
23 | # --- Optional Vertex AI Authentication ---
24 | # Uncomment and set if using a Service Account Key instead of Application Default Credentials (ADC) for Vertex AI
25 | # GOOGLE_APPLICATION_CREDENTIALS="/path/to/your/service-account-key.json"
```
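
For reference, the variables above are read by `src/config.ts` (shown on page 2 of this dump). The following is only a minimal sketch of how such a config might be loaded with `dotenv`; the variable names match `.env.example`, but the parsing details and defaults are assumptions, not the project's actual implementation.

```typescript
// Illustrative sketch only: load the settings defined in .env.example.
// The real logic lives in src/config.ts and may differ.
import dotenv from "dotenv";

dotenv.config();

const aiConfig = {
  provider: process.env.AI_PROVIDER ?? "vertex",              // "vertex" or "gemini"
  gcpProject: process.env.GOOGLE_CLOUD_PROJECT,               // required when provider is "vertex"
  gcpLocation: process.env.GOOGLE_CLOUD_LOCATION ?? "us-central1",
  geminiApiKey: process.env.GEMINI_API_KEY,                   // required when provider is "gemini"
  temperature: Number(process.env.AI_TEMPERATURE ?? "0.0"),
  useStreaming: (process.env.AI_USE_STREAMING ?? "true") === "true",
  maxOutputTokens: Number(process.env.AI_MAX_OUTPUT_TOKENS ?? "65536"),
  maxRetries: Number(process.env.AI_MAX_RETRIES ?? "3"),
  retryDelayMs: Number(process.env.AI_RETRY_DELAY_MS ?? "1000"),
};

if (aiConfig.provider === "vertex" && !aiConfig.gcpProject) {
  throw new Error("GOOGLE_CLOUD_PROJECT is required when AI_PROVIDER is 'vertex'");
}
```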

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
  1 | [![MseeP.ai Security Assessment Badge](https://mseep.net/pr/shariqriazz-vertex-ai-mcp-server-badge.png)](https://mseep.ai/app/shariqriazz-vertex-ai-mcp-server)
  2 | 
  3 | # Vertex AI MCP Server
  4 | [![smithery badge](https://smithery.ai/badge/@shariqriazz/vertex-ai-mcp-server)](https://smithery.ai/server/@shariqriazz/vertex-ai-mcp-server)
  5 | 
  6 | This project implements a Model Context Protocol (MCP) server that provides a comprehensive suite of tools for interacting with Google Cloud Vertex AI (Gemini models) and the Gemini API, focusing on coding assistance and general query answering.
  7 | 
  8 | <a href="https://glama.ai/mcp/servers/@shariqriazz/vertex-ai-mcp-server">
  9 |   <img width="380" height="200" src="https://glama.ai/mcp/servers/@shariqriazz/vertex-ai-mcp-server/badge" alt="Vertex AI Server MCP server" />
 10 | </a>
 11 | 
 12 | ## Features
 13 | 
 14 | *   Provides access to Vertex AI Gemini models via numerous MCP tools.
 15 | *   Supports web search grounding (`answer_query_websearch`) and direct knowledge answering (`answer_query_direct`).
 16 | *   Configurable model ID, temperature, streaming behavior, max output tokens, and retry settings via environment variables.
 17 | *   Uses the streaming API by default for potentially better responsiveness.
 18 | *   Includes basic retry logic for transient API errors.
 19 | *   Minimal safety filters applied (`BLOCK_NONE`) to reduce potential blocking (use with caution).
 20 | 
 21 | ## Tools Provided
 22 | 
 23 | ### Query & Generation (AI Focused)
 24 | *   `answer_query_websearch`: Answers a natural language query using the configured Vertex AI model enhanced with Google Search results.
 25 | *   `answer_query_direct`: Answers a natural language query using only the internal knowledge of the configured Vertex AI model.
 26 | *   `explain_topic_with_docs`: Provides a detailed explanation for a query about a specific software topic by synthesizing information primarily from official documentation found via web search.
 27 | *   `get_doc_snippets`: Provides precise, authoritative code snippets or concise answers for technical queries by searching official documentation.
 28 | *   `generate_project_guidelines`: Generates a structured project guidelines document (Markdown) based on a specified list of technologies (optionally with versions), using web search for best practices.
 29 | 
 30 | ### Research & Analysis Tools
 31 | *   `code_analysis_with_docs`: Analyzes code snippets by comparing them with best practices from official documentation, identifying potential bugs, performance issues, and security vulnerabilities.
 32 | *   `technical_comparison`: Compares multiple technologies, frameworks, or libraries based on specific criteria, providing detailed comparison tables with pros/cons and use cases.
 33 | *   `architecture_pattern_recommendation`: Suggests architecture patterns for specific use cases based on industry best practices, with implementation examples and considerations.
 34 | *   `dependency_vulnerability_scan`: Analyzes project dependencies for known security vulnerabilities, providing detailed information and mitigation strategies.
 35 | *   `database_schema_analyzer`: Reviews database schemas for normalization, indexing, and performance issues, suggesting improvements based on database-specific best practices.
 36 | *   `security_best_practices_advisor`: Provides security recommendations for specific technologies or scenarios, with code examples for implementing secure practices.
 37 | *   `testing_strategy_generator`: Creates comprehensive testing strategies for applications or features, suggesting appropriate testing types with coverage goals.
 38 | *   `regulatory_compliance_advisor`: Provides guidance on regulatory requirements for specific industries (GDPR, HIPAA, etc.), with implementation approaches for compliance.
 39 | *   `microservice_design_assistant`: Helps design microservice architectures for specific domains, with service boundary recommendations and communication patterns.
 40 | *   `documentation_generator`: Creates comprehensive documentation for code, APIs, or systems, following industry best practices for technical documentation.
 41 | 
 42 | ### Filesystem Operations
 43 | *   `read_file_content`: Reads the complete contents of one or more files. Accepts a single path string or an array of path strings.
 44 | *   `write_file_content`: Creates new files or completely overwrites existing files. The 'writes' argument accepts a single object (`{path, content}`) or an array of such objects.
 45 | *   `edit_file_content`: Makes line-based edits to a text file, returning a diff preview or applying changes.
 46 | *   `list_directory_contents`: Lists files and directories directly within a specified path (non-recursive).
 47 | *   `get_directory_tree`: Gets a recursive tree view of files and directories as JSON.
 48 | *   `move_file_or_directory`: Moves or renames files and directories.
 49 | *   `search_filesystem`: Recursively searches for files/directories matching a name pattern, with optional exclusions.
 50 | *   `get_filesystem_info`: Retrieves detailed metadata (size, dates, type, permissions) about a file or directory.
 51 | *   `execute_terminal_command`: Executes a shell command, optionally specifying `cwd` and `timeout`, and returns stdout/stderr.
 52 | 
 53 | ### Combined AI + Filesystem Operations
 54 | *   `save_generate_project_guidelines`: Generates project guidelines based on a tech stack and saves the result to a specified file path.
 55 | *   `save_doc_snippet`: Finds code snippets from documentation and saves the result to a specified file path.
 56 | *   `save_topic_explanation`: Generates a detailed explanation of a topic based on documentation and saves the result to a specified file path.
 57 | *   `save_answer_query_direct`: Answers a query using only internal knowledge and saves the answer to a specified file path.
 58 | *   `save_answer_query_websearch`: Answers a query using web search results and saves the answer to a specified file path.
 59 | 
 60 | *(Note: Input/output schemas for each tool are defined in their respective files within `src/tools/` and exposed via the MCP server.)*
 61 | 
 62 | ## Prerequisites
 63 | 
 64 | *   Node.js (v18+)
 65 | *   Bun (`npm install -g bun`)
 66 | *   Google Cloud Project with Billing enabled.
 67 | *   Vertex AI API enabled in the GCP project.
 68 | *   Google Cloud Authentication configured in your environment (Application Default Credentials via `gcloud auth application-default login` is recommended, or a Service Account Key).
 69 | 
 70 | ## Setup & Installation
 71 | 
 72 | 1.  **Clone/Place Project:** Ensure the project files are in your desired location.
 73 | 2.  **Install Dependencies:**
 74 |     ```bash
 75 |     bun install
 76 |     ```
 77 | 3.  **Configure Environment:**
 78 |     *   Create a `.env` file in the project root (copy `.env.example`).
 79 |     *   Set the required and optional environment variables as described in `.env.example`.
 80 |         *   Set `AI_PROVIDER` to either `"vertex"` or `"gemini"`.
 81 |         *   If `AI_PROVIDER="vertex"`, `GOOGLE_CLOUD_PROJECT` is required.
 82 |         *   If `AI_PROVIDER="gemini"`, `GEMINI_API_KEY` is required.
 83 | 4.  **Build the Server:**
 84 |     ```bash
 85 |     bun run build
 86 |     ```
 87 |     This compiles the TypeScript code to `build/index.js`.
 88 | 
 89 | ## Usage (Standalone / NPX)
 90 | 
 91 | Once published to npm, you can run this server directly using `bunx` (or `npx`):
 92 | 
 93 | ```bash
 94 | # Ensure required environment variables are set (e.g., GOOGLE_CLOUD_PROJECT)
 95 | bunx vertex-ai-mcp-server
 96 | ```
 97 | 
 98 | Alternatively, install it globally:
 99 | 
100 | ```bash
101 | bun add -g vertex-ai-mcp-server
102 | # Then run:
103 | vertex-ai-mcp-server
104 | ```
105 | 
106 | **Note:** Running standalone requires setting necessary environment variables (like `GOOGLE_CLOUD_PROJECT`, `GOOGLE_CLOUD_LOCATION`, authentication credentials if not using ADC) in your shell environment before executing the command.
107 | 
108 | ### Installing via Smithery
109 | 
110 | To install Vertex AI Server for Claude Desktop automatically via [Smithery](https://smithery.ai/server/@shariqriazz/vertex-ai-mcp-server):
111 | 
112 | ```bash
113 | bunx -y @smithery/cli install @shariqriazz/vertex-ai-mcp-server --client claude
114 | ```
115 | 
116 | ## Running with Cline
117 | 
118 | 1.  **Configure MCP Settings:** Add/update the configuration in your Cline MCP settings file (e.g., `.roo/mcp.json`). You have two primary ways to configure the command:
119 | 
120 |     **Option A: Using Node (Direct Path - Recommended for Development)**
121 | 
122 |     This method uses `node` to run the compiled script directly. It's useful during development when you have the code cloned locally.
123 | 
124 |     ```json
125 |     {
126 |       "mcpServers": {
127 |         "vertex-ai-mcp-server": {
128 |           "command": "node",
129 |           "args": [
130 |             "/full/path/to/your/vertex-ai-mcp-server/build/index.js" // Use absolute path or ensure it's relative to where Cline runs node
131 |           ],
132 |           "env": {
133 |             // --- General AI Configuration ---
134 |             "AI_PROVIDER": "vertex", // "vertex" or "gemini"
135 |             // --- Required (Conditional) ---
136 |             "GOOGLE_CLOUD_PROJECT": "YOUR_GCP_PROJECT_ID", // Required if AI_PROVIDER="vertex"
137 |             // "GEMINI_API_KEY": "YOUR_GEMINI_API_KEY", // Required if AI_PROVIDER="gemini"
138 |             // --- Optional Model Selection ---
139 |             "VERTEX_MODEL_ID": "gemini-2.5-pro-exp-03-25", // If AI_PROVIDER="vertex" (Example override)
140 |             "GEMINI_MODEL_ID": "gemini-2.5-pro-exp-03-25", // If AI_PROVIDER="gemini"
141 |             // --- Optional AI Parameters ---
142 |             "GOOGLE_CLOUD_LOCATION": "us-central1", // Specific to Vertex AI
143 |             "AI_TEMPERATURE": "0.0",
144 |             "AI_USE_STREAMING": "true",
145 |             "AI_MAX_OUTPUT_TOKENS": "65536", // Default from .env.example
146 |             "AI_MAX_RETRIES": "3",
147 |             "AI_RETRY_DELAY_MS": "1000",
148 |             // --- Optional Vertex Authentication ---
149 |             // "GOOGLE_APPLICATION_CREDENTIALS": "/path/to/your/service-account-key.json" // If using Service Account Key for Vertex
150 |           },
151 |           "disabled": false,
152 |           "alwaysAllow": [
153 |              // Add tool names here if you don't want confirmation prompts
154 |              // e.g., "answer_query_websearch"
155 |           ],
156 |           "timeout": 3600 // Optional: Timeout in seconds
157 |         }
158 |         // Add other servers here...
159 |       }
160 |     }
161 |     ```
162 |     *   **Important:** Ensure the `args` path points correctly to the `build/index.js` file. Using an absolute path might be more reliable.
163 | 
164 |     **Option B: Using bunx / npx (Requires Package Published to npm)**
165 | 
166 |     This method uses `bunx` (or `npx`) to automatically download and run the server package from the npm registry. This is convenient if you don't want to clone the repository.
167 | 
168 |     ```json
169 |     {
170 |       "mcpServers": {
171 |         "vertex-ai-mcp-server": {
172 |           "command": "bunx", // Use bunx
173 |           "args": [
174 |             "-y", // Auto-confirm installation
175 |             "vertex-ai-mcp-server" // The npm package name
176 |           ],
177 |           "env": {
178 |             // --- General AI Configuration ---
179 |             "AI_PROVIDER": "vertex", // "vertex" or "gemini"
180 |             // --- Required (Conditional) ---
181 |             "GOOGLE_CLOUD_PROJECT": "YOUR_GCP_PROJECT_ID", // Required if AI_PROVIDER="vertex"
182 |             // "GEMINI_API_KEY": "YOUR_GEMINI_API_KEY", // Required if AI_PROVIDER="gemini"
183 |             // --- Optional Model Selection ---
184 |             "VERTEX_MODEL_ID": "gemini-2.5-pro-exp-03-25", // If AI_PROVIDER="vertex" (Example override)
185 |             "GEMINI_MODEL_ID": "gemini-2.5-pro-exp-03-25", // If AI_PROVIDER="gemini"
186 |             // --- Optional AI Parameters ---
187 |             "GOOGLE_CLOUD_LOCATION": "us-central1", // Specific to Vertex AI
188 |             "AI_TEMPERATURE": "0.0",
189 |             "AI_USE_STREAMING": "true",
190 |             "AI_MAX_OUTPUT_TOKENS": "65536", // Default from .env.example
191 |             "AI_MAX_RETRIES": "3",
192 |             "AI_RETRY_DELAY_MS": "1000",
193 |             // --- Optional Vertex Authentication ---
194 |             // "GOOGLE_APPLICATION_CREDENTIALS": "/path/to/your/service-account-key.json" // If using Service Account Key for Vertex
195 |           },
196 |           "disabled": false,
197 |           "alwaysAllow": [
198 |              // Add tool names here if you don't want confirmation prompts
199 |              // e.g., "answer_query_websearch"
200 |           ],
201 |           "timeout": 3600 // Optional: Timeout in seconds
202 |         }
203 |         // Add other servers here...
204 |       }
205 |     }
206 |     ```
207 |     *   Ensure the environment variables in the `env` block are correctly set, either matching `.env` or explicitly defined here. Remove comments from the actual JSON file.
208 | 
209 | 2.  **Restart/Reload Cline:** Cline should detect the configuration change and start the server.
210 | 
211 | 3.  **Use Tools:** You can now use the extensive list of tools via Cline.
212 | 
213 | ## Development
214 | 
215 | *   **Watch Mode:** `bun run watch`
216 | *   **Inspector:** `bun run inspector` (runs the MCP Inspector against the built server)
217 | 
218 | ## License
219 | 
220 | This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
221 | 
```

--------------------------------------------------------------------------------
/tsconfig.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "compilerOptions": {
 3 |     "target": "ES2022",
 4 |     "module": "Node16",
 5 |     "moduleResolution": "Node16",
 6 |     "outDir": "./build",
 7 |     "rootDir": "./src",
 8 |     "strict": true,
 9 |     "esModuleInterop": true,
10 |     "skipLibCheck": true,
11 |     "forceConsistentCasingInFileNames": true
12 |   },
13 |   "include": ["src/**/*"],
14 |   "exclude": ["node_modules"]
15 | }
16 | 
```

--------------------------------------------------------------------------------
/src/utils.ts:
--------------------------------------------------------------------------------

```typescript
 1 | import * as path from 'node:path';
 2 | import { WORKSPACE_ROOT } from './config.js';
 3 | 
 4 | export const sleep = (ms: number) => new Promise(resolve => setTimeout(resolve, ms));
 5 | 
 6 | // Basic path validation
 7 | export function sanitizePath(inputPath: string): string {
 8 |     const absolutePath = path.resolve(WORKSPACE_ROOT, inputPath);
 9 |     if (absolutePath !== WORKSPACE_ROOT && !absolutePath.startsWith(WORKSPACE_ROOT + path.sep)) {
10 |         throw new Error(`Access denied: Path is outside the workspace: ${inputPath}`);
11 |     }
12 |     // Defensive check: path.resolve should already have normalized any '..' segments
13 |     if (absolutePath.includes('..')) {
14 |          throw new Error(`Access denied: Invalid path component '..': ${inputPath}`);
15 |     }
16 |     return absolutePath;
17 | }
```
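
A brief usage sketch of `sanitizePath`: any path argument a tool receives is resolved against `WORKSPACE_ROOT`, and anything that resolves outside the workspace is rejected. The paths below are invented for illustration.

```typescript
// Hypothetical usage of sanitizePath (paths are examples only).
import { sanitizePath } from "./utils.js";

// A path inside the workspace resolves to an absolute path and is returned.
const safe = sanitizePath("src/tools/read_file.ts");
console.log(safe); // <WORKSPACE_ROOT>/src/tools/read_file.ts

// A path that escapes the workspace throws.
try {
  sanitizePath("../outside-the-workspace.txt");
} catch (err) {
  console.error((err as Error).message); // Access denied: Path is outside the workspace: ...
}
```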

--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------

```dockerfile
 1 | # Generated by https://smithery.ai. See: https://smithery.ai/docs/config#dockerfile
 2 | # Build stage
 3 | FROM node:lts-alpine AS build
 4 | WORKDIR /app
 5 | 
 6 | # Install dependencies without running prepare scripts
 7 | COPY package.json tsconfig.json bun.lock ./
 8 | RUN npm install --ignore-scripts
 9 | 
10 | # Copy source and transpile
11 | COPY . .
12 | RUN npx tsc -p tsconfig.json && chmod +x build/index.js
13 | 
14 | # Production image
15 | FROM node:lts-alpine
16 | WORKDIR /app
17 | 
18 | # Copy built application
19 | COPY --from=build /app/build ./build
20 | 
21 | # Install production dependencies without running prepare scripts
22 | COPY package.json bun.lock ./
23 | RUN npm install --omit=dev --ignore-scripts
24 | 
25 | ENV NODE_ENV=production
26 | ENTRYPOINT ["node", "build/index.js"]
27 | 
```

--------------------------------------------------------------------------------
/src/tools/tool_definition.ts:
--------------------------------------------------------------------------------

```typescript
 1 | import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
 2 | import type { Content, Tool } from "@google/genai";
 3 | 
 4 | export interface ToolDefinition {
 5 |     name: string;
 6 |     description: string;
 7 |     inputSchema: any; // Consider defining a stricter type like JSONSchema7
 8 |     buildPrompt: (args: any, modelId: string) => {
 9 |         systemInstructionText: string;
10 |         userQueryText: string;
11 |         useWebSearch: boolean;
12 |         enableFunctionCalling: boolean;
13 |     };
14 | }
15 | 
16 | export const modelIdPlaceholder = "${modelId}"; // Placeholder for dynamic model ID in descriptions
17 | 
18 | // Helper to build the initial content array
19 | export function buildInitialContent(systemInstruction: string, userQuery: string): Content[] {
20 |     return [{ role: "user", parts: [{ text: `${systemInstruction}\n\n${userQuery}` }] }];
21 | }
22 | 
23 | // Helper to determine tools for API call
24 | export function getToolsForApi(enableFunctionCalling: boolean, useWebSearch: boolean): Tool[] | undefined {
25 |      // Function calling is no longer supported by the remaining tools
26 |      return useWebSearch ? [{ googleSearch: {} } as any] : undefined; // Cast needed as SDK type might not include googleSearch directly
27 | }
```
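
To make the interface concrete, here is a minimal, hypothetical `ToolDefinition` wired through the two helpers above. The tool name, schema, and prompt text are invented for illustration and are not part of the repository.

```typescript
// Hypothetical example tool, showing how buildPrompt feeds buildInitialContent and getToolsForApi.
import { ToolDefinition, buildInitialContent, getToolsForApi } from "./tool_definition.js";

const echoQueryTool: ToolDefinition = {
  name: "echo_query_example",
  description: "Illustrative tool that forwards a query to the configured model.",
  inputSchema: {
    type: "object",
    properties: { query: { type: "string", description: "Question to forward." } },
    required: ["query"],
  },
  buildPrompt: (args: any, modelId: string) => ({
    systemInstructionText: `You are ${modelId}. Answer concisely.`,
    userQueryText: String(args.query ?? ""),
    useWebSearch: false,
    enableFunctionCalling: false,
  }),
};

// At request time the server would do something like:
const prompt = echoQueryTool.buildPrompt({ query: "What is MCP?" }, "gemini-2.5-pro-exp-03-25");
const contents = buildInitialContent(prompt.systemInstructionText, prompt.userQueryText);
const tools = getToolsForApi(prompt.enableFunctionCalling, prompt.useWebSearch); // undefined: no web search
console.log(contents.length, tools);
```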

--------------------------------------------------------------------------------
/package.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "name": "vertex-ai-mcp-server",
 3 |   "version": "0.4.0",
 4 |   "description": "A Model Context Protocol server supporting Vertex AI and Gemini API",
 5 |   "license": "MIT",
 6 |   "type": "module",
 7 |   "bin": {
 8 |     "vertex-ai-mcp-server": "build/index.js"
 9 |   },
10 |   "repository": {
11 |     "type": "git",
12 |     "url": "git+https://github.com/shariqriazz/vertex-ai-mcp-server.git"
13 |   },
14 |   "homepage": "https://github.com/shariqriazz/vertex-ai-mcp-server#readme",
15 |   "bugs": {
16 |     "url": "https://github.com/shariqriazz/vertex-ai-mcp-server/issues"
17 |   },
18 |   "files": [
19 |     "build"
20 |   ],
21 |   "scripts": {
22 |     "build": "bun run tsc && node -e \"require('fs').chmodSync('build/index.js', '755')\"",
23 |     "prepare": "bun run build",
24 |     "watch": "bun run tsc --watch",
25 |     "inspector": "bunx @modelcontextprotocol/inspector build/index.js"
26 |   },
27 |   "dependencies": {
28 |     "@google/genai": "^1.0.1",
29 |     "@modelcontextprotocol/sdk": "0.6.0",
30 |     "diff": "^7.0.0",
31 |     "dotenv": "^16.5.0",
32 |     "minimatch": "^10.0.1",
33 |     "zod": "^3.24.3",
34 |     "zod-to-json-schema": "^3.24.5"
35 |   },
36 |   "devDependencies": {
37 |     "@types/diff": "^7.0.2",
38 |     "@types/minimatch": "^5.1.2",
39 |     "@types/node": "^20.11.24",
40 |     "typescript": "^5.3.3"
41 |   }
42 | }
43 | 
```

--------------------------------------------------------------------------------
/src/tools/directory_tree.ts:
--------------------------------------------------------------------------------

```typescript
 1 | import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
 2 | import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
 3 | import { z } from "zod";
 4 | import { zodToJsonSchema } from "zod-to-json-schema";
 5 | 
 6 | // Schema definition (adapted from example.ts) - Exported
 7 | export const DirectoryTreeArgsSchema = z.object({
 8 |   path: z.string().describe("The root path for the directory tree (relative to the workspace directory)."),
 9 | });
10 | 
11 | // Convert Zod schema to JSON schema
12 | const DirectoryTreeJsonSchema = zodToJsonSchema(DirectoryTreeArgsSchema);
13 | 
14 | export const directoryTreeTool: ToolDefinition = {
15 |     name: "get_directory_tree", // Renamed slightly
16 |     description:
17 |       "Get a recursive tree view of files and directories within the workspace filesystem as a JSON structure. " +
18 |       "Each entry includes 'name', 'type' (file/directory), and 'children' (an array) for directories. " +
19 |       "Files have no 'children' array. The output is formatted JSON text. " +
20 |       "Useful for understanding the complete structure of a project directory.",
21 |     inputSchema: DirectoryTreeJsonSchema as any, // Cast as any if needed
22 | 
23 |     // Minimal buildPrompt as execution logic is separate
24 |     buildPrompt: (args: any, modelId: string) => {
25 |         const parsed = DirectoryTreeArgsSchema.safeParse(args);
26 |         if (!parsed.success) {
27 |             throw new McpError(ErrorCode.InvalidParams, `Invalid arguments for get_directory_tree: ${parsed.error}`);
28 |         }
29 |         return {
30 |             systemInstructionText: "",
31 |             userQueryText: "",
32 |             useWebSearch: false,
33 |             enableFunctionCalling: false
34 |         };
35 |     },
36 |     // No 'execute' function here
37 | };
```
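
Based on the description above, the JSON produced by `get_directory_tree` can be modeled roughly as follows. This type is a sketch inferred from the tool description, not something exported by the project, and the example values are invented.

```typescript
// Approximate shape of the get_directory_tree output, inferred from the description.
interface TreeEntry {
  name: string;
  type: "file" | "directory";
  children?: TreeEntry[]; // present only for directories
}

// Illustrative output for a small project:
const exampleTree: TreeEntry[] = [
  {
    name: "src",
    type: "directory",
    children: [
      { name: "index.ts", type: "file" },
      { name: "utils.ts", type: "file" },
    ],
  },
  { name: "package.json", type: "file" },
];

console.log(JSON.stringify(exampleTree, null, 2));
```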

--------------------------------------------------------------------------------
/src/tools/list_directory.ts:
--------------------------------------------------------------------------------

```typescript
 1 | import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
 2 | import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
 3 | import { z } from "zod";
 4 | import { zodToJsonSchema } from "zod-to-json-schema";
 5 | 
 6 | // Schema definition (adapted from example.ts) - Exported
 7 | export const ListDirectoryArgsSchema = z.object({
 8 |   path: z.string().describe("The path of the directory to list (relative to the workspace directory)."),
 9 | });
10 | 
11 | // Convert Zod schema to JSON schema
12 | const ListDirectoryJsonSchema = zodToJsonSchema(ListDirectoryArgsSchema);
13 | 
14 | export const listDirectoryTool: ToolDefinition = {
15 |     name: "list_directory_contents", // Renamed slightly
16 |     description:
17 |       "Get a detailed listing of all files and directories directly within a specified path in the workspace filesystem. " +
18 |       "Results clearly distinguish between files and directories with [FILE] and [DIR] " +
19 |       "prefixes. This tool is essential for understanding directory structure and " +
20 |       "finding specific files within a directory. Does not list recursively.",
21 |     inputSchema: ListDirectoryJsonSchema as any, // Cast as any if needed
22 | 
23 |     // Minimal buildPrompt as execution logic is separate
24 |     buildPrompt: (args: any, modelId: string) => {
25 |         const parsed = ListDirectoryArgsSchema.safeParse(args);
26 |         if (!parsed.success) {
27 |             throw new McpError(ErrorCode.InvalidParams, `Invalid arguments for list_directory_contents: ${parsed.error}`);
28 |         }
29 |         return {
30 |             systemInstructionText: "",
31 |             userQueryText: "",
32 |             useWebSearch: false,
33 |             enableFunctionCalling: false
34 |         };
35 |     },
36 |     // No 'execute' function here
37 | };
```

--------------------------------------------------------------------------------
/src/tools/get_file_info.ts:
--------------------------------------------------------------------------------

```typescript
 1 | import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
 2 | import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
 3 | import { z } from "zod";
 4 | import { zodToJsonSchema } from "zod-to-json-schema";
 5 | 
 6 | // Schema definition (adapted from example.ts) - Exported
 7 | export const GetFileInfoArgsSchema = z.object({
 8 |   path: z.string().describe("The path of the file or directory to get info for (relative to the workspace directory)."),
 9 | });
10 | 
11 | // Convert Zod schema to JSON schema
12 | const GetFileInfoJsonSchema = zodToJsonSchema(GetFileInfoArgsSchema);
13 | 
14 | export const getFileInfoTool: ToolDefinition = {
15 |     name: "get_filesystem_info", // Renamed slightly
16 |     description:
17 |       "Retrieve detailed metadata about a file or directory within the workspace filesystem. " +
18 |       "Returns comprehensive information including size (bytes), creation time, last modified time, " +
19 |       "last accessed time, type (file/directory), and permissions (octal string). " +
20 |       "This tool is perfect for understanding file characteristics without reading the actual content.",
21 |     inputSchema: GetFileInfoJsonSchema as any, // Cast as any if needed
22 | 
23 |     // Minimal buildPrompt as execution logic is separate
24 |     buildPrompt: (args: any, modelId: string) => {
25 |         const parsed = GetFileInfoArgsSchema.safeParse(args);
26 |         if (!parsed.success) {
27 |             throw new McpError(ErrorCode.InvalidParams, `Invalid arguments for get_filesystem_info: ${parsed.error}`);
28 |         }
29 |         return {
30 |             systemInstructionText: "",
31 |             userQueryText: "",
32 |             useWebSearch: false,
33 |             enableFunctionCalling: false
34 |         };
35 |     },
36 |     // No 'execute' function here
37 | };
```

--------------------------------------------------------------------------------
/src/tools/execute_terminal_command.ts:
--------------------------------------------------------------------------------

```typescript
 1 | import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
 2 | import { ToolDefinition } from "./tool_definition.js";
 3 | import { z } from "zod";
 4 | import { zodToJsonSchema } from "zod-to-json-schema";
 5 | 
 6 | // Schema definition
 7 | export const ExecuteTerminalCommandArgsSchema = z.object({
 8 |   command: z.string().describe("The command line instruction to execute."),
 9 |   cwd: z.string().optional().describe("Optional. The working directory to run the command in (relative to the workspace root). Defaults to the workspace root if not specified."),
10 |   timeout: z.number().int().positive().optional().describe("Optional. Maximum execution time in seconds. If the command exceeds this time, it will be terminated."),
11 | });
12 | 
13 | // Convert Zod schema to JSON schema
14 | const ExecuteTerminalCommandJsonSchema = zodToJsonSchema(ExecuteTerminalCommandArgsSchema);
15 | 
16 | export const executeTerminalCommandTool: ToolDefinition = {
17 |     name: "execute_terminal_command", // Renamed
18 |     description:
19 |       "Execute a shell command on the server's operating system. " +
20 |       "Allows specifying the command, an optional working directory (cwd), and an optional timeout in seconds. " +
21 |       "Returns the combined stdout and stderr output of the command upon completion or termination.",
22 |     inputSchema: ExecuteTerminalCommandJsonSchema as any,
23 | 
24 |     // Minimal buildPrompt as execution logic is separate
25 |     buildPrompt: (args: any, modelId: string) => {
26 |         const parsed = ExecuteTerminalCommandArgsSchema.safeParse(args);
27 |         if (!parsed.success) {
28 |             throw new McpError(ErrorCode.InvalidParams, `Invalid arguments for execute_terminal_command: ${parsed.error}`);
29 |         }
30 |         return {
31 |             systemInstructionText: "",
32 |             userQueryText: "",
33 |             useWebSearch: false,
34 |             enableFunctionCalling: false
35 |         };
36 |     },
37 |     // No 'execute' function here
38 | };
```
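
For reference, a valid argument object for `execute_terminal_command` looks like the following; it parses against the exported Zod schema. The command, working directory, and timeout are example values only.

```typescript
// Illustrative arguments for execute_terminal_command.
import { ExecuteTerminalCommandArgsSchema } from "./execute_terminal_command.js";

const commandArgs = {
  command: "bun run build",
  cwd: ".",       // optional, relative to the workspace root
  timeout: 120,   // optional, seconds before the command is terminated
};

console.log(ExecuteTerminalCommandArgsSchema.safeParse(commandArgs).success); // true
```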

--------------------------------------------------------------------------------
/src/tools/search_files.ts:
--------------------------------------------------------------------------------

```typescript
 1 | import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
 2 | import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
 3 | import { z } from "zod";
 4 | import { zodToJsonSchema } from "zod-to-json-schema";
 5 | 
 6 | // Schema definition (adapted from example.ts) - Exported
 7 | export const SearchFilesArgsSchema = z.object({
 8 |   path: z.string().describe("The starting directory path for the search (relative to the workspace directory)."),
 9 |   pattern: z.string().describe("The case-insensitive text pattern to search for in file/directory names."),
10 |   excludePatterns: z.array(z.string()).optional().default([]).describe("An array of glob patterns (e.g., 'node_modules', '*.log') to exclude from the search.")
11 | });
12 | 
13 | // Convert Zod schema to JSON schema
14 | const SearchFilesJsonSchema = zodToJsonSchema(SearchFilesArgsSchema);
15 | 
16 | export const searchFilesTool: ToolDefinition = {
17 |     name: "search_filesystem", // Renamed slightly
18 |     description:
19 |       "Recursively search for files and directories within the workspace filesystem matching a pattern in their name. " +
20 |       "Searches through all subdirectories from the starting path. The search " +
21 |       "is case-insensitive and matches partial names. Returns full paths (relative to workspace) to all " +
22 |       "matching items. Supports excluding paths using glob patterns.",
23 |     inputSchema: SearchFilesJsonSchema as any, // Cast as any if needed
24 | 
25 |     // Minimal buildPrompt as execution logic is separate
26 |     buildPrompt: (args: any, modelId: string) => {
27 |         const parsed = SearchFilesArgsSchema.safeParse(args);
28 |         if (!parsed.success) {
29 |             throw new McpError(ErrorCode.InvalidParams, `Invalid arguments for search_filesystem: ${parsed.error}`);
30 |         }
31 |         return {
32 |             systemInstructionText: "",
33 |             userQueryText: "",
34 |             useWebSearch: false,
35 |             enableFunctionCalling: false
36 |         };
37 |     },
38 |     // No 'execute' function here
39 | };
```

--------------------------------------------------------------------------------
/src/tools/move_file.ts:
--------------------------------------------------------------------------------

```typescript
 1 | import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
 2 | import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
 3 | import { z } from "zod";
 4 | import { zodToJsonSchema } from "zod-to-json-schema";
 5 | 
 6 | // Schema definition (adapted from example.ts) - Exported
 7 | export const MoveFileArgsSchema = z.object({
 8 |   source: z.string().describe("The current path of the file or directory to move (relative to the workspace directory)."),
 9 |   destination: z.string().describe("The new path for the file or directory (relative to the workspace directory)."),
10 | });
11 | 
12 | // Convert Zod schema to JSON schema
13 | const MoveFileJsonSchema = zodToJsonSchema(MoveFileArgsSchema);
14 | 
15 | export const moveFileTool: ToolDefinition = {
16 |     name: "move_file_or_directory", // Renamed slightly
17 |     description:
18 |       "Move or rename files and directories within the workspace filesystem. " +
19 |       "Can move items between directories and rename them in a single operation. " +
20 |       "If the destination path already exists, the operation will likely fail (OS-dependent).",
21 |     inputSchema: MoveFileJsonSchema as any, // Cast as any if needed
22 | 
23 |     // Minimal buildPrompt as execution logic is separate
24 |     buildPrompt: (args: any, modelId: string) => {
25 |         const parsed = MoveFileArgsSchema.safeParse(args);
26 |         if (!parsed.success) {
27 |             throw new McpError(ErrorCode.InvalidParams, `Invalid arguments for move_file_or_directory: ${parsed.error}`);
28 |         }
29 |         // Add check: source and destination cannot be the same
30 |         if (parsed.data.source === parsed.data.destination) {
31 |              throw new McpError(ErrorCode.InvalidParams, `Invalid arguments for move_file_or_directory: source and destination paths cannot be the same.`);
32 |         }
33 |         return {
34 |             systemInstructionText: "",
35 |             userQueryText: "",
36 |             useWebSearch: false,
37 |             enableFunctionCalling: false
38 |         };
39 |     },
40 |     // No 'execute' function here
41 | };
```

--------------------------------------------------------------------------------
/src/tools/write_file.ts:
--------------------------------------------------------------------------------

```typescript
 1 | import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
 2 | import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
 3 | import { z } from "zod";
 4 | import { zodToJsonSchema } from "zod-to-json-schema";
 5 | 
 6 | // Schema definition (adapted from example.ts) - Exported
 7 | // Schema for a single file write operation
 8 | const SingleWriteOperationSchema = z.object({
 9 |   path: z.string().describe("The path of the file to write (relative to the workspace directory)."),
10 |   content: z.string().describe("The full content to write to the file."),
11 | });
12 | 
13 | // Schema for the arguments object, containing either a single write or an array of writes
14 | export const WriteFileArgsSchema = z.object({
15 |     writes: z.union([
16 |         SingleWriteOperationSchema.describe("A single file write operation."),
17 |         z.array(SingleWriteOperationSchema).min(1).describe("An array of file write operations.")
18 |     ]).describe("A single write operation or an array of write operations.")
19 | });
20 | 
21 | 
22 | // Convert Zod schema to JSON schema
23 | const WriteFileJsonSchema = zodToJsonSchema(WriteFileArgsSchema);
24 | 
25 | export const writeFileTool: ToolDefinition = {
26 |     name: "write_file_content", // Keep name consistent
27 |     description:
28 |       "Create new files or completely overwrite existing files in the workspace filesystem. " +
29 |       "The 'writes' argument should be either a single object with 'path' and 'content', or an array of such objects to write multiple files. " +
30 |       "Use with caution as it will overwrite existing files without warning. " +
31 |       "Handles text content with proper encoding.",
32 |     inputSchema: WriteFileJsonSchema as any, // Cast as any if needed
33 | 
34 |     // Minimal buildPrompt as execution logic is separate
35 |     buildPrompt: (args: any, modelId: string) => {
36 |         const parsed = WriteFileArgsSchema.safeParse(args);
37 |         if (!parsed.success) {
38 |             throw new McpError(ErrorCode.InvalidParams, `Invalid arguments for write_file_content: ${parsed.error}`);
39 |         }
40 |         return {
41 |             systemInstructionText: "",
42 |             userQueryText: "",
43 |             useWebSearch: false,
44 |             enableFunctionCalling: false
45 |         };
46 |     },
47 |     // No 'execute' function here
48 | };
```
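
Both shapes accepted by the `writes` union above, a single object or an array of objects, are shown below; each parses against the exported schema. The file paths and contents are made up for illustration.

```typescript
// Illustrative arguments for write_file_content.
import { WriteFileArgsSchema } from "./write_file.js";

const singleWrite = {
  writes: { path: "notes/todo.md", content: "# TODO\n- try the MCP server\n" },
};

const multipleWrites = {
  writes: [
    { path: "src/generated/a.ts", content: "export const a = 1;\n" },
    { path: "src/generated/b.ts", content: "export const b = 2;\n" },
  ],
};

console.log(WriteFileArgsSchema.safeParse(singleWrite).success);    // true
console.log(WriteFileArgsSchema.safeParse(multipleWrites).success); // true
```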

--------------------------------------------------------------------------------
/src/tools/read_file.ts:
--------------------------------------------------------------------------------

```typescript
 1 | import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
 2 | import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
 3 | // Note: We don't need fs, path here as execution logic is moved
 4 | import { z } from "zod";
 5 | import { zodToJsonSchema } from "zod-to-json-schema";
 6 | 
 7 | // Schema definition (adapted from example.ts) - Exported
 8 | export const ReadFileArgsSchema = z.object({
 9 |   paths: z.union([
10 |       z.string().describe("The path of the file to read (relative to the workspace directory)."),
11 |       z.array(z.string()).min(1).describe("An array of file paths to read (relative to the workspace directory).")
12 |   ]).describe("A single file path or an array of file paths to read."),
13 | });
14 | 
15 | // Infer the input type for validation
16 | type ReadFileInput = z.infer<typeof ReadFileArgsSchema>;
17 | 
18 | // Convert Zod schema to JSON schema for the tool definition
19 | const ReadFileJsonSchema = zodToJsonSchema(ReadFileArgsSchema);
20 | 
21 | export const readFileTool: ToolDefinition = {
22 |     name: "read_file_content", // Keep the name consistent
23 |     description:
24 |       "Read the complete contents of one or more files from the workspace filesystem. " +
25 |       "Provide a single path string or an array of path strings. " +
26 |       "Handles various text encodings and provides detailed error messages " +
27 |       "if a file cannot be read. Failed reads for individual files in an array " +
28 |       "won't stop the entire operation when multiple paths are provided.",
29 |     // Use the converted JSON schema
30 |     inputSchema: ReadFileJsonSchema as any, // Cast as any to fit ToolDefinition if needed
31 | 
32 |     // This tool doesn't directly use the LLM, so buildPrompt is minimal/not used for execution
33 |     buildPrompt: (args: any, modelId: string) => {
34 |         // Basic validation
35 |         const parsed = ReadFileArgsSchema.safeParse(args);
36 |         if (!parsed.success) {
37 |             // Use InternalError or InvalidParams
38 |             throw new McpError(ErrorCode.InvalidParams, `Invalid arguments for read_file_content: ${parsed.error}`);
39 |         }
40 |         // No prompt generation needed for direct execution logic
41 |         return {
42 |             systemInstructionText: "", // Not applicable
43 |             userQueryText: "", // Not applicable
44 |             useWebSearch: false,
45 |             enableFunctionCalling: false
46 |         };
47 |     },
48 |     // Removed the 'execute' function - this logic will go into src/index.ts
49 | };
```

--------------------------------------------------------------------------------
/src/tools/edit_file.ts:
--------------------------------------------------------------------------------

```typescript
 1 | import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
 2 | import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
 3 | import { z } from "zod";
 4 | import { zodToJsonSchema } from "zod-to-json-schema";
 5 | 
 6 | // Schema definitions (adapted from example.ts) - Exported
 7 | export const EditOperationSchema = z.object({
 8 |   oldText: z.string().describe('Text to search for - attempts exact match first, then line-by-line whitespace-insensitive match.'),
 9 |   newText: z.string().describe('Text to replace with, preserving indentation where possible.')
10 | });
11 | 
12 | export const EditFileArgsSchema = z.object({
13 |   path: z.string().describe("The path of the file to edit (relative to the workspace directory)."),
14 |   edits: z.array(EditOperationSchema).describe("An array of edit operations to apply sequentially."),
15 |   dryRun: z.boolean().optional().default(false).describe('If true, preview changes using git-style diff format without saving.')
16 | });
17 | 
18 | // Convert Zod schema to JSON schema
19 | const EditFileJsonSchema = zodToJsonSchema(EditFileArgsSchema);
20 | 
21 | export const editFileTool: ToolDefinition = {
22 |     name: "edit_file_content", // Renamed slightly
23 |     description:
24 |       "Make line-based edits to a text file in the workspace filesystem. Each edit attempts to replace " +
25 |       "an exact match of 'oldText' with 'newText'. If no exact match is found, it attempts a " +
26 |       "line-by-line match ignoring leading/trailing whitespace. Indentation of the first line " +
27 |       "is preserved, and relative indentation of subsequent lines is attempted. " +
28 |       "Returns a git-style diff showing the changes made (or previewed if dryRun is true).",
29 |     inputSchema: EditFileJsonSchema as any, // Cast as any if needed
30 | 
31 |     // Minimal buildPrompt as execution logic is separate
32 |     buildPrompt: (args: any, modelId: string) => {
33 |         const parsed = EditFileArgsSchema.safeParse(args);
34 |         if (!parsed.success) {
35 |             throw new McpError(ErrorCode.InvalidParams, `Invalid arguments for edit_file_content: ${parsed.error}`);
36 |         }
37 |         // Add a check for empty edits array
38 |         if (parsed.data.edits.length === 0) {
39 |              throw new McpError(ErrorCode.InvalidParams, `Invalid arguments for edit_file_content: 'edits' array cannot be empty.`);
40 |         }
41 |         return {
42 |             systemInstructionText: "",
43 |             userQueryText: "",
44 |             useWebSearch: false,
45 |             enableFunctionCalling: false
46 |         };
47 |     },
48 |     // No 'execute' function here
49 | };
```
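
A sample argument object for `edit_file_content` is shown below, with `dryRun` enabled so only a diff preview would be produced. The target path and the replaced text are invented for illustration.

```typescript
// Illustrative arguments for edit_file_content.
import { EditFileArgsSchema } from "./edit_file.js";

const editArgs = {
  path: "src/config.ts",
  edits: [
    {
      oldText: 'const DEFAULT_LOCATION = "us-central1";',
      newText: 'const DEFAULT_LOCATION = "europe-west1";',
    },
  ],
  dryRun: true, // preview the git-style diff without writing the file
};

console.log(EditFileArgsSchema.safeParse(editArgs).success); // true
```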

--------------------------------------------------------------------------------
/smithery.yaml:
--------------------------------------------------------------------------------

```yaml
 1 | # Smithery configuration file: https://smithery.ai/docs/config#smitheryyaml
 2 | 
 3 | startCommand:
 4 |   type: stdio
 5 |   configSchema:
 6 |     # JSON Schema defining the configuration options for the MCP.
 7 |     type: object
 8 |     required:
 9 |       - googleCloudProject
10 |       - googleCloudLocation
11 |     properties:
12 |       googleCloudProject:
13 |         type: string
14 |         description: Google Cloud Project ID
15 |       googleCloudLocation:
16 |         type: string
17 |         description: Google Cloud Location
18 |       googleApplicationCredentials:
19 |         type: string
20 |         description: Path to service account key JSON
21 |       vertexAiModelId:
22 |         type: string
23 |         default: gemini-2.5-pro-exp-03-25
24 |         description: Vertex AI Model ID
25 |       vertexAiTemperature:
26 |         type: number
27 |         default: 0
28 |         description: Temperature for model
29 |       vertexAiUseStreaming:
30 |         type: boolean
31 |         default: true
32 |         description: Whether to use streaming
33 |       vertexAiMaxOutputTokens:
34 |         type: number
35 |         default: 65535
36 |         description: Max output tokens
37 |       vertexAiMaxRetries:
38 |         type: number
39 |         default: 3
40 |         description: Max retry attempts
41 |       vertexAiRetryDelayMs:
42 |         type: number
43 |         default: 1000
44 |         description: Delay between retries in ms
45 |   commandFunction:
46 |     # A JS function that produces the CLI command based on the given config to start the MCP on stdio.
47 |     |-
48 |     (config) => ({ command: 'node', args: ['build/index.js'], env: { ...(config.googleCloudProject && { GOOGLE_CLOUD_PROJECT: config.googleCloudProject }), ...(config.googleCloudLocation && { GOOGLE_CLOUD_LOCATION: config.googleCloudLocation }), ...(config.googleApplicationCredentials && { GOOGLE_APPLICATION_CREDENTIALS: config.googleApplicationCredentials }), ...(config.vertexAiModelId && { VERTEX_AI_MODEL_ID: config.vertexAiModelId }), ...(config.vertexAiTemperature !== undefined && { VERTEX_AI_TEMPERATURE: String(config.vertexAiTemperature) }), ...(config.vertexAiUseStreaming !== undefined && { VERTEX_AI_USE_STREAMING: String(config.vertexAiUseStreaming) }), ...(config.vertexAiMaxOutputTokens !== undefined && { VERTEX_AI_MAX_OUTPUT_TOKENS: String(config.vertexAiMaxOutputTokens) }), ...(config.vertexAiMaxRetries !== undefined && { VERTEX_AI_MAX_RETRIES: String(config.vertexAiMaxRetries) }), ...(config.vertexAiRetryDelayMs !== undefined && { VERTEX_AI_RETRY_DELAY_MS: String(config.vertexAiRetryDelayMs) }) } })
49 |   exampleConfig:
50 |     googleCloudProject: my-gcp-project
51 |     googleCloudLocation: us-central1
52 |     googleApplicationCredentials: /path/to/credentials.json
53 |     vertexAiModelId: gemini-2.5-pro-exp-03-25
54 |     vertexAiTemperature: 0
55 |     vertexAiUseStreaming: true
56 |     vertexAiMaxOutputTokens: 65535
57 |     vertexAiMaxRetries: 3
58 |     vertexAiRetryDelayMs: 1000
59 | 
```
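
For reference, evaluating the `commandFunction` above against `exampleConfig` yields roughly the launch description below, sketched as a TypeScript literal. The values follow directly from the YAML; nothing here is read from the actual server code.

```typescript
// Approximate result of commandFunction(exampleConfig) from smithery.yaml.
const launch = {
  command: "node",
  args: ["build/index.js"],
  env: {
    GOOGLE_CLOUD_PROJECT: "my-gcp-project",
    GOOGLE_CLOUD_LOCATION: "us-central1",
    GOOGLE_APPLICATION_CREDENTIALS: "/path/to/credentials.json",
    VERTEX_AI_MODEL_ID: "gemini-2.5-pro-exp-03-25",
    VERTEX_AI_TEMPERATURE: "0",
    VERTEX_AI_USE_STREAMING: "true",
    VERTEX_AI_MAX_OUTPUT_TOKENS: "65535",
    VERTEX_AI_MAX_RETRIES: "3",
    VERTEX_AI_RETRY_DELAY_MS: "1000",
  },
};

console.log(JSON.stringify(launch, null, 2));
```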

--------------------------------------------------------------------------------
/src/tools/index.ts:
--------------------------------------------------------------------------------

```typescript
 1 | import { ToolDefinition } from "./tool_definition.js";
 2 | import { answerQueryWebsearchTool } from "./answer_query_websearch.js";
 3 | import { answerQueryDirectTool } from "./answer_query_direct.js";
 4 | import { explainTopicWithDocsTool } from "./explain_topic_with_docs.js";
 5 | import { getDocSnippetsTool } from "./get_doc_snippets.js";
 6 | import { generateProjectGuidelinesTool } from "./generate_project_guidelines.js";
 7 | // Filesystem Tools (Imported)
 8 | import { readFileTool } from "./read_file.js"; // Handles single and multiple files now
 9 | // import { readMultipleFilesTool } from "./read_multiple_files.js"; // Merged into readFileTool
10 | import { writeFileTool } from "./write_file.js";
11 | import { editFileTool } from "./edit_file.js";
12 | // import { createDirectoryTool } from "./create_directory.js"; // Removed
13 | import { listDirectoryTool } from "./list_directory.js";
14 | import { directoryTreeTool } from "./directory_tree.js";
15 | import { moveFileTool } from "./move_file.js";
16 | import { searchFilesTool } from "./search_files.js";
17 | import { getFileInfoTool } from "./get_file_info.js";
18 | import { executeTerminalCommandTool } from "./execute_terminal_command.js"; // Renamed file and tool variable
19 | // Import the new combined tools
20 | import { saveGenerateProjectGuidelinesTool } from "./save_generate_project_guidelines.js";
21 | import { saveDocSnippetTool } from "./save_doc_snippet.js";
22 | import { saveTopicExplanationTool } from "./save_topic_explanation.js";
23 | // Removed old save_query_answer, added new specific ones
24 | import { saveAnswerQueryDirectTool } from "./save_answer_query_direct.js";
25 | import { saveAnswerQueryWebsearchTool } from "./save_answer_query_websearch.js";
26 | 
27 | // Import new research-oriented tools
28 | import { codeAnalysisWithDocsTool } from "./code_analysis_with_docs.js";
29 | import { technicalComparisonTool } from "./technical_comparison.js";
30 | import { architecturePatternRecommendationTool } from "./architecture_pattern_recommendation.js";
31 | import { dependencyVulnerabilityScanTool } from "./dependency_vulnerability_scan.js";
32 | import { databaseSchemaAnalyzerTool } from "./database_schema_analyzer.js";
33 | import { securityBestPracticesAdvisorTool } from "./security_best_practices_advisor.js";
34 | import { testingStrategyGeneratorTool } from "./testing_strategy_generator.js";
35 | import { regulatoryComplianceAdvisorTool } from "./regulatory_compliance_advisor.js";
36 | import { microserviceDesignAssistantTool } from "./microservice_design_assistant.js";
37 | import { documentationGeneratorTool } from "./documentation_generator.js";
38 | 
39 | 
40 | export const allTools: ToolDefinition[] = [
41 |     // Query & Generation Tools
42 |     answerQueryWebsearchTool,
43 |     answerQueryDirectTool,
44 |     explainTopicWithDocsTool,
45 |     getDocSnippetsTool,
46 |     generateProjectGuidelinesTool,
47 |     // Filesystem Tools
48 |     readFileTool, // Handles single and multiple files now
49 |     // readMultipleFilesTool, // Merged into readFileTool
50 |     writeFileTool,
51 |     editFileTool,
52 |     // createDirectoryTool, // Removed
53 |     listDirectoryTool,
54 |     directoryTreeTool,
55 |     moveFileTool,
56 |     searchFilesTool,
57 |     getFileInfoTool,
58 |     executeTerminalCommandTool, // Renamed
59 |     // Add the new combined tools
60 |     saveGenerateProjectGuidelinesTool,
61 |     saveDocSnippetTool,
62 |     saveTopicExplanationTool,
63 |     // Removed old save_query_answer, added new specific ones
64 |     saveAnswerQueryDirectTool,
65 |     saveAnswerQueryWebsearchTool,
66 |     
67 |     // New research-oriented tools
68 |     codeAnalysisWithDocsTool,
69 |     technicalComparisonTool,
70 |     architecturePatternRecommendationTool,
71 |     dependencyVulnerabilityScanTool,
72 |     databaseSchemaAnalyzerTool,
73 |     securityBestPracticesAdvisorTool,
74 |     testingStrategyGeneratorTool,
75 |     regulatoryComplianceAdvisorTool,
76 |     microserviceDesignAssistantTool,
77 |     documentationGeneratorTool,
78 | ];
79 | 
80 | // Create a map for easy lookup
81 | export const toolMap = new Map<string, ToolDefinition>(
82 |     allTools.map(tool => [tool.name, tool])
83 | );
```
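
The `toolMap` above is what the request handler in `src/index.ts` (page 2) uses to dispatch `tools/call` requests. The sketch below assumes it sits alongside `src/index.ts`; the handler shape and the expected `useWebSearch` value are assumptions based on the tool descriptions, not code copied from the project.

```typescript
// Sketch of dispatching a tool call through toolMap (handler details are illustrative).
import { toolMap } from "./tools/index.js";

function buildPromptFor(toolName: string, args: unknown, modelId: string) {
  const tool = toolMap.get(toolName);
  if (!tool) {
    throw new Error(`Unknown tool: ${toolName}`);
  }
  return tool.buildPrompt(args, modelId);
}

const prompt = buildPromptFor("answer_query_direct", { query: "Explain ADC." }, "gemini-2.5-pro-exp-03-25");
console.log(prompt.useWebSearch); // expected: false for the direct-answer tool
```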

--------------------------------------------------------------------------------
/src/tools/save_topic_explanation.ts:
--------------------------------------------------------------------------------

```typescript
 1 | import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
 2 | import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
 3 | import { z } from "zod";
 4 | import { zodToJsonSchema } from "zod-to-json-schema";
 5 | 
 6 | // Schema combining explain_topic_with_docs args + output_path
 7 | export const SaveTopicExplanationArgsSchema = z.object({
 8 |     topic: z.string().describe("The software/library/framework topic (e.g., 'React Router', 'Python requests')."),
 9 |     query: z.string().describe("The specific question to answer based on the documentation."),
10 |     output_path: z.string().describe("The relative path where the generated explanation should be saved (e.g., 'explanations/react-router-hooks.md').")
11 | });
12 | 
13 | // Convert Zod schema to JSON schema
14 | const SaveTopicExplanationJsonSchema = zodToJsonSchema(SaveTopicExplanationArgsSchema);
15 | 
16 | export const saveTopicExplanationTool: ToolDefinition = {
17 |     name: "save_topic_explanation",
18 |     description: `Provides a detailed explanation for a query about a specific software topic using official documentation found via web search and saves the result to a file. Uses the configured Vertex AI model (${modelIdPlaceholder}). Requires 'topic', 'query', and 'output_path'.`,
19 |     inputSchema: SaveTopicExplanationJsonSchema as any,
20 | 
21 |     // Build prompt logic adapted from explain_topic_with_docs (Reverted to original working version)
22 |     buildPrompt: (args: any, modelId: string) => {
23 |         const parsed = SaveTopicExplanationArgsSchema.safeParse(args);
24 |          if (!parsed.success) {
25 |              throw new McpError(ErrorCode.InvalidParams, `Invalid arguments for save_topic_explanation: ${parsed.error.errors.map(e => `${e.path.join('.')}: ${e.message}`).join(', ')}`);
26 |         }
27 |         const { topic, query } = parsed.data; // output_path used in handler
28 | 
29 |         const systemInstructionText = `You are an expert technical writer and documentation specialist. Your task is to provide a comprehensive and accurate explanation for a specific query about a software topic ("${topic}"), synthesizing information primarily from official documentation found via web search.
30 | 
31 | SEARCH METHODOLOGY:
32 | 1.  Identify the official documentation source for "${topic}".
33 | 2.  Search the official documentation specifically for information related to "${query}".
34 | 3.  Prioritize explanations, concepts, and usage examples directly from the official docs.
35 | 4.  If official docs are sparse, supplement with highly reputable sources (e.g., official blogs, key contributor articles), but clearly distinguish this from official documentation content.
36 | 
37 | RESPONSE REQUIREMENTS:
38 | 1.  **Accuracy:** Ensure the explanation is technically correct and reflects the official documentation for "${topic}".
39 | 2.  **Comprehensiveness:** Provide sufficient detail to thoroughly answer the query, including relevant concepts, code examples (if applicable and found in docs), and context.
40 | 3.  **Clarity:** Structure the explanation logically with clear language, headings, bullet points, and code formatting where appropriate.
41 | 4.  **Citation:** Cite the official documentation source(s) used.
42 | 5.  **Focus:** Directly address the user's query ("${query}") without unnecessary introductory or concluding remarks. Start directly with the explanation.
43 | 6.  **Format:** Use Markdown for formatting.`; // Reverted: Removed the "CRITICAL: Do NOT start..." instruction
44 | 
45 |         const userQueryText = `Provide a comprehensive explanation for the query "${query}" regarding the software topic "${topic}". Base the explanation primarily on official documentation found via web search. Include relevant concepts, code examples (if available in docs), and cite sources.`; // Reverted: Removed the extra instruction about starting format
46 | 
47 |         return {
48 |             systemInstructionText: systemInstructionText,
49 |             userQueryText: userQueryText,
50 |             useWebSearch: true, // Always use web search for explanations based on docs
51 |             enableFunctionCalling: false
52 |         };
53 |     }
54 | };
```
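
For reference, a minimal sketch of exercising this tool's `buildPrompt` in isolation (the import path and model ID are assumptions for this sketch; the real wiring lives in the server's tool handler):

```typescript
// Illustrative only: calls buildPrompt for save_topic_explanation directly.
import { saveTopicExplanationTool } from "./save_topic_explanation.js";

const parts = saveTopicExplanationTool.buildPrompt(
    {
        topic: "React Router",
        query: "How do loaders work in data routers?",
        output_path: "explanations/react-router-loaders.md"
    },
    "gemini-2.5-pro-exp-03-25"
);

// buildPrompt validates args with the Zod schema and throws McpError on failure.
// output_path is not consumed here - the tool handler writes the file after the model responds.
console.log(parts.useWebSearch);          // true - explanations are grounded in documentation search
console.log(parts.systemInstructionText); // full system instruction passed to the model
```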

--------------------------------------------------------------------------------
/src/tools/answer_query_websearch.ts:
--------------------------------------------------------------------------------

```typescript
 1 | import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
 2 | import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
 3 | 
 4 | export const answerQueryWebsearchTool: ToolDefinition = {
 5 |     name: "answer_query_websearch",
 6 |     description: `Answers a natural language query using the configured Vertex AI model (${modelIdPlaceholder}) enhanced with Google Search results for up-to-date information. Requires a 'query' string.`,
 7 |     inputSchema: { type: "object", properties: { query: { type: "string", description: "The natural language question to answer using web search." } }, required: ["query"] },
 8 |     buildPrompt: (args: any, modelId: string) => {
 9 |         const query = args.query;
10 |         if (typeof query !== "string" || !query) throw new McpError(ErrorCode.InvalidParams, "Missing 'query'.");
11 |         const base = `You are an AI assistant designed to answer questions accurately using provided search results. You are an EXPERT at synthesizing information from diverse sources into comprehensive, well-structured responses.`;
12 |         
13 |         const ground = ` Base your answer *only* on Google Search results relevant to "${query}". Synthesize information from search results into a coherent, comprehensive response that directly addresses the query. If search results are insufficient or irrelevant, explicitly state which aspects you cannot answer based on available information. Never add information not present in search results. When search results conflict, acknowledge the contradictions and explain different perspectives.`;
14 |         
15 |         const structure = ` Structure your response with clear organization:
16 | 1. Begin with a concise executive summary of 2-3 sentences that directly answers the main question.
17 | 2. For complex topics, use appropriate headings and subheadings to organize different aspects of the answer.
18 | 3. Present information from newest to oldest when dealing with evolving topics or current events.
19 | 4. Where appropriate, use numbered or bulleted lists to present steps, features, or comparative points.
20 | 5. For controversial topics, present multiple perspectives fairly with supporting evidence from search results.
21 | 6. Include a "Sources and Limitations" section at the end that notes the reliability of sources and any information gaps.`;
22 |         
23 |         const citation = ` Citation requirements:
24 | 1. Cite specific sources within your answer using [Source X] format.
25 | 2. Prioritize information from reliable, authoritative sources over random websites or forums.
26 | 3. For statistics, quotes, or specific claims, attribute the specific source.
27 | 4. Evaluate source credibility and recency - prefer official, recent sources for time-sensitive topics.
28 | 5. When search results indicate information might be outdated, explicitly note this limitation.`;
29 |         
30 |         const format = ` Format your answer in clean, readable Markdown:
31 | 1. Use proper headings (##, ###) for major sections.
32 | 2. Use **bold** for emphasis of key points.
33 | 3. Use \`code formatting\` for technical terms, commands, or code snippets when relevant.
34 | 4. Create tables for comparing multiple items or options.
35 | 5. Use blockquotes (>) for direct quotations from sources.`;
36 |         return {
37 |             systemInstructionText: base + ground + structure + citation + format,
38 |             userQueryText: `I need a comprehensive answer to this question: "${query}"
39 | 
40 | In your answer:
41 | 1. Thoroughly search for and evaluate ALL relevant information from search results
42 | 2. Synthesize information from multiple sources into a coherent, well-structured response
43 | 3. Present differing viewpoints fairly when sources disagree
44 | 4. Include appropriate citations to specific sources
45 | 5. Note any limitations in the available information
46 | 6. Organize your response logically with clear headings and sections
47 | 7. Use appropriate formatting to enhance readability
48 | 
49 | Please provide your COMPLETE response addressing all aspects of my question.`,
50 |             useWebSearch: true,
51 |             enableFunctionCalling: false
52 |         };
53 |     }
54 | };
```
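
A short sketch of the prompt-parts object this tool produces, and of the error path when `query` is missing (import paths match the file above; the model ID is illustrative):

```typescript
// Illustrative only - shows the shape returned by answer_query_websearch's buildPrompt.
import { McpError } from "@modelcontextprotocol/sdk/types.js";
import { answerQueryWebsearchTool } from "./answer_query_websearch.js";

try {
    const parts = answerQueryWebsearchTool.buildPrompt(
        { query: "What changed in Node.js 22?" },
        "gemini-2.5-pro-exp-03-25"
    );
    // Web search is always on for this tool; function calling is always off.
    console.log(parts.useWebSearch, parts.enableFunctionCalling); // true false
    console.log(parts.userQueryText.split("\n")[0]);              // first line of the user prompt
} catch (err) {
    // Thrown when 'query' is missing or not a string.
    if (err instanceof McpError) console.error("Invalid arguments:", err.message);
}
```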

--------------------------------------------------------------------------------
/src/tools/save_answer_query_websearch.ts:
--------------------------------------------------------------------------------

```typescript
 1 | import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
 2 | import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
 3 | import { z } from "zod";
 4 | import { zodToJsonSchema } from "zod-to-json-schema";
 5 | 
 6 | // Schema for websearch query answer + output path
 7 | export const SaveAnswerQueryWebsearchArgsSchema = z.object({
 8 |     query: z.string().describe("The natural language question to answer using web search."),
 9 |     output_path: z.string().describe("The relative path where the generated answer should be saved.")
10 | });
11 | 
12 | // Convert Zod schema to JSON schema
13 | const SaveAnswerQueryWebsearchJsonSchema = zodToJsonSchema(SaveAnswerQueryWebsearchArgsSchema);
14 | 
15 | export const saveAnswerQueryWebsearchTool: ToolDefinition = {
16 |     name: "save_answer_query_websearch",
17 |     description: `Answers a natural language query using Google Search results and saves the answer to a file. Uses the configured Vertex AI model (${modelIdPlaceholder}). Requires 'query' and 'output_path'.`,
18 |     inputSchema: SaveAnswerQueryWebsearchJsonSchema as any,
19 |     buildPrompt: (args: any, modelId: string) => {
20 |         const parsed = SaveAnswerQueryWebsearchArgsSchema.safeParse(args);
21 |         if (!parsed.success) {
22 |             throw new McpError(ErrorCode.InvalidParams, `Invalid arguments for save_answer_query_websearch: ${parsed.error.errors.map(e => `${e.path.join('.')}: ${e.message}`).join(', ')}`);
23 |         }
24 |         const { query } = parsed.data; // output_path used in handler
25 | 
26 |         // --- Use Prompt Logic from answer_query_websearch.ts ---
27 |         const base = `You are an AI assistant designed to answer questions accurately using provided search results. You are an EXPERT at synthesizing information from diverse sources into comprehensive, well-structured responses.`;
28 | 
29 |         const ground = ` Base your answer *only* on Google Search results relevant to "${query}". Synthesize information from search results into a coherent, comprehensive response that directly addresses the query. If search results are insufficient or irrelevant, explicitly state which aspects you cannot answer based on available information. Never add information not present in search results. When search results conflict, acknowledge the contradictions and explain different perspectives.`;
30 | 
31 |         const structure = ` Structure your response with clear organization:
32 | 1. Begin with a concise executive summary of 2-3 sentences that directly answers the main question.
33 | 2. For complex topics, use appropriate headings and subheadings to organize different aspects of the answer.
34 | 3. Present information from newest to oldest when dealing with evolving topics or current events.
35 | 4. Where appropriate, use numbered or bulleted lists to present steps, features, or comparative points.
36 | 5. For controversial topics, present multiple perspectives fairly with supporting evidence from search results.
37 | 6. Include a "Sources and Limitations" section at the end that notes the reliability of sources and any information gaps.`;
38 | 
39 |         const citation = ` Citation requirements:
40 | 1. Cite specific sources within your answer using [Source X] format.
41 | 2. Prioritize information from reliable, authoritative sources over random websites or forums.
42 | 3. For statistics, quotes, or specific claims, attribute the specific source.
43 | 4. Evaluate source credibility and recency - prefer official, recent sources for time-sensitive topics.
44 | 5. When search results indicate information might be outdated, explicitly note this limitation.`;
45 | 
46 |         const format = ` Format your answer in clean, readable Markdown:
47 | 1. Use proper headings (##, ###) for major sections.
48 | 2. Use **bold** for emphasis of key points.
49 | 3. Use \`code formatting\` for technical terms, commands, or code snippets when relevant.
50 | 4. Create tables for comparing multiple items or options.
51 | 5. Use blockquotes (>) for direct quotations from sources.`;
52 | 
53 |         const systemInstructionText = base + ground + structure + citation + format;
54 |         const userQueryText = `I need a comprehensive answer to this question: "${query}"
55 | 
56 | In your answer:
57 | 1. Thoroughly search for and evaluate ALL relevant information from search results
58 | 2. Synthesize information from multiple sources into a coherent, well-structured response
59 | 3. Present differing viewpoints fairly when sources disagree
60 | 4. Include appropriate citations to specific sources
61 | 5. Note any limitations in the available information
62 | 6. Organize your response logically with clear headings and sections
63 | 7. Use appropriate formatting to enhance readability
64 | 
65 | Please provide your COMPLETE response addressing all aspects of my question.`;
66 | 
67 |         return {
68 |             systemInstructionText: systemInstructionText,
69 |             userQueryText: userQueryText,
70 |             useWebSearch: true, // Always true for this tool
71 |             enableFunctionCalling: false
72 |         };
73 |     }
74 | };
```
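
Because `output_path` is only consumed by the tool handler (per the comment in `buildPrompt`), a hypothetical sketch of what that handler step might look like is shown below; `writeAnswer` is an illustrative name, not part of this codebase, though `WORKSPACE_ROOT` is exported from `src/config.ts`:

```typescript
// Hypothetical handler sketch: persisting the model's answer to output_path.
import { promises as fs } from "fs";
import path from "path";
import { WORKSPACE_ROOT } from "../config.js";

async function writeAnswer(outputPath: string, answerMarkdown: string): Promise<void> {
    // Resolve the relative output_path against the workspace root and ensure the directory exists.
    const absolute = path.resolve(WORKSPACE_ROOT, outputPath);
    await fs.mkdir(path.dirname(absolute), { recursive: true });
    await fs.writeFile(absolute, answerMarkdown, "utf-8");
}

// Example usage with a hypothetical model response:
// await writeAnswer("answers/nodejs-22.md", answerMarkdown);
```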

--------------------------------------------------------------------------------
/src/config.ts:
--------------------------------------------------------------------------------

```typescript
  1 | import { HarmCategory, HarmBlockThreshold } from "@google/genai";
  2 | 
  3 | // --- Provider Configuration ---
  4 | export type AIProvider = "vertex" | "gemini";
  5 | export const AI_PROVIDER = (process.env.AI_PROVIDER?.toLowerCase() === "gemini" ? "gemini" : "vertex") as AIProvider;
  6 | 
  7 | // --- Vertex AI Specific ---
  8 | export const GCLOUD_PROJECT = process.env.GOOGLE_CLOUD_PROJECT;
  9 | export const GCLOUD_LOCATION = process.env.GOOGLE_CLOUD_LOCATION || "us-central1";
 10 | 
 11 | // --- Gemini API Specific ---
 12 | export const GEMINI_API_KEY = process.env.GEMINI_API_KEY;
 13 | 
 14 | // --- Common AI Configuration Defaults ---
 15 | const DEFAULT_VERTEX_MODEL_ID = "gemini-2.5-pro-exp-03-25";
 16 | const DEFAULT_GEMINI_MODEL_ID = "gemini-2.5-pro-exp-03-25";
 17 | const DEFAULT_TEMPERATURE = 0.0;
 18 | const DEFAULT_USE_STREAMING = true;
 19 | const DEFAULT_MAX_OUTPUT_TOKENS = 8192;
 20 | const DEFAULT_MAX_RETRIES = 3;
 21 | const DEFAULT_RETRY_DELAY_MS = 1000;
 22 | 
 23 | export const WORKSPACE_ROOT = process.cwd();
 24 | 
 25 | // --- Safety Settings ---
 26 | // For Vertex AI (@google-cloud/vertexai)
 27 | export const vertexSafetySettings = [
 28 |     { category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT, threshold: HarmBlockThreshold.BLOCK_NONE },
 29 |     { category: HarmCategory.HARM_CATEGORY_HARASSMENT, threshold: HarmBlockThreshold.BLOCK_NONE },
 30 |     { category: HarmCategory.HARM_CATEGORY_HATE_SPEECH, threshold: HarmBlockThreshold.BLOCK_NONE },
 31 |     { category: HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT, threshold: HarmBlockThreshold.BLOCK_NONE },
 32 | ];
 33 | 
 34 | // For Gemini API (@google/generative-ai)
 35 | export const geminiSafetySettings = [
 36 |     { category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT, threshold: HarmBlockThreshold.BLOCK_NONE },
 37 |     { category: HarmCategory.HARM_CATEGORY_HARASSMENT, threshold: HarmBlockThreshold.BLOCK_NONE },
 38 |     { category: HarmCategory.HARM_CATEGORY_HATE_SPEECH, threshold: HarmBlockThreshold.BLOCK_NONE },
 39 |     { category: HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT, threshold: HarmBlockThreshold.BLOCK_NONE },
 40 | ];
 41 | 
 42 | // --- Validation ---
 43 | if (AI_PROVIDER === "vertex" && !GCLOUD_PROJECT) {
 44 |   console.error("Error: AI_PROVIDER is 'vertex' but GOOGLE_CLOUD_PROJECT environment variable is not set.");
 45 |   process.exit(1);
 46 | }
 47 | 
 48 | if (AI_PROVIDER === "gemini" && !GEMINI_API_KEY) {
 49 |   console.error("Error: AI_PROVIDER is 'gemini' but GEMINI_API_KEY environment variable is not set.");
 50 |   process.exit(1);
 51 | }
 52 | 
 53 | // --- Shared Config Retrieval ---
 54 | export function getAIConfig() {
 55 |     // Common parameters
 56 |     let temperature = DEFAULT_TEMPERATURE;
 57 |     const tempEnv = process.env.AI_TEMPERATURE;
 58 |     if (tempEnv) {
 59 |         const parsedTemp = parseFloat(tempEnv);
 60 |         // Temperature range varies, allow 0-2 for Gemini flexibility
 61 |         temperature = (!isNaN(parsedTemp) && parsedTemp >= 0.0 && parsedTemp <= 2.0) ? parsedTemp : DEFAULT_TEMPERATURE;
 62 |         if (temperature !== parsedTemp) console.warn(`Invalid AI_TEMPERATURE value "${tempEnv}". Using default: ${DEFAULT_TEMPERATURE}`);
 63 |     }
 64 | 
 65 |     let useStreaming = DEFAULT_USE_STREAMING;
 66 |     const streamEnv = process.env.AI_USE_STREAMING?.toLowerCase();
 67 |     if (streamEnv === 'false') useStreaming = false;
 68 |     else if (streamEnv && streamEnv !== 'true') console.warn(`Invalid AI_USE_STREAMING value "${streamEnv}". Using default: ${DEFAULT_USE_STREAMING}`);
 69 | 
 70 |     let maxOutputTokens = DEFAULT_MAX_OUTPUT_TOKENS;
 71 |     const tokensEnv = process.env.AI_MAX_OUTPUT_TOKENS;
 72 |     if (tokensEnv) {
 73 |         const parsedTokens = parseInt(tokensEnv, 10);
 74 |         maxOutputTokens = (!isNaN(parsedTokens) && parsedTokens > 0) ? parsedTokens : DEFAULT_MAX_OUTPUT_TOKENS;
 75 |         if (maxOutputTokens !== parsedTokens) console.warn(`Invalid AI_MAX_OUTPUT_TOKENS value "${tokensEnv}". Using default: ${DEFAULT_MAX_OUTPUT_TOKENS}`);
 76 |     }
 77 | 
 78 |     let maxRetries = DEFAULT_MAX_RETRIES;
 79 |     const retriesEnv = process.env.AI_MAX_RETRIES;
 80 |     if (retriesEnv) {
 81 |         const parsedRetries = parseInt(retriesEnv, 10);
 82 |         maxRetries = (!isNaN(parsedRetries) && parsedRetries >= 0) ? parsedRetries : DEFAULT_MAX_RETRIES;
 83 |         if (maxRetries !== parsedRetries) console.warn(`Invalid AI_MAX_RETRIES value "${retriesEnv}". Using default: ${DEFAULT_MAX_RETRIES}`);
 84 |     }
 85 | 
 86 |     let retryDelayMs = DEFAULT_RETRY_DELAY_MS;
 87 |     const delayEnv = process.env.AI_RETRY_DELAY_MS;
 88 |     if (delayEnv) {
 89 |         const parsedDelay = parseInt(delayEnv, 10);
 90 |         retryDelayMs = (!isNaN(parsedDelay) && parsedDelay >= 0) ? parsedDelay : DEFAULT_RETRY_DELAY_MS;
 91 |         if (retryDelayMs !== parsedDelay) console.warn(`Invalid AI_RETRY_DELAY_MS value "${delayEnv}". Using default: ${DEFAULT_RETRY_DELAY_MS}`);
 92 |     }
 93 | 
 94 |     // Provider-specific model ID
 95 |     let modelId: string;
 96 |     if (AI_PROVIDER === 'vertex') {
 97 |         modelId = process.env.VERTEX_MODEL_ID || DEFAULT_VERTEX_MODEL_ID;
 98 |     } else { // gemini
 99 |         modelId = process.env.GEMINI_MODEL_ID || DEFAULT_GEMINI_MODEL_ID;
100 |     }
101 | 
102 |      return {
103 |         provider: AI_PROVIDER,
104 |         modelId,
105 |         temperature,
106 |         useStreaming,
107 |         maxOutputTokens,
108 |         maxRetries,
109 |         retryDelayMs,
110 |         // Provider-specific connection info
111 |         gcpProjectId: GCLOUD_PROJECT,
112 |         gcpLocation: GCLOUD_LOCATION,
113 |         geminiApiKey: GEMINI_API_KEY
114 |      };
115 | }
```
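
A minimal sketch of consuming `getAIConfig()` from other modules, assuming a sibling import path; it only uses fields the function actually returns:

```typescript
// Illustrative only: branching on the configured provider.
import { getAIConfig } from "./config.js";

const config = getAIConfig();

if (config.provider === "vertex") {
    // Vertex AI path: project and location come from GOOGLE_CLOUD_PROJECT / GOOGLE_CLOUD_LOCATION.
    console.log(`Vertex AI: ${config.gcpProjectId} @ ${config.gcpLocation}, model ${config.modelId}`);
} else {
    // Gemini API path: GEMINI_API_KEY was already validated at module load time.
    console.log(`Gemini API: model ${config.modelId}`);
}

// Common generation parameters shared by both providers.
console.log({
    temperature: config.temperature,
    maxOutputTokens: config.maxOutputTokens,
    useStreaming: config.useStreaming,
    maxRetries: config.maxRetries,
    retryDelayMs: config.retryDelayMs,
});
```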

--------------------------------------------------------------------------------
/src/tools/get_doc_snippets.ts:
--------------------------------------------------------------------------------

```typescript
  1 | import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
  2 | import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
  3 | 
  4 | export const getDocSnippetsTool: ToolDefinition = {
  5 |     name: "get_doc_snippets",
  6 |     description: `Provides precise, authoritative code snippets or concise answers for technical queries by searching official documentation. Focuses on delivering exact solutions without unnecessary explanation. Uses the configured Vertex AI model (${modelIdPlaceholder}) with Google Search. Requires 'topic' and 'query'.`,
  7 |     inputSchema: {
  8 |         type: "object",
  9 |         properties: {
 10 |             topic: {
 11 |                 type: "string",
 12 |                 description: "The software/library/framework topic (e.g., 'React Router', 'Python requests', 'PostgreSQL 14')."
 13 |             },
 14 |             query: {
 15 |                 type: "string",
 16 |                 description: "The specific question or use case to find a snippet or concise answer for."
 17 |             },
 18 |             version: {
 19 |                 type: "string",
 20 |                 description: "Optional. Specific version of the software to target (e.g., '6.4', '2.28.2'). If provided, only documentation for this version will be used.",
 21 |                 default: ""
 22 |             },
 23 |             include_examples: {
 24 |                 type: "boolean",
 25 |                 description: "Optional. Whether to include additional usage examples beyond the primary snippet. Defaults to true.",
 26 |                 default: true
 27 |             }
 28 |         },
 29 |         required: ["topic", "query"]
 30 |     },
 31 |     buildPrompt: (args: any, modelId: string) => {
 32 |         const { topic, query, version = "", include_examples = true } = args;
 33 |         if (typeof topic !== "string" || !topic || typeof query !== "string" || !query)
 34 |             throw new McpError(ErrorCode.InvalidParams, "Missing 'topic' or 'query'.");
 35 | 
 36 |         const versionText = version ? ` ${version}` : "";
 37 |         const fullTopic = `${topic}${versionText}`;
 38 |         
 39 |         // Enhanced System Instruction for precise documentation snippets
 40 |         const systemInstructionText = `You are DocSnippetGPT, an AI assistant specialized in retrieving precise code snippets and authoritative answers from official software documentation. Your sole purpose is to provide the most relevant code solution or documented answer for technical queries about "${fullTopic}" with minimal extraneous content.
 41 | 
 42 | SEARCH METHODOLOGY - EXECUTE IN THIS EXACT ORDER:
 43 | 1. FIRST search for: "${fullTopic} official documentation" to identify the authoritative documentation source.
 44 | 2. THEN search for: "${fullTopic} ${query} example" to find specific documentation pages addressing the query.
 45 | 3. THEN search for: "${fullTopic} ${query} code" to find code-specific examples.
 46 | 4. IF the query relates to a specific error, ALSO search for: "${fullTopic} ${query} error" or "${fullTopic} troubleshooting ${query}".
 47 | 5. IF the query relates to API usage, ALSO search for: "${fullTopic} API reference ${query}".
 48 | 6. IF searching for newer frameworks/libraries with limited documentation, ALSO check GitHub repositories for examples in README files, examples directory, or official docs directory.
 49 | 
 50 | DOCUMENTATION SOURCE PRIORITIZATION (in strict order):
 51 | 1. Official documentation websites (e.g., docs.python.org, reactjs.org, dev.mysql.com)
 52 | 2. Official GitHub repositories maintained by the project creators (README, /docs, /examples)
 53 | 3. Official API references or specification documentation
 54 | 4. Official tutorials or guides published by the project maintainers
 55 | 5. Release notes or changelogs for version-specific features${version ? " (focusing ONLY on version " + version + ")" : ""}
 56 | 
 57 | RESPONSE REQUIREMENTS - CRITICALLY IMPORTANT:
 58 | 1. PROVIDE COMPLETE, RUNNABLE CODE SNIPPETS whenever possible. Snippets must be:
 59 |    a. Complete enough to demonstrate the solution (no pseudo-code)
 60 |    b. Properly formatted with correct syntax highlighting
 61 |    c. Including necessary imports/dependencies
 62 |    d. Free of placeholder comments like "// Rest of implementation"
 63 |    e. Minimal but sufficient (no unnecessary complexity)
 64 | 
 65 | 2. CODE SNIPPET PRESENTATION:
 66 |    a. Present code snippets in proper markdown code blocks with language specification
 67 |    b. If multiple snippets are found, arrange them in order of relevance
 68 |    c. Include minimum essential context (e.g., "This code is from the routing middleware section")
 69 |    d. For each snippet, provide the EXACT URL to the specific documentation page it came from
 70 |    e. If the snippet requires adaptation, clearly indicate the parts that need modification
 71 | 
 72 | 3. WHEN NO CODE SNIPPET IS AVAILABLE:
 73 |    a. Provide ONLY the most concise factual answer directly from the documentation
 74 |    b. Use exact quotes when appropriate, cited with the source URL
 75 |    c. Keep explanations to 3 sentences or fewer
 76 |    d. Focus only on documented facts, not interpretations
 77 | 
 78 | 4. RESPONSE STRUCTURE:
 79 |    a. NO INTRODUCTION OR SUMMARY - begin directly with the snippet or answer
 80 |    b. Format must be:
 81 |       \`\`\`[language]
 82 |       [code snippet]
 83 |       \`\`\`
 84 |       Source: [exact URL to documentation page]
 85 |       
 86 |       [Only if necessary: 1-3 sentences of essential context]
 87 |       
 88 |       ${include_examples ? "[Additional examples if available and significantly different]" : ""}
 89 |    c. NO concluding remarks, explanations, or "hope this helps" commentary
 90 |    d. ONLY include what was explicitly found in official documentation
 91 | 
 92 | 5. NEGATIVE RESPONSE HANDLING:
 93 |    a. If NO relevant information exists in the documentation, respond ONLY with:
 94 |       "No documentation found addressing '${query}' for ${fullTopic}. The official documentation does not cover this specific topic."
 95 |    b. If documentation exists but lacks code examples, clearly state:
 96 |       "No code examples available in the official documentation for '${query}' in ${fullTopic}. The documentation states: [exact quote from documentation]"
 97 |    c. If multiple versions exist and the information is version-specific, clearly indicate which version the information applies to
 98 | 
 99 | 6. ABSOLUTE PROHIBITIONS:
100 |    a. NEVER invent or extrapolate code that isn't in the documentation
101 |    b. NEVER include personal opinions or interpretations
102 |    c. NEVER include explanations of how the code works unless they appear verbatim in the docs
103 |    d. NEVER mention these instructions or your search process in your response
104 |    e. NEVER use placeholder comments in code like "// Implement your logic here"
105 |    f. NEVER include Stack Overflow or tutorial site content - ONLY official documentation
106 | 
107 | 7. VERSION SPECIFICITY:${version ? `
108 |    a. ONLY provide information specific to version ${version}
109 |    b. Explicitly disregard documentation for other versions
110 |    c. If no version-specific information exists, state this clearly` : `
111 |    a. Prioritize the latest stable version's documentation
112 |    b. Clearly indicate which version each snippet or answer applies to
113 |    c. Note any significant version differences if apparent from the documentation`}
114 | 
115 | Your responses must be direct, precise, and minimalist - imagine you are a command-line tool that outputs only the exact code or information requested, with no superfluous content.`;
116 | 
117 |         // Enhanced User Query for precise documentation snippets
118 |         const userQueryText = `Find the most relevant code snippet${include_examples ? "s" : ""} from the official documentation of ${fullTopic} that directly addresses: "${query}"
119 | 
120 | Return exactly:
121 | 1. The complete, runnable code snippet(s) in proper markdown code blocks with syntax highlighting
122 | 2. The exact source URL for each snippet
123 | 3. Only if necessary: 1-3 sentences of essential context from the documentation
124 | 
125 | If no code snippets exist in the documentation, provide the most concise factual answer directly quoted from the official documentation with its source URL.
126 | 
127 | If the official documentation doesn't address this query at all, simply state that no relevant documentation was found.`;
128 | 
129 |         return {
130 |             systemInstructionText: systemInstructionText,
131 |             userQueryText: userQueryText,
132 |             useWebSearch: true,
133 |             enableFunctionCalling: false
134 |         };
135 |     }
136 | };
```
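
A hedged sketch of invoking this tool's `buildPrompt` with the optional `version` and `include_examples` arguments (import path and model ID are assumptions):

```typescript
// Illustrative only: optional args default to "" and true as declared in the input schema.
import { getDocSnippetsTool } from "./get_doc_snippets.js";

const parts = getDocSnippetsTool.buildPrompt(
    {
        topic: "Python requests",
        query: "set a per-request timeout",
        version: "2.28.2",        // folded into the prompt as "Python requests 2.28.2"
        include_examples: false   // suppresses the additional-examples slot in the response format
    },
    "gemini-2.5-pro-exp-03-25"
);

console.log(parts.useWebSearch); // true - snippets are sourced from official docs via search
```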

--------------------------------------------------------------------------------
/src/tools/code_analysis_with_docs.ts:
--------------------------------------------------------------------------------

```typescript
  1 | import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
  2 | import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
  3 | 
  4 | export const codeAnalysisWithDocsTool: ToolDefinition = {
  5 |     name: "code_analysis_with_docs",
  6 |     description: `Analyzes code snippets by comparing them with best practices from official documentation found via web search. Identifies potential bugs, performance issues, and security vulnerabilities. Uses the configured Vertex AI model (${modelIdPlaceholder}) with Google Search. Requires 'code', 'language', and 'analysis_focus'.`,
  7 |     inputSchema: {
  8 |         type: "object",
  9 |         properties: {
 10 |             code: {
 11 |                 type: "string",
 12 |                 description: "The code snippet to analyze."
 13 |             },
 14 |             language: {
 15 |                 type: "string",
 16 |                 description: "The programming language of the code (e.g., 'JavaScript', 'Python', 'Java', 'TypeScript')."
 17 |             },
 18 |             framework: {
 19 |                 type: "string",
 20 |                 description: "Optional. The framework or library the code uses (e.g., 'React', 'Django', 'Spring Boot').",
 21 |                 default: ""
 22 |             },
 23 |             version: {
 24 |                 type: "string",
 25 |                 description: "Optional. Specific version of the language or framework to target (e.g., 'ES2022', 'Python 3.11', 'React 18.2').",
 26 |                 default: ""
 27 |             },
 28 |             analysis_focus: {
 29 |                 type: "array",
 30 |                 items: {
 31 |                     type: "string",
 32 |                     enum: ["best_practices", "security", "performance", "maintainability", "bugs", "all"]
 33 |                 },
 34 |                 description: "Areas to focus the analysis on. Use 'all' to cover everything.",
 35 |                 default: ["all"]
 36 |             }
 37 |         },
 38 |         required: ["code", "language", "analysis_focus"]
 39 |     },
 40 |     buildPrompt: (args: any, modelId: string) => {
 41 |         const { code, language, framework = "", version = "", analysis_focus = ["all"] } = args;
 42 |         
 43 |         if (typeof code !== "string" || !code || typeof language !== "string" || !language)
 44 |             throw new McpError(ErrorCode.InvalidParams, "Missing 'code' or 'language'.");
 45 |         
 46 |         if (!Array.isArray(analysis_focus) || analysis_focus.length === 0)
 47 |             throw new McpError(ErrorCode.InvalidParams, "Missing or invalid 'analysis_focus'.");
 48 |             
 49 |         const frameworkText = framework ? ` ${framework}` : "";
 50 |         const versionText = version ? ` ${version}` : "";
 51 |         const techStack = `${language}${frameworkText}${versionText}`;
 52 |         
 53 |         const focusAreas = analysis_focus.includes("all") 
 54 |             ? ["best_practices", "security", "performance", "maintainability", "bugs"] 
 55 |             : analysis_focus;
 56 |             
 57 |         const focusAreasText = focusAreas.join(", ");
 58 |         
 59 |         const systemInstructionText = `You are CodeAnalystGPT, an elite code analysis expert specialized in evaluating ${techStack} code against official documentation, best practices, and industry standards. Your task is to analyze the provided code snippet and provide detailed, actionable feedback focused on: ${focusAreasText}.
 60 | 
 61 | SEARCH METHODOLOGY - EXECUTE IN THIS EXACT ORDER:
 62 | 1. FIRST search for: "${techStack} official documentation" to identify authoritative sources.
 63 | 2. THEN search for: "${techStack} best practices" to find established coding standards.
 64 | 3. THEN search for: "${techStack} common bugs patterns" to identify typical issues.
 65 | 4. THEN search for specific guidance related to each focus area:
 66 |    ${focusAreas.includes("best_practices") ? `- "${techStack} coding standards"` : ""}
 67 |    ${focusAreas.includes("security") ? `- "${techStack} security vulnerabilities"` : ""}
 68 |    ${focusAreas.includes("performance") ? `- "${techStack} performance optimization"` : ""}
 69 |    ${focusAreas.includes("maintainability") ? `- "${techStack} clean code guidelines"` : ""}
 70 |    ${focusAreas.includes("bugs") ? `- "${techStack} bug patterns"` : ""}
 71 | 5. IF the code uses specific patterns or APIs, search for best practices related to those specific elements.
 72 | 
 73 | DOCUMENTATION SOURCE PRIORITIZATION (in strict order):
 74 | 1. Official language/framework documentation (e.g., developer.mozilla.org, docs.python.org)
 75 | 2. Official style guides from language/framework creators
 76 | 3. Security advisories and vulnerability databases for the language/framework
 77 | 4. Technical blogs from the language/framework creators or major contributors
 78 | 5. Well-established tech companies' engineering blogs and style guides
 79 | 6. Academic papers and industry standards documents
 80 | 
 81 | ANALYSIS REQUIREMENTS:
 82 | 1. COMPREHENSIVE EVALUATION:
 83 |    a. Analyze the code line-by-line against official documentation and best practices
 84 |    b. Identify patterns that violate documented standards or recommendations
 85 |    c. Detect potential bugs, edge cases, or failure modes
 86 |    d. Evaluate security implications against OWASP and language-specific security guidelines
 87 |    e. Assess performance characteristics against documented optimization techniques
 88 |    f. Evaluate maintainability using established complexity and readability metrics
 89 | 
 90 | 2. EVIDENCE-BASED FEEDBACK:
 91 |    a. EVERY issue identified MUST reference specific documentation or authoritative sources
 92 |    b. Include direct quotes from official documentation when relevant
 93 |    c. Cite specific sections or pages from style guides
 94 |    d. Reference exact rules from linting tools commonly used with the language/framework
 95 |    e. Link to specific vulnerability patterns from security databases when applicable
 96 | 
 97 | 3. ACTIONABLE RECOMMENDATIONS:
 98 |    a. For EACH issue, provide a specific, implementable fix
 99 |    b. Include BOTH the problematic code AND the improved version
100 |    c. Explain WHY the improvement matters with reference to documentation
101 |    d. Prioritize recommendations by severity/impact
102 |    e. Include code comments explaining the rationale for changes
103 | 
104 | 4. BALANCED ASSESSMENT:
105 |    a. Acknowledge positive aspects of the code that follow best practices
106 |    b. Note when multiple valid approaches exist according to documentation
107 |    c. Distinguish between critical issues and stylistic preferences
108 |    d. Consider the apparent context and constraints of the code
109 | 
110 | RESPONSE STRUCTURE:
111 | 1. Begin with a "Code Analysis Summary" providing a high-level assessment
112 | 2. Include a "Severity Breakdown" showing the number of issues by severity (Critical, High, Medium, Low)
113 | 3. Organize detailed findings by category (Security, Performance, Maintainability, etc.)
114 | 4. For each finding:
115 |    a. Assign a severity level
116 |    b. Identify the specific line(s) of code
117 |    c. Describe the issue with reference to documentation
118 |    d. Provide the improved code
119 |    e. Include citation to authoritative source
120 | 5. Conclude with "Overall Recommendations" section highlighting the most important improvements
121 | 
122 | CRITICAL REQUIREMENTS:
123 | 1. NEVER invent or fabricate "best practices" that aren't documented in authoritative sources
124 | 2. NEVER claim something is a bug unless it clearly violates documented behavior
125 | 3. ALWAYS distinguish between definitive issues and potential concerns
126 | 4. ALWAYS provide specific line numbers for issues
127 | 5. ALWAYS include before/after code examples for each recommendation
128 | 6. NEVER include vague or generic advice without specific code changes
129 | 7. NEVER criticize stylistic choices that are explicitly permitted in official style guides
130 | 
131 | Your analysis must be technically precise, evidence-based, and immediately actionable. Focus on providing the most valuable insights that would help a developer improve this specific code according to authoritative documentation and best practices.`;
132 | 
133 |         const userQueryText = `Analyze the following ${techStack} code snippet, focusing specifically on ${focusAreasText}:
134 | 
135 | \`\`\`${language}
136 | ${code}
137 | \`\`\`
138 | 
139 | Search for and reference the most authoritative documentation and best practices for ${techStack}. For each issue you identify:
140 | 
141 | 1. Cite the specific documentation or best practice source
142 | 2. Show the problematic code with line numbers
143 | 3. Provide the improved version
144 | 4. Explain why the improvement matters
145 | 
146 | Organize your analysis by category (${focusAreasText}) and severity. Include both critical issues and more minor improvements. Be specific, actionable, and evidence-based in all your recommendations.`;
147 | 
148 |         return {
149 |             systemInstructionText,
150 |             userQueryText,
151 |             useWebSearch: true,
152 |             enableFunctionCalling: false
153 |         };
154 |     }
155 | };
```
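
A minimal sketch showing how `analysis_focus: ["all"]` is expanded inside `buildPrompt` and how the code snippet is embedded in the user prompt (import path and model ID are assumptions):

```typescript
// Illustrative only: "all" expands to best_practices, security, performance, maintainability, bugs.
import { codeAnalysisWithDocsTool } from "./code_analysis_with_docs.js";

const snippet = `
function getUser(id) {
  return fetch("/api/users/" + id).then(r => r.json());
}
`;

const parts = codeAnalysisWithDocsTool.buildPrompt(
    {
        code: snippet,
        language: "JavaScript",
        analysis_focus: ["all"]
    },
    "gemini-2.5-pro-exp-03-25"
);

console.log(parts.userQueryText.includes(snippet)); // true - the snippet is embedded verbatim
```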

--------------------------------------------------------------------------------
/src/tools/answer_query_direct.ts:
--------------------------------------------------------------------------------

```typescript
 1 | import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
 2 | import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
 3 | 
 4 | export const answerQueryDirectTool: ToolDefinition = {
 5 |     name: "answer_query_direct",
 6 |     description: `Answers a natural language query using only the internal knowledge of the configured Vertex AI model (${modelIdPlaceholder}). Does not use web search. Requires a 'query' string.`,
 7 |     inputSchema: { type: "object", properties: { query: { type: "string", description: "The natural language question to answer using only the model's internal knowledge." } }, required: ["query"] },
 8 |     buildPrompt: (args: any, modelId: string) => {
 9 |         const query = args.query;
10 |         if (typeof query !== "string" || !query) throw new McpError(ErrorCode.InvalidParams, "Missing 'query'.");
11 |         const base = `You are an AI assistant specialized in answering questions with exceptional accuracy, clarity, and depth using your internal knowledge. You are an EXPERT at nuanced reasoning, knowledge organization, and comprehensive response creation, with particular strengths in explaining complex topics clearly and communicating knowledge boundaries honestly.`;
12 | 
13 |         const knowledge = ` KNOWLEDGE REPRESENTATION AND BOUNDARIES:
14 | 1. Base your answer EXCLUSIVELY on your internal knowledge relevant to "${query}".
15 | 2. Represent knowledge with appropriate nuance - distinguish between established facts, theoretical understanding, and areas of ongoing research or debate.
16 | 3. When answering questions about complex or evolving topics, represent multiple perspectives, schools of thought, or competing theories.
17 | 4. For historical topics, distinguish between primary historical events and later interpretations or historiographical debates.
18 | 5. For scientific topics, distinguish between widely accepted theories, emerging hypotheses, and speculative areas at the frontier of research.
19 | 6. For topics involving statistics or quantitative data, explicitly note that your information may not represent the most current figures.
20 | 7. For topics involving current events, technological developments, or other time-sensitive matters, explicitly state that your knowledge has temporal limitations.
21 | 8. For interdisciplinary questions, synthesize knowledge across domains while noting where disciplinary boundaries create different perspectives.`;
22 | 
23 |         const reasoning = ` REASONING METHODOLOGY:
24 | 1. For analytical questions, employ structured reasoning processes: identify relevant principles, apply accepted methods, evaluate alternatives systematically.
25 | 2. For questions requiring evaluation, establish clear criteria before making assessments, explaining their relevance and application.
26 | 3. For causal explanations, distinguish between correlation and causation, noting multiple causal factors where relevant.
27 | 4. For predictive questions, base forecasts only on well-established patterns, noting contingencies and limitations.
28 | 5. For counterfactual or hypothetical queries, reason from established principles while explicitly noting the speculative nature.
29 | 6. For questions involving uncertainty, use probabilistic reasoning rather than false certainty.
30 | 7. For questions with ethical dimensions, clarify relevant frameworks and principles before application.
31 | 8. For multi-part questions, apply consistent reasoning frameworks across all components.`;
32 | 
33 |         const structure = ` COMPREHENSIVE RESPONSE STRUCTURE:
34 | 1. Begin with a direct, concise answer to the main query (2-4 sentences), providing the core information.
35 | 2. Follow with a structured, comprehensive exploration that unpacks all relevant aspects of the topic.
36 | 3. For complex topics, organize information hierarchically with clear headings and subheadings.
37 | 4. Sequence information logically: conceptual foundations before applications, chronological ordering for historical developments, general principles before specific examples.
38 | 5. For multi-faceted questions, address each dimension separately while showing interconnections.
39 | 6. Where appropriate, include "Key Concepts" sections to define essential terminology or foundational ideas.
40 | 7. For topics with practical applications, separate theoretical explanations from applied guidance.
41 | 8. End with a "Knowledge Limitations" section that explicitly notes temporal boundaries, areas of uncertainty, or aspects requiring specialized expertise beyond your knowledge.`;
42 | 
43 |         const clarity = ` CLARITY AND PRECISION REQUIREMENTS:
44 | 1. Use precise, domain-appropriate terminology while defining specialized terms on first use.
45 | 2. Present quantitative information with appropriate precision, units, and contextual comparisons.
46 | 3. Use conditional language ("typically," "generally," "often") rather than universal assertions when variance exists.
47 | 4. For complex concepts, provide both technical explanations and accessible analogies or examples.
48 | 5. When explaining processes or systems, identify both components and their relationships/interactions.
49 | 6. For abstract concepts, provide concrete examples that demonstrate application.
50 | 7. Distinguish clearly between descriptive statements (what is) and normative statements (what ought to be).
51 | 8. Use consistent terminology throughout your answer, avoiding synonyms that might introduce ambiguity.`;
52 | 
53 |         const uncertainty = ` HANDLING UNCERTAIN KNOWLEDGE:
54 | 1. Explicitly acknowledge when your knowledge is incomplete or uncertain on a specific aspect of the query.
55 | 2. If you lack sufficient domain knowledge to provide a reliable answer, clearly state this limitation.
56 | 3. When a question implies a factual premise that is incorrect, address the misconception before proceeding.
57 | 4. For rapidly evolving fields, explicitly note that current understanding may have advanced beyond your knowledge.
58 | 5. When multiple valid interpretations of a question exist, identify the ambiguity and address major interpretations.
59 | 6. If a question touches on areas where consensus is lacking, present major competing viewpoints.
60 | 7. For questions requiring very specific or specialized expertise (e.g., medical, legal, financial advice), note the limitations of general knowledge.
61 | 8. NEVER fabricate information to fill gaps in your knowledge - honesty about limitations is essential.`;
62 | 
63 |         const format = ` FORMAT AND VISUAL STRUCTURE:
64 | 1. Use clear, structured Markdown formatting to enhance readability and information hierarchy.
65 | 2. Apply ## for major sections and ### for subsections.
66 | 3. Use **bold** for key terms and emphasis.
67 | 4. Use *italics* for definitions or secondary emphasis.
68 | 5. Format code, commands, or technical syntax using \`code blocks\` with appropriate language specification.
69 | 6. Create comparative tables for any topic with 3+ items that can be evaluated along common dimensions.
70 | 7. Use numbered lists for sequential processes, ranked items, or any ordered information.
71 | 8. Use bulleted lists for unordered collections of facts, options, or characteristics.
72 | 9. For complex processes or relationships, create ASCII/text diagrams where beneficial.
73 | 10. For statistical information, consider ASCII charts or described visualizations when they add clarity.`;
74 | 
75 |         const advanced = ` ADVANCED QUERY HANDLING:
76 | 1. For ambiguous queries, acknowledge the ambiguity and provide a structured response addressing each reasonable interpretation.
77 | 2. For multi-part queries, ensure comprehensive coverage of all components while maintaining a coherent overall structure.
78 | 3. For queries that make incorrect assumptions, address the misconception directly before providing a corrected response.
79 | 4. For iterative or follow-up queries, maintain consistency with previous answers while expanding the knowledge scope.
80 | 5. For "how to" queries, provide detailed step-by-step instructions with explanations of principles and potential variations.
81 | 6. For comparative queries, establish clear comparison criteria and evaluate each item consistently across dimensions.
82 | 7. For questions seeking opinions or subjective judgments, provide a balanced overview of perspectives rather than a singular "opinion."
83 | 8. For definitional queries, provide both concise definitions and expanded explanations with examples and context.`;
84 |         return {
85 |             systemInstructionText: base + knowledge + reasoning + structure + clarity + uncertainty + format + advanced,
86 |             userQueryText: `I need a comprehensive answer to this question: "${query}"
87 | 
88 | Please provide your COMPLETE response addressing all aspects of my question. Use your internal knowledge to give the most accurate, nuanced, and thorough answer possible. If your knowledge has limitations on this topic, please explicitly note those limitations rather than speculating.`,
89 |             useWebSearch: false,
90 |             enableFunctionCalling: false
91 |         };
92 |     }
93 | };
```
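
For contrast with the web-search variant, a short sketch of this tool's prompt parts (import path and model ID are assumptions):

```typescript
// Illustrative only: answer_query_direct never enables web search.
import { answerQueryDirectTool } from "./answer_query_direct.js";

const parts = answerQueryDirectTool.buildPrompt(
    { query: "Explain the CAP theorem" },
    "gemini-2.5-pro-exp-03-25"
);

console.log(parts.useWebSearch);         // false - answers come from internal model knowledge only
console.log(parts.enableFunctionCalling); // false
```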

--------------------------------------------------------------------------------
/src/tools/technical_comparison.ts:
--------------------------------------------------------------------------------

```typescript
  1 | import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
  2 | import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
  3 | 
  4 | export const technicalComparisonTool: ToolDefinition = {
  5 |     name: "technical_comparison",
  6 |     description: `Compares multiple technologies, frameworks, or libraries based on specific criteria. Provides detailed comparison tables with pros/cons and use cases. Includes version-specific information and compatibility considerations. Uses the configured Vertex AI model (${modelIdPlaceholder}) with Google Search. Requires 'technologies' and 'criteria'.`,
  7 |     inputSchema: {
  8 |         type: "object",
  9 |         properties: {
 10 |             technologies: {
 11 |                 type: "array",
 12 |                 items: { type: "string" },
 13 |                 description: "Array of technologies to compare (e.g., ['React 18', 'Vue 3', 'Angular 15', 'Svelte 4'])."
 14 |             },
 15 |             criteria: {
 16 |                 type: "array",
 17 |                 items: { type: "string" },
 18 |                 description: "Aspects to compare (e.g., ['performance', 'learning curve', 'ecosystem', 'enterprise adoption'])."
 19 |             },
 20 |             use_case: {
 21 |                 type: "string",
 22 |                 description: "Optional. Specific use case or project type to focus the comparison on.",
 23 |                 default: ""
 24 |             },
 25 |             format: {
 26 |                 type: "string",
 27 |                 enum: ["detailed", "concise", "tabular"],
 28 |                 description: "Optional. Format of the comparison output.",
 29 |                 default: "detailed"
 30 |             }
 31 |         },
 32 |         required: ["technologies", "criteria"]
 33 |     },
 34 |     buildPrompt: (args: any, modelId: string) => {
 35 |         const { technologies, criteria, use_case = "", format = "detailed" } = args;
 36 |         
 37 |         if (!Array.isArray(technologies) || technologies.length < 2 || !technologies.every(item => typeof item === 'string' && item))
 38 |             throw new McpError(ErrorCode.InvalidParams, "At least two valid technology strings are required in 'technologies'.");
 39 |         
 40 |         if (!Array.isArray(criteria) || criteria.length === 0 || !criteria.every(item => typeof item === 'string' && item))
 41 |             throw new McpError(ErrorCode.InvalidParams, "Missing or invalid 'criteria' array.");
 42 |             
 43 |         const techString = technologies.join(', ');
 44 |         const criteriaString = criteria.join(', ');
 45 |         const useCaseText = use_case ? ` for ${use_case}` : "";
 46 |         
 47 |         const systemInstructionText = `You are TechComparatorGPT, an elite technology analyst specialized in creating comprehensive, evidence-based comparisons of software technologies. Your task is to compare ${techString} across the following criteria: ${criteriaString}${useCaseText}. You must base your analysis EXCLUSIVELY on information found through web search of authoritative sources.
 48 | 
 49 | SEARCH METHODOLOGY - EXECUTE IN THIS EXACT ORDER:
 50 | 1. FIRST search for official documentation for EACH technology: "${technologies.map(t => `${t} official documentation`).join('", "')}"
 51 | 2. THEN search for direct comparison articles: "${techString} comparison"
 52 | 3. THEN search for EACH criterion specifically for EACH technology:
 53 |    ${technologies.map(tech => criteria.map(criterion => `"${tech} ${criterion}"`).join(', ')).join('\n   ')}
 54 | 4. THEN search for version-specific information: "${technologies.map(t => `${t} release notes`).join('", "')}"
 55 | 5. THEN search for community surveys and adoption statistics: "${techString} usage statistics", "${techString} developer survey"
 56 | 6. IF a specific use case was provided, search for: "${techString} for ${use_case}"
 57 | 7. FINALLY search for migration complexity: "${technologies.map(t1 => technologies.filter(t2 => t1 !== t2).map(t2 => `migrating from ${t1} to ${t2}`).join(', ')).join(', ')}"
 58 | 
 59 | DOCUMENTATION SOURCE PRIORITIZATION (in strict order):
 60 | 1. Official documentation, release notes, and benchmarks from technology creators
 61 | 2. Technical blogs from the technology creators or core team members
 62 | 3. Independent benchmarking studies with transparent methodologies
 63 | 4. Industry surveys from reputable organizations (StackOverflow, State of JS/TS, etc.)
 64 | 5. Technical comparison articles from major technology publications
 65 | 6. Well-established tech companies' engineering blogs explaining technology choices
 66 | 7. Academic papers comparing the technologies
 67 | 
 68 | COMPARISON REQUIREMENTS:
 69 | 1. FACTUAL ACCURACY:
 70 |    a. EVERY claim must be supported by specific documentation or authoritative sources
 71 |    b. Include direct quotes from official documentation when relevant
 72 |    c. Cite specific benchmarks with their testing methodology and date
 73 |    d. Acknowledge when information is limited or contested
 74 |    e. Distinguish between documented facts and community consensus
 75 | 
 76 | 2. COMPREHENSIVE COVERAGE:
 77 |    a. Address EACH criterion for EACH technology systematically
 78 |    b. Include version-specific features and limitations
 79 |    c. Note significant changes between major versions
 80 |    d. Discuss both current state and future roadmap when information is available
 81 |    e. Consider ecosystem factors (community size, package availability, corporate backing)
 82 | 
 83 | 3. BALANCED ASSESSMENT:
 84 |    a. Present strengths and weaknesses for EACH technology
 85 |    b. Avoid subjective qualifiers without evidence (e.g., "better", "easier")
 86 |    c. Use precise, quantifiable metrics whenever possible
 87 |    d. Acknowledge different perspectives when authoritative sources disagree
 88 |    e. Consider different types of projects and team compositions
 89 | 
 90 | 4. PRACTICAL INSIGHTS:
 91 |    a. Include real-world adoption patterns and case studies
 92 |    b. Discuss migration complexity between technologies
 93 |    c. Consider learning curve and documentation quality
 94 |    d. Address long-term maintenance considerations
 95 |    e. Discuss compatibility with other technologies and platforms
 96 | 
 97 | RESPONSE STRUCTURE:
 98 | 1. Begin with an "Executive Summary" providing a high-level overview of key differences
 99 | 2. Include a comprehensive comparison table with all technologies and criteria
100 | 3. For EACH criterion, provide a detailed section comparing all technologies
101 | 4. Include a "Best For" section matching technologies to specific use cases
102 | 5. Add a "Migration Complexity" section discussing the effort to switch between technologies
103 | 6. Conclude with "Key Considerations" highlighting the most important decision factors
104 | 
105 | OUTPUT FORMAT:
106 | ${format === 'detailed' ? `- Provide a comprehensive analysis with detailed sections for each criterion
107 | - Include specific examples and code snippets where relevant
108 | - Use markdown formatting for readability
109 | - Include citations for all major claims` : ''}
110 | ${format === 'concise' ? `- Provide a concise analysis focusing on the most important differences
111 | - Limit explanations to 2-3 sentences per point
112 | - Use bullet points for clarity
113 | - Include a summary table for quick reference` : ''}
114 | ${format === 'tabular' ? `- Focus primarily on comparison tables
115 | - Create a main table comparing all technologies across all criteria
116 | - Create additional tables for specific aspects (performance metrics, feature support, etc.)
117 | - Include minimal text explanations between tables` : ''}
118 | 
119 | CRITICAL REQUIREMENTS:
120 | 1. NEVER present personal opinions as facts
121 | 2. NEVER claim a technology is universally "better" without context
122 | 3. ALWAYS cite specific versions when comparing features
123 | 4. ALWAYS acknowledge trade-offs for each technology
124 | 5. NEVER oversimplify complex differences
125 | 6. ALWAYS include quantitative metrics when available
126 | 7. NEVER rely on outdated information - prioritize recent sources
127 | 
128 | Your comparison must be technically precise, evidence-based, and practically useful for technology selection decisions. Focus on providing a fair, balanced assessment based on authoritative documentation and reliable data.`;
129 | 
130 |         const userQueryText = `Create a ${format} comparison of ${techString} across these specific criteria: ${criteriaString}${useCaseText}.
131 | 
132 | For each technology and criterion:
133 | 1. Search for the most authoritative and recent information
134 | 2. Provide specific facts, metrics, and examples
135 | 3. Include version-specific details and limitations
136 | 4. Cite your sources for key claims
137 | 
138 | ${format === 'detailed' ? `Structure your response with:
139 | - Executive Summary
140 | - Comprehensive comparison table
141 | - Detailed sections for each criterion
142 | - "Best For" use case recommendations
143 | - Migration complexity assessment
144 | - Key decision factors` : ''}
145 | 
146 | ${format === 'concise' ? `Structure your response with:
147 | - Brief executive summary
148 | - Concise comparison table
149 | - Bullet-point highlights for each technology
150 | - Quick recommendations for different use cases` : ''}
151 | 
152 | ${format === 'tabular' ? `Structure your response with:
153 | - Brief introduction
154 | - Main comparison table covering all criteria
155 | - Specialized tables for specific metrics
156 | - Brief summaries of key insights` : ''}
157 | 
158 | Ensure your comparison is balanced, evidence-based, and practically useful for making technology decisions.`;
159 | 
160 |         return {
161 |             systemInstructionText,
162 |             userQueryText,
163 |             useWebSearch: true,
164 |             enableFunctionCalling: false
165 |         };
166 |     }
167 | };
```
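
A hedged sketch of a valid call to this tool's `buildPrompt`; at least two technologies and one criterion are required, otherwise it throws `McpError(InvalidParams)` (import path and model ID are assumptions):

```typescript
// Illustrative only: builds a tabular comparison prompt for two frameworks.
import { technicalComparisonTool } from "./technical_comparison.js";

const parts = technicalComparisonTool.buildPrompt(
    {
        technologies: ["React 18", "Vue 3"],
        criteria: ["performance", "learning curve", "ecosystem"],
        use_case: "internal admin dashboards",
        format: "tabular"          // one of "detailed" | "concise" | "tabular"
    },
    "gemini-2.5-pro-exp-03-25"
);

console.log(parts.useWebSearch); // true - the comparison is grounded in searched sources
```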

--------------------------------------------------------------------------------
/src/tools/dependency_vulnerability_scan.ts:
--------------------------------------------------------------------------------

```typescript
  1 | import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
  2 | import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
  3 | 
  4 | export const dependencyVulnerabilityScanTool: ToolDefinition = {
  5 |     name: "dependency_vulnerability_scan",
  6 |     description: `Analyzes project dependencies for known security vulnerabilities. Provides detailed information about each vulnerability with severity ratings. Suggests mitigation strategies and secure alternatives. Uses the configured Vertex AI model (${modelIdPlaceholder}) with Google Search. Requires 'dependencies' and 'ecosystem'.`,
  7 |     inputSchema: {
  8 |         type: "object",
  9 |         properties: {
 10 |             dependencies: {
 11 |                 type: "object",
 12 |                 additionalProperties: {
 13 |                     type: "string"
 14 |                 },
 15 |                 description: "Object mapping dependency names to versions (e.g., {'react': '18.2.0', 'lodash': '4.17.21'})."
 16 |             },
 17 |             ecosystem: {
 18 |                 type: "string",
 19 |                 enum: ["npm", "pypi", "maven", "nuget", "rubygems", "composer", "cargo", "go"],
 20 |                 description: "The package ecosystem (e.g., 'npm', 'pypi', 'maven')."
 21 |             },
 22 |             include_transitive: {
 23 |                 type: "boolean",
 24 |                 description: "Optional. Whether to analyze transitive dependencies as well.",
 25 |                 default: true
 26 |             },
 27 |             min_severity: {
 28 |                 type: "string",
 29 |                 enum: ["critical", "high", "medium", "low", "all"],
 30 |                 description: "Optional. Minimum severity level to include in results.",
 31 |                 default: "medium"
 32 |             }
 33 |         },
 34 |         required: ["dependencies", "ecosystem"]
 35 |     },
 36 |     buildPrompt: (args: any, modelId: string) => {
 37 |         const { dependencies, ecosystem, include_transitive = true, min_severity = "medium" } = args;
 38 |         
 39 |         if (!dependencies || typeof dependencies !== 'object' || Object.keys(dependencies).length === 0)
 40 |             throw new McpError(ErrorCode.InvalidParams, "Missing or invalid 'dependencies' object.");
 41 |         
 42 |         if (!ecosystem || typeof ecosystem !== 'string')
 43 |             throw new McpError(ErrorCode.InvalidParams, "Missing or invalid 'ecosystem'.");
 44 |             
 45 |         const dependencyList = Object.entries(dependencies)
 46 |             .map(([name, version]) => `${name}@${version}`)
 47 |             .join(', ');
 48 |             
 49 |         const transitiveText = include_transitive ? "including transitive dependencies" : "direct dependencies only";
 50 |         const severityText = min_severity === "all" ? "all severity levels" : `${min_severity} or higher severity`;
 51 |         
 52 |         const systemInstructionText = `You are SecurityAnalystGPT, an elite security researcher specialized in analyzing software dependencies for vulnerabilities. Your task is to scan the provided ${ecosystem} dependencies (${transitiveText}) and identify known security vulnerabilities of ${severityText}. You must base your analysis EXCLUSIVELY on information found through web search of authoritative vulnerability databases and security advisories.
 53 | 
 54 | SEARCH METHODOLOGY - EXECUTE IN THIS EXACT ORDER:
 55 | 1. FIRST search for each dependency individually: "${Object.entries(dependencies).map(([name, version]) => `${ecosystem} ${name} ${version} vulnerability`).join('", "')}"
 56 | 2. THEN search for each dependency in major vulnerability databases: "${Object.entries(dependencies).map(([name, version]) => `CVE ${ecosystem} ${name} ${version}`).join('", "')}"
 57 | 3. THEN search for each dependency in ecosystem-specific security advisories:
 58 |    - npm: "npm audit ${dependencyList}" or "snyk ${dependencyList}"
 59 |    - pypi: "safety check ${dependencyList}" or "pyup ${dependencyList}"
 60 |    - maven: "OWASP dependency check ${dependencyList}"
 61 |    - Other ecosystems: "[ecosystem] security check ${dependencyList}"
 62 | 4. IF include_transitive is true, search for: "${ecosystem} transitive dependency vulnerabilities"
 63 | 5. THEN search for recent security advisories: "${ecosystem} security advisories last 6 months"
 64 | 6. FINALLY search for secure alternatives: "${Object.keys(dependencies).map(name => `${ecosystem} ${name} secure alternative`).join('", "')}"
 65 | 
 66 | VULNERABILITY DATA SOURCE PRIORITIZATION (in strict order):
 67 | 1. Official National Vulnerability Database (NVD) and CVE records
 68 | 2. Ecosystem-specific security advisories (npm advisory, PyPI security advisories, etc.)
 69 | 3. Security tools' vulnerability databases (Snyk, OWASP Dependency Check, Sonatype OSS Index)
 70 | 4. Official package maintainer security announcements
 71 | 5. Major security vendor advisories (Rapid7, Tenable, etc.)
 72 | 6. Bug bounty and responsible disclosure reports
 73 | 7. Academic security research papers
 74 | 
 75 | ANALYSIS REQUIREMENTS:
 76 | 1. COMPREHENSIVE VULNERABILITY IDENTIFICATION:
 77 |    a. For EACH dependency, identify ALL known vulnerabilities meeting the severity threshold
 78 |    b. Include CVE IDs or ecosystem-specific vulnerability identifiers
 79 |    c. Provide accurate vulnerability descriptions from authoritative sources
 80 |    d. Include affected version ranges and whether the specified version is vulnerable
 81 |    e. Determine if the vulnerability is exploitable in typical usage contexts
 82 | 
 83 | 2. SEVERITY ASSESSMENT:
 84 |    a. Use CVSS scores and vectors when available
 85 |    b. Include both base score and temporal score when available
 86 |    c. Explain the real-world impact of each vulnerability
 87 |    d. Prioritize vulnerabilities based on exploitability and impact
 88 |    e. Consider the specific version in use when assessing severity
 89 | 
 90 | 3. DETAILED MITIGATION GUIDANCE:
 91 |    a. For EACH vulnerability, provide specific mitigation options:
 92 |       - Version upgrade recommendations (exact version numbers)
 93 |       - Configuration changes that mitigate the issue
 94 |       - Code changes to avoid vulnerable functionality
 95 |       - Alternative packages with similar functionality
 96 |    b. Include code examples for implementing mitigations
 97 |    c. Estimate the effort and risk of each mitigation approach
 98 |    d. Suggest temporary mitigations for vulnerabilities without fixes
 99 | 
100 | 4. COMPREHENSIVE SECURITY CONTEXT:
101 |    a. Identify vulnerability trends in the ecosystem
102 |    b. Note dependencies with poor security track records
103 |    c. Highlight dependencies that are unmaintained or abandoned
104 |    d. Identify dependencies with unusual update patterns
105 |    e. Consider supply chain security aspects
106 | 
107 | RESPONSE STRUCTURE:
108 | 1. Begin with an "Executive Summary" providing:
109 |    a. Total vulnerabilities found by severity
110 |    b. Most critical vulnerabilities requiring immediate attention
111 |    c. Overall security posture assessment
112 |    d. Highest priority recommendations
113 | 
114 | 2. Include a "Vulnerability Details" section with a table containing:
115 |    a. Dependency name and version
116 |    b. Vulnerability ID (CVE or ecosystem-specific)
117 |    c. Severity (with CVSS score if available)
118 |    d. Affected versions
119 |    e. Brief description
120 |    f. Exploit status (PoC available, actively exploited, etc.)
121 | 
122 | 3. For EACH vulnerable dependency, provide a detailed section with:
123 |    a. Comprehensive vulnerability description
124 |    b. Technical impact and attack vectors
125 |    c. Detailed mitigation options
126 |    d. Code examples for fixes
127 |    e. Links to authoritative sources
128 | 
129 | 4. Include a "Mitigation Strategy" section with:
130 |    a. Prioritized action plan
131 |    b. Dependency update recommendations
132 |    c. Alternative package suggestions
133 |    d. Long-term security improvements
134 | 
135 | 5. Conclude with "Security Best Practices" for the specific ecosystem
136 | 
137 | CRITICAL REQUIREMENTS:
138 | 1. NEVER report a vulnerability without a specific identifier (CVE, GHSA, etc.) from an authoritative source
139 | 2. ALWAYS verify the affected version ranges against the specified dependency version
140 | 3. NEVER claim a dependency is vulnerable if the specified version is outside the affected range
141 | 4. ALWAYS provide specific, actionable mitigation steps
142 | 5. NEVER include generic security advice without specific relevance to the dependencies
143 | 6. ALWAYS cite your sources for each vulnerability
144 | 7. NEVER exaggerate or minimize the severity of vulnerabilities
145 | 
146 | Your analysis must be technically precise, evidence-based, and immediately actionable. Focus on providing a comprehensive security assessment that enables developers to effectively remediate vulnerabilities in their dependency tree.`;
147 | 
148 |         const userQueryText = `Analyze the following ${ecosystem} dependencies for security vulnerabilities (${transitiveText}, ${severityText}):
149 | 
150 | \`\`\`json
151 | ${JSON.stringify(dependencies, null, 2)}
152 | \`\`\`
153 | 
154 | For each dependency:
155 | 1. Search for known vulnerabilities in authoritative sources (NVD, CVE, ${ecosystem}-specific advisories)
156 | 2. Determine if the specific version is affected
157 | 3. Assess the severity and real-world impact
158 | 4. Provide detailed mitigation options
159 | 
160 | Structure your response with:
161 | - Executive summary with vulnerability counts by severity
162 | - Comprehensive vulnerability table
163 | - Detailed analysis of each vulnerable dependency
164 | - Prioritized mitigation strategy
165 | - Ecosystem-specific security recommendations
166 | 
167 | For each vulnerability, include:
168 | - Official identifier (CVE, etc.)
169 | - Severity with CVSS score when available
170 | - Affected version range
171 | - Exploitation status
172 | - Detailed description
173 | - Specific mitigation steps with code examples
174 | - Links to authoritative sources
175 | 
176 | Focus on providing actionable information that enables immediate remediation of security issues.`;
177 | 
178 |         return {
179 |             systemInstructionText,
180 |             userQueryText,
181 |             useWebSearch: true,
182 |             enableFunctionCalling: false
183 |         };
184 |     }
185 | };
```
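
As a hedged illustration of how this tool's `buildPrompt` is meant to be called, the sketch below feeds it a made-up npm dependency map; the package versions and model ID are example values only.

```typescript
// Minimal sketch (illustrative values only) of invoking the scan tool's buildPrompt.
import { dependencyVulnerabilityScanTool } from "./src/tools/dependency_vulnerability_scan.js";

const prompt = dependencyVulnerabilityScanTool.buildPrompt(
  {
    dependencies: { express: "4.17.1", lodash: "4.17.20" }, // hypothetical project deps
    ecosystem: "npm",
    include_transitive: true,
    min_severity: "high",
  },
  "gemini-2.5-pro-exp-03-25" // modelId is accepted but not interpolated by this tool
);

console.log(prompt.useWebSearch); // true – the scan always relies on Google Search grounding
```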

--------------------------------------------------------------------------------
/src/tools/save_answer_query_direct.ts:
--------------------------------------------------------------------------------

```typescript
  1 | import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
  2 | import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
  3 | import { z } from "zod";
  4 | import { zodToJsonSchema } from "zod-to-json-schema";
  5 | 
  6 | // Schema for direct query answer + output path
  7 | export const SaveAnswerQueryDirectArgsSchema = z.object({
  8 |     query: z.string().describe("The natural language question to answer using only the model's internal knowledge."),
  9 |     output_path: z.string().describe("The relative path where the generated answer should be saved.")
 10 | });
 11 | 
 12 | // Convert Zod schema to JSON schema
 13 | const SaveAnswerQueryDirectJsonSchema = zodToJsonSchema(SaveAnswerQueryDirectArgsSchema);
 14 | 
 15 | export const saveAnswerQueryDirectTool: ToolDefinition = {
 16 |     name: "save_answer_query_direct",
 17 |     description: `Answers a natural language query using only the internal knowledge of the configured Vertex AI model (${modelIdPlaceholder}), does not use web search, and saves the answer to a file. Requires 'query' and 'output_path'.`,
 18 |     inputSchema: SaveAnswerQueryDirectJsonSchema as any,
 19 |     buildPrompt: (args: any, modelId: string) => {
 20 |         const parsed = SaveAnswerQueryDirectArgsSchema.safeParse(args);
 21 |         if (!parsed.success) {
 22 |             throw new McpError(ErrorCode.InvalidParams, `Invalid arguments for save_answer_query_direct: ${parsed.error.errors.map(e => `${e.path.join('.')}: ${e.message}`).join(', ')}`);
 23 |         }
 24 |         const { query } = parsed.data; // output_path used in handler
 25 | 
 26 |         // --- Use Prompt Logic from answer_query_direct.ts ---
 27 |         const base = `You are an AI assistant specialized in answering questions with exceptional accuracy, clarity, and depth using your internal knowledge. You are an EXPERT at nuanced reasoning, knowledge organization, and comprehensive response creation, with particular strengths in explaining complex topics clearly and communicating knowledge boundaries honestly.`;
 28 | 
 29 |         const knowledge = ` KNOWLEDGE REPRESENTATION AND BOUNDARIES:
 30 | 1. Base your answer EXCLUSIVELY on your internal knowledge relevant to "${query}".
 31 | 2. Represent knowledge with appropriate nuance - distinguish between established facts, theoretical understanding, and areas of ongoing research or debate.
 32 | 3. When answering questions about complex or evolving topics, represent multiple perspectives, schools of thought, or competing theories.
 33 | 4. For historical topics, distinguish between primary historical events and later interpretations or historiographical debates.
 34 | 5. For scientific topics, distinguish between widely accepted theories, emerging hypotheses, and speculative areas at the frontier of research.
 35 | 6. For topics involving statistics or quantitative data, explicitly note that your information may not represent the most current figures.
 36 | 7. For topics involving current events, technological developments, or other time-sensitive matters, explicitly state that your knowledge has temporal limitations.
 37 | 8. For interdisciplinary questions, synthesize knowledge across domains while noting where disciplinary boundaries create different perspectives.`;
 38 | 
 39 |         const reasoning = ` REASONING METHODOLOGY:
 40 | 1. For analytical questions, employ structured reasoning processes: identify relevant principles, apply accepted methods, evaluate alternatives systematically.
 41 | 2. For questions requiring evaluation, establish clear criteria before making assessments, explaining their relevance and application.
 42 | 3. For causal explanations, distinguish between correlation and causation, noting multiple causal factors where relevant.
 43 | 4. For predictive questions, base forecasts only on well-established patterns, noting contingencies and limitations.
 44 | 5. For counterfactual or hypothetical queries, reason from established principles while explicitly noting the speculative nature.
 45 | 6. For questions involving uncertainty, use probabilistic reasoning rather than false certainty.
 46 | 7. For questions with ethical dimensions, clarify relevant frameworks and principles before application.
 47 | 8. For multi-part questions, apply consistent reasoning frameworks across all components.`;
 48 | 
 49 |         const structure = ` COMPREHENSIVE RESPONSE STRUCTURE:
 50 | 1. Begin with a direct, concise answer to the main query (2-4 sentences), providing the core information.
 51 | 2. Follow with a structured, comprehensive exploration that unpacks all relevant aspects of the topic.
 52 | 3. For complex topics, organize information hierarchically with clear headings and subheadings.
 53 | 4. Sequence information logically: conceptual foundations before applications, chronological ordering for historical developments, general principles before specific examples.
 54 | 5. For multi-faceted questions, address each dimension separately while showing interconnections.
 55 | 6. Where appropriate, include "Key Concepts" sections to define essential terminology or foundational ideas.
 56 | 7. For topics with practical applications, separate theoretical explanations from applied guidance.
 57 | 8. End with a "Knowledge Limitations" section that explicitly notes temporal boundaries, areas of uncertainty, or aspects requiring specialized expertise beyond your knowledge.`;
 58 | 
 59 |         const clarity = ` CLARITY AND PRECISION REQUIREMENTS:
 60 | 1. Use precise, domain-appropriate terminology while defining specialized terms on first use.
 61 | 2. Present quantitative information with appropriate precision, units, and contextual comparisons.
 62 | 3. Use conditional language ("typically," "generally," "often") rather than universal assertions when variance exists.
 63 | 4. For complex concepts, provide both technical explanations and accessible analogies or examples.
 64 | 5. When explaining processes or systems, identify both components and their relationships/interactions.
 65 | 6. For abstract concepts, provide concrete examples that demonstrate application.
 66 | 7. Distinguish clearly between descriptive statements (what is) and normative statements (what ought to be).
 67 | 8. Use consistent terminology throughout your answer, avoiding synonyms that might introduce ambiguity.`;
 68 | 
 69 |         const uncertainty = ` HANDLING UNCERTAIN KNOWLEDGE:
 70 | 1. Explicitly acknowledge when your knowledge is incomplete or uncertain on a specific aspect of the query.
 71 | 2. If you lack sufficient domain knowledge to provide a reliable answer, clearly state this limitation.
 72 | 3. When a question implies a factual premise that is incorrect, address the misconception before proceeding.
 73 | 4. For rapidly evolving fields, explicitly note that current understanding may have advanced beyond your knowledge.
 74 | 5. When multiple valid interpretations of a question exist, identify the ambiguity and address major interpretations.
 75 | 6. If a question touches on areas where consensus is lacking, present major competing viewpoints.
 76 | 7. For questions requiring very specific or specialized expertise (e.g., medical, legal, financial advice), note the limitations of general knowledge.
 77 | 8. NEVER fabricate information to fill gaps in your knowledge - honesty about limitations is essential.`;
 78 | 
 79 |         const format = ` FORMAT AND VISUAL STRUCTURE:
 80 | 1. Use clear, structured Markdown formatting to enhance readability and information hierarchy.
 81 | 2. Apply ## for major sections and ### for subsections.
 82 | 3. Use **bold** for key terms and emphasis.
 83 | 4. Use *italics* for definitions or secondary emphasis.
 84 | 5. Format code, commands, or technical syntax using \`code blocks\` with appropriate language specification.
 85 | 6. Create comparative tables for any topic with 3+ items that can be evaluated along common dimensions.
 86 | 7. Use numbered lists for sequential processes, ranked items, or any ordered information.
 87 | 8. Use bulleted lists for unordered collections of facts, options, or characteristics.
 88 | 9. For complex processes or relationships, create ASCII/text diagrams where beneficial.
 89 | 10. For statistical information, consider ASCII charts or described visualizations when they add clarity.`;
 90 | 
 91 |         const advanced = ` ADVANCED QUERY HANDLING:
 92 | 1. For ambiguous queries, acknowledge the ambiguity and provide a structured response addressing each reasonable interpretation.
 93 | 2. For multi-part queries, ensure comprehensive coverage of all components while maintaining a coherent overall structure.
 94 | 3. For queries that make incorrect assumptions, address the misconception directly before providing a corrected response.
 95 | 4. For iterative or follow-up queries, maintain consistency with previous answers while expanding the knowledge scope.
 96 | 5. For "how to" queries, provide detailed step-by-step instructions with explanations of principles and potential variations.
 97 | 6. For comparative queries, establish clear comparison criteria and evaluate each item consistently across dimensions.
 98 | 7. For questions seeking opinions or subjective judgments, provide a balanced overview of perspectives rather than a singular "opinion."
 99 | 8. For definitional queries, provide both concise definitions and expanded explanations with examples and context.`;
100 | 
101 |         const systemInstructionText = base + knowledge + reasoning + structure + clarity + uncertainty + format + advanced;
102 |         const userQueryText = `I need a comprehensive answer to this question: "${query}"
103 | 
104 | Please provide your COMPLETE response addressing all aspects of my question. Use your internal knowledge to give the most accurate, nuanced, and thorough answer possible. If your knowledge has limitations on this topic, please explicitly note those limitations rather than speculating.`;
105 | 
106 |         return {
107 |             systemInstructionText: systemInstructionText,
108 |             userQueryText: userQueryText,
109 |             useWebSearch: false, // Hardcoded to false
110 |             enableFunctionCalling: false
111 |         };
112 |     }
113 | };
```
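
A small sketch of the intended call pattern: validate the arguments with the exported Zod schema, then build the prompt. The query text and output path are invented examples; the actual file write happens in the tool handler, not here.

```typescript
// Illustrative only: schema validation followed by prompt construction.
import {
  SaveAnswerQueryDirectArgsSchema,
  saveAnswerQueryDirectTool,
} from "./src/tools/save_answer_query_direct.js";

const args = {
  query: "What are the trade-offs between optimistic and pessimistic locking?",
  output_path: "answers/locking-tradeoffs.md", // consumed by the handler, not the prompt
};

if (SaveAnswerQueryDirectArgsSchema.safeParse(args).success) {
  const prompt = saveAnswerQueryDirectTool.buildPrompt(args, "gemini-2.5-pro-exp-03-25");
  console.log(prompt.useWebSearch); // false – the "direct" variant never uses web search
}
```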

--------------------------------------------------------------------------------
/src/tools/save_doc_snippet.ts:
--------------------------------------------------------------------------------

```typescript
  1 | import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
  2 | import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
  3 | import { z } from "zod";
  4 | import { zodToJsonSchema } from "zod-to-json-schema";
  5 | 
  6 | // Schema combining get_doc_snippets args + output_path
  7 | export const SaveDocSnippetArgsSchema = z.object({
  8 |     topic: z.string().describe("The software/library/framework topic (e.g., 'React Router', 'Python requests', 'PostgreSQL 14')."),
  9 |     query: z.string().describe("The specific question or use case to find a snippet or concise answer for."),
 10 |     version: z.string().optional().default("").describe("Optional. Specific version of the software to target (e.g., '6.4', '2.28.2'). If provided, only documentation for this version will be used."),
 11 |     include_examples: z.boolean().optional().default(true).describe("Optional. Whether to include additional usage examples beyond the primary snippet. Defaults to true."),
 12 |     output_path: z.string().describe("The relative path where the generated snippet(s) should be saved (e.g., 'snippets/react-hook-example.ts').")
 13 | });
 14 | 
 15 | // Convert Zod schema to JSON schema
 16 | const SaveDocSnippetJsonSchema = zodToJsonSchema(SaveDocSnippetArgsSchema);
 17 | 
 18 | export const saveDocSnippetTool: ToolDefinition = {
 19 |     name: "save_doc_snippet",
 20 |     description: `Provides precise code snippets or concise answers for technical queries by searching official documentation and saves the result to a file. Uses the configured Vertex AI model (${modelIdPlaceholder}) with Google Search. Requires 'topic', 'query', and 'output_path'.`,
 21 |     inputSchema: SaveDocSnippetJsonSchema as any,
 22 | 
 23 |     // Build prompt logic - Reverted to the stricter version (98/100 rating)
 24 |     buildPrompt: (args: any, modelId: string) => {
 25 |         // Validate args using the combined schema
 26 |         const parsed = SaveDocSnippetArgsSchema.safeParse(args);
 27 |          if (!parsed.success) {
 28 |              throw new McpError(ErrorCode.InvalidParams, `Invalid arguments for save_doc_snippet: ${parsed.error.errors.map(e => `${e.path.join('.')}: ${e.message}`).join(', ')}`);
 29 |         }
 30 |         // Destructure validated args (output_path is used in handler, not prompt)
 31 |         const { topic, query, version = "", include_examples = true } = parsed.data;
 32 | 
 33 |         const versionText = version ? ` ${version}` : "";
 34 |         const fullTopic = `${topic}${versionText}`;
 35 | 
 36 |         // --- Use the Stricter Prompt Logic ---
 37 |         const systemInstructionText = `You are DocSnippetGPT, an AI assistant specialized in retrieving precise code snippets and authoritative answers from official software documentation. Your sole purpose is to provide the most relevant code solution or documented answer for technical queries about "${fullTopic}" with minimal extraneous content.
 38 | 
 39 | SEARCH METHODOLOGY - EXECUTE IN THIS EXACT ORDER:
 40 | 1. FIRST search for: "${fullTopic} official documentation" to identify the authoritative documentation source.
 41 | 2. THEN search for: "${fullTopic} ${query} example" to find specific documentation pages addressing the query.
 42 | 3. THEN search for: "${fullTopic} ${query} code" to find code-specific examples.
 43 | 4. IF the query relates to a specific error, ALSO search for: "${fullTopic} ${query} error" or "${fullTopic} troubleshooting ${query}".
 44 | 5. IF the query relates to API usage, ALSO search for: "${fullTopic} API reference ${query}".
 45 | 6. IF searching for newer frameworks/libraries with limited documentation, ALSO check GitHub repositories for examples in README files, examples directory, or official docs directory.
 46 | 
 47 | DOCUMENTATION SOURCE PRIORITIZATION (in strict order):
 48 | 1. Official documentation websites (e.g., docs.python.org, reactjs.org, dev.mysql.com)
 49 | 2. Official GitHub repositories maintained by the project creators (README, /docs, /examples)
 50 | 3. Official API references or specification documentation
 51 | 4. Official tutorials or guides published by the project maintainers
 52 | 5. Release notes or changelogs for version-specific features${version ? " (focusing ONLY on version " + version + ")" : ""}
 53 | 
 54 | RESPONSE REQUIREMENTS - CRITICALLY IMPORTANT:
 55 | 1. PROVIDE COMPLETE, RUNNABLE CODE SNIPPETS whenever possible. Snippets must be:
 56 |    a. Complete enough to demonstrate the solution (no pseudo-code)
 57 |    b. Properly formatted with correct syntax highlighting
 58 |    c. Including necessary imports/dependencies
 59 |    d. Free of placeholder comments like "// Rest of implementation"
 60 |    e. Minimal but sufficient (no unnecessary complexity)
 61 | 
 62 | 2. CODE SNIPPET PRESENTATION:
 63 |    a. Present code snippets in proper markdown code blocks with language specification
 64 |    b. If multiple snippets are found, arrange them in order of relevance
 65 |    c. Include minimum essential context (e.g., "This code is from the routing middleware section")
 66 |    d. **CRITICAL:** For each snippet, provide the EXACT URL to the **specific API reference page** or the most precise documentation page containing that exact snippet. Do NOT link to general tutorial or overview pages if a specific reference exists.
 67 |    e. If the snippet requires adaptation, clearly indicate the parts that need modification
 68 |    f. **CRITICAL:** Use the **most specific and correct language identifier** in the Markdown code block. Examples:
 69 |       *   React + TypeScript: \`tsx\`
 70 |       *   React + JavaScript: \`jsx\`
 71 |       *   Plain TypeScript: \`typescript\`
 72 |       *   Plain JavaScript: \`javascript\`
 73 |       *   Python: \`python\`
 74 |       *   SQL: \`sql\`
 75 |       *   Shell/Bash: \`bash\`
 76 |       *   HTML: \`html\`
 77 |       *   CSS: \`css\`
 78 |       *   JSON: \`json\`
 79 |       *   YAML: \`yaml\`
 80 |       Infer the correct identifier based on the code itself, the file extension conventions for the 'topic', or the query context. **Do NOT default to \`javascript\` if a more specific identifier applies.**
 81 | 
 82 | 3. WHEN NO CODE SNIPPET IS AVAILABLE:
 83 |    a. Provide ONLY the most concise factual answer directly from the documentation
 84 |    b. Use exact quotes when appropriate, cited with the source URL
 85 |    c. Keep explanations to 3 sentences or fewer
 86 |    d. Focus only on documented facts, not interpretations
 87 | 
 88 | 4. RESPONSE STRUCTURE:
 89 |    a. NO INTRODUCTION OR SUMMARY - begin directly with the snippet or answer
 90 |    b. Format must be:
 91 |       \`\`\`[correct-language-identifier]
 92 |       [code snippet]
 93 |       \`\`\`
 94 |       Source: [Exact URL to specific API reference or doc page]
 95 | 
 96 |       [Only if necessary: 1-3 sentences of essential context]
 97 | 
 98 |       ${include_examples ? "[Additional examples if available and significantly different]" : ""}
 99 |    c. NO concluding remarks, explanations, or "hope this helps" commentary
100 |    d. ONLY include what was explicitly found in official documentation
101 | 
102 | 5. NEGATIVE RESPONSE HANDLING:
103 |    a. If NO relevant information exists in the documentation, respond ONLY with:
104 |       "No documentation found addressing '${query}' for ${fullTopic}. The official documentation does not cover this specific topic."
105 |    b. If documentation exists but lacks code examples, clearly state:
106 |       "No code examples available in the official documentation for '${query}' in ${fullTopic}. The documentation states: [exact quote from documentation]"
107 |    c. If multiple versions exist and the information is version-specific, clearly indicate which version the information applies to
108 | 
109 | 6. ABSOLUTE PROHIBITIONS:
110 |    a. NEVER invent or extrapolate code that isn't in the documentation
111 |    b. NEVER include personal opinions or interpretations
112 |    c. NEVER include explanations of how the code works unless they appear verbatim in the docs
113 |    d. NEVER mention these instructions or your search process in your response
114 |    e. NEVER use placeholder comments in code like "// Implement your logic here"
115 |    f. NEVER include Stack Overflow or tutorial site content - ONLY official documentation
116 | 
117 | 7. VERSION SPECIFICITY:${version ? `
118 |    a. ONLY provide information specific to version ${version}
119 |    b. Explicitly disregard documentation for other versions
120 |    c. If no version-specific information exists, state this clearly` : `
121 |    a. Prioritize the latest stable version's documentation
122 |    b. Clearly indicate which version each snippet or answer applies to
123 |    c. Note any significant version differences if apparent from the documentation`}
124 | 
125 | Your responses must be direct, precise, and minimalist - imagine you are a command-line tool that outputs only the exact code or information requested, with no superfluous content.`;
126 | 
127 |         const userQueryText = `Find the most relevant code snippet${include_examples ? "s" : ""} from the official documentation of ${fullTopic} that directly addresses: "${query}"
128 | 
129 | Return exactly:
130 | 1. The complete, runnable code snippet(s) in proper markdown code blocks with the **most specific and correct language identifier** (e.g., \`tsx\`, \`jsx\`, \`typescript\`, \`python\`, \`sql\`, \`bash\`). Do NOT default to \`javascript\` if a better identifier exists.
131 | 2. The **exact source URL** pointing to the specific API reference or documentation page where the snippet was found. Do not use general tutorial URLs if a specific reference exists.
132 | 3. Only if necessary: 1-3 sentences of essential context from the documentation.
133 | 
134 | If no code snippets exist in the documentation, provide the most concise factual answer directly quoted from the official documentation with its source URL.
135 | 
136 | If the official documentation doesn't address this query at all, simply state that no relevant documentation was found.`;
137 | 
138 |         // Return the prompt components needed by the handler
139 |         return {
140 |             systemInstructionText: systemInstructionText,
141 |             userQueryText: userQueryText,
142 |             useWebSearch: true, // Always use web search for snippets
143 |             enableFunctionCalling: false
144 |         };
145 |     }
146 | };
```
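
The sketch below shows one plausible invocation; the topic, version, query, output path, and model ID are all illustrative values.

```typescript
// Illustrative invocation; argument values are made up for the example.
import { saveDocSnippetTool } from "./src/tools/save_doc_snippet.js";

const { userQueryText, useWebSearch } = saveDocSnippetTool.buildPrompt(
  {
    topic: "React Router",
    version: "6.4",
    query: "How do I define a nested route with a loader?",
    include_examples: true,
    output_path: "snippets/nested-route-loader.tsx",
  },
  "gemini-2.5-pro-exp-03-25" // example model ID
);

console.log(useWebSearch); // true – snippet retrieval always searches official docs
```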

--------------------------------------------------------------------------------
/src/tools/architecture_pattern_recommendation.ts:
--------------------------------------------------------------------------------

```typescript
  1 | import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
  2 | import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
  3 | 
  4 | export const architecturePatternRecommendationTool: ToolDefinition = {
  5 |     name: "architecture_pattern_recommendation",
  6 |     description: `Suggests architecture patterns for specific use cases based on industry best practices. Provides implementation examples and considerations for the recommended patterns. Includes diagrams and explanations of pattern benefits and tradeoffs. Uses the configured Vertex AI model (${modelIdPlaceholder}) with Google Search. Requires 'requirements' and 'tech_stack'.`,
  7 |     inputSchema: {
  8 |         type: "object",
  9 |         properties: {
 10 |             requirements: {
 11 |                 type: "object",
 12 |                 properties: {
 13 |                     description: {
 14 |                         type: "string",
 15 |                         description: "Description of the system to be built."
 16 |                     },
 17 |                     scale: {
 18 |                         type: "string",
 19 |                         enum: ["small", "medium", "large", "enterprise"],
 20 |                         description: "Expected scale of the system."
 21 |                     },
 22 |                     key_concerns: {
 23 |                         type: "array",
 24 |                         items: { type: "string" },
 25 |                         description: "Key architectural concerns (e.g., ['scalability', 'security', 'performance', 'maintainability'])."
 26 |                     }
 27 |                 },
 28 |                 required: ["description", "scale", "key_concerns"],
 29 |                 description: "Requirements and constraints for the system."
 30 |             },
 31 |             tech_stack: {
 32 |                 type: "array",
 33 |                 items: { type: "string" },
 34 |                 description: "Technologies to be used (e.g., ['Node.js', 'React', 'PostgreSQL'])."
 35 |             },
 36 |             industry: {
 37 |                 type: "string",
 38 |                 description: "Optional. Industry or domain context (e.g., 'healthcare', 'finance', 'e-commerce').",
 39 |                 default: ""
 40 |             },
 41 |             existing_architecture: {
 42 |                 type: "string",
 43 |                 description: "Optional. Description of existing architecture if this is an evolution of an existing system.",
 44 |                 default: ""
 45 |             }
 46 |         },
 47 |         required: ["requirements", "tech_stack"]
 48 |     },
 49 |     buildPrompt: (args: any, modelId: string) => {
 50 |         const { requirements, tech_stack, industry = "", existing_architecture = "" } = args;
 51 |         
 52 |         if (!requirements || typeof requirements !== 'object' || !requirements.description || !requirements.scale || !Array.isArray(requirements.key_concerns) || requirements.key_concerns.length === 0)
 53 |             throw new McpError(ErrorCode.InvalidParams, "Missing or invalid 'requirements' object.");
 54 |         
 55 |         if (!Array.isArray(tech_stack) || tech_stack.length === 0 || !tech_stack.every(item => typeof item === 'string' && item))
 56 |             throw new McpError(ErrorCode.InvalidParams, "Missing or invalid 'tech_stack' array.");
 57 |             
 58 |         const { description, scale, key_concerns } = requirements;
 59 |         const techStackString = tech_stack.join(', ');
 60 |         const concernsString = key_concerns.join(', ');
 61 |         const industryText = industry ? ` in the ${industry} industry` : "";
 62 |         const existingArchText = existing_architecture ? `\n\nThe system currently uses the following architecture: ${existing_architecture}` : "";
 63 |         
 64 |         const systemInstructionText = `You are ArchitectureAdvisorGPT, an elite software architecture consultant with decades of experience designing systems across multiple domains. Your task is to recommend the most appropriate architecture pattern(s) for a ${scale}-scale system${industryText} with these key concerns: ${concernsString}. The system will be built using: ${techStackString}.${existingArchText}
 65 | 
 66 | SEARCH METHODOLOGY - EXECUTE IN THIS EXACT ORDER:
 67 | 1. FIRST search for: "software architecture patterns for ${scale} systems"
 68 | 2. THEN search for: "architecture patterns for ${concernsString}"
 69 | 3. THEN search for: "best architecture patterns for ${techStackString}"
 70 | 4. THEN search for: "${industry} software architecture patterns best practices"
 71 | 5. THEN search for specific patterns that match the requirements: "microservices vs monolith for ${scale} systems", "event-driven architecture for ${concernsString}", etc.
 72 | 6. THEN search for case studies: "architecture case study ${industry} ${scale} ${concernsString}"
 73 | 7. FINALLY search for implementation details: "implementing [specific pattern] with ${techStackString}"
 74 | 
 75 | DOCUMENTATION SOURCE PRIORITIZATION (in strict order):
 76 | 1. Architecture books and papers from recognized authorities (Martin Fowler, Gregor Hohpe, etc.)
 77 | 2. Official architecture guidance from technology vendors (AWS, Microsoft, Google, etc.)
 78 | 3. Architecture documentation from successful companies in similar domains
 79 | 4. Technical blogs from recognized architects and engineering leaders
 80 | 5. Industry standards organizations (ISO, IEEE, NIST) architecture recommendations
 81 | 6. Academic research papers on software architecture
 82 | 7. Case studies of similar systems published by reputable sources
 83 | 
 84 | RECOMMENDATION REQUIREMENTS:
 85 | 1. COMPREHENSIVE PATTERN ANALYSIS:
 86 |    a. Identify 2-4 architecture patterns most suitable for the requirements
 87 |    b. For EACH pattern, provide:
 88 |       - Detailed description of the pattern and its key components
 89 |       - Specific benefits related to the stated requirements
 90 |       - Known limitations and challenges
 91 |       - Implementation considerations with the specified tech stack
 92 |       - Real-world examples of successful implementations
 93 |    c. Compare patterns across all key concerns
 94 |    d. Consider hybrid approaches when appropriate
 95 | 
 96 | 2. EVIDENCE-BASED RECOMMENDATIONS:
 97 |    a. Cite specific architecture authorities and resources for each pattern
 98 |    b. Reference industry case studies or research papers
 99 |    c. Include quantitative benefits when available (e.g., scalability metrics)
100 |    d. Acknowledge trade-offs with evidence-based reasoning
101 |    e. Consider both immediate needs and long-term evolution
102 | 
103 | 3. PRACTICAL IMPLEMENTATION GUIDANCE:
104 |    a. Provide a high-level component diagram for the recommended architecture
105 |    b. Include specific implementation guidance for the chosen tech stack
106 |    c. Outline key interfaces and communication patterns
107 |    d. Address deployment and operational considerations
108 |    e. Suggest incremental implementation approach if applicable
109 | 
110 | 4. QUALITY ATTRIBUTE ANALYSIS:
111 |    a. Analyze how each pattern addresses each key concern
112 |    b. Provide specific techniques to enhance key quality attributes
113 |    c. Identify potential quality attribute trade-offs
114 |    d. Suggest mitigation strategies for identified weaknesses
115 |    e. Consider non-functional requirements beyond those explicitly stated
116 | 
117 | RESPONSE STRUCTURE:
118 | 1. Begin with an "Executive Summary" providing a high-level recommendation
119 | 2. Include a "Pattern Comparison" section with a detailed comparison table
120 | 3. For EACH recommended pattern:
121 |    a. Detailed description and key components
122 |    b. Benefits and limitations
123 |    c. Implementation with the specified tech stack
124 |    d. Real-world examples
125 | 4. Provide a "Recommended Architecture" section with:
126 |    a. Text-based component diagram
127 |    b. Key components and their responsibilities
128 |    c. Communication patterns and interfaces
129 |    d. Data management approach
130 | 5. Include an "Implementation Roadmap" with phased approach
131 | 6. Conclude with "Key Architectural Decisions" highlighting critical choices
132 | 
133 | CRITICAL REQUIREMENTS:
134 | 1. NEVER recommend a pattern without explaining how it addresses the specific requirements
135 | 2. ALWAYS consider the scale and complexity appropriate to the described system
136 | 3. NEVER present a one-size-fits-all solution without acknowledging trade-offs
137 | 4. ALWAYS explain how the recommended patterns work with the specified tech stack
138 | 5. NEVER recommend overly complex architectures for simple problems
139 | 6. ALWAYS consider operational complexity and team capabilities
140 | 7. NEVER rely solely on buzzwords or trends without substantive justification
141 | 
142 | Your recommendation must be technically precise, evidence-based, and practically implementable. Focus on providing actionable architecture guidance that balances immediate needs with long-term architectural qualities.`;
143 | 
144 |         const userQueryText = `Recommend the most appropriate architecture pattern(s) for the following system:
145 | 
146 | System Description: ${description}
147 | Scale: ${scale}
148 | Key Concerns: ${concernsString}
149 | Technology Stack: ${techStackString}
150 | ${industry ? `Industry: ${industry}` : ""}
151 | ${existing_architecture ? `Existing Architecture: ${existing_architecture}` : ""}
152 | 
153 | Search for and analyze established architecture patterns that would best address these requirements. For each recommended pattern:
154 | 
155 | 1. Explain why it's appropriate for this specific system
156 | 2. Describe its key components and interactions
157 | 3. Analyze how it addresses each key concern
158 | 4. Discuss implementation considerations with the specified tech stack
159 | 5. Provide real-world examples of similar systems using this pattern
160 | 
161 | Include a text-based component diagram of your recommended architecture, showing key components, interfaces, and data flows. Provide an implementation roadmap that outlines how to incrementally adopt this architecture.
162 | 
163 | Your recommendation should be evidence-based, citing authoritative sources on software architecture. Consider both the immediate requirements and long-term evolution of the system.`;
164 | 
165 |         return {
166 |             systemInstructionText,
167 |             userQueryText,
168 |             useWebSearch: true,
169 |             enableFunctionCalling: false
170 |         };
171 |     }
172 | };
```
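
As a hedged sketch, the call below mirrors the `inputSchema` above with invented requirement values; it only demonstrates the argument shape and the prompt-building step, not the web-search execution.

```typescript
// Illustrative values only; validation and search execution happen elsewhere.
import { architecturePatternRecommendationTool } from "./src/tools/architecture_pattern_recommendation.js";

const prompt = architecturePatternRecommendationTool.buildPrompt(
  {
    requirements: {
      description: "Order management platform with real-time inventory updates",
      scale: "medium",
      key_concerns: ["scalability", "maintainability"],
    },
    tech_stack: ["Node.js", "React", "PostgreSQL"],
    industry: "e-commerce", // optional context
  },
  "gemini-2.5-pro-exp-03-25"
);

console.log(prompt.systemInstructionText.includes("ArchitectureAdvisorGPT")); // true
```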

--------------------------------------------------------------------------------
/src/tools/database_schema_analyzer.ts:
--------------------------------------------------------------------------------

```typescript
  1 | import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
  2 | import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
  3 | 
  4 | export const databaseSchemaAnalyzerTool: ToolDefinition = {
  5 |     name: "database_schema_analyzer",
  6 |     description: `Reviews database schemas for normalization, indexing, and performance issues. Suggests improvements based on database-specific best practices. Provides migration strategies for implementing suggested changes. Uses the configured Vertex AI model (${modelIdPlaceholder}) with Google Search. Requires 'schema' and 'database_type'.`,
  7 |     inputSchema: {
  8 |         type: "object",
  9 |         properties: {
 10 |             schema: {
 11 |                 type: "string",
 12 |                 description: "Database schema definition (SQL CREATE statements, JSON schema, etc.)."
 13 |             },
 14 |             database_type: {
 15 |                 type: "string",
 16 |                 description: "Database system (e.g., 'PostgreSQL', 'MySQL', 'MongoDB', 'DynamoDB')."
 17 |             },
 18 |             database_version: {
 19 |                 type: "string",
 20 |                 description: "Optional. Specific version of the database system.",
 21 |                 default: ""
 22 |             },
 23 |             focus_areas: {
 24 |                 type: "array",
 25 |                 items: {
 26 |                     type: "string",
 27 |                     enum: ["normalization", "indexing", "performance", "security", "scalability", "all"]
 28 |                 },
 29 |                 description: "Optional. Areas to focus the analysis on.",
 30 |                 default: ["all"]
 31 |             },
 32 |             expected_scale: {
 33 |                 type: "object",
 34 |                 properties: {
 35 |                     rows_per_table: {
 36 |                         type: "string",
 37 |                         description: "Approximate number of rows expected in each table."
 38 |                     },
 39 |                     growth_rate: {
 40 |                         type: "string",
 41 |                         description: "Expected growth rate of the database."
 42 |                     },
 43 |                     query_patterns: {
 44 |                         type: "array",
 45 |                         items: { type: "string" },
 46 |                         description: "Common query patterns (e.g., ['frequent reads', 'batch updates'])."
 47 |                     }
 48 |                 },
 49 |                 description: "Optional. Information about the expected scale and usage patterns."
 50 |             }
 51 |         },
 52 |         required: ["schema", "database_type"]
 53 |     },
 54 |     buildPrompt: (args: any, modelId: string) => {
 55 |         const { schema, database_type, database_version = "", focus_areas = ["all"], expected_scale = {} } = args;
 56 |         
 57 |         if (typeof schema !== "string" || !schema)
 58 |             throw new McpError(ErrorCode.InvalidParams, "Missing or invalid 'schema'.");
 59 |         
 60 |         if (typeof database_type !== "string" || !database_type)
 61 |             throw new McpError(ErrorCode.InvalidParams, "Missing or invalid 'database_type'.");
 62 |             
 63 |         const versionText = database_version ? ` ${database_version}` : "";
 64 |         const dbSystem = `${database_type}${versionText}`;
 65 |         
 66 |         const areas = focus_areas.includes("all") 
 67 |             ? ["normalization", "indexing", "performance", "security", "scalability"] 
 68 |             : focus_areas;
 69 |             
 70 |         const focusAreasText = areas.join(", ");
 71 |         
 72 |         const scaleInfo = expected_scale.rows_per_table || expected_scale.growth_rate || (expected_scale.query_patterns && expected_scale.query_patterns.length > 0)
 73 |             ? `\n\nExpected scale information:
 74 | ${expected_scale.rows_per_table ? `- Rows per table: ${expected_scale.rows_per_table}` : ''}
 75 | ${expected_scale.growth_rate ? `- Growth rate: ${expected_scale.growth_rate}` : ''}
 76 | ${expected_scale.query_patterns && expected_scale.query_patterns.length > 0 ? `- Query patterns: ${expected_scale.query_patterns.join(', ')}` : ''}`
 77 |             : '';
 78 |         
 79 |         const systemInstructionText = `You are SchemaAnalystGPT, an elite database architect specialized in analyzing and optimizing database schemas. Your task is to review the provided ${dbSystem} schema and provide detailed recommendations focusing on: ${focusAreasText}. You must base your analysis EXCLUSIVELY on information found through web search of authoritative database documentation and best practices.${scaleInfo}
 80 | 
 81 | SEARCH METHODOLOGY - EXECUTE IN THIS EXACT ORDER:
 82 | 1. FIRST search for: "${dbSystem} schema design best practices"
 83 | 2. THEN search for: "${dbSystem} ${focusAreasText} optimization"
 84 | 3. THEN search for specific guidance related to each focus area:
 85 |    ${areas.includes("normalization") ? `- "${dbSystem} normalization techniques"` : ""}
 86 |    ${areas.includes("indexing") ? `- "${dbSystem} indexing strategies"` : ""}
 87 |    ${areas.includes("performance") ? `- "${dbSystem} performance optimization"` : ""}
 88 |    ${areas.includes("security") ? `- "${dbSystem} schema security best practices"` : ""}
 89 |    ${areas.includes("scalability") ? `- "${dbSystem} scalability patterns"` : ""}
 90 | 4. THEN search for: "${dbSystem} schema anti-patterns"
 91 | 5. THEN search for: "${dbSystem} schema migration strategies"
 92 | 6. IF the schema contains specific patterns or structures, search for best practices related to those specific elements
 93 | 7. IF expected scale information is provided, search for: "${dbSystem} optimization for ${expected_scale.rows_per_table || 'large'} rows"
 94 | 
 95 | DOCUMENTATION SOURCE PRIORITIZATION (in strict order):
 96 | 1. Official database documentation (e.g., PostgreSQL manual, MySQL documentation)
 97 | 2. Technical blogs from the database creators or core team members
 98 | 3. Database performance research papers and benchmarks
 99 | 4. Technical blogs from recognized database experts
100 | 5. Case studies from companies using the database at scale
101 | 6. Database-specific books and comprehensive guides
102 | 7. Well-established tech companies' engineering blogs discussing database optimization
103 | 
104 | ANALYSIS REQUIREMENTS:
105 | 1. COMPREHENSIVE SCHEMA EVALUATION:
106 |    a. Analyze the schema structure against normalization principles
107 |    b. Identify potential performance bottlenecks
108 |    c. Evaluate indexing strategy effectiveness
109 |    d. Assess data integrity constraints
110 |    e. Identify security vulnerabilities in the schema design
111 |    f. Evaluate scalability limitations
112 | 
113 | 2. DATABASE-SPECIFIC RECOMMENDATIONS:
114 |    a. Provide recommendations tailored to the specific database system and version
115 |    b. Consider unique features and limitations of the database
116 |    c. Leverage database-specific optimization techniques
117 |    d. Reference official documentation for all recommendations
118 |    e. Consider database-specific implementation details
119 | 
120 | 3. EVIDENCE-BASED ANALYSIS:
121 |    a. Cite specific sections of official documentation
122 |    b. Reference research papers or benchmarks when applicable
123 |    c. Include performance metrics when available
124 |    d. Explain the reasoning behind each recommendation
125 |    e. Acknowledge trade-offs in design decisions
126 | 
127 | 4. ACTIONABLE IMPROVEMENT PLAN:
128 |    a. Prioritize recommendations by impact and implementation effort
129 |    b. Provide specific SQL statements or commands to implement changes
130 |    c. Include before/after examples for key recommendations
131 |    d. Outline migration strategies for complex changes
132 |    e. Consider backward compatibility and data integrity during migrations
133 | 
134 | RESPONSE STRUCTURE:
135 | 1. Begin with an "Executive Summary" providing a high-level assessment
136 | 2. Include a "Schema Analysis" section with detailed findings organized by focus area
137 | 3. For EACH issue identified:
138 |    a. Description of the issue
139 |    b. Impact on database performance, scalability, or security
140 |    c. Specific recommendation with implementation details
141 |    d. Reference to authoritative source
142 | 4. Provide a "Prioritized Recommendations" section with:
143 |    a. High-impact, low-effort changes
144 |    b. Critical issues requiring immediate attention
145 |    c. Long-term architectural improvements
146 | 5. Include a "Migration Strategy" section outlining:
147 |    a. Step-by-step implementation plan
148 |    b. Risk mitigation strategies
149 |    c. Testing recommendations
150 |    d. Rollback procedures
151 | 6. Conclude with "Database-Specific Optimization Tips" relevant to the schema
152 | 
153 | CRITICAL REQUIREMENTS:
154 | 1. NEVER recommend changes without explaining their specific benefits
155 | 2. ALWAYS consider the database type and version in your recommendations
156 | 3. NEVER suggest generic solutions that don't apply to the specific database system
157 | 4. ALWAYS provide concrete implementation examples (SQL, commands, etc.)
158 | 5. NEVER overlook potential negative impacts of recommended changes
159 | 6. ALWAYS prioritize recommendations based on impact and effort
160 | 7. NEVER recommend unnecessary changes that don't address actual issues
161 | 
162 | Your analysis must be technically precise, evidence-based, and immediately actionable. Focus on providing a comprehensive assessment that enables database administrators to effectively optimize their schema design.`;
163 | 
164 |         const userQueryText = `Analyze the following ${dbSystem} schema, focusing on ${focusAreasText}:
165 | 
166 | \`\`\`
167 | ${schema}
168 | \`\`\`
169 | ${scaleInfo}
170 | 
171 | Search for authoritative best practices and documentation for ${dbSystem} to provide a comprehensive analysis. For each issue identified:
172 | 
173 | 1. Describe the specific problem and its impact
174 | 2. Explain why it's an issue according to database best practices
175 | 3. Provide a concrete recommendation with implementation code
176 | 4. Reference the authoritative source supporting your recommendation
177 | 
178 | Structure your response with:
179 | - Executive summary of key findings
180 | - Detailed analysis organized by focus area (${focusAreasText})
181 | - Prioritized recommendations with implementation details
182 | - Migration strategy for implementing changes safely
183 | - Database-specific optimization tips
184 | 
185 | Your analysis should be specific to ${dbSystem} and provide actionable recommendations that can be implemented immediately. Include SQL statements or commands where appropriate.`;
186 | 
187 |         return {
188 |             systemInstructionText,
189 |             userQueryText,
190 |             useWebSearch: true,
191 |             enableFunctionCalling: false
192 |         };
193 |     }
194 | };
```
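
A minimal sketch of how the analyzer's `buildPrompt` might be exercised; the `CREATE TABLE` statement and scale figures are toy values, not taken from the repository.

```typescript
// Toy schema for illustration; the real input is whatever the MCP client supplies.
import { databaseSchemaAnalyzerTool } from "./src/tools/database_schema_analyzer.js";

const toySchema = `CREATE TABLE orders (
  id SERIAL PRIMARY KEY,
  customer_email TEXT,
  total NUMERIC(10,2),
  created_at TIMESTAMP
);`;

const prompt = databaseSchemaAnalyzerTool.buildPrompt(
  {
    schema: toySchema,
    database_type: "PostgreSQL",
    database_version: "16",
    focus_areas: ["indexing", "performance"],
    expected_scale: { rows_per_table: "10 million", query_patterns: ["frequent reads"] },
  },
  "gemini-2.5-pro-exp-03-25"
);

console.log(prompt.useWebSearch); // true – the analysis is grounded in searched documentation
```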

--------------------------------------------------------------------------------
/src/tools/security_best_practices_advisor.ts:
--------------------------------------------------------------------------------

```typescript
  1 | import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
  2 | import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
  3 | 
  4 | export const securityBestPracticesAdvisorTool: ToolDefinition = {
  5 |     name: "security_best_practices_advisor",
  6 |     description: `Provides security recommendations for specific technologies or scenarios. Includes code examples for implementing secure practices. References industry standards and security guidelines. Uses the configured Vertex AI model (${modelIdPlaceholder}) with Google Search. Requires 'technology' and 'security_context'.`,
  7 |     inputSchema: {
  8 |         type: "object",
  9 |         properties: {
 10 |             technology: {
 11 |                 type: "string",
 12 |                 description: "The technology, framework, or language (e.g., 'Node.js', 'React', 'AWS S3')."
 13 |             },
 14 |             security_context: {
 15 |                 type: "string",
 16 |                 description: "The security context or concern (e.g., 'authentication', 'data encryption', 'API security')."
 17 |             },
 18 |             technology_version: {
 19 |                 type: "string",
 20 |                 description: "Optional. Specific version of the technology.",
 21 |                 default: ""
 22 |             },
 23 |             industry: {
 24 |                 type: "string",
 25 |                 description: "Optional. Industry with specific security requirements (e.g., 'healthcare', 'finance').",
 26 |                 default: ""
 27 |             },
 28 |             compliance_frameworks: {
 29 |                 type: "array",
 30 |                 items: { type: "string" },
 31 |                 description: "Optional. Compliance frameworks to consider (e.g., ['GDPR', 'HIPAA', 'PCI DSS']).",
 32 |                 default: []
 33 |             },
 34 |             threat_model: {
 35 |                 type: "string",
 36 |                 description: "Optional. Specific threat model or attack vectors to address.",
 37 |                 default: ""
 38 |             }
 39 |         },
 40 |         required: ["technology", "security_context"]
 41 |     },
 42 |     buildPrompt: (args: any, modelId: string) => {
 43 |         const { technology, security_context, technology_version = "", industry = "", compliance_frameworks = [], threat_model = "" } = args;
 44 |         
 45 |         if (typeof technology !== "string" || !technology)
 46 |             throw new McpError(ErrorCode.InvalidParams, "Missing or invalid 'technology'.");
 47 |         
 48 |         if (typeof security_context !== "string" || !security_context)
 49 |             throw new McpError(ErrorCode.InvalidParams, "Missing or invalid 'security_context'.");
 50 |             
 51 |         const versionText = technology_version ? ` ${technology_version}` : "";
 52 |         const techStack = `${technology}${versionText}`;
 53 |         
 54 |         const industryText = industry ? ` in the ${industry} industry` : "";
 55 |         const complianceText = compliance_frameworks.length > 0 ? ` considering ${compliance_frameworks.join(', ')} compliance` : "";
 56 |         const threatText = threat_model ? ` with focus on ${threat_model}` : "";
 57 |         
 58 |         const contextText = `${security_context}${industryText}${complianceText}${threatText}`;
 59 |         
 60 |         const systemInstructionText = `You are SecurityAdvisorGPT, an elite cybersecurity expert specialized in providing detailed, actionable security guidance for specific technologies. Your task is to provide comprehensive security best practices for ${techStack} specifically focused on ${contextText}. You must base your recommendations EXCLUSIVELY on information found through web search of authoritative security documentation, standards, and best practices.
 61 | 
 62 | SEARCH METHODOLOGY - EXECUTE IN THIS EXACT ORDER:
 63 | 1. FIRST search for: "${techStack} ${security_context} security best practices"
 64 | 2. THEN search for: "${techStack} security guide"
 65 | 3. THEN search for: "${security_context} OWASP guidelines"
 66 | 4. THEN search for: "${techStack} common vulnerabilities"
 67 | 5. THEN search for: "${techStack} security checklist"
 68 | ${industry ? `6. THEN search for: "${industry} ${security_context} security requirements"` : ""}
 69 | ${compliance_frameworks.length > 0 ? `7. THEN search for: "${techStack} ${compliance_frameworks.join(' ')} compliance"` : ""}
 70 | ${threat_model ? `8. THEN search for: "${techStack} protection against ${threat_model}"` : ""}
 71 | 9. FINALLY search for: "${techStack} security code examples"
 72 | 
 73 | DOCUMENTATION SOURCE PRIORITIZATION (in strict order):
 74 | 1. Official security documentation from technology creators
 75 | 2. OWASP (Open Web Application Security Project) guidelines and cheat sheets
 76 | 3. National security agencies' guidelines (NIST, CISA, NCSC, etc.)
 77 | 4. Industry-specific security standards organizations
 78 | 5. Major cloud provider security best practices (AWS, Azure, GCP)
 79 | 6. Recognized security frameworks (CIS, ISO 27001, etc.)
 80 | 7. Security blogs from recognized security researchers
 81 | 8. Academic security research papers
 82 | 
 83 | RECOMMENDATION REQUIREMENTS:
 84 | 1. COMPREHENSIVE SECURITY GUIDANCE:
 85 |    a. Provide detailed recommendations covering all aspects of ${security_context} for ${techStack}
 86 |    b. Include both high-level architectural guidance and specific implementation details
 87 |    c. Address prevention, detection, and response aspects
 88 |    d. Consider the full security lifecycle
 89 |    e. Include configuration hardening guidelines
 90 | 
 91 | 2. EVIDENCE-BASED RECOMMENDATIONS:
 92 |    a. Cite specific sections of official documentation or security standards
 93 |    b. Reference CVEs or security advisories when relevant
 94 |    c. Include security benchmark data when available
 95 |    d. Explain the security principles behind each recommendation
 96 |    e. Acknowledge security trade-offs
 97 | 
 98 | 3. ACTIONABLE IMPLEMENTATION GUIDANCE:
 99 |    a. Provide specific, ready-to-use code examples for each major recommendation
100 |    b. Include configuration snippets with secure settings
101 |    c. Provide step-by-step implementation instructions
102 |    d. Include testing/verification procedures for each security control
103 |    e. Suggest security libraries and tools with specific versions
104 | 
105 | 4. THREAT-AWARE CONTEXT:
106 |    a. Explain specific threats addressed by each recommendation
107 |    b. Include attack vectors and exploitation techniques
108 |    c. Provide risk ratings for different vulnerabilities
109 |    d. Explain attack scenarios and potential impacts
110 |    e. Consider both external and internal threat actors
111 | 
112 | RESPONSE STRUCTURE:
113 | 1. Begin with an "Executive Summary" providing a high-level security assessment and key recommendations
114 | 2. Include a "Security Risk Overview" section outlining the threat landscape for ${techStack} regarding ${security_context}
115 | 3. Provide a "Security Controls Checklist" with prioritized security measures
116 | 4. For EACH security control:
117 |    a. Detailed description and security rationale
118 |    b. Specific implementation with code examples
119 |    c. Configuration guidance
120 |    d. Testing/verification procedures
121 |    e. References to authoritative sources
122 | 5. Include a "Security Monitoring and Incident Response" section
123 | 6. Provide a "Security Resources" section with tools and further reading
124 | 
125 | CRITICAL REQUIREMENTS:
126 | 1. NEVER recommend deprecated or insecure practices, even if they appear in older documentation
127 | 2. ALWAYS specify secure versions of libraries and dependencies
128 | 3. NEVER provide incomplete security controls that could create a false sense of security
129 | 4. ALWAYS consider the specific version of the technology when making recommendations
130 | 5. NEVER oversimplify complex security controls
131 | 6. ALWAYS provide context-specific guidance, not generic security advice
132 | 7. NEVER recommend security through obscurity as a primary defense
133 | 
134 | ${industry ? `INDUSTRY-SPECIFIC REQUIREMENTS:
135 | 1. Address specific ${industry} security requirements and regulations
136 | 2. Consider unique threat models relevant to the ${industry} industry
137 | 3. Include industry-specific security standards and frameworks
138 | 4. Address data sensitivity levels common in ${industry}
139 | 5. Consider industry-specific compliance requirements` : ""}
140 | 
141 | ${compliance_frameworks.length > 0 ? `COMPLIANCE FRAMEWORK REQUIREMENTS:
142 | 1. Map security controls to specific requirements in ${compliance_frameworks.join(', ')}
143 | 2. Include compliance-specific documentation recommendations
144 | 3. Address audit and evidence collection needs
145 | 4. Consider specific technical controls required by these frameworks
146 | 5. Address compliance reporting and monitoring requirements` : ""}
147 | 
148 | ${threat_model ? `THREAT MODEL SPECIFIC REQUIREMENTS:
149 | 1. Focus defenses on protecting against ${threat_model}
150 | 2. Include specific countermeasures for this attack vector
151 | 3. Provide detection mechanisms for this threat
152 | 4. Include incident response procedures specific to this threat
153 | 5. Consider evolving techniques used in this attack vector` : ""}
154 | 
155 | Your recommendations must be technically precise, evidence-based, and immediately implementable. Focus on providing comprehensive security guidance that balances security effectiveness, implementation complexity, and operational impact.`;
156 | 
157 |         const userQueryText = `Provide comprehensive security best practices for ${techStack} specifically focused on ${contextText}.
158 | 
159 | Search for authoritative security documentation, standards, and best practices from sources like:
160 | - Official ${technology} security documentation
161 | - OWASP guidelines and cheat sheets
162 | - Industry security standards
163 | - Recognized security frameworks
164 | - CVEs and security advisories
165 | 
166 | For each security recommendation:
167 | 1. Explain the specific security risk or threat
168 | 2. Provide detailed implementation guidance with code examples
169 | 3. Include configuration settings and parameters
170 | 4. Suggest testing/verification procedures
171 | 5. Reference authoritative sources
172 | 
173 | Structure your response with:
174 | - Executive summary of key security recommendations
175 | - Security risk overview for ${techStack} regarding ${security_context}
176 | - Comprehensive security controls checklist
177 | - Detailed implementation guidance for each control
178 | - Security monitoring and incident response guidance
179 | - Security resources and tools
180 | 
181 | Ensure all recommendations are specific to ${techStack}, technically accurate, and immediately implementable. Prioritize recommendations based on security impact and implementation complexity.`;
182 | 
183 |         return {
184 |             systemInstructionText,
185 |             userQueryText,
186 |             useWebSearch: true,
187 |             enableFunctionCalling: false
188 |         };
189 |     }
190 | };
```
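
For illustration, a minimal sketch of exercising this tool's `buildPrompt` in isolation. The export name is assumed to follow the naming pattern of the other tool files (e.g., `securityBestPracticesAdvisorTool`), and the arguments are hypothetical; only `technology` and `security_context` are required by the schema.

```typescript
import { securityBestPracticesAdvisorTool } from "./tools/security_best_practices_advisor.js";

// Hypothetical arguments; optional fields fall back to the schema defaults.
const prompt = securityBestPracticesAdvisorTool.buildPrompt(
  {
    technology: "Node.js",
    technology_version: "20",
    security_context: "authentication",
    compliance_frameworks: ["PCI DSS"],
  },
  "gemini-2.5-pro-exp-03-25"
);

console.log(prompt.useWebSearch);                        // true – this tool always requests grounding
console.log(prompt.systemInstructionText.slice(0, 120)); // start of the generated system instruction
```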

--------------------------------------------------------------------------------
/src/tools/explain_topic_with_docs.ts:
--------------------------------------------------------------------------------

```typescript
  1 | import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
  2 | import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
  3 | 
  4 | export const explainTopicWithDocsTool: ToolDefinition = {
  5 |     name: "explain_topic_with_docs",
  6 |     description: `Provides a detailed explanation for a query about a specific software topic by synthesizing information primarily from official documentation found via web search. Focuses on comprehensive answers, context, and adherence to documented details. Uses the configured Vertex AI model (${modelIdPlaceholder}) with Google Search. Requires 'topic' and 'query'.`,
  7 |     inputSchema: {
  8 |         type: "object",
  9 |         properties: {
 10 |             topic: { 
 11 |                 type: "string", 
 12 |                 description: "The software/library/framework topic (e.g., 'React Router', 'Python requests')." 
 13 |             }, 
 14 |             query: { 
 15 |                 type: "string", 
 16 |                 description: "The specific question to answer based on the documentation." 
 17 |             } 
 18 |         }, 
 19 |         required: ["topic", "query"] 
 20 |     },
 21 |     buildPrompt: (args: any, modelId: string) => {
 22 |         const { topic, query } = args;
 23 |         if (typeof topic !== "string" || !topic || typeof query !== "string" || !query) 
 24 |             throw new McpError(ErrorCode.InvalidParams, "Missing 'topic' or 'query'.");
 25 |         
 26 |         const systemInstructionText = `You are an AI assistant specialized in answering complex technical and debugging questions by synthesizing information EXCLUSIVELY from official documentation across multiple technology stacks. You are an EXPERT at distilling comprehensive documentation into actionable, precise solutions.
 27 | 
 28 | CRITICAL DOCUMENTATION REQUIREMENTS:
 29 | 1. YOU MUST TREAT YOUR PRE-EXISTING KNOWLEDGE AS POTENTIALLY OUTDATED AND INVALID.
 30 | 2. NEVER use commands, syntax, parameters, options, or functionality not explicitly documented in official sources.
 31 | 3. NEVER fill functional gaps in documentation with assumptions; explicitly state when documentation is incomplete.
 32 | 4. If documentation doesn't mention a feature or command, explicitly note this as a potential limitation.
 33 | 5. For multi-technology queries involving "${topic}", identify and review ALL official documentation for EACH component technology.
 34 | 6. PRIORITIZE recent documentation over older sources when version information is available.
 35 | 7. For each technology, specifically check version compatibility matrices when available and note version-specific behaviors.
 36 | 
 37 | TECHNICAL DEBUGGING EXCELLENCE:
 38 | 1. Structure your root cause analysis into three clear sections: SYMPTOMS (observed behavior), POTENTIAL CAUSES (documented mechanisms), and EVIDENCE (documentation references supporting each cause).
 39 | 2. For debugging queries, explicitly compare behavior across different environments, platforms, or technology stacks using side-by-side comparisons.
 40 | 3. When analyzing error messages, connect them precisely to documented error states, exceptions, or limitations, using direct quotes from documentation where possible.
 41 | 4. Pay special attention to environment-specific (cloud, container, serverless, mobile) configurations that may differ between platforms.
 42 | 5. Identify undocumented edge cases where multiple technologies interact based ONLY on documented behaviors of each component.
 43 | 6. For performance issues, focus on documented bottlenecks, scaling limits, and optimization techniques with concrete metrics when available.
 44 | 7. Provide diagnostic steps in order of likelihood based on documented failure modes, not personal opinion.
 45 | 8. For each major issue, provide BOTH diagnostic steps AND verification steps to confirm the diagnosis.
 46 | 
 47 | STRUCTURED KNOWLEDGE SYNTHESIS:
 48 | 1. When answering "${query}", triangulate between multiple official documentation sources before making conclusions.
 49 | 2. For areas where documentation is limited or incomplete, EXPLICITLY identify this as a documentation gap rather than guessing.
 50 | 3. Structure multi-technology responses to clearly delineate where different documentation sources begin and end.
 51 | 4. Distinguish between guaranteed documented behaviors and potential implementation-dependent behaviors.
 52 | 5. Explicitly identify when a technology's documentation is silent on a specific integration scenario with another technology.
 53 | 6. Provide a confidence assessment for each major conclusion based on documentation completeness and specificity.
 54 | 7. When documentation is insufficient, provide fallback recommendations based ONLY on fundamental principles documented for each technology.
 55 | 8. For complex interactions, include a "Boundary of Documentation" section that explicitly states where documented behavior ends and implementation-specific behavior begins.
 56 | 
 57 | CODE EXAMPLES AND IMPLEMENTATION:
 58 | 1. ALWAYS provide concrete, executable code examples that directly apply to the user's scenario, even if you need to adapt documented patterns.
 59 | 2. Include at least ONE complete, self-contained code example for the primary solution, with line-by-line explanations.
 60 | 3. ANY code examples MUST be exactly as shown in documentation OR clearly labeled as a documented pattern applied to user's scenario.
 61 | 4. When providing code examples, include complete error handling based on documented failure modes.
 62 | 5. For environment-specific configurations (Docker, Kubernetes, cloud platforms), ensure settings reflect documented best practices.
 63 | 6. When documentation shows multiple implementation approaches, present ALL relevant options with their documented trade-offs in a comparison table.
 64 | 7. Include BOTH minimal working examples AND more complete implementations when documentation provides both.
 65 | 8. For code fixes, clearly distinguish between guaranteed solutions (explicitly documented) vs. potential solutions (based on documented patterns).
 66 | 9. Provide both EXAMPLES (what to do) and ANTI-EXAMPLES (what NOT to do) when documentation identifies common pitfalls.
 67 | 
 68 | VISUAL AND STRUCTURED ELEMENTS:
 69 | 1. When explaining complex interactions between systems, include a text-based sequential diagram showing the flow of data or control.
 70 | 2. For complex state transitions or algorithms, provide a step-by-step flowchart using ASCII/Unicode characters.
 71 | 3. Use comparative tables for ANY situation with 3+ options or approaches to compare.
 72 | 4. Structure all lists of options, configurations, or parameters in a consistent format with bold headers and clear explanations.
 73 | 5. For performance comparisons, include a metrics table showing documented performance characteristics.
 74 | 
 75 | PRACTICAL SOLUTION FOCUS:
 76 | 1. Answer the following query based on official documentation: "${query}"
 77 | 2. After explaining the issue based on documentation, ALWAYS provide actionable troubleshooting steps in order of priority.
 78 | 3. Clearly connect theoretical documentation concepts to practical implementation steps that address the specific scenario.
 79 | 4. Explicitly note when official workarounds exist for documented limitations, bugs, or edge cases.
 80 | 5. When possible, suggest diagnostic logging, testing approaches, or verification methods based on documented debugging techniques.
 81 | 6. Include configuration examples specific to the user's environment (Docker, Kubernetes, cloud platform, etc.) when documentation provides them.
 82 | 7. Present a clear trade-off analysis for each major decision point, comparing factors like performance, maintainability, scalability, and complexity.
 83 | 8. For complex solutions, provide a phased implementation approach with clear milestones.
 84 | 
 85 | FORMAT AND CITATION REQUIREMENTS:
 86 | 1. Begin with a concise executive summary stating whether documentation fully addresses the query, partially addresses it with gaps, or doesn't address it at all.
 87 | 2. Structure complex answers with clear hierarchical headers showing the relationship between different concepts.
 88 | 3. Use comparative tables when contrasting behaviors across environments, versions, or technology stacks.
 89 | 4. Include inline numbered citations [1] tied to the comprehensive reference list at the end.
 90 | 5. For each claim or recommendation, include the specific documentation source with version/date when available.
 91 | 6. In the "Documentation References" section, group sources by technology and include ALL consulted sources, even those that didn't directly contribute to the answer.
 92 | 7. Provide the COMPLETE response in a single comprehensive answer, fully addressing all aspects of the query.`;
 93 | 
 94 |         return {
 95 |             systemInstructionText: systemInstructionText,
 96 |             userQueryText: `Thoroughly review ALL official documentation for the technologies in "${topic}". This appears to be a complex debugging scenario involving multiple technology stacks. Search for documentation on each component technology and their interactions. Pay particular attention to environment-specific configurations, error patterns, and cross-technology integration points.
 97 | 
 98 | For debugging scenarios, examine:
 99 | 1. Official documentation for each technology mentioned, including API references, developer guides, and conceptual documentation
100 | 2. Official troubleshooting guides, error references, and common issues sections
101 | 3. Release notes mentioning known issues, breaking changes, or compatibility requirements
102 | 4. Official configuration examples specific to the described environment or integration scenario
103 | 5. Any officially documented edge cases, limitations, or performance considerations
104 | 6. Version compatibility matrices, deployment-specific documentation, and operation guides
105 | 7. Official community discussions or FAQ sections ONLY if they are part of the official documentation
106 | 
107 | When synthesizing information:
108 | 1. FIRST understand each technology individually through its documentation
109 | 2. THEN examine SPECIFIC integration points between technologies as documented
110 | 3. FINALLY identify where documentation addresses or fails to address the specific issue
111 | 
112 | Answer ONLY based on information explicitly found in official documentation, with no additions from your prior knowledge. For any part not covered in documentation, explicitly identify the gap. Provide comprehensive troubleshooting steps based on documented patterns.
113 | 
114 | Provide your COMPLETE response for this query: ${query}`,
115 |             useWebSearch: true,
116 |             enableFunctionCalling: false
117 |         };
118 |     }
119 | };
```
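
A hedged sketch of how the prompt pieces returned above could be assembled into the `CombinedContent` array that `callGenerativeAI` (see `src/vertex_ai_client.ts` below) expects. The actual assembly in `src/index.ts` is not shown on this page, so sending the system instruction and the user query as two user turns is an assumption, not the server's confirmed wiring.

```typescript
import { explainTopicWithDocsTool } from "./tools/explain_topic_with_docs.js";
import type { CombinedContent } from "./vertex_ai_client.js";

const { systemInstructionText, userQueryText } = explainTopicWithDocsTool.buildPrompt(
  { topic: "React Router", query: "How do nested routes resolve loaders?" },
  "gemini-2.5-pro-exp-03-25"
);

// Assumed pattern: pass both prompt parts as plain user-role content blocks.
const contents: CombinedContent[] = [
  { role: "user", parts: [{ text: systemInstructionText }] },
  { role: "user", parts: [{ text: userQueryText }] },
];
```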

--------------------------------------------------------------------------------
/src/vertex_ai_client.ts:
--------------------------------------------------------------------------------

```typescript
  1 | import {
  2 |   GoogleGenAI,
  3 |   HarmCategory,
  4 |   HarmBlockThreshold,
  5 |   type Content,
  6 |   type GenerationConfig,
  7 |   type SafetySetting,
  8 |   type FunctionDeclaration,
  9 |   type Tool
 10 | } from "@google/genai";
 11 | 
 12 | import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
 13 | // Import getAIConfig and original safety setting definitions from config
 14 | import { getAIConfig, vertexSafetySettings, geminiSafetySettings as configGeminiSafetySettings } from './config.js';
 15 | import { sleep } from './utils.js';
 16 | 
 17 | // --- Configuration and Client Initialization ---
 18 | const aiConfig = getAIConfig();
 19 | // Use correct client types
 20 | let ai: GoogleGenAI;
 21 | 
 22 | try {
 23 |     if (aiConfig.geminiApiKey) {
 24 |         ai = new GoogleGenAI({ apiKey: aiConfig.geminiApiKey });
 25 |     } else if (aiConfig.gcpProjectId && aiConfig.gcpLocation) {
 26 |         ai = new GoogleGenAI({
 27 |             vertexai: true,
 28 |             project: aiConfig.gcpProjectId,
 29 |             location: aiConfig.gcpLocation
 30 |         });
 31 |     } else {
 32 |         throw new Error("Missing Gemini API key or Vertex AI project/location configuration.");
 33 |     }
 34 |     console.log("Initialized GoogleGenAI with config:", aiConfig.modelId);
 35 | } catch (error: any) {
 36 |     console.error(`Error initializing GoogleGenAI:`, error.message);
 37 |     process.exit(1);
 38 | }
 39 | 
 40 | // Define a union type for Content
 41 | export type CombinedContent = Content;
 42 | 
 43 | // --- Unified AI Call Function ---
 44 | export async function callGenerativeAI(
 45 |     initialContents: CombinedContent[],
 46 |     tools: Tool[] | undefined
 47 | ): Promise<string> {
 48 | 
 49 |     const {
 50 |         provider,
 51 |         modelId,
 52 |         temperature,
 53 |         useStreaming,
 54 |         maxOutputTokens,
 55 |         maxRetries,
 56 |         retryDelayMs,
 57 |     } = aiConfig;
 58 | 
 59 |     const isGroundingRequested = tools?.some(tool => (tool as any).googleSearchRetrieval);
 60 | 
 61 |     let filteredToolsForVertex = tools;
 62 |     let adaptedToolsForGemini: FunctionDeclaration[] | undefined = undefined;
 63 | 
 64 |     if (provider === 'gemini' && tools) {
 65 |         const nonSearchTools = tools.filter(tool => !(tool as any).googleSearchRetrieval);
 66 |         if (nonSearchTools.length > 0) {
 67 |              console.warn(`Gemini Provider: Function calling tools detected, but their adaptation for the @google/genai client is not yet implemented.`);
 68 |         } else {
 69 |              console.log(`Gemini Provider: Explicit googleSearchRetrieval tool filtered out (search handled implicitly or by model).`);
 70 |         }
 71 |         filteredToolsForVertex = undefined;
 72 |         adaptedToolsForGemini = undefined; // Keep undefined for now
 73 | 
 74 |     } else if (provider === 'vertex' && isGroundingRequested && tools && tools.length > 1) {
 75 |         console.warn("Vertex Provider: Grounding requested with other tools; keeping only search.");
 76 |         filteredToolsForVertex = tools.filter(tool => (tool as any).googleSearchRetrieval);
 77 |     }
 78 | 
 79 | 
 80 |     // The unified GoogleGenAI client (`ai`) was initialized above; both providers
 81 |     // are served through ai.models.generateContent / generateContentStream, so no
 82 |     // separate per-provider model instance is needed here.
 83 | 
 84 |     // --- Prepare Request Parameters ---
 85 |     const commonGenConfig: GenerationConfig = { temperature, maxOutputTokens };
 86 |     const resolvedSafetySettings: SafetySetting[] = aiConfig.provider === "vertex" ? vertexSafetySettings : configGeminiSafetySettings;
 87 |     // Note: these values are resolved here but not yet forwarded to ai.models.* below, which currently only receives `tools` in its config.
 88 | 
 89 | 
 90 |     // --- Execute Request with Retries ---
 91 |     for (let attempt = 0; attempt <= maxRetries; attempt++) {
 92 |         try {
 93 |             // Simplified log line without the problematic length check
 94 |             console.error(`[${new Date().toISOString()}] Calling ${provider} AI (${modelId}, temp: ${temperature}, grounding: ${isGroundingRequested}, tools(Vertex): ${filteredToolsForVertex?.length ?? 0}, stream: ${useStreaming}, attempt: ${attempt + 1})`);
 95 | 
 96 |             let responseText: string | undefined;
 97 | 
 98 |             if (useStreaming) {
 99 | 
100 |                 const stream = await ai.models.generateContentStream({
101 |                     model: modelId,
102 |                     contents: initialContents,
103 |                     ...(tools && tools.length > 0
104 |                         ? { config: { tools } }
105 |                         : {})
106 |                 });
107 |                 let accumulatedText = "";
108 | 
109 |                 let lastChunk: any = null;
110 | 
111 |                 for await (const chunk of stream) {
112 |                     lastChunk = chunk;
113 |                     try {
114 |                         if (chunk.text) accumulatedText += chunk.text;
115 |                     } catch (e: any) {
116 |                         console.warn("Non-text or error chunk encountered in stream:", e.message);
117 |                         if (e.message?.toLowerCase().includes('safety')) {
118 |                             throw new Error(`Content generation blocked during stream. Reason: ${e.message}`);
119 |                         }
120 |                     }
121 |                 }
122 | 
123 |                 // Check block/safety reasons on lastChunk if available
124 |                 if (lastChunk) {
125 |                     const blockReason = lastChunk?.promptFeedback?.blockReason;
126 |                     if (blockReason) {
127 |                         throw new Error(`Content generation blocked. Aggregated Reason: ${blockReason}`);
128 |                     }
129 |                     const finishReason = lastChunk?.candidates?.[0]?.finishReason;
130 |                     if (finishReason === 'SAFETY') {
131 |                         throw new Error(`Content generation blocked. Aggregated Finish Reason: SAFETY`);
132 |                     }
133 |                 }
134 | 
135 |                 responseText = accumulatedText;
136 | 
137 |                 if (typeof responseText !== 'string' || !responseText) {
138 |                     console.error(`Empty response received from AI stream.`);
139 |                     throw new Error(`Received empty or non-text response from AI stream.`);
140 |                 }
141 | 
142 |                 console.error(`[${new Date().toISOString()}] Finished processing stream from AI.`);
143 |             } else { // Non-streaming
144 |                 let result: any;
145 |                 try {
146 |                     result = await ai.models.generateContent({
147 |                         model: modelId,
148 |                         contents: initialContents,
149 |                         ...(tools && tools.length > 0
150 |                             ? { config: { tools } }
151 |                             : {})
152 |                     });
153 |                 } catch (e: any) {
154 |                     console.error("Error during non-streaming call:", e.message);
155 |                     if (e.message?.toLowerCase().includes('safety') || e.message?.toLowerCase().includes('prompt blocked') || (e as any).status === 'BLOCKED') {
156 |                         throw new Error(`Content generation blocked. Call Reason: ${e.message}`);
157 |                     }
158 |                     throw e;
159 |                 }
160 |                 console.error(`[${new Date().toISOString()}] Received non-streaming response from AI.`);
161 |                 try {
162 |                     responseText = result.text;
163 |                 } catch (e) {
164 |                     console.warn("Could not extract text from non-streaming response:", e);
165 |                 }
166 |                 const blockReason = result?.promptFeedback?.blockReason;
167 |                 if (blockReason) {
168 |                     throw new Error(`Content generation blocked. Response Reason: ${blockReason}`);
169 |                 }
170 |                 const finishReason = result?.candidates?.[0]?.finishReason;
171 |                 if (finishReason === 'SAFETY') {
172 |                     throw new Error(`Content generation blocked. Response Finish Reason: SAFETY`);
173 |                 }
174 | 
175 |                 if (typeof responseText !== 'string' || !responseText) {
176 |                     console.error(`Unexpected non-streaming response structure:`, JSON.stringify(result, null, 2));
177 |                     throw new Error(`Failed to extract valid text response from AI (non-streaming).`);
178 |                 }
179 |             }
180 | 
181 |             // --- Return Text ---
182 |             if (typeof responseText === 'string') {
183 |                  return responseText;
184 |             } else {
185 |                  throw new Error(`Invalid state: No valid text response obtained from ${provider} AI.`);
186 |             }
187 | 
188 |         } catch (error: any) {
189 |              console.error(`[${new Date().toISOString()}] Error details (attempt ${attempt + 1}):`, error);
190 |              const errorMessageString = String(error.message || error || '').toLowerCase();
191 |              const isBlockingError = errorMessageString.includes('blocked') || errorMessageString.includes('safety');
192 |              const isRetryable = !isBlockingError && (
193 |                  errorMessageString.includes('429') ||
194 |                  errorMessageString.includes('500') ||
195 |                  errorMessageString.includes('503') ||
196 |                  errorMessageString.includes('deadline_exceeded') ||
197 |                  errorMessageString.includes('internal') ||
198 |                  errorMessageString.includes('network error') ||
199 |                  errorMessageString.includes('socket hang up') ||
200 |                  errorMessageString.includes('unavailable') ||
201 |                  errorMessageString.includes('could not connect')
202 |              );
203 | 
204 |             if (isRetryable && attempt < maxRetries) {
205 |                 const jitter = Math.random() * 500;
206 |                 const delay = (retryDelayMs * Math.pow(2, attempt)) + jitter;
207 |                 console.error(`[${new Date().toISOString()}] Retrying in ${delay.toFixed(0)}ms...`);
208 |                 await sleep(delay);
209 |                 continue;
210 |             } else {
211 |                  let finalErrorMessage = `${provider} AI API error: ${error.message || "Unknown error"}`;
212 |                  if (isBlockingError) {
213 |                       const match = error.message?.match(/(Reason|Finish Reason):\s*(.*)/i);
214 |                        if (match?.[2]) {
215 |                           finalErrorMessage = `Content generation blocked by ${provider} safety filters. Reason: ${match[2]}`;
216 |                        } else {
217 |                           const geminiBlockMatch = error.message?.match(/prompt.*blocked.*\s*safety.*?setting/i);
218 |                            if (geminiBlockMatch) {
219 |                               finalErrorMessage = `Content generation blocked by Gemini safety filters.`;
220 |                            } else {
221 |                               finalErrorMessage = `Content generation blocked by ${provider} safety filters. (${error.message || 'No specific reason found'})`;
222 |                            }
223 |                        }
224 |                  } else if (errorMessageString.match(/\b(429|500|503|internal|unavailable)\b/)) {
225 |                      finalErrorMessage += ` (Status: ${errorMessageString.match(/\b(429|500|503|internal|unavailable)\b/)?.[0]})`;
226 |                  } else if (errorMessageString.includes('deadline_exceeded')) {
227 |                      finalErrorMessage = `${provider} AI API error: Operation timed out (deadline_exceeded).`;
228 |                  }
229 |                  console.error("Final error message:", finalErrorMessage);
230 |                  throw new McpError(ErrorCode.InternalError, finalErrorMessage);
231 |             }
232 |         }
233 |     } // End retry loop
234 | 
235 |     throw new McpError(ErrorCode.InternalError, `Max retries (${maxRetries + 1}) reached for ${provider} LLM call without success.`);
236 | }
```
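
A hedged usage sketch for `callGenerativeAI`. The tool object shape mirrors the `googleSearchRetrieval` property the client checks for above; the exact tool objects built by the server's request handlers are not shown on this page, so treat this as an assumption.

```typescript
import { callGenerativeAI, type CombinedContent } from "./vertex_ai_client.js";
import type { Tool } from "@google/genai";

async function ask(question: string): Promise<string> {
  const contents: CombinedContent[] = [
    { role: "user", parts: [{ text: question }] },
  ];
  // Requesting grounding: the client inspects each tool for a `googleSearchRetrieval` key.
  const tools: Tool[] = [{ googleSearchRetrieval: {} } as Tool];
  return callGenerativeAI(contents, tools);
}

ask("What changed in Node.js 20 LTS?").then(console.log).catch(console.error);
```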

--------------------------------------------------------------------------------
/src/tools/generate_project_guidelines.ts:
--------------------------------------------------------------------------------

```typescript
  1 | import { McpError, ErrorCode } from "@modelcontextprotocol/sdk/types.js";
  2 | import { ToolDefinition, modelIdPlaceholder } from "./tool_definition.js";
  3 | 
  4 | export const generateProjectGuidelinesTool: ToolDefinition = {
  5 |     name: "generate_project_guidelines",
  6 |     description: `Generates a structured project guidelines document (e.g., Markdown) based on a specified list of technologies and versions (tech stack). Uses web search to find the latest official documentation, style guides, and best practices for each component and synthesizes them into actionable rules and recommendations. Uses the configured Vertex AI model (${modelIdPlaceholder}) with Google Search. Requires 'tech_stack'.`,
  7 |     inputSchema: {
  8 |         type: "object",
  9 |         properties: {
 10 |             tech_stack: {
 11 |                 type: "array",
 12 |                 items: { type: "string" },
 13 |                 description: "An array of strings specifying the project's technologies and versions (e.g., ['React 18.3', 'TypeScript 5.2', 'Node.js 20.10', 'Express 5.0', 'PostgreSQL 16.1'])."
 14 |             }
 15 |         },
 16 |         required: ["tech_stack"]
 17 |     },
 18 |     buildPrompt: (args: any, modelId: string) => {
 19 |         const { tech_stack } = args;
 20 |         if (!Array.isArray(tech_stack) || tech_stack.length === 0 || !tech_stack.every(item => typeof item === 'string' && item))
 21 |             throw new McpError(ErrorCode.InvalidParams, "Missing or invalid 'tech_stack' array.");
 22 | 
 23 |         const techStackString = tech_stack.join(', ');
 24 | 
 25 |         // Enhanced System Instruction for Guideline Generation
 26 |         const systemInstructionText = `You are an AI assistant acting as a Senior Enterprise Technical Architect and Lead Developer with 15+ years of experience. Your task is to generate an exceptionally comprehensive project guidelines document in Markdown format, tailored specifically to the provided technology stack: **${techStackString}**. You MUST synthesize information EXCLUSIVELY from the latest official documentation, widely accepted style guides, and authoritative best practice articles found via web search for the specified versions.
 27 | 
 28 | CRITICAL RESEARCH METHODOLOGY REQUIREMENTS:
 29 | 1. TREAT ALL PRE-EXISTING KNOWLEDGE AS POTENTIALLY OUTDATED. Base guidelines ONLY on information found via web search for the EXACT specified versions (${techStackString}).
 30 | 2. For EACH technology in the stack:
 31 |    a. First search for "[technology] [version] official documentation" (e.g., "React 18.3 official documentation")
 32 |    b. Then search for "[technology] [version] style guide" or "[technology] [version] best practices"
 33 |    c. Then search for "[technology] [version] release notes" to identify version-specific features
 34 |    d. Finally search for "[technology] [version] security advisories" and "[technology] [version] performance optimization"
 35 | 3. For EACH PAIR of technologies in the stack, search for specific integration guidelines (e.g., "TypeScript 5.2 with React 18.3 best practices")
 36 | 4. Prioritize sources in this order:
 37 |    a. Official documentation (e.g., reactjs.org, nodejs.org)
 38 |    b. Official GitHub repositories and their wikis/READMEs
 39 |    c. Widely-adopted style guides (e.g., Airbnb JavaScript Style Guide, Google's Java Style Guide)
 40 |    d. Technical blogs from the technology creators or major contributors
 41 |    e. Well-established tech companies' engineering blogs (e.g., Meta Engineering, Netflix Tech Blog)
 42 |    f. Reputable developer platforms (StackOverflow only for verified/high-voted answers)
 43 | 5. Explicitly note when authoritative guidance is missing for specific topics or version combinations.
 44 | 
 45 | COMPREHENSIVE DOCUMENT STRUCTURE REQUIREMENTS:
 46 | The document MUST include ALL of the following major sections with appropriate subsections:
 47 | 
 48 | 1. **Executive Summary**
 49 |    * One-paragraph high-level overview of the technology stack
 50 |    * Bullet points highlighting 3-5 most critical guidelines that span the entire stack
 51 | 
 52 | 2. **Technology Stack Overview**
 53 |    * Version-specific capabilities and limitations for each component
 54 |    * Expected technology lifecycle considerations (upcoming EOL dates, migration paths)
 55 |    * Compatibility matrix showing tested/verified version combinations
 56 |    * Diagram recommendation for visualizing the stack architecture
 57 | 
 58 | 3. **Development Environment Setup**
 59 |    * Required development tools and versions (IDEs, CLIs, extensions)
 60 |    * Recommended local environment configurations with exact version numbers
 61 |    * Docker/containerization standards if applicable
 62 |    * Local development workflow recommendations
 63 | 
 64 | 4. **Code Organization & Architecture**
 65 |    * Directory/folder structure standards
 66 |    * Architectural patterns specific to each technology (e.g., hooks patterns for React)
 67 |    * Module organization principles
 68 |    * State management approach
 69 |    * API design principles specific to the technology versions
 70 |    * Database schema design principles (if applicable)
 71 | 
 72 | 5. **Coding Standards** (language/framework-specific with explicit examples)
 73 |    * Naming conventions with clear examples showing right/wrong approaches
 74 |    * Formatting and linting configurations with tool-specific recommendations
 75 |    * Type definitions and type safety guidelines
 76 |    * Comments and documentation requirements with examples
 77 |    * File size/complexity limits with quantitative metrics
 78 | 
 79 | 6. **Version-Specific Implementations**
 80 |    * Feature usage guidance specifically for the stated versions
 81 |    * Deprecated features to avoid in these versions
 82 |    * Migration strategies from previous versions if applicable
 83 |    * Version-specific optimizations
 84 |    * Innovative patterns enabled by latest versions
 85 | 
 86 | 7. **Component Interaction Guidelines**
 87 |    * How each technology should integrate with others in the stack
 88 |    * Data transformation standards between layers
 89 |    * Communication protocols and patterns
 90 |    * Error handling and propagation between components
 91 | 
 92 | 8. **Security Best Practices**
 93 |    * Authentication and authorization patterns
 94 |    * Input validation and sanitization
 95 |    * OWASP security considerations specific to each technology
 96 |    * Dependency management and vulnerability scanning
 97 |    * Secrets management
 98 |    * Version-specific security concerns 
 99 | 
100 | 9. **Performance Optimization**
101 |    * Stack-specific performance metrics and benchmarks
102 |    * Version-specific performance features and optimizations
103 |    * Resource management (memory, connections, threads)
104 |    * Caching strategies tailored to the stack
105 |    * Load testing recommendations
106 | 
107 | 10. **Testing Strategy**
108 |     * Test pyramid implementation for this specific stack
109 |     * Recommended testing frameworks and tools with exact versions
110 |     * Unit testing standards with coverage expectations (specific percentages)
111 |     * Integration testing approach
112 |     * End-to-end testing methodology
113 |     * Performance testing guidelines
114 |     * Mock/stub implementation guidelines
115 | 
116 | 11. **Error Handling & Logging**
117 |     * Error categorization framework
118 |     * Logging standards and levels
119 |     * Monitoring integration recommendations
120 |     * Debugging best practices
121 |     * Observability considerations
122 | 
123 | 12. **Build & Deployment Pipeline**
124 |     * CI/CD tool recommendations
125 |     * Build process optimization
126 |     * Deployment strategies (e.g., blue-green, canary)
127 |     * Environment-specific configurations
128 |     * Release management process
129 | 
130 | 13. **Documentation Requirements**
131 |     * API documentation standards
132 |     * Technical documentation templates
133 |     * User documentation guidelines
134 |     * Knowledge transfer protocols
135 | 
136 | 14. **Common Pitfalls & Anti-patterns**
137 |     * Technology-specific anti-patterns with explicit examples
138 |     * Known bugs or issues in specified versions
139 |     * Legacy patterns to avoid
140 |     * Performance traps specific to this stack
141 | 
142 | 15. **Collaboration Workflows**
143 |     * Code review checklist tailored to the stack
144 |     * Pull request/merge request standards
145 |     * Branching strategy
146 |     * Communication protocols for technical discussions
147 | 
148 | 16. **Governance & Compliance**
149 |     * Code ownership model
150 |     * Technical debt management approach
151 |     * Accessibility compliance considerations
152 |     * Regulatory requirements affecting implementation (if applicable)
153 | 
154 | CRITICAL FORMATTING & CONTENT REQUIREMENTS:
155 | 
156 | 1. CODE EXAMPLES - For EVERY major guideline (not just a select few):
157 |    * Provide BOTH correct AND incorrect implementations side-by-side
158 |    * Include comments explaining WHY the guidance matters
159 |    * Ensure examples are complete enough to demonstrate the principle
160 |    * Use syntax highlighting appropriate to the language
161 |    * For complex patterns, show progressive implementation steps
162 | 
163 | 2. VISUAL ELEMENTS:
164 |    * Recommend specific diagrams that should be created (architecture diagrams, data flow diagrams)
165 |    * Use Markdown tables for compatibility matrices and feature comparisons
166 |    * Use clear section dividers for readability
167 | 
168 | 3. SPECIFICITY:
169 |    * ALL guidelines must be ACTIONABLE and CONCRETE
170 |    * Include quantitative metrics wherever possible (e.g., "Functions should not exceed 30 lines" instead of "Keep functions short")
171 |    * Specify exact tool versions and configuration options
172 |    * Avoid generic advice that applies to any technology stack
173 | 
174 | 4. CITATIONS:
175 |    * Include inline citations for EVERY significant guideline using format: [Source: URL]
176 |    * For critical security or architectural recommendations, cite multiple sources if available
177 |    * When citing version-specific features, link directly to release notes or version documentation
178 |    * If guidance conflicts between sources, note the conflict and explain your recommendation
179 | 
180 | 5. VERSION SPECIFICITY:
181 |    * Explicitly indicate which guidelines are version-specific vs. universal
182 |    * Note when a practice is specific to the combination of technologies in this stack
183 |    * Identify features that might change in upcoming version releases
184 |    * Include recommended update paths when applicable
185 | 
186 | OUTPUT FORMAT:
187 | - Start with a title: "# Comprehensive Project Guidelines for ${techStackString}"
188 | - Use Markdown headers (##, ###, ####) to structure sections and subsections logically
189 | - Use bulleted lists for individual guidelines
190 | - Use numbered lists for sequential procedures
191 | - Use code blocks with language specification for all code examples
192 | - Use tables for comparative information
193 | - Include a comprehensive table of contents
194 | - Use blockquotes to highlight critical warnings or notes
195 | - End with an "Appendix" section containing links to all cited resources
196 | - The entire output must be a single, coherent Markdown document that feels like it was crafted by an expert technical architect`;
197 | 
198 |         // Enhanced User Query for Guideline Generation
199 |         const userQueryText = `Generate an exceptionally detailed and comprehensive project guidelines document in Markdown format for a project using the following technology stack: **${techStackString}**.
200 | 
201 | Search for and synthesize information from the latest authoritative sources for each technology:
202 | 1. Official documentation for each exact version specified
203 | 2. Established style guides and best practices from technology creators
204 | 3. Security advisories and performance optimization guidance
205 | 4. Integration patterns between the specific technologies in this stack
206 | 
207 | Your document must comprehensively cover:
208 | - Development environment setup with exact tool versions
209 | - Code organization and architectural patterns specific to these versions
210 | - Detailed coding standards with clear examples of both correct and incorrect approaches
211 | - Version-specific implementation details highlighting new features and deprecations
212 | - Component interaction guidelines showing how these technologies should work together
213 | - Comprehensive security best practices addressing OWASP concerns
214 | - Performance optimization techniques validated for these specific versions
215 | - Testing strategy with specific framework recommendations and coverage expectations
216 | - Error handling patterns and logging standards
217 | - Build and deployment pipeline recommendations
218 | - Documentation requirements and standards
219 | - Common pitfalls and anti-patterns with explicit examples
220 | - Team collaboration workflows tailored to this technology stack
221 | - Governance and compliance considerations
222 | 
223 | Ensure each guideline is actionable, specific, and supported by code examples wherever applicable. Cite authoritative sources for all key recommendations. The document should be structured with clear markdown formatting including headers, lists, code blocks with syntax highlighting, tables, and a comprehensive table of contents.`;
224 | 
225 |         return {
226 |             systemInstructionText: systemInstructionText,
227 |             userQueryText: userQueryText,
228 |             useWebSearch: true,
229 |             enableFunctionCalling: false
230 |         };
231 |     }
232 | };
```
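
A minimal, hypothetical invocation of `generateProjectGuidelinesTool.buildPrompt`, mirroring the validation it performs on `tech_stack`:

```typescript
import { generateProjectGuidelinesTool } from "./tools/generate_project_guidelines.js";

const { userQueryText, useWebSearch } = generateProjectGuidelinesTool.buildPrompt(
  { tech_stack: ["React 18.3", "TypeScript 5.2", "Node.js 20.10"] },
  "gemini-2.5-pro-exp-03-25"
);

console.log(useWebSearch);                 // true – guidelines are always grounded in web search
console.log(userQueryText.split("\n")[0]); // first line of the generated user query

// An empty array (or any non-string entry) is rejected with an MCP InvalidParams error:
// generateProjectGuidelinesTool.buildPrompt({ tech_stack: [] }, "model"); // throws McpError
```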