# Directory Structure

```
├── .env.example
├── .gitignore
├── Dockerfile
├── examples
│   ├── claude-commands
│   │   ├── review-branch-custom-gemini.md
│   │   ├── review-branch-develop-openai.md
│   │   ├── review-branch-main-claude.md
│   │   ├── review-head-claude.md
│   │   ├── review-head-gemini.md
│   │   ├── review-head-openai.md
│   │   ├── review-staged-claude.md
│   │   ├── review-staged-gemini.md
│   │   ├── review-staged-maintainability-gemini.md
│   │   ├── review-staged-openai.md
│   │   ├── review-staged-performance-openai.md
│   │   └── review-staged-security-claude.md
│   ├── cursor-rules
│   │   └── project.mdc
│   └── windsurf-workflows
│       ├── review-branch.md
│       ├── review-head.md
│       ├── review-security.md
│       └── review-staged.md
├── LICENSE
├── package-lock.json
├── package.json
├── README.md
├── smithery.yaml
├── src
│   ├── config.ts
│   ├── git-utils.ts
│   ├── index.ts
│   └── llm-service.ts
├── tests
│   ├── config.test.ts
│   └── git-utils.test.ts
├── tsconfig.json
├── tsconfig.test.json
└── vitest.config.ts
```

# Files

--------------------------------------------------------------------------------
/.env.example:
--------------------------------------------------------------------------------

```
1 | # Provider API Keys
2 | OPENAI_API_KEY=your-openai-api-key
3 | ANTHROPIC_API_KEY=your-anthropic-api-key
4 | GOOGLE_API_KEY=your-google-api-key
5 | 
```
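The README below states that these keys are read from the environment, with a `.env` file in the project root as a fallback. As an illustration of that lookup order only (the package's actual implementation in `src/config.ts` is not reproduced in this excerpt and likely uses `dotenv`), a plain-TypeScript sketch:

```typescript
// Sketch of provider key resolution, assuming the order the README
// describes: process.env first, then a .env file in the current working
// directory. Minimal hand-rolled parser for illustration only.
import * as fs from "node:fs";
import * as path from "node:path";

const KEY_BY_PROVIDER: Record<string, string> = {
  google: "GOOGLE_API_KEY",
  openai: "OPENAI_API_KEY",
  anthropic: "ANTHROPIC_API_KEY",
};

export function parseDotEnv(text: string): Record<string, string> {
  const vars: Record<string, string> = {};
  for (const line of text.split("\n")) {
    if (line.trimStart().startsWith("#")) continue; // skip comments
    const m = line.match(/^\s*([A-Za-z_][A-Za-z0-9_]*)\s*=\s*(.*)$/);
    if (m) vars[m[1]] = m[2].trim().replace(/^["']|["']$/g, "");
  }
  return vars;
}

export function resolveApiKey(provider: string, cwd = process.cwd()): string | undefined {
  const name = KEY_BY_PROVIDER[provider];
  if (!name) return undefined;
  if (process.env[name]) return process.env[name]; // environment wins
  const envPath = path.join(cwd, ".env");
  if (fs.existsSync(envPath)) {
    return parseDotEnv(fs.readFileSync(envPath, "utf8"))[name];
  }
  return undefined;
}
```

This mirrors why the README recommends launching the server from the project root: the `.env` fallback is resolved relative to the current working directory.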

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
 1 | # dev
 2 | .yarn/
 3 | !.yarn/releases
 4 | .vscode/*
 5 | !.vscode/launch.json
 6 | !.vscode/*.code-snippets
 7 | .idea/workspace.xml
 8 | .idea/usage.statistics.xml
 9 | .idea/shelf
10 | 
11 | # deps
12 | node_modules/
13 | 
14 | # env
15 | .env
16 | .env.production
17 | 
18 | # logs
19 | logs/
20 | *.log
21 | npm-debug.log*
22 | yarn-debug.log*
23 | yarn-error.log*
24 | pnpm-debug.log*
25 | lerna-debug.log*
26 | 
27 | dist
28 | 
29 | # misc
30 | .DS_Store
31 | 
32 | **/.claude/
33 | requirements.md
34 | ai_docs
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
  1 | # @vibesnipe/code-review-mcp
  2 | 
  3 | **Version: 1.0.0**
  4 | 
  5 | An MCP (Model Context Protocol) server that provides a powerful tool to perform code reviews using various Large Language Models (LLMs). This server is designed to be seamlessly integrated with AI coding assistants like Anthropic's Claude Code, Cursor, Windsurf, or other MCP-compatible clients.
  6 | 
  7 | > **Note:** This tool was initially created for Claude Code but has since been expanded to support other AI IDEs and Claude Desktop. See the integration guides for [Claude Code](#integration-with-claude-code), [Cursor](#cursor-integration), and [Windsurf](#windsurf-integration).
  8 | 
  9 | It analyzes `git diff` output for staged changes, differences from HEAD, or differences between branches, providing contextualized reviews based on your task description and project details.
 10 | 
 11 | ## Features
 12 | 
 13 | -   Reviews git diffs (staged changes, current HEAD, branch differences).
 14 | -   Integrates with Google Gemini, OpenAI, and Anthropic models through the Vercel AI SDK.
 15 | -   Allows specification of task description, review focus, and overall project context for tailored reviews.
 16 | -   Outputs reviews in clear, actionable markdown format.
 17 | -   Designed to be run from the root of any Git repository you wish to analyze.
 18 | -   Easily installable and runnable via `npx` for immediate use.
 19 | 
 20 | ## Compatibility
 21 | 
 22 | -   **Node.js**: Version 18 or higher is required.
 23 | -   **Operating Systems**: Works on Windows, macOS, and Linux.
 24 | -   **Git**: Version 2.20.0 or higher recommended.
 25 | 
 26 | ## Prerequisites
 27 | 
 28 | -   **Node.js**: Version 18 or higher is required.
 29 | -   **Git**: Must be installed and accessible in your system's PATH. The server executes `git` commands.
 30 | -   **API Keys for LLMs**: You need API keys for the LLM providers you intend to use. These should be set as environment variables:
 31 |     -   `GOOGLE_API_KEY` for Google models.
 32 |     -   `OPENAI_API_KEY` for OpenAI models.
 33 |     -   `ANTHROPIC_API_KEY` for Anthropic models.
 34 |     These can be set globally in your environment or, conveniently, in a `.env` file placed in the root of the project you are currently reviewing. The server will automatically try to load it.
 35 | 
 36 | ## Installation & Usage
 37 | 
 38 | The primary way to use this server is with `npx`, which ensures you're always using the latest version without needing a global installation.
 39 | 
 40 | ### Recommended: Using with `npx`
 41 | 
 42 | 1.  **Navigate to Your Project:**
 43 |     Open your terminal and change to the root directory of the Git repository you want to review.
 44 |     ```bash
 45 |     cd /path/to/your-git-project
 46 |     ```
 47 | 
 48 | 2.  **Run the MCP Server:**
 49 |     Execute the following command:
 50 |     ```bash
 51 |     npx -y @vibesnipe/code-review-mcp
 52 |     ```
 53 | 
 54 |     This command will download (if not already cached) and run the `@vibesnipe/code-review-mcp` server. You should see output in your terminal similar to:
 55 |     `[MCP Server] Code Reviewer MCP Server is running via stdio and connected to transport.`
 56 |     The server is now running and waiting for an MCP client (like Claude Code, Cursor, or Windsurf) to connect.
 57 | 
 58 | 
 59 | 
 60 | ### Installing via Smithery
 61 | 
 62 | To install code-review-mcp for Claude Desktop automatically via [Smithery](https://smithery.ai/server/@praneybehl/code-review-mcp):
 63 | 
 64 | ```bash
 65 | npx -y @smithery/cli install @praneybehl/code-review-mcp --client claude
 66 | ```
 67 | 
 68 | 
 69 | ## Integration with Claude Code
 70 | 
 71 | Once the `@vibesnipe/code-review-mcp` server is running (ideally via `npx` from your project's root):
 72 | 
 73 | 1.  **Add as an MCP Server in Claude Code:**
 74 |     In a separate terminal where Claude Code is running (or will be run), configure it to use this MCP server.
 75 |     The command Claude Code uses to launch the server is `code-review-mcp` (if installed globally and on your PATH), or the `npx ...` command if you prefer Claude Code to always fetch the latest version.
 76 | 
 77 |     To add it to Claude Code:
 78 |     ```bash
 79 |     claude mcp add code-reviewer -s <user|local> -e GOOGLE_API_KEY="key" -- code-review-mcp 
 80 |     ```
 81 |     If you want Claude Code to use `npx` (which is a good practice to ensure version consistency if you don't install globally):
 82 |     ```bash
 83 |     claude mcp add code-reviewer -s <user|local> -e GOOGLE_API_KEY="key" -- npx -y @vibesnipe/code-review-mcp
 84 |     ```
 85 |     This tells Claude Code how to launch the MCP server when the "code-reviewer" toolset is requested. This configuration can be project-specific (saved in `.claude/.mcp.json` in your project) or user-specific (global Claude Code settings).
 86 | 
 87 | 2.  **Use Smart Slash Commands in Claude Code:**
 88 |     Create custom slash command files in your project's `.claude/commands/` directory to easily invoke the review tool. The package includes several example commands in the `examples/claude-commands/` directory that you can copy to your project.
 89 | 
 90 |     These improved slash commands don't require you to manually specify task descriptions or project context - they leverage Claude Code's existing knowledge of your project and the current task you're working on.
 91 | 
 92 |     Example invocation using a slash command in Claude Code:
 93 |     ```
 94 |     claude > /project:review-staged-claude
 95 |     ```
 96 | 
 97 |     No additional arguments needed! Claude will understand what you're currently working on and use that as context for the review.
 98 | 
 99 |     For commands that require arguments (like `review-branch-custom-gemini.md` which uses a custom branch name), you can pass them directly after the command:
100 |     ```
101 |     claude > /project:review-branch-custom-gemini main
102 |     ```
103 |     This will pass "main" as the $ARGUMENTS_BASE_BRANCH parameter.
104 | 
105 | ## Integration with Modern AI IDEs
106 | 
107 | ### Cursor Integration
108 | 
109 | Cursor is a popular AI-powered IDE based on VS Code that supports MCP servers. Here's how to integrate the code review MCP server with Cursor:
110 | 
111 | 1. **Configure Cursor's Rules for Code Review**:
112 |    
113 |    Create or open the `.cursor/rules/project.mdc` file in your project and add the following section:
114 | 
115 |    ```markdown
116 |    ## Slash Commands
117 | 
118 |    /review-staged: Use the perform_code_review tool from the code-reviewer MCP server to review staged changes. Use anthropic provider with claude-3-7-sonnet-20250219 model. Base the task description on our current conversation context and focus on code quality and best practices.
119 | 
120 |    /review-head: Use the perform_code_review tool from the code-reviewer MCP server to review all uncommitted changes (HEAD). Use openai provider with o3 model. Base the task description on our current conversation context and focus on code quality and best practices.
121 | 
122 |    /review-security: Use the perform_code_review tool from the code-reviewer MCP server to review staged changes. Use anthropic provider with claude-3-5-sonnet-20241022 model. Base the task description on our current conversation context and specifically focus on security vulnerabilities, input validation, and secure coding practices.
123 |    ```
124 | 
125 | 2. **Add the MCP Server in Cursor**:
126 |    
127 |    - Open Cursor settings
128 |    - Navigate to the MCP Servers section
129 |    - Add a new MCP server with the following JSON configuration:
130 |    ```json
131 |    "code-reviewer": {
132 |      "command": "npx",
133 |      "args": ["-y", "@vibesnipe/code-review-mcp"],
134 |      "env": {
135 |        "GOOGLE_API_KEY": "your-google-api-key",
136 |        "OPENAI_API_KEY": "your-openai-api-key",
137 |        "ANTHROPIC_API_KEY": "your-anthropic-api-key"
138 |      }
139 |    }
140 |    ```
141 | 
142 | 3. **Using the Commands**:
143 |    
144 |    In Cursor's AI chat interface, you can now simply type:
145 |    ```
146 |    /review-staged
147 |    ```
148 |    Cursor will then use the code-review-mcp server to perform a code review of your staged changes.
149 | 
150 | ### Windsurf Integration
151 | 
152 | Windsurf (formerly Codeium) is another advanced AI IDE that supports custom workflows via slash commands. Here's how to integrate with Windsurf:
153 | 
154 | 1. **Configure the MCP Server in Windsurf**:
155 |    
156 |    - Open Windsurf
157 |    - Click on the Customizations icon in the top right of Cascade
158 |    - Navigate to the MCP Servers panel
159 |    - Add a new MCP server with the following JSON configuration:
160 |    ```json
161 |    "code-reviewer": {
162 |      "command": "npx",
163 |      "args": ["-y", "@vibesnipe/code-review-mcp"],
164 |      "env": {
165 |        "GOOGLE_API_KEY": "your-google-api-key",
166 |        "OPENAI_API_KEY": "your-openai-api-key",
167 |        "ANTHROPIC_API_KEY": "your-anthropic-api-key"
168 |      }
169 |    }
170 |    ```
171 | 
172 | 2. **Create Workflows for Code Review**:
173 |    
174 |    Windsurf supports workflows that can be invoked via slash commands. Create a file in `.windsurf/workflows/review-staged.md`:
175 | 
176 |    ```markdown
177 |    # Review Staged Changes
178 |    
179 |    Perform a code review on the currently staged changes in the repository.
180 |    
181 |    ## Step 1
182 |    
183 |    Use the perform_code_review tool from the code-reviewer MCP server with the following parameters:
184 |    ```
185 |    {
186 |      "target": "staged",
187 |      "llmProvider": "anthropic",
188 |      "modelName": "claude-3-7-sonnet-20250219",
189 |      "taskDescription": "The task I am currently working on in this codebase",
190 |      "reviewFocus": "General code quality, security best practices, and performance considerations",
191 |      "projectContext": "This project is being developed in Windsurf. Please review the code carefully for any issues."
192 |    }
193 |    ```
194 |    ```
195 | 
196 |    Similarly, create workflows for other review types as needed.
197 | 
198 | 3. **Using the Workflows**:
199 |    
200 |    In Windsurf's Cascade interface, you can invoke these workflows with:
201 |    ```
202 |    /review-staged
203 |    ```
204 | 
205 |    Windsurf will then execute the workflow, which will use the code-review-mcp server to perform the code review.
206 | 
207 | ## Tool Provided by this MCP Server
208 | 
209 | ### `perform_code_review`
210 | 
211 | **Description:**
212 | Performs a code review using a specified Large Language Model on git changes within the current Git repository. This tool must be run from the root directory of the repository being reviewed.
213 | 
214 | **Input Schema (Parameters):**
215 | 
216 | The tool expects parameters matching the `CodeReviewToolParamsSchema`:
217 | 
218 | -   `target` (enum: `'staged'`, `'HEAD'`, `'branch_diff'`):
219 |     Specifies the set of changes to review.
220 |     -   `'staged'`: Reviews only the changes currently staged for commit.
221 |     -   `'HEAD'`: Reviews uncommitted changes (both staged and unstaged) against the last commit.
222 |     -   `'branch_diff'`: Reviews changes between a specified base branch/commit and the current HEAD. Requires `diffBase` parameter.
223 | -   `taskDescription` (string):
224 |     A clear and concise description of the task, feature, or bugfix that led to the code changes. This provides crucial context for the LLM reviewer. (e.g., "Implemented password reset functionality via email OTP.")
225 | -   `llmProvider` (enum: `'google'`, `'openai'`, `'anthropic'`):
226 |     The Large Language Model provider to use for the review.
227 | -   `modelName` (string):
228 |     The specific model name from the chosen provider. Examples:
229 |     -   Google: `'gemini-2.5-pro-preview-05-06'`, `'gemini-2.5-flash-preview-04-17'`
230 |         *(Ref: https://ai-sdk.dev/providers/ai-sdk-providers/google-generative-ai#model-capabilities)*
231 |     -   OpenAI: `'o4-mini'`, `'gpt-4.1'`, `'gpt-4.1-mini'`, `'o3'`
232 |         *(Ref: https://ai-sdk.dev/providers/ai-sdk-providers/openai#model-capabilities)*
233 |     -   Anthropic: `'claude-3-7-sonnet-20250219'`, `'claude-3-5-sonnet-20241022'`
234 |         *(Ref: https://ai-sdk.dev/providers/ai-sdk-providers/anthropic#model-capabilities)*
235 |     Ensure the model selected is available via the Vercel AI SDK and your API key has access.
236 |     
237 |     **Note:** Model names often change as providers release new versions. Always check the latest documentation from your provider and update the model names accordingly.
238 | -   `reviewFocus` (string, optional but recommended):
239 |     Specific areas, concerns, or aspects you want the LLM to concentrate on during the review. (e.g., "Focus on thread safety in concurrent operations.", "Pay special attention to input validation and sanitization.", "Check for adherence to our internal style guide for React components.").
240 | -   `projectContext` (string, optional but recommended):
241 |     General background information about the project, its architecture, key libraries, coding standards, or any other context that would help the LLM provide a more relevant and insightful review. (e.g., "This is a high-performance microservice using Rust and Actix. Low latency is critical.", "The project follows Clean Architecture principles. Ensure new code aligns with this.").
242 | -   `diffBase` (string, optional):
243 |     **Required if `target` is `'branch_diff'`**. Specifies the base branch (e.g., `'main'`, `'develop'`) or a specific commit SHA to compare the current HEAD against.
244 | -   `maxTokens` (number, optional):
245 |     Maximum number of tokens to use for the LLM response. Defaults to 32000 if not specified. Use this parameter to optimize for faster, less expensive responses (lower value) or more comprehensive reviews (higher value).
246 |     **Note**: In v0.11.0, the default was reduced from 60000 to 32000 tokens to better balance cost and quality.
247 | 
248 | **Output:**
249 | 
250 | -   If successful: A JSON object with `isError: false` and a `content` array containing a single text item. The `text` field holds the code review generated by the LLM in markdown format.
251 | -   If an error occurs: A JSON object with `isError: true` and a `content` array. The `text` field will contain an error message describing the issue.
252 | 
253 | ## Environment Variables
254 | 
255 | For the LLM integration to work, the code-review-mcp server (the process started by `npx` or the global `code-review-mcp` command) needs access to the respective API keys.
256 | 
257 | Set these in your shell environment or place them in a `.env` file in the **root directory of the project you are reviewing**:
258 | 
259 | -   **For Google Models:**
260 |     `GOOGLE_API_KEY="your_google_api_key"`
261 | 
262 | -   **For OpenAI Models:**
263 |     `OPENAI_API_KEY="your_openai_api_key"`
264 | 
265 | -   **For Anthropic Models:**
266 |     `ANTHROPIC_API_KEY="your_anthropic_api_key"`
267 | 
268 | The server will automatically load variables from a `.env` file found in the current working directory (i.e., your project's root) or you can configure them directly in the MCP server configuration as shown in the examples above.
269 | 
270 | ## Smart Slash Commands for Claude Code
271 | 
272 | The package includes several improved slash commands in the `examples/claude-commands/` directory that you can copy to your project's `.claude/commands/` directory. These commands take advantage of Claude Code's understanding of your project context and current task, eliminating the need for manual input.
273 | 
274 | ### Available Slash Commands
275 | 
276 | | Command File | Description |
277 | |--------------|-------------|
278 | | `review-staged-claude.md` | Reviews staged changes using Claude 3.5 Sonnet |
279 | | `review-staged-openai.md` | Reviews staged changes using OpenAI GPT-4.1 |
280 | | `review-staged-gemini.md` | Reviews staged changes using Google Gemini 2.5 Pro |
281 | | `review-head-claude.md` | Reviews all uncommitted changes using Claude 3.7 Sonnet |
282 | | `review-head-openai.md` | Reviews all uncommitted changes using OpenAI O3 |
283 | | `review-head-gemini.md` | Reviews all uncommitted changes using Google Gemini 2.5 Pro |
284 | | `review-branch-main-claude.md` | Reviews changes from main branch using Claude 3.7 Sonnet |
285 | | `review-branch-develop-openai.md` | Reviews changes from develop branch using OpenAI O4-mini |
286 | | `review-branch-custom-gemini.md` | Reviews changes from a specified branch using Google Gemini 2.5 Flash |
287 | | `review-staged-security-claude.md` | Security-focused review of staged changes using Claude 3.5 Sonnet |
288 | | `review-staged-performance-openai.md` | Performance-focused review of staged changes using OpenAI O3 |
289 | | `review-staged-maintainability-gemini.md` | Maintainability-focused review of staged changes using Google Gemini 2.5 Flash |
290 | 
291 | To use these commands:
292 | 
293 | 1. Copy the desired command files from the `examples/claude-commands/` directory to your project's `.claude/commands/` directory
294 | 2. Invoke the command in Claude Code with `/project:command-name` (e.g., `/project:review-staged-claude`)
295 | 
296 | These commands automatically use Claude's knowledge of your current task and project context, eliminating the need for lengthy manual arguments.
297 | 
298 | ## Security Considerations
299 | 
300 | - **API Key Handling**: API keys for LLM providers are sensitive credentials. This tool accesses them from environment variables or `.env` files, but does not store or transmit them beyond the necessary API calls. Consider using a secure environment variable manager for production environments.
301 |   
302 | - **Git Repository Analysis**: The tool analyzes your local Git repository contents. It executes Git commands and reads diff output, but does not transmit your entire codebase to the LLM - only the specific changes being reviewed.
303 |   
304 | - **Code Privacy**: When sending code for review to external LLM providers, be mindful of:
305 |   - Sensitive information in code comments or strings
306 |   - Proprietary algorithms or trade secrets
307 |   - Authentication credentials or API keys in config files
308 |   
309 | - **Branch Name Sanitization**: To prevent command injection, branch names are sanitized before being used in Git commands. However, it's still good practice to avoid using unusual characters in branch names.
310 | 
311 | ## Development (for the `@vibesnipe/code-review-mcp` package itself)
312 | 
313 | If you are contributing to or modifying the `@vibesnipe/code-review-mcp` package:
314 | 
315 | 1.  **Clone the Monorepo:**
316 |     Ensure you have the repo cloned.
317 | 2.  **Navigate to Package:**
318 |     ```bash
319 |     cd /path/to/claude-code-review-mcp
320 |     ```
321 | 3.  **Install Dependencies:**
322 |     ```bash
323 |     pnpm install
324 |     ```
325 | 4.  **Run in Development Mode:**
326 |     This uses `tsx` for hot-reloading TypeScript changes.
327 |     ```bash
328 |     pnpm dev
329 |     ```
330 |     The server will start and log to `stderr`.
331 | 
332 | 5.  **Testing with Claude Code (Local Development):**
333 |     You'll need to tell Claude Code where your local development server script is:
334 |     ```bash
335 |     # From any directory where you use Claude Code
336 |     claude mcp add local-code-reviewer -- npx tsx /path/to/claude-code-review-mcp/src/index.ts
337 |     ```
338 |     Now, when Claude Code needs the "local-code-reviewer", it will execute your source `index.ts` using `tsx`. Remember to replace `/path/to/claude-code-review-mcp/` with the actual absolute path to your repo.
339 | 
340 | ## Building for Production/Publishing
341 | 
342 | From the package directory:
343 | ```bash
344 | pnpm build
345 | ```
346 | This compiles TypeScript to JavaScript in the `dist` directory. The `prepublishOnly` script in `package.json` ensures this command is run automatically before publishing the package to npm.
347 | 
348 | ## Troubleshooting
349 | 
350 | -   **"Current directory is not a git repository..."**: Ensure you are running `npx @vibesnipe/code-review-mcp` (or the global command) from the root directory of a valid Git project.
351 | -   **"API key for ... is not configured"**: Make sure the relevant environment variable (e.g., `OPENAI_API_KEY`) is set in the shell where you launched the MCP server OR in a `.env` file in your project's root.
352 | -   **"Failed to get git diff. Git error: ..."**: This indicates an issue with the `git diff` command.
353 |     -   Check if `git` is installed and in your PATH.
354 |     -   Verify that the `target` and `diffBase` (if applicable) are valid for your repository.
355 |     -   The error message from `git` itself should provide more clues.
356 | -   **LLM API Errors**: Errors from the LLM providers (e.g., rate limits, invalid model name, authentication issues) will be passed through. Check the error message for details from the specific LLM API.
357 | -   **Claude Code MCP Issues**: If Claude Code isn't finding or launching the server, double-check your `claude mcp add ...` command and ensure the command specified for the MCP server is correct and executable. Use `claude mcp list` to verify.
358 | -   **Cursor MCP Server Issues**: If Cursor doesn't recognize the MCP server, make sure it's properly added in the settings with the correct configuration.
359 | -   **Windsurf Workflow Errors**: If Windsurf workflows are not executing correctly, check that:
360 |     -   The workflow files are properly formatted and located in the `.windsurf/workflows/` directory
361 |     -   The MCP server is correctly configured in Windsurf's MCP Server panel
362 |     -   You have the necessary API keys set up in your environment
363 | 
364 | ## License
365 | 
366 | MIT License - Copyright (c) Praney Behl
367 | 
```
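The README's "Input Schema" section references `CodeReviewToolParamsSchema`, which is defined in `src/config.ts` (not reproduced in this excerpt, and likely a zod schema). A plain-TypeScript sketch of the contract those parameter descriptions imply — hypothetical names, illustration only:

```typescript
// Illustrative only: mirrors the rules from the README's Input Schema
// section (target enum, diffBase required for branch_diff, maxTokens
// defaulting to 32000). The package's real schema may differ.
export interface CodeReviewToolParams {
  target: "staged" | "HEAD" | "branch_diff";
  taskDescription: string;
  llmProvider: "google" | "openai" | "anthropic";
  modelName: string;
  reviewFocus?: string;
  projectContext?: string;
  diffBase?: string;   // required when target === "branch_diff"
  maxTokens?: number;  // defaults to 32000 per the README
}

export function validateParams(p: CodeReviewToolParams): CodeReviewToolParams {
  if (p.target === "branch_diff" && !p.diffBase) {
    throw new Error("diffBase is required when target is 'branch_diff'");
  }
  return { ...p, maxTokens: p.maxTokens ?? 32000 };
}
```

Clients such as the example slash commands simply serialize these fields; the server rejects inconsistent combinations (like a `branch_diff` review with no `diffBase`) before any `git` command runs.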

--------------------------------------------------------------------------------
/tsconfig.test.json:
--------------------------------------------------------------------------------

```json
1 | {
2 |   "extends": "./tsconfig.json",
3 |   "compilerOptions": {
4 |     "types": ["vitest/globals", "node"]
5 |   },
6 |   "include": ["src/**/*", "tests/**/*"],
7 |   "exclude": ["node_modules"]
8 | }
9 | 
```
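On the README's security note about branch name sanitization: the package's actual sanitizer lives in `src/git-utils.ts` (not reproduced in this excerpt). One conservative approach, shown here as a hypothetical sketch, is an allow-list check before a user-supplied `diffBase` ever reaches a `git` invocation:

```typescript
// Hypothetical allow-list sanitizer; the real implementation in
// src/git-utils.ts may differ. Rejects characters outside those commonly
// found in branch names or commit SHAs, and refuses leading dashes so a
// ref can never be misread as a git command-line option.
export function assertSafeDiffBase(ref: string): string {
  if (!/^[A-Za-z0-9._/-]+$/.test(ref) || ref.startsWith("-")) {
    throw new Error(`Refusing to use suspicious ref: ${ref}`);
  }
  return ref;
}
```

Combined with passing arguments as an array (rather than interpolating into a shell string), this closes off the command-injection vector the README warns about.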

--------------------------------------------------------------------------------
/vitest.config.ts:
--------------------------------------------------------------------------------

```typescript
 1 | /// <reference types="vitest" />
 2 | import { defineConfig } from 'vitest/config';
 3 | 
 4 | export default defineConfig({
 5 |   test: {
 6 |     environment: 'node',
 7 |     globals: true,
 8 |     include: ['tests/**/*.test.ts'],
 9 |     coverage: {
10 |       reporter: ['text', 'json', 'html'],
11 |     },
12 |   },
13 | });
14 | 
```

--------------------------------------------------------------------------------
/tsconfig.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "compilerOptions": {
 3 |     "target": "ES2022",
 4 |     "module": "NodeNext", // Changed for better CJS/ESM interop with Node.js ecosystem
 5 |     "moduleResolution": "NodeNext", // Changed
 6 |     "esModuleInterop": true,
 7 |     "strict": true,
 8 |     "skipLibCheck": true,
 9 |     "outDir": "./dist",
10 |     "rootDir": "./src",
11 |     "declaration": true,
12 |     "sourceMap": true,
13 |     "resolveJsonModule": true // If you import JSON files directly
14 |   },
15 |   "include": ["src/**/*.ts"],
16 |   "exclude": ["node_modules", "dist", "**/*.test.ts"]
17 | }
```

--------------------------------------------------------------------------------
/examples/claude-commands/review-staged-openai.md:
--------------------------------------------------------------------------------

```markdown
 1 | Perform a code review on the currently **staged** changes using OpenAI's **GPT-4.1**.
 2 | 
 3 | Use the 'perform_code_review' tool (from the 'code-reviewer' MCP server) with the following parameters:
 4 | target: "staged"
 5 | llmProvider: "openai"
 6 | modelName: "gpt-4.1"
 7 | taskDescription: "The task I am currently working on in this codebase"
 8 | reviewFocus: "General code quality, security best practices, and performance considerations"
 9 | projectContext: "This is the current project I'm working on. Look for the CLAUDE.md file in the repository root if it exists for additional project context."
10 | 
```

--------------------------------------------------------------------------------
/examples/claude-commands/review-head-openai.md:
--------------------------------------------------------------------------------

```markdown
 1 | Perform a code review on all **uncommitted changes** (both staged and unstaged) using OpenAI's **O3**.
 2 | 
 3 | Use the 'perform_code_review' tool (from the 'code-reviewer' MCP server) with the following parameters:
 4 | target: "HEAD"
 5 | llmProvider: "openai"
 6 | modelName: "o3"
 7 | taskDescription: "The task I am currently working on in this codebase"
 8 | reviewFocus: "General code quality, security best practices, and performance considerations"
 9 | projectContext: "This is the current project I'm working on. Look for the CLAUDE.md file in the repository root if it exists for additional project context."
10 | 
```

--------------------------------------------------------------------------------
/examples/claude-commands/review-staged-gemini.md:
--------------------------------------------------------------------------------

```markdown
 1 | Perform a code review on the currently **staged** changes using Google's **Gemini 2.5 Pro**.
 2 | 
 3 | Use the 'perform_code_review' tool (from the 'code-reviewer' MCP server) with the following parameters:
 4 | target: "staged"
 5 | llmProvider: "google"
 6 | modelName: "gemini-2.5-pro-preview-05-06"
 7 | taskDescription: "The task I am currently working on in this codebase"
 8 | reviewFocus: "General code quality, security best practices, and performance considerations"
 9 | projectContext: "This is the current project I'm working on. Look for the CLAUDE.md file in the repository root if it exists for additional project context."
10 | 
```

--------------------------------------------------------------------------------
/examples/claude-commands/review-staged-claude.md:
--------------------------------------------------------------------------------

```markdown
 1 | Perform a code review on the currently **staged** changes using Anthropic's **Claude 3.5 Sonnet**.
 2 | 
 3 | Use the 'perform_code_review' tool (from the 'code-reviewer' MCP server) with the following parameters:
 4 | target: "staged"
 5 | llmProvider: "anthropic"
 6 | modelName: "claude-3-5-sonnet-20241022"
 7 | taskDescription: "The task I am currently working on in this codebase"
 8 | reviewFocus: "General code quality, security best practices, and performance considerations"
 9 | projectContext: "This is the current project I'm working on. Look for the CLAUDE.md file in the repository root if it exists for additional project context."
10 | 
```

--------------------------------------------------------------------------------
/examples/claude-commands/review-branch-develop-openai.md:
--------------------------------------------------------------------------------

```markdown
 1 | Perform a code review of changes between the develop branch and the current HEAD using OpenAI's **O4-mini**.
 2 | 
 3 | Use the 'perform_code_review' tool (from the 'code-reviewer' MCP server) with the following parameters:
 4 | target: "branch_diff"
 5 | diffBase: "develop"
 6 | llmProvider: "openai"
 7 | modelName: "o4-mini"
 8 | taskDescription: "The task I am currently working on in this codebase"
 9 | reviewFocus: "General code quality, security best practices, and performance considerations"
10 | projectContext: "This is the current project I'm working on. Look for the CLAUDE.md file in the repository root if it exists for additional project context."
11 | 
```

--------------------------------------------------------------------------------
/examples/claude-commands/review-head-gemini.md:
--------------------------------------------------------------------------------

```markdown
 1 | Perform a code review on all **uncommitted changes** (both staged and unstaged) using Google's **Gemini 2.5 Pro**.
 2 | 
 3 | Use the 'perform_code_review' tool (from the 'code-reviewer' MCP server) with the following parameters:
 4 | target: "HEAD"
 5 | llmProvider: "google"
 6 | modelName: "gemini-2.5-pro-preview-05-06"
 7 | taskDescription: "The task I am currently working on in this codebase"
 8 | reviewFocus: "General code quality, security best practices, and performance considerations"
 9 | projectContext: "This is the current project I'm working on. Look for the CLAUDE.md file in the repository root if it exists for additional project context."
10 | 
```

--------------------------------------------------------------------------------
/examples/claude-commands/review-head-claude.md:
--------------------------------------------------------------------------------

```markdown
 1 | Perform a code review on all **uncommitted changes** (both staged and unstaged) using Anthropic's **Claude 3.7 Sonnet**.
 2 | 
 3 | Use the 'perform_code_review' tool (from the 'code-reviewer' MCP server) with the following parameters:
 4 | target: "HEAD"
 5 | llmProvider: "anthropic"
 6 | modelName: "claude-3-7-sonnet-20250219"
 7 | taskDescription: "The task I am currently working on in this codebase"
 8 | reviewFocus: "General code quality, security best practices, and performance considerations"
 9 | projectContext: "This is the current project I'm working on. Look for the CLAUDE.md file in the repository root if it exists for additional project context."
10 | 
```

--------------------------------------------------------------------------------
/examples/claude-commands/review-branch-main-claude.md:
--------------------------------------------------------------------------------

```markdown
 1 | Perform a code review of changes between the main branch and the current HEAD using Anthropic's **Claude 3.7 Sonnet**.
 2 | 
 3 | Use the 'perform_code_review' tool (from the 'code-reviewer' MCP server) with the following parameters:
 4 | target: "branch_diff"
 5 | diffBase: "main"
 6 | llmProvider: "anthropic"
 7 | modelName: "claude-3-7-sonnet-20250219"
 8 | taskDescription: "The task I am currently working on in this codebase"
 9 | reviewFocus: "General code quality, security best practices, and performance considerations"
10 | projectContext: "This is the current project I'm working on. Look for the CLAUDE.md file in the repository root if it exists for additional project context."
11 | 
```

--------------------------------------------------------------------------------
/examples/claude-commands/review-staged-performance-openai.md:
--------------------------------------------------------------------------------

```markdown
 1 | Perform a performance-focused code review on the currently **staged** changes using OpenAI's **o3**.
 2 | 
 3 | Use the 'perform_code_review' tool (from the 'code-reviewer' MCP server) with the following parameters:
 4 | target: "staged"
 5 | llmProvider: "openai"
 6 | modelName: "o3"
 7 | taskDescription: "The task I am currently working on in this codebase"
 8 | reviewFocus: "Performance optimizations, computational efficiency, memory usage, time complexity, algorithmic improvements, bottlenecks, lazy loading, and caching opportunities"
 9 | projectContext: "This is the current project I'm working on. Look for the CLAUDE.md file in the repository root if it exists for additional project context."
10 | 
```

--------------------------------------------------------------------------------
/examples/claude-commands/review-staged-maintainability-gemini.md:
--------------------------------------------------------------------------------

```markdown
 1 | Perform a maintainability-focused code review on the currently **staged** changes using Google's **Gemini 2.5 Flash**.
 2 | 
 3 | Use the 'perform_code_review' tool (from the 'code-reviewer' MCP server) with the following parameters:
 4 | target: "staged"
 5 | llmProvider: "google"
 6 | modelName: "gemini-2.5-flash-preview-04-17"
 7 | taskDescription: "The task I am currently working on in this codebase"
 8 | reviewFocus: "Code readability, maintainability, documentation, naming conventions, SOLID principles, design patterns, abstraction, and testability"
 9 | projectContext: "This is the current project I'm working on. Look for the CLAUDE.md file in the repository root if it exists for additional project context."
10 | 
```

--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------

```dockerfile
 1 | # Generated by https://smithery.ai. See: https://smithery.ai/docs/build/project-config
 2 | # Stage 1: Build
 3 | FROM node:lts-alpine AS builder
 4 | WORKDIR /app
 5 | 
 6 | # Install dependencies and build
 7 | COPY package.json package-lock.json tsconfig.json tsconfig.test.json ./
 8 | COPY src ./src
 9 | COPY examples ./examples
10 | RUN npm ci --ignore-scripts && npm run build && npm prune --production
11 | 
12 | # Stage 2: Runtime
13 | FROM node:lts-alpine
14 | WORKDIR /app
15 | 
16 | # Copy production artifacts
17 | COPY --from=builder /app/dist ./dist
18 | COPY --from=builder /app/node_modules ./node_modules
19 | COPY package.json ./package.json
20 | 
21 | # Default environment
22 | ENV NODE_ENV=production
23 | 
24 | # Expose no ports (stdio transport)
25 | ENTRYPOINT ["node", "dist/index.js"]
26 | 
```

--------------------------------------------------------------------------------
/examples/claude-commands/review-staged-security-claude.md:
--------------------------------------------------------------------------------

```markdown
 1 | Perform a security-focused code review on the currently **staged** changes using Anthropic's **Claude 3.5 Sonnet**.
 2 | 
 3 | Use the 'perform_code_review' tool (from the 'code-reviewer' MCP server) with the following parameters:
 4 | target: "staged"
 5 | llmProvider: "anthropic"
 6 | modelName: "claude-3-5-sonnet-20241022"
 7 | taskDescription: "The task I am currently working on in this codebase"
 8 | reviewFocus: "Security vulnerabilities, data validation, authentication, authorization, input sanitization, sensitive data handling, and adherence to OWASP standards"
 9 | projectContext: "This is the current project I'm working on. Look for the CLAUDE.md file in the repository root if it exists for additional project context."
10 | 
```

--------------------------------------------------------------------------------
/examples/windsurf-workflows/review-staged.md:
--------------------------------------------------------------------------------

```markdown
 1 | # Review Staged Changes
 2 | 
 3 | Perform a code review on the currently staged changes in the repository.
 4 | 
 5 | ## Step 1
 6 | 
 7 | Use the perform_code_review tool from the code-reviewer MCP server with the following parameters:
 8 | ```
 9 | {
10 |   "target": "staged",
11 |   "llmProvider": "anthropic",
12 |   "modelName": "claude-3-7-sonnet-20250219",
13 |   "taskDescription": "The task I am currently working on in this codebase",
14 |   "reviewFocus": "General code quality, security best practices, and performance considerations",
15 |   "projectContext": "This project is being developed in Windsurf. Please review the code carefully for any issues."
16 | }
17 | ```
18 | 
19 | <!-- 
20 | Note: 
21 | 1. Consider updating the model name to the latest available model from Anthropic
22 | 2. Customize the taskDescription with specific context for better review results
23 | -->
24 | 
25 | 
```

--------------------------------------------------------------------------------
/examples/windsurf-workflows/review-head.md:
--------------------------------------------------------------------------------

```markdown
 1 | # Review All Uncommitted Changes
 2 | 
 3 | Perform a code review on all uncommitted changes (both staged and unstaged) against the last commit.
 4 | 
 5 | ## Step 1
 6 | 
 7 | Use the perform_code_review tool from the code-reviewer MCP server with the following parameters:
 8 | ```
 9 | {
10 |   "target": "HEAD",
11 |   "llmProvider": "openai",
12 |   "modelName": "o3",
13 |   "taskDescription": "The task I am currently working on in this codebase",
14 |   "reviewFocus": "General code quality, security best practices, and performance considerations",
15 |   "projectContext": "This project is being developed in Windsurf. Please review the code carefully for any issues."
16 | }
17 | ```
18 | 
19 | <!-- 
20 | Note: 
21 | 1. Consider updating the model name to the latest available model from OpenAI
22 | 2. Customize the taskDescription with specific context for better review results
23 | -->
24 | 
25 | 
```

--------------------------------------------------------------------------------
/examples/claude-commands/review-branch-custom-gemini.md:
--------------------------------------------------------------------------------

```markdown
 1 | Perform a code review of changes between a specified branch and the current HEAD using Google's **Gemini 2.5 Flash**.
 2 | 
 3 | Use the 'perform_code_review' tool (from the 'code-reviewer' MCP server) with the following parameters:
 4 | target: "branch_diff"
 5 | diffBase: "$ARGUMENTS_BASE_BRANCH"
 6 | llmProvider: "google"
 7 | modelName: "gemini-2.5-flash-preview-04-17"
 8 | taskDescription: "The task I am currently working on in this codebase"
 9 | reviewFocus: "General code quality, security best practices, and performance considerations"
10 | projectContext: "This is the current project I'm working on. Look for the CLAUDE.md file in the repository root if it exists for additional project context."
11 | 
12 | # Usage: 
13 | # Invoke this command with the base branch name as an argument:
14 | # claude > /project:review-branch-custom-gemini main
15 | # This will compare your current HEAD against the 'main' branch.
16 | 
```

--------------------------------------------------------------------------------
/smithery.yaml:
--------------------------------------------------------------------------------

```yaml
 1 | # Smithery configuration file: https://smithery.ai/docs/build/project-config
 2 | 
 3 | startCommand:
 4 |   type: stdio
 5 |   # A JS function that produces the CLI command based on the given config to start the MCP on stdio.
 6 |   commandFunction: |-
 7 |     (config) => ({ command: 'node', args: ['dist/index.js'],
 8 |       env: { OPENAI_API_KEY: config.openaiApiKey, GOOGLE_API_KEY: config.googleApiKey, ANTHROPIC_API_KEY: config.anthropicApiKey } })
 9 |   configSchema:
10 |     # JSON Schema defining the configuration options for the MCP.
11 |     type: object
12 |     properties:
13 |       openaiApiKey:
14 |         type: string
15 |         description: OpenAI API key
16 |       googleApiKey:
17 |         type: string
18 |         description: Google API key
19 |       anthropicApiKey:
20 |         type: string
21 |         description: Anthropic API key
22 |   exampleConfig:
23 |     openaiApiKey: sk-1234567890abcdef
24 |     googleApiKey: AIzaSyExampleKey
25 |     anthropicApiKey: anthropic-key-example
26 | 
```
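
The `commandFunction` in smithery.yaml is an ordinary JavaScript arrow function evaluated with the user-supplied config. A minimal TypeScript re-statement of its shape for reference (the type name `SmitheryConfig` is illustrative, not part of the Smithery schema; the entry point follows the `bin` entry in `package.json`):

```typescript
// Illustrative re-statement of the smithery.yaml commandFunction.
type SmitheryConfig = {
  openaiApiKey?: string;
  googleApiKey?: string;
  anthropicApiKey?: string;
};

const commandFunction = (config: SmitheryConfig) => ({
  command: "node",
  args: ["dist/index.js"],
  // Keys left undefined in the config simply become unset env vars.
  env: {
    OPENAI_API_KEY: config.openaiApiKey,
    GOOGLE_API_KEY: config.googleApiKey,
    ANTHROPIC_API_KEY: config.anthropicApiKey,
  },
});
```

Smithery spawns the returned `command`/`args` pair over stdio with the given environment, which is why no ports are exposed in the Dockerfile.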

--------------------------------------------------------------------------------
/examples/windsurf-workflows/review-security.md:
--------------------------------------------------------------------------------

```markdown
 1 | # Security Review
 2 | 
 3 | Perform a security-focused code review on the currently staged changes in the repository.
 4 | 
 5 | ## Step 1
 6 | 
 7 | Use the perform_code_review tool from the code-reviewer MCP server with the following parameters:
 8 | ```
 9 | {
10 |   "target": "staged",
11 |   "llmProvider": "anthropic",
12 |   "modelName": "claude-3-5-sonnet-20241022",
13 |   "taskDescription": "The task I am currently working on in this codebase",
14 |   "reviewFocus": "Security vulnerabilities, data validation, authentication, authorization, input sanitization, sensitive data handling, and adherence to OWASP standards",
15 |   "projectContext": "This project is being developed in Windsurf. Please review the code carefully for any security issues."
16 | }
17 | ```
18 | 
19 | <!-- 
20 | Note: 
21 | 1. Consider updating the model name to the latest available model from Anthropic
22 | 2. Customize the taskDescription with specific context for better security review results
23 | 3. You can further customize the reviewFocus to target specific security concerns for your project
24 | -->
25 | 
26 | 
```

--------------------------------------------------------------------------------
/examples/windsurf-workflows/review-branch.md:
--------------------------------------------------------------------------------

```markdown
 1 | # Branch Diff Review
 2 | 
 3 | Perform a code review comparing the current HEAD with a specified base branch.
 4 | 
 5 | ## Step 1
 6 | 
 7 | Ask the user which branch to compare against:
 8 | "Which branch would you like to use as the base for comparison? (e.g., main, develop)"
 9 | 
10 | ## Step 2
11 | 
12 | Use the perform_code_review tool from the code-reviewer MCP server with the following parameters:
13 | ```
14 | {
15 |   "target": "branch_diff",
16 |   "diffBase": "${user_response}",
17 |   "llmProvider": "google",
18 |   "modelName": "gemini-2.5-pro-preview-05-06",
19 |   "taskDescription": "The task I am currently working on in this codebase",
20 |   "reviewFocus": "General code quality, security best practices, and performance considerations",
21 |   "projectContext": "This project is being developed in Windsurf. Please review the code changes between branches carefully for any issues."
22 | }
23 | ```
24 | 
25 | <!-- 
26 | Notes:
27 | 1. Consider updating the model name to the latest available model from Google
28 | 2. Customize the taskDescription with specific context for better review results
29 | 3. IMPORTANT: The MCP server sanitizes the diffBase parameter to prevent command injection attacks,
30 |    but you should still avoid using branch names containing special characters
31 | -->
32 | 
33 | 
```

--------------------------------------------------------------------------------
/package.json:
--------------------------------------------------------------------------------

```json
 1 | {
 2 |   "name": "@vibesnipe/code-review-mcp",
 3 |   "version": "1.0.0",
 4 |   "description": "MCP server for performing code reviews using external LLMs via Vercel AI SDK.",
 5 |   "main": "dist/index.js",
 6 |   "types": "dist/index.d.ts",
 7 |   "type": "module",
 8 |   "bin": {
 9 |     "code-review-mcp": "dist/index.js"
10 |   },
11 |   "scripts": {
12 |     "build": "rimraf dist && tsc -p tsconfig.json && chmod +x dist/index.js",
13 |     "start": "node dist/index.js",
14 |     "dev": "tsx src/index.ts",
15 |     "test": "vitest run",
16 |     "test:watch": "vitest watch",
17 |     "inspector": "npx @modelcontextprotocol/inspector dist/index.js",
18 |     "prepublishOnly": "npm run build"
19 |   },
20 |   "keywords": [
21 |     "mcp",
22 |     "claude code",
23 |     "cursor",
24 |     "windsurf",
25 |     "ai code review",
26 |     "code-review",
27 |     "model-context-protocol",
28 |     "review code"
29 |   ],
30 | 
31 |   "author": "Praney Behl <@praneybehl>",
32 |   "license": "MIT",
33 |   "dependencies": {
34 |     "@modelcontextprotocol/sdk": "^1.11.2",
35 |     "ai": "^4.3.15",
36 |     "@ai-sdk/openai": "^1.3.22",
37 |     "@ai-sdk/anthropic": "^1.2.11",
38 |     "@ai-sdk/google": "^1.2.18",
39 |     "dotenv": "^16.5.0",
40 |     "zod": "^3.24.4",
41 |     "execa": "^9.5.3"
42 |   },
43 |   "devDependencies": {
44 |     "@types/node": "^20.12.7", 
45 |     "rimraf": "^6.0.1",
46 |     "tsx": "^4.19.4", 
47 |     "typescript": "^5.8.3",
48 |     "vitest": "^1.2.1"
49 |   },
50 |   "files": [
51 |     "dist",
52 |     "README.md",
53 |     "LICENSE",
54 |     "examples"
55 |   ],
56 |   "publishConfig": {
57 |     "access": "public"
58 |   }
59 | }
60 | 
```

--------------------------------------------------------------------------------
/tests/config.test.ts:
--------------------------------------------------------------------------------

```typescript
 1 | import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
 2 | import { getApiKey } from '../src/config.js';
 3 | 
 4 | // Setup and restore environment variables for tests
 5 | describe('config module', () => {
 6 |   // Save original env values
 7 |   const originalEnv = { ...process.env };
 8 |   
 9 |   beforeEach(() => {
10 |     // Setup test environment variables before each test
11 |     vi.stubEnv('GOOGLE_API_KEY', 'mock-google-api-key');
12 |     vi.stubEnv('OPENAI_API_KEY', 'mock-openai-api-key');
13 |     vi.stubEnv('ANTHROPIC_API_KEY', 'mock-anthropic-api-key');
14 |   });
15 |   
16 |   afterEach(() => {
17 |     // Restore original environment after each test
18 |     process.env = originalEnv;
19 |     vi.unstubAllEnvs();
20 |   });
21 |   
22 |   describe('getApiKey()', () => {
23 |     it('should return the correct API key for Google provider', () => {
24 |       const key = getApiKey('google');
25 |       expect(key).toBe('mock-google-api-key');
26 |     });
27 | 
28 |     it('should return the correct API key for OpenAI provider', () => {
29 |       const key = getApiKey('openai');
30 |       expect(key).toBe('mock-openai-api-key');
31 |     });
32 | 
33 |     it('should return the correct API key for Anthropic provider', () => {
34 |       const key = getApiKey('anthropic');
35 |       expect(key).toBe('mock-anthropic-api-key');
36 |     });
37 |     
38 |     it('should use GEMINI_API_KEY as fallback when GOOGLE_API_KEY is not available', () => {
39 |       // Reset the Google API key and set Gemini key instead
40 |       vi.stubEnv('GOOGLE_API_KEY', '');
41 |       vi.stubEnv('GEMINI_API_KEY', 'mock-gemini-fallback-key');
42 |       
43 |       const key = getApiKey('google');
44 |       expect(key).toBe('mock-gemini-fallback-key');
45 |     });
46 |     
47 |     it('should return undefined when no API key is available for a provider', () => {
48 |       // Clear all API keys
49 |       vi.stubEnv('GOOGLE_API_KEY', '');
50 |       vi.stubEnv('GEMINI_API_KEY', '');
51 |       vi.stubEnv('OPENAI_API_KEY', '');
52 |       
53 |       const googleKey = getApiKey('google');
54 |       const openaiKey = getApiKey('openai');
55 |       
56 |       expect(googleKey).toBeUndefined();
57 |       expect(openaiKey).toBeUndefined();
58 |     });
59 |   });
60 | });
61 | 
```

--------------------------------------------------------------------------------
/src/config.ts:
--------------------------------------------------------------------------------

```typescript
  1 | import { z } from "zod";
  2 | import dotenv from "dotenv";
  3 | 
  4 | /**
 5 |  * Load environment variables from the .env file in the current working
 6 |  * directory (where the user runs npx), so users can keep API keys in their
 7 |  * project root. The second, path-less call uses dotenv's default (also cwd)
 8 |  * as a fallback. dotenv never overwrites variables that are already set,
 9 |  * so values from the first load take precedence.
 10 |  */
 11 | dotenv.config({ path: process.cwd() + "/.env" });
 12 | dotenv.config();
 13 | 
 14 | // Define valid log levels and parse the environment variable
 15 | export const LogLevelEnum = z.enum(["debug", "info", "warn", "error"]);
 16 | export type LogLevel = z.infer<typeof LogLevelEnum>;
 17 | 
 18 | // Convert numeric log levels to string equivalents
 19 | function normalizeLogLevel(level: string | undefined): string {
 20 |   if (!level) return 'info';
 21 |   
 22 |   // Map numeric levels to string values
 23 |   switch (level) {
 24 |     case '0': return 'debug';
 25 |     case '1': return 'info';
 26 |     case '2': return 'warn';
 27 |     case '3': return 'error';
 28 |     default: return level; // Pass through string values for validation
 29 |   }
 30 | }
 31 | 
 32 | export const LOG_LEVEL: LogLevel = LogLevelEnum.parse(normalizeLogLevel(process.env.LOG_LEVEL));
 33 | 
 34 | export const LLMProviderEnum = z.enum(["google", "openai", "anthropic"]);
 35 | export type LLMProvider = z.infer<typeof LLMProviderEnum>;
 36 | 
 37 | export const ReviewTargetEnum = z.enum(["staged", "HEAD", "branch_diff"]);
 38 | export type ReviewTarget = z.infer<typeof ReviewTargetEnum>;
 39 | 
 40 | export const CodeReviewToolParamsSchema = z.object({
 41 |   target: ReviewTargetEnum.describe(
 42 |     "The git target to review (e.g., 'staged', 'HEAD', or 'branch_diff')."
 43 |   ),
 44 |   taskDescription: z
 45 |     .string()
 46 |     .min(1)
 47 |     .describe(
 48 |       "Description of the task/feature/bugfix that led to these code changes."
 49 |     ),
 50 |   llmProvider: LLMProviderEnum.describe(
 51 |     "The LLM provider to use (google, openai, anthropic)."
 52 |   ),
 53 |   modelName: z
 54 |     .string()
 55 |     .min(1)
 56 |     .describe(
 57 |       "The specific model name from the provider (e.g., 'gemini-2.5-pro-preview-05-06', 'o4-mini', 'claude-3-7-sonnet-20250219')."
 58 |     ),
 59 |   reviewFocus: z
 60 |     .string()
 61 |     .optional()
 62 |     .describe(
 63 |       "Specific areas or aspects to focus the review on (e.g., 'security vulnerabilities', 'performance optimizations', 'adherence to SOLID principles')."
 64 |     ),
 65 |   projectContext: z
 66 |     .string()
 67 |     .optional()
 68 |     .describe(
 69 |       "General context about the project, its architecture, or coding standards."
 70 |     ),
 71 |   diffBase: z
 72 |     .string()
 73 |     .optional()
 74 |     .describe(
 75 |       "For 'branch_diff' target, the base branch or commit SHA to compare against (e.g., 'main', 'develop', 'specific-commit-sha'). Required if target is 'branch_diff'."
 76 |     ),
 77 |   maxTokens: z
 78 |     .number()
 79 |     .positive()
 80 |     .optional()
 81 |     .describe(
 82 |       "Maximum number of tokens to use for the LLM response. Defaults to 32000 if not specified."
 83 |     ),
 84 | });
 85 | 
 86 | export type CodeReviewToolParams = z.infer<typeof CodeReviewToolParamsSchema>;
 87 | 
 88 | /**
 89 |  * Gets the appropriate API key for the specified LLM provider.
 90 |  * For Google, the primary key name is GOOGLE_API_KEY with GEMINI_API_KEY as fallback.
 91 |  * 
 92 |  * @param provider - The LLM provider (google, openai, anthropic)
 93 |  * @returns The API key or undefined if not found
 94 |  */
 95 | export function getApiKey(provider: LLMProvider): string | undefined {
 96 |   let key: string | undefined;
 97 |   
 98 |   switch (provider) {
 99 |     case "google":
100 |       key = process.env.GOOGLE_API_KEY || process.env.GEMINI_API_KEY;
101 |       break;
102 |     case "openai":
103 |       key = process.env.OPENAI_API_KEY;
104 |       break;
105 |     case "anthropic":
106 |       key = process.env.ANTHROPIC_API_KEY;
107 |       break;
108 |     default:
109 |       // Should not happen due to Zod validation
110 |       console.warn(
111 |         `[MCP Server Config] Attempted to get API key for unknown provider: ${provider}`
112 |       );
113 |       return undefined;
114 |   }
115 |   
116 |   // If the key is an empty string or undefined, return undefined
117 |   return key && key.trim() !== "" ? key : undefined;
118 | }
119 | 
120 | /**
121 |  * Determines whether to log verbose debug information.
122 |  * Set the LOG_LEVEL environment variable to 'debug' for verbose output.
123 |  */
124 | export function isDebugMode(): boolean {
125 |   return LOG_LEVEL === 'debug';
126 | }
127 | 
```
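
The `GOOGLE_API_KEY` → `GEMINI_API_KEY` fallback and the blank-value check in `getApiKey` can be exercised in isolation. A standalone sketch, with the environment passed as a plain object so the logic is testable without touching `process.env` (the real function reads `process.env` directly):

```typescript
// Standalone sketch of the provider-to-key lookup from src/config.ts.
type Provider = "google" | "openai" | "anthropic";

function lookupApiKey(
  provider: Provider,
  env: Record<string, string | undefined>
): string | undefined {
  let key: string | undefined;
  switch (provider) {
    case "google":
      // Primary key name first, then the GEMINI_API_KEY fallback.
      key = env.GOOGLE_API_KEY || env.GEMINI_API_KEY;
      break;
    case "openai":
      key = env.OPENAI_API_KEY;
      break;
    case "anthropic":
      key = env.ANTHROPIC_API_KEY;
      break;
  }
  // Empty or whitespace-only values count as "not configured".
  return key && key.trim() !== "" ? key : undefined;
}
```

This mirrors the behavior covered by tests/config.test.ts: the fallback fires only when the primary key is absent or empty.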

--------------------------------------------------------------------------------
/src/llm-service.ts:
--------------------------------------------------------------------------------

```typescript
  1 | import { CoreMessage, generateText } from "ai";
  2 | import { createGoogleGenerativeAI } from "@ai-sdk/google";
  3 | import { createAnthropic } from "@ai-sdk/anthropic";
  4 | import { createOpenAI } from "@ai-sdk/openai";
  5 | import { LLMProvider, getApiKey, isDebugMode } from "./config.js"; // Ensure .js for ESM NodeNext
  6 | 
  7 | // Define model types for typechecking
  8 | type GoogleModelName = string;
  9 | type AnthropicModelName = string;
 10 | type OpenAIModelName = string;
 11 | 
 12 | // Get the appropriate model type based on provider
 13 | type ModelName<T extends LLMProvider> = T extends "openai"
 14 |   ? OpenAIModelName
 15 |   : T extends "anthropic"
 16 |   ? AnthropicModelName
 17 |   : T extends "google"
 18 |   ? GoogleModelName
 19 |   : never;
 20 | 
 21 | /**
 22 |  * Generates a code review using the specified LLM provider.
 23 |  * 
 24 |  * NOTE: The default maximum token limit was reduced from 60000 to 32000 tokens in v0.11.0
 25 |  * to better balance cost and quality. This can be configured using the new maxTokens parameter.
 26 |  * 
 27 |  * @param provider - LLM provider to use (google, openai, anthropic)
 28 |  * @param modelName - Specific model name from the provider
 29 |  * @param systemPrompt - System prompt to guide the LLM
 30 |  * @param userMessages - User message(s) containing the code diff to review
 31 |  * @param maxTokens - Optional maximum token limit for the response, defaults to 32000
 32 |  * @returns Promise with the generated review text
 33 |  */
 34 | export async function getLLMReview<T extends LLMProvider>(
 35 |   provider: T,
 36 |   modelName: ModelName<T>,
 37 |   systemPrompt: string,
 38 |   userMessages: CoreMessage[],
 39 |   maxTokens: number = 32000
 40 | ): Promise<string> {
 41 |   // Make sure we have the API key
 42 |   const apiKey = getApiKey(provider);
 43 |   if (!apiKey) {
 44 |     throw new Error(
 45 |       `API key for ${provider} is not configured. Please set the appropriate environment variable.`
 46 |     );
 47 |   }
 48 | 
 49 |   // Create the LLM client with proper provider configuration
 50 |   let llmClient;
 51 |   switch (provider) {
 52 |     case "google":
 53 |       // Create Google provider with explicit API key
 54 |       const googleAI = createGoogleGenerativeAI({
 55 |         apiKey,
 56 |       });
 57 |       llmClient = googleAI(modelName);
 58 |       break;
 59 |     case "openai":
 60 |       // Create OpenAI provider with explicit API key
 61 |       const openaiProvider = createOpenAI({
 62 |         apiKey,
 63 |       });
 64 |       llmClient = openaiProvider(modelName);
 65 |       break;
 66 |     case "anthropic":
 67 |       // Create Anthropic provider with explicit API key
 68 |       const anthropicProvider = createAnthropic({
 69 |         apiKey,
 70 |       });
 71 |       llmClient = anthropicProvider(modelName);
 72 |       break;
 73 |     default:
 74 |       throw new Error(`Unsupported LLM provider: ${provider}`);
 75 |   }
 76 | 
 77 |   try {
 78 |     if (isDebugMode()) {
 79 |       console.log(
 80 |         `[MCP Server LLM] Requesting review from ${provider} model ${modelName} with max tokens ${maxTokens}.`
 81 |       );
 82 |     } else {
 83 |       console.log(
 84 |         `[MCP Server LLM] Requesting review from ${provider} model ${modelName}.`
 85 |       );
 86 |     }
 87 |     
 88 |     const { text, finishReason, usage, warnings } = await generateText({
 89 |       model: llmClient,
 90 |       system: systemPrompt,
 91 |       messages: userMessages,
 92 |       maxTokens: maxTokens, // Now configurable with default value
 93 |       temperature: 0.2, // Lower temperature for more deterministic and factual reviews
 94 |     });
 95 | 
 96 |     if (warnings && warnings.length > 0) {
 97 |       warnings.forEach((warning) =>
 98 |         console.warn(`[MCP Server LLM] Warning from ${provider}:`, warning)
 99 |       );
100 |     }
101 |     
102 |     if (isDebugMode() && usage) {
103 |       console.log(
104 |         `[MCP Server LLM] Review received from ${provider}. Finish Reason: ${finishReason}, Tokens Used: Input=${usage.promptTokens}, Output=${usage.completionTokens}`
105 |       );
106 |     } else {
107 |       console.log(
108 |         `[MCP Server LLM] Review received from ${provider}.`
109 |       );
110 |     }
111 |     
112 |     return text;
113 |   } catch (error: any) {
114 |     console.error(
115 |       `[MCP Server LLM] Error getting LLM review from ${provider} (${modelName}):`,
116 |       error
117 |     );
118 |     let detailedMessage = error.message;
119 |     if (error.cause) {
120 |       detailedMessage += ` | Cause: ${JSON.stringify(error.cause)}`;
121 |     }
122 |     // Attempt to get more details from common API error structures
123 |     if (error.response && error.response.data && error.response.data.error) {
124 |       detailedMessage += ` | API Error: ${JSON.stringify(
125 |         error.response.data.error
126 |       )}`;
127 |     } else if (error.error && error.error.message) {
128 |       // Anthropic SDK style
129 |       detailedMessage += ` | API Error: ${error.error.message}`;
130 |     }
131 |     throw new Error(
132 |       `LLM API call failed for ${provider} (${modelName}): ${detailedMessage}`
133 |     );
134 |   }
135 | }
```
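
The catch block at the bottom of `getLLMReview` assembles a detailed message from several possible error shapes. A standalone sketch of that assembly (the shapes checked, `error.cause`, `error.response.data.error`, and `error.error.message`, mirror the code above; they are heuristics for common API error structures, not a guaranteed SDK contract):

```typescript
// Standalone sketch of the error-detail assembly in getLLMReview's catch block.
function describeLlmError(error: any): string {
  let detailedMessage: string = error.message;
  if (error.cause) {
    detailedMessage += ` | Cause: ${JSON.stringify(error.cause)}`;
  }
  // Axios-style nested error payload.
  if (error.response?.data?.error) {
    detailedMessage += ` | API Error: ${JSON.stringify(error.response.data.error)}`;
  } else if (error.error?.message) {
    // Anthropic SDK style: error object hung off the thrown error.
    detailedMessage += ` | API Error: ${error.error.message}`;
  }
  return detailedMessage;
}
```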

--------------------------------------------------------------------------------
/src/git-utils.ts:
--------------------------------------------------------------------------------

```typescript
  1 | import { execSync, ExecSyncOptionsWithStringEncoding } from "child_process";
  2 | import { ReviewTarget, isDebugMode } from "./config.js"; // Ensure .js for ESM NodeNext
  3 | 
  4 | /**
  5 |  * Gets the git diff for the specified target.
  6 |  * 
  7 |  * @param target - The git target to review ('staged', 'HEAD', or 'branch_diff')
  8 |  * @param baseBranch - For 'branch_diff' target, the base branch/commit to compare against
  9 |  * @returns The git diff as a string or a message if no changes are found
 10 |  * @throws Error if not in a git repository, or if git encounters any errors
 11 |  * 
 12 |  * Note: For branch_diff, this function assumes the remote is named 'origin'.
 13 |  * If your repository uses a different remote name, this operation may fail.
 14 |  */
 15 | export function getGitDiff(target: ReviewTarget, baseBranch?: string): string {
 16 |   const execOptions: ExecSyncOptionsWithStringEncoding = {
 17 |     encoding: "utf8",
 18 |     maxBuffer: 20 * 1024 * 1024, // Increased to 20MB buffer
 19 |     stdio: ["pipe", "pipe", "pipe"], // pipe stderr to catch git errors
 20 |   };
 21 | 
 22 |   let command: string = "";
 23 | 
 24 |   try {
 25 |     // Verify it's a git repository first
 26 |     execSync("git rev-parse --is-inside-work-tree", {
 27 |       ...execOptions,
 28 |       stdio: "ignore",
 29 |     });
 30 |   } catch (error) {
 31 |     console.error(
 32 |       "[MCP Server Git] Current directory is not a git repository or git is not found."
 33 |     );
 34 |     throw new Error(
 35 |       "Execution directory is not a git repository or git command is not available. Please run from a git project root."
 36 |     );
 37 |   }
 38 | 
 39 |   try {
 40 |     switch (target) {
 41 |       case "staged":
 42 |         command = "git diff --staged --patch-with-raw --unified=10"; // More context
 43 |         break;
 44 |       case "HEAD":
 45 |         command = "git diff HEAD --patch-with-raw --unified=10";
 46 |         break;
 47 |       case "branch_diff":
 48 |         if (!baseBranch || baseBranch.trim() === "") {
 49 |           throw new Error(
 50 |             "Base branch/commit is required for 'branch_diff' target and cannot be empty."
 51 |           );
 52 |         }
 53 |         // Sanitize baseBranch to prevent command injection
 54 |         // Only allow alphanumeric characters, underscore, dash, dot, and forward slash
 55 |         const sanitizedBaseBranch = baseBranch.replace(
 56 |           /[^a-zA-Z0-9_.\-/]/g,
 57 |           ""
 58 |         );
 59 |         if (sanitizedBaseBranch !== baseBranch) {
 60 |           throw new Error(
 61 |             `Invalid characters in base branch name. Only alphanumeric characters, underscore, dash, dot, and forward slash are allowed. Received: "${baseBranch}"`
 62 |           );
 63 |         }
 64 |         // Fetch the base branch to ensure the diff is against the latest version of it
 65 |         // Note: This assumes the remote is named 'origin'
 66 |         const fetchCommand = `git fetch origin ${sanitizedBaseBranch}:${sanitizedBaseBranch} --no-tags --quiet`;
 67 |         try {
 68 |           execSync(fetchCommand, execOptions);
 69 |         } catch (fetchError: any) {
 70 |           // Log a warning but proceed; the branch might be local or already up-to-date
 71 |           console.warn(
 72 |             `[MCP Server Git] Warning during 'git fetch' for base branch '${sanitizedBaseBranch}': ${fetchError.message}. Diff will proceed with local state.`
 73 |           );
 74 |         }
 75 |         command = `git diff ${sanitizedBaseBranch}...HEAD --patch-with-raw --unified=10`;
 76 |         break;
 77 |       default:
 78 |         // This case should ideally be caught by Zod validation on parameters
 79 |         throw new Error(`Unsupported git diff target: ${target}`);
 80 |     }
 81 | 
 82 |     // Only log the command if in debug mode (to stderr, keeping stdout clean for MCP)
 83 |     if (isDebugMode()) {
 84 |       console.error(`[MCP Server Git] Executing: ${command}`);
 85 |     }
 86 |     
 87 |     // Execute the command (execOptions has encoding:'utf8' so the result should already be a string)
 88 |     const diffOutput = execSync(command, execOptions);
 89 |     
 90 |     // Ensure we always have a string to work with
 91 |     // This is for type safety and to handle any unexpected Buffer return types
 92 |     const diffString = Buffer.isBuffer(diffOutput) ? diffOutput.toString('utf8') : String(diffOutput);
 93 |     
 94 |     if (!diffString.trim()) {
 95 |       return "No changes found for the specified target.";
 96 |     }
 97 |     return diffString;
 98 |   } catch (error: any) {
 99 |     const errorMessage =
100 |       error.stderr?.toString().trim() || error.message || "Unknown git error";
101 |     console.error(
102 |       `[MCP Server Git] Error getting git diff for target "${target}" (base: ${
103 |         baseBranch || "N/A"
104 |       }):`
105 |     );
106 |     console.error(`[MCP Server Git] Command: ${command || "N/A"}`);
107 |     
108 |     // Only log the full error details in debug mode
109 |     if (isDebugMode()) {
110 |       console.error(
111 |         `[MCP Server Git] Stderr: ${error.stderr?.toString().trim()}`
112 |       );
113 |       console.error(
114 |         `[MCP Server Git] Stdout: ${error.stdout?.toString().trim()}`
115 |       );
116 |     }
117 |     
118 |     throw new Error(
119 |       `Failed to get git diff. Git error: ${errorMessage}. Ensure you are in a git repository and the target/base is valid.`
120 |     );
121 |   }
122 | }
```
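
The `branch_diff` sanitization above can be exercised in isolation. The snippet below is an illustrative sketch, not part of the repository: the helper name `sanitizeBranchName` is invented for the example, but the allow-list regex and the mismatch check mirror the ones in `getGitDiff`:

```typescript
// Illustrative sketch: replicates the allow-list used by getGitDiff for
// 'branch_diff' targets. Any character outside [a-zA-Z0-9_.-/] causes a
// mismatch between input and sanitized output, which is treated as an error.
function sanitizeBranchName(baseBranch: string): string {
  const sanitized = baseBranch.replace(/[^a-zA-Z0-9_.\-/]/g, "");
  if (sanitized !== baseBranch) {
    throw new Error(`Invalid characters in base branch name: "${baseBranch}"`);
  }
  return sanitized;
}

// "feature/login-v2" contains only allowed characters and passes through.
console.log(sanitizeBranchName("feature/login-v2"));

// "main;rm -rf /" contains ';' and spaces, so it is rejected rather than
// silently stripped -- stripping would change which ref gets diffed.
try {
  sanitizeBranchName("main;rm -rf /");
} catch {
  console.log("rejected");
}
```

Rejecting on mismatch (instead of quietly using the stripped name) is the safer design: it surfaces typos and injection attempts to the caller rather than diffing against an unintended ref.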

--------------------------------------------------------------------------------
/tests/git-utils.test.ts:
--------------------------------------------------------------------------------

```typescript
  1 | import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
  2 | import { getGitDiff } from '../src/git-utils.js';
  3 | import { execSync } from 'child_process';
  4 | 
  5 | // Mock the child_process module
  6 | vi.mock('child_process', () => ({
  7 |   execSync: vi.fn(),
  8 | }));
  9 | 
 10 | describe('git-utils module', () => {
 11 |   beforeEach(() => {
 12 |     // Reset mocks between tests
 13 |     vi.resetAllMocks();
 14 |   });
 15 |   
 16 |   describe('getGitDiff()', () => {
 17 |     it('should throw an error if not in a git repository', () => {
 18 |       // Mock the execSync to throw an error for the git repo check
 19 |       vi.mocked(execSync).mockImplementationOnce(() => {
 20 |         throw new Error('Not a git repository');
 21 |       });
 22 |       
 23 |       expect(() => getGitDiff('HEAD')).toThrow(/not a git repository/i);
 24 |     });
 25 |     
 26 |     it('should handle staged changes correctly', () => {
 27 |       // Mock successful git repo check
 28 |       vi.mocked(execSync).mockImplementationOnce(() => Buffer.from('true'));
 29 |       
 30 |       // Mock the diff command - this is the second execSync call in the function
 31 |       vi.mocked(execSync).mockImplementationOnce(() => 
 32 |         Buffer.from('diff --git a/file.js b/file.js\nsample diff output')
 33 |       );
 34 |       
 35 |       const result = getGitDiff('staged');
 36 |       expect(result).toContain('sample diff output');
 37 |     });
 38 |     
 39 |     it('should handle HEAD changes correctly', () => {
 40 |       // Mock successful git repo check
 41 |       vi.mocked(execSync).mockImplementationOnce(() => Buffer.from('true'));
 42 |       
 43 |       // Mock the diff command - this is the second execSync call in the function
 44 |       vi.mocked(execSync).mockImplementationOnce(() => 
 45 |         Buffer.from('diff --git a/file.js b/file.js\nHEAD diff output')
 46 |       );
 47 |       
 48 |       const result = getGitDiff('HEAD');
 49 |       expect(result).toContain('HEAD diff output');
 50 |     });
 51 |     
 52 |     it('should return "No changes found" message when diff is empty', () => {
 53 |       // Mock successful git repo check
 54 |       vi.mocked(execSync).mockImplementationOnce(() => Buffer.from('true'));
 55 |       
 56 |       // Mock empty diff output - this is the second execSync call
 57 |       vi.mocked(execSync).mockImplementationOnce(() => Buffer.from(''));
 58 |       
 59 |       const result = getGitDiff('HEAD');
 60 |       expect(result).toBe('No changes found for the specified target.');
 61 |     });
 62 |     
 63 |     it('should handle branch_diff correctly with successful fetch', () => {
 64 |       // Mock successful git repo check
 65 |       vi.mocked(execSync).mockImplementationOnce(() => Buffer.from('true'));
 66 |       
 67 |       // Mock successful git fetch
 68 |       vi.mocked(execSync).mockImplementationOnce(() => Buffer.from(''));
 69 |       
 70 |       // Mock the diff command
 71 |       vi.mocked(execSync).mockImplementationOnce(() => 
 72 |         Buffer.from('diff --git a/file.js b/file.js\nbranch diff output')
 73 |       );
 74 |       
 75 |       const result = getGitDiff('branch_diff', 'main');
 76 |       expect(result).toContain('branch diff output');
 77 |     });
 78 |     
 79 |     it('should proceed with branch_diff even if fetch fails', () => {
 80 |       // Mock successful git repo check
 81 |       vi.mocked(execSync).mockImplementationOnce(() => Buffer.from('true'));
 82 |       
 83 |       // Mock failed git fetch
 84 |       vi.mocked(execSync).mockImplementationOnce(() => {
 85 |         throw new Error('fetch failed');
 86 |       });
 87 |       
 88 |       // Mock the diff command
 89 |       vi.mocked(execSync).mockImplementationOnce(() => 
 90 |         Buffer.from('diff --git a/file.js b/file.js\nlocal branch diff output')
 91 |       );
 92 |       
 93 |       const result = getGitDiff('branch_diff', 'main');
 94 |       expect(result).toContain('local branch diff output');
 95 |     });
 96 |     
 97 |     it('should throw error for branch_diff with empty baseBranch', () => {
 98 |       // Mock successful git repo check
 99 |       vi.mocked(execSync).mockImplementationOnce(() => Buffer.from('true'));
100 |       
101 |       expect(() => getGitDiff('branch_diff', '')).toThrow(/required for 'branch_diff'/i);
102 |     });
103 |     
104 |     it('should throw error for branch_diff with invalid characters', () => {
105 |       // Mock successful git repo check
106 |       vi.mocked(execSync).mockImplementationOnce(() => Buffer.from('true'));
107 |       
108 |       expect(() => getGitDiff('branch_diff', 'main;rm -rf /')).toThrow(/invalid characters in base branch/i);
109 |     });
110 |     
111 |     it('should sanitize branch name correctly', () => {
112 |       // Mock successful git repo check
113 |       vi.mocked(execSync).mockImplementationOnce(() => Buffer.from('true'));
114 |       
115 |       // Mock successful git fetch - check that command has sanitized branch
116 |       vi.mocked(execSync).mockImplementationOnce((command) => {
117 |         expect(command).toContain('git fetch origin feature/branch:feature/branch');
118 |         return Buffer.from('');
119 |       });
120 |       
121 |       // Mock the diff command - check that command has sanitized branch
122 |       vi.mocked(execSync).mockImplementationOnce((command) => {
123 |         expect(command).toContain('git diff feature/branch...HEAD');
124 |         return Buffer.from('branch diff output');
125 |       });
126 |       
127 |       const result = getGitDiff('branch_diff', 'feature/branch');
128 |       expect(result).toContain('branch diff output');
129 |     });
130 |   });
131 | });
132 | 
```

--------------------------------------------------------------------------------
/src/index.ts:
--------------------------------------------------------------------------------

```typescript
  1 | #!/usr/bin/env node
  2 | /**
  3 |  * MCP Server for performing code reviews using LLMs.
  4 |  * 
  5 |  * IMPORTANT: All MCP server logs are written to stderr (via console.error/warn)
  6 |  * so that stdout remains reserved exclusively for MCP protocol messages.
  7 |  */
  8 | import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
  9 | import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
 10 | import { CodeReviewToolParamsSchema, CodeReviewToolParams, isDebugMode } from "./config.js";
 11 | import { getGitDiff } from "./git-utils.js";
 12 | import { getLLMReview } from "./llm-service.js";
 13 | import { CoreMessage } from "ai";
 14 | import { readFileSync } from "fs";
 15 | import { fileURLToPath } from "url";
 16 | import { dirname, resolve } from "path";
 17 | 
 18 | // Get package.json data using file system
 19 | const __filename = fileURLToPath(import.meta.url);
 20 | const __dirname = dirname(__filename);
 21 | const packagePath = resolve(__dirname, "../package.json");
 22 | const pkg = JSON.parse(readFileSync(packagePath, "utf8"));
 23 | 
 24 | // Maximum number of transport connection retry attempts
 25 | const MAX_CONNECTION_ATTEMPTS = 3;
 26 | const CONNECTION_RETRY_DELAY_MS = 2000;
 27 | 
 28 | async function main() {
 29 |   console.error("[MCP Server] Initializing Code Reviewer MCP Server...");
 30 | 
 31 |   const server = new McpServer({
 32 |     name: pkg.name,
 33 |     version: pkg.version,
 34 |     capabilities: {
 35 |       tools: { listChanged: false }, // Tool list is static
 36 |     },
 37 |   });
 38 | 
 39 |   // Register the code review tool
 40 |   registerCodeReviewTool(server);
 41 | 
 42 |   // Set up the MCP transport with connection retry logic
 43 |   await setupTransport(server);
 44 | }
 45 | 
 46 | /**
 47 |  * Registers the code review tool with the MCP server.
 48 |  * 
 49 |  * @param server - The MCP server instance
 50 |  */
 51 | function registerCodeReviewTool(server: McpServer) {
 52 |   server.tool(
 53 |     "perform_code_review",
 54 |     "Performs a code review using a specified LLM on git changes. Requires being run from the root of a git repository.",
 55 |     CodeReviewToolParamsSchema.shape,
 56 |     async (params: CodeReviewToolParams) => {
 57 |       try {
 58 |         console.error(
 59 |           `[MCP Server Tool] Received 'perform_code_review' request. Target: ${params.target}, Provider: ${params.llmProvider}, Model: ${params.modelName}`
 60 |         );
 61 | 
 62 |         // Step 1: Get the diff from git
 63 |         const diffResult = await getGitDiffForReview(params);
 64 |         if (diffResult.noChanges) {
 65 |           return {
 66 |             content: [
 67 |               { type: "text", text: "No changes detected for review." },
 68 |             ],
 69 |           };
 70 |         }
 71 | 
 72 |         // Step 2: Prepare LLM prompt and get the review
 73 |         const reviewResult = await generateLLMReview(params, diffResult.diff);
 74 | 
 75 |         return {
 76 |           content: [{ type: "text", text: reviewResult }],
 77 |           isError: false, // Explicitly set isError
 78 |         };
 79 |       } catch (error: any) {
 80 |         console.error(
 81 |           "[MCP Server Tool] Error in 'perform_code_review' tool:",
 82 |           error.stack || error.message
 83 |         );
 84 |         return {
 85 |           isError: true,
 86 |           content: [
 87 |             {
 88 |               type: "text",
 89 |               text: `Error performing code review: ${error.message}`,
 90 |             },
 91 |           ],
 92 |         };
 93 |       }
 94 |     }
 95 |   );
 96 | }
 97 | 
 98 | /**
 99 |  * Gets the git diff for review based on the provided parameters.
100 |  * 
101 |  * @param params - Code review tool parameters
102 |  * @returns Object with the diff and a flag indicating if there are no changes
103 |  */
104 | async function getGitDiffForReview(params: CodeReviewToolParams): Promise<{ diff: string; noChanges: boolean }> {
105 |   const diff = getGitDiff(params.target, params.diffBase);
106 |   
107 |   if (diff === "No changes found for the specified target.") {
108 |     console.error("[MCP Server Tool] No changes detected for review.");
109 |     return { diff: "", noChanges: true };
110 |   }
111 |   
112 |   if (isDebugMode()) {
113 |     console.error(
114 |       `[MCP Server Tool] Git diff obtained successfully. Length: ${diff.length} chars.`
115 |     );
116 |   }
117 |   
118 |   return { diff, noChanges: false };
119 | }
120 | 
121 | /**
122 |  * Generates a code review using the specified LLM based on the git diff.
123 |  * 
124 |  * @param params - Code review tool parameters
125 |  * @param diff - The git diff to review
126 |  * @returns The generated code review
127 |  */
128 | async function generateLLMReview(params: CodeReviewToolParams, diff: string): Promise<string> {
129 |   const systemPrompt = `You are an expert code reviewer. Your task is to review the provided code changes (git diff format) and offer constructive feedback.
130 | ${params.projectContext ? `Project Context: ${params.projectContext}\n` : ""}
131 | The changes were made as part of the following task: "${params.taskDescription}"
132 | ${
133 |   params.reviewFocus
134 |     ? `Please specifically focus your review on: "${params.reviewFocus}"\n`
135 |     : ""
136 | }
137 | Provide your review in a clear, concise, and actionable markdown format. Highlight potential bugs, suggest improvements for readability, maintainability, performance, and adherence to best practices. If you see positive aspects, mention them too. Structure your review logically, perhaps by file or by theme.`;
138 | 
139 |   const userMessages: CoreMessage[] = [
140 |     {
141 |       role: "user",
142 |       content: `Please review the following code changes (git diff). Ensure your review is thorough and actionable:\n\n\`\`\`diff\n${diff}\n\`\`\``,
143 |     },
144 |   ];
145 | 
146 |   // Use the provided maxTokens parameter or default value
147 |   const maxTokens = params.maxTokens || 32000;
148 | 
149 |   const review = await getLLMReview(
150 |     params.llmProvider,
151 |     params.modelName,
152 |     systemPrompt,
153 |     userMessages,
154 |     maxTokens
155 |   );
156 |   
157 |   console.error(`[MCP Server Tool] LLM review generated successfully.`);
158 |   return review;
159 | }
160 | 
161 | /**
162 |  * Sets up the MCP transport with connection retry logic.
163 |  * 
164 |  * @param server - The MCP server instance
165 |  */
166 | async function setupTransport(server: McpServer) {
167 |   let connectionAttempts = 0;
168 |   let connected = false;
169 | 
170 |   while (!connected && connectionAttempts < MAX_CONNECTION_ATTEMPTS) {
171 |     connectionAttempts++;
172 |     try {
173 |       const transport = new StdioServerTransport();
174 |       await server.connect(transport);
175 |       
176 |       // Add event handler for disconnect
177 |       transport.onclose = () => {
178 |         console.error("[MCP Server] Transport connection closed unexpectedly.");
179 |         process.exit(1); // Exit process to allow restart by supervisor
180 |       };
181 |       
182 |       connected = true;
183 |       console.error(
184 |         "[MCP Server] Code Reviewer MCP Server is running via stdio and connected to transport."
185 |       );
186 |     } catch (error) {
187 |       console.error(
188 |         `[MCP Server] Connection attempt ${connectionAttempts}/${MAX_CONNECTION_ATTEMPTS} failed:`,
189 |         error
190 |       );
191 |       
192 |       if (connectionAttempts < MAX_CONNECTION_ATTEMPTS) {
193 |         console.error(`[MCP Server] Retrying in ${CONNECTION_RETRY_DELAY_MS/1000} seconds...`);
194 |         await new Promise(resolve => setTimeout(resolve, CONNECTION_RETRY_DELAY_MS));
195 |       } else {
196 |         console.error("[MCP Server] Maximum connection attempts exceeded. Exiting.");
197 |         process.exit(1); 
198 |       }
199 |     }
200 |   }
201 | }
202 | 
203 | // Graceful shutdown
204 | process.on("SIGINT", () => {
205 |   console.error("[MCP Server] Received SIGINT. Shutting down...");
206 |   // Perform any cleanup if necessary
207 |   process.exit(0);
208 | });
209 | 
210 | process.on("SIGTERM", () => {
211 |   console.error("[MCP Server] Received SIGTERM. Shutting down...");
212 |   // Perform any cleanup if necessary
213 |   process.exit(0);
214 | });
215 | 
216 | // Handle unhandled promise rejections
217 | process.on("unhandledRejection", (reason, promise) => {
218 |   console.error("[MCP Server] Unhandled Promise Rejection:", reason);
219 |   // Continue running but log the error
220 | });
221 | 
222 | main().catch((error) => {
223 |   console.error(
224 |     "[MCP Server] Unhandled fatal error in main execution:",
225 |     error.stack || error.message
226 |   );
227 |   process.exit(1);
228 | });
```
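
For reference, here is a hedged sketch of the argument shape that `perform_code_review` accepts, inferred from how `index.ts` reads `params`. The field names come from the source, but the interface itself and all concrete values are illustrative; the authoritative definition is the Zod schema `CodeReviewToolParamsSchema` in `src/config.ts`, which is not shown in this dump:

```typescript
// Illustrative sketch only -- field names inferred from usage in index.ts
// and git-utils.ts; optionality and value types are assumptions.
interface CodeReviewParams {
  target: "staged" | "HEAD" | "branch_diff"; // diff targets handled in git-utils.ts
  diffBase?: string;      // required when target is "branch_diff"
  llmProvider: string;    // e.g. "openai" | "anthropic" | "google", per .env.example
  modelName: string;      // provider-specific model identifier (value below is made up)
  taskDescription: string;
  projectContext?: string;
  reviewFocus?: string;
  maxTokens?: number;     // generateLLMReview falls back to 32000 when omitted
}

const exampleParams: CodeReviewParams = {
  target: "branch_diff",
  diffBase: "main",
  llmProvider: "openai",
  modelName: "gpt-4o",
  taskDescription: "Add retry logic to the transport layer",
  maxTokens: 16000,
};

console.log(exampleParams.target);
```

A client would send an object of this shape as the tool-call arguments; the server validates it against the real schema before running the git diff and LLM review steps.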