# Directory Structure

```
├── .env.example
├── .gitignore
├── Dockerfile
├── examples
│   ├── claude-commands
│   │   ├── review-branch-custom-gemini.md
│   │   ├── review-branch-develop-openai.md
│   │   ├── review-branch-main-claude.md
│   │   ├── review-head-claude.md
│   │   ├── review-head-gemini.md
│   │   ├── review-head-openai.md
│   │   ├── review-staged-claude.md
│   │   ├── review-staged-gemini.md
│   │   ├── review-staged-maintainability-gemini.md
│   │   ├── review-staged-openai.md
│   │   ├── review-staged-performance-openai.md
│   │   └── review-staged-security-claude.md
│   ├── cursor-rules
│   │   └── project.mdc
│   └── windsurf-workflows
│       ├── review-branch.md
│       ├── review-head.md
│       ├── review-security.md
│       └── review-staged.md
├── LICENSE
├── package-lock.json
├── package.json
├── README.md
├── smithery.yaml
├── src
│   ├── config.ts
│   ├── git-utils.ts
│   ├── index.ts
│   └── llm-service.ts
├── tests
│   ├── config.test.ts
│   └── git-utils.test.ts
├── tsconfig.json
├── tsconfig.test.json
└── vitest.config.ts
```

# Files

--------------------------------------------------------------------------------
/.env.example:
--------------------------------------------------------------------------------

```
# Provider API Keys
OPENAI_API_KEY=your-openai-api-key
ANTHROPIC_API_KEY=your-anthropic-api-key
GOOGLE_API_KEY=your-google-api-key

```

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
# dev
.yarn/
!.yarn/releases
.vscode/*
!.vscode/launch.json
!.vscode/*.code-snippets
.idea/workspace.xml
.idea/usage.statistics.xml
.idea/shelf

# deps
node_modules/

# env
.env
.env.production

# logs
logs/
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
pnpm-debug.log*
lerna-debug.log*

dist

# misc
.DS_Store

**/.claude/
requirements.md
ai_docs
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
# @vibesnipe/code-review-mcp

**Version: 1.0.0**

An MCP (Model Context Protocol) server that provides a powerful tool to perform code reviews using various Large Language Models (LLMs). This server is designed to be seamlessly integrated with AI coding assistants like Anthropic's Claude Code, Cursor, Windsurf, or other MCP-compatible clients.

> **Note:** This tool was initially created for Claude Code but has since been expanded to support other AI IDEs and Claude Desktop. See the integration guides for [Claude Code](#integration-with-claude-code), [Cursor](#cursor-integration), and [Windsurf](#windsurf-integration).

It analyzes `git diff` output for staged changes, differences from HEAD, or differences between branches, providing contextualized reviews based on your task description and project details.

## Features

-   Reviews git diffs (staged changes, current HEAD, branch differences).
-   Integrates with Google Gemini, OpenAI, and Anthropic models through the Vercel AI SDK.
-   Allows specification of task description, review focus, and overall project context for tailored reviews.
-   Outputs reviews in clear, actionable markdown format.
-   Designed to be run from the root of any Git repository you wish to analyze.
-   Easily installable and runnable via `npx` for immediate use.

## Compatibility

-   **Node.js**: Version 18 or higher is required.
-   **Operating Systems**: Works on Windows, macOS, and Linux.
-   **Git**: Version 2.20.0 or higher recommended.

## Prerequisites

-   **Node.js**: Version 18 or higher is required.
-   **Git**: Must be installed and accessible in your system's PATH. The server executes `git` commands.
-   **API Keys for LLMs**: You need API keys for the LLM providers you intend to use. These should be set as environment variables:
    -   `GOOGLE_API_KEY` for Google models.
    -   `OPENAI_API_KEY` for OpenAI models.
    -   `ANTHROPIC_API_KEY` for Anthropic models.
    These can be set globally in your environment or, conveniently, in a `.env` file placed in the root of the project you are currently reviewing. The server will automatically try to load it.
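
    For example, to set the keys for a single shell session (the values below are placeholders; export only the keys for the providers you plan to use):
    ```bash
    export GOOGLE_API_KEY="your_google_api_key"
    export OPENAI_API_KEY="your_openai_api_key"
    export ANTHROPIC_API_KEY="your_anthropic_api_key"
    ```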

## Installation & Usage

The primary way to use this server is with `npx`, which ensures you're always using the latest version without needing a global installation.

### Recommended: Using with `npx`

1.  **Navigate to Your Project:**
    Open your terminal and change to the root directory of the Git repository you want to review.
    ```bash
    cd /path/to/your-git-project
    ```

2.  **Run the MCP Server:**
    Execute the following command:
    ```bash
    npx -y @vibesnipe/code-review-mcp
    ```

    This command will download (if not already cached) and run the `@vibesnipe/code-review-mcp` server. You should see output in your terminal similar to:
    `[MCP Server] Code Reviewer MCP Server is running via stdio and connected to transport.`
    The server is now running and waiting for an MCP client (like Claude Code, Cursor, or Windsurf) to connect.



### Installing via Smithery

To install code-review-mcp for Claude Desktop automatically via [Smithery](https://smithery.ai/server/@praneybehl/code-review-mcp):

```bash
npx -y @smithery/cli install @praneybehl/code-review-mcp --client claude
```


## Integration with Claude Code

Once the `code-review-mcp` server is running (ideally via `npx` from your project's root):

1.  **Add as an MCP Server in Claude Code:**
    In a separate terminal where Claude Code is running (or will be run), configure it to use this MCP server.
    The command Claude Code uses to launch the server is `code-review-mcp` if the package is installed globally and on your PATH, or the `npx ...` command if you prefer Claude Code to always fetch the latest version.

    To add it to Claude Code:
    ```bash
    claude mcp add code-reviewer -s <user|local> -e GOOGLE_API_KEY="key" -- code-review-mcp 
    ```
    If you want Claude Code to use `npx` (which is a good practice to ensure version consistency if you don't install globally):
    ```bash
    claude mcp add code-reviewer -s <user|local> -e GOOGLE_API_KEY="key" -- npx -y @vibesnipe/code-review-mcp
    ```
    This tells Claude Code how to launch the MCP server when the "code-reviewer" toolset is requested. This configuration can be project-specific (saved in `.mcp.json` at your project root) or user-specific (global Claude Code settings).

2.  **Use Smart Slash Commands in Claude Code:**
    Create custom slash command files in your project's `.claude/commands/` directory to easily invoke the review tool. The package includes several example commands in the `examples/claude-commands/` directory that you can copy to your project.

    These slash commands don't require you to manually specify task descriptions or project context; they leverage Claude Code's existing knowledge of your project and the task you're currently working on.

    Example invocation using a slash command in Claude Code:
    ```
    claude > /project:review-staged-claude
    ```

    No additional arguments needed! Claude will understand what you're currently working on and use that as context for the review.

    For commands that require arguments (like `review-branch-custom-gemini.md` which uses a custom branch name), you can pass them directly after the command:
    ```
    claude > /project:review-branch-custom-gemini main
    ```
    This will pass "main" as the $ARGUMENTS_BASE_BRANCH parameter.

## Integration with Modern AI IDEs

### Cursor Integration

Cursor is a popular AI-powered IDE based on VS Code that supports MCP servers. Here's how to integrate the code review MCP server with Cursor:

1. **Configure Cursor's Rules for Code Review**:
   
   Create or open the `.cursor/rules/project.mdc` file in your project and add the following section:

   ```markdown
   ## Slash Commands

   /review-staged: Use the perform_code_review tool from the code-reviewer MCP server to review staged changes. Use anthropic provider with claude-3-7-sonnet-20250219 model. Base the task description on our current conversation context and focus on code quality and best practices.

   /review-head: Use the perform_code_review tool from the code-reviewer MCP server to review all uncommitted changes (HEAD). Use openai provider with o3 model. Base the task description on our current conversation context and focus on code quality and best practices.

   /review-security: Use the perform_code_review tool from the code-reviewer MCP server to review staged changes. Use anthropic provider with claude-3-5-sonnet-20241022 model. Base the task description on our current conversation context and specifically focus on security vulnerabilities, input validation, and secure coding practices.
   ```

2. **Add the MCP Server in Cursor**:
   
   - Open Cursor settings
   - Navigate to the MCP Servers section
   - Add a new MCP server with the following JSON configuration:
   ```json
   "code-reviewer": {
     "command": "npx",
     "args": ["-y", "@vibesnipe/code-review-mcp"],
     "env": {
       "GOOGLE_API_KEY": "your-google-api-key",
       "OPENAI_API_KEY": "your-openai-api-key",
       "ANTHROPIC_API_KEY": "your-anthropic-api-key"
     }
   }
   ```

3. **Using the Commands**:
   
   In Cursor's AI chat interface, you can now simply type:
   ```
   /review-staged
   ```
   And Cursor will use the `code-review-mcp` server to perform a code review of your staged changes.

### Windsurf Integration

Windsurf (formerly Codeium) is another advanced AI IDE that supports custom workflows via slash commands. Here's how to integrate with Windsurf:

1. **Configure the MCP Server in Windsurf**:
   
   - Open Windsurf
   - Click on the Customizations icon in the top right of Cascade
   - Navigate to the MCP Servers panel
   - Add a new MCP server with the following JSON configuration:
   ```json
   "code-reviewer": {
     "command": "npx",
     "args": ["-y", "@vibesnipe/code-review-mcp"],
     "env": {
       "GOOGLE_API_KEY": "your-google-api-key",
       "OPENAI_API_KEY": "your-openai-api-key",
       "ANTHROPIC_API_KEY": "your-anthropic-api-key"
     }
   }
   ```

2. **Create Workflows for Code Review**:
   
   Windsurf supports workflows that can be invoked via slash commands. Create a file in `.windsurf/workflows/review-staged.md`:

   ```markdown
   # Review Staged Changes
   
   Perform a code review on the currently staged changes in the repository.
   
   ## Step 1
   
   Use the perform_code_review tool from the code-reviewer MCP server with the following parameters:
   ```
   {
     "target": "staged",
     "llmProvider": "anthropic",
     "modelName": "claude-3-7-sonnet-20250219",
     "taskDescription": "The task I am currently working on in this codebase",
     "reviewFocus": "General code quality, security best practices, and performance considerations",
     "projectContext": "This project is being developed in Windsurf. Please review the code carefully for any issues."
   }
   ```
   ```

   Similarly, create workflows for other review types as needed.

3. **Using the Workflows**:
   
   In Windsurf's Cascade interface, you can invoke these workflows with:
   ```
   /review-staged
   ```

   Windsurf will then execute the workflow, which will use the `code-review-mcp` server to perform the code review.

## Tool Provided by this MCP Server

### `perform_code_review`

**Description:**
Performs a code review using a specified Large Language Model on git changes within the current Git repository. This tool must be run from the root directory of the repository being reviewed.

**Input Schema (Parameters):**

The tool expects parameters matching the `CodeReviewToolParamsSchema`:

-   `target` (enum: `'staged'`, `'HEAD'`, `'branch_diff'`):
    Specifies the set of changes to review.
    -   `'staged'`: Reviews only the changes currently staged for commit.
    -   `'HEAD'`: Reviews uncommitted changes (both staged and unstaged) against the last commit.
    -   `'branch_diff'`: Reviews changes between a specified base branch/commit and the current HEAD. Requires `diffBase` parameter.
-   `taskDescription` (string):
    A clear and concise description of the task, feature, or bugfix that led to the code changes. This provides crucial context for the LLM reviewer. (e.g., "Implemented password reset functionality via email OTP.")
-   `llmProvider` (enum: `'google'`, `'openai'`, `'anthropic'`):
    The Large Language Model provider to use for the review.
-   `modelName` (string):
    The specific model name from the chosen provider. Examples:
    -   Google: `'gemini-2.5-pro-preview-05-06'`, `'gemini-2.5-flash-preview-04-17'`
        *(Ref: https://ai-sdk.dev/providers/ai-sdk-providers/google-generative-ai#model-capabilities)*
    -   OpenAI: `'o4-mini'`, `'gpt-4.1'`, `'gpt-4.1-mini'`, `'o3'`
        *(Ref: https://ai-sdk.dev/providers/ai-sdk-providers/openai#model-capabilities)*
    -   Anthropic: `'claude-3-7-sonnet-20250219'`, `'claude-3-5-sonnet-20241022'`
        *(Ref: https://ai-sdk.dev/providers/ai-sdk-providers/anthropic#model-capabilities)*
    Ensure the model selected is available via the Vercel AI SDK and your API key has access.
    
    **Note:** Model names often change as providers release new versions. Always check the latest documentation from your provider and update the model names accordingly.
-   `reviewFocus` (string, optional but recommended):
    Specific areas, concerns, or aspects you want the LLM to concentrate on during the review. (e.g., "Focus on thread safety in concurrent operations.", "Pay special attention to input validation and sanitization.", "Check for adherence to our internal style guide for React components.").
-   `projectContext` (string, optional but recommended):
    General background information about the project, its architecture, key libraries, coding standards, or any other context that would help the LLM provide a more relevant and insightful review. (e.g., "This is a high-performance microservice using Rust and Actix. Low latency is critical.", "The project follows Clean Architecture principles. Ensure new code aligns with this.").
-   `diffBase` (string, optional):
    **Required if `target` is `'branch_diff'`.** Specifies the base branch (e.g., `'main'`, `'develop'`) or a specific commit SHA to compare the current HEAD against.
-   `maxTokens` (number, optional):
    Maximum number of tokens to use for the LLM response. Defaults to 32000 if not specified. Use this parameter to optimize for faster, less expensive responses (lower value) or more comprehensive reviews (higher value).
    **Note**: In v0.11.0, the default was reduced from 60000 to 32000 tokens to better balance cost and quality.

**Output:**

-   If successful: A JSON object with `isError: false` and a `content` array containing a single text item. The `text` field holds the code review generated by the LLM in markdown format.
-   If an error occurs: A JSON object with `isError: true` and a `content` array. The `text` field will contain an error message describing the issue.
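
As an illustration, here is a minimal sketch of invoking the tool from a custom MCP client over stdio, assuming the `@modelcontextprotocol/sdk` client API; the launch command and all argument values are placeholders to adapt:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the server over stdio, the same way an MCP-capable IDE would.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "@vibesnipe/code-review-mcp"],
});
const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Call the tool with arguments matching the schema described above.
const result = await client.callTool({
  name: "perform_code_review",
  arguments: {
    target: "staged",
    taskDescription: "Implemented password reset functionality via email OTP",
    llmProvider: "anthropic",
    modelName: "claude-3-5-sonnet-20241022",
    reviewFocus: "Input validation and error handling",
  },
});
console.log(result.content);
```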

## Environment Variables

For the LLM integration to work, the `code-review-mcp` server (the process started by `npx` or the `code-review-mcp` binary) needs access to the respective API keys.

Set these in your shell environment or place them in a `.env` file in the **root directory of the project you are reviewing**:

-   **For Google Models:**
    `GOOGLE_API_KEY="your_google_api_key"`

-   **For OpenAI Models:**
    `OPENAI_API_KEY="your_openai_api_key"`

-   **For Anthropic Models:**
    `ANTHROPIC_API_KEY="your_anthropic_api_key"`

The server will automatically load variables from a `.env` file found in the current working directory (i.e., your project's root) or you can configure them directly in the MCP server configuration as shown in the examples above.

## Smart Slash Commands for Claude Code

The package includes several improved slash commands in the `examples/claude-commands/` directory that you can copy to your project's `.claude/commands/` directory. These commands take advantage of Claude Code's understanding of your project context and current task, eliminating the need for manual input.

### Available Slash Commands

| Command File | Description |
|--------------|-------------|
| `review-staged-claude.md` | Reviews staged changes using Claude 3.5 Sonnet |
| `review-staged-openai.md` | Reviews staged changes using OpenAI GPT-4.1 |
| `review-staged-gemini.md` | Reviews staged changes using Google Gemini 2.5 Pro |
| `review-head-claude.md` | Reviews all uncommitted changes using Claude 3.7 Sonnet |
| `review-head-openai.md` | Reviews all uncommitted changes using OpenAI O3 |
| `review-head-gemini.md` | Reviews all uncommitted changes using Google Gemini 2.5 Pro |
| `review-branch-main-claude.md` | Reviews changes from main branch using Claude 3.7 Sonnet |
| `review-branch-develop-openai.md` | Reviews changes from develop branch using OpenAI O4-mini |
| `review-branch-custom-gemini.md` | Reviews changes from a specified branch using Google Gemini 2.5 Flash |
| `review-staged-security-claude.md` | Security-focused review of staged changes using Claude 3.5 Sonnet |
| `review-staged-performance-openai.md` | Performance-focused review of staged changes using OpenAI O3 |
| `review-staged-maintainability-gemini.md` | Maintainability-focused review of staged changes using Google Gemini 2.5 Flash |

To use these commands:

1. Copy the desired command files from the `examples/claude-commands/` directory to your project's `.claude/commands/` directory
2. Invoke the command in Claude Code with `/project:command-name` (e.g., `/project:review-staged-claude`)

These commands automatically use Claude's knowledge of your current task and project context, eliminating the need for lengthy manual arguments.
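
For example, assuming the package is present in `node_modules` (the path is illustrative; adjust it if you cloned the repository instead):

```bash
mkdir -p .claude/commands
cp node_modules/@vibesnipe/code-review-mcp/examples/claude-commands/review-staged-claude.md .claude/commands/
```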

## Security Considerations

- **API Key Handling**: API keys for LLM providers are sensitive credentials. This tool accesses them from environment variables or `.env` files, but does not store or transmit them beyond the necessary API calls. Consider using a secure environment variable manager for production environments.
  
- **Git Repository Analysis**: The tool analyzes your local Git repository contents. It executes Git commands and reads diff output, but does not transmit your entire codebase to the LLM, only the specific changes being reviewed.
  
- **Code Privacy**: When sending code for review to external LLM providers, be mindful of:
  - Sensitive information in code comments or strings
  - Proprietary algorithms or trade secrets
  - Authentication credentials or API keys in config files
  
- **Branch Name Sanitization**: To prevent command injection, branch names are sanitized before being used in Git commands. However, it's still good practice to avoid using unusual characters in branch names.
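
The allow-list check the server applies before running `git diff` (see `src/git-utils.ts`) is essentially:

```typescript
// Only alphanumerics, underscore, dash, dot, and forward slash are allowed
// in the base branch name; anything else is rejected outright.
const sanitizedBaseBranch = baseBranch.replace(/[^a-zA-Z0-9_.\-/]/g, "");
if (sanitizedBaseBranch !== baseBranch) {
  throw new Error(`Invalid characters in base branch name: "${baseBranch}"`);
}
```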

## Development (for the `@vibesnipe/code-review-mcp` package itself)

If you are contributing to or modifying the `@vibesnipe/code-review-mcp` package:

1.  **Clone the Repository:**
    Ensure you have the repository cloned locally.
2.  **Navigate to the Package Directory:**
    ```bash
    cd /path/to/claude-code-review-mcp
    ```
3.  **Install Dependencies:**
    ```bash
    pnpm install
    ```
4.  **Run in Development Mode:**
    This uses `tsx` for hot-reloading TypeScript changes.
    ```bash
    pnpm dev
    ```
    The server will start and log to `stderr`.

5.  **Testing with Claude Code (Local Development):**
    You'll need to tell Claude Code where your local development server script is:
    ```bash
    # From any directory where you use Claude Code
    claude mcp add local-code-reviewer -- tsx /path/to/claude-code-review-mcp/src/index.ts
    ```
    Now, when Claude Code needs the "local-code-reviewer", it will execute your source `index.ts` using `tsx`. Remember to replace `/path/to/claude-code-review-mcp/` with the actual absolute path to your repo.

## Building for Production/Publishing

From the package directory:
```bash
pnpm build
```
This compiles TypeScript to JavaScript in the `dist` directory. The `prepublishOnly` script in `package.json` ensures this command is run automatically before publishing the package to npm.

## Troubleshooting

-   **"Current directory is not a git repository..."**: Ensure you are running `npx @vibesnipe/code-review-mcp` (or the global command) from the root directory of a valid Git project.
-   **"API key for ... is not configured"**: Make sure the relevant environment variable (e.g., `OPENAI_API_KEY`) is set in the shell where you launched the MCP server OR in a `.env` file in your project's root.
-   **"Failed to get git diff. Git error: ..."**: This indicates an issue with the `git diff` command.
    -   Check if `git` is installed and in your PATH.
    -   Verify that the `target` and `diffBase` (if applicable) are valid for your repository.
    -   The error message from `git` itself should provide more clues.
-   **LLM API Errors**: Errors from the LLM providers (e.g., rate limits, invalid model name, authentication issues) will be passed through. Check the error message for details from the specific LLM API.
-   **Claude Code MCP Issues**: If Claude Code isn't finding or launching the server, double-check your `claude mcp add ...` command and ensure the command specified for the MCP server is correct and executable. Use `claude mcp list` to verify.
-   **Cursor MCP Server Issues**: If Cursor doesn't recognize the MCP server, make sure it's properly added in the settings with the correct configuration.
-   **Windsurf Workflow Errors**: If Windsurf workflows are not executing correctly, check that:
    -   The workflow files are properly formatted and located in the `.windsurf/workflows/` directory
    -   The MCP server is correctly configured in Windsurf's MCP Server panel
    -   You have the necessary API keys set up in your environment

## License

MIT License - Copyright (c) Praney Behl

```

--------------------------------------------------------------------------------
/tsconfig.test.json:
--------------------------------------------------------------------------------

```json
{
  "extends": "./tsconfig.json",
  "compilerOptions": {
    "types": ["vitest/globals", "node"]
  },
  "include": ["src/**/*", "tests/**/*"],
  "exclude": ["node_modules"]
}

```

--------------------------------------------------------------------------------
/vitest.config.ts:
--------------------------------------------------------------------------------

```typescript
/// <reference types="vitest" />
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    environment: 'node',
    globals: true,
    include: ['tests/**/*.test.ts'],
    coverage: {
      reporter: ['text', 'json', 'html'],
    },
  },
});

```

--------------------------------------------------------------------------------
/tsconfig.json:
--------------------------------------------------------------------------------

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext", // Changed for better CJS/ESM interop with Node.js ecosystem
    "moduleResolution": "NodeNext", // Changed
    "esModuleInterop": true,
    "strict": true,
    "skipLibCheck": true,
    "outDir": "./dist",
    "rootDir": "./src",
    "declaration": true,
    "sourceMap": true,
    "resolveJsonModule": true // If you import JSON files directly
  },
  "include": ["src/**/*.ts"],
  "exclude": ["node_modules", "dist", "**/*.test.ts"]
}
```

--------------------------------------------------------------------------------
/examples/claude-commands/review-staged-openai.md:
--------------------------------------------------------------------------------

```markdown
Perform a code review on the currently **staged** changes using OpenAI's **GPT-4.1**.

Use the 'perform_code_review' tool (from the 'code-reviewer' MCP server) with the following parameters:
target: "staged"
llmProvider: "openai"
modelName: "gpt-4.1"
taskDescription: "The task I am currently working on in this codebase"
reviewFocus: "General code quality, security best practices, and performance considerations"
projectContext: "This is the current project I'm working on. Look for the CLAUDE.md file in the repository root if it exists for additional project context."

```

--------------------------------------------------------------------------------
/examples/claude-commands/review-head-openai.md:
--------------------------------------------------------------------------------

```markdown
Perform a code review on all **uncommitted changes** (both staged and unstaged) using OpenAI's **O3**.

Use the 'perform_code_review' tool (from the 'code-reviewer' MCP server) with the following parameters:
target: "HEAD"
llmProvider: "openai"
modelName: "o3"
taskDescription: "The task I am currently working on in this codebase"
reviewFocus: "General code quality, security best practices, and performance considerations"
projectContext: "This is the current project I'm working on. Look for the CLAUDE.md file in the repository root if it exists for additional project context."

```

--------------------------------------------------------------------------------
/examples/claude-commands/review-staged-gemini.md:
--------------------------------------------------------------------------------

```markdown
Perform a code review on the currently **staged** changes using Google's **Gemini 2.5 Pro**.

Use the 'perform_code_review' tool (from the 'code-reviewer' MCP server) with the following parameters:
target: "staged"
llmProvider: "google"
modelName: "gemini-2.5-pro-preview-05-06"
taskDescription: "The task I am currently working on in this codebase"
reviewFocus: "General code quality, security best practices, and performance considerations"
projectContext: "This is the current project I'm working on. Look for the CLAUDE.md file in the repository root if it exists for additional project context."

```

--------------------------------------------------------------------------------
/examples/claude-commands/review-staged-claude.md:
--------------------------------------------------------------------------------

```markdown
Perform a code review on the currently **staged** changes using Anthropic's **Claude 3.5 Sonnet**.

Use the 'perform_code_review' tool (from the 'code-reviewer' MCP server) with the following parameters:
target: "staged"
llmProvider: "anthropic"
modelName: "claude-3-5-sonnet-20241022"
taskDescription: "The task I am currently working on in this codebase"
reviewFocus: "General code quality, security best practices, and performance considerations"
projectContext: "This is the current project I'm working on. Look for the CLAUDE.md file in the repository root if it exists for additional project context."

```

--------------------------------------------------------------------------------
/examples/claude-commands/review-branch-develop-openai.md:
--------------------------------------------------------------------------------

```markdown
Perform a code review of changes between the develop branch and the current HEAD using OpenAI's **O4-mini**.

Use the 'perform_code_review' tool (from the 'code-reviewer' MCP server) with the following parameters:
target: "branch_diff"
diffBase: "develop"
llmProvider: "openai"
modelName: "o4-mini"
taskDescription: "The task I am currently working on in this codebase"
reviewFocus: "General code quality, security best practices, and performance considerations"
projectContext: "This is the current project I'm working on. Look for the CLAUDE.md file in the repository root if it exists for additional project context."

```

--------------------------------------------------------------------------------
/examples/claude-commands/review-head-gemini.md:
--------------------------------------------------------------------------------

```markdown
Perform a code review on all **uncommitted changes** (both staged and unstaged) using Google's **Gemini 2.5 Pro**.

Use the 'perform_code_review' tool (from the 'code-reviewer' MCP server) with the following parameters:
target: "HEAD"
llmProvider: "google"
modelName: "gemini-2.5-pro-preview-05-06"
taskDescription: "The task I am currently working on in this codebase"
reviewFocus: "General code quality, security best practices, and performance considerations"
projectContext: "This is the current project I'm working on. Look for the CLAUDE.md file in the repository root if it exists for additional project context."

```

--------------------------------------------------------------------------------
/examples/claude-commands/review-head-claude.md:
--------------------------------------------------------------------------------

```markdown
Perform a code review on all **uncommitted changes** (both staged and unstaged) using Anthropic's **Claude 3.7 Sonnet**.

Use the 'perform_code_review' tool (from the 'code-reviewer' MCP server) with the following parameters:
target: "HEAD"
llmProvider: "anthropic"
modelName: "claude-3-7-sonnet-20250219"
taskDescription: "The task I am currently working on in this codebase"
reviewFocus: "General code quality, security best practices, and performance considerations"
projectContext: "This is the current project I'm working on. Look for the CLAUDE.md file in the repository root if it exists for additional project context."

```

--------------------------------------------------------------------------------
/examples/claude-commands/review-branch-main-claude.md:
--------------------------------------------------------------------------------

```markdown
Perform a code review of changes between the main branch and the current HEAD using Anthropic's **Claude 3.7 Sonnet**.

Use the 'perform_code_review' tool (from the 'code-reviewer' MCP server) with the following parameters:
target: "branch_diff"
diffBase: "main"
llmProvider: "anthropic"
modelName: "claude-3-7-sonnet-20250219"
taskDescription: "The task I am currently working on in this codebase"
reviewFocus: "General code quality, security best practices, and performance considerations"
projectContext: "This is the current project I'm working on. Look for the CLAUDE.md file in the repository root if it exists for additional project context."

```

--------------------------------------------------------------------------------
/examples/claude-commands/review-staged-performance-openai.md:
--------------------------------------------------------------------------------

```markdown
Perform a performance-focused code review on the currently **staged** changes using OpenAI's **O3**.

Use the 'perform_code_review' tool (from the 'code-reviewer' MCP server) with the following parameters:
target: "staged"
llmProvider: "openai"
modelName: "o3"
taskDescription: "The task I am currently working on in this codebase"
reviewFocus: "Performance optimizations, computational efficiency, memory usage, time complexity, algorithmic improvements, bottlenecks, lazy loading, and caching opportunities"
projectContext: "This is the current project I'm working on. Look for the CLAUDE.md file in the repository root if it exists for additional project context."

```

--------------------------------------------------------------------------------
/examples/claude-commands/review-staged-maintainability-gemini.md:
--------------------------------------------------------------------------------

```markdown
Perform a maintainability-focused code review on the currently **staged** changes using Google's **Gemini 2.5 Flash**.

Use the 'perform_code_review' tool (from the 'code-reviewer' MCP server) with the following parameters:
target: "staged"
llmProvider: "google"
modelName: "gemini-2.5-flash-preview-04-17"
taskDescription: "The task I am currently working on in this codebase"
reviewFocus: "Code readability, maintainability, documentation, naming conventions, SOLID principles, design patterns, abstraction, and testability"
projectContext: "This is the current project I'm working on. Look for the CLAUDE.md file in the repository root if it exists for additional project context."

```

--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------

```dockerfile
# Generated by https://smithery.ai. See: https://smithery.ai/docs/build/project-config
# Stage 1: Build
FROM node:lts-alpine AS builder
WORKDIR /app

# Install dependencies and build
COPY package.json package-lock.json tsconfig.json tsconfig.test.json ./
COPY src ./src
COPY examples ./examples
RUN npm ci --ignore-scripts && npm run build && npm prune --production

# Stage 2: Runtime
FROM node:lts-alpine
WORKDIR /app

# Copy production artifacts
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY package.json ./package.json

# Default environment
ENV NODE_ENV=production

# Expose no ports (stdio transport)
ENTRYPOINT ["node", "dist/cli.js"]

```

--------------------------------------------------------------------------------
/examples/claude-commands/review-staged-security-claude.md:
--------------------------------------------------------------------------------

```markdown
Perform a security-focused code review on the currently **staged** changes using Anthropic's **Claude 3.5 Sonnet**.

Use the 'perform_code_review' tool (from the 'code-reviewer' MCP server) with the following parameters:
target: "staged"
llmProvider: "anthropic"
modelName: "claude-3-5-sonnet-20241022"
taskDescription: "The task I am currently working on in this codebase"
reviewFocus: "Security vulnerabilities, data validation, authentication, authorization, input sanitization, sensitive data handling, and adherence to OWASP standards"
projectContext: "This is the current project I'm working on. Look for the CLAUDE.md file in the repository root if it exists for additional project context."

```

--------------------------------------------------------------------------------
/examples/windsurf-workflows/review-staged.md:
--------------------------------------------------------------------------------

```markdown
# Review Staged Changes

Perform a code review on the currently staged changes in the repository.

## Step 1

Use the perform_code_review tool from the code-reviewer MCP server with the following parameters:
```
{
  "target": "staged",
  "llmProvider": "anthropic",
  "modelName": "claude-3-7-sonnet-20250219",
  "taskDescription": "The task I am currently working on in this codebase",
  "reviewFocus": "General code quality, security best practices, and performance considerations",
  "projectContext": "This project is being developed in Windsurf. Please review the code carefully for any issues."
}
```

<!-- 
Note: 
1. Consider updating the model name to the latest available model from Anthropic
2. Customize the taskDescription with specific context for better review results
-->


```

--------------------------------------------------------------------------------
/examples/windsurf-workflows/review-head.md:
--------------------------------------------------------------------------------

```markdown
# Review All Uncommitted Changes

Perform a code review on all uncommitted changes (both staged and unstaged) against the last commit.

## Step 1

Use the perform_code_review tool from the code-reviewer MCP server with the following parameters:
```
{
  "target": "HEAD",
  "llmProvider": "openai",
  "modelName": "o3",
  "taskDescription": "The task I am currently working on in this codebase",
  "reviewFocus": "General code quality, security best practices, and performance considerations",
  "projectContext": "This project is being developed in Windsurf. Please review the code carefully for any issues."
}
```

<!-- 
Note: 
1. Consider updating the model name to the latest available model from OpenAI
2. Customize the taskDescription with specific context for better review results
-->


```

--------------------------------------------------------------------------------
/examples/claude-commands/review-branch-custom-gemini.md:
--------------------------------------------------------------------------------

```markdown
Perform a code review of changes between a specified branch and the current HEAD using Google's **Gemini 2.5 Flash**.

Use the 'perform_code_review' tool (from the 'code-reviewer' MCP server) with the following parameters:
target: "branch_diff"
diffBase: "$ARGUMENTS_BASE_BRANCH"
llmProvider: "google"
modelName: "gemini-2.5-flash-preview-04-17"
taskDescription: "The task I am currently working on in this codebase"
reviewFocus: "General code quality, security best practices, and performance considerations"
projectContext: "This is the current project I'm working on. Look for the CLAUDE.md file in the repository root if it exists for additional project context."

# Usage: 
# Invoke this command with the base branch name as an argument:
# claude > /project:review-branch-custom-gemini main
# This will compare your current HEAD against the 'main' branch.

```

--------------------------------------------------------------------------------
/smithery.yaml:
--------------------------------------------------------------------------------

```yaml
# Smithery configuration file: https://smithery.ai/docs/build/project-config

startCommand:
  type: stdio
  commandFunction:
    # A JS function that produces the CLI command based on the given config to start the MCP on stdio.
    |-
    (config) => ({ command: 'node', args: ['dist/index.js'], env: { OPENAI_API_KEY: config.openaiApiKey, GOOGLE_API_KEY: config.googleApiKey, ANTHROPIC_API_KEY: config.anthropicApiKey } })
  configSchema:
    # JSON Schema defining the configuration options for the MCP.
    type: object
    properties:
      openaiApiKey:
        type: string
        description: OpenAI API key
      googleApiKey:
        type: string
        description: Google API key
      anthropicApiKey:
        type: string
        description: Anthropic API key
  exampleConfig:
    openaiApiKey: sk-1234567890abcdef
    googleApiKey: AIzaSyExampleKey
    anthropicApiKey: anthropic-key-example

```

--------------------------------------------------------------------------------
/examples/windsurf-workflows/review-security.md:
--------------------------------------------------------------------------------

```markdown
# Security Review

Perform a security-focused code review on the currently staged changes in the repository.

## Step 1

Use the perform_code_review tool from the code-reviewer MCP server with the following parameters:
```
{
  "target": "staged",
  "llmProvider": "anthropic",
  "modelName": "claude-3-5-sonnet-20241022",
  "taskDescription": "The task I am currently working on in this codebase",
  "reviewFocus": "Security vulnerabilities, data validation, authentication, authorization, input sanitization, sensitive data handling, and adherence to OWASP standards",
  "projectContext": "This project is being developed in Windsurf. Please review the code carefully for any security issues."
}
```

<!-- 
Note: 
1. Consider updating the model name to the latest available model from Anthropic
2. Customize the taskDescription with specific context for better security review results
3. You can further customize the reviewFocus to target specific security concerns for your project
-->


```

--------------------------------------------------------------------------------
/examples/windsurf-workflows/review-branch.md:
--------------------------------------------------------------------------------

```markdown
# Branch Diff Review

Perform a code review comparing the current HEAD with a specified base branch.

## Step 1

Ask the user which branch to compare against:
"Which branch would you like to use as the base for comparison? (e.g., main, develop)"

## Step 2

Use the perform_code_review tool from the code-reviewer MCP server with the following parameters:
```
{
  "target": "branch_diff",
  "diffBase": "${user_response}",
  "llmProvider": "google",
  "modelName": "gemini-2.5-pro-preview-05-06",
  "taskDescription": "The task I am currently working on in this codebase",
  "reviewFocus": "General code quality, security best practices, and performance considerations",
  "projectContext": "This project is being developed in Windsurf. Please review the code changes between branches carefully for any issues."
}
```

<!-- 
Notes:
1. Consider updating the model name to the latest available model from Google
2. Customize the taskDescription with specific context for better review results
3. IMPORTANT: The MCP server sanitizes the diffBase parameter to prevent command injection attacks,
   but you should still avoid using branch names containing special characters
-->


```

--------------------------------------------------------------------------------
/package.json:
--------------------------------------------------------------------------------

```json
{
  "name": "@vibesnipe/code-review-mcp",
  "version": "1.0.0",
  "description": "MCP server for performing code reviews using external LLMs via Vercel AI SDK.",
  "main": "dist/index.js",
  "types": "dist/index.d.ts",
  "type": "module",
  "bin": {
    "code-review-mcp": "dist/index.js"
  },
  "scripts": {
    "build": "rimraf dist && tsc -p tsconfig.json && chmod +x dist/index.js",
    "start": "node dist/index.js",
    "dev": "tsx src/index.ts",
    "test": "vitest run",
    "test:watch": "vitest watch",
    "inspector": "npx @modelcontextprotocol/inspector dist/index.js",
    "prepublishOnly": "npm run build"
  },
  "keywords": [
    "mcp",
    "claude code",
    "cursor",
    "windsurf",
    "ai code review",
    "code-review",
    "model-context-protocol",
    "review code"
  ],

  "author": "Praney Behl <@praneybehl>",
  "license": "MIT",
  "dependencies": {
    "@modelcontextprotocol/sdk": "^1.11.2",
    "ai": "^4.3.15",
    "@ai-sdk/openai": "^1.3.22",
    "@ai-sdk/anthropic": "^1.2.11",
    "@ai-sdk/google": "^1.2.18",
    "dotenv": "^16.5.0",
    "zod": "^3.24.4",
    "execa": "^9.5.3"
  },
  "devDependencies": {
    "@types/node": "^20.12.7", 
    "rimraf": "^6.0.1",
    "tsx": "^4.19.4", 
    "typescript": "^5.8.3",
    "vitest": "^1.2.1"
  },
  "files": [
    "dist",
    "README.md",
    "LICENSE",
    "examples"
  ],
  "publishConfig": {
    "access": "public"
  }
}

```

--------------------------------------------------------------------------------
/tests/config.test.ts:
--------------------------------------------------------------------------------

```typescript
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import { getApiKey, LLMProvider } from '../src/config.js';

// Setup and restore environment variables for tests
describe('config module', () => {
  // Save original env values
  const originalEnv = { ...process.env };
  
  beforeEach(() => {
    // Setup test environment variables before each test
    vi.stubEnv('GOOGLE_API_KEY', 'mock-google-api-key');
    vi.stubEnv('OPENAI_API_KEY', 'mock-openai-api-key');
    vi.stubEnv('ANTHROPIC_API_KEY', 'mock-anthropic-api-key');
  });
  
  afterEach(() => {
    // Restore original environment after each test
    process.env = originalEnv;
    vi.unstubAllEnvs();
  });
  
  describe('getApiKey()', () => {
    it('should return the correct API key for Google provider', () => {
      const key = getApiKey('google');
      expect(key).toBe('mock-google-api-key');
    });

    it('should return the correct API key for OpenAI provider', () => {
      const key = getApiKey('openai');
      expect(key).toBe('mock-openai-api-key');
    });

    it('should return the correct API key for Anthropic provider', () => {
      const key = getApiKey('anthropic');
      expect(key).toBe('mock-anthropic-api-key');
    });
    
    it('should use GEMINI_API_KEY as fallback when GOOGLE_API_KEY is not available', () => {
      // Reset the Google API key and set Gemini key instead
      vi.stubEnv('GOOGLE_API_KEY', '');
      vi.stubEnv('GEMINI_API_KEY', 'mock-gemini-fallback-key');
      
      const key = getApiKey('google');
      expect(key).toBe('mock-gemini-fallback-key');
    });
    
    it('should return undefined when no API key is available for a provider', () => {
      // Clear all API keys
      vi.stubEnv('GOOGLE_API_KEY', '');
      vi.stubEnv('GEMINI_API_KEY', '');
      vi.stubEnv('OPENAI_API_KEY', '');
      
      const googleKey = getApiKey('google');
      const openaiKey = getApiKey('openai');
      
      expect(googleKey).toBeUndefined();
      expect(openaiKey).toBeUndefined();
    });
  });
});

```

--------------------------------------------------------------------------------
/src/config.ts:
--------------------------------------------------------------------------------

```typescript
import { z } from "zod";
import dotenv from "dotenv";

/**
 * Load environment variables in order of precedence:
 * 1. First load from the current working directory (where user runs npx)
 *    This allows users to place a .env file in their project root with their API keys
 * 2. Then load from the package's directory as a fallback (less common)
 * Variables from step 1 will take precedence over those from step 2.
 */
dotenv.config({ path: process.cwd() + "/.env" });
dotenv.config();

// Define valid log levels and parse the environment variable
export const LogLevelEnum = z.enum(["debug", "info", "warn", "error"]);
export type LogLevel = z.infer<typeof LogLevelEnum>;

// Convert numeric log levels to string equivalents
function normalizeLogLevel(level: string | undefined): string {
  if (!level) return 'info';
  
  // Map numeric levels to string values
  switch (level) {
    case '0': return 'debug';
    case '1': return 'info';
    case '2': return 'warn';
    case '3': return 'error';
    default: return level; // Pass through string values for validation
  }
}

export const LOG_LEVEL: LogLevel = LogLevelEnum.parse(normalizeLogLevel(process.env.LOG_LEVEL));

export const LLMProviderEnum = z.enum(["google", "openai", "anthropic"]);
export type LLMProvider = z.infer<typeof LLMProviderEnum>;

export const ReviewTargetEnum = z.enum(["staged", "HEAD", "branch_diff"]);
export type ReviewTarget = z.infer<typeof ReviewTargetEnum>;

export const CodeReviewToolParamsSchema = z.object({
  target: ReviewTargetEnum.describe(
    "The git target to review (e.g., 'staged', 'HEAD', or 'branch_diff')."
  ),
  taskDescription: z
    .string()
    .min(1)
    .describe(
      "Description of the task/feature/bugfix that led to these code changes."
    ),
  llmProvider: LLMProviderEnum.describe(
    "The LLM provider to use (google, openai, anthropic)."
  ),
  modelName: z
    .string()
    .min(1)
    .describe(
      "The specific model name from the provider (e.g., 'gemini-2.5-pro-preview-05-06', 'o4-mini', 'claude-3-7-sonnet-20250219')."
    ),
  reviewFocus: z
    .string()
    .optional()
    .describe(
      "Specific areas or aspects to focus the review on (e.g., 'security vulnerabilities', 'performance optimizations', 'adherence to SOLID principles')."
    ),
  projectContext: z
    .string()
    .optional()
    .describe(
      "General context about the project, its architecture, or coding standards."
    ),
  diffBase: z
    .string()
    .optional()
    .describe(
      "For 'branch_diff' target, the base branch or commit SHA to compare against (e.g., 'main', 'develop', 'specific-commit-sha'). Required if target is 'branch_diff'."
    ),
  maxTokens: z
    .number()
    .positive()
    .optional()
    .describe(
      "Maximum number of tokens to use for the LLM response. Defaults to 32000 if not specified."
    ),
});

export type CodeReviewToolParams = z.infer<typeof CodeReviewToolParamsSchema>;

/**
 * Gets the appropriate API key for the specified LLM provider.
 * For Google, the primary key name is GOOGLE_API_KEY with GEMINI_API_KEY as fallback.
 * 
 * @param provider - The LLM provider (google, openai, anthropic)
 * @returns The API key or undefined if not found
 */
export function getApiKey(provider: LLMProvider): string | undefined {
  let key: string | undefined;
  
  switch (provider) {
    case "google":
      key = process.env.GOOGLE_API_KEY || process.env.GEMINI_API_KEY;
      break;
    case "openai":
      key = process.env.OPENAI_API_KEY;
      break;
    case "anthropic":
      key = process.env.ANTHROPIC_API_KEY;
      break;
    default:
      // Should not happen due to Zod validation
      console.warn(
        `[MCP Server Config] Attempted to get API key for unknown provider: ${provider}`
      );
      return undefined;
  }
  
  // If the key is an empty string or undefined, return undefined
  return key && key.trim() !== "" ? key : undefined;
}

/**
 * Determines whether to log verbose debug information.
 * Set the LOG_LEVEL environment variable to 'debug' for verbose output.
 */
export function isDebugMode(): boolean {
  return LOG_LEVEL === 'debug';
}

```

--------------------------------------------------------------------------------
/src/llm-service.ts:
--------------------------------------------------------------------------------

```typescript
import { CoreMessage, generateText } from "ai";
import { createGoogleGenerativeAI } from "@ai-sdk/google";
import { createAnthropic } from "@ai-sdk/anthropic";
import { createOpenAI } from "@ai-sdk/openai";
import { LLMProvider, getApiKey, isDebugMode } from "./config.js"; // Ensure .js for ESM NodeNext

// Define model types for typechecking
type GoogleModelName = string;
type AnthropicModelName = string;
type OpenAIModelName = string;

// Get the appropriate model type based on provider
type ModelName<T extends LLMProvider> = T extends "openai"
  ? OpenAIModelName
  : T extends "anthropic"
  ? AnthropicModelName
  : T extends "google"
  ? GoogleModelName
  : never;

/**
 * Generates a code review using the specified LLM provider.
 * 
 * NOTE: The default maximum token limit was reduced from 60000 to 32000 tokens in v0.11.0
 * to better balance cost and quality. This can be configured using the new maxTokens parameter.
 * 
 * @param provider - LLM provider to use (google, openai, anthropic)
 * @param modelName - Specific model name from the provider
 * @param systemPrompt - System prompt to guide the LLM
 * @param userMessages - User message(s) containing the code diff to review
 * @param maxTokens - Optional maximum token limit for the response, defaults to 32000
 * @returns Promise with the generated review text
 */
export async function getLLMReview<T extends LLMProvider>(
  provider: T,
  modelName: ModelName<T>,
  systemPrompt: string,
  userMessages: CoreMessage[],
  maxTokens: number = 32000
): Promise<string> {
  // Make sure we have the API key
  const apiKey = getApiKey(provider);
  if (!apiKey) {
    throw new Error(
      `API key for ${provider} is not configured. Please set the appropriate environment variable.`
    );
  }

  // Create the LLM client with proper provider configuration
  let llmClient;
  switch (provider) {
    case "google":
      // Create Google provider with explicit API key
      const googleAI = createGoogleGenerativeAI({
        apiKey,
      });
      llmClient = googleAI(modelName);
      break;
    case "openai":
      // Create OpenAI provider with explicit API key
      const openaiProvider = createOpenAI({
        apiKey,
      });
      llmClient = openaiProvider(modelName);
      break;
    case "anthropic":
      // Create Anthropic provider with explicit API key
      const anthropicProvider = createAnthropic({
        apiKey,
      });
      llmClient = anthropicProvider(modelName);
      break;
    default:
      throw new Error(`Unsupported LLM provider: ${provider}`);
  }

  try {
    if (isDebugMode()) {
      console.log(
        `[MCP Server LLM] Requesting review from ${provider} model ${modelName} with max tokens ${maxTokens}.`
      );
    } else {
      console.log(
        `[MCP Server LLM] Requesting review from ${provider} model ${modelName}.`
      );
    }
    
    const { text, finishReason, usage, warnings } = await generateText({
      model: llmClient,
      system: systemPrompt,
      messages: userMessages,
      maxTokens: maxTokens, // Now configurable with default value
      temperature: 0.2, // Lower temperature for more deterministic and factual reviews
    });

    if (warnings && warnings.length > 0) {
      warnings.forEach((warning) =>
        console.warn(`[MCP Server LLM] Warning from ${provider}:`, warning)
      );
    }
    
    if (isDebugMode() && usage) {
      console.log(
        `[MCP Server LLM] Review received from ${provider}. Finish Reason: ${finishReason}, Tokens Used: Input=${usage.promptTokens}, Output=${usage.completionTokens}`
      );
    } else {
      console.log(
        `[MCP Server LLM] Review received from ${provider}.`
      );
    }
    
    return text;
  } catch (error: any) {
    console.error(
      `[MCP Server LLM] Error getting LLM review from ${provider} (${modelName}):`,
      error
    );
    let detailedMessage = error.message;
    if (error.cause) {
      detailedMessage += ` | Cause: ${JSON.stringify(error.cause)}`;
    }
    // Attempt to get more details from common API error structures
    if (error.response && error.response.data && error.response.data.error) {
      detailedMessage += ` | API Error: ${JSON.stringify(
        error.response.data.error
      )}`;
    } else if (error.error && error.error.message) {
      // Anthropic SDK style
      detailedMessage += ` | API Error: ${error.error.message}`;
    }
    throw new Error(
      `LLM API call failed for ${provider} (${modelName}): ${detailedMessage}`
    );
  }
}
```

--------------------------------------------------------------------------------
/src/git-utils.ts:
--------------------------------------------------------------------------------

```typescript
import { execSync, ExecSyncOptionsWithStringEncoding } from "child_process";
import { ReviewTarget, isDebugMode } from "./config.js"; // Ensure .js for ESM NodeNext

/**
 * Gets the git diff for the specified target.
 * 
 * @param target - The git target to review ('staged', 'HEAD', or 'branch_diff')
 * @param baseBranch - For 'branch_diff' target, the base branch/commit to compare against
 * @returns The git diff as a string or a message if no changes are found
 * @throws Error if not in a git repository, or if git encounters any errors
 * 
 * Note: For branch_diff, this function assumes the remote is named 'origin'.
 * If your repository uses a different remote name, this operation may fail.
 */
export function getGitDiff(target: ReviewTarget, baseBranch?: string): string {
  const execOptions: ExecSyncOptionsWithStringEncoding = {
    encoding: "utf8",
    maxBuffer: 20 * 1024 * 1024, // Increased to 20MB buffer
    stdio: ["pipe", "pipe", "pipe"], // pipe stderr to catch git errors
  };

  let command: string = "";

  try {
    // Verify it's a git repository first
    execSync("git rev-parse --is-inside-work-tree", {
      ...execOptions,
      stdio: "ignore",
    });
  } catch (error) {
    console.error(
      "[MCP Server Git] Current directory is not a git repository or git is not found."
    );
    throw new Error(
      "Execution directory is not a git repository or git command is not available. Please run from a git project root."
    );
  }

  try {
    switch (target) {
      case "staged":
        command = "git diff --staged --patch-with-raw --unified=10"; // More context
        break;
      case "HEAD":
        command = "git diff HEAD --patch-with-raw --unified=10";
        break;
      case "branch_diff":
        if (!baseBranch || baseBranch.trim() === "") {
          throw new Error(
            "Base branch/commit is required for 'branch_diff' target and cannot be empty."
          );
        }
        // Sanitize baseBranch to prevent command injection
        // Only allow alphanumeric characters, underscore, dash, dot, and forward slash
        const sanitizedBaseBranch = baseBranch.replace(
          /[^a-zA-Z0-9_.\-/]/g,
          ""
        );
        if (sanitizedBaseBranch !== baseBranch) {
          throw new Error(
            `Invalid characters in base branch name. Only alphanumeric characters, underscore, dash, dot, and forward slash are allowed. Received: "${baseBranch}"`
          );
        }
        // Fetch the base branch to ensure the diff is against the latest version of it
        // Note: This assumes the remote is named 'origin'
        const fetchCommand = `git fetch origin ${sanitizedBaseBranch}:${sanitizedBaseBranch} --no-tags --quiet`;
        try {
          execSync(fetchCommand, execOptions);
        } catch (fetchError: any) {
          // Log a warning but proceed; the branch might be local or already up-to-date
          console.warn(
            `[MCP Server Git] Warning during 'git fetch' for base branch '${sanitizedBaseBranch}': ${fetchError.message}. Diff will proceed with local state.`
          );
        }
        command = `git diff ${sanitizedBaseBranch}...HEAD --patch-with-raw --unified=10`;
        break;
      default:
        // This case should ideally be caught by Zod validation on parameters
        throw new Error(`Unsupported git diff target: ${target}`);
    }

    // Only log the command in debug mode (to stderr, so stdout stays clean for MCP)
    if (isDebugMode()) {
      console.error(`[MCP Server Git] Executing: ${command}`);
    }
    
    // Execute the command (execOptions has encoding:'utf8' so the result should already be a string)
    const diffOutput = execSync(command, execOptions);
    
    // Ensure we always have a string to work with
    // This is for type safety and to handle any unexpected Buffer return types
    const diffString = Buffer.isBuffer(diffOutput) ? diffOutput.toString('utf8') : String(diffOutput);
    
    if (!diffString.trim()) {
      return "No changes found for the specified target.";
    }
    return diffString;
  } catch (error: any) {
    const errorMessage =
      error.stderr?.toString().trim() || error.message || "Unknown git error";
    console.error(
      `[MCP Server Git] Error getting git diff for target "${target}" (base: ${
        baseBranch || "N/A"
      }):`
    );
    console.error(`[MCP Server Git] Command: ${command || "N/A"}`);
    
    // Only log the full error details in debug mode
    if (isDebugMode()) {
      console.error(
        `[MCP Server Git] Stderr: ${error.stderr?.toString().trim()}`
      );
      console.error(
        `[MCP Server Git] Stdout: ${error.stdout?.toString().trim()}`
      );
    }
    
    throw new Error(
      `Failed to get git diff. Git error: ${errorMessage}. Ensure you are in a git repository and the target/base is valid.`
    );
  }
}
```
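
As a quick orientation, the three supported targets map to calls like the following sketch (run from a git repository root):

```typescript
// Sketch: the three diff targets exposed by getGitDiff.
import { getGitDiff } from "./git-utils.js";

const staged = getGitDiff("staged");              // index vs. HEAD
const uncommitted = getGitDiff("HEAD");           // working tree + index vs. HEAD
const vsBase = getGitDiff("branch_diff", "main"); // triple-dot diff against a base branch
```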

--------------------------------------------------------------------------------
/tests/git-utils.test.ts:
--------------------------------------------------------------------------------

```typescript
import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest';
import { getGitDiff } from '../src/git-utils.js';
import { execSync } from 'child_process';

// Mock the child_process module
vi.mock('child_process', () => ({
  execSync: vi.fn(),
}));

describe('git-utils module', () => {
  beforeEach(() => {
    // Reset mocks between tests
    vi.resetAllMocks();
  });
  
  describe('getGitDiff()', () => {
    it('should throw an error if not in a git repository', () => {
      // Mock the execSync to throw an error for the git repo check
      vi.mocked(execSync).mockImplementationOnce(() => {
        throw new Error('Not a git repository');
      });
      
      expect(() => getGitDiff('HEAD')).toThrow(/not a git repository/i);
    });
    
    it('should handle staged changes correctly', () => {
      // Mock successful git repo check
      vi.mocked(execSync).mockImplementationOnce(() => Buffer.from('true'));
      
      // Mock the diff command - this is the second execSync call in the function
      vi.mocked(execSync).mockImplementationOnce(() => 
        Buffer.from('diff --git a/file.js b/file.js\nsample diff output')
      );
      
      const result = getGitDiff('staged');
      expect(result).toContain('sample diff output');
    });
    
    it('should handle HEAD changes correctly', () => {
      // Mock successful git repo check
      vi.mocked(execSync).mockImplementationOnce(() => Buffer.from('true'));
      
      // Mock the diff command - this is the second execSync call in the function
      vi.mocked(execSync).mockImplementationOnce(() => 
        Buffer.from('diff --git a/file.js b/file.js\nHEAD diff output')
      );
      
      const result = getGitDiff('HEAD');
      expect(result).toContain('HEAD diff output');
    });
    
    it('should return "No changes found" message when diff is empty', () => {
      // Mock successful git repo check
      vi.mocked(execSync).mockImplementationOnce(() => Buffer.from('true'));
      
      // Mock empty diff output - this is the second execSync call
      vi.mocked(execSync).mockImplementationOnce(() => Buffer.from(''));
      
      const result = getGitDiff('HEAD');
      expect(result).toBe('No changes found for the specified target.');
    });
    
    it('should handle branch_diff correctly with successful fetch', () => {
      // Mock successful git repo check
      vi.mocked(execSync).mockImplementationOnce(() => Buffer.from('true'));
      
      // Mock successful git fetch
      vi.mocked(execSync).mockImplementationOnce(() => Buffer.from(''));
      
      // Mock the diff command
      vi.mocked(execSync).mockImplementationOnce(() => 
        Buffer.from('diff --git a/file.js b/file.js\nbranch diff output')
      );
      
      const result = getGitDiff('branch_diff', 'main');
      expect(result).toContain('branch diff output');
    });
    
    it('should proceed with branch_diff even if fetch fails', () => {
      // Mock successful git repo check
      vi.mocked(execSync).mockImplementationOnce(() => Buffer.from('true'));
      
      // Mock failed git fetch
      vi.mocked(execSync).mockImplementationOnce(() => {
        throw new Error('fetch failed');
      });
      
      // Mock the diff command
      vi.mocked(execSync).mockImplementationOnce(() => 
        Buffer.from('diff --git a/file.js b/file.js\nlocal branch diff output')
      );
      
      const result = getGitDiff('branch_diff', 'main');
      expect(result).toContain('local branch diff output');
    });
    
    it('should throw error for branch_diff with empty baseBranch', () => {
      // Mock successful git repo check
      vi.mocked(execSync).mockImplementationOnce(() => Buffer.from('true'));
      
      expect(() => getGitDiff('branch_diff', '')).toThrow(/required for 'branch_diff'/i);
    });
    
    it('should throw error for branch_diff with invalid characters', () => {
      // Mock successful git repo check
      vi.mocked(execSync).mockImplementationOnce(() => Buffer.from('true'));
      
      expect(() => getGitDiff('branch_diff', 'main;rm -rf /')).toThrow(/invalid characters in base branch/i);
    });
    
    it('should sanitize branch name correctly', () => {
      // Mock successful git repo check
      vi.mocked(execSync).mockImplementationOnce(() => Buffer.from('true'));
      
      // Mock successful git fetch - check that command has sanitized branch
      vi.mocked(execSync).mockImplementationOnce((command) => {
        expect(command).toContain('git fetch origin feature/branch:feature/branch');
        return Buffer.from('');
      });
      
      // Mock the diff command - check that command has sanitized branch
      vi.mocked(execSync).mockImplementationOnce((command) => {
        expect(command).toContain('git diff feature/branch...HEAD');
        return Buffer.from('branch diff output');
      });
      
      const result = getGitDiff('branch_diff', 'feature/branch');
      expect(result).toContain('branch diff output');
    });
  });
});

```
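
One path the suite above does not exercise is the error branch that surfaces git's stderr in the thrown message. A possible additional case (an illustrative sketch, not part of the repository) could look like:

```typescript
// Sketch of an extra case: a failing `git diff` whose stderr should
// appear in the thrown error message.
it('should include git stderr in the thrown error', () => {
  vi.mocked(execSync).mockImplementationOnce(() => Buffer.from('true')); // repo check passes
  vi.mocked(execSync).mockImplementationOnce(() => {
    const err: any = new Error('command failed');
    err.stderr = Buffer.from('fatal: bad revision');
    throw err; // simulate git failing on the diff command
  });

  expect(() => getGitDiff('HEAD')).toThrow(/fatal: bad revision/);
});
```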

--------------------------------------------------------------------------------
/src/index.ts:
--------------------------------------------------------------------------------

```typescript
#!/usr/bin/env node
/**
 * MCP Server for performing code reviews using LLMs.
 *
 * IMPORTANT: stdout is reserved exclusively for MCP protocol traffic.
 * All server logging therefore goes through console.error/console.warn,
 * which write to stderr; console.log is avoided because it would corrupt
 * the MCP message stream on stdout.
 */
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { CodeReviewToolParamsSchema, CodeReviewToolParams, isDebugMode } from "./config.js";
import { getGitDiff } from "./git-utils.js";
import { getLLMReview } from "./llm-service.js";
import { CoreMessage } from "ai";
import { readFileSync } from "fs";
import { fileURLToPath } from "url";
import { dirname, resolve } from "path";

// Get package.json data using file system
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
const packagePath = resolve(__dirname, "../package.json");
const pkg = JSON.parse(readFileSync(packagePath, "utf8"));

// Maximum number of transport connection retry attempts
const MAX_CONNECTION_ATTEMPTS = 3;
const CONNECTION_RETRY_DELAY_MS = 2000;

async function main() {
  console.error("[MCP Server] Initializing Code Reviewer MCP Server...");

  const server = new McpServer({
    name: pkg.name,
    version: pkg.version,
    capabilities: {
      tools: { listChanged: false }, // Tool list is static
    },
  });

  // Register the code review tool
  registerCodeReviewTool(server);

  // Set up the MCP transport with connection retry logic
  await setupTransport(server);
}

/**
 * Registers the code review tool with the MCP server.
 * 
 * @param server - The MCP server instance
 */
function registerCodeReviewTool(server: McpServer) {
  server.tool(
    "perform_code_review",
    "Performs a code review using a specified LLM on git changes. Requires being run from the root of a git repository.",
    CodeReviewToolParamsSchema.shape,
    async (params: CodeReviewToolParams) => {
      try {
        console.error(
          `[MCP Server Tool] Received 'perform_code_review' request. Target: ${params.target}, Provider: ${params.llmProvider}, Model: ${params.modelName}`
        );

        // Step 1: Get the diff from git
        const diffResult = await getGitDiffForReview(params);
        if (diffResult.noChanges) {
          return {
            content: [
              { type: "text", text: "No changes detected for review." },
            ],
          };
        }

        // Step 2: Prepare LLM prompt and get the review
        const reviewResult = await generateLLMReview(params, diffResult.diff);

        return {
          content: [{ type: "text", text: reviewResult }],
          isError: false, // Explicitly set isError
        };
      } catch (error: any) {
        console.error(
          "[MCP Server Tool] Error in 'perform_code_review' tool:",
          error.stack || error.message
        );
        return {
          isError: true,
          content: [
            {
              type: "text",
              text: `Error performing code review: ${error.message}`,
            },
          ],
        };
      }
    }
  );
}

/**
 * Gets the git diff for review based on the provided parameters.
 * 
 * @param params - Code review tool parameters
 * @returns Object with the diff and a flag indicating if there are no changes
 */
async function getGitDiffForReview(params: CodeReviewToolParams): Promise<{ diff: string; noChanges: boolean }> {
  const diff = getGitDiff(params.target, params.diffBase);
  
  if (diff === "No changes found for the specified target.") {
    console.error("[MCP Server Tool] No changes detected for review.");
    return { diff: "", noChanges: true };
  }
  
  if (isDebugMode()) {
    console.error(
      `[MCP Server Tool] Git diff obtained successfully. Length: ${diff.length} chars.`
    );
  }
  
  return { diff, noChanges: false };
}

/**
 * Generates a code review using the specified LLM based on the git diff.
 * 
 * @param params - Code review tool parameters
 * @param diff - The git diff to review
 * @returns The generated code review
 */
async function generateLLMReview(params: CodeReviewToolParams, diff: string): Promise<string> {
  const systemPrompt = `You are an expert code reviewer. Your task is to review the provided code changes (git diff format) and offer constructive feedback.
${params.projectContext ? `Project Context: ${params.projectContext}\n` : ""}
The changes were made as part of the following task: "${params.taskDescription}"
${
  params.reviewFocus
    ? `Please specifically focus your review on: "${params.reviewFocus}"\n`
    : ""
}
Provide your review in a clear, concise, and actionable markdown format. Highlight potential bugs, suggest improvements for readability, maintainability, performance, and adherence to best practices. If you see positive aspects, mention them too. Structure your review logically, perhaps by file or by theme.`;

  const userMessages: CoreMessage[] = [
    {
      role: "user",
      content: `Please review the following code changes (git diff). Ensure your review is thorough and actionable:\n\n\`\`\`diff\n${diff}\n\`\`\``,
    },
  ];

  // Use the provided maxTokens parameter, falling back to a default
  const maxTokens = params.maxTokens ?? 32000;

  const review = await getLLMReview(
    params.llmProvider,
    params.modelName,
    systemPrompt,
    userMessages,
    maxTokens
  );
  
  console.error(`[MCP Server Tool] LLM review generated successfully.`);
  return review;
}

/**
 * Sets up the MCP transport with connection retry logic.
 * 
 * @param server - The MCP server instance
 */
async function setupTransport(server: McpServer) {
  let connectionAttempts = 0;
  let connected = false;

  while (!connected && connectionAttempts < MAX_CONNECTION_ATTEMPTS) {
    connectionAttempts++;
    try {
      const transport = new StdioServerTransport();
      await server.connect(transport);
      
      // Add event handler for disconnect
      transport.onclose = () => {
        console.error("[MCP Server] Transport connection closed unexpectedly.");
        process.exit(1); // Exit process to allow restart by supervisor
      };
      
      connected = true;
      console.error(
        "[MCP Server] Code Reviewer MCP Server is running via stdio and connected to transport."
      );
    } catch (error) {
      console.error(
        `[MCP Server] Connection attempt ${connectionAttempts}/${MAX_CONNECTION_ATTEMPTS} failed:`,
        error
      );
      
      if (connectionAttempts < MAX_CONNECTION_ATTEMPTS) {
        console.error(`[MCP Server] Retrying in ${CONNECTION_RETRY_DELAY_MS/1000} seconds...`);
        await new Promise(resolve => setTimeout(resolve, CONNECTION_RETRY_DELAY_MS));
      } else {
        console.error("[MCP Server] Maximum connection attempts exceeded. Exiting.");
        process.exit(1); 
      }
    }
  }
}

// Graceful shutdown
process.on("SIGINT", () => {
  console.error("[MCP Server] Received SIGINT. Shutting down...");
  // Perform any cleanup if necessary
  process.exit(0);
});

process.on("SIGTERM", () => {
  console.error("[MCP Server] Received SIGTERM. Shutting down...");
  // Perform any cleanup if necessary
  process.exit(0);
});

// Handle unhandled promise rejections
process.on("unhandledRejection", (reason, promise) => {
  console.error("[MCP Server] Unhandled Promise Rejection:", reason);
  // Continue running but log the error
});

main().catch((error) => {
  console.error(
    "[MCP Server] Unhandled fatal error in main execution:",
    error.stack || error.message
  );
  process.exit(1);
});
```
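
To see the pipeline end to end, here is a hedged sketch of a plain MCP client driving the server over stdio. The build path, provider key, and model name are assumptions for illustration; the parameter names match `CodeReviewToolParamsSchema` as used above.

```typescript
// Sketch: invoking the perform_code_review tool from an MCP client.
// dist/index.js, the provider key, and the model name are assumed values.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "node",
  args: ["dist/index.js"], // assumed build output path
});

const client = new Client(
  { name: "review-client", version: "1.0.0" },
  { capabilities: {} }
);
await client.connect(transport);

const result = await client.callTool({
  name: "perform_code_review",
  arguments: {
    target: "staged",
    llmProvider: "openai",   // assumed provider key
    modelName: "gpt-4o",     // assumed model name
    taskDescription: "Add input sanitization to git-utils",
  },
});

console.error(result.content);
await client.close();
```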