# Directory Structure

```
├── .gitignore
├── debug_lmstudio.py
├── debug_server.py
├── install.bat
├── install.sh
├── INSTALLATION.md
├── prepare_for_claude.bat
├── prepare_for_claude.sh
├── README.md
├── requirements.txt
├── run_server.bat
├── run_server.sh
├── server.py
├── setup.bat
├── setup.sh
├── test_mcp.py
└── verify_setup.py
```

# Files

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
venv/
__pycache__/
*.py[cod]
*$py.class
.env
.venv
env/
ENV/
.DS_Store
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
# Claude-LMStudio Bridge

An MCP server that bridges Claude with local LLMs running in LM Studio.

## Overview

This tool allows Claude to interact with your local LLMs running in LM Studio, providing:

- Access to list all available models in LM Studio
- The ability to generate text using your local LLMs
- Support for chat completions through your local models
- A health check tool to verify connectivity with LM Studio

## Prerequisites

- [Claude Desktop](https://claude.ai/desktop) with MCP support
- [LM Studio](https://lmstudio.ai/) installed and running locally with API server enabled
- Python 3.8+ installed

## Quick Start (Recommended)

### For macOS/Linux:

1. Clone the repository
```bash
git clone https://github.com/infinitimeless/claude-lmstudio-bridge.git
cd claude-lmstudio-bridge
```

2. Run the setup script
```bash
chmod +x setup.sh
./setup.sh
```

3. Follow the setup script's instructions to configure Claude Desktop

### For Windows:

1. Clone the repository
```cmd
git clone https://github.com/infinitimeless/claude-lmstudio-bridge.git
cd claude-lmstudio-bridge
```

2. Run the setup script
```cmd
setup.bat
```

3. Follow the setup script's instructions to configure Claude Desktop

## Manual Setup

If you prefer to set things up manually:

1. Create a virtual environment (optional but recommended)
```bash
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```

2. Install the required packages
```bash
pip install -r requirements.txt
```

3. Configure Claude Desktop:
   - Open Claude Desktop preferences
   - Navigate to the 'MCP Servers' section
   - Add a new MCP server with the following configuration:
     - **Name**: lmstudio-bridge
     - **Command**: /bin/bash (on macOS/Linux) or cmd.exe (on Windows)
     - **Arguments**: 
       - macOS/Linux: /path/to/claude-lmstudio-bridge/run_server.sh
       - Windows: /c C:\path\to\claude-lmstudio-bridge\run_server.bat
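
Alternatively, you can write the same configuration directly into Claude Desktop's `claude_desktop_config.json` (this is what the install scripts do). A macOS/Linux example, with the path adjusted to your clone location:

```json
{
  "mcpServers": {
    "lmstudio-bridge": {
      "command": "/bin/bash",
      "args": [
        "/path/to/claude-lmstudio-bridge/run_server.sh"
      ]
    }
  }
}
```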

## Usage with Claude

After setting up the bridge, you can use the following commands in Claude:

1. Check the connection to LM Studio:
```
Can you check if my LM Studio server is running?
```

2. List available models:
```
List the available models in my local LM Studio
```

3. Generate text with a local model:
```
Generate a short poem about spring using my local LLM
```

4. Send a chat completion:
```
Ask my local LLM: "What are the main features of transformers in machine learning?"
```
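
Under the hood, these requests map to the bridge's MCP tools: `check_lmstudio_connection`, `list_lmstudio_models`, `generate_text`, and `chat_completion`. The `chat_completion` tool accepts either plain text or a JSON array of messages in the OpenAI-style chat format, for example:

```json
[
  {"role": "system", "content": "You are a helpful assistant."},
  {"role": "user", "content": "What are the main features of transformers in machine learning?"}
]
```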

## Troubleshooting

### Diagnosing LM Studio Connection Issues

Use the included debugging tool to check your LM Studio connection:

```bash
python debug_lmstudio.py
```

For more detailed tests:
```bash
python debug_lmstudio.py --test-chat --verbose
```

### Common Issues

**"Cannot connect to LM Studio API"**
- Make sure LM Studio is running
- Verify the API server is enabled in LM Studio (Settings > API Server)
- Check that the port (default: 1234) matches what's in your .env file
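
You can also query the API directly to confirm it responds, for example:

```bash
curl http://127.0.0.1:1234/v1/models
```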

**"No models are loaded"**
- Open LM Studio and load a model
- Verify the model is running successfully

**"MCP package not found"**
- Try reinstalling: `pip install "mcp[cli]" httpx python-dotenv`
- Make sure you're using Python 3.8 or later

**"Claude can't find the bridge"**
- Check Claude Desktop configuration
- Make sure the path to run_server.sh or run_server.bat is correct and absolute
- Verify the server script is executable: `chmod +x run_server.sh` (on macOS/Linux)

## Advanced Configuration

You can customize the bridge behavior by creating a `.env` file with these settings:

```
LMSTUDIO_HOST=127.0.0.1
LMSTUDIO_PORT=1234
DEBUG=false
```

Set `DEBUG=true` to enable verbose logging for troubleshooting.
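
For reference, server.py loads the `.env` file (via python-dotenv, when installed) and resolves these settings roughly like this:

```python
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the working directory, if present

LMSTUDIO_HOST = os.getenv("LMSTUDIO_HOST", "127.0.0.1")
LMSTUDIO_PORT = os.getenv("LMSTUDIO_PORT", "1234")
LMSTUDIO_API_URL = f"http://{LMSTUDIO_HOST}:{LMSTUDIO_PORT}/v1"
DEBUG = os.getenv("DEBUG", "false").lower() in ("true", "1", "yes")
```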

## License

MIT

```

--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------

```
# Core dependencies
mcp[cli]>=0.1.0
httpx>=0.24.0
python-dotenv>=1.0.0

# For development and testing
pytest>=7.0.0

```

--------------------------------------------------------------------------------
/debug_server.py:
--------------------------------------------------------------------------------

```python
import sys
import traceback
from mcp.server.fastmcp import FastMCP

# Print startup message to stderr for debugging
print("Starting debug server...", file=sys.stderr)

try:
    # Initialize FastMCP server
    print("Initializing FastMCP server...", file=sys.stderr)
    mcp = FastMCP("lmstudio-bridge")

    @mcp.tool()
    async def debug_test() -> str:
        """Basic test function to verify MCP server is working.
        
        Returns:
            A simple confirmation message
        """
        print("debug_test function called", file=sys.stderr)
        return "MCP server is working correctly!"

    if __name__ == "__main__":
        print("Starting server with stdio transport...", file=sys.stderr)
        # Initialize and run the server
        mcp.run(transport='stdio')
except Exception as e:
    print(f"ERROR: {str(e)}", file=sys.stderr)
    print("Traceback:", file=sys.stderr)
    traceback.print_exc(file=sys.stderr)

```

--------------------------------------------------------------------------------
/prepare_for_claude.bat:
--------------------------------------------------------------------------------

```
@echo off
SETLOCAL

echo === Claude-LMStudio Bridge Setup ===
echo This script will prepare the environment for use with Claude Desktop
echo.

:: Try to install MCP globally to ensure it's available
echo Installing MCP package globally...
python -m pip install "mcp[cli]" httpx

:: Check if installation was successful
python -c "import mcp" >nul 2>&1
IF %ERRORLEVEL% NEQ 0 (
  echo X Failed to install MCP package. Please check your Python installation.
  EXIT /B 1
) ELSE (
  echo ✓ MCP package installed successfully
)

:: Create virtual environment if it doesn't exist
IF NOT EXIST venv (
  echo Creating virtual environment...
  python -m venv venv
  echo ✓ Created virtual environment
  
  REM Activate and install dependencies
  CALL venv\Scripts\activate.bat
  python -m pip install -r requirements.txt
  echo ✓ Installed dependencies in virtual environment
) ELSE (
  echo ✓ Virtual environment already exists
)

:: Display configuration instructions
echo.
echo === Configuration Instructions ===
echo 1. Open Claude Desktop preferences
echo 2. Navigate to the 'MCP Servers' section
echo 3. Add a new MCP server with the following configuration:
echo.
echo    Name: lmstudio-bridge
echo    Command: cmd.exe
echo    Arguments: /c %CD%\run_server.bat
echo.
echo 4. Start LM Studio and ensure the API server is running
echo 5. Restart Claude Desktop
echo.
echo Setup complete! You can now use the LMStudio bridge with Claude Desktop.

ENDLOCAL
```

--------------------------------------------------------------------------------
/test_mcp.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python3
"""
Simple script to test if MCP package is working correctly.
Run this script to verify the MCP installation before attempting to run the full server.
"""
import sys
import traceback

print("Testing MCP installation...")

try:
    print("Importing mcp package...")
    from mcp.server.fastmcp import FastMCP
    print("✅ Successfully imported FastMCP")
    
    print("Creating FastMCP instance...")
    mcp = FastMCP("test-server")
    print("✅ Successfully created FastMCP instance")
    
    print("Registering simple tool...")
    @mcp.tool()
    async def hello() -> str:
        return "Hello, world!"
    print("✅ Successfully registered tool")
    
    print("All tests passed! MCP appears to be correctly installed.")
    print("\nNext steps:")
    print("1. First try running the debug_server.py script")
    print("2. Then try running the main server.py script if debug_server works")

except ImportError as e:
    print(f"❌ Error importing MCP: {str(e)}")
    print("\nTry reinstalling the MCP package with:")
    print("pip uninstall mcp")
    print("pip install 'mcp[cli]'")
    
except Exception as e:
    print(f"❌ Unexpected error: {str(e)}")
    traceback.print_exc()
    
    print("\nTroubleshooting tips:")
    print("1. Make sure you're using Python 3.8 or newer")
    print("2. Check that you're in the correct virtual environment")
    print("3. Try reinstalling dependencies: pip install -r requirements.txt")

```

--------------------------------------------------------------------------------
/prepare_for_claude.sh:
--------------------------------------------------------------------------------

```bash
#!/bin/bash
#
# This script prepares the Claude-LMStudio Bridge for use with Claude Desktop
# It installs required packages and ensures everything is ready to run
#

echo "=== Claude-LMStudio Bridge Setup ==="
echo "This script will prepare the environment for use with Claude Desktop"
echo

# Make the run script executable
chmod +x run_server.sh
echo "✅ Made run_server.sh executable"

# Try to install MCP globally to ensure it's available
echo "Installing MCP package globally..."
python -m pip install "mcp[cli]" httpx

# Check if installation was successful
if python -c "import mcp" 2>/dev/null; then
  echo "✅ MCP package installed successfully"
else
  echo "❌ Failed to install MCP package. Please check your Python installation."
  exit 1
fi

# Create virtual environment if it doesn't exist
if [ ! -d "venv" ]; then
  echo "Creating virtual environment..."
  python -m venv venv
  echo "✅ Created virtual environment"
  
  # Activate and install dependencies
  source venv/bin/activate
  python -m pip install -r requirements.txt
  echo "✅ Installed dependencies in virtual environment"
else
  echo "✅ Virtual environment already exists"
fi

# Display configuration instructions
echo
echo "=== Configuration Instructions ==="
echo "1. Open Claude Desktop preferences"
echo "2. Navigate to the 'MCP Servers' section"
echo "3. Add a new MCP server with the following configuration:"
echo
echo "   Name: lmstudio-bridge"
echo "   Command: /bin/bash"
echo "   Arguments: $(pwd)/run_server.sh"
echo
echo "4. Start LM Studio and ensure the API server is running"
echo "5. Restart Claude Desktop"
echo
echo "Setup complete! You can now use the LMStudio bridge with Claude Desktop."

```

--------------------------------------------------------------------------------
/setup.bat:
--------------------------------------------------------------------------------

```
@echo off
REM setup.bat - Simplified setup script for Claude-LMStudio Bridge

echo === Claude-LMStudio Bridge Setup ===
echo.

REM Create and activate virtual environment
if not exist venv (
  echo Creating virtual environment...
  python -m venv venv
  echo ✓ Created virtual environment
) else (
  echo ✓ Virtual environment already exists
)

REM Activate the virtual environment
call venv\Scripts\activate.bat

REM Install dependencies
echo Installing dependencies...
pip install -r requirements.txt
echo ✓ Installed dependencies

REM Create default configuration
if not exist .env (
  echo Creating default configuration...
  (
    echo LMSTUDIO_HOST=127.0.0.1
    echo LMSTUDIO_PORT=1234
    echo DEBUG=false
  ) > .env
  echo ✓ Created .env configuration file
) else (
  echo ✓ Configuration file already exists
)

REM Check if LM Studio is running
set PORT_CHECK=0
netstat -an | findstr "127.0.0.1:1234" > nul && set PORT_CHECK=1
if %PORT_CHECK%==1 (
  echo ✓ LM Studio is running on port 1234
) else (
  echo ⚠ LM Studio does not appear to be running on port 1234
  echo   Please start LM Studio and enable the API server ^(Settings ^> API Server^)
)

echo.
echo ✓ Setup complete!
echo.
echo To start the bridge, run:
echo   venv\Scripts\activate.bat ^&^& python server.py
echo.
echo To configure with Claude Desktop:
echo 1. Open Claude Desktop preferences
echo 2. Navigate to the 'MCP Servers' section
echo 3. Add a new MCP server with the following configuration:
echo    - Name: lmstudio-bridge
echo    - Command: cmd.exe
echo    - Arguments: /c %CD%\run_server.bat
echo.
echo Make sure LM Studio is running with API server enabled on port 1234.

REM Keep the window open
pause

```

--------------------------------------------------------------------------------
/setup.sh:
--------------------------------------------------------------------------------

```bash
#!/bin/bash
# setup.sh - Simplified setup script for Claude-LMStudio Bridge

echo "=== Claude-LMStudio Bridge Setup ==="

# Create and activate virtual environment
if [ ! -d "venv" ]; then
  echo "Creating virtual environment..."
  python -m venv venv
  echo "✅ Created virtual environment"
else
  echo "✅ Virtual environment already exists"
fi

# Activate the virtual environment
source venv/bin/activate

# Install dependencies
echo "Installing dependencies..."
pip install -r requirements.txt
echo "✅ Installed dependencies"

# Create default configuration
if [ ! -f ".env" ]; then
  echo "Creating default configuration..."
  cat > .env << EOL
LMSTUDIO_HOST=127.0.0.1
LMSTUDIO_PORT=1234
DEBUG=false
EOL
  echo "✅ Created .env configuration file"
else
  echo "✅ Configuration file already exists"
fi

# Make run_server.sh executable
chmod +x run_server.sh
echo "✅ Made run_server.sh executable"

# Check if LM Studio is running (skip the check if nc isn't available)
if ! command -v nc &> /dev/null; then
  echo "⚠️ 'nc' not found; skipping the LM Studio port check"
elif nc -z localhost 1234 2>/dev/null; then
  echo "✅ LM Studio is running on port 1234"
else
  echo "⚠️ LM Studio does not appear to be running on port 1234"
  echo "   Please start LM Studio and enable the API server (Settings > API Server)"
fi

echo
echo "✅ Setup complete!"
echo
echo "To start the bridge, run:"
echo "  source venv/bin/activate && python server.py"
echo
echo "To configure with Claude Desktop:"
echo "1. Open Claude Desktop preferences"
echo "2. Navigate to the 'MCP Servers' section"
echo "3. Add a new MCP server with the following configuration:"
echo "   - Name: lmstudio-bridge"
echo "   - Command: /bin/bash"
echo "   - Arguments: $(pwd)/run_server.sh"
echo
echo "Make sure LM Studio is running with API server enabled on port 1234."

```

--------------------------------------------------------------------------------
/INSTALLATION.md:
--------------------------------------------------------------------------------

```markdown
# Installation Guide for Claude-LMStudio Bridge

This guide provides detailed instructions for setting up the Claude-LMStudio Bridge MCP server.

## Installing the MCP Python SDK

The most common issue is the MCP module not being installed properly. There are several ways to install it:

### Using uv (Recommended)

`uv` is a modern Python package installer that's recommended for MCP development:

```bash
# Install uv if you don't have it
pip install uv

# Install the MCP SDK with CLI support
uv add "mcp[cli]"
```

### Using pip

Alternatively, you can use pip:

```bash
pip install "mcp[cli]"
```

## Verifying Installation

After installation, verify that the module is correctly installed:

```bash
python -c "import mcp; print(mcp.__version__)"
```

This should print the version of the MCP SDK if it's installed correctly.
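
If the package doesn't expose `__version__` (verify_setup.py treats the attribute as optional), you can check pip's metadata instead:

```bash
python -m pip show mcp
```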

## Ensuring the Correct Environment

Make sure you're using the correct Python environment:

1. If using a virtual environment, activate it before running your script:

   ```bash
   # Activate virtual environment
   source venv/bin/activate  # For macOS/Linux
   # or
   venv\Scripts\activate  # For Windows
   ```

2. Verify the Python path to ensure you're using the expected Python interpreter:

   ```bash
   which python  # On macOS/Linux
   where python  # On Windows
   ```

## Testing the Installation

Run the test script to verify your setup:

```bash
python test_mcp.py
```

If this works successfully, you should be ready to run the server.

## Common Issues and Solutions

1. **ModuleNotFoundError: No module named 'mcp'**
   - The MCP module isn't installed in your current Python environment
   - Solution: Install the MCP SDK as described above

2. **MCP installed but still getting import errors**
   - You might be running Python from a different environment
   - Solution: Check which Python is being used with `which python` and make sure your virtual environment is activated

3. **Error loading the server in Claude**
   - Make sure you're using absolute paths in your Claude Desktop configuration
   - Check that the server is executable and that Python has permission to access it

```

--------------------------------------------------------------------------------
/run_server.sh:
--------------------------------------------------------------------------------

```bash
#!/bin/bash

# Configuration - Auto-detect Python path
if [ -z "$PYTHON_PATH" ]; then
  PYTHON_PATH=$(which python3 2>/dev/null || which python 2>/dev/null)
  if [ -z "$PYTHON_PATH" ]; then
    echo "ERROR: Python not found. Please install Python 3." >&2
    exit 1
  fi
fi

# Print current environment details
echo "Current directory: $(pwd)" >&2
echo "Using Python at: $PYTHON_PATH" >&2

# Check if Python exists at the specified path
if [ ! -f "$PYTHON_PATH" ]; then
  echo "ERROR: Python not found at $PYTHON_PATH" >&2
  echo "Please install Python or set the correct path in this script." >&2
  exit 1
fi

# Prefer the project's virtual environment if it exists, so the package
# checks below run against the interpreter that will start the server
if [ -d "venv" ] && [ -f "venv/bin/python" ]; then
  echo "Using Python from virtual environment" >&2
  PYTHON_PATH=$(pwd)/venv/bin/python
  echo "Updated Python path to: $PYTHON_PATH" >&2
fi

# Check if mcp is installed, if not, try to install it
if ! $PYTHON_PATH -c "import mcp" 2>/dev/null; then
  echo "MCP package not found, attempting to install..." >&2
  
  # Try to install using python -m pip
  $PYTHON_PATH -m pip install "mcp[cli]" httpx || {
    echo "Failed to install MCP package. Please install manually with:" >&2
    echo "$PYTHON_PATH -m pip install \"mcp[cli]\" httpx" >&2
    exit 1
  }
  
  # Check if installation was successful
  if ! $PYTHON_PATH -c "import mcp" 2>/dev/null; then
    echo "MCP package was installed but still can't be imported." >&2
    echo "This might be due to a Python path issue." >&2
    exit 1
  fi
fi

# Check if httpx is installed
if ! $PYTHON_PATH -c "import httpx" 2>/dev/null; then
  echo "httpx package not found, attempting to install..." >&2
  $PYTHON_PATH -m pip install httpx || {
    echo "Failed to install httpx package." >&2
    exit 1
  }
fi

# Check if dotenv is installed (for .env file support)
if ! $PYTHON_PATH -c "import dotenv" 2>/dev/null; then
  echo "python-dotenv package not found, attempting to install..." >&2
  $PYTHON_PATH -m pip install python-dotenv || {
    echo "Failed to install python-dotenv package." >&2
    exit 1
  }
fi

# Attempt to check if LM Studio is running before starting
if command -v nc &> /dev/null; then
  if ! nc -z localhost 1234 2>/dev/null; then
    echo "WARNING: LM Studio does not appear to be running on port 1234" >&2
    echo "Please make sure LM Studio is running with the API server enabled" >&2
  else
    echo "✓ LM Studio API server appears to be running on port 1234" >&2
  fi
fi

# Run the server script
echo "Starting server.py with $PYTHON_PATH..." >&2
$PYTHON_PATH server.py

```

--------------------------------------------------------------------------------
/install.sh:
--------------------------------------------------------------------------------

```bash
#!/bin/bash

# Claude-LMStudio Bridge Installer
# This script will set up the Claude-LMStudio Bridge for use with Claude Desktop

echo "===== Claude-LMStudio Bridge Installer ====="
echo "This will configure the bridge to work with Claude Desktop"
echo

# Find Python location
PYTHON_PATH=$(which python3)
if [ -z "$PYTHON_PATH" ]; then
  echo "❌ ERROR: Python 3 not found in your PATH"
  echo "Please install Python 3 first and try again"
  exit 1
fi

echo "✅ Found Python at: $PYTHON_PATH"

# Update the auto-detect line in run_server.sh with the correct Python path
# (anchored so the venv override later in that script is left untouched)
echo "Updating run_server.sh with Python path..."
sed -i '' "s|^  PYTHON_PATH=\$(which .*|  PYTHON_PATH=\"$PYTHON_PATH\"|" run_server.sh
chmod +x run_server.sh

# Install required packages
echo "Installing required Python packages..."
"$PYTHON_PATH" -m pip install "mcp[cli]" httpx

# Check if installation was successful
if ! "$PYTHON_PATH" -c "import mcp" 2>/dev/null; then
  echo "❌ ERROR: Failed to install MCP package"
  echo "Try running manually: $PYTHON_PATH -m pip install \"mcp[cli]\" httpx"
  exit 1
fi

echo "✅ MCP package installed successfully"

# Get full path to the run_server.sh script
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )"
SCRIPT_PATH="$SCRIPT_DIR/run_server.sh"

# Create or update Claude Desktop config
CONFIG_DIR="$HOME/Library/Application Support/Claude"
CONFIG_FILE="$CONFIG_DIR/claude_desktop_config.json"

mkdir -p "$CONFIG_DIR"

if [ -f "$CONFIG_FILE" ]; then
  # Backup existing config
  cp "$CONFIG_FILE" "$CONFIG_FILE.backup"
  echo "Created backup of existing config at $CONFIG_FILE.backup"
  
  # Merging into an existing config requires jq
  if ! command -v jq &> /dev/null; then
    echo "⚠️ jq is not installed; cannot merge into the existing config automatically."
    echo "Please add this entry to the \"mcpServers\" section of $CONFIG_FILE manually:"
    echo "  \"lmstudio-bridge\": {\"command\": \"/bin/bash\", \"args\": [\"$SCRIPT_PATH\"]}"
  elif grep -q "\"mcpServers\"" "$CONFIG_FILE"; then
    # Add or update the lmstudio-bridge entry
    TMP_FILE=$(mktemp)
    jq --arg path "$SCRIPT_PATH" '.mcpServers["lmstudio-bridge"] = {"command": "/bin/bash", "args": [$path]}' "$CONFIG_FILE" > "$TMP_FILE"
    mv "$TMP_FILE" "$CONFIG_FILE"
  else
    # Create the mcpServers section
    TMP_FILE=$(mktemp)
    jq --arg path "$SCRIPT_PATH" '. + {"mcpServers": {"lmstudio-bridge": {"command": "/bin/bash", "args": [$path]}}}' "$CONFIG_FILE" > "$TMP_FILE"
    mv "$TMP_FILE" "$CONFIG_FILE"
  fi
else
  # Create new config file
  echo "{
  \"mcpServers\": {
    \"lmstudio-bridge\": {
      \"command\": \"/bin/bash\",
      \"args\": [
        \"$SCRIPT_PATH\"
      ]
    }
  }
}" > "$CONFIG_FILE"
fi

echo "✅ Updated Claude Desktop configuration at $CONFIG_FILE"

echo
echo "✅ Installation complete!"
echo "Please restart Claude Desktop to use the LMStudio bridge"
echo
echo "If you encounter any issues, edit run_server.sh to check settings"
echo "or refer to the README.md for troubleshooting steps."

```

--------------------------------------------------------------------------------
/run_server.bat:
--------------------------------------------------------------------------------

```
@echo off
SETLOCAL

REM Configuration - Auto-detect Python path
IF "%PYTHON_PATH%"=="" (
  FOR /F "tokens=*" %%i IN ('where python') DO (
    SET PYTHON_PATH=%%i
    GOTO :found_python
  )
  
  echo ERROR: Python not found in your PATH 1>&2
  echo Please install Python first and make sure it's in your PATH 1>&2
  EXIT /B 1
  
  :found_python
)

REM Print current environment details
echo Current directory: %CD% 1>&2
echo Using Python at: %PYTHON_PATH% 1>&2

REM Check if Python exists at the specified path
IF NOT EXIST "%PYTHON_PATH%" (
  echo ERROR: Python not found at %PYTHON_PATH% 1>&2
  echo Please install Python or set the correct path in this script. 1>&2
  EXIT /B 1
)

REM Prefer the project's virtual environment if it exists, so the package
REM checks below run against the interpreter that will start the server
IF EXIST "venv\Scripts\python.exe" (
  echo Using Python from virtual environment 1>&2
  SET PYTHON_PATH=%CD%\venv\Scripts\python.exe
  echo Updated Python path to: %CD%\venv\Scripts\python.exe 1>&2
)

REM Check if mcp is installed, if not, try to install it
"%PYTHON_PATH%" -c "import mcp" >nul 2>&1
IF %ERRORLEVEL% NEQ 0 (
  echo MCP package not found, attempting to install... 1>&2
  
  REM Try to install using python -m pip
  "%PYTHON_PATH%" -m pip install "mcp[cli]" httpx
  IF %ERRORLEVEL% NEQ 0 (
    echo Failed to install MCP package. Please install manually with: 1>&2
    echo "%PYTHON_PATH%" -m pip install "mcp[cli]" httpx 1>&2
    EXIT /B 1
  )
  
  REM Check if installation was successful
  "%PYTHON_PATH%" -c "import mcp" >nul 2>&1
  IF %ERRORLEVEL% NEQ 0 (
    echo MCP package was installed but still can't be imported. 1>&2
    echo This might be due to a Python path issue. 1>&2
    EXIT /B 1
  )
)

REM Check if httpx is installed
"%PYTHON_PATH%" -c "import httpx" >nul 2>&1
IF %ERRORLEVEL% NEQ 0 (
  echo httpx package not found, attempting to install... 1>&2
  "%PYTHON_PATH%" -m pip install httpx
  IF %ERRORLEVEL% NEQ 0 (
    echo Failed to install httpx package. 1>&2
    EXIT /B 1
  )
)

REM Check if dotenv is installed (for .env file support)
"%PYTHON_PATH%" -c "import dotenv" >nul 2>&1
IF %ERRORLEVEL% NEQ 0 (
  echo python-dotenv package not found, attempting to install... 1>&2
  "%PYTHON_PATH%" -m pip install python-dotenv
  IF %ERRORLEVEL% NEQ 0 (
    echo Failed to install python-dotenv package. 1>&2
    EXIT /B 1
  )
)

REM Attempt to check if LM Studio is running before starting
netstat -an | findstr "127.0.0.1:1234" >nul
IF %ERRORLEVEL% NEQ 0 (
  echo WARNING: LM Studio does not appear to be running on port 1234 1>&2
  echo Please make sure LM Studio is running with the API server enabled 1>&2
) ELSE (
  echo ✓ LM Studio API server appears to be running on port 1234 1>&2
)

REM Run the server script
echo Starting server.py with %PYTHON_PATH%... 1>&2
"%PYTHON_PATH%" server.py

ENDLOCAL

```

--------------------------------------------------------------------------------
/install.bat:
--------------------------------------------------------------------------------

```
@echo off
echo ===== Claude-LMStudio Bridge Installer =====
echo This will configure the bridge to work with Claude Desktop
echo.

:: Find Python location
for /f "tokens=*" %%i in ('where python') do (
    set PYTHON_PATH=%%i
    goto :found_python
)

echo X ERROR: Python not found in your PATH
echo Please install Python first and try again
exit /b 1

:found_python
echo ✓ Found Python at: %PYTHON_PATH%

:: Update the auto-detect line in run_server.bat with the correct Python path
:: (matched exactly so the venv override later in that script is left untouched)
echo Updating run_server.bat with Python path...
powershell -Command "(Get-Content run_server.bat) -replace 'SET PYTHON_PATH=%%%%i', 'SET PYTHON_PATH=%PYTHON_PATH%' | Set-Content run_server.bat"

:: Install required packages
echo Installing required Python packages...
"%PYTHON_PATH%" -m pip install "mcp[cli]" httpx

:: Check if installation was successful
"%PYTHON_PATH%" -c "import mcp" >nul 2>&1
if %ERRORLEVEL% NEQ 0 (
    echo X ERROR: Failed to install MCP package
    echo Try running manually: "%PYTHON_PATH%" -m pip install "mcp[cli]" httpx
    exit /b 1
)

echo ✓ MCP package installed successfully

:: Get full path to the run_server.bat script
set SCRIPT_DIR=%~dp0
set SCRIPT_PATH=%SCRIPT_DIR%run_server.bat
echo Script path: %SCRIPT_PATH%

:: Create or update Claude Desktop config
set CONFIG_DIR=%APPDATA%\Claude
set CONFIG_FILE=%CONFIG_DIR%\claude_desktop_config.json

if not exist "%CONFIG_DIR%" mkdir "%CONFIG_DIR%"

if exist "%CONFIG_FILE%" (
    :: Backup existing config
    copy "%CONFIG_FILE%" "%CONFIG_FILE%.backup" >nul
    echo Created backup of existing config at %CONFIG_FILE%.backup
    
    :: Create new config file - we'll use a simple approach for Windows
    echo {> "%CONFIG_FILE%"
    echo   "mcpServers": {>> "%CONFIG_FILE%"
    echo     "lmstudio-bridge": {>> "%CONFIG_FILE%"
    echo       "command": "cmd.exe",>> "%CONFIG_FILE%"
    echo       "args": [>> "%CONFIG_FILE%"
    echo         "/c",>> "%CONFIG_FILE%"
    echo         "%SCRIPT_PATH:\=\\%">> "%CONFIG_FILE%"
    echo       ]>> "%CONFIG_FILE%"
    echo     }>> "%CONFIG_FILE%"
    echo   }>> "%CONFIG_FILE%"
    echo }>> "%CONFIG_FILE%"
) else (
    :: Create new config file
    echo {> "%CONFIG_FILE%"
    echo   "mcpServers": {>> "%CONFIG_FILE%"
    echo     "lmstudio-bridge": {>> "%CONFIG_FILE%"
    echo       "command": "cmd.exe",>> "%CONFIG_FILE%"
    echo       "args": [>> "%CONFIG_FILE%"
    echo         "/c",>> "%CONFIG_FILE%"
    echo         "%SCRIPT_PATH:\=\\%">> "%CONFIG_FILE%"
    echo       ]>> "%CONFIG_FILE%"
    echo     }>> "%CONFIG_FILE%"
    echo   }>> "%CONFIG_FILE%"
    echo }>> "%CONFIG_FILE%"
)

echo ✓ Updated Claude Desktop configuration at %CONFIG_FILE%

echo.
echo ✓ Installation complete!
echo Please restart Claude Desktop to use the LMStudio bridge
echo.
echo If you encounter any issues, edit run_server.bat to check settings
echo or refer to the README.md for troubleshooting steps.

pause
```

--------------------------------------------------------------------------------
/verify_setup.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python3
"""
Verification script to check if all required packages are installed.
This script will check for the presence of essential packages and their versions.
"""
import sys
import subprocess
import platform

def check_python_version():
    """Check if Python version is 3.8 or higher."""
    version = sys.version_info
    if version.major < 3 or (version.major == 3 and version.minor < 8):
        print(f"❌ Python version too old: {platform.python_version()}")
        print("   MCP requires Python 3.8 or higher.")
        return False
    else:
        print(f"✅ Python version: {platform.python_version()}")
        return True

def check_package(package_name):
    """Check if a package is installed and get its version."""
    try:
        if package_name == "mcp":
            # Special handling for mcp to test import
            module = __import__(package_name)
            version = getattr(module, "__version__", "unknown")
            print(f"✅ {package_name} is installed (version: {version})")
            return True
        else:
            # Use pip to check other packages
            result = subprocess.run(
                [sys.executable, "-m", "pip", "show", package_name],
                capture_output=True, 
                text=True
            )
            if result.returncode == 0:
                for line in result.stdout.splitlines():
                    if line.startswith("Version:"):
                        version = line.split(":", 1)[1].strip()
                        print(f"✅ {package_name} is installed (version: {version})")
                        return True
            print(f"❌ {package_name} is not installed")
            return False
    except ImportError:
        print(f"❌ {package_name} is not installed")
        return False
    except Exception as e:
        print(f"❌ Error checking {package_name}: {str(e)}")
        return False

def check_environment():
    """Check if running in a virtual environment."""
    in_venv = hasattr(sys, "real_prefix") or (
        hasattr(sys, "base_prefix") and sys.base_prefix != sys.prefix
    )
    if in_venv:
        print(f"✅ Running in virtual environment: {sys.prefix}")
        return True
    else:
        print("⚠️ Not running in a virtual environment")
        print("   It's recommended to use a virtual environment for this project")
        return True  # Not critical

def main():
    """Run all checks."""
    print("🔍 Checking environment setup for Claude-LMStudio Bridge...")
    print("-" * 60)
    
    success = True
    
    # Check Python version
    if not check_python_version():
        success = False
    
    # Check virtual environment
    check_environment()
    
    # Check essential packages
    required_packages = ["mcp", "httpx"]
    for package in required_packages:
        if not check_package(package):
            success = False
    
    print("-" * 60)
    if success:
        print("✅ All essential checks passed! Your environment is ready.")
        print("\nNext steps:")
        print("1. Run 'python test_mcp.py' to test MCP functionality")
        print("2. Run 'python debug_server.py' to test a simple MCP server")
        print("3. Run 'python server.py' to start the full bridge server")
    else:
        print("❌ Some checks failed. Please address the issues above.")
        print("\nCommon solutions:")
        print("1. Install MCP: pip install 'mcp[cli]'")
        print("2. Install httpx: pip install httpx")
        print("3. Upgrade Python to 3.8+: https://www.python.org/downloads/")
    
    return 0 if success else 1

if __name__ == "__main__":
    sys.exit(main())

```

--------------------------------------------------------------------------------
/debug_lmstudio.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python3
"""
debug_lmstudio.py - Simple diagnostic tool for LM Studio connectivity

This script tests the connection to LM Studio's API server and helps identify
issues with the connection or API calls.
"""
import sys
import json
import traceback
import argparse
import httpx
import asyncio

# Set up command-line arguments
parser = argparse.ArgumentParser(description="Test connection to LM Studio API")
parser.add_argument("--host", default="127.0.0.1", help="LM Studio API host (default: 127.0.0.1)")
parser.add_argument("--port", default="1234", help="LM Studio API port (default: 1234)")
parser.add_argument("--test-prompt", action="store_true", help="Test with a simple prompt")
parser.add_argument("--test-chat", action="store_true", help="Test with a simple chat message")
parser.add_argument("--verbose", "-v", action="store_true", help="Show verbose output")
args = parser.parse_args()

# Configure API URL
API_URL = f"http://{args.host}:{args.port}/v1"
print(f"Testing connection to LM Studio API at {API_URL}")

async def test_connection():
    """Test basic connectivity to the LM Studio API server"""
    try:
        print("\n=== Testing basic connectivity ===")
        async with httpx.AsyncClient() as client:
            response = await client.get(f"{API_URL}/models", timeout=5.0)
            
            if response.status_code == 200:
                print("✅ Connection successful!")
                
                # Check for available models
                data = response.json()
                if "data" in data and isinstance(data["data"], list):
                    if len(data["data"]) > 0:
                        models = [model.get("id", "Unknown") for model in data["data"]]
                        print(f"✅ Found {len(models)} available model(s): {', '.join(models)}")
                    else:
                        print("⚠️ No models are currently loaded in LM Studio")
                else:
                    print("⚠️ Unexpected response format from models endpoint")
                
                if args.verbose:
                    print("\nResponse data:")
                    print(json.dumps(data, indent=2))
                
                return True
            else:
                print(f"❌ Connection failed with status code: {response.status_code}")
                print(f"Response: {response.text[:200]}")
                return False
    except Exception as e:
        print(f"❌ Connection error: {str(e)}")
        if args.verbose:
            traceback.print_exc()
        return False

async def test_completion():
    """Test text completion API with a simple prompt"""
    if not await test_connection():
        return False
    
    print("\n=== Testing text completion API ===")
    try:
        # Simple test prompt
        payload = {
            "prompt": "Hello, my name is",
            "max_tokens": 50,
            "temperature": 0.7,
            "stream": False
        }
        
        print("Sending test prompt: 'Hello, my name is'")
        
        async with httpx.AsyncClient() as client:
            response = await client.post(
                f"{API_URL}/completions",
                json=payload,
                timeout=10.0
            )
            
            if response.status_code == 200:
                data = response.json()
                if "choices" in data and len(data["choices"]) > 0:
                    completion = data["choices"][0].get("text", "")
                    print(f"✅ Received completion response: '{completion[:50]}...'")
                    
                    if args.verbose:
                        print("\nFull response data:")
                        print(json.dumps(data, indent=2))
                    
                    return True
                else:
                    print("❌ No completion text received in the response")
                    print(f"Response: {json.dumps(data, indent=2)}")
                    return False
            else:
                print(f"❌ Completion request failed with status code: {response.status_code}")
                print(f"Response: {response.text[:200]}")
                return False
    except Exception as e:
        print(f"❌ Error during completion test: {str(e)}")
        if args.verbose:
            traceback.print_exc()
        return False

async def test_chat():
    """Test chat completion API with a simple message"""
    if not await test_connection():
        return False
    
    print("\n=== Testing chat completion API ===")
    try:
        # Simple test chat message
        payload = {
            "messages": [
                {"role": "user", "content": "What is the capital of France?"}
            ],
            "max_tokens": 50,
            "temperature": 0.7,
            "stream": False
        }
        
        print("Sending test chat message: 'What is the capital of France?'")
        
        async with httpx.AsyncClient() as client:
            response = await client.post(
                f"{API_URL}/chat/completions",
                json=payload,
                timeout=10.0
            )
            
            if response.status_code == 200:
                data = response.json()
                if "choices" in data and len(data["choices"]) > 0:
                    if "message" in data["choices"][0] and "content" in data["choices"][0]["message"]:
                        message = data["choices"][0]["message"]["content"]
                        print(f"✅ Received chat response: '{message[:50]}...'")
                        
                        if args.verbose:
                            print("\nFull response data:")
                            print(json.dumps(data, indent=2))
                        
                        return True
                    else:
                        print("❌ No message content received in the response")
                        print(f"Response: {json.dumps(data, indent=2)}")
                        return False
                else:
                    print("❌ No choices received in the response")
                    print(f"Response: {json.dumps(data, indent=2)}")
                    return False
            else:
                print(f"❌ Chat request failed with status code: {response.status_code}")
                print(f"Response: {response.text[:200]}")
                return False
    except Exception as e:
        print(f"❌ Error during chat test: {str(e)}")
        if args.verbose:
            traceback.print_exc()
        return False

async def run_tests():
    """Run all selected tests"""
    try:
        connection_ok = await test_connection()
        
        if args.test_prompt and connection_ok:
            await test_completion()
        
        if args.test_chat and connection_ok:
            await test_chat()
            
        if not args.test_prompt and not args.test_chat and connection_ok:
            # If no specific tests are requested, but connection is OK,
            # give a helpful message about next steps
            print("\n=== Next Steps ===")
            print("Connection to LM Studio API is working.")
            print("Try these additional tests:")
            print("  python debug_lmstudio.py --test-prompt  # Test text completion")
            print("  python debug_lmstudio.py --test-chat    # Test chat completion")
            print("  python debug_lmstudio.py -v --test-chat # Verbose test output")
    
    except Exception as e:
        print(f"❌ Unexpected error: {str(e)}")
        traceback.print_exc()

# Run the tests
if __name__ == "__main__":
    try:
        asyncio.run(run_tests())
    except KeyboardInterrupt:
        print("\nTests interrupted.")
        sys.exit(1)

```

--------------------------------------------------------------------------------
/server.py:
--------------------------------------------------------------------------------

```python
import sys
import traceback
import os
import json
import logging
from typing import Any, Dict, List, Optional, Union
from mcp.server.fastmcp import FastMCP
import httpx

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(levelname)s - %(message)s",
    handlers=[
        logging.StreamHandler(sys.stderr)
    ]
)

# Print startup message
logging.info("Starting LMStudio bridge server...")

try:
    # ===== Configuration =====
    # Load settings from a local .env file if python-dotenv is available,
    # then fall back to environment variables with defaults
    try:
        from dotenv import load_dotenv
        load_dotenv()
    except ImportError:
        logging.warning("python-dotenv not installed; reading settings from the environment only")
    LMSTUDIO_HOST = os.getenv("LMSTUDIO_HOST", "127.0.0.1")
    LMSTUDIO_PORT = os.getenv("LMSTUDIO_PORT", "1234")
    LMSTUDIO_API_URL = f"http://{LMSTUDIO_HOST}:{LMSTUDIO_PORT}/v1"
    DEBUG = os.getenv("DEBUG", "false").lower() in ("true", "1", "yes")

    # Set more verbose logging if debug mode is enabled
    if DEBUG:
        logging.getLogger().setLevel(logging.DEBUG)
        logging.debug(f"Debug mode enabled")

    logging.info(f"Configured LM Studio API URL: {LMSTUDIO_API_URL}")

    # Initialize FastMCP server
    mcp = FastMCP("lmstudio-bridge")

    # ===== Helper Functions =====
    async def call_lmstudio_api(endpoint: str, payload: Dict[str, Any], timeout: float = 60.0, method: str = "POST") -> Dict[str, Any]:
        """Unified API communication function with better error handling"""
        headers = {
            "Content-Type": "application/json",
            "User-Agent": "claude-lmstudio-bridge/1.0"
        }
        
        url = f"{LMSTUDIO_API_URL}/{endpoint}"
        logging.debug(f"Making request to {url}")
        logging.debug(f"Payload: {json.dumps(payload, indent=2)}")
        
        try:
            async with httpx.AsyncClient() as client:
                if method.upper() == "GET":
                    # Read-only endpoints such as /v1/models expect GET rather than POST
                    response = await client.get(url, headers=headers, timeout=timeout)
                else:
                    response = await client.post(
                        url,
                        json=payload,
                        headers=headers,
                        timeout=timeout
                    )
                
                # Better error handling with specific error messages
                if response.status_code != 200:
                    error_message = f"LM Studio API error: {response.status_code}"
                    try:
                        error_json = response.json()
                        if "error" in error_json:
                            if isinstance(error_json["error"], dict) and "message" in error_json["error"]:
                                error_message += f" - {error_json['error']['message']}"
                            else:
                                error_message += f" - {error_json['error']}"
                    except Exception:
                        error_message += f" - {response.text[:100]}"
                    
                    logging.error(f"Error response: {error_message}")
                    return {"error": error_message}
                
                result = response.json()
                logging.debug(f"Response received: {json.dumps(result, indent=2, default=str)[:200]}...")
                return result
        except httpx.RequestError as e:
            logging.error(f"Request error: {str(e)}")
            return {"error": f"Connection error: {str(e)}"}
        except Exception as e:
            logging.error(f"Unexpected error: {str(e)}")
            return {"error": f"Unexpected error: {str(e)}"}

    def prepare_chat_messages(messages_input: Union[str, List, Dict]) -> List[Dict[str, str]]:
        """Convert various input formats to what LMStudio expects"""
        try:
            # If messages_input is a string
            if isinstance(messages_input, str):
                # Try to parse it as JSON
                try:
                    parsed = json.loads(messages_input)
                    if isinstance(parsed, list):
                        return parsed
                    else:
                        # If it's parsed but not a list, make it a user message
                        return [{"role": "user", "content": messages_input}]
                except json.JSONDecodeError:
                    # If not valid JSON, assume it's a simple message
                    return [{"role": "user", "content": messages_input}]
            
            # If it's a list already
            elif isinstance(messages_input, list):
                return messages_input
            
            # If it's a dict, assume it's a single message
            elif isinstance(messages_input, dict) and "content" in messages_input:
                if "role" not in messages_input:
                    messages_input["role"] = "user"
                return [messages_input]
                
            # If it's some other format, convert to string and make it a user message
            else:
                return [{"role": "user", "content": str(messages_input)}]
        except Exception as e:
            logging.error(f"Error preparing chat messages: {str(e)}")
            # Fallback to simplest format
            return [{"role": "user", "content": str(messages_input)}]

    # ===== MCP Tools =====
    @mcp.tool()
    async def check_lmstudio_connection() -> str:
        """Check if the LM Studio server is running and accessible.
        
        Returns:
            Connection status and model information
        """
        try:
            # Try to get the server status via models endpoint
            async with httpx.AsyncClient() as client:
                response = await client.get(f"{LMSTUDIO_API_URL}/models", timeout=5.0)
                
            if response.status_code == 200:
                models_data = response.json()
                if "data" in models_data and len(models_data["data"]) > 0:
                    active_model = models_data["data"][0]["id"]
                    return f"✅ Connected to LM Studio. Active model: {active_model}"
                else:
                    return "✅ Connected to LM Studio but no models are currently loaded"
            else:
                return f"❌ LM Studio returned an error: {response.status_code}"
        except Exception as e:
            return f"❌ Failed to connect to LM Studio: {str(e)}"

    @mcp.tool()
    async def list_lmstudio_models() -> str:
        """List available LLM models in LM Studio.
        
        Returns:
            A formatted list of available models with their details.
        """
        logging.info("list_lmstudio_models function called")
        try:
            # Use the API helper function
            models_response = await call_lmstudio_api("models", {}, timeout=10.0, method="GET")
            
            # Check for errors from the API helper
            if "error" in models_response:
                return f"Error listing models: {models_response['error']}"
            
            if not models_response or "data" not in models_response:
                return "No models found or unexpected response format."
            
            models = models_response["data"]
            model_info = []
            
            for model in models:
                model_info.append(f"ID: {model.get('id', 'Unknown')}")
                model_info.append(f"Name: {model.get('name', 'Unknown')}")
                if model.get('description'):
                    model_info.append(f"Description: {model.get('description')}")
                model_info.append("---")
            
            if not model_info:
                return "No models available in LM Studio."
            
            return "\n".join(model_info)
        except Exception as e:
            logging.error(f"Unexpected error in list_lmstudio_models: {str(e)}")
            traceback.print_exc(file=sys.stderr)
            return f"Unexpected error: {str(e)}"

    @mcp.tool()
    async def generate_text(
        prompt: str,
        model_id: str = "",
        max_tokens: int = 1000,
        temperature: float = 0.7
    ) -> str:
        """Generate text using a local LLM in LM Studio.
        
        Args:
            prompt: The text prompt to send to the model
            model_id: ID of the model to use (leave empty for default model)
            max_tokens: Maximum number of tokens in the response (default: 1000)
            temperature: Randomness of the output (0-1, default: 0.7)
        
        Returns:
            The generated text from the local LLM
        """
        logging.info("generate_text function called")
        try:
            # Validate inputs
            if not prompt or not prompt.strip():
                return "Error: Prompt cannot be empty."
            
            if max_tokens < 1:
                return "Error: max_tokens must be a positive integer."
            
            if temperature < 0 or temperature > 1:
                return "Error: temperature must be between 0 and 1."
            
            # Prepare payload
            payload = {
                "prompt": prompt,
                "max_tokens": max_tokens,
                "temperature": temperature,
                "stream": False
            }
            
            # Add model if specified
            if model_id and model_id.strip():
                payload["model"] = model_id.strip()
            
            # Make request to LM Studio API using the helper function
            response = await call_lmstudio_api("completions", payload)
            
            # Check for errors from the API helper
            if "error" in response:
                return f"Error generating text: {response['error']}"
            
            # Extract and return the generated text
            if "choices" in response and len(response["choices"]) > 0:
                return response["choices"][0].get("text", "")
            
            return "No response generated."
        except Exception as e:
            logging.error(f"Unexpected error in generate_text: {str(e)}")
            traceback.print_exc(file=sys.stderr)
            return f"Unexpected error: {str(e)}"

    @mcp.tool()
    async def chat_completion(
        messages: str,
        model_id: str = "",
        max_tokens: int = 1000,
        temperature: float = 0.7
    ) -> str:
        """Generate a chat completion using a local LLM in LM Studio.
        
        Args:
            messages: JSON string of messages in the format [{"role": "user", "content": "Hello"}, ...]
              or a simple text string which will be treated as a user message
            model_id: ID of the model to use (leave empty for default model)
            max_tokens: Maximum number of tokens in the response (default: 1000)
            temperature: Randomness of the output (0-1, default: 0.7)
        
        Returns:
            The generated text from the local LLM
        """
        logging.info("chat_completion function called")
        try:
            # Standardize message format using the helper function
            messages_formatted = prepare_chat_messages(messages)
            
            logging.debug(f"Formatted messages: {json.dumps(messages_formatted, indent=2)}")
            
            # Validate inputs
            if not messages_formatted:
                return "Error: At least one message is required."
            
            if max_tokens < 1:
                return "Error: max_tokens must be a positive integer."
            
            if temperature < 0 or temperature > 1:
                return "Error: temperature must be between 0 and 1."
            
            # Prepare payload
            payload = {
                "messages": messages_formatted,
                "max_tokens": max_tokens,
                "temperature": temperature,
                "stream": False
            }
            
            # Add model if specified
            if model_id and model_id.strip():
                payload["model"] = model_id.strip()
            
            # Make request to LM Studio API using the helper function
            response = await call_lmstudio_api("chat/completions", payload)
            
            # Check for errors from the API helper
            if "error" in response:
                return f"Error generating chat completion: {response['error']}"
            
            # Extract and return the generated text
            if "choices" in response and len(response["choices"]) > 0:
                choice = response["choices"][0]
                if "message" in choice and "content" in choice["message"]:
                    return choice["message"]["content"]
            
            return "No response generated."
        except Exception as e:
            logging.error(f"Unexpected error in chat_completion: {str(e)}")
            traceback.print_exc(file=sys.stderr)
            return f"Unexpected error: {str(e)}"

    if __name__ == "__main__":
        logging.info("Starting server with stdio transport...")
        # Initialize and run the server
        mcp.run(transport='stdio')
except Exception as e:
    logging.critical(f"CRITICAL ERROR: {str(e)}")
    logging.critical("Traceback:")
    traceback.print_exc(file=sys.stderr)
    sys.exit(1)

```