# Directory Structure

```
├── .gitignore
├── debug_lmstudio.py
├── debug_server.py
├── install.bat
├── install.sh
├── INSTALLATION.md
├── prepare_for_claude.bat
├── prepare_for_claude.sh
├── README.md
├── requirements.txt
├── run_server.bat
├── run_server.sh
├── server.py
├── setup.bat
├── setup.sh
├── test_mcp.py
└── verify_setup.py
```

# Files

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------

```
venv/
__pycache__/
*.py[cod]
*$py.class
.env
.venv
env/
ENV/
.DS_Store
```

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

```markdown
# Claude-LMStudio Bridge

An MCP server that bridges Claude with local LLMs running in LM Studio.

## Overview

This tool allows Claude to interact with your local LLMs running in LM Studio, providing:

- Access to list all available models in LM Studio
- The ability to generate text using your local LLMs
- Support for chat completions through your local models
- A health check tool to verify connectivity with LM Studio

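Under the hood these map to four MCP tools defined in server.py: `check_lmstudio_connection`, `list_lmstudio_models`, `generate_text`, and `chat_completion`.
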
## Prerequisites

- [Claude Desktop](https://claude.ai/desktop) with MCP support
- [LM Studio](https://lmstudio.ai/) installed and running locally with API server enabled
- Python 3.8+ installed

## Quick Start (Recommended)

### For macOS/Linux:

1. Clone the repository
```bash
git clone https://github.com/infinitimeless/claude-lmstudio-bridge.git
cd claude-lmstudio-bridge
```

2. Run the setup script
```bash
chmod +x setup.sh
./setup.sh
```

3. Follow the setup script's instructions to configure Claude Desktop

### For Windows:

1. Clone the repository
```cmd
git clone https://github.com/infinitimeless/claude-lmstudio-bridge.git
cd claude-lmstudio-bridge
```

2. Run the setup script
```cmd
setup.bat
```

3. Follow the setup script's instructions to configure Claude Desktop

## Manual Setup

If you prefer to set things up manually:

1. Create a virtual environment (optional but recommended)
```bash
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```

2. Install the required packages
```bash
pip install -r requirements.txt
```

3. Configure Claude Desktop:
   - Open Claude Desktop preferences
   - Navigate to the 'MCP Servers' section
   - Add a new MCP server with the following configuration (the JSON equivalent is shown below):
     - **Name**: lmstudio-bridge
     - **Command**: /bin/bash (on macOS/Linux) or cmd.exe (on Windows)
     - **Arguments**:
       - macOS/Linux: /path/to/claude-lmstudio-bridge/run_server.sh
       - Windows: /c C:\path\to\claude-lmstudio-bridge\run_server.bat

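This corresponds to an entry like the following in `claude_desktop_config.json` (the same entry the install scripts write; adjust the path for your system):

```json
{
  "mcpServers": {
    "lmstudio-bridge": {
      "command": "/bin/bash",
      "args": [
        "/path/to/claude-lmstudio-bridge/run_server.sh"
      ]
    }
  }
}
```
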
## Usage with Claude

After setting up the bridge, you can use the following commands in Claude:

1. Check the connection to LM Studio:
```
Can you check if my LM Studio server is running?
```

2. List available models:
```
List the available models in my local LM Studio
```

3. Generate text with a local model:
```
Generate a short poem about spring using my local LLM
```

4. Send a chat completion:
```
Ask my local LLM: "What are the main features of transformers in machine learning?"
```

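When calling the `chat_completion` tool directly, `messages` accepts either plain text or a JSON array such as `[{"role": "user", "content": "Hello"}]` (see the tool's docstring in server.py).
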
## Troubleshooting

### Diagnosing LM Studio Connection Issues

Use the included debugging tool to check your LM Studio connection:

```bash
python debug_lmstudio.py
```

For more detailed tests:
```bash
python debug_lmstudio.py --test-chat --verbose
```

### Common Issues

**"Cannot connect to LM Studio API"**
- Make sure LM Studio is running
- Verify the API server is enabled in LM Studio (Settings > API Server)
- Check that the port (default: 1234) matches what's in your .env file

**"No models are loaded"**
- Open LM Studio and load a model
- Verify the model is running successfully

**"MCP package not found"**
- Try reinstalling: `pip install "mcp[cli]" httpx python-dotenv`
- Make sure you're using Python 3.8 or later

**"Claude can't find the bridge"**
- Check Claude Desktop configuration
- Make sure the path to run_server.sh or run_server.bat is correct and absolute
- Verify the server script is executable: `chmod +x run_server.sh` (on macOS/Linux)

## Advanced Configuration

You can customize the bridge behavior by creating a `.env` file with these settings:

```
LMSTUDIO_HOST=127.0.0.1
LMSTUDIO_PORT=1234
DEBUG=false
```

Set `DEBUG=true` to enable verbose logging for troubleshooting.
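
For reference, server.py loads these values with `python-dotenv`, falling back to the defaults shown above. A minimal sketch of the lookup logic:

```python
import os
from dotenv import load_dotenv  # provided by the python-dotenv package

load_dotenv()  # read .env from the working directory, if present

# Variables already set in the environment take precedence over .env values
HOST = os.getenv("LMSTUDIO_HOST", "127.0.0.1")
PORT = os.getenv("LMSTUDIO_PORT", "1234")
API_URL = f"http://{HOST}:{PORT}/v1"
DEBUG = os.getenv("DEBUG", "false").lower() in ("true", "1", "yes")
```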

## License

MIT
```

--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------

```
# Core dependencies
mcp[cli]>=0.1.0
httpx>=0.24.0
python-dotenv>=1.0.0

# For development and testing
pytest>=7.0.0
```

--------------------------------------------------------------------------------
/debug_server.py:
--------------------------------------------------------------------------------

```python
import sys
import traceback
from mcp.server.fastmcp import FastMCP

# Print startup message to stderr for debugging
print("Starting debug server...", file=sys.stderr)

try:
    # Initialize FastMCP server
    print("Initializing FastMCP server...", file=sys.stderr)
    mcp = FastMCP("lmstudio-bridge")

    @mcp.tool()
    async def debug_test() -> str:
        """Basic test function to verify MCP server is working.

        Returns:
            A simple confirmation message
        """
        print("debug_test function called", file=sys.stderr)
        return "MCP server is working correctly!"

    if __name__ == "__main__":
        print("Starting server with stdio transport...", file=sys.stderr)
        # Initialize and run the server
        mcp.run(transport='stdio')
except Exception as e:
    print(f"ERROR: {str(e)}", file=sys.stderr)
    print("Traceback:", file=sys.stderr)
    traceback.print_exc(file=sys.stderr)
```
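
For a quick sanity check outside Claude Desktop, you can drive this server with the MCP Python client over stdio. A minimal sketch (assuming the `mcp` package is installed and you run it from this directory; the client API follows the MCP Python SDK):

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch debug_server.py as a subprocess that speaks MCP over stdio
    params = StdioServerParameters(command="python", args=["debug_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Call the debug_test tool defined above and print its result
            result = await session.call_tool("debug_test", {})
            print(result.content)

asyncio.run(main())
```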

--------------------------------------------------------------------------------
/prepare_for_claude.bat:
--------------------------------------------------------------------------------

```
@echo off
SETLOCAL

echo === Claude-LMStudio Bridge Setup ===
echo This script will prepare the environment for use with Claude Desktop
echo.

:: Try to install MCP globally to ensure it's available
echo Installing MCP package globally...
python -m pip install "mcp[cli]" httpx

:: Check if installation was successful
python -c "import mcp" >nul 2>&1
IF %ERRORLEVEL% NEQ 0 (
  echo X Failed to install MCP package. Please check your Python installation.
  EXIT /B 1
) ELSE (
  echo ✓ MCP package installed successfully
)

:: Create virtual environment if it doesn't exist
IF NOT EXIST venv (
  echo Creating virtual environment...
  python -m venv venv
  echo ✓ Created virtual environment

  REM Activate and install dependencies (REM is used here because ::
  REM comments are unreliable inside parenthesized blocks)
  CALL venv\Scripts\activate.bat
  python -m pip install -r requirements.txt
  echo ✓ Installed dependencies in virtual environment
) ELSE (
  echo ✓ Virtual environment already exists
)

:: Display configuration instructions
echo.
echo === Configuration Instructions ===
echo 1. Open Claude Desktop preferences
echo 2. Navigate to the 'MCP Servers' section
echo 3. Add a new MCP server with the following configuration:
echo.
echo    Name: lmstudio-bridge
echo    Command: cmd.exe
echo    Arguments: /c %CD%\run_server.bat
echo.
echo 4. Start LM Studio and ensure the API server is running
echo 5. Restart Claude Desktop
echo.
echo Setup complete! You can now use the LMStudio bridge with Claude Desktop.

ENDLOCAL
```

--------------------------------------------------------------------------------
/test_mcp.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python3
"""
Simple script to test if the MCP package is working correctly.
Run this script to verify the MCP installation before attempting to run the full server.
"""
import sys
import traceback

print("Testing MCP installation...")

try:
    print("Importing mcp package...")
    from mcp.server.fastmcp import FastMCP
    print("✅ Successfully imported FastMCP")

    print("Creating FastMCP instance...")
    mcp = FastMCP("test-server")
    print("✅ Successfully created FastMCP instance")

    print("Registering simple tool...")
    @mcp.tool()
    async def hello() -> str:
        return "Hello, world!"
    print("✅ Successfully registered tool")

    print("All tests passed! MCP appears to be correctly installed.")
    print("\nNext steps:")
    print("1. First try running the debug_server.py script")
    print("2. Then try running the main server.py script if debug_server works")

except ImportError as e:
    print(f"❌ Error importing MCP: {str(e)}")
    print("\nTry reinstalling the MCP package with:")
    print("pip uninstall mcp")
    print("pip install 'mcp[cli]'")

except Exception as e:
    print(f"❌ Unexpected error: {str(e)}")
    traceback.print_exc()

    print("\nTroubleshooting tips:")
    print("1. Make sure you're using Python 3.8 or newer")
    print("2. Check that you're in the correct virtual environment")
    print("3. Try reinstalling dependencies: pip install -r requirements.txt")
```

--------------------------------------------------------------------------------
/prepare_for_claude.sh:
--------------------------------------------------------------------------------

```bash
#!/bin/bash
#
# This script prepares the Claude-LMStudio Bridge for use with Claude Desktop
# It installs required packages and ensures everything is ready to run
#

echo "=== Claude-LMStudio Bridge Setup ==="
echo "This script will prepare the environment for use with Claude Desktop"
echo

# Make the run script executable
chmod +x run_server.sh
echo "✅ Made run_server.sh executable"

# Try to install MCP globally to ensure it's available
echo "Installing MCP package globally..."
python -m pip install "mcp[cli]" httpx

# Check if installation was successful
if python -c "import mcp" 2>/dev/null; then
  echo "✅ MCP package installed successfully"
else
  echo "❌ Failed to install MCP package. Please check your Python installation."
  exit 1
fi

# Create virtual environment if it doesn't exist
if [ ! -d "venv" ]; then
  echo "Creating virtual environment..."
  python -m venv venv
  echo "✅ Created virtual environment"

  # Activate and install dependencies
  source venv/bin/activate
  python -m pip install -r requirements.txt
  echo "✅ Installed dependencies in virtual environment"
else
  echo "✅ Virtual environment already exists"
fi

# Display configuration instructions
echo
echo "=== Configuration Instructions ==="
echo "1. Open Claude Desktop preferences"
echo "2. Navigate to the 'MCP Servers' section"
echo "3. Add a new MCP server with the following configuration:"
echo
echo "   Name: lmstudio-bridge"
echo "   Command: /bin/bash"
echo "   Arguments: $(pwd)/run_server.sh"
echo
echo "4. Start LM Studio and ensure the API server is running"
echo "5. Restart Claude Desktop"
echo
echo "Setup complete! You can now use the LMStudio bridge with Claude Desktop."
```

--------------------------------------------------------------------------------
/setup.bat:
--------------------------------------------------------------------------------

```
@echo off
REM setup.bat - Simplified setup script for Claude-LMStudio Bridge

echo === Claude-LMStudio Bridge Setup ===
echo.

REM Create and activate virtual environment
if not exist venv (
  echo Creating virtual environment...
  python -m venv venv
  echo ✓ Created virtual environment
) else (
  echo ✓ Virtual environment already exists
)

REM Activate the virtual environment
call venv\Scripts\activate.bat

REM Install dependencies
echo Installing dependencies...
pip install -r requirements.txt
echo ✓ Installed dependencies

REM Create default configuration
if not exist .env (
  echo Creating default configuration...
  (
    echo LMSTUDIO_HOST=127.0.0.1
    echo LMSTUDIO_PORT=1234
    echo DEBUG=false
  ) > .env
  echo ✓ Created .env configuration file
) else (
  echo ✓ Configuration file already exists
)

REM Check if LM Studio is running
set PORT_CHECK=0
netstat -an | findstr "127.0.0.1:1234" > nul && set PORT_CHECK=1
if %PORT_CHECK%==1 (
  echo ✓ LM Studio is running on port 1234
) else (
  echo ⚠ LM Studio does not appear to be running on port 1234
  echo   Please start LM Studio and enable the API server ^(Settings ^> API Server^)
)

echo.
echo ✓ Setup complete!
echo.
echo To start the bridge, run:
echo   venv\Scripts\activate.bat ^&^& python server.py
echo.
echo To configure with Claude Desktop:
echo 1. Open Claude Desktop preferences
echo 2. Navigate to the 'MCP Servers' section
echo 3. Add a new MCP server with the following configuration:
echo    - Name: lmstudio-bridge
echo    - Command: cmd.exe
echo    - Arguments: /c %CD%\run_server.bat
echo.
echo Make sure LM Studio is running with API server enabled on port 1234.

REM Keep the window open
pause
```

--------------------------------------------------------------------------------
/setup.sh:
--------------------------------------------------------------------------------

```bash
#!/bin/bash
# setup.sh - Simplified setup script for Claude-LMStudio Bridge

echo "=== Claude-LMStudio Bridge Setup ==="

# Create and activate virtual environment
if [ ! -d "venv" ]; then
  echo "Creating virtual environment..."
  python -m venv venv
  echo "✅ Created virtual environment"
else
  echo "✅ Virtual environment already exists"
fi

# Activate the virtual environment
source venv/bin/activate

# Install dependencies
echo "Installing dependencies..."
pip install -r requirements.txt
echo "✅ Installed dependencies"

# Create default configuration
if [ ! -f ".env" ]; then
  echo "Creating default configuration..."
  cat > .env << EOL
LMSTUDIO_HOST=127.0.0.1
LMSTUDIO_PORT=1234
DEBUG=false
EOL
  echo "✅ Created .env configuration file"
else
  echo "✅ Configuration file already exists"
fi

# Make run_server.sh executable
chmod +x run_server.sh
echo "✅ Made run_server.sh executable"

# Check if LM Studio is running
if nc -z localhost 1234 2>/dev/null; then
  echo "✅ LM Studio is running on port 1234"
else
  echo "⚠️ LM Studio does not appear to be running on port 1234"
  echo "   Please start LM Studio and enable the API server (Settings > API Server)"
fi

echo
echo "✅ Setup complete!"
echo
echo "To start the bridge, run:"
echo "  source venv/bin/activate && python server.py"
echo
echo "To configure with Claude Desktop:"
echo "1. Open Claude Desktop preferences"
echo "2. Navigate to the 'MCP Servers' section"
echo "3. Add a new MCP server with the following configuration:"
echo "   - Name: lmstudio-bridge"
echo "   - Command: /bin/bash"
echo "   - Arguments: $(pwd)/run_server.sh"
echo
echo "Make sure LM Studio is running with API server enabled on port 1234."
```

--------------------------------------------------------------------------------
/INSTALLATION.md:
--------------------------------------------------------------------------------

```markdown
# Installation Guide for Claude-LMStudio Bridge

This guide provides detailed instructions for setting up the Claude-LMStudio Bridge MCP server.

## Installing the MCP Python SDK

The most common issue users face is not having the MCP module installed properly. There are a few ways to install it:

### Using uv (Recommended)

`uv` is a modern Python package installer that's recommended for MCP development:

```bash
# Install uv if you don't have it
pip install uv

# Install the MCP SDK with CLI support (inside a uv-managed project;
# use `uv pip install "mcp[cli]"` for a plain virtual environment)
uv add "mcp[cli]"
```

### Using pip

Alternatively, you can use pip:

```bash
pip install "mcp[cli]"
```

## Verifying Installation

After installation, verify that the module is correctly installed:

```bash
python -c "import mcp; print(mcp.__version__)"
```

This should print the version of the MCP SDK if it's installed correctly. (If the package doesn't expose `__version__`, `python -c "import importlib.metadata as m; print(m.version('mcp'))"` works as well.)

## Ensuring the Correct Environment

Make sure you're using the correct Python environment:

1. If using a virtual environment, activate it before running your script:

   ```bash
   # Activate virtual environment
   source venv/bin/activate  # On macOS/Linux
   # or
   venv\Scripts\activate  # On Windows
   ```

2. Verify the Python path to ensure you're using the expected Python interpreter:

   ```bash
   which python  # On macOS/Linux
   where python  # On Windows
   ```

## Testing the Installation

Run the test script to verify your setup:

```bash
python test_mcp.py
```

If this runs successfully, you should be ready to run the server.

## Common Issues and Solutions

1. **ModuleNotFoundError: No module named 'mcp'**
   - The MCP module isn't installed in your current Python environment
   - Solution: Install the MCP SDK as described above

2. **MCP installed but still getting import errors**
   - You might be running Python from a different environment
   - Solution: Check which Python is being used with `which python` and make sure your virtual environment is activated

3. **Error loading the server in Claude**
   - Make sure you're using absolute paths in your Claude Desktop configuration
   - Check that the server script is executable and that Python has permission to access it
```

--------------------------------------------------------------------------------
/run_server.sh:
--------------------------------------------------------------------------------

```bash
#!/bin/bash

# Configuration - Auto-detect Python path
if [ -z "$PYTHON_PATH" ]; then
  PYTHON_PATH=$(which python3 2>/dev/null || which python 2>/dev/null)
  if [ -z "$PYTHON_PATH" ]; then
    echo "ERROR: Python not found. Please install Python 3." >&2
    exit 1
  fi
fi

# Print current environment details
echo "Current directory: $(pwd)" >&2
echo "Using Python at: $PYTHON_PATH" >&2

# Check if Python exists at the specified path
if [ ! -f "$PYTHON_PATH" ]; then
  echo "ERROR: Python not found at $PYTHON_PATH" >&2
  echo "Please install Python or set the correct path in this script." >&2
  exit 1
fi

# Prefer the project virtual environment if it exists
# (done before the dependency checks below, so packages are verified and
# installed for the interpreter that will actually run the server)
if [ -d "venv" ] && [ -f "venv/bin/python" ]; then
  echo "Using Python from virtual environment" >&2
  PYTHON_PATH=$(pwd)/venv/bin/python
  echo "Updated Python path to: $PYTHON_PATH" >&2
fi

# Check if mcp is installed, if not, try to install it
if ! $PYTHON_PATH -c "import mcp" 2>/dev/null; then
  echo "MCP package not found, attempting to install..." >&2

  # Try to install using python -m pip
  $PYTHON_PATH -m pip install "mcp[cli]" httpx || {
    echo "Failed to install MCP package. Please install manually with:" >&2
    echo "$PYTHON_PATH -m pip install \"mcp[cli]\" httpx" >&2
    exit 1
  }

  # Check if installation was successful
  if ! $PYTHON_PATH -c "import mcp" 2>/dev/null; then
    echo "MCP package was installed but still can't be imported." >&2
    echo "This might be due to a Python path issue." >&2
    exit 1
  fi
fi

# Check if httpx is installed
if ! $PYTHON_PATH -c "import httpx" 2>/dev/null; then
  echo "httpx package not found, attempting to install..." >&2
  $PYTHON_PATH -m pip install httpx || {
    echo "Failed to install httpx package." >&2
    exit 1
  }
fi

# Check if dotenv is installed (for .env file support)
if ! $PYTHON_PATH -c "import dotenv" 2>/dev/null; then
  echo "python-dotenv package not found, attempting to install..." >&2
  $PYTHON_PATH -m pip install python-dotenv || {
    echo "Failed to install python-dotenv package." >&2
    exit 1
  }
fi

# Attempt to check if LM Studio is running before starting
if command -v nc &> /dev/null; then
  if ! nc -z localhost 1234 2>/dev/null; then
    echo "WARNING: LM Studio does not appear to be running on port 1234" >&2
    echo "Please make sure LM Studio is running with the API server enabled" >&2
  else
    echo "✓ LM Studio API server appears to be running on port 1234" >&2
  fi
fi

# Run the server script
echo "Starting server.py with $PYTHON_PATH..." >&2
$PYTHON_PATH server.py
```

--------------------------------------------------------------------------------
/install.sh:
--------------------------------------------------------------------------------

```bash
#!/bin/bash

# Claude-LMStudio Bridge Installer
# This script will set up the Claude-LMStudio Bridge for use with Claude Desktop

echo "===== Claude-LMStudio Bridge Installer ====="
echo "This will configure the bridge to work with Claude Desktop"
echo

# Find Python location
PYTHON_PATH=$(which python3)
if [ -z "$PYTHON_PATH" ]; then
  echo "❌ ERROR: Python 3 not found in your PATH"
  echo "Please install Python 3 first and try again"
  exit 1
fi

echo "✅ Found Python at: $PYTHON_PATH"

# Update run_server.sh with the detected Python path
# (the pattern matches only the auto-detect line so the venv override further
# down in run_server.sh is left alone; `-i ''` is BSD sed syntax, which is
# fine here because this installer targets macOS)
echo "Updating run_server.sh with Python path..."
sed -i '' "s|PYTHON_PATH=\$(which python3.*|PYTHON_PATH=\"$PYTHON_PATH\"|" run_server.sh
chmod +x run_server.sh

# Install required packages
echo "Installing required Python packages..."
"$PYTHON_PATH" -m pip install "mcp[cli]" httpx

# Check if installation was successful
if ! "$PYTHON_PATH" -c "import mcp" 2>/dev/null; then
  echo "❌ ERROR: Failed to install MCP package"
  echo "Try running manually: $PYTHON_PATH -m pip install \"mcp[cli]\" httpx"
  exit 1
fi

echo "✅ MCP package installed successfully"

# Get full path to the run_server.sh script
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )"
SCRIPT_PATH="$SCRIPT_DIR/run_server.sh"

# Create or update Claude Desktop config
CONFIG_DIR="$HOME/Library/Application Support/Claude"
CONFIG_FILE="$CONFIG_DIR/claude_desktop_config.json"

mkdir -p "$CONFIG_DIR"

if [ -f "$CONFIG_FILE" ]; then
  # Backup existing config
  cp "$CONFIG_FILE" "$CONFIG_FILE.backup"
  echo "Created backup of existing config at $CONFIG_FILE.backup"

  # jq is needed to merge into an existing config without clobbering it
  if ! command -v jq >/dev/null 2>&1; then
    echo "❌ ERROR: jq is required to update an existing config"
    echo "Install it (e.g. 'brew install jq') and run this installer again"
    exit 1
  fi

  # Check if the config already has an mcpServers property
  if grep -q "\"mcpServers\"" "$CONFIG_FILE"; then
    # Add or update the lmstudio-bridge entry
    TMP_FILE=$(mktemp)
    jq --arg path "$SCRIPT_PATH" '.mcpServers["lmstudio-bridge"] = {"command": "/bin/bash", "args": [$path]}' "$CONFIG_FILE" > "$TMP_FILE"
    mv "$TMP_FILE" "$CONFIG_FILE"
  else
    # Create the mcpServers section
    TMP_FILE=$(mktemp)
    jq --arg path "$SCRIPT_PATH" '. + {"mcpServers": {"lmstudio-bridge": {"command": "/bin/bash", "args": [$path]}}}' "$CONFIG_FILE" > "$TMP_FILE"
    mv "$TMP_FILE" "$CONFIG_FILE"
  fi
else
  # Create new config file
  echo "{
  \"mcpServers\": {
    \"lmstudio-bridge\": {
      \"command\": \"/bin/bash\",
      \"args\": [
        \"$SCRIPT_PATH\"
      ]
    }
  }
}" > "$CONFIG_FILE"
fi

echo "✅ Updated Claude Desktop configuration at $CONFIG_FILE"

echo
echo "✅ Installation complete!"
echo "Please restart Claude Desktop to use the LMStudio bridge"
echo
echo "If you encounter any issues, edit run_server.sh to check settings"
echo "or refer to the README.md for troubleshooting steps."
```

--------------------------------------------------------------------------------
/run_server.bat:
--------------------------------------------------------------------------------

```
@echo off
SETLOCAL

REM Configuration - Auto-detect Python path
REM (the :found_python label sits outside the block below because labels
REM inside parenthesized blocks are unreliable in cmd)
IF NOT "%PYTHON_PATH%"=="" GOTO :found_python
FOR /F "tokens=*" %%i IN ('where python') DO (
  SET PYTHON_PATH=%%i
  GOTO :found_python
)

echo ERROR: Python not found in your PATH 1>&2
echo Please install Python first and make sure it's in your PATH 1>&2
EXIT /B 1

:found_python

REM Print current environment details
echo Current directory: %CD% 1>&2
echo Using Python at: %PYTHON_PATH% 1>&2

REM Check if Python exists at the specified path
IF NOT EXIST "%PYTHON_PATH%" (
  echo ERROR: Python not found at %PYTHON_PATH% 1>&2
  echo Please install Python or set the correct path in this script. 1>&2
  EXIT /B 1
)

REM Check if a virtual environment exists and use it if it does
REM (done before the dependency checks so packages are verified and
REM installed for the interpreter that will actually run the server)
IF EXIST "venv\Scripts\python.exe" (
  echo Using Python from virtual environment 1>&2
  SET PYTHON_PATH=%CD%\venv\Scripts\python.exe
  echo Updated Python path to: %CD%\venv\Scripts\python.exe 1>&2
)

REM Check if mcp is installed, if not, try to install it
REM (IF ERRORLEVEL 1 is used inside the block below because %ERRORLEVEL%
REM would be expanded only once, when the whole block is parsed)
"%PYTHON_PATH%" -c "import mcp" >nul 2>&1
IF %ERRORLEVEL% NEQ 0 (
  echo MCP package not found, attempting to install... 1>&2

  REM Try to install using python -m pip
  "%PYTHON_PATH%" -m pip install "mcp[cli]" httpx
  IF ERRORLEVEL 1 (
    echo Failed to install MCP package. Please install manually with: 1>&2
    echo "%PYTHON_PATH%" -m pip install "mcp[cli]" httpx 1>&2
    EXIT /B 1
  )

  REM Check if installation was successful
  "%PYTHON_PATH%" -c "import mcp" >nul 2>&1
  IF ERRORLEVEL 1 (
    echo MCP package was installed but still can't be imported. 1>&2
    echo This might be due to a Python path issue. 1>&2
    EXIT /B 1
  )
)

REM Check if httpx is installed
"%PYTHON_PATH%" -c "import httpx" >nul 2>&1
IF %ERRORLEVEL% NEQ 0 (
  echo httpx package not found, attempting to install... 1>&2
  "%PYTHON_PATH%" -m pip install httpx
  IF ERRORLEVEL 1 (
    echo Failed to install httpx package. 1>&2
    EXIT /B 1
  )
)

REM Check if dotenv is installed (for .env file support)
"%PYTHON_PATH%" -c "import dotenv" >nul 2>&1
IF %ERRORLEVEL% NEQ 0 (
  echo python-dotenv package not found, attempting to install... 1>&2
  "%PYTHON_PATH%" -m pip install python-dotenv
  IF ERRORLEVEL 1 (
    echo Failed to install python-dotenv package. 1>&2
    EXIT /B 1
  )
)

REM Attempt to check if LM Studio is running before starting
netstat -an | findstr "127.0.0.1:1234" >nul
IF %ERRORLEVEL% NEQ 0 (
  echo WARNING: LM Studio does not appear to be running on port 1234 1>&2
  echo Please make sure LM Studio is running with the API server enabled 1>&2
) ELSE (
  echo ✓ LM Studio API server appears to be running on port 1234 1>&2
)

REM Run the server script
echo Starting server.py with %PYTHON_PATH%... 1>&2
"%PYTHON_PATH%" server.py

ENDLOCAL
```

--------------------------------------------------------------------------------
/install.bat:
--------------------------------------------------------------------------------

```
@echo off
echo ===== Claude-LMStudio Bridge Installer =====
echo This will configure the bridge to work with Claude Desktop
echo.

:: Find Python location
for /f "tokens=*" %%i in ('where python') do (
    set PYTHON_PATH=%%i
    goto :found_python
)

echo X ERROR: Python not found in your PATH
echo Please install Python first and try again
exit /b 1

:found_python
echo v Found Python at: %PYTHON_PATH%

:: Update run_server.bat with the detected Python path
:: (match only the auto-detect assignment so the venv override in that script is left alone)
echo Updating run_server.bat with Python path...
powershell -Command "(Get-Content run_server.bat) -replace 'SET PYTHON_PATH=%%%%i', 'SET PYTHON_PATH=%PYTHON_PATH%' | Set-Content run_server.bat"

:: Install required packages
echo Installing required Python packages...
"%PYTHON_PATH%" -m pip install "mcp[cli]" httpx

:: Check if installation was successful
"%PYTHON_PATH%" -c "import mcp" >nul 2>&1
if %ERRORLEVEL% NEQ 0 (
    echo X ERROR: Failed to install MCP package
    echo Try running manually: "%PYTHON_PATH%" -m pip install "mcp[cli]" httpx
    exit /b 1
)

echo v MCP package installed successfully

:: Get full path to the run_server.bat script
set SCRIPT_DIR=%~dp0
set SCRIPT_PATH=%SCRIPT_DIR%run_server.bat
echo Script path: %SCRIPT_PATH%

:: Create or update Claude Desktop config
set CONFIG_DIR=%APPDATA%\Claude
set CONFIG_FILE=%CONFIG_DIR%\claude_desktop_config.json

if not exist "%CONFIG_DIR%" mkdir "%CONFIG_DIR%"

if exist "%CONFIG_FILE%" (
    REM Back up the existing config (REM is used here because :: comments
    REM are unreliable inside parenthesized blocks)
    copy "%CONFIG_FILE%" "%CONFIG_FILE%.backup" >nul
    echo Created backup of existing config at %CONFIG_FILE%.backup
)

:: Write the config file - a simple approach for Windows; note that this
:: replaces any other MCP servers defined in an existing config
echo {> "%CONFIG_FILE%"
echo   "mcpServers": {>> "%CONFIG_FILE%"
echo     "lmstudio-bridge": {>> "%CONFIG_FILE%"
echo       "command": "cmd.exe",>> "%CONFIG_FILE%"
echo       "args": [>> "%CONFIG_FILE%"
echo         "/c",>> "%CONFIG_FILE%"
echo         "%SCRIPT_PATH:\=\\%">> "%CONFIG_FILE%"
echo       ]>> "%CONFIG_FILE%"
echo     }>> "%CONFIG_FILE%"
echo   }>> "%CONFIG_FILE%"
echo }>> "%CONFIG_FILE%"

echo v Updated Claude Desktop configuration at %CONFIG_FILE%

echo.
echo v Installation complete!
echo Please restart Claude Desktop to use the LMStudio bridge
echo.
echo If you encounter any issues, edit run_server.bat to check settings
echo or refer to the README.md for troubleshooting steps.

pause
```

--------------------------------------------------------------------------------
/verify_setup.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python3
"""
Verification script to check if all required packages are installed.
This script will check for the presence of essential packages and their versions.
"""
import sys
import subprocess
import platform

def check_python_version():
    """Check if Python version is 3.8 or higher."""
    version = sys.version_info
    if version.major < 3 or (version.major == 3 and version.minor < 8):
        print(f"❌ Python version too old: {platform.python_version()}")
        print("   MCP requires Python 3.8 or higher.")
        return False
    else:
        print(f"✅ Python version: {platform.python_version()}")
        return True

def check_package(package_name):
    """Check if a package is installed and get its version."""
    try:
        if package_name == "mcp":
            # Special handling for mcp to test import
            module = __import__(package_name)
            version = getattr(module, "__version__", "unknown")
            print(f"✅ {package_name} is installed (version: {version})")
            return True
        else:
            # Use pip to check other packages
            result = subprocess.run(
                [sys.executable, "-m", "pip", "show", package_name],
                capture_output=True,
                text=True
            )
            if result.returncode == 0:
                for line in result.stdout.splitlines():
                    if line.startswith("Version:"):
                        version = line.split(":", 1)[1].strip()
                        print(f"✅ {package_name} is installed (version: {version})")
                        return True
            print(f"❌ {package_name} is not installed")
            return False
    except ImportError:
        print(f"❌ {package_name} is not installed")
        return False
    except Exception as e:
        print(f"❌ Error checking {package_name}: {str(e)}")
        return False

def check_environment():
    """Check if running in a virtual environment."""
    in_venv = hasattr(sys, "real_prefix") or (
        hasattr(sys, "base_prefix") and sys.base_prefix != sys.prefix
    )
    if in_venv:
        print(f"✅ Running in virtual environment: {sys.prefix}")
        return True
    else:
        print("⚠️ Not running in a virtual environment")
        print("   It's recommended to use a virtual environment for this project")
        return True  # Not critical

def main():
    """Run all checks."""
    print("🔍 Checking environment setup for Claude-LMStudio Bridge...")
    print("-" * 60)

    success = True

    # Check Python version
    if not check_python_version():
        success = False

    # Check virtual environment
    check_environment()

    # Check essential packages
    required_packages = ["mcp", "httpx"]
    for package in required_packages:
        if not check_package(package):
            success = False

    print("-" * 60)
    if success:
        print("✅ All essential checks passed! Your environment is ready.")
        print("\nNext steps:")
        print("1. Run 'python test_mcp.py' to test MCP functionality")
        print("2. Run 'python debug_server.py' to test a simple MCP server")
        print("3. Run 'python server.py' to start the full bridge server")
    else:
        print("❌ Some checks failed. Please address the issues above.")
        print("\nCommon solutions:")
        print("1. Install MCP: pip install 'mcp[cli]'")
        print("2. Install httpx: pip install httpx")
        print("3. Upgrade Python to 3.8+: https://www.python.org/downloads/")

    return 0 if success else 1

if __name__ == "__main__":
    sys.exit(main())
```

--------------------------------------------------------------------------------
/debug_lmstudio.py:
--------------------------------------------------------------------------------

```python
#!/usr/bin/env python3
"""
debug_lmstudio.py - Simple diagnostic tool for LM Studio connectivity

This script tests the connection to LM Studio's API server and helps identify
issues with the connection or API calls.
"""
import sys
import json
import traceback
import argparse
import httpx
import asyncio

# Set up command-line arguments
parser = argparse.ArgumentParser(description="Test connection to LM Studio API")
parser.add_argument("--host", default="127.0.0.1", help="LM Studio API host (default: 127.0.0.1)")
parser.add_argument("--port", default="1234", help="LM Studio API port (default: 1234)")
parser.add_argument("--test-prompt", action="store_true", help="Test with a simple prompt")
parser.add_argument("--test-chat", action="store_true", help="Test with a simple chat message")
parser.add_argument("--verbose", "-v", action="store_true", help="Show verbose output")
args = parser.parse_args()

# Configure API URL
API_URL = f"http://{args.host}:{args.port}/v1"
print(f"Testing connection to LM Studio API at {API_URL}")

async def test_connection():
    """Test basic connectivity to the LM Studio API server"""
    try:
        print("\n=== Testing basic connectivity ===")
        async with httpx.AsyncClient() as client:
            response = await client.get(f"{API_URL}/models", timeout=5.0)

            if response.status_code == 200:
                print("✅ Connection successful!")

                # Check for available models
                data = response.json()
                if "data" in data and isinstance(data["data"], list):
                    if len(data["data"]) > 0:
                        models = [model.get("id", "Unknown") for model in data["data"]]
                        print(f"✅ Found {len(models)} available model(s): {', '.join(models)}")
                    else:
                        print("⚠️ No models are currently loaded in LM Studio")
                else:
                    print("⚠️ Unexpected response format from models endpoint")

                if args.verbose:
                    print("\nResponse data:")
                    print(json.dumps(data, indent=2))

                return True
            else:
                print(f"❌ Connection failed with status code: {response.status_code}")
                print(f"Response: {response.text[:200]}")
                return False
    except Exception as e:
        print(f"❌ Connection error: {str(e)}")
        if args.verbose:
            traceback.print_exc()
        return False

async def test_completion():
    """Test text completion API with a simple prompt"""
    if not await test_connection():
        return False

    print("\n=== Testing text completion API ===")
    try:
        # Simple test prompt
        payload = {
            "prompt": "Hello, my name is",
            "max_tokens": 50,
            "temperature": 0.7,
            "stream": False
        }

        print("Sending test prompt: 'Hello, my name is'")

        async with httpx.AsyncClient() as client:
            response = await client.post(
                f"{API_URL}/completions",
                json=payload,
                timeout=10.0
            )

            if response.status_code == 200:
                data = response.json()
                if "choices" in data and len(data["choices"]) > 0:
                    completion = data["choices"][0].get("text", "")
                    print(f"✅ Received completion response: '{completion[:50]}...'")

                    if args.verbose:
                        print("\nFull response data:")
                        print(json.dumps(data, indent=2))

                    return True
                else:
                    print("❌ No completion text received in the response")
                    print(f"Response: {json.dumps(data, indent=2)}")
                    return False
            else:
                print(f"❌ Completion request failed with status code: {response.status_code}")
                print(f"Response: {response.text[:200]}")
                return False
    except Exception as e:
        print(f"❌ Error during completion test: {str(e)}")
        if args.verbose:
            traceback.print_exc()
        return False

async def test_chat():
    """Test chat completion API with a simple message"""
    if not await test_connection():
        return False

    print("\n=== Testing chat completion API ===")
    try:
        # Simple test chat message
        payload = {
            "messages": [
                {"role": "user", "content": "What is the capital of France?"}
            ],
            "max_tokens": 50,
            "temperature": 0.7,
            "stream": False
        }

        print("Sending test chat message: 'What is the capital of France?'")

        async with httpx.AsyncClient() as client:
            response = await client.post(
                f"{API_URL}/chat/completions",
                json=payload,
                timeout=10.0
            )

            if response.status_code == 200:
                data = response.json()
                if "choices" in data and len(data["choices"]) > 0:
                    if "message" in data["choices"][0] and "content" in data["choices"][0]["message"]:
                        message = data["choices"][0]["message"]["content"]
                        print(f"✅ Received chat response: '{message[:50]}...'")

                        if args.verbose:
                            print("\nFull response data:")
                            print(json.dumps(data, indent=2))

                        return True
                    else:
                        print("❌ No message content received in the response")
                        print(f"Response: {json.dumps(data, indent=2)}")
                        return False
                else:
                    print("❌ No choices received in the response")
                    print(f"Response: {json.dumps(data, indent=2)}")
                    return False
            else:
                print(f"❌ Chat request failed with status code: {response.status_code}")
                print(f"Response: {response.text[:200]}")
                return False
    except Exception as e:
        print(f"❌ Error during chat test: {str(e)}")
        if args.verbose:
            traceback.print_exc()
        return False

async def run_tests():
    """Run all selected tests"""
    try:
        connection_ok = await test_connection()

        if args.test_prompt and connection_ok:
            await test_completion()

        if args.test_chat and connection_ok:
            await test_chat()

        if not args.test_prompt and not args.test_chat and connection_ok:
            # If no specific tests are requested, but connection is OK,
            # give a helpful message about next steps
            print("\n=== Next Steps ===")
            print("Connection to LM Studio API is working.")
            print("Try these additional tests:")
            print("  python debug_lmstudio.py --test-prompt  # Test text completion")
            print("  python debug_lmstudio.py --test-chat    # Test chat completion")
            print("  python debug_lmstudio.py -v --test-chat # Verbose test output")

    except Exception as e:
        print(f"❌ Unexpected error: {str(e)}")
        traceback.print_exc()

# Run the tests
if __name__ == "__main__":
    try:
        asyncio.run(run_tests())
    except KeyboardInterrupt:
        print("\nTests interrupted.")
        sys.exit(1)
```

--------------------------------------------------------------------------------
/server.py:
--------------------------------------------------------------------------------

```python
import sys
import traceback
import os
import json
import logging
from typing import Any, Dict, List, Union
from mcp.server.fastmcp import FastMCP
import httpx
from dotenv import load_dotenv

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(levelname)s - %(message)s",
    handlers=[
        logging.StreamHandler(sys.stderr)
    ]
)

# Print startup message
logging.info("Starting LMStudio bridge server...")

try:
    # ===== Configuration =====
    # Load settings from a .env file if present, then fall back to
    # environment variables and the defaults below
    load_dotenv()
    LMSTUDIO_HOST = os.getenv("LMSTUDIO_HOST", "127.0.0.1")
    LMSTUDIO_PORT = os.getenv("LMSTUDIO_PORT", "1234")
    LMSTUDIO_API_URL = f"http://{LMSTUDIO_HOST}:{LMSTUDIO_PORT}/v1"
    DEBUG = os.getenv("DEBUG", "false").lower() in ("true", "1", "yes")

    # Set more verbose logging if debug mode is enabled
    if DEBUG:
        logging.getLogger().setLevel(logging.DEBUG)
        logging.debug("Debug mode enabled")

    logging.info(f"Configured LM Studio API URL: {LMSTUDIO_API_URL}")

    # Initialize FastMCP server
    mcp = FastMCP("lmstudio-bridge")

    # ===== Helper Functions =====
    async def call_lmstudio_api(endpoint: str, payload: Dict[str, Any], timeout: float = 60.0, method: str = "POST") -> Dict[str, Any]:
        """Unified API communication function with better error handling"""
        headers = {
            "Content-Type": "application/json",
            "User-Agent": "claude-lmstudio-bridge/1.0"
        }

        url = f"{LMSTUDIO_API_URL}/{endpoint}"
        logging.debug(f"Making {method} request to {url}")
        logging.debug(f"Payload: {json.dumps(payload, indent=2)}")

        try:
            async with httpx.AsyncClient() as client:
                if method.upper() == "GET":
                    # Endpoints such as /v1/models expect GET
                    response = await client.get(url, headers=headers, timeout=timeout)
                else:
                    response = await client.post(
                        url,
                        json=payload,
                        headers=headers,
                        timeout=timeout
                    )

                # Better error handling with specific error messages
                if response.status_code != 200:
                    error_message = f"LM Studio API error: {response.status_code}"
                    try:
                        error_json = response.json()
                        if "error" in error_json:
                            if isinstance(error_json["error"], dict) and "message" in error_json["error"]:
                                error_message += f" - {error_json['error']['message']}"
                            else:
                                error_message += f" - {error_json['error']}"
                    except Exception:
                        error_message += f" - {response.text[:100]}"

                    logging.error(f"Error response: {error_message}")
                    return {"error": error_message}

                result = response.json()
                logging.debug(f"Response received: {json.dumps(result, indent=2, default=str)[:200]}...")
                return result
        except httpx.RequestError as e:
            logging.error(f"Request error: {str(e)}")
            return {"error": f"Connection error: {str(e)}"}
        except Exception as e:
            logging.error(f"Unexpected error: {str(e)}")
            return {"error": f"Unexpected error: {str(e)}"}

    def prepare_chat_messages(messages_input: Union[str, List, Dict]) -> List[Dict[str, str]]:
        """Convert various input formats to what LM Studio expects"""
        try:
            # If messages_input is a string
            if isinstance(messages_input, str):
                # Try to parse it as JSON
                try:
                    parsed = json.loads(messages_input)
                    if isinstance(parsed, list):
                        return parsed
                    else:
                        # If it's parsed but not a list, make it a user message
                        return [{"role": "user", "content": messages_input}]
                except json.JSONDecodeError:
                    # If not valid JSON, assume it's a simple message
                    return [{"role": "user", "content": messages_input}]

            # If it's a list already
            elif isinstance(messages_input, list):
                return messages_input

            # If it's a dict, assume it's a single message
            elif isinstance(messages_input, dict) and "content" in messages_input:
                if "role" not in messages_input:
                    messages_input["role"] = "user"
                return [messages_input]

            # If it's some other format, convert to string and make it a user message
            else:
                return [{"role": "user", "content": str(messages_input)}]
        except Exception as e:
            logging.error(f"Error preparing chat messages: {str(e)}")
            # Fallback to simplest format
            return [{"role": "user", "content": str(messages_input)}]

    # ===== MCP Tools =====
    @mcp.tool()
    async def check_lmstudio_connection() -> str:
        """Check if the LM Studio server is running and accessible.

        Returns:
            Connection status and model information
        """
        try:
            # Try to get the server status via models endpoint
            async with httpx.AsyncClient() as client:
                response = await client.get(f"{LMSTUDIO_API_URL}/models", timeout=5.0)

            if response.status_code == 200:
                models_data = response.json()
                if "data" in models_data and len(models_data["data"]) > 0:
                    active_model = models_data["data"][0]["id"]
                    return f"✅ Connected to LM Studio. Active model: {active_model}"
                else:
                    return "✅ Connected to LM Studio but no models are currently loaded"
            else:
                return f"❌ LM Studio returned an error: {response.status_code}"
        except Exception as e:
            return f"❌ Failed to connect to LM Studio: {str(e)}"

    @mcp.tool()
    async def list_lmstudio_models() -> str:
        """List available LLM models in LM Studio.

        Returns:
            A formatted list of available models with their details.
        """
        logging.info("list_lmstudio_models function called")
        try:
            # Use the API helper function; the models endpoint is a GET endpoint
            models_response = await call_lmstudio_api("models", {}, timeout=10.0, method="GET")

            # Check for errors from the API helper
            if "error" in models_response:
                return f"Error listing models: {models_response['error']}"

            if not models_response or "data" not in models_response:
                return "No models found or unexpected response format."

            models = models_response["data"]
            model_info = []

            for model in models:
                model_info.append(f"ID: {model.get('id', 'Unknown')}")
                model_info.append(f"Name: {model.get('name', 'Unknown')}")
                if model.get('description'):
                    model_info.append(f"Description: {model.get('description')}")
                model_info.append("---")

            if not model_info:
                return "No models available in LM Studio."

            return "\n".join(model_info)
        except Exception as e:
            logging.error(f"Unexpected error in list_lmstudio_models: {str(e)}")
            traceback.print_exc(file=sys.stderr)
            return f"Unexpected error: {str(e)}"

    @mcp.tool()
    async def generate_text(
        prompt: str,
        model_id: str = "",
        max_tokens: int = 1000,
        temperature: float = 0.7
    ) -> str:
        """Generate text using a local LLM in LM Studio.

        Args:
            prompt: The text prompt to send to the model
            model_id: ID of the model to use (leave empty for default model)
            max_tokens: Maximum number of tokens in the response (default: 1000)
            temperature: Randomness of the output (0-1, default: 0.7)

        Returns:
            The generated text from the local LLM
        """
        logging.info("generate_text function called")
        try:
            # Validate inputs
            if not prompt or not prompt.strip():
                return "Error: Prompt cannot be empty."

            if max_tokens < 1:
                return "Error: max_tokens must be a positive integer."

            if temperature < 0 or temperature > 1:
                return "Error: temperature must be between 0 and 1."

            # Prepare payload
            payload = {
                "prompt": prompt,
                "max_tokens": max_tokens,
                "temperature": temperature,
                "stream": False
            }

            # Add model if specified
            if model_id and model_id.strip():
                payload["model"] = model_id.strip()

            # Make request to LM Studio API using the helper function
            response = await call_lmstudio_api("completions", payload)

            # Check for errors from the API helper
            if "error" in response:
                return f"Error generating text: {response['error']}"

            # Extract and return the generated text
            if "choices" in response and len(response["choices"]) > 0:
                return response["choices"][0].get("text", "")

            return "No response generated."
        except Exception as e:
            logging.error(f"Unexpected error in generate_text: {str(e)}")
            traceback.print_exc(file=sys.stderr)
            return f"Unexpected error: {str(e)}"

    @mcp.tool()
    async def chat_completion(
        messages: str,
        model_id: str = "",
        max_tokens: int = 1000,
        temperature: float = 0.7
    ) -> str:
        """Generate a chat completion using a local LLM in LM Studio.

        Args:
            messages: JSON string of messages in the format [{"role": "user", "content": "Hello"}, ...]
              or a simple text string which will be treated as a user message
            model_id: ID of the model to use (leave empty for default model)
            max_tokens: Maximum number of tokens in the response (default: 1000)
            temperature: Randomness of the output (0-1, default: 0.7)

        Returns:
            The generated text from the local LLM
        """
        logging.info("chat_completion function called")
        try:
            # Standardize message format using the helper function
            messages_formatted = prepare_chat_messages(messages)

            logging.debug(f"Formatted messages: {json.dumps(messages_formatted, indent=2)}")

            # Validate inputs
            if not messages_formatted:
                return "Error: At least one message is required."

            if max_tokens < 1:
                return "Error: max_tokens must be a positive integer."

            if temperature < 0 or temperature > 1:
                return "Error: temperature must be between 0 and 1."

            # Prepare payload
            payload = {
                "messages": messages_formatted,
                "max_tokens": max_tokens,
                "temperature": temperature,
                "stream": False
            }

            # Add model if specified
            if model_id and model_id.strip():
                payload["model"] = model_id.strip()

            # Make request to LM Studio API using the helper function
            response = await call_lmstudio_api("chat/completions", payload)

            # Check for errors from the API helper
            if "error" in response:
                return f"Error generating chat completion: {response['error']}"

            # Extract and return the generated text
            if "choices" in response and len(response["choices"]) > 0:
                choice = response["choices"][0]
                if "message" in choice and "content" in choice["message"]:
                    return choice["message"]["content"]

            return "No response generated."
        except Exception as e:
            logging.error(f"Unexpected error in chat_completion: {str(e)}")
            traceback.print_exc(file=sys.stderr)
            return f"Unexpected error: {str(e)}"

    if __name__ == "__main__":
        logging.info("Starting server with stdio transport...")
        # Initialize and run the server
        mcp.run(transport='stdio')
except Exception as e:
    logging.critical(f"CRITICAL ERROR: {str(e)}")
    logging.critical("Traceback:")
    traceback.print_exc(file=sys.stderr)
    sys.exit(1)
```